SymbolicNeuralNetworks
SymbolicNeuralNetworks is a library for creating symbolic representations of relatively small neural networks, on the basis of which more complicated expressions can be built. It is mostly meant to be used together with other packages such as GeometricMachineLearning and GeometricIntegrators.
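As a brief sketch of the intended workflow (mirroring the Gradient example further below; the layer sizes are arbitrary):

using SymbolicNeuralNetworks: SymbolicNeuralNetwork, Gradient, derivative
using AbstractNeuralNetworks: Chain, Dense
# a small network and its symbolic counterpart
c = Chain(Dense(2, 1, tanh))
nn = SymbolicNeuralNetwork(c)
# symbolic gradient of the network output with respect to the parameters;
# the result is a symbolic expression that can be processed further
(Gradient(nn) |> derivative)[1].L1.b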
Index
- SymbolicNeuralNetworks.Derivative
- SymbolicNeuralNetworks.Gradient
- SymbolicNeuralNetworks.Jacobian
- SymbolicNeuralNetworks.SymbolicNeuralNetwork
- SymbolicNeuralNetworks.SymbolicPullback
- SymbolicNeuralNetworks._build_nn_function
- SymbolicNeuralNetworks._get_contents
- SymbolicNeuralNetworks._get_params
- SymbolicNeuralNetworks._modify_integer
- SymbolicNeuralNetworks._modify_integer2
- SymbolicNeuralNetworks.apply_element_wise
- SymbolicNeuralNetworks.build_nn_function
- SymbolicNeuralNetworks.derivative
- SymbolicNeuralNetworks.fix_create_array
- SymbolicNeuralNetworks.fix_map_reduce
- SymbolicNeuralNetworks.function_valued_parameters
- SymbolicNeuralNetworks.make_kernel
- SymbolicNeuralNetworks.make_kernel2
- SymbolicNeuralNetworks.modify_input_arguments
- SymbolicNeuralNetworks.modify_input_arguments2
- SymbolicNeuralNetworks.rewrite_arguments
- SymbolicNeuralNetworks.rewrite_arguments2
- SymbolicNeuralNetworks.symbolic_pullback
- SymbolicNeuralNetworks.symboliccounter!
- SymbolicNeuralNetworks.symbolize!
SymbolicNeuralNetworks.Derivative — Type
Derivative
SymbolicNeuralNetworks.Gradient — Type
Gradient <: Derivative
Computes and stores the gradient of a symbolic function with respect to the parameters of a SymbolicNeuralNetwork.
Constructors
Gradient(f, nn)
Differentiate the symbolic f with respect to the parameters of nn.
Gradient(nn)
Compute the symbolic output of nn and differentiate it with respect to the parameters of nn. This does:
nn.model(nn.input, params(nn))
Examples
using SymbolicNeuralNetworks: SymbolicNeuralNetwork, Gradient, derivative
using AbstractNeuralNetworks
c = Chain(Dense(2, 1, tanh))
nn = SymbolicNeuralNetwork(c)
(Gradient(nn) |> derivative)[1].L1.b
Implementation
Internally the constructors use symbolic_pullback.
SymbolicNeuralNetworks.Jacobian — Type
Jacobian <: Derivative
A subtype of Derivative. Computes the derivatives of a neural network with respect to its inputs.
Constructors
Jacobian(f, nn)
Jacobian(nn)
Compute the Jacobian of a SymbolicNeuralNetwork with respect to its input arguments.
Keys
Jacobian has the following keys:
- nn::SymbolicNeuralNetwork,
- f: a symbolic expression to be differentiated,
- □: a symbolic expression of the Jacobian.
If f is not supplied as an input argument, then it is taken to be:
f = nn.model(nn.input, params(nn))
Implementation
For a function $f:\mathbb{R}^n\to\mathbb{R}^m$ we choose the following convention for the Jacobian:
\[\square_{ij} = \frac{\partial}{\partial{}x_j}f_i, \text{ i.e. } \square \in \mathbb{R}^{m\times{}n}\]
This convention is also used by Zygote and ForwardDiff.
Examples
Here we compute the Jacobian of a single-layer neural network $x \to \mathrm{tanh}(Wx + b)$. Its element-wise derivative is:
\[\frac{\partial}{\partial{}x_i}\sigma\left(\sum_{k}w_{jk}x_k + b_j\right) = \sigma'\left(\sum_{k}w_{jk}x_k + b_j\right)w_{ji}.\]
Also note that for this calculation $\mathrm{tanh}(x) = \frac{e^{2x} - 1}{e^{2x} + 1}$ and $\mathrm{tanh}'(x) = \frac{4e^{2x}}{(e^{2x} + 1)^2}.$
We can use Jacobian together with build_nn_function:
using SymbolicNeuralNetworks
using SymbolicNeuralNetworks: Jacobian, derivative
using AbstractNeuralNetworks: Dense, Chain, NeuralNetwork, params
using Symbolics
import Random
Random.seed!(123)
input_dim = 5
output_dim = 2
d = Dense(input_dim, 2, tanh)
c = Chain(d)
nn = SymbolicNeuralNetwork(c)
□ = SymbolicNeuralNetworks.Jacobian(nn)
# here we need to access the derivative and convert it into a function
jacobian1 = build_nn_function(derivative(□), nn)
ps = params(NeuralNetwork(c, Float64))
input = rand(input_dim)
#derivative
Dtanh(x::Real) = 4 * exp(2 * x) / (1 + exp(2x)) ^ 2
analytic_jacobian(i, j) = Dtanh(sum(k -> ps.L1.W[j, k] * input[k], 1:input_dim) + ps.L1.b[j]) * ps.L1.W[j, i]
jacobian1(input, ps) ≈ [analytic_jacobian(i, j) for j ∈ 1:output_dim, i ∈ 1:input_dim]
# output
true
SymbolicNeuralNetworks.SymbolicNeuralNetwork — Type
SymbolicNeuralNetwork <: AbstractSymbolicNeuralNetwork
A symbolic neural network realizes a symbolic representation of a (small) neural network.
Fields
The struct has the following fields:
- architecture: the neural network architecture,
- model: the model (typically a Chain that is the realization of the architecture),
- params: the symbolic parameters of the network,
- sinput: the symbolic input of the network.
Constructors
SymbolicNeuralNetwork(nn)
Make a SymbolicNeuralNetwork based on an AbstractNeuralNetworks.Network.
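A minimal sketch of constructing a SymbolicNeuralNetwork and accessing its symbolic quantities (the field access mirrors the examples elsewhere in this document):

using SymbolicNeuralNetworks: SymbolicNeuralNetwork
using AbstractNeuralNetworks: Chain, Dense, NeuralNetwork, params
c = Chain(Dense(2, 1, tanh))
nn = NeuralNetwork(c)
snn = SymbolicNeuralNetwork(nn)
snn.input           # the symbolic input of the network
params(snn).L1.W    # the symbolic weight matrix of the first layer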
SymbolicNeuralNetworks.SymbolicPullback — Type
SymbolicPullback <: AbstractPullback
SymbolicPullback computes the symbolic pullback of a loss function.
Examples
using SymbolicNeuralNetworks
using AbstractNeuralNetworks
using AbstractNeuralNetworks: params
import Random
Random.seed!(123)
c = Chain(Dense(2, 1, tanh))
nn = NeuralNetwork(c)
snn = SymbolicNeuralNetwork(nn)
loss = FeedForwardLoss()
pb = SymbolicPullback(snn, loss)
ps = params(nn)
typeof(pb(ps, nn.model, (rand(2), rand(1)))[2](1))
# output
@NamedTuple{L1::@NamedTuple{W::Matrix{Float64}, b::Vector{Float64}}}
Implementation
An instance of SymbolicPullback stores
- loss: an instance of a NetworkLoss,
- fun: a function that is used to compute the pullback.
If we call the functor of an instance of SymbolicPullback on model, ps and input it returns:
_pullback.loss(model, ps, input...), _pullback.fun(input..., ps)
where the second output argument is again a function.
Extended help
We note the following seeming peculiarity:
using SymbolicNeuralNetworks
using AbstractNeuralNetworks: Chain, Dense, NeuralNetwork, FeedForwardLoss, params
using Symbolics
import Random
Random.seed!(123)
c = Chain(Dense(2, 1, tanh))
nn = NeuralNetwork(c)
snn = SymbolicNeuralNetwork(nn)
loss = FeedForwardLoss()
pb = SymbolicPullback(snn, loss)
input_output = (rand(2), rand(1))
loss_and_pullback = pb(params(nn), nn.model, input_output)
# note that we apply the second argument to another input `1`
pb_values = loss_and_pullback[2](1)
@variables soutput[1:SymbolicNeuralNetworks.output_dimension(nn.model)]
symbolic_pullbacks = SymbolicNeuralNetworks.symbolic_pullback(loss(nn.model, params(snn), snn.input, soutput), snn)
pb_values2 = build_nn_function(symbolic_pullbacks, params(snn), snn.input, soutput)(input_output[1], input_output[2], params(nn))
pb_values == (pb_values2 |> SymbolicNeuralNetworks._get_contents |> SymbolicNeuralNetworks._get_params)
# output
true
See the docstrings for symbolic_pullback, build_nn_function, _get_params and _get_contents for more info on the functions used here. The noteworthy thing in the expression above is that the functor of SymbolicPullback returns two objects: the first one is the loss value evaluated for the relevant parameters and input. The second one is a function that takes a further input argument and only then returns the partial derivatives. But why do we need this extra step with another function?
In machine learning we typically use reverse accumulation to perform automatic differentiation (AD). Assume we are given a function that is the composition of simpler functions, $f = f_n\circ{}f_{n-1}\circ\cdots\circ{}f_1:\mathbb{R}^n\to\mathbb{R}^m$. Reverse differentiation starts with output sensitivities and then successively feeds them through $f_n$, $f_{n-1}$, etc. So it computes:
\[(\nabla_xf)^T = (\nabla_{x}f_1)^T(\nabla_{f_1(x)}f_2)^T\cdots(\nabla_{f_{n-1}(\cdots{}x)}f_n)^T(do),\]
where $do\in\mathbb{R}^m$ are the output sensitivities and the Jacobians are multiplied stepwise from the left. We thus propagate stepwise from the output back to the input. If $m=1$, i.e. if the output is one-dimensional, the output sensitivities may simply be taken to be $do = 1$.
So in theory we could leave out this extra step: returning a function (stored in pb.fun) can be seen as unnecessary, since we could simply store the equivalent of pb.fun(1.) in the SymbolicPullback instance. It is however customary for a pullback to return a callable function (one that depends on the output sensitivities), which is why we also do so here, even though the output sensitivities are a scalar quantity.
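To make this concrete, here is a short sketch (reusing pb, nn and input_output from the example above) of how the output sensitivity enters:

# the functor returns the loss value and a function of the output sensitivities
loss_value, pullback_fun = pb(params(nn), nn.model, input_output)
# the loss is scalar-valued, so the output sensitivity can simply be taken to be 1
dps = pullback_fun(1)
dps.L1.W   # derivative of the loss with respect to the weight matrix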
SymbolicNeuralNetworks._build_nn_function — Method
_build_nn_function(eq, params, sinput, soutput)
Build a function that can process a matrix. See build_nn_function(::EqT, ::NeuralNetworkParameters, ::Symbolics.Arr).
Implementation
Note that we have two input arguments here, which means this method processes code differently than _build_nn_function(::EqT, ::NeuralNetworkParameters, ::Symbolics.Arr), which only has one. Internally a series of string-rewriting helpers is called; see the docstrings of those functions (documented below) for details on how the generated code is modified.
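No example is given in the docstring, but by analogy with the single-input method below one might call it as in the following sketch (the loss expression and the trailing column index k are assumptions, based on make_kernel2 and the other examples in this document):

using SymbolicNeuralNetworks: _build_nn_function, SymbolicNeuralNetwork
using AbstractNeuralNetworks: Chain, Dense, NeuralNetwork, FeedForwardLoss, params
using Symbolics
import Random
Random.seed!(123)
c = Chain(Dense(2, 1, tanh))
nn = NeuralNetwork(c)
snn = SymbolicNeuralNetwork(nn)
@variables soutput[1:1]
# a symbolic expression that depends on both the input and the output
eq = FeedForwardLoss()(c, params(snn), snn.input, soutput)
built_function = _build_nn_function(eq, params(snn), snn.input, soutput)
# the generated function is assumed to process matrices column-wise via the index k
built_function([1. 2.; 3. 4.], [1. 0.], params(nn), 1)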
SymbolicNeuralNetworks._build_nn_function — Method
_build_nn_function(eq, params, sinput)
Build a function that can process a matrix. This is used as a starting point for build_nn_function.
Examples
using SymbolicNeuralNetworks: _build_nn_function, SymbolicNeuralNetwork
using AbstractNeuralNetworks: params, Chain, Dense, NeuralNetwork
import Random
Random.seed!(123)
c = Chain(Dense(2, 1, tanh))
nn = NeuralNetwork(c)
snn = SymbolicNeuralNetwork(nn)
eq = c(snn.input, params(snn))
built_function = _build_nn_function(eq, params(snn), snn.input)
built_function([1. 2.; 3. 4.], params(nn), 1)
# output
1-element Vector{Float64}:
 0.9912108161055604
Note that we have to supply an extra argument (an index) to _build_nn_function that we do not have to supply to build_nn_function.
Implementation
This first calls Symbolics.build_function with the keyword argument expression = Val{true} and then modifies the generated code by calling a series of string-rewriting helpers. See the docstrings for those functions for details on how the code is modified.
SymbolicNeuralNetworks._get_contents — Method
_get_contents(nt::AbstractArray{<:NamedTuple})
Return the contents of a one-dimensional vector.
Examples
using SymbolicNeuralNetworks: _get_contents
_get_contents([(a = "element_contained_in_vector", )])
# output
(a = "element_contained_in_vector",)SymbolicNeuralNetworks._get_params — Method_get_params(ps::NeuralNetworkParameters)Return the NamedTuple that's equivalent to the NeuralNetworkParameters.
SymbolicNeuralNetworks._modify_integer — Method
_modify_integer
If the input is a single integer, subtract 1 from it.
Examples
using SymbolicNeuralNetworks: _modify_integer
s = ["2", "hello", "hello2", "3"]
_modify_integer.(s)
# output
4-element Vector{String}:
 "1"
 "hello"
 "hello2"
 "2"SymbolicNeuralNetworks._modify_integer2 — Method_modify_integer2If the input is a single integer, subtract 2 from it.
Examples
using SymbolicNeuralNetworks: _modify_integer2
s = ["3", "hello", "hello2", "4"]
_modify_integer2.(s)
# output
4-element Vector{String}:
 "1"
 "hello"
 "hello2"
 "2"SymbolicNeuralNetworks.apply_element_wise — Methodapply_element_wise(ps, params, input...)Apply a function element-wise. ps is an Array where each entry of the array is are NeuralNetworkParameters that store functions. See apply_element_wise(::NeuralNetworkParameters, ::NeuralNetworkParameters, ::Any).
Examples
Vector:
using SymbolicNeuralNetworks: apply_element_wise
using AbstractNeuralNetworks: NeuralNetworkParameters
# parameter values
params = NeuralNetworkParameters((a = 1., b = 2.))
ps = [NeuralNetworkParameters((val1 = (input, params) -> input .+ params.a, val2 = (input, params) -> input .+ params.b))]
apply_element_wise(ps, params, [1.])
# output
1-element Vector{NeuralNetworkParameters{(:val1, :val2), Tuple{Vector{Float64}, Vector{Float64}}}}:
 NeuralNetworkParameters{(:val1, :val2), Tuple{Vector{Float64}, Vector{Float64}}}((val1 = [2.0], val2 = [3.0]))
Matrix:
using SymbolicNeuralNetworks: apply_element_wise
using AbstractNeuralNetworks: NeuralNetworkParameters
# parameter values
params = NeuralNetworkParameters((a = 1., b = 2.))
sc_ps = NeuralNetworkParameters((val1 = (input, params) -> input .+ params.a, val2 = (input, params) -> input .+ params.b))
ps = [sc_ps sc_ps]
apply_element_wise(ps, params, [1.]) |> typeof
# output
Matrix{NeuralNetworkParameters{(:val1, :val2), Tuple{Vector{Float64}, Vector{Float64}}}} (alias for Array{NeuralNetworkParameters{(:val1, :val2), Tuple{Array{Float64, 1}, Array{Float64, 1}}}, 2})
Implementation
Internally this is a @generated function.
SymbolicNeuralNetworks.apply_element_wise — Method
apply_element_wise(ps, params, input...)
Apply a function element-wise. Here ps is a NeuralNetworkParameters object that stores functions.
Examples
using SymbolicNeuralNetworks: apply_element_wise
using AbstractNeuralNetworks: NeuralNetworkParameters
# parameter values
params = NeuralNetworkParameters((a = 1., b = 2.))
ps = NeuralNetworkParameters((val1 = (input, params) -> input + params.a, val2 = (input, params) -> input + params.b))
apply_element_wise(ps, params, 1.)
# output
NeuralNetworkParameters{(:val1, :val2), Tuple{Float64, Float64}}((val1 = 2.0, val2 = 3.0))
Implementation
Internally this is a @generated function.
SymbolicNeuralNetworks.build_nn_function — Method
build_nn_function(eqs::AbstractArray{<:NeuralNetworkParameters}, sparams, sinput...)
Build an executable function based on an array of symbolic equations eqs.
Examples
using SymbolicNeuralNetworks: build_nn_function, SymbolicNeuralNetwork
using AbstractNeuralNetworks: Chain, Dense, NeuralNetwork, params
import Random
Random.seed!(123)
ch = Chain(Dense(2, 1, tanh))
nn = NeuralNetwork(ch)
snn = SymbolicNeuralNetwork(nn)
eqs = [(a = ch(snn.input, params(snn)), b = ch(snn.input, params(snn)).^2), (c = ch(snn.input, params(snn)).^3, )]
funcs = build_nn_function(eqs, params(snn), snn.input)
input = [1., 2.]
funcs_evaluated = funcs(input, params(nn))
# output
2-element Vector{NamedTuple}:
 (a = [0.985678060655224], b = [0.9715612392570434])
 (c = [0.9576465981186686],)
SymbolicNeuralNetworks.build_nn_function — Method
build_nn_function(eqs, nn, soutput)
Build an executable function that can also depend on an output. The resulting built_function is then called with:
built_function(input, output, ps)
Also compare this to build_nn_function(::EqT, ::AbstractSymbolicNeuralNetwork).
Extended Help
See the extended help section of build_nn_function(::EqT, ::AbstractSymbolicNeuralNetwork).
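A sketch of how this method might be used (the symbolic output array soutput and the loss expression are illustrative assumptions, mirroring the SymbolicPullback example above):

using SymbolicNeuralNetworks: build_nn_function, SymbolicNeuralNetwork
using AbstractNeuralNetworks: Chain, Dense, NeuralNetwork, FeedForwardLoss, params
using Symbolics
import Random
Random.seed!(123)
c = Chain(Dense(2, 1, tanh))
nn = NeuralNetwork(c)
snn = SymbolicNeuralNetwork(nn)
@variables soutput[1:1]
# a symbolic expression that depends on the input, the output and the parameters
eq = FeedForwardLoss()(c, params(snn), snn.input, soutput)
built_function = build_nn_function(eq, snn, soutput)
built_function([1., 2.], [1.], params(nn))   # called with (input, output, ps)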
SymbolicNeuralNetworks.build_nn_function — Method
build_nn_function(eqs::Union{NamedTuple, NeuralNetworkParameters}, sparams, sinput...)
Return a function that takes an input, (optionally) an output, and neural network parameters and returns a NeuralNetworkParameters-valued output.
Examples
using SymbolicNeuralNetworks: build_nn_function, SymbolicNeuralNetwork
using AbstractNeuralNetworks: Chain, Dense, NeuralNetwork, params
import Random
Random.seed!(123)
c = Chain(Dense(2, 1, tanh))
nn = NeuralNetwork(c)
snn = SymbolicNeuralNetwork(nn)
eqs = (a = c(snn.input, params(snn)), b = c(snn.input, params(snn)).^2)
funcs = build_nn_function(eqs, params(snn), snn.input)
input = [1., 2.]
funcs_evaluated = funcs(input, params(nn))
# output
(a = [0.985678060655224], b = [0.9715612392570434])
Implementation
Internally this is using function_valued_parameters and apply_element_wise.
SymbolicNeuralNetworks.build_nn_function — Method
build_nn_function(eq, nn)
Build an executable function based on a symbolic equation, a symbolic input array and a SymbolicNeuralNetwork.
This function can be called with:
built_function(input, ps)
Implementation
Internally this calls _build_nn_function and then parallelizes the expression via the index k.
Extended Help
The functions mentioned in the implementation section were adjusted ad hoc to deal with problems that emerged on the fly. Other problems may occur. In case you run into one, please open an issue on GitHub.
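A small sketch of this basic variant (analogous to the Jacobian example above, but for the network output itself):

using SymbolicNeuralNetworks: build_nn_function, SymbolicNeuralNetwork
using AbstractNeuralNetworks: Chain, Dense, NeuralNetwork, params
import Random
Random.seed!(123)
c = Chain(Dense(2, 1, tanh))
nn = NeuralNetwork(c)
snn = SymbolicNeuralNetwork(nn)
# symbolic network output, turned into an executable function of (input, ps)
eq = c(snn.input, params(snn))
built_function = build_nn_function(eq, snn)
built_function([1., 2.], params(nn)) ≈ c([1., 2.], params(nn))   # expected to hold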
SymbolicNeuralNetworks.derivative — Method
derivative(g)
Examples
We compare this to symbolic_pullback here:
using SymbolicNeuralNetworks: SymbolicNeuralNetwork, Gradient, derivative, symbolic_pullback
using AbstractNeuralNetworks
c = Chain(Dense(2, 1, tanh))
nn = SymbolicNeuralNetwork(c)
g = Gradient(nn)
∇ = derivative(g)
isequal(∇, symbolic_pullback(g.f, nn))
# output
true
SymbolicNeuralNetworks.fix_create_array — Method
fix_create_array(s)
Fix a problem that occurs in connection with create_array.
The function create_array from SymbolicUtils.Code takes the type of a symbolic array as its first input. For reasons that are not entirely clear yet, the first argument of create_array ends up being ˍ₋arg2, which is a NamedTuple of symbolic arrays. We solve this problem by replacing typeof(ˍ₋arg[0-9]+) with Array, which seems to be the most generic possible input to create_array.
Examples
using SymbolicNeuralNetworks: fix_create_array
s = "(SymbolicUtils.Code.create_array)(typeof(ˍ₋arg2)"
fix_create_array(s)
# output
"SymbolicUtils.Code.create_array(typeof(sinput)"Implementation
This is used for _build_nn_function(::EqT, ::NeuralNetworkParameters, ::Symbolics.Arr) and _build_nn_function(::EqT, ::NeuralNetworkParameters, ::Symbolics.Arr, ::Symbolics.Arr).
SymbolicNeuralNetworks.fix_map_reduce — Method
fix_map_reduce(s)
Replace Symbolics._mapreduce with mapreduce (from Base).
When we generate a function with Symbolics.build_function, it often contains Symbolics._mapreduce, which cannot be differentiated with Zygote. We get around this by replacing Symbolics._mapreduce with mapreduce and additionally doing:
replace(s, ", Colon(), (:init => false,)" => ", dims = Colon()")Implementation
This is used for _build_nn_function(::EqT, ::NeuralNetworkParameters, ::Symbolics.Arr) and _build_nn_function(::EqT, ::NeuralNetworkParameters, ::Symbolics.Arr, ::Symbolics.Arr).
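No example is given; the following sketch illustrates the two replacements described above on a made-up string:

using SymbolicNeuralNetworks: fix_map_reduce
s = "(Symbolics._mapreduce)(+, f, x, Colon(), (:init => false,))"
fix_map_reduce(s)
# expected: "(mapreduce)(+, f, x, dims = Colon())"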
SymbolicNeuralNetworks.function_valued_parameters — Method
function_valued_parameters(eqs::Union{NamedTuple, NeuralNetworkParameters}, sparams, sinput...)
Return an executable function for each entry in eqs. This still has to be processed with apply_element_wise.
Examples
using SymbolicNeuralNetworks: function_valued_parameters, SymbolicNeuralNetwork
using AbstractNeuralNetworks: Chain, Dense, NeuralNetwork, params
import Random
Random.seed!(123)
c = Chain(Dense(2, 1, tanh))
nn = NeuralNetwork(c)
snn = SymbolicNeuralNetwork(nn)
eqs = (a = c(snn.input, params(snn)), b = c(snn.input, params(snn)).^2)
funcs = function_valued_parameters(eqs, params(snn), snn.input)
input = [1., 2.]
ps = params(nn)
a = c(input, ps)
b = c(input, ps).^2
(funcs.a(input, ps), funcs.b(input, ps)) .≈ (a, b)
# output
(true, true)
SymbolicNeuralNetworks.make_kernel — Method
Examples
using SymbolicNeuralNetworks
s = "function (sinput, ps)\n begin\n getindex(sinput, 1) + getindex(sinput, 2) \n end\n end"
SymbolicNeuralNetworks.make_kernel(s)
# output
"function (sinput, ps, k)\n begin\n getindex(sinput, 1, k) + getindex(sinput, 2, k) \n end\n end"SymbolicNeuralNetworks.make_kernel2 — MethodExamples
using SymbolicNeuralNetworks
s = "function (sinput, soutput, ps)\n begin\n getindex(sinput, 1) + getindex(soutput, 2) \n end\n end"
SymbolicNeuralNetworks.make_kernel2(s)
# output
"function (sinput, soutput, ps, k)\n begin\n getindex(sinput, 1, k) + getindex(soutput, 2, k) \n end\n end"SymbolicNeuralNetworks.modify_input_arguments — Methodmodify_input_arguments(s)Change input arguments of type (sinput, ps.L1, ps.L2) etc to (sinput, ps). This should be used after rewrite_arguments. Also see build_nn_function.
Examples
using SymbolicNeuralNetworks: modify_input_arguments
s = "(sinput, ps.L1, ps.L2, ps.L3)"
modify_input_arguments(s)
# output
"(sinput, ps)"SymbolicNeuralNetworks.modify_input_arguments2 — Methodmodify_input_arguments2(s)Change input arguments of type (sinput, soutput, ps.L1, ps.L2) etc to (sinput, soutput, ps). This should be used after rewrite_arguments.
Examples
using SymbolicNeuralNetworks: modify_input_arguments2
s = "(sinput, soutput, ps.L1, ps.L2, ps.L3)"
modify_input_arguments2(s)
# output
"(sinput, soutput, ps)"SymbolicNeuralNetworks.rewrite_arguments — Methodrewrite_arguments(s)Replace ˍ₋arg2, ˍ₋arg3, ... with ps.L1, ps.L2 etc. This is used after Symbolics.build_function.
Examples
using SymbolicNeuralNetworks: rewrite_arguments
s = "We test if strings that contain ˍ₋arg2 and ˍ₋arg3 can be converted in the right way."
rewrite_arguments(s)
# output
"We test if strings that contain ps.L1 and ps.L2 can be converted in the right way."Implementation
The input is first split at the relevant points and then we call _modify_integer. The routine _modify_integer ensures that we start counting at 1 and not at 2. By default the arguments of the generated function that we get after applying Symbolics.build_function are (x, ˍ₋arg2, ˍ₋arg3), etc. We first change this to (x, ps.L2, ps.L3), etc. and then to (x, ps.L1, ps.L2), etc. via _modify_integer.
SymbolicNeuralNetworks.rewrite_arguments2 — Method
rewrite_arguments2(s)
Replace ˍ₋arg3, ˍ₋arg4, ... with ps.L1, ps.L2, etc. Note that we subtract two from the input, unlike rewrite_arguments where it is one.
Examples
using SymbolicNeuralNetworks: rewrite_arguments2
s = "We test if strings that contain ˍ₋arg3 and ˍ₋arg4 can be converted in the right way."
rewrite_arguments2(s)
# output
"We test if strings that contain ps.L1 and ps.L2 can be converted in the right way."Implementation
The input is first split at the relevant points and then we call _modify_integer2. The routine _modify_integer2 ensures that we start counting at 1 and not at 3. See rewrite_arguments.
SymbolicNeuralNetworks.symbolic_pullback — Method
symbolic_pullback(f, nn)
This takes a symbolic f that depends on the parameters in nn and returns the corresponding pullback (a symbolic expression).
This is used by Gradient and SymbolicPullback.
Examples
using SymbolicNeuralNetworks: SymbolicNeuralNetwork, symbolic_pullback
using AbstractNeuralNetworks
using AbstractNeuralNetworks: params
using LinearAlgebra: norm
c = Chain(Dense(2, 1, tanh))
nn = SymbolicNeuralNetwork(c)
output = c(nn.input, params(nn))
spb = symbolic_pullback(output, nn)
spb[1].L1.b
SymbolicNeuralNetworks.symboliccounter! — Method
symboliccounter!(cache, arg; redundancy)
Add a specific argument to the cache.
Examples
using SymbolicNeuralNetworks: symboliccounter!
cache = Dict()
var = symboliccounter!(cache, :var)
(cache, var)
# output
(Dict{Any, Any}(:var => 1), :var_1)
SymbolicNeuralNetworks.symbolize! — Function
symbolize!(cache, nt, var_name)
Symbolize all the arguments in nt.
Examples
using SymbolicNeuralNetworks: symbolize!
cache = Dict()
sym = symbolize!(cache, .1, :X)
(sym, cache)
# output
(X_1, Dict{Any, Any}(:X => 1))
using SymbolicNeuralNetworks: symbolize!
cache = Dict()
arr = rand(2, 1)
sym_scalar = symbolize!(cache, .1, :X)
sym_array = symbolize!(cache, arr, :Y)
(sym_array, cache)
# output
(Y_1[Base.OneTo(2),Base.OneTo(1)], Dict{Any, Any}(:X => 1, :Y => 1))
Note that for the second case the cache is storing a scalar under :X and an array under :Y. If we use the same label for both we get:
using SymbolicNeuralNetworks: symbolize!
cache = Dict()
arr = rand(2, 1)
sym_scalar = symbolize!(cache, .1, :X)
sym_array = symbolize!(cache, arr, :X)
(sym_array, cache)
# output
(X_2[Base.OneTo(2),Base.OneTo(1)], Dict{Any, Any}(:X => 2))
We can also use symbolize! with NamedTuples:
using SymbolicNeuralNetworks: symbolize!
cache = Dict()
nt = (a = 1, b = [1, 2])
sym = symbolize!(cache, nt, :X)
(sym, cache)
# output
((a = X_1, b = X_2[Base.OneTo(2)]), Dict{Any, Any}(:X => 2))
And for neural network parameters:
using SymbolicNeuralNetworks: symbolize!
using AbstractNeuralNetworks: NeuralNetwork, params, Chain, Dense
nn = NeuralNetwork(Chain(Dense(1, 2; use_bias = false), Dense(2, 1; use_bias = false)))
cache = Dict()
sym = symbolize!(cache, params(nn), :X) |> typeof
# output
AbstractNeuralNetworks.NeuralNetworkParameters{(:L1, :L2), Tuple{@NamedTuple{W::Symbolics.Arr{Symbolics.Num, 2}}, @NamedTuple{W::Symbolics.Arr{Symbolics.Num, 2}}}}
Implementation
Internally this uses symboliccounter!. This function also adjusts the cache (which is optionally supplied as an input argument).