Optimizer Problems
A central object in SimpleSolvers is the optimizer problem (see SimpleSolvers.AbstractOptimizerProblem). Optimizer problems are either SimpleSolvers.LinesearchProblems or OptimizerProblems. The goal of a solver (both LinearSolvers and NonlinearSolvers) is to find an argument for which the problem evaluates to zero, whereas the goal of an Optimizer is to minimize an OptimizerProblem.
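The two goals can be contrasted in plain Julia (this sketch does not use the SimpleSolvers API; the function names are purely illustrative):

```julia
# Plain-Julia sketch of the two goals (not the SimpleSolvers API):
F(x) = x .^ 3 .- 1           # a solver looks for x with F(x) ≈ 0
f(x) = sum((x .- 1) .^ 2)    # an optimizer looks for x that minimizes f(x)
```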
Examples
Multivariate Optimizer Problems
OptimizerProblems are used in a way similar to LinesearchProblems; the difference is that the derivative functions are replaced by gradient functions, i.e.:

derivative $\implies$ gradient, derivative! $\implies$ gradient!, derivative!! $\implies$ gradient!!.
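The renaming mirrors the change from a scalar to a vector argument; a minimal sketch in plain Julia (not the SimpleSolvers API, the names below are purely illustrative):

```julia
# Scalar setting (as in a linesearch): a single derivative suffices.
f1(α::Number) = α^2
d_f1(α) = 2α                        # derivative of f1

# Vector setting (as in an optimizer problem): a gradient is needed.
fN(x::AbstractArray) = sum(x .^ 2)
g_fN(x) = 2 .* x                    # gradient of fN
```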
```julia
f(x::AbstractArray) = sum(x .^ 2)
x = rand(3)
obj = OptimizerProblem(f, x)
```

```
OptimizerProblem (for vector-valued quantities only the first component is printed):

    f(x)  = NaN
    g(x)₁ = NaN
    x_f₁  = NaN
    x_g₁  = NaN
```
Every instance of OptimizerProblem stores an instance of Gradient, to which we can similarly apply the functions gradient or gradient!:
```julia
grad = GradientAutodiff(f, x)
gradient!(obj, grad, x)
```

```
3-element Vector{Float64}:
 1.042427591070766
 1.1736135149066969
 1.7817573961855622
```

The difference to Gradient is that the OptimizerProblem also stores the value of the evaluated gradient, which can be accessed by calling:
```julia
gradient(obj)
```

```
3-element Vector{Float64}:
 1.042427591070766
 1.1736135149066969
 1.7817573961855622
```
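As a quick illustrative check (using the setup above, not a feature of the library): for f(x) = sum(x .^ 2) the exact gradient is 2x, so the cached value should agree with 2 .* x up to floating-point error.

```julia
# Illustrative sanity check: the analytic gradient of f(x) = sum(x .^ 2) is 2x,
# so the value stored in the optimizer problem should match it.
gradient(obj) ≈ 2 .* x    # should evaluate to true
```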