SimpleSolvers
SimpleSolvers.DEFAULT_ARMIJO_p
SimpleSolvers.DEFAULT_ARMIJO_α₀
SimpleSolvers.DEFAULT_ARMIJO_σ₀
SimpleSolvers.DEFAULT_ARMIJO_σ₁
SimpleSolvers.DEFAULT_BIERLAIRE_ε
SimpleSolvers.DEFAULT_BIERLAIRE_ξ
SimpleSolvers.DEFAULT_BRACKETING_k
SimpleSolvers.DEFAULT_BRACKETING_nmax
SimpleSolvers.DEFAULT_BRACKETING_s
SimpleSolvers.DEFAULT_GRADIENT_ϵ
SimpleSolvers.DEFAULT_ITERATIONS_QUASI_NEWTON_SOLVER
SimpleSolvers.DEFAULT_JACOBIAN_ϵ
SimpleSolvers.DEFAULT_WOLFE_c₁
SimpleSolvers.DEFAULT_s_REDUCTION
SimpleSolvers.MAX_NUMBER_OF_ITERATIONS_FOR_QUADRATIC_LINESEARCH
SimpleSolvers.N_STATIC_THRESHOLD
SimpleSolvers.AbstractObjective
SimpleSolvers.AbstractUnivariateObjective
SimpleSolvers.Backtracking
SimpleSolvers.BacktrackingCondition
SimpleSolvers.BacktrackingState
SimpleSolvers.BierlaireQuadratic
SimpleSolvers.BierlaireQuadraticState
SimpleSolvers.Bisection
SimpleSolvers.BisectionState
SimpleSolvers.BracketMinimumCriterion
SimpleSolvers.CurvatureCondition
SimpleSolvers.FixedPointIterator
SimpleSolvers.FixedPointIteratorCache
SimpleSolvers.Gradient
SimpleSolvers.GradientAutodiff
SimpleSolvers.GradientFiniteDifferences
SimpleSolvers.GradientFunction
SimpleSolvers.Hessian
SimpleSolvers.HessianAutodiff
SimpleSolvers.HessianBFGS
SimpleSolvers.HessianDFP
SimpleSolvers.HessianFunction
SimpleSolvers.Jacobian
SimpleSolvers.Jacobian
SimpleSolvers.Jacobian
SimpleSolvers.JacobianAutodiff
SimpleSolvers.JacobianFiniteDifferences
SimpleSolvers.JacobianFunction
SimpleSolvers.LU
SimpleSolvers.LUSolverCache
SimpleSolvers.LUSolverLAPACK
SimpleSolvers.LinearSolver
SimpleSolvers.LinearSolverCache
SimpleSolvers.LinearSolverMethod
SimpleSolvers.LinearSystem
SimpleSolvers.Linesearch
SimpleSolvers.LinesearchMethod
SimpleSolvers.LinesearchState
SimpleSolvers.MultivariateObjective
SimpleSolvers.NewtonOptimizerCache
SimpleSolvers.NewtonOptimizerState
SimpleSolvers.NewtonSolver
SimpleSolvers.NewtonSolver
SimpleSolvers.NewtonSolverCache
SimpleSolvers.NonlinearMethod
SimpleSolvers.NonlinearSolver
SimpleSolvers.NonlinearSolverStatus
SimpleSolvers.NonlinearSystem
SimpleSolvers.NonlinearSystem
SimpleSolvers.NonlinearSystem
SimpleSolvers.OptimizationAlgorithm
SimpleSolvers.Optimizer
SimpleSolvers.OptimizerResult
SimpleSolvers.OptimizerStatus
SimpleSolvers.Options
SimpleSolvers.Quadratic
SimpleSolvers.Quadratic2
SimpleSolvers.QuadraticState
SimpleSolvers.QuadraticState2
SimpleSolvers.Static
SimpleSolvers.StaticState
SimpleSolvers.SufficientDecreaseCondition
SimpleSolvers.TemporaryUnivariateObjective
SimpleSolvers.UnivariateObjective
GeometricBase.update!
GeometricBase.update!
GeometricBase.update!
GeometricBase.update!
GeometricBase.update!
GeometricBase.update!
GeometricBase.update!
GeometricBase.update!
GeometricBase.update!
GeometricBase.update!
GeometricBase.update!
GeometricBase.update!
GeometricBase.update!
GeometricBase.value
LinearAlgebra.ldiv!
SimpleSolvers.QuasiNewtonSolver
SimpleSolvers._static
SimpleSolvers.absolute_tolerance
SimpleSolvers.adjust_α
SimpleSolvers.adjust_α
SimpleSolvers.alloc_d
SimpleSolvers.alloc_f
SimpleSolvers.alloc_g
SimpleSolvers.alloc_h
SimpleSolvers.alloc_x
SimpleSolvers.assess_convergence!
SimpleSolvers.assess_convergence!
SimpleSolvers.bisection
SimpleSolvers.bisection
SimpleSolvers.bracket_minimum
SimpleSolvers.bracket_minimum_with_fixed_point
SimpleSolvers.bracket_root
SimpleSolvers.cache
SimpleSolvers.check_gradient
SimpleSolvers.check_hessian
SimpleSolvers.check_jacobian
SimpleSolvers.clear!
SimpleSolvers.clear!
SimpleSolvers.clear!
SimpleSolvers.clear!
SimpleSolvers.clear!
SimpleSolvers.clear!
SimpleSolvers.compute_gradient!
SimpleSolvers.compute_hessian
SimpleSolvers.compute_hessian!
SimpleSolvers.compute_hessian!
SimpleSolvers.compute_hessian_ad!
SimpleSolvers.compute_jacobian!
SimpleSolvers.compute_jacobian!
SimpleSolvers.compute_new_iterate
SimpleSolvers.default_tolerance
SimpleSolvers.derivative
SimpleSolvers.derivative!
SimpleSolvers.derivative!!
SimpleSolvers.determine_initial_α
SimpleSolvers.direction
SimpleSolvers.f_argument
SimpleSolvers.f_calls
SimpleSolvers.factorize!
SimpleSolvers.factorize!
SimpleSolvers.find_maximum_value
SimpleSolvers.gradient
SimpleSolvers.gradient
SimpleSolvers.gradient
SimpleSolvers.gradient!
SimpleSolvers.gradient!
SimpleSolvers.gradient!!
SimpleSolvers.gradient_ad!
SimpleSolvers.gradient_fd!
SimpleSolvers.increase_iteration_number!
SimpleSolvers.increase_iteration_number!
SimpleSolvers.increase_iteration_number!
SimpleSolvers.initialize!
SimpleSolvers.initialize!
SimpleSolvers.initialize!
SimpleSolvers.initialize!
SimpleSolvers.initialize!
SimpleSolvers.isaOptimizationAlgorithm
SimpleSolvers.j_argument
SimpleSolvers.j_calls
SimpleSolvers.jacobian
SimpleSolvers.jacobian
SimpleSolvers.jacobian!
SimpleSolvers.jacobian!!
SimpleSolvers.linearsolver
SimpleSolvers.linesearch_objective
SimpleSolvers.linesearch_objective
SimpleSolvers.linesearch_objective
SimpleSolvers.meets_stopping_criteria
SimpleSolvers.meets_stopping_criteria
SimpleSolvers.method
SimpleSolvers.minimum_decrease_threshold
SimpleSolvers.nonlinearsystem
SimpleSolvers.nonlinearsystem
SimpleSolvers.print_status
SimpleSolvers.residual!
SimpleSolvers.residual!
SimpleSolvers.residual!
SimpleSolvers.rhs
SimpleSolvers.rhs
SimpleSolvers.shift_χ_to_avoid_stalling
SimpleSolvers.solution
SimpleSolvers.solve
SimpleSolvers.solve!
SimpleSolvers.solve!
SimpleSolvers.solve!
SimpleSolvers.solve!
SimpleSolvers.solve!
SimpleSolvers.solve!
SimpleSolvers.solve!
SimpleSolvers.solver_step!
SimpleSolvers.solver_step!
SimpleSolvers.solver_step!
SimpleSolvers.triple_point_finder
SimpleSolvers.value!
SimpleSolvers.value!
SimpleSolvers.value!!
SimpleSolvers.DEFAULT_ARMIJO_p
— Constantconst DEFAULT_ARMIJO_p
Constant used in BacktrackingState
. Its value is 0.5
SimpleSolvers.DEFAULT_ARMIJO_α₀
— Constantconst DEFAULT_ARMIJO_α₀
The default starting value for $\alpha$ used in SufficientDecreaseCondition
(also see BacktrackingState
and QuadraticState
). Its value is 1.0
SimpleSolvers.DEFAULT_ARMIJO_σ₀
— Constantconst DEFAULT_ARMIJO_σ₀
Constant used in QuadraticState
. Also see DEFAULT_ARMIJO_σ₁
.
It is meant to safeguard against stagnation when performing line searches (see [1]).
Its value is 0.1
SimpleSolvers.DEFAULT_ARMIJO_σ₁
— Constantconst DEFAULT_ARMIJO_σ₁
Constant used in QuadraticState
. Also see DEFAULT_ARMIJO_σ₀
. Its value is 0.5
SimpleSolvers.DEFAULT_BIERLAIRE_ε
— ConstantDEFAULT_BIERLAIRE_ε
A constant that determines the precision in BierlaireQuadraticState
. The constant recommended in [2] is 1E-3
.
Note that this constant may also depend on whether we deal with optimizers or solvers.
SimpleSolvers.DEFAULT_BIERLAIRE_ξ
— ConstantDEFAULT_BIERLAIRE_ξ
A constant on the basis of which the b
in BierlaireQuadraticState
is perturbed in order "to avoid stalling" (see [2, Chapter 11.2.1]; in this reference the author recommends $10^{-7}$ as a value). Its value is 1.1920928955078125e-7.
SimpleSolvers.DEFAULT_BRACKETING_k
— Constantconst DEFAULT_BRACKETING_k
Gives the default ratio by which the bracket is increased if bracketing was not successful. See bracket_minimum
.
SimpleSolvers.DEFAULT_BRACKETING_nmax
— Constant
The default maximum number of iterations $n_\mathrm{max}$ used in bracketing (see bracket_minimum
).
SimpleSolvers.DEFAULT_BRACKETING_s
— Constantconst DEFAULT_BRACKETING_s
Gives the default width of the interval (the bracket). See bracket_minimum
.
SimpleSolvers.DEFAULT_GRADIENT_ϵ
— ConstantDEFAULT_GRADIENT_ϵ
A constant on whose basis finite differences are computed.
SimpleSolvers.DEFAULT_ITERATIONS_QUASI_NEWTON_SOLVER
— Constant
The default number of iterations before the Jacobian
is refactored in the QuasiNewtonSolver
.
SimpleSolvers.DEFAULT_JACOBIAN_ϵ
— ConstantDEFAULT_JACOBIAN_ϵ
A constant used for computing the finite difference Jacobian.
SimpleSolvers.DEFAULT_WOLFE_c₁
— Constantconst DEFAULT_WOLFE_c₁
A constant $\epsilon$ on whose basis a finite difference approximation of the derivative of the objective is computed. This is then used in the following stopping criterion:
\[\frac{f(\alpha) - f(\alpha_0)}{\epsilon} < \alpha\cdot{}f'(\alpha_0).\]
Extended help
SimpleSolvers.DEFAULT_s_REDUCTION
— Constant
A factor by which s
is reduced in each bracketing iteration (see bracket_minimum_with_fixed_point
).
SimpleSolvers.MAX_NUMBER_OF_ITERATIONS_FOR_QUADRATIC_LINESEARCH
— Constant
This constant is used for QuadraticState
and BierlaireQuadraticState
.
SimpleSolvers.N_STATIC_THRESHOLD
— Constant
Threshold for the maximum size a static matrix should have.
SimpleSolvers.AbstractObjective
— TypeAbstractObjective
An objective is a quantity that has to be made zero by a solver or minimized by an optimizer.
SimpleSolvers.AbstractUnivariateObjective
— TypeAbstractUnivariateObjective <: AbstractObjective
A subtype of AbstractObjective
that only depends on one variable. See UnivariateObjective
.
SimpleSolvers.Backtracking
— TypeBacktracking <: LinesearchMethod
The backtracking method.
Constructors
Backtracking()
Extended help
The backtracking algorithm starts by setting $y_0 \gets f(0)$ and $d_0 \gets \nabla_0f$.
The algorithm is executed by calling the functor of BacktrackingState
.
The following is then repeated until the stopping criterion is satisfied or config.max_iterations
(1000 by default) is reached:
if value!(obj, α) ≥ y₀ + ls.ϵ * α * d₀
α *= ls.p
else
break
end
The stopping criterion as an equation can be written as:
\[f(\alpha) < y_0 + \epsilon \alpha \nabla_0f = y_0 + \epsilon (\alpha - 0) \nabla_0f.\]
Note that if the stopping criterion is not reached, $\alpha$ is multiplied with $p$ and the process continues.
Sometimes the parameters $p$ and $\epsilon$ have different names such as $\tau$ and $c$.
SimpleSolvers.BacktrackingCondition
— TypeBacktrackingCondition
Abstract type comprising the conditions that are used for checking step sizes for the backtracking line search (see BacktrackingState
).
SimpleSolvers.BacktrackingState
— TypeBacktrackingState <: LinesearchState
Corresponding LinesearchState
to Backtracking
.
Keys
The keys are:
config::
Options
α₀
:ϵ=$(DEFAULT_WOLFE_c₁)
: a default step size on whose basis we compute a finite difference approximation of the derivative of the objective. Also seeDEFAULT_WOLFE_c₁
.p=$(DEFAULT_ARMIJO_p)
: a parameter with which $\alpha$ is decreased in every step until the stopping criterion is satisfied.
Functor
The functor is used the following way:
ls(obj, α = ls.α₀)
Implementation
The algorithm starts by setting:
\[x_0 \gets 0, y_0 \gets f(x_0), d_0 \gets f'(x_0), \alpha \gets \alpha_0,\]
where $f$ is the univariate objective (of type AbstractUnivariateObjective
) and $\alpha_0$ is stored in ls
. It then repeatedly does $\alpha \gets \alpha\cdot{}p$ until either (i) the maximum number of iterations is reached (the max_iterations
keyword in Options
) or (ii) the following holds:
\[ f(\alpha) < y_0 + \epsilon \cdot \alpha \cdot d_0,\]
where $\epsilon$ is stored in ls
.
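For illustration, here is a minimal sketch that runs the backtracking line search through the Callable-based functors documented under LinesearchState
; the merit function f and its derivative g are made up for this example:
using SimpleSolvers

f(α) = (α - 0.6)^2    # univariate merit function with minimum at α = 0.6
g(α) = 2 * (α - 0.6)  # its derivative

ls = SimpleSolvers.LinesearchState(Backtracking())
α = ls(f, g, 1.0)     # α is multiplied by p until the stopping criterion holds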
SimpleSolvers.BierlaireQuadratic
— TypeBierlaireQuadratic <: LinesearchMethod
Algorithm taken from [2].
SimpleSolvers.BierlaireQuadraticState
— TypeBierlaireQuadraticState <: LinesearchState
Extended help
Note that the performance of BierlaireQuadratic
may heavily depend on the choice of DEFAULT_BIERLAIRE_ε
(i.e. the precision) and DEFAULT_BIERLAIRE_ξ
.
SimpleSolvers.Bisection
— TypeBisection <: LinesearchMethod
The bisection method.
Constructors
Bisection()
Extended help
The bisection algorithm starts with an interval and successively bisects it into smaller intervals until a root is found. See bisection
.
SimpleSolvers.BisectionState
— TypeBisectionState <: LinesearchState
Corresponding LinesearchState
to Bisection
.
See bisection
for the implementation of the algorithm.
Constructors
BisectionState(options)
BisectionState(; options)
SimpleSolvers.BracketMinimumCriterion
— TypeBracketMinimumCriterion <: BracketingCriterion
The criterion used for bracket_minimum
.
Functor
bc(yb, yc)
This checks whether yc
is bigger than yb
, i.e. whether c
is past the minimum.
SimpleSolvers.CurvatureCondition
— TypeCurvatureCondition <: LinesearchCondition
The second of the Wolfe conditions [3]. The first one is the SufficientDecreaseCondition
.
This encompasses the standard curvature condition and the strong curvature condition.
Constructor
CurvatureCondition(c, xₖ, gradₖ, pₖ, obj, grad; mode)
Here grad
has to be a Gradient
and obj
an AbstractObjective
. The other inputs are either arrays or numbers.
Implementation
For computational reasons CurvatureCondition
also has a field gradₖ₊₁
in which the temporary new gradient is saved.
SimpleSolvers.FixedPointIterator
— Method
SimpleSolvers.FixedPointIteratorCache
— TypeFixedPointIteratorCache
Stores the last solution xₖ
.
SimpleSolvers.Gradient
— TypeGradient
Abstract type. struct
s that are derived from this need an associated functor that computes the gradient of a function (in-place).
Implementation
When a custom Gradient
is implemented, a functor is needed:
function (grad::Gradient)(g::AbstractVector, x::AbstractVector) end
This functor can also be called with gradient!
.
Examples
Examples include:
SimpleSolvers.GradientAutodiff
— TypeGradientAutodiff <: Gradient
A struct
that realizes Gradient
by using ForwardDiff
.
Keys
The struct
stores:
F
: a function that has to be differentiated.∇config
: result of applyingForwardDiff.GradientConfig
.
Constructors
GradientAutodiff(F, x::AbstractVector)
GradientAutodiff(F, nx::Integer)
Functor
The functor does:
grad(g, x) = ForwardDiff.gradient!(g, grad.F, x, grad.∇config)
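A minimal usage sketch based on the constructors and the functor above (the function F is made up for this example):
using SimpleSolvers

F(x) = sum(x .^ 2)
x = [1.0, 2.0]
grad = SimpleSolvers.GradientAutodiff(F, x)

g = zero(x)
grad(g, x)   # fills g in-place with ∇F(x) = [2.0, 4.0]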
SimpleSolvers.GradientFiniteDifferences
— TypeGradientFiniteDifferences <: Gradient
A struct
that realizes Gradient
by using finite differences.
Keys
The struct
stores:
F
: a function that has to be differentiated.ϵ
: small constant on whose basis the finite differences are computed.e
: auxiliary vector used for computing finite differences. It's of the form $e_1 = \begin{bmatrix} 1 & 0 & \cdots & 0 \end{bmatrix}$.tx
: auxiliary vector used for computing finite differences. It stores the offset in thex
vector.
Constructor(s)
GradientFiniteDifferences{T}(F, nx::Integer; ϵ)
By default for ϵ
is DEFAULT_GRADIENT_ϵ
.
Functor
The functor does:
for j in eachindex(x,g)
ϵⱼ = grad.ϵ * x[j] + grad.ϵ
fill!(grad.e, 0)
grad.e[j] = 1
grad.tx .= x .- ϵⱼ .* grad.e
f1 = grad.F(grad.tx)
grad.tx .= x .+ ϵⱼ .* grad.e
f2 = grad.F(grad.tx)
g[j] = (f2 - f1) / (2ϵⱼ)
end
SimpleSolvers.GradientFunction
— TypeGradientFunction <: Gradient
A struct
that realizes a Gradient
by explicitly supplying a function.
Keys
The struct
stores:
∇F!
: a function that can be applied in place.
Functor
The functor does:
grad(g, x) = grad.∇F!(g, x)
SimpleSolvers.Hessian
— TypeHessian
Abstract type. struct
s derived from this need an associated functor that computes the Hessian of a function (in-place).
Also see Gradient
.
Implementation
When a custom Hessian
is implemented, a functor is needed:
function (hessian::Hessian)(h::AbstractMatrix, x::AbstractVector) end
This functor can also be called with compute_hessian!
.
Examples
Examples include:
These examples can also be called with e.g. Hessian(x; mode = :autodiff)
.
SimpleSolvers.HessianAutodiff
— TypeHessianAutodiff <: Hessian
A struct
that realizes Hessian
by using ForwardDiff
.
Keys
The struct
stores:
F
: a function that has to be differentiated.H
: a matrix in which the (updated)Hessian
is stored.Hconfig
: result of applyingForwardDiff.HessianConfig
.
Constructors
HessianAutodiff(F, x::AbstractVector)
HessianAutodiff(F, nx::Integer)
Functor
The functor does:
hes(g, x) = ForwardDiff.hessian!(hes.H, hes.F, x, hes.Hconfig)
SimpleSolvers.HessianBFGS
— TypeHessianBFGS <: Hessian
SimpleSolvers.HessianDFP
— TypeHessianDFP <: Hessian
SimpleSolvers.HessianFunction
— TypeHessianFunction <: Hessian
A struct
that realizes a Hessian
by explicitly supplying a function.
Keys
The struct
stores:
H!
: a function that can be applied in place.
Functor
The functor does:
hes(H, x) = hes.H!(H, x)
SimpleSolvers.Jacobian
— TypeJacobian
Abstract type. struct
s that are derived from this need an associated functor that computes the Jacobian of a function (in-place).
Implementation
When a custom Jacobian
is implemented, a functor is needed:
function (j::Jacobian)(g::AbstractMatrix, x::AbstractVector) end
This functor can also be called with compute_jacobian!
.
Examples
Examples include:
SimpleSolvers.Jacobian
— MethodJacobian(solver::NewtonSolver)
Return the Jacobian
stored in the NonlinearSystem
of solver
.
SimpleSolvers.Jacobian
— MethodJacobian(x, nls::NonlinearSystem)
Return the Jacobian
stored in nls
. Also see jacobian(::NonlinearSystem)
.
SimpleSolvers.JacobianAutodiff
— TypeJacobianAutodiff <: Jacobian
A struct
that realizes Jacobian
by using ForwardDiff
.
Keys
The struct
stores:
F
: a function that has to be differentiated.Jconfig
: result of applyingForwardDiff.JacobianConfig
.ty
: vector that is used for evaluatingForwardDiff.jacobian!
Constructors
JacobianAutodiff(F, y::AbstractVector)
JacobianAutodiff(F, nx::Integer)
Functor
The functor does:
jac(J, x) = ForwardDiff.jacobian!(J, jac.ty, x, jac.Jconfig)
SimpleSolvers.JacobianFiniteDifferences
— TypeJacobianFiniteDifferences <: Jacobian
A struct
that realizes Jacobian
by using finite differences.
Keys
The struct
stores:
F
: a function that has to be differentiated.ϵ
: small constant on whose basis the finite differences are computed.f1
:f2
:e1
: auxiliary vector used for computing finite differences. It's of the form $e_1 = \begin{bmatrix} 1 & 0 & \cdots & 0 \end{bmatrix}$.e2
:tx
: auxiliary vector used for computing finite differences. It stores the offset in thex
vector.
Constructor(s)
JacobianFiniteDifferences{T}(F, nx::Integer, ny::Integer; ϵ)
By default for ϵ
is DEFAULT_JACOBIAN_ϵ
.
Functor
The functor does:
for j in eachindex(x)
ϵⱼ = jac.ϵ * x[j] + jac.ϵ
fill!(jac.e, 0)
jac.e[j] = 1
jac.tx .= x .- ϵⱼ .* jac.e
f(jac.f1, jac.tx)
jac.tx .= x .+ ϵⱼ .* jac.e
f(jac.f2, jac.tx)
for i in eachindex(x)
J[i,j] = (jac.f2[i] - jac.f1[i]) / (2ϵⱼ)
end
end
SimpleSolvers.JacobianFunction
— TypeJacobianFunction <: Jacobian
A struct
that realizes a Jacobian
by explicitly supplying a function.
Keys
The struct
stores:
DF!
: a function that can be applied in place.
Functor
The functor does:
jac(g, x) = jac.DF!(g, x)
SimpleSolvers.LU
— Typestruct LU <: LinearSolverMethod
A custom implementation of an LU solver, meant to solve a LinearSystem
.
Routines that use the LU solver include factorize!
, ldiv!
and solve!
. In practice the LU
solver is used by calling the LinearSolver
constructor and ldiv!
or solve!
, or with an instance of LU
as an argument directly, as shown in the Example section of this docstring.
Constructor
The constructor is called with either no argument:
LU()
# output
LU{Missing}(missing, true)
or with pivot
and static
as optional booleans:
LU(; pivot=true, static=true)
# output
LU{Bool}(true, true)
Note that if we do not supply an explicit keyword static
, the corresponding field is missing
(as in the first case). Also see _static
.
Example
We use the LU
together with solve
to solve a linear system:
A = [1. 2. 3.; 5. 7. 11.; 13. 17. 19.]
v = rand(3)
ls = LinearSystem(A, v)
update!(ls, A, v)
lu = LU()
solve(lu, ls) ≈ inv(A) * v
# output
true
SimpleSolvers.LUSolverCache
— TypeLUSolverCache <: LinearSolverCache
Keys
A
: the factorized matrixA
,pivots
:perms
:info
SimpleSolvers.LUSolverLAPACK
— TypeLUSolverLAPACK <: LinearSolver
The LU Solver taken from LinearAlgebra.BLAS
.
SimpleSolvers.LinearSolver
— TypeLinearSolver <: AbstractSolver
A struct that stores LinearSolverMethod
s and LinearSolverCache
s. LinearSolver
s are used to solve LinearSystem
s.
Constructors
LinearSolver(method, cache)
LinearSolver(method, A)
LinearSolver(method, ls::LinearSystem)
LinearSolver(method, x)
We note that the constructors do not call the function factorize
, so they only allocate a new matrix. The factorization needs to be done manually.
You can manually factorize by either calling factorize!
or solve!
.
SimpleSolvers.LinearSolverCache
— TypeLinearSolverCache
An abstract type that summarizes all the caches used for LinearSolver
s. See e.g. LUSolverCache
.
SimpleSolvers.LinearSolverMethod
— TypeLinearSolverMethod <: SolverMethod
Summarizes all the methods used for solving linear systems of equations such as the LU
method.
Extended help
The abstract type SolverMethod
was imported from GeometricBase
.
SimpleSolvers.LinearSystem
— TypeLinearSystem
A LinearSystem
describes $Ax = y$, where we want to solve for $x$.
Keys
A
y
Constructors
A LinearSystem
can be allocated by calling:
LinearSystem(A, y)
LinearSystem(A)
LinearSystem(y)
LinearSystem{T}(n, m)
LinearSystem{T}(n)
Note that in any case the allocated system is initialized with NaN
s:
A = [1. 2. 3.; 4. 5. 6.; 7. 8. 9.]
y = [1., 2., 3.]
ls = LinearSystem(A, y)
# output
LinearSystem{Float64, Vector{Float64}, Matrix{Float64}}([NaN NaN NaN; NaN NaN NaN; NaN NaN NaN], [NaN, NaN, NaN])
In order to initialize the system with values, we have to call update!
:
update!(ls, A, y)
# output
LinearSystem{Float64, Vector{Float64}, Matrix{Float64}}([1.0 2.0 3.0; 4.0 5.0 6.0; 7.0 8.0 9.0], [1.0, 2.0, 3.0])
SimpleSolvers.Linesearch
— TypeLinesearch
A struct
that stores the LinesearchMethod
, the LinesearchState
and Options
.
Keys
algorithm::
LinesearchMethod
config::
Options
state::
LinesearchState
Constructors
The following constructors can be used:
Linesearch(alg, config, state)
Linesearch(; algorithm, config, kwargs...)
SimpleSolvers.LinesearchMethod
— TypeLinesearchMethod
Examples include Static
, Backtracking
, Bisection
and Quadratic
. See these examples for specific information on linesearch algorithms.
Extended help
A LinesearchMethod
always goes together with a LinesearchState
and each of those LinesearchState
s has a functor implemented:
ls(obj, α)
where obj is a AbstractUnivariateObjective
and α
is an initial step size. The output of this functor is then a final step size that is used for updating the parameters.
SimpleSolvers.LinesearchState
— TypeLinesearchState
Abstract type.
Examples include StaticState
, BacktrackingState
, BisectionState
and QuadraticState
.
Implementation
A struct
that is subtyped from LinesearchState
needs to implement the functors:
ls(x; kwargs...)
ls(obj::AbstractUnivariateObjective, x; kwargs...)
Additionally the following function needs to be extended:
LinesearchState(algorithm::LinesearchMethod; kwargs...)
Constructors
The following is used to construct a specific line search state based on a LinesearchMethod
:
LinesearchState(algorithm::LinesearchMethod; T::DataType=Float64, kwargs...)
where the data type should be specified each time the constructor is called. This is done automatically when calling the constructor of NewtonSolver
for example.
Functors
The following functors are auxiliary helper functions:
ls(f::Callable; kwargs...) = ls(TemporaryUnivariateObjective(f, missing); kwargs...)
ls(f::Callable, x::Number; kwargs...) = ls(TemporaryUnivariateObjective(f, missing), x; kwargs...)
ls(f::Callable, g::Callable; kwargs...) = ls(TemporaryUnivariateObjective(f, g); kwargs...)
ls(f::Callable, g::Callable, x::Number; kwargs...) = ls(TemporaryUnivariateObjective(f, g), x; kwargs...)
SimpleSolvers.MultivariateObjective
— TypeMultivariateObjective <: AbstractObjective
Like UnivariateObjective
, but stores gradients instead of derivatives. Also compare this to NonlinearSystem
.
The type of the stored gradient has to be a subtype of Gradient
.
Functor
If MultivariateObjective
is called on a single function, the gradient is generated with GradientAutodiff
.
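A short sketch (the function f is made up for this example); since only a function is supplied, the gradient is generated with GradientAutodiff
:
using SimpleSolvers

f(x) = sum(x .^ 2)
x = [1.0, 2.0]
obj = MultivariateObjective(f, x)

value!(obj, x)      # 5.0; increases the f_calls counter
gradient!(obj, x)   # [2.0, 4.0]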
SimpleSolvers.NewtonOptimizerCache
— TypeNewtonOptimizerCache
Keys
x̄
: the previous iterate,x
: current iterate (this stores the guess called by the functions generated withlinesearch_objective
),δ
: direction of optimization step (difference betweenx
andx̄
); this is obtained by multiplyingrhs
with the inverse of the Hessian,g
: gradient value (this stores the gradient associated withx
called by the derivative part oflinesearch_objective
),rhs
: the right hand side used to compute the update.
To understand how these are used in practice see e.g. linesearch_objective
.
Also compare this to NewtonSolverCache
.
SimpleSolvers.NewtonOptimizerState
— TypeNewtonOptimizerState <: OptimizationAlgorithm
The optimizer state is needed to update the Optimizer
. This is different to OptimizerStatus
and OptimizerResult
which serve as diagnostic tools. It stores a LinesearchState
and a NewtonOptimizerCache
which is used to compute the line search objective at each iteration.
Keys
linesearch::
LinesearchState
cache::
NewtonOptimizerCache
SimpleSolvers.NewtonSolver
— TypeNewtonSolver
A struct
that comprises all Newton solvers. Those typically differ in the way the Jacobian
is computed.
Constructors
The NewtonSolver
can be called with a NonlinearSystem
or with a Callable
. Note however that the latter will probably be deprecated in the future.
linesearch = Quadratic()
F(y, x, params) = y .= tanh.(x)
x = [.5, .5]
y = zero(x)
F(y, x, nothing)
NewtonSolver(x, y; F = F, linesearch = linesearch)
# output
i= 0,
x= NaN,
f= NaN,
rxₐ= NaN,
rfₐ= NaN
What is shown here is the status of the NewtonSolver
, i.e. an instance of NonlinearSolverStatus
.
Keywords
nonlinearsystem::
NonlinearSystem
: the system that has to be solved. This can be accessed by callingnonlinearsystem
,jacobian::
Jacobian
linear::
LinearSolver
: the linear solver is used to compute thedirection
of the solver step (seesolver_step!
). This can be accessed by callinglinearsolver
,linesearch::
LinesearchState
refactorize::Int
: determines after how many steps the Jacobian is updated and refactored (seefactorize!
). If we haverefactorize > 1
, then we speak of aQuasiNewtonSolver
,cache::
NewtonSolverCache
config::
Options
status::
NonlinearSolverStatus
:
SimpleSolvers.NewtonSolver
— Method
SimpleSolvers.NewtonSolverCache
— TypeNewtonSolverCache
Stores x̄
, x
, δx
, rhs
, y
and J
.
Compare this to NewtonOptimizerCache
.
Keys
x̄
: the previous iterate,x
: the next iterate (or guess thereof). The guess is computed when calling the functions created bylinesearch_objective
,δx
: search direction. This is updated when callingsolver_step!
via theLinearSolver
stored in theNewtonSolver
,rhs
: the right-hand-side (this can be accessed by callingrhs
),y
: the objective evaluated atx
. This is used inlinesearch_objective
,J::AbstractMatrix
: the Jacobian evaluated atx
. This is used inlinesearch_objective
. Note that this is not of typeJacobian
!
Constructor
NewtonSolverCache(x, y)
SimpleSolvers.NonlinearMethod
— Type
A supertype collecting all nonlinear methods, including NewtonMethod
s.
SimpleSolvers.NonlinearSolver
— TypeNonlinearSolver <: AbstractSolver
A supertype that comprises e.g. NewtonSolver
.
SimpleSolvers.NonlinearSolverStatus
— TypeNonlinearSolverStatus
Stores absolute, relative and successive residuals for x
and f
. It is used as a diagnostic tool in NewtonSolver
.
Keys
i::Int
: iteration number,rxₐ
: absolute residual inx
,rxₛ
: successive residual inx
,rfₐ
: absolute residual inf
,rfₛ
: successive residual inf
,x
: the current solution (can also be accessed by callingsolution
),x̄
: previous solutionδ
: change in solution (seedirection
). This is updated by callingupdate!(::NonlinearSolverStatus, ::AbstractVector, ::NonlinearSystem)
,x̃
: a variable that gives the component-wise change via $\delta/x$,f₀
: initial function value,f
: current function value,f̄
: previous function value,γ
: records change inf
. This is updated by callingupdate!(::NonlinearSolverStatus, ::AbstractVector, ::NonlinearSystem)
,x_converged::Bool
f_converged::Bool
g_converged::Bool
f_increased::Bool
Examples
NonlinearSolverStatus{Float64}(3)
# output
i= 0,
x= NaN,
f= NaN,
rxₐ= NaN,
rfₐ= NaN
SimpleSolvers.NonlinearSystem
— TypeNonlinearSystem
A NonlinearSystem
describes $F(x) = y$, where we want to solve for $x$ and $F$ is nonlinear in general (also compare this to LinearSystem
and MultivariateObjective
).
NonlinearSystem
s are used for solvers whereas MultivariateObjective
s are their equivalent for optimizers.
Keys
F
: accessed by callingFunction(nls)
,J::
Jacobian
: accessed by callingJacobian(nls)
,f
: accessed by callingvalue
(nls)
,j
: accessed by callingjacobian
(nls)
,x_f
: accessed by callingf_argument
(nls)
,x_j
: accessed by callingj_argument
(nls)
,f_calls
: accessed by callingf_calls
(nls)
,j_calls
: accessed by callingj_calls
(nls)
.
SimpleSolvers.NonlinearSystem
— MethodNonlinearSystem(F, x, f)
SimpleSolvers.NonlinearSystem
— MethodNonlinearSystem(F, J!, x, f)
SimpleSolvers.OptimizationAlgorithm
— Type
An OptimizationAlgorithm
is a data structure that is used to dispatch on different algorithms.
It needs to implement three methods,
initialize!(alg::OptimizationAlgorithm, ::AbstractVector)
update!(alg::OptimizationAlgorithm, ::AbstractVector)
solver_step!(::AbstractVector, alg::OptimizationAlgorithm)
that initialize and update the state of the algorithm and perform an actual optimization step.
Further the following convenience methods should be implemented,
objective(alg::OptimizationAlgorithm)
gradient(alg::OptimizationAlgorithm)
hessian(alg::OptimizationAlgorithm)
linesearch(alg::OptimizationAlgorithm)
which return the objective to optimize, its gradient and (approximate) Hessian as well as the linesearch algorithm used in conjunction with the optimization algorithm if any.
See NewtonOptimizerState
for a struct
that was derived from OptimizationAlgorithm
.
SimpleSolvers.Optimizer
— TypeOptimizer
The optimizer that stores all the information needed for an optimization problem. This problem can be solved by calling solve!(::AbstractVector, ::Optimizer)
.
Keys
algorithm::
OptimizationAlgorithm
,objective::
MultivariateObjective
,config::
Options
,result::
OptimizerResult
,state::
OptimizationAlgorithm
.
SimpleSolvers.OptimizerResult
— TypeOptimizerResult
Stores an OptimizerStatus
as well as x
, f
and g
(as keys). OptimizerStatus
stores all other information (apart from x
,f
and g
); i.e. residuals etc.
SimpleSolvers.OptimizerStatus
— TypeOptimizerStatus
Stores residuals (relative and absolute) and various convergence properties.
See OptimizerResult
.
SimpleSolvers.Options
— TypeOptions
Keys
Configurable options with defaults (values 0 and NaN indicate unlimited):
x_abstol = 2eps(T)
: absolute tolerance forx
(the function argument). Used in e.g.assess_convergence!
andbisection
,x_reltol = 2eps(T)
: relative tolerance forx
(the function argument). Used in e.g.assess_convergence!
,x_suctol = 2eps(T)
: successive tolerance for x
. Used in e.g.assess_convergence!
,f_abstol = zero(T)
: absolute tolerance for how close the function value should be to zero. Seeabsolute_tolerance
. Used in e.g.bisection
andassess_convergence!
,f_reltol = 2eps(T)
: relative tolerance for the function value. Used in e.g.assess_convergence!
,f_suctol = 2eps(T)
: successive tolerance for the function value. Used in e.g. assess_convergence!
,f_mindec = T(10)^-4
: minimum value by which the function has to decrease (also seeminimum_decrease_threshold
),g_restol = 2eps(T)
: tolerance for the residual (?) of the gradient,x_abstol_break = -Inf
: seemeets_stopping_criteria
,x_reltol_break = Inf
: seemeets_stopping_criteria
,f_abstol_break = Inf
: seemeets_stopping_criteria
,f_reltol_break = Inf
: seemeets_stopping_criteria
.,g_restol_break = Inf
,f_calls_limit = 0
,g_calls_limit = 0
,h_calls_limit = 0
,allow_f_increases = true
,min_iterations = 0
,max_iterations = 1000
: the maximum number of iterations used in an algorithm, e.g. bisection
and the functor forBacktrackingState
,warn_iterations = 1000
,show_trace = false
,store_trace = false
,extended_trace = false
,show_every = 1
,verbosity = 1
Some of the constants are defined by the functions default_tolerance
and absolute_tolerance
.
SimpleSolvers.Quadratic
— TypeQuadratic <: LinesearchMethod
The quadratic method. Compare this to BierlaireQuadratic
. The algorithm is taken from [1].
Constructors
Quadratic()
Extended help
SimpleSolvers.Quadratic2
— Type
SimpleSolvers.QuadraticState
— TypeQuadraticState <: LinesearchState
Quadratic Polynomial line search.
Quadratic line search works by fitting a polynomial to a univariate objective (see AbstractUnivariateObjective
) and then finding the minimum of that polynomial. Also compare this to BierlaireQuadraticState
. The algorithm is taken from [1].
Keywords
config::
Options
α₀
: by defaultDEFAULT_ARMIJO_α₀
σ₀
: by defaultDEFAULT_ARMIJO_σ₀
σ₁
: by defaultDEFAULT_ARMIJO_σ₁
c
: by defaultDEFAULT_WOLFE_c₁
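A minimal sketch that runs the quadratic line search through the generic LinesearchState
functors (the objective and its derivative are made up for this example):
using SimpleSolvers

f(α) = (α - 0.3)^2
g(α) = 2 * (α - 0.3)

ls = SimpleSolvers.LinesearchState(Quadratic())
α = ls(f, g, 1.0)   # minimizer of the fitted quadratic polynomial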
SimpleSolvers.QuadraticState2
— TypeQuadraticState2 <: LinesearchState
Quadratic Polynomial line search. This is similar to QuadraticState
, but performs multiple iterations in which all parameters $p_0$, $p_1$ and $p_2$ are changed. This is different from QuadraticState
(taken from [1]), where only $p_2$ is changed. We further do not check the SufficientDecreaseCondition
but rather whether the derivative is small enough.
This algorithm repeatedly builds new quadratic polynomials until a minimum is found (to sufficient accuracy). The iteration may also stop after the maximum number of iterations is reached (see MAX_NUMBER_OF_ITERATIONS_FOR_QUADRATIC_LINESEARCH
).
Keywords
config::
Options
ε
: A constant that checks the precision/tolerance.s
: A constant that determines the initial interval for bracketing. By default this isDEFAULT_BRACKETING_s
.s_reduction:
A constant that determines the factor by whichs
is decreased in each new bracketing iteration.
SimpleSolvers.Static
— TypeStatic <: LinesearchMethod
The static method.
Constructors
Static(α)
Keys
Keys include:
α
: equivalent to a step size. The default is 1
.
Extended help
SimpleSolvers.StaticState
— TypeStaticState <: LinesearchState
The state for Static
.
Functors
For a Number
a
and an AbstractUnivariateObjective
obj
we have the following functors:
ls(a) = ls.α
ls(obj, a) = ls.α
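A short sketch of the behavior described above:
using SimpleSolvers

ls = SimpleSolvers.LinesearchState(Static(0.1))
ls(2.0)   # returns the stored step size 0.1, independently of the argument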
SimpleSolvers.SufficientDecreaseCondition
— TypeSufficientDecreaseCondition <: LinesearchCondition
The condition that determines if $\alpha_k$ is big enough.
Constructor
SufficientDecreaseCondition(c₁, xₖ, fₖ, gradₖ, pₖ, obj)
Functors
sdc(xₖ₊₁, αₖ)
sdc(αₖ)
The second functor is shorthand for sdc(compute_new_iterate(sdc.xₖ, αₖ, sdc.pₖ), T(αₖ))
, also see compute_new_iterate
.
Extended help
We call the constant that pertains to the sufficient decrease condition $c$. This is typically called $c_1$ in the literature [3]. See DEFAULT_WOLFE_c₁
for the relevant constant.
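A minimal sketch based on the constructor and functors above (the objective and the direction pₖ are made up for this example):
using SimpleSolvers

f(x) = sum(x .^ 2)
xₖ = [1.0, 2.0]
obj = MultivariateObjective(f, xₖ)
fₖ = value!(obj, xₖ)
gradₖ = gradient!(obj, xₖ)
pₖ = -gradₖ   # a descent direction

sdc = SimpleSolvers.SufficientDecreaseCondition(1e-4, xₖ, fₖ, gradₖ, pₖ, obj)
sdc(0.5)      # true if the step α = 0.5 decreases f sufficiently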
SimpleSolvers.TemporaryUnivariateObjective
— TypeTemporaryUnivariateObjective <: AbstractUnivariateObjective
Like UnivariateObjective
but doesn't store f
, d
, x_f
and x_d
as well as f_calls
and d_calls
.
In practice TemporaryUnivariateObjective
s are allocated by calling linesearch_objective
.
Constructors
Below we show a few constructors that can be used to allocate TemporaryUnivariateObjective
s. Note however that in practice one probably should not do that and instead call linesearch_objective
.
f(x) = x^2 - 1
g(x) = 2x
δx(x) = - g(x) / 2
x₀ = 3.
_f(α) = f(compute_new_iterate(x₀, α, δx(x₀)))
_d(α) = g(compute_new_iterate(x₀, α, δx(x₀)))
ls_obj = TemporaryUnivariateObjective{typeof(x₀)}(_f, _d)
# output
TemporaryUnivariateObjective{Float64, typeof(_f), typeof(_d)}(_f, _d)
Alternatively one can also do:
ls_obj = TemporaryUnivariateObjective(_f, _d, x₀)
# output
TemporaryUnivariateObjective{Float64, typeof(_f), typeof(_d)}(_f, _d)
Here we wrote ls_obj
to mean line search objective.
SimpleSolvers.UnivariateObjective
— TypeUnivariateObjective <: AbstractUnivariateObjective
Keywords
It stores the following:
F
: objectiveD
: derivative of objectivef
: cache for function outputd
: cache for derivative outputx_f
: x used to evaluate F (stored in f)x_d
: x used to evaluate D (stored in d)f_calls
: number of timesF
has been calledd_calls
: number of timesD
has been called
Constructor
There are several constructors, the most generic (besides the default one) is:
UnivariateObjective(F, D, x; f, d)
Where no keys are inferred, except x_f
and x_d
. f_calls
and d_calls
are set to zero.
The most general constructor (i.e. the one that needs the least specification) is:
f(x::Number) = x ^ 2
UnivariateObjective(f, 1.)
# output
UnivariateObjective:
f(x) = NaN
d(x) = NaN
x_f = NaN
x_d = NaN
number of f calls = 0
number of d calls = 0
where ForwardDiff
is used to generate the derivative of the (anonymous) function.
Functor
The functor calls value!
.
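A short sketch of the functor and the derivative part (compare with the constructor example above):
using SimpleSolvers

obj = UnivariateObjective(x -> x^2, 1.0)
obj(2.0)              # calls value!; returns 4.0 and increases f_calls
derivative(obj, 2.0)  # 4.0; the derivative was generated with ForwardDiff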
GeometricBase.update!
— Methodupdate!(iterator, x, params)
Update the solver::
FixedPointIterator
based on x
. This updates the cache (instance of type FixedPointIteratorCache
) and the status (instance of type NonlinearSolverStatus
). In the course of updating the latter, we also update the NonlinearSystem
stored in iterator
(and status(iterator)
).
GeometricBase.update!
— Methodupdate!(H, x)
Update a HessianAutodiff
object H
.
This is identical to initialize!
.
Implementation
Internally this is calling the HessianAutodiff
functor and therefore also ForwardDiff.hessian!
.
GeometricBase.update!
— Method
GeometricBase.update!
— Methodupdate!(solver, x, params)
Update the solver::
NewtonSolver
based on x
. This updates the cache (instance of type NewtonSolverCache
) and the status (instance of type NonlinearSolverStatus
). In the course of updating the latter, we also update the NonlinearSystem
stored in solver
(and status(solver)
).
GeometricBase.update!
— Methodupdate!(opt, x)
Compute objective and gradient at new solution and update result.
This first calls update!(::OptimizerResult, ::AbstractVector, ::AbstractVector, ::AbstractVector)
and then update!(::NewtonOptimizerState, ::AbstractVector)
. We note that the OptimizerStatus
(unlike the NewtonOptimizerState
) is updated when calling update!(::OptimizerResult, ::AbstractVector, ::AbstractVector, ::AbstractVector)
.
GeometricBase.update!
— Methodupdate!(cache, x, g)
Update the NewtonOptimizerCache
based on x
and g
.
GeometricBase.update!
— Methodupdate!(cache::NewtonOptimizerCache, x, g, hes)
Update an instance of NewtonOptimizerCache
based on x
.
This is used in update!(::NewtonOptimizerState, ::AbstractVector)
.
This sets:
\[\bar{x}^\mathtt{cache} \gets x, x^\mathtt{cache} \gets x, g^\mathtt{cache} \gets g, \mathrm{rhs}^\mathtt{cache} \gets -g, \delta^\mathtt{cache} \gets H^{-1}\mathrm{rhs}^\mathtt{cache},\]
where we wrote $H$ for the Hessian (i.e. the input argument hes
).
Also see update!(::NewtonSolverCache, ::AbstractVector)
.
Implementation
The multiplication by the inverse of $H$ is done with LinearAlgebra.ldiv!
.
GeometricBase.update!
— Methodupdate!(state::NewtonOptimizerState, x, g, hes)
Update an instance of NewtonOptimizerState
based on x
, g
and hes
, where g
can either be an AbstractVector
or a Gradient
and hes
is a Hessian
. This updates the NewtonOptimizerCache
contained in the NewtonOptimizerState
by calling update!(::NewtonOptimizerCache, ::AbstractVector, ::Union{AbstractVector, Gradient}, ::Hessian)
.
An instance of NewtonOptimizerState
stores the NewtonOptimizerCache
as well as a LinesearchState
. The LinesearchState
stays the same at every iteration, which is why only the NewtonOptimizerState
is updated.
Examples
We show that after initializing, update has to be called together with a Gradient
and a Hessian
:
If we only call update!
once there are still NaN
s in the ...
f(x) = sum(x.^2)
x = [1., 2.]
state = NewtonOptimizerState(x)
obj = MultivariateObjective(f, x)
g = gradient!(obj, x)
hes = HessianAutodiff(obj, x)
update!(hes, x)
update!(state, x, g, hes)
# output
NewtonOptimizerState{Float64, SimpleSolvers.BacktrackingState{Float64}, SimpleSolvers.NewtonOptimizerCache{Float64, Vector{Float64}}}(Backtracking with α₀ = 1.0, ϵ = 0.0001and p = 0.5., SimpleSolvers.NewtonOptimizerCache{Float64, Vector{Float64}}([1.0, 2.0], [1.0, 2.0], [-1.0, -2.0], [2.0, 4.0], [-2.0, -4.0]))
GeometricBase.update!
— Methodupdate!(status, x, nls)
Update the status::
NonlinearSolverStatus
based on x
for the NonlinearSystem
obj
.
This also updates the objective nls
!
Sets x̄
and f̄
to x
and f
respectively and computes δ
as well as γ
. The new x
and x̄
stored in status
are used to compute δ
. The new f
and f̄
stored in status
are used to compute γ
. See NonlinearSolverStatus
for an explanation of those variables.
GeometricBase.update!
— Methodupdate!(result, x, f, g)
Update the OptimizerResult
based on x
, f
and g
(all vectors). This involves updating the OptimizerStatus
stored in result
(by calling residual!
).
This also calls increase_iteration_number!(::OptimizerResult)
GeometricBase.update!
— Methodupdate!(hessian, x)
Update the Hessian
based on the vector x
. For an explicit example see e.g. update!(::HessianAutodiff)
.
GeometricBase.update!
— Methodupdate!(ls, A, y)
Set the rhs
vector to y
and the matrix stored in ls
to A
.
Calling update!
doesn't solve the LinearSystem
, you still have to call solve!
in combination with a LinearSolver
.
GeometricBase.update!
— Methodupdate!(cache, x)
Update the NewtonSolverCache
based on x
, i.e.:
cache.x̄
$\gets$ x,cache.x
$\gets$ x,cache.δx
$\gets$ 0.
GeometricBase.value
— Methodvalue(obj::AbstractObjective, x)
Evaluates the objective value at x
(i.e. computes obj.F(x)
).
Examples
using SimpleSolvers
obj = UnivariateObjective(x::Number -> x^2, 1.)
value(obj, 2.)
obj.f_calls
# output
1
Note that the f_calls
counter increased by one!
If value
is called on obj
(an AbstractObjective
) without supplying x
then the output of the last obj.F
call is returned:
using SimpleSolvers
obj = UnivariateObjective(x::Number -> x^2, 1.)
value(obj)
# output
NaN
In this example this is NaN
since the function hasn't been called yet.
LinearAlgebra.ldiv!
— Methodldiv!(x, lu, b)
Compute inv(cache(lsolver).A) * b
by utilizing the factorization of the lu solver (see LU
and LinearSolver
) and store the result in x
.
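A sketch combining ldiv!
with factorize!
, assuming (as in the factorize!
examples) that the factorized LinearSolver
can be passed as the middle argument:
using SimpleSolvers
using LinearAlgebra: ldiv!

A = [1. 2. 3.; 5. 7. 11.; 13. 17. 19.]
b = [1., 0., 0.]
x = similar(b)

lsolver = LinearSolver(LU(), A)
factorize!(lsolver)   # the factorization has to be done manually
ldiv!(x, lsolver, b)  # x ≈ A \ b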
SimpleSolvers.QuasiNewtonSolver
— MethodQuasiNewtonSolver
A convenience constructor for NewtonSolver
. Also see DEFAULT_ITERATIONS_QUASI_NEWTON_SOLVER
.
Calling QuasiNewtonSolver
hence produces an instance of NewtonSolver
with the difference that refactorize ≠ 1
. The Jacobian
is thus not evaluated and refactored in every step.
Implementation
It does:
QuasiNewtonSolver(args...; kwargs...) = NewtonSolver(args...; refactorize=DEFAULT_ITERATIONS_QUASI_NEWTON_SOLVER, kwargs...)
SimpleSolvers._static
— Method_static(A)
Determine whether to allocate a StaticArray
or simply copy the input array. This is used when calling LinearSolverCache
on LU
. Every matrix that is smaller or equal to N_STATIC_THRESHOLD
is turned into a StaticArray
as a consequence.
SimpleSolvers.absolute_tolerance
— Methodabsolute_tolerance(T)
Determine the absolute tolerance for a specific data type. This is used in the constructor of Options
.
In comparison to default_tolerance
, this should return a very small number, close to zero (i.e. not just machine precision).
Examples
absolute_tolerance(Float64)
# output
0.0
absolute_tolerance(Float32)
# output
0.0f0
SimpleSolvers.adjust_α
— Methodadjust_α(ls, αₜ, α)
Check which conditions the new αₜ
is in $[\sigma_0\alpha_0, \simga_1\alpha_0]$ and return the updated α
accordingly (it is updated if it does not lie in the interval).
We first check the following:
\[ \alpha_t < \sigma_0\alpha,\]
where $\sigma_0$ is stored in ls
(i.e. in an instance of QuadraticState
). If this is not true we check:
\[ \alpha_t > \sigma_1\alpha,\]
where $\sigma_1$ is again stored in ls
. If this second condition is also not true we simply return the unchanged $\alpha_t$. So if \alpha_t
does not lie in the interval $(\sigma_0\alpha, \sigma_1\alpha)$ the interval is made bigger by either multiplying with $\sigma_0$ (default DEFAULT_ARMIJO_σ₀
) or $\sigma_1$ (default DEFAULT_ARMIJO_σ₁
).
SimpleSolvers.adjust_α
— Methodadjust_α(αₜ, α)
Adjust αₜ
based on the previous α
. Also see adjust_α(::QuadraticState{T}, ::T, ::T) where {T}
.
The check that $\alpha \in [\sigma_0\alpha_\mathrm{old}, \sigma_1\alpha_\mathrm{old}]$ should safeguard against stagnation in the iterates as well as checking that $\alpha$ decreases at least by a factor $\sigma_1$. The defaults for σ₀
and σ₁
are DEFAULT_ARMIJO_σ₀
and DEFAULT_ARMIJO_σ₁
respectively.
Implementation
We use the defaults DEFAULT_ARMIJO_σ₀
and DEFAULT_ARMIJO_σ₁
.
SimpleSolvers.alloc_d
— Functionalloc_d(x)
Allocate NaN
s of the size of the derivative of f
(with respect to x
).
This is used in combination with a AbstractUnivariateObjective
.
SimpleSolvers.alloc_f
— Functionalloc_f(x)
Allocate NaN
s of the size of f
(evaluated at x
).
SimpleSolvers.alloc_g
— Functionalloc_g(x)
Allocate NaN
s of the size of the gradient of f
(with respect to x
).
This is used in combination with a MultivariateObjective
.
SimpleSolvers.alloc_h
— Functionalloc_h(x)
Allocate NaN
s of the size of the Hessian of f
(with respect to x
).
This is used in combination with a MultivariateObjective
.
SimpleSolvers.alloc_x
— Functionalloc_x(x)
Allocate NaN
s of the size of x
.
SimpleSolvers.assess_convergence!
— Methodassess_convergence!(status, config)
Check if one of the following is true for status::
NonlinearSolverStatus
:
status.rxₐ ≤ config.x_abstol
,status.rxₛ ≤ config.x_suctol
,status.rfₐ ≤ config.f_abstol
,status.rfₛ ≤ config.f_suctol
.
Also see meets_stopping_criteria
. The tolerances are by default determined with default_tolerance
.
SimpleSolvers.assess_convergence!
— Methodassess_convergence!(status, config)
Checks if the optimizer converged.
SimpleSolvers.bisection
— Methodbisection(f, x)
Use bracket_minimum
to find a starting interval and then do bisections.
SimpleSolvers.bisection
— Methodbisection(f, xmin, xmax; config)
Perform bisection of f
in the interval [xmin
, xmax
] with Options
config
.
The algorithm is repeated until a root is found (up to tolerance config.f_abstol
which is determined by default_tolerance
by default).
Implementation
When calling bisection
it first checks if $x_\mathrm{min} < x_\mathrm{max}$ and otherwise flips the two entries.
Extended help
The bisection algorithm divides an interval into equal halves until a root is found (up to a desired accuracy).
We first initialize:
\[\begin{aligned} x_0 \gets & x_\mathrm{min}, \\ x_1 \gets & x_\mathrm{max}, \end{aligned}\]
and then repeat:
\[\begin{aligned} & x \gets \frac{x_0 + x_1}{2}, \\ & \text{if $f(x_0)f(x) > 0$} \\ & \qquad x_0 \gets x, \\ & \text{else} \\ & \qquad x_1 \gets x, \\ & \text{end} \end{aligned}\]
So the algorithm checks in each step where the sign change occurred and moves the $x_0$ or $x_1$ accordingly. The loop is terminated (and errors) if config.max_iterations
is reached (1000 by default; see the Options
struct).
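A minimal sketch, assuming bisection
returns the located root (the function is made up for this example):
using SimpleSolvers

x = SimpleSolvers.bisection(x -> x^3 - 1, 0.0, 2.0)  # root of x³ - 1 in [0, 2]
x ≈ 1.0   # true, up to the tolerance config.f_abstol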
SimpleSolvers.bracket_minimum
— Methodbracket_minimum(f, x)
Move a bracket successively in the search direction (starting at x
) and increase its size until a local minimum of f
is found. This is used for performing Bisection
s when only one x
is given (and not an entire interval). This bracketing algorithm is taken from [4]. Also compare it to bracket_minimum_with_fixed_point
.
Keyword arguments
Extended help
For bracketing we need two constants $s$ and $k$ (see DEFAULT_BRACKETING_s
and DEFAULT_BRACKETING_k
).
Before we start the algorithm we initialize it, i.e. we check that we indeed have a descent direction:
\[\begin{aligned} & a \gets x, \\ & b \gets a + s, \\ & \mathrm{if} \quad f(b) > f(a)\\ & \qquad\text{Flip $a$ and $b$ and set $s\gets-s$.}\\ & \mathrm{end} \end{aligned}\]
The algorithm then successively computes:
\[c \gets b + s,\]
and then checks whether $f(c) > f(b)$. If this is true it returns $(a, c)$ or $(c, a)$, depending on whether $a<c$ or $c<a$ respectively. If this is not satisfied $a,$ $b$ and $s$ are updated:
\[\begin{aligned} a \gets & b, \\ b \gets & c, \\ s \gets & sk, \end{aligned}\]
and the algorithm is continued. If we have not found a sign change after $n_\mathrm{max}$ iterations (see DEFAULT_BRACKETING_nmax
) the algorithm is terminated and returns an error. The interval that is returned by bracket_minimum
is then typically used as a starting point for bisection
.
The function bracket_root
is equivalent to bracket_minimum
with the only difference that the criterion we check for is:
\[f(c)f(b) < 0,\]
i.e. that a sign change in the function occurs.
See bracket_root
.
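A minimal sketch, assuming the bracket is returned as a tuple as described above (the function is made up for this example):
using SimpleSolvers

f(x) = (x - 2.0)^2
a, b = SimpleSolvers.bracket_minimum(f, 0.0)
a < 2.0 < b   # the bracket contains the minimizer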
SimpleSolvers.bracket_minimum_with_fixed_point
— Methodbracket_minimum_with_fixed_point(f, x)
Find a bracket while keeping the left side (i.e. x
) fixed. The algorithm is similar to bracket_minimum
(also based on DEFAULT_BRACKETING_s
and DEFAULT_BRACKETING_k
) with the difference that for the latter the left side is also moving.
The function bracket_minimum_with_fixed_point
is used as a starting point for Quadratic
(taken from [1]), as the minimum of the polynomial approximation is:
\[p_2 = \frac{f(b) - f(a) - f'(0)b}{b^2},\]
where $b = \mathtt{bracket\_minimum\_with\_fixed\_point}(a)$. We check that $f(b) > f(a)$ in order to ensure that the curvature of the polynomial is positive (i.e. $p_2 > 0$) and we have a minimum.
SimpleSolvers.bracket_root
— Methodbracket_root(f, x)
Make a bracket for the function based on x
(for root finding).
This is largely equivalent to bracket_minimum
. See the end of that docstring for more information.
SimpleSolvers.cache
— Methodcache(ls)
Return the cache (of type LinearSolverCache
) of the LinearSolver
.
SimpleSolvers.check_gradient
— Methodcheck_gradient(g)
Check norm, maximum value and minimum value of a vector.
Examples
using SimpleSolvers
g = [1., 1., 1., 2., 0.9, 3.]
SimpleSolvers.check_gradient(g; digits=3)
# output
norm(Gradient): 4.1
minimum(|Gradient|): 0.9
maximum(|Gradient|): 3.0
SimpleSolvers.check_hessian
— Methodcheck_hessian(H)
Check the condition number, determinant, max and min value of the Hessian
H
.
using SimpleSolvers
H = [1. √2.; √2. 3.]
SimpleSolvers.check_hessian(H)
# output
Condition Number of Hessian: 13.9282
Determinant of Hessian: 1.0
minimum(|Hessian|): 1.0
maximum(|Hessian|): 3.0
SimpleSolvers.check_jacobian
— Methodcheck_jacobian(J)
Check the condition number, determinant, max and min value of the Jacobian
J
.
using SimpleSolvers
J = [1. √2.; √2. 3.]
SimpleSolvers.check_jacobian(J)
# output
Condition Number of Jacobian: 13.9282
Determinant of Jacobian: 1.0
minimum(|Jacobian|): 1.0
maximum(|Jacobian|): 3.0
SimpleSolvers.clear!
— Methodclear!(obj)
Similar to initialize!
, but with only one input argument.
SimpleSolvers.clear!
— Methodclear!(nls::NonlinearSystem)
Similar to initialize!
, but with only one input argument.
SimpleSolvers.clear!
— Methodclear!(obj)
Similar to initialize!
, but with only one input argument.
SimpleSolvers.clear!
— Methodclear!(ls)
Write NaN
s into Matrix(ls)
and Vector(ls)
.
SimpleSolvers.clear!
— Methodclear!(result)
Clear all the information contained in result::
OptimizerResult
. This also calls clear!(::OptimizerStatus)
.
Calling initialize!
on an OptimizerResult
calls clear!
internally.
SimpleSolvers.clear!
— Methodclear!(obj)
Similar to initialize!
.
SimpleSolvers.compute_gradient!
— Functioncompute_gradient!
Alias for gradient!
. Will probably be deprecated.
SimpleSolvers.compute_hessian!
— Methodcompute_hessian!(h, x, ForH)
Compute the Hessian of the function ForH
at x
and store it in h
.
Implementation
Internally this allocates a Hessian
object.
SimpleSolvers.compute_hessian!
— Methodcompute_hessian!(h, x, hessian)
Compute the Hessian and store it in h
.
SimpleSolvers.compute_hessian
— Methodcompute_hessian(x, hessian)
Compute the Hessian at point x
and return the result.
Internally this calls compute_hessian!
.
SimpleSolvers.compute_hessian_ad!
— Methodcompute_hessian_ad!(g, x, F)
Build a HessianAutodiff
object based on F
and apply it to x
. The result is stored in H
.
Also see gradient_ad!
for the Gradient
version.
Implementation
This is using compute_hessian!
with the keyword mode
set to autodiff
.
SimpleSolvers.compute_jacobian!
— Methodcompute_jacobian!(j, x, jacobian::Jacobian, params)
Apply the Jacobian
and store the result in j
.
SimpleSolvers.compute_jacobian!
— Methodcompute_jacobian!(j, x, ForJ, params)
Allocate a Jacobian
object, apply it to x
, and store the result in j
.
SimpleSolvers.compute_new_iterate
— Methodcompute_new_iterate(xₖ, αₖ, pₖ)
Compute xₖ₊₁
based on a direction pₖ
and a step length αₖ
.
Extended help
In the case of vector spaces this function simply does:
xₖ + αₖ * pₖ
For manifolds we instead perform a retraction [5].
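For vector spaces this reduces to a plain update step:
using SimpleSolvers

xₖ = [1.0, 2.0]
pₖ = [-1.0, 0.0]   # search direction
αₖ = 0.5           # step length
SimpleSolvers.compute_new_iterate(xₖ, αₖ, pₖ)   # == xₖ + αₖ * pₖ == [0.5, 2.0]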
SimpleSolvers.default_tolerance
— Methoddefault_tolerance(T)
Determine the default tolerance for a specific data type. This is used in the constructor of Options
.
Examples
default_tolerance(Float64)
# output
4.440892098500626e-16
default_tolerance(Float32)
# output
2.3841858f-7
default_tolerance(Float16)
# output
Float16(0.001953)
SimpleSolvers.derivative!!
— Methodderivative!!(obj::AbstractObjective, x)
Similar to value!!
, but for the derivative part (see UnivariateObjective
and TemporaryUnivariateObjective
).
SimpleSolvers.derivative!
— Methodderivative!(obj, x)
Similar to value!
, but for the derivative part (see UnivariateObjective
).
SimpleSolvers.derivative
— Methodderivative(obj::AbstractObjective, x)
Similar to value
, but for the derivative part (see UnivariateObjective
).
SimpleSolvers.determine_initial_α
— Methoddetermine_initial_α(y₀, obj, α₀)
Check whether α₀
satisfies the BracketMinimumCriterion
for obj
. If the criterion is not satisfied we call bracket_minimum_with_fixed_point
. This is used as a starting point for using the functor of QuadraticState
and makes sure that α
describes a point past the minimum.
SimpleSolvers.direction
— Methoddirection(::NewtonOptimizerCache)
Return the direction of the gradient step (i.e. δ
) of an instance of NewtonOptimizerCache
.
SimpleSolvers.f_argument
— Methodf_argument(nls)
Return the argument that was last used for evaluating value!
for the NonlinearSystem
nls
.
SimpleSolvers.f_calls
— Methodf_calls(nls)
Tell how many times Function(nls)
has been called.
Examples
F(x) = tanh.(x)
x = [1., 2., 3.]
F!(y, x, params) = y .= F(x)
nls = NonlinearSystem(F!, x, F(x))
f_calls(nls)
# output
0
After calling value
once we get:
value!(nls, x, nothing)
f_calls(nls)
# output
1
SimpleSolvers.factorize!
— Methodfactorize!(lsolver)
Factorize the matrix stored in the LinearSolverCache
in lsolver
.
See factorize!(::LinearSolver{T, LUT}) where {T, LUT <: LU}
for a concrete example.
SimpleSolvers.factorize!
— Methodfactorize!(lsolver::LinearSolver, A)
Factorize the matrix A
and store the result in cache(lsolver).A
. Note that calling cache
on lsolver
returns the instance of LUSolverCache
stored in lsolver
.
Examples
A = [1. 2. 3.; 5. 7. 11.; 13. 17. 19.]
y = [1., 0., 0.]
x = similar(y)
lsolver = LinearSolver(LU(; static=false), x)
factorize!(lsolver, A)
cache(lsolver).A
# output
3×3 Matrix{Float64}:
13.0 17.0 19.0
0.0769231 0.692308 1.53846
0.384615 0.666667 2.66667
Here cache(lsolver).A
stores the factorized matrix. If we call factorize!
with two input arguments as above, the method first copies the matrix A
into the LUSolverCache
. We can equivalently also do:
A = [1. 2. 3.; 5. 7. 11.; 13. 17. 19.]
y = [1., 0., 0.]
lsolver = LinearSolver(LU(), A)
factorize!(lsolver)
cache(lsolver).A
# output
3×3 StaticArraysCore.MMatrix{3, 3, Float64, 9} with indices SOneTo(3)×SOneTo(3):
13.0 17.0 19.0
0.0769231 0.692308 1.53846
0.384615 0.666667 2.66667
Also note the difference between the output types of the two factorized matrices. This is because we set the keyword static
to false when calling LU
in the first example
. Also see _static
.
SimpleSolvers.find_maximum_value
— Methodfind_maximum_value(v, k)
Find the maximum value of vector v
starting from the index k
. This is used for pivoting in factorize!
.
SimpleSolvers.gradient!!
— Methodgradient!!(obj::MultivariateObjective, x)
Like derivative!!
, but for MultivariateObjective
, not UnivariateObjective
.
SimpleSolvers.gradient!
— Methodgradient!(g, x, grad)
Apply the Gradient
grad
to x
and store the result in g
.
Implementation
This is equivalent to doing
grad(g, x)
SimpleSolvers.gradient!
— Methodgradient!(obj::MultivariateObjective, x)
Like derivative!
, but for MultivariateObjective
, not UnivariateObjective
.
SimpleSolvers.gradient
— Methodgradient(x, grad)
Apply grad
to x
and return the result.
Implementation
Internally this is using gradient!
.
SimpleSolvers.gradient
— Methodgradient(x, obj::MultivariateObjective)
Like derivative
, but for MultivariateObjective
, not UnivariateObjective
.
SimpleSolvers.gradient
— Methodgradient(::NewtonOptimizerCache)
Return the stored gradient (array) of an instance of NewtonOptimizerCache
.
SimpleSolvers.gradient_ad!
— Methodgradient_ad!(g, x, F)
Build a GradientAutodiff
object based on F
and apply it to x
. The result is stored in g
.
Also see gradient_fd!
for the finite differences version.
SimpleSolvers.gradient_fd!
— Methodgradient_fd!(g, x, F)
Build a GradientFiniteDifferences
object based on F
and apply it to x
. The result is stored in g
.
Also see gradient_ad!
for the autodiff version.
SimpleSolvers.increase_iteration_number!
— Methodincrease_iteration_number!(status)
Increase iteration number of status
.
Examples
status = NonlinearSolverStatus{Float64}(5)
increase_iteration_number!(status)
iteration_number(status)
# output
1
SimpleSolvers.increase_iteration_number!
— Method increase_iteration_number!(result)
Increase the iteration number of the OptimizerResult result. This calls increase_iteration_number!(::OptimizerStatus).
SimpleSolvers.increase_iteration_number!
— Method increase_iteration_number!(status)
Increase the iteration number of an OptimizerStatus status. See increase_iteration_number!(::NonlinearSolverStatus).
SimpleSolvers.initialize!
— Method initialize!(hessian, x)
SimpleSolvers.initialize!
— Method initialize!(H, x)
Initialize a HessianAutodiff object H.
Implementation
Internally this is calling the HessianAutodiff functor and therefore also ForwardDiff.hessian!.
SimpleSolvers.initialize!
— Method initialize!(ls, x)
Initialize the LinearSystem ls. See clear!(::LinearSystem).
SimpleSolvers.initialize!
— Method initialize!(status, x)
Clear status::NonlinearSolverStatus (via the function clear!).
SimpleSolvers.initialize!
— Method initialize!(cache, x)
Initialize the NewtonSolverCache based on x.
Implementation
This calls alloc_x to do all the initialization.
SimpleSolvers.isaOptimizationAlgorithm
— Method isaOptimizationAlgorithm(alg)
Verify if an object implements the OptimizationAlgorithm interface.
SimpleSolvers.j_argument
— Method j_argument(nls)
Return the argument that was last used for evaluating jacobian! for the NonlinearSystem nls.
SimpleSolvers.j_calls
— Method j_calls(nls)
Like f_calls in relation to a NonlinearSystem nls, but for jacobian (or jacobian!).
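Examples
A minimal sketch mirroring the f_calls example above; we assume a freshly constructed NonlinearSystem has not evaluated its Jacobian yet:
F(x) = tanh.(x)
x = [1., 2., 3.]
F!(y, x, params) = y .= F(x)
nls = NonlinearSystem(F!, x, F(x))
j_calls(nls)
# output
0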
SimpleSolvers.jacobian!!
— Method jacobian!!(nls::NonlinearSystem, x)
Force the evaluation of the Jacobian for a NonlinearSystem. Like derivative!! for UnivariateObjective or gradient!! for MultivariateObjective.
SimpleSolvers.jacobian!
— Method jacobian!(nls::NonlinearSystem, x)
Compute the Jacobian of nls at x and store it in jacobian(nls). Note that the evaluation of the Jacobian is not necessarily enforced here (unlike calling jacobian!!). Like derivative! for UnivariateObjective and gradient! for MultivariateObjective.
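Examples
A hedged sketch based only on the signatures documented here; it assumes the NonlinearSystem constructor from the f_calls example sets up a Jacobian automatically. For F(x) = tanh.(x) the Jacobian at zero is the identity:
F(x) = tanh.(x)
x = zeros(3)
F!(y, x, params) = y .= F(x)
nls = NonlinearSystem(F!, x, F(x))
jacobian!(nls, x)
jacobian(nls) ≈ [1. 0. 0.; 0. 1. 0.; 0. 0. 1.]
# output
true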
SimpleSolvers.jacobian
— Method jacobian(solver::NewtonSolver)
Return the evaluated Jacobian (a Matrix) stored in the NonlinearSystem of solver.
Also see jacobian(::NonlinearSystem) and Jacobian(::NonlinearSystem).
SimpleSolvers.jacobian
— Method jacobian(nls::NonlinearSystem)
Return the value of the Jacobian stored in nls (an instance of NonlinearSystem). Like derivative for UnivariateObjective or gradient for MultivariateObjective.
Also see Jacobian(::NonlinearSystem).
SimpleSolvers.linearsolver
— Method linearsolver(solver)
Return the linear part (i.e. a LinearSolver) of a NewtonSolver.
Examples
x = rand(3)
y = rand(3)
F(x) = tanh.(x)
F!(y, x, params) = y .= F(x)
s = NewtonSolver(x, y; F = F!)
linearsolver(s)
# output
LinearSolver{Float64, LU{Missing}, SimpleSolvers.LUSolverCache{Float64, StaticArraysCore.MMatrix{3, 3, Float64, 9}}}(LU{Missing}(missing, true), SimpleSolvers.LUSolverCache{Float64, StaticArraysCore.MMatrix{3, 3, Float64, 9}}([0.0 0.0 0.0; 0.0 0.0 0.0; 0.0 0.0 0.0], [0, 0, 0], [0, 0, 0], 0))
SimpleSolvers.linesearch_objective
— Method linesearch_objective(nl::NonlinearSolver, params)
Build a line search objective based on a NonlinearSolver (almost always a NewtonSolver in practice).
SimpleSolvers.linesearch_objective
— Method linesearch_objective(objective, cache)
Create a TemporaryUnivariateObjective for the line search algorithm. The variable on which this objective depends is $\alpha$.
Example
x = [1, 0., 0.]
f = x -> sum(x .^ 3 / 6 + x .^ 2 / 2)
obj = MultivariateObjective(f, x)
gradient!(obj, x)
value!(obj, x)
cache = NewtonOptimizerCache(x)
hess = Hessian(obj, x; mode = :autodiff)
update!(hess, x)
update!(cache, x, obj.g, hess)
x₂ = [.9, 0., 0.]
gradient!(obj, x₂)
value!(obj, x₂)
update!(hess, x₂)
update!(cache, x₂, obj.g, hess)
ls_obj = linesearch_objective(obj, cache)
α = .1
(ls_obj.F(α), ls_obj.D(α))
# output
(0.4412947468016475, -0.8083161485821551)
In the example above we have to apply update! twice on the instance of NewtonOptimizerCache because it needs to store the current and the previous iterate.
Implementation
Calling the function and derivative stored in the TemporaryUnivariateObjective created with linesearch_objective does not allocate a new array, but uses the one stored in the instance of NewtonOptimizerCache.
SimpleSolvers.linesearch_objective
— Method linesearch_objective(nls, cache, params)
Make a line search objective for a Newton solver (the cache here is an instance of NewtonSolverCache).
Implementation
Different from the linesearch_objective for NewtonOptimizerCaches, we apply l2norm to the output of objective!. This is because the solver operates on an objective with multiple outputs, of which we have to find roots, whereas an optimizer operates on an objective with a single output, of which we should find a minimum.
Also see linesearch_objective(::MultivariateObjective{T}, ::NewtonOptimizerCache{T}) where {T}.
SimpleSolvers.meets_stopping_criteria
— Method meets_stopping_criteria(status, config)
Determine whether the iteration stops based on the current NonlinearSolverStatus.
The function meets_stopping_criteria may return true even if the solver has not converged. To check convergence, call assess_convergence! (with the same input arguments).
The function meets_stopping_criteria returns true if one of the following is satisfied:
- the status::NonlinearSolverStatus is converged (checked with assess_convergence!) and iteration_number(status) ≥ config.min_iterations,
- status.f_increased and config.allow_f_increases = false (i.e. f increased even though we do not allow it),
- iteration_number(status) ≥ config.max_iterations,
- any component in solution(status) is NaN,
- any component in status.f is NaN,
- status.rxₐ > config.x_abstol_break (by default Inf), i.e. the residual in x got too big,
- status.rfₐ > config.f_abstol_break (by default Inf), i.e. the residual in f got too big.
So convergence is only one possible criterion for which meets_stopping_criteria returns true; we may also satisfy a stopping criterion without having convergence!
Examples
In the following example we show that meets_stopping_criteria evaluates to true when used on a freshly allocated NonlinearSolverStatus:
status = NonlinearSolverStatus{Float64}(5)
config = Options()
meets_stopping_criteria(status, config)
# output
true
This obviously has not converged. To check convergence we can use assess_convergence!:
status = NonlinearSolverStatus{Float64}(5)
config = Options()
assess_convergence!(status, config)
# output
false
SimpleSolvers.meets_stopping_criteria
— Method meets_stopping_criteria(status, config)
Check if the optimizer has converged.
Implementation
meets_stopping_criteria first calls assess_convergence! and then checks if one of the following is true:
- converged (the output of assess_convergence!) is true and status.i $\geq$ config.min_iterations,
- config.allow_f_increases is false and status.f_increased is true,
- status.i $\geq$ config.max_iterations,
- status.rxₐ $>$ config.x_abstol_break,
- status.rxᵣ $>$ config.x_reltol_break,
- status.rfₐ $>$ config.f_abstol_break,
- status.rfᵣ $>$ config.f_reltol_break,
- status.rg $>$ config.g_restol_break,
- status.x_isnan,
- status.f_isnan,
- status.g_isnan.
SimpleSolvers.method
— Method method(ls)
Return the method (of type LinearSolverMethod) of the LinearSolver ls.
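Examples
A brief sketch, reusing the LinearSolver construction from the factorize! example above; we only check the documented return type:
x = zeros(3)
lsolver = LinearSolver(LU(; static=false), x)
method(lsolver) isa LinearSolverMethod
# output
true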
SimpleSolvers.minimum_decrease_threshold
— Method minimum_decrease_threshold(T)
The minimum value by which a function $f$ should decrease during an iteration.
The default value of $10^{-4}$ is often used in the literature [bierlaire2015optimization, nocedal2006numerical].
Examples
minimum_decrease_threshold(Float64)
# output
0.0001
minimum_decrease_threshold(Float32)
# output
0.0001f0
SimpleSolvers.nonlinearsystem
— Method nonlinearsystem(it)
Return the NonlinearSystem contained in the FixedPointIterator. Compare this to linearsolver.
SimpleSolvers.nonlinearsystem
— Method nonlinearsystem(solver)
Return the NonlinearSystem contained in the NewtonSolver. Compare this to linearsolver.
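Examples
A minimal sketch reusing the NewtonSolver setup from the linearsolver example above:
x = rand(3)
y = rand(3)
F(x) = tanh.(x)
F!(y, x, params) = y .= F(x)
s = NewtonSolver(x, y; F = F!)
nonlinearsystem(s) isa NonlinearSystem
# output
true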
SimpleSolvers.print_status
— Method print_status(status, config)
Print the solver status if one of the following is satisfied:
- the following three hold: (i) config.verbosity $\geq 1$, (ii) assess_convergence!(status, config) is false, (iii) iteration_number(status) > config.max_iterations,
- config.verbosity > 1.
SimpleSolvers.residual!
— Method residual!(status)
Compute the residuals for status::NonlinearSolverStatus. Note that this does not update x, f, δ or γ; these are updated with update!(::NonlinearSolverStatus, ::AbstractVector, ::NonlinearSystem). The computed residuals are the following:
- rxₐ: absolute residual in $x$,
- rxₛ: successive residual (the norm of $\delta$),
- rfₐ: absolute residual in $f$,
- rfₛ: successive residual (the norm of $\gamma$).
SimpleSolvers.residual!
— Method residual!(status, x, x̄, f, f̄, g, ḡ)
Compute the residual based on previous iterates (x̄, f̄, ḡ) and current iterates (x, f, g).
Also see assess_convergence! and meets_stopping_criteria.
SimpleSolvers.residual!
— Method residual!(result, x, f, g)
SimpleSolvers.rhs
— Method rhs(cache)
Return the right-hand side of an instance of NewtonOptimizerCache.
SimpleSolvers.rhs
— Method rhs(cache)
Return the right-hand side of the equation, stored in cache::NewtonSolverCache.
SimpleSolvers.shift_χ_to_avoid_stalling
— Method shift_χ_to_avoid_stalling(χ, a, b, c, ε)
Check whether b is closer to a or c and shift χ accordingly.
SimpleSolvers.solution
— Method solution(status)
Return the current value of x (i.e. the current solution).
SimpleSolvers.solve!
— Function
SimpleSolvers.solve!
— Method solve!(x, ls::LinearSolver, A, b)
Solve the linear system described by:
\[ Ax = b,\]
and store the result in x. Here $A$ and $b$ are provided as input arguments.
Implementation
Note that, compared to solve(::LinearSolver, ::AbstractVector), this method involves an additional factorization of A.
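Examples
A hedged sketch reusing the matrix from the factorize! example above; rather than asserting the exact solution, we only check the defining property $Ax = b$:
A = [1. 2. 3.; 5. 7. 11.; 13. 17. 19.]
b = [1., 0., 0.]
x = similar(b)
lsolver = LinearSolver(LU(; static=false), x)
solve!(x, lsolver, A, b)
A * x ≈ b
# output
true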
SimpleSolvers.solve!
— Method solve!(x, ls::LinearSolver, b)
Solve the linear system described by:
\[ Ax = b,\]
and store the result in x. Here $b$ is provided as an input argument and the factorized $A$ is stored in the LinearSolver ls (respectively its LinearSolverCache).
SimpleSolvers.solve!
— Method solve!(x, ls::LinearSolver, lsys::LinearSystem)
Solve the LinearSystem lsys with the LinearSolver ls and store the result in x. Also see solve!(::LinearSolver, ::LinearSystem).
SimpleSolvers.solve!
— Method solve!(ls::LinearSolver, args...)
Solve the LinearSystem with the LinearSolver ls.
SimpleSolvers.solve!
— Method
SimpleSolvers.solve!
— Method solve!(x, opt)
Solve the optimization problem described by opt::Optimizer and store the result in x.
f(x) = sum(x .^ 2 + x .^ 3 / 3)
x = [1f0, 2f0]
opt = Optimizer(x, f; algorithm = Newton())
solve!(opt, x)
# output
2-element Vector{Float32}:
4.6478817f-8
3.0517578f-5
We can also check how many iterations it took:
iteration_number(opt)
# output
12
To see the value of x after one iteration, confer the docstring of solver_step!.
SimpleSolvers.solve
— Method solve(ls, method)
Solve the LinearSystem ls with the LinearSolverMethod method.
SimpleSolvers.solver_step!
— Method solver_step!(it, x, params)
Solve the problem stored in an instance it of FixedPointIterator.
SimpleSolvers.solver_step!
— Method solver_step!(s, x, params)
Compute one Newton step for f based on the Jacobian (computed with jacobian!).
SimpleSolvers.solver_step!
— Method solver_step!(x, state)
Compute a full iterate for an instance state of NewtonOptimizerState.
This also performs a line search.
Examples
f(x) = sum(x .^ 2 + x .^ 3 / 3)
x = [1f0, 2f0]
opt = Optimizer(x, f; algorithm = Newton())
solver_step!(opt, x)
# output
2-element Vector{Float32}:
0.25
0.6666666
SimpleSolvers.triple_point_finder
— Method triple_point_finder(f, x)
Find three points a > b > c s.t. f(a) > f(b) and f(c) > f(b). This is used for performing a quadratic line search (see QuadraticState).
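Examples
A hedged sketch on a parabola with minimum at 1; it assumes the three points are returned as a tuple, which the docstring does not show, and only checks the bracketing property:
f(x) = (x - 1) ^ 2
a, b, c = triple_point_finder(f, 0.)
f(a) > f(b) && f(c) > f(b)
# output
true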
Implementation
For δ we take DEFAULT_BRACKETING_s as the default. For nmax we take DEFAULT_BRACKETING_nmax as the default.
Extended help
The algorithm is taken from [2, Chapter 11.2.1].
SimpleSolvers.value!!
— Method value!!(obj::AbstractObjective, x)
Set obj.x_f to x and obj.f to value(obj, x), and return value(obj).
SimpleSolvers.value!
— Method value!(obj::AbstractObjective, x)
Check if x is not equal to obj.x_f and then apply value!!. Else simply return value(obj).
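Examples
A minimal sketch using a MultivariateObjective as in the linesearch_objective example above; value! returns the (cached) function value:
f(x) = sum(x .^ 2)
x = [1., 2.]
obj = MultivariateObjective(f, x)
value!(obj, x)
# output
5.0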
SimpleSolvers.value!
— Method value!(nls::NonlinearSystem, x)
Check if x is not equal to f_argument(nls) and then apply value!!. Else simply return value(nls).