Different Neural Network Losses
GeometricMachineLearning has a number of standard loss functions already implemented. How to implement custom losses is shown in the tutorials.
A Note on Physics-Informed Neural Networks
A popular trend in recent years has been to take known physical properties of the differential equation, or the entire differential equation itself, into account through the loss function [35]. This is one way of considering physical properties, and GeometricMachineLearning allows for a flexible implementation of custom losses, but it is nonetheless discouraged. In general a neural network consists of three ingredients: an architecture, a loss function and an optimizer.
Instead of considering certain properties through the loss function, we enforce them strongly through the network architecture and the optimizer; the latter pertains to manifold optimization. This approach has two advantages: properties that we know our network should have are enforced exactly, and training is much easier because we do not have to tune additional hyperparameters in the loss.
GeometricMachineLearning.NetworkLoss — Type

An abstract type for all neural network losses. If you want to implement CustomLoss <: NetworkLoss you need to define a functor:

(loss::CustomLoss)(model, ps, input, output)

where model is an instance of an AbstractExplicitLayer or a Chain and ps are the parameters.
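For illustration, such a functor might be implemented as follows (a minimal sketch: the name MyL2Loss, the normalization and the call model(input, ps) are assumptions, not part of the library):

```julia
using GeometricMachineLearning
using LinearAlgebra: norm

# Hypothetical custom loss; the name MyL2Loss and the normalization
# are illustrative and not part of the library.
struct MyL2Loss <: GeometricMachineLearning.NetworkLoss end

# The functor signature required for NetworkLoss subtypes: `model` is
# an AbstractExplicitLayer or a Chain and `ps` are its parameters.
function (loss::MyL2Loss)(model, ps, input, output)
    norm(model(input, ps) - output) / norm(output)
end
```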
GeometricMachineLearning.TransformerLoss — Type

TransformerLoss(seq_length, prediction_window)

Make an instance of the transformer loss.

This is the loss for a transformer network (especially a transformer integrator).

Parameters

The prediction_window specifies how many time steps are predicted into the future. It defaults to the value specified for seq_length.
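For example (a sketch; the sequence length 16 and prediction window 4 are arbitrary):

```julia
using GeometricMachineLearning

# Sequences of length 16; prediction_window defaults to seq_length.
loss = TransformerLoss(16)

# Predict only 4 time steps into the future.
loss_short = TransformerLoss(16, 4)
```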
GeometricMachineLearning.FeedForwardLoss — Type

FeedForwardLoss()

Make an instance of a loss for feedforward neural networks.

This doesn't have any parameters.
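A minimal usage sketch (model, ps and the data are placeholders and not constructed here):

```julia
using GeometricMachineLearning

# FeedForwardLoss takes no arguments.
loss = FeedForwardLoss()

# It is evaluated through the generic NetworkLoss functor interface:
# loss(model, ps, input, output)
```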
GeometricMachineLearning.AutoEncoderLoss — Type

This loss should always be used together with a neural network of type AutoEncoder (and it is also the default for training such a network).

It simply computes:

\[\mathtt{AutoEncoderLoss}(nn, \mathrm{input}) = ||nn(\mathrm{input}) - \mathrm{input}||.\]
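A minimal sketch (the call without a separate output argument mirrors the formula above and is an assumption; model and ps are placeholders):

```julia
using GeometricMachineLearning

# AutoEncoderLoss takes no arguments.
loss = AutoEncoderLoss()

# Since the target equals the input, no separate output is passed
# (assumed call signature, mirroring the formula above):
# loss(model, ps, input)
```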
GeometricMachineLearning.ReducedLoss — Type

ReducedLoss(encoder, decoder)

Make an instance of ReducedLoss based on an Encoder and a Decoder.

This loss should be used together with a NeuralNetworkIntegrator or TransformerIntegrator.
The loss is computed as:
\[\mathrm{loss}_{\mathcal{E}, \mathcal{D}}(\mathcal{NN}, \mathrm{input}, \mathrm{output}) = ||\mathcal{D}(\mathcal{NN}(\mathcal{E}(\mathrm{input}))) - \mathrm{output}||,\]
where $\mathcal{E}$ is the Encoder, $\mathcal{D}$ is the Decoder and $\mathcal{NN}$ is the neural network whose loss we compute.
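A minimal sketch of wiring this up (enc, dec, model, ps and the data are placeholders, not constructed here):

```julia
using GeometricMachineLearning

# `enc` and `dec` stand for an Encoder and a Decoder, e.g. taken from
# a trained autoencoder; they are not constructed in this sketch.
# loss = ReducedLoss(enc, dec)

# `model` would be a NeuralNetworkIntegrator or a TransformerIntegrator
# acting on the reduced space:
# loss(model, ps, input, output)
# computes ||dec(model(enc(input), ps)) - output||.
```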
- [35] M. Raissi, P. Perdikaris and G. E. Karniadakis. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. Journal of Computational Physics 378, 686–707 (2019).