Parameter optimisation with mcasopt

This section introduces the key concepts behind mcasopt:

  • Defining solution criteria;

  • Defining an error metric for MCAS outputs (the objective function);

  • Using optimisation methods to search for improved parameter estimates; and

  • How mcasopt ties this all together.

The mcasopt package uses a numerical optimisation method to find an approximately optimal model solution over a user-chosen subset of the model parameters.

This requires the user to define four things:

  • An input file that defines values for all of the model parameters;

  • A set of compound states;

  • A function that takes a vector of free parameters and updates the corresponding model parameters (i.e., determines the contents of a new MCAS input file), as sketched after this list; and

  • A vector of initial values for these free parameters.
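
The exact form of the update function depends on which parameters are freed; the sketch below is a minimal Python illustration, using hypothetical parameter names and illustrative initial values (none of these names come from mcasopt or MCAS):

    import numpy as np

    def update_parameters(free_params, base_params):
        """Map the optimiser's free-parameter vector onto the full MCAS
        parameter set, returning the values to write to a new input file.
        The parameter names below are purely illustrative."""
        params = dict(base_params)
        params["coupling_strength"] = free_params[0]
        params["potential_depth"] = free_params[1]
        return params

    # Illustrative initial values for the two free parameters.
    x0 = np.array([1.0, 45.0])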

Optimisation methods

The SciPy library provides a range of optimisation methods (see the reference documentation and optimisation tutorial for details). Some of these methods only require function evaluations, some require calculating or estimating the objective function’s Jacobian, and some require calculating the objective function’s Hessian. Depending on the method, bounds constraints, linear constraints, and/or non-linear constraints may also be provided.

Promising optimisation methods include the Nelder-Mead simplex algorithm and Powell’s method.
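
As a sketch of how such a derivative-free local search might be driven, the example below calls scipy.optimize.minimize with the Nelder-Mead method and simple bounds; the quadratic objective is only a stand-in for the MCAS error metric, and the numbers are illustrative:

    import numpy as np
    from scipy.optimize import minimize

    def objective(x):
        """Placeholder error metric; in practice this would run MCAS with the
        candidate parameters and compare its outputs to the solution criteria."""
        return float(np.sum((x - np.array([1.2, 40.0])) ** 2))

    x0 = np.array([1.0, 45.0])            # initial free-parameter values
    bounds = [(0.5, 2.0), (30.0, 60.0)]   # Nelder-Mead bounds need a recent SciPy

    result = minimize(objective, x0, method="Nelder-Mead", bounds=bounds)
    print(result.x, result.fun)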

Note

Experimentation will likely be required to identify the most suitable minimisation method. Because the error metric is discontinuous, we should avoid optimisation methods that rely on the objective function’s Jacobian and/or Hessian.

We may also need to explore the global optimisation methods provided by SciPy, such as basin-hopping, differential evolution, simplicial homology global optimisation (SHGO), dual annealing, dividing rectangles (DIRECT), and even brute force (worst-case).
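
For example, a global search with scipy.optimize.differential_evolution over the same illustrative placeholder objective and bounds might look like this:

    import numpy as np
    from scipy.optimize import differential_evolution

    def objective(x):
        """Placeholder error metric standing in for the MCAS comparison."""
        return float(np.sum((x - np.array([1.2, 40.0])) ** 2))

    bounds = [(0.5, 2.0), (30.0, 60.0)]
    result = differential_evolution(objective, bounds, seed=1)
    print(result.x, result.fun)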

Note

Also consider genetic optimisation methods provided by pymoo and/or pygad.
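
If a genetic algorithm proves useful, a minimal sketch with pymoo’s single-objective GA might look like the following; the problem definition, bounds, and objective are illustrative, and the interface shown assumes a recent pymoo release:

    import numpy as np
    from pymoo.algorithms.soo.nonconvex.ga import GA
    from pymoo.core.problem import ElementwiseProblem
    from pymoo.optimize import minimize

    class McasProblem(ElementwiseProblem):
        """Illustrative wrapper: _evaluate would normally run MCAS and
        compute the error metric for the candidate free parameters."""

        def __init__(self):
            super().__init__(n_var=2, n_obj=1,
                             xl=np.array([0.5, 30.0]),
                             xu=np.array([2.0, 60.0]))

        def _evaluate(self, x, out, *args, **kwargs):
            out["F"] = float(np.sum((x - np.array([1.2, 40.0])) ** 2))

    res = minimize(McasProblem(), GA(pop_size=40), ("n_gen", 50), seed=1, verbose=False)
    print(res.X, res.F)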

Note

Also consider nevergrad, a gradient-free optimisation library from Facebook Research.
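
A minimal nevergrad sketch, assuming its NGOpt optimiser with a bounded array parametrisation and the same illustrative objective, might look like this:

    import nevergrad as ng

    def objective(x):
        """Placeholder error metric standing in for the MCAS comparison."""
        return float(sum((xi - t) ** 2 for xi, t in zip(x, (1.2, 40.0))))

    # Two free parameters, bounded, optimised within a fixed evaluation budget.
    param = ng.p.Array(shape=(2,)).set_bounds(lower=[0.5, 30.0], upper=[2.0, 60.0])
    optimizer = ng.optimizers.NGOpt(parametrization=param, budget=200)
    recommendation = optimizer.minimize(objective)
    print(recommendation.value)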