The Asynchronously Parallel Optimization Solver for finding Multiple Minima (APOSMM) coordinates concurrent local optimization runs to identify multiple local minima.

Required: mpmath, SciPy

Optional (see below): petsc4py, nlopt, DFO-LS

Configuring APOSMM

APOSMM works with a choice of optimizers, some requiring external packages. To import the optimization packages (and their dependencies) at the global level (recommended), add the following lines to the calling script before importing APOSMM:

import libensemble.gen_funcs
libensemble.gen_funcs.rc.aposmm_optimizers = <optimizers>

where optimizers is a string (or list of strings) from the available options:

"petsc", "nlopt", "dfols", "scipy", "external"
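As a sketch, a calling script that selects NLopt might begin as follows (this assumes nlopt is installed; the import of persistent APOSMM at the end follows libEnsemble's module layout):

```python
# Set the optimizer choice BEFORE importing APOSMM, so that the
# optimization packages are imported at the global level.
import libensemble.gen_funcs

libensemble.gen_funcs.rc.aposmm_optimizers = "nlopt"  # or a list, e.g. ["nlopt", "scipy"]

# Import APOSMM only after setting rc.aposmm_optimizers
from libensemble.gen_funcs.persistent_aposmm import aposmm
```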

Issues with ensemble hanging or failed simulations?

Note that if using mpi4py comms, PETSc must be imported at the global level or the ensemble may hang.

Exception: If you are using the MPIExecutor (or otherwise running MPI inside a user function) together with Open MPI, then you must:

  • Use local comms for libEnsemble (not mpi4py)

  • Not include the rc line above

This is because PETSc imports MPI, so a global import of PETSc would result in nested MPI, which Open MPI does not support. When the rc line is omitted, the import instead happens locally within the optimization function.
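For that Open MPI case, the setup can be sketched as below. This is only an illustration: the sim_specs, gen_specs, and exit_criteria dictionaries are assumed to be defined elsewhere, and the libE_specs keys follow libEnsemble's documented options:

```python
# Sketch: local comms, so libEnsemble itself does not use mpi4py.
# Note there is NO rc.aposmm_optimizers line here, so PETSc is
# imported inside the optimization function rather than globally.
from libensemble.libE import libE

libE_specs = {"comms": "local", "nworkers": 4}

# sim_specs, gen_specs, and exit_criteria are assumed defined elsewhere
H, persis_info, flag = libE(
    sim_specs, gen_specs, exit_criteria, libE_specs=libE_specs
)
```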

To see the optimization algorithms supported, see LocalOptInterfacer.

Persistent APOSMM