APOSMM

Asynchronously Parallel Optimization Solver for finding Multiple Minima (APOSMM) coordinates concurrent local optimization runs to identify many local minima faster on parallel hardware.

Supported local optimization routines include:

  • DFO-LS: Derivative-free solver for (bound-constrained) nonlinear least-squares minimization

  • NLopt: Library for nonlinear optimization, providing a common interface for various methods

  • scipy.optimize: Open-source solvers for nonlinear problems, linear programming, constrained and nonlinear least-squares, root finding, and curve fitting

  • PETSc/TAO: Routines for the scalable (parallel) solution of scientific applications

Required: mpmath, SciPy

Optional (see below): petsc4py, nlopt, DFO-LS

Configuring APOSMM

APOSMM works with a choice of optimizers, some requiring external packages. Specify them on a global level before importing APOSMM:

import libensemble.gen_funcs
libensemble.gen_funcs.rc.aposmm_optimizers = <optimizers>

where <optimizers> is a string (or list of strings) from:

"petsc", "nlopt", "dfols", "scipy", "external"

Issues with ensemble hanging or failed simulations with PETSc?

If using the MPIExecutor or other MPI routines and your MPI backend is Open MPI, then you must:

  • Use local comms for libEnsemble (no mpirun, mpiexec, aprun, etc.).

  • Omit the aposmm_optimizers line above.

This is because PETSc imports MPI, and a global import of PETSc results in nested MPI (which is not supported by Open MPI).

To see the optimization algorithms supported, see LocalOptInterfacer.

Persistent APOSMM

This module contains methods used in our implementation of the Asynchronously Parallel Optimization Solver for finding Multiple Minima (APOSMM) method: https://doi.org/10.1007/s12532-017-0131-4

This implementation of APOSMM was developed by Kaushik Kulkarni and Jeffrey Larson in the summer of 2019.

persistent_aposmm.aposmm(H, persis_info, gen_specs, libE_info)

APOSMM coordinates multiple local optimization runs, dramatically reducing the time needed to discover multiple local minima on parallel systems. APOSMM tracks these fields:

  • "x" [n floats]: Parameters being optimized over

  • "x_on_cube" [n floats]: Parameters scaled to the unit cube

  • "f" [float]: Objective function being minimized

  • "local_pt" [bool]: True if point from a local optimization run

  • "started_run" [bool]: True if point has started a local opt run

  • "num_active_runs" [int]: Number of active local runs point is in

  • "local_min" [bool]: True if point has been ruled a local minimum

  • "sim_id" [int]: Row number of entry in history

and optionally

  • "fvec" [m floats]: All objective components (if performing a least-squares calculation)

  • "grad" [n floats]: The gradient (if available) of the objective with respect to x.

Note:

  • If any of the above fields are desired after a libEnsemble run, name them in gen_specs["out"].

  • If initializing APOSMM with past function values, make sure to include "x", "x_on_cube", "f", "local_pt", etc. in gen_specs["in"] (and, of course, include them in the H0 array given to libEnsemble).
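As a sketch of the two notes above, gen_specs["out"] can declare the history fields with NumPy-style (name, dtype, shape) tuples, and a structured H0 array can seed APOSMM with past evaluations. The dimension n, the dtype choices, and the sample values below are illustrative, not mandated:

```python
import numpy as np

n = 2  # problem dimension (illustrative)

# Declaring the APOSMM history fields listed above so they are
# available after the run (typical dtype choices):
gen_specs_out = [
    ("x", float, n),
    ("x_on_cube", float, n),
    ("f", float),
    ("local_pt", bool),
    ("started_run", bool),
    ("num_active_runs", int),
    ("local_min", bool),
    ("sim_id", int),
]

# When seeding with past function values, gen_specs["in"] names the
# fields APOSMM will read, and H0 must carry them; here, two
# previously evaluated points:
gen_specs_in = ["x", "x_on_cube", "f", "local_pt", "sim_id"]
H0 = np.zeros(2, dtype=gen_specs_out)
H0["x"] = [[0.1, 0.2], [0.5, 0.9]]
H0["f"] = [1.3, 0.7]
H0["sim_id"] = [0, 1]
```

NumPy accepts the same (name, dtype, shape) tuples for building the structured H0 array, so the two declarations can share one list.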

Necessary quantities in gen_specs["user"] are:

  • "lb" [n floats]: Lower bound on search domain

  • "ub" [n floats]: Upper bound on search domain

  • "localopt_method" [str]: Name of an NLopt, PETSc/TAO, or SciPy method (see "advance_local_run" below for supported methods). When using a SciPy method, you must also supply "opt_return_codes", a list of integers used to determine whether the x produced by the localopt method should be ruled a local minimum. (For example, SciPy's COBYLA has a "status" of 1 at an optimum, but SciPy's Nelder-Mead and BFGS have a "status" of 0 at an optimum.)

  • "initial_sample_size" [int]: Number of uniformly sampled points to be evaluated before starting the localopt runs. Can be zero if no additional sampling is desired, but if zero there must be past sim_f values given to libEnsemble in H0.
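For a concrete picture, the required entries for a 2-D problem might look like the following sketch; the bounds, sample size, and choice of LN_BOBYQA are illustrative, not defaults:

```python
import numpy as np

# Illustrative required gen_specs["user"] entries for a 2-D problem:
user = {
    "lb": np.array([-3.0, -2.0]),    # lower bounds on the search domain (length n)
    "ub": np.array([3.0, 2.0]),      # upper bounds on the search domain (length n)
    "localopt_method": "LN_BOBYQA",  # an NLopt method
    "initial_sample_size": 100,      # uniform points evaluated before local runs start
}

# A SciPy method additionally needs "opt_return_codes"; e.g., COBYLA
# reports status 1 at an optimum:
scipy_user = dict(user, localopt_method="scipy_COBYLA", opt_return_codes=[1])
```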

Optional gen_specs["user"] entries are:

  • "max_active_runs" [int]: Bound on number of runs APOSMM is advancing

  • "sample_points" [numpy array]: Points to be sampled (original domain). If more sample points are needed by APOSMM during the course of the optimization, points will be drawn uniformly over the domain

  • "components" [int]: Number of objective components

  • "dist_to_bound_multiple" [float in (0, 1]]: What fraction of the distance to the nearest boundary should the initial step size be in localopt runs

  • "lhs_divisions" [int]: Number of Latin hypercube sampling partitions (0 or 1 results in uniform sampling)

  • "mu" [float]: Distance from the boundary that all localopt starting points must satisfy

  • "nu" [float]: Distance from identified minima that all starting points must satisfy

  • "rk_const" [float]: Multiplier in front of the r_k value

  • "stop_after_k_minima" [int]: Tell APOSMM to stop after this many local minima have been identified by a local optimization run.

  • "stop_after_k_runs" [int]: Tell APOSMM to stop after this many runs have ended. (The number of identified minima may be less than the number of ended runs if, for example, a local optimization run ends due to an evaluation constraint rather than a convergence criterion.)

If the rules in decide_where_to_start_localopt produce more than "max_active_runs" runs in some iteration, then existing runs are prioritized.

gen_specs["user"] must also contain fields for the given localopt_method's convergence tolerances (e.g., gatol/grtol for PETSc/TAO or ftol_rel for NLopt).
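Putting the pieces together, a sketch combining required entries, a few of the optional entries above, and a solver tolerance (all values are illustrative; ftol_rel is NLopt's relative tolerance on the objective):

```python
import numpy as np

user = {
    # required
    "lb": np.zeros(2),
    "ub": np.ones(2),
    "localopt_method": "LN_BOBYQA",
    "initial_sample_size": 50,
    # optional (illustrative values)
    "max_active_runs": 4,           # advance at most 4 local runs at once
    "dist_to_bound_multiple": 0.5,  # initial step = half the distance to the nearest bound
    "stop_after_k_minima": 10,      # stop once 10 local minima are identified
    # convergence tolerance for the chosen NLopt method
    "ftol_rel": 1e-6,
}
```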

See also

test_persistent_aposmm_scipy for basic APOSMM usage.

See also

test_persistent_aposmm_with_grad for an example where past function values are given to libEnsemble/APOSMM.

LocalOptInterfacer

This module contains methods for APOSMM to interface with various local optimization routines.

class aposmm_localopt_support.LocalOptInterfacer(user_specs, x0, f0, grad0=None)

This class defines the APOSMM interface to various local optimization routines.

Currently supported routines are

  • NLopt ['LN_SBPLX', 'LN_BOBYQA', 'LN_COBYLA', 'LN_NEWUOA', 'LN_NELDERMEAD', 'LD_MMA']

  • PETSc/TAO ['pounders', 'blmvm', 'nm']

  • SciPy ['scipy_Nelder-Mead', 'scipy_COBYLA', 'scipy_BFGS']

  • DFOLS ['dfols']

  • External local optimizer ['external_localopt'] (which uses files to pass/receive x/f values)

iterate(data)

Returns either a numpy.ndarray corresponding to the next iterative guess or a ConvergedMsg when the solver has completed its run.

Parameters:
  • x_on_cube – A numpy array of the point being evaluated (for a handshake)

  • f – A numpy array of the function evaluation.

  • grad – A numpy array of the function’s gradient.

  • fvec – A numpy array of the function’s component values.
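The caller-side handshake can be illustrated with a toy stand-in. This is not the real LocalOptInterfacer: the hypothetical ToyOptimizer below merely mimics the same pattern, where each iterate call consumes the latest evaluation and returns either the next point to evaluate or a converged marker:

```python
import numpy as np

class ConvergedMsg:
    """Marker returned when the run has finished (mirrors the pattern above)."""
    def __init__(self, x):
        self.x = x

class ToyOptimizer:
    """Hypothetical stand-in: shrinking descent steps on the unit cube."""
    def __init__(self, x0, f0, tol=1e-3):
        self.x = np.asarray(x0, dtype=float)
        self.f, self.step, self.tol = f0, 0.25, tol
        self._pending = None  # last point suggested, awaiting its evaluation

    def iterate(self, x_eval, f_eval):
        # Accept the evaluation of the previously suggested point if it improved.
        if self._pending is not None and f_eval < self.f:
            self.x, self.f = self._pending, f_eval
        else:
            self.step *= 0.5  # no improvement: shrink the step
        if self.step < self.tol:
            return ConvergedMsg(self.x)
        self._pending = np.clip(self.x - self.step, 0.0, 1.0)
        return self._pending

def run(opt, f):
    """Drive the optimizer: evaluate each suggested point until convergence."""
    x, fx = opt.x.copy(), opt.f
    while True:
        out = opt.iterate(x, fx)
        if isinstance(out, ConvergedMsg):
            return out.x
        x, fx = out, f(out)

# Minimize sum(x**2) on [0, 1]^2 starting from (0.9, 0.9).
xmin = run(ToyOptimizer([0.9, 0.9], f0=1.62), lambda x: float(np.sum(x**2)))
```

In the real class, the evaluation data flows back through iterate(data) and the solver runs in a separate process; the toy keeps only the return-value contract (next point or ConvergedMsg).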

destroy()

Recursively kill any optimizer processes that are still running.

close()

Join the process and close the queue.