APOSMM
Asynchronously Parallel Optimization Solver for finding Multiple Minima (APOSMM) coordinates concurrent local optimization runs in order to identify many local minima.
Optional dependencies (see below): petsc4py, nlopt, DFO-LS
Configuring APOSMM
APOSMM works with a choice of optimizers, some requiring external packages. To import the optimization packages (and their dependencies) at a global level (recommended), add the following lines in the calling script before importing APOSMM:
import libensemble.gen_funcs
libensemble.gen_funcs.rc.aposmm_optimizers = <optimizers>
where optimizers is a string (or list of strings) from the available options: "petsc", "nlopt", "dfols", "scipy", "external".
Issues with ensemble hanging or failed simulations?
Note that if using mpi4py comms, PETSc must be imported at the global level or the ensemble may hang.
Exception: if you are using the MPIExecutor or other MPI inside a user function and you are using Open MPI, then you must:

- use local comms for libEnsemble (not mpi4py), and
- NOT include the rc line above.
This is because PETSc imports MPI, and a global import of PETSc would result in nested MPI (which Open MPI does not support). When the rc line is omitted, the import happens locally inside the optimization function.
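A hedged sketch of that exception (the worker count is a placeholder): the calling script requests local comms and omits the rc line, so PETSc is imported only inside the optimization function:

    from libensemble.libE import libE
    # Note: no rc.aposmm_optimizers line, so PETSc is imported locally
    # inside the optimization function rather than at the global level.
    from libensemble.gen_funcs.persistent_aposmm import aposmm

    libE_specs = {"comms": "local", "nworkers": 4}  # local comms, not mpi4py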
To see the optimization algorithms supported, see LocalOptInterfacer.
Persistent APOSMM
This module contains methods used in our implementation of the Asynchronously Parallel Optimization Solver for finding Multiple Minima (APOSMM) method. https://doi.org/10.1007/s12532-017-0131-4
This implementation of APOSMM was developed by Kaushik Kulkarni and Jeffrey Larson in the summer of 2019.
- persistent_aposmm.aposmm(H, persis_info, gen_specs, libE_info)
APOSMM coordinates multiple local optimization runs, starting from points that do not have a better point nearby (within a distance r_k). This generation function uses a local_H (serving a similar purpose as H in libEnsemble) containing the fields:

- 'x' [n floats]: Parameters being optimized over
- 'x_on_cube' [n floats]: Parameters scaled to the unit cube
- 'f' [float]: Objective function being minimized
- 'local_pt' [bool]: True if point is from a local optimization run
- 'dist_to_unit_bounds' [float]: Distance to domain boundary
- 'dist_to_better_l' [float]: Distance to closest better local opt point
- 'dist_to_better_s' [float]: Distance to closest better sample point
- 'ind_of_better_l' [int]: Index of the point 'dist_to_better_l' away
- 'ind_of_better_s' [int]: Index of the point 'dist_to_better_s' away
- 'started_run' [bool]: True if point has started a local opt run
- 'num_active_runs' [int]: Number of active local runs the point is in
- 'local_min' [bool]: True if point has been ruled a local minimum
- 'sim_id' [int]: Row number of entry in history
and optionally:

- 'fvec' [m floats]: All objective components (if performing a least-squares calculation)
- 'grad' [n floats]: The gradient (if available) of the objective with respect to x
Note:

- If any of the above fields are desired after a libEnsemble run, name them in gen_specs['out'].
- If initializing APOSMM with past function values, make sure to include 'x', 'x_on_cube', 'f', 'local_pt', etc. in gen_specs['in'] (and, of course, include them in the H0 array given to libEnsemble), as sketched below.
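For instance, a warm start from m past evaluations might build H0 as in this hedged sketch (past_points and past_values are hypothetical arrays, and the 'sim_ended' field is named 'returned' in older libEnsemble versions):

    import numpy as np

    n, m = 2, 50  # problem dimension and number of past points (placeholders)
    lb, ub = np.zeros(n), np.ones(n)

    H0 = np.zeros(m, dtype=[("x", float, n), ("x_on_cube", float, n),
                            ("f", float), ("local_pt", bool),
                            ("sim_id", int), ("sim_ended", bool)])
    H0["x"] = past_points                         # hypothetical (m, n) array
    H0["x_on_cube"] = (H0["x"] - lb) / (ub - lb)  # scale to the unit cube
    H0["f"] = past_values                         # hypothetical m objective values
    H0["sim_id"] = np.arange(m)
    H0["sim_ended"] = True                        # all past points were evaluated

These field names must then also appear in gen_specs['in'].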
Necessary quantities in gen_specs['user'] are:

- 'lb' [n floats]: Lower bound on search domain
- 'ub' [n floats]: Upper bound on search domain
- 'localopt_method' [str]: Name of an NLopt, PETSc/TAO, or SciPy method (see 'advance_local_run' below for supported methods). When using a SciPy method, you must also supply 'opt_return_codes', a list of integers used to determine whether the x produced by the localopt method should be ruled a local minimum. (For example, SciPy's COBYLA has a 'status' of 1 if at an optimum, but SciPy's Nelder-Mead and BFGS have a 'status' of 0 if at an optimum.)
- 'initial_sample_size' [int]: Number of uniformly sampled points that must be returned (with non-nan value) before a local opt run is started. Can be zero if no additional sampling is desired, but if zero there must be past sim_f values given to libEnsemble in H0. A combined example follows this list.
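Putting the required entries together, a gen_specs might look like the following sketch (the LN_BOBYQA method, sample size, and tolerance are illustrative choices; in recent libEnsemble versions the 'in' key for a persistent generator is 'persis_in'):

    import numpy as np
    from libensemble.gen_funcs.persistent_aposmm import aposmm

    n = 2  # problem dimension (placeholder)
    gen_specs = {
        "gen_f": aposmm,
        "in": ["f", "x", "x_on_cube", "sim_id", "local_pt"],
        "out": [("x", float, n), ("x_on_cube", float, n), ("sim_id", int),
                ("local_pt", bool), ("local_min", bool)],
        "user": {
            "lb": np.zeros(n),
            "ub": np.ones(n),
            "localopt_method": "LN_BOBYQA",
            "initial_sample_size": 100,
            "ftol_rel": 1e-6,  # NLopt convergence tolerance (see below)
        },
    }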
Optional gen_specs['user'] entries are:

- 'sample_points' [numpy array]: Points to be sampled (in the original domain). If more sample points are needed by APOSMM during the course of the optimization, points will be drawn uniformly over the domain
- 'components' [int]: Number of objective components
- 'dist_to_bound_multiple' [float in (0, 1]]: Fraction of the distance to the nearest boundary to use as the initial step size in localopt runs
- 'lhs_divisions' [int]: Number of Latin hypercube sampling partitions (0 or 1 results in uniform sampling)
- 'mu' [float]: Distance from the boundary that all localopt starting points must satisfy
- 'nu' [float]: Distance from identified minima that all starting points must satisfy
- 'rk_const' [float]: Multiplier in front of the r_k value
- 'max_active_runs' [int]: Bound on the number of runs APOSMM is advancing
- 'stop_after_k_minima' [int]: Tell APOSMM to stop after this many local minima have been identified by a local optimization run
- 'stop_after_k_runs' [int]: Tell APOSMM to stop after this many runs have ended. (The number of minima identified may be less than the number of ended runs if, for example, a local optimization run ends due to an evaluation constraint rather than a convergence criterion.)
If the rules in decide_where_to_start_localopt produce more than 'max_active_runs' starting points in some iteration, then existing runs are prioritized.

gen_specs['user'] must also contain fields for the given localopt_method's convergence tolerances (e.g., gatol/grtol for PETSc/TAO or ftol_rel for NLopt), as in the sketch below.
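For example, hedged sketches of method-specific entries (tolerance values are placeholders; the 'opt_return_codes' list follows the SciPy note above):

    # NLopt: relative function tolerance
    user_nlopt = {"localopt_method": "LN_BOBYQA", "ftol_rel": 1e-6}

    # PETSc/TAO: absolute/relative gradient tolerances
    user_tao = {"localopt_method": "blmvm", "gatol": 1e-7, "grtol": 1e-7}

    # SciPy: statuses that count as convergence (0 for Nelder-Mead)
    user_scipy = {"localopt_method": "scipy_Nelder-Mead",
                  "opt_return_codes": [0]}

(These dictionaries would be merged into gen_specs['user'] along with the required entries above.)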
See also

test_persistent_aposmm_scipy for basic APOSMM usage.
See also
test_persistent_aposmm_with_grad for an example where past function values are given to libEnsemble/APOSMM.
- persistent_aposmm.initialize_APOSMM(H, user_specs, libE_info)
Computes common values every time that APOSMM is reinvoked.
- persistent_aposmm.decide_where_to_start_localopt(H, n, n_s, rk_const, ld=0, mu=0, nu=0)
APOSMM starts a local optimization run from a point that:

- is not in an active local optimization run,
- is more than mu from the boundary (in the unit-cube domain),
- is more than nu from identified minima (in the unit-cube domain), and
- does not have a better point within a distance r_k of it.
For further details, see the conditions (S1-S5 and L1-L8) in Table 1 of the APOSMM paper. This method first identifies sample points satisfying S2-S5, then identifies all localopt points that satisfy L1-L7. We then start from any sample point that also satisfies S1. We do not currently check condition L8.
We don't consider points in the history that have not returned from computation, or that have a nan value. As APOSMM works on the unit cube, note that mu and nu implicitly depend on the scaling of the original domain: adjusting the initial domain can make a run start (or not start) at a point that didn't (or did) previously. For example, a point at x = 0.5 is 0.5 from the boundary under bounds [0, 1] but only 0.05 on the unit cube under bounds [0, 10], so mu = 0.1 would exclude it only in the latter case.

- Parameters:
H (numpy.ndarray) – History array storing rows for each point. Numpy structured array.
n (int) – Problem dimension
n_s (int) – Number of sample points in H
rk_const (float) – Radius for deciding when to start runs
ld (int) – Number of Latin hypercube sampling divisions (0 or 1 means uniform random sampling over the domain)
mu (float) – Nonnegative distance from the boundary that all starting points must satisfy
nu (float) – Nonnegative distance from identified minima that all starting points must satisfy
- Returns:
start_inds – Indices where a local opt run should be started, sorted by increasing function value.
- Return type:
list
- persistent_aposmm.update_history_dist(H, n)
Updates distances and indices after new points have been evaluated.
LocalOptInterfacer
This module contains methods for APOSMM to interface with various local optimization routines.
- class aposmm_localopt_support.LocalOptInterfacer(user_specs, x0, f0, grad0=None)
This class defines the APOSMM interface to various local optimization routines.
Currently supported routines are:

- NLopt routines ['LN_SBPLX', 'LN_BOBYQA', 'LN_COBYLA', 'LN_NEWUOA', 'LN_NELDERMEAD', 'LD_MMA']
- PETSc/TAO routines ['pounders', 'blmvm', 'nm']
- SciPy routines ['scipy_Nelder-Mead', 'scipy_COBYLA', 'scipy_BFGS']
- DFO-LS ['dfols']
- An external local optimizer ['external_localopt'] (which uses files to pass/receive x/f values)
- iterate(data)

Returns an instance of either numpy.ndarray, corresponding to the next iterative guess, or ConvergedMsg, when the solver has completed its run.

- Parameters:
x_on_cube – A numpy array of the point being evaluated (for a handshake)
f – A numpy array of the function evaluation.
grad – A numpy array of the function’s gradient.
fvec – A numpy array of the function’s component values.
- destroy()
Recursively kill any optimizer processes still running
- close()
Join process and close queue
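A hedged sketch of how these pieces fit together in a driver loop (user_specs, x0, f0, and the evaluate step are placeholders; the keys passed to iterate follow the parameter list above):

    from libensemble.gen_funcs.aposmm_localopt_support import (
        ConvergedMsg, LocalOptInterfacer)

    opt = LocalOptInterfacer(user_specs, x0, f0)  # placeholders, as above
    point = x0
    while True:
        f = evaluate(point)  # hypothetical objective evaluation at the point
        out = opt.iterate({"x_on_cube": point, "f": f})
        if isinstance(out, ConvergedMsg):
            break                # solver finished its run
        point = out              # numpy.ndarray: next iterative guess
    opt.close()                  # join the optimizer process and close the queue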
- aposmm_localopt_support.run_local_nlopt(user_specs, comm_queue, x0, f0, child_can_read, parent_can_read)
Runs an NLopt local optimization run starting at x0, governed by the parameters in user_specs.
- aposmm_localopt_support.run_local_tao(user_specs, comm_queue, x0, f0, child_can_read, parent_can_read)
Runs a PETSc/TAO local optimization run starting at x0, governed by the parameters in user_specs.
- aposmm_localopt_support.run_local_dfols(user_specs, comm_queue, x0, f0, child_can_read, parent_can_read)
Runs a DFO-LS local optimization run starting at x0, governed by the parameters in user_specs.
- aposmm_localopt_support.run_local_ibcdfo_pounders(user_specs, comm_queue, x0, f0, child_can_read, parent_can_read)
Runs an IBCDFO local optimization run starting at x0, governed by the parameters in user_specs. Although IBCDFO methods can receive previous evaluations, few other methods support that, so APOSMM assumes the first point will be re-evaluated (but not sent back to the manager).
- aposmm_localopt_support.run_local_scipy_opt(user_specs, comm_queue, x0, f0, child_can_read, parent_can_read)
Runs a SciPy local optimization run starting at x0, governed by the parameters in user_specs.
- aposmm_localopt_support.run_external_localopt(user_specs, comm_queue, x0, f0, child_can_read, parent_can_read)
Runs an external local optimization run starting at x0, governed by the parameters in user_specs.