libEnsemble: A complete toolkit for dynamic ensembles of calculations
Adaptive, portable, and scalable software for connecting “deciders” to experiments or simulations.
Dynamic ensembles: Generate parallel tasks on-the-fly based on previous computations.
Extreme portability and scaling: Run on or across laptops, clusters, and leadership-class machines.
Heterogeneous computing: Dynamically and portably assign CPUs, GPUs, or multiple nodes.
Application monitoring: Ensemble members can run, monitor, and cancel apps.
Data-flow between tasks: Running ensemble members can send and receive data.
Low start-up cost: No additional background services or processes required.
libEnsemble is effective at solving design, decision, and inference problems on parallel resources.
Installation
Install libEnsemble and its dependencies from PyPI using pip:
pip install libensemble
Other install methods are described in the docs.
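For example, if you use conda, a build is also available from the conda-forge channel (a minimal sketch, assuming an active conda environment):

conda install -c conda-forge libensemble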
Resources
Support:
Ask questions or report issues on GitHub.
Email libEnsemble@lists.mcs.anl.gov to request an invitation to the libEnsemble Slack.
Join the libEnsemble mailing list for updates about new releases.
Further Information:
Documentation is provided by ReadtheDocs.
Contributions to libEnsemble are welcome.
Browse production functions and workflows in the Community Examples repository.
Cite libEnsemble:
@article{Hudson2022,
title = {{libEnsemble}: A Library to Coordinate the Concurrent
Evaluation of Dynamic Ensembles of Calculations},
author = {Stephen Hudson and Jeffrey Larson and John-Luke Navarro and Stefan Wild},
journal = {{IEEE} Transactions on Parallel and Distributed Systems},
volume = {33},
number = {4},
pages = {977--988},
year = {2022},
doi = {10.1109/tpds.2021.3082815}
}
Basic Usage
Select or supply Simulator and Generator functions
Generator and Simulator Python functions respectively produce candidate parameters and perform/monitor computations that use those parameters. Coupling them together with libEnsemble is easy:
from my_simulators import beamline_simulation_function
from someones_calibrator import adaptive_calibrator_function
from libensemble import Ensemble, SimSpecs, GenSpecs, LibeSpecs, ExitCriteria
if __name__ == "__main__":
    basic_settings = LibeSpecs(comms="local", nworkers=16, save_every_k_gens=100, kill_cancelled_sims=True)
    simulation = SimSpecs(sim_f=beamline_simulation_function, inputs=["x"], out=[("f", float)])
    outer_loop = GenSpecs(gen_f=adaptive_calibrator_function, inputs=["f"], out=[("x", float)])
    when_to_stop = ExitCriteria(gen_max=500)

    my_experiment = Ensemble(basic_settings, simulation, outer_loop, when_to_stop)
    output = my_experiment.run()
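If you do not yet have a generator of your own, the sketch below shows the general shape of one. It is a minimal example, assuming the classic four-argument generator signature and hypothetical "lb", "ub", and "batch_size" entries in gen_specs["user"]; it simply samples candidate "x" values uniformly at random:

import numpy as np


def uniform_random_generator(H, persis_info, gen_specs, libE_info):
    # Minimal sketch: draw a batch of candidate parameters within user-supplied bounds
    user = gen_specs["user"]
    lb, ub, batch = user["lb"], user["ub"], user.get("batch_size", 16)

    H_out = np.zeros(batch, dtype=gen_specs["out"])  # e.g. [("x", float, len(lb))]
    H_out["x"] = np.random.default_rng().uniform(lb, ub, (batch, len(lb)))
    return H_out, persis_info

The dtype of the returned array must match the out specification given in GenSpecs, just as the Simulator's output must match SimSpecs.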
Launch and monitor apps on parallel resources
libEnsemble includes an Executor interface so application-launching functions are portable, resilient, and flexible. It automatically detects available resources and GPUs, and can dynamically assign them:
import numpy as np
from libensemble.executors import MPIExecutor
def beamline_simulation_function(Input):
    particles = str(Input["x"])
    args = "timesteps " + str(10) + " " + particles

    exctr = MPIExecutor()
    exctr.register_app("./path/to/particles.app", app_name="particles")

    # GPUs selected by Generator, can autotune or set explicitly
    task = exctr.submit(app_name="particles", app_args=args, num_procs=64, auto_assign_gpus=True)
    task.wait()

    try:
        data = np.loadtxt("particles.stat")
        final_energy = data[-1]
    except Exception:
        final_energy = np.nan

    output = np.zeros(1, dtype=[("f", float)])
    output["f"] = final_energy

    return output
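Beyond waiting for a task to finish, running applications can be polled and cancelled. The following is a rough sketch, not a prescribed pattern, assuming a hypothetical wall-time budget and the "particles" app registered as above; it uses the task's poll, finished, state, and kill members:

import time


def run_with_timeout(exctr, app_args, timeout=600):
    # Hypothetical helper: submit the registered "particles" app, poll it,
    # and cancel it if it exceeds a wall-time budget
    task = exctr.submit(app_name="particles", app_args=app_args, num_procs=64)
    start = time.time()
    while not task.finished:
        time.sleep(1)
        task.poll()  # refresh task.state / task.finished
        if time.time() - start > timeout:
            task.kill()  # cancel the running application
    return task.state  # e.g. "FINISHED", "FAILED", or "USER_KILLED"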
See the user guide for more information.
Example Compatible Packages
libEnsemble and the Community Examples repository include example generator functions for the following libraries:
APOSMM Asynchronously parallel optimization solver for finding multiple minima. Supported local optimization routines include:
DFO-LS Derivative-free solver for (bound constrained) nonlinear least-squares minimization
NLopt Library for nonlinear optimization, providing a common interface for various methods
scipy.optimize Open-source solvers for nonlinear problems, linear programming, constrained and nonlinear least-squares, root finding, and curve fitting.
PETSc/TAO Routines for the scalable (parallel) solution of scientific applications
DEAP Distributed evolutionary algorithms
Distributed optimization methods for minimizing sums of convex functions. Methods include:
Primal-dual sliding (https://arxiv.org/pdf/2101.00143).
Distributed gradient descent with gradient tracking (https://arxiv.org/abs/1908.11444).
Proximal sliding (https://arxiv.org/abs/1406.0919).
ECNoise Estimating Computational Noise in Numerical Simulations
Surmise Modular Bayesian calibration/inference framework
Tasmanian Toolkit for Adaptive Stochastic Modeling and Non-Intrusive ApproximatioN
VTMOP Fortran package for large-scale multiobjective multidisciplinary design optimization
libEnsemble has also been used to coordinate many computationally expensive simulations. Select examples include:
OPAL Object Oriented Parallel Accelerator Library. (See this IPAC manuscript.)
WarpX Advanced electromagnetic particle-in-cell code. (See example WarpX + libE scripts.)