Bebop
Bebop is a Cray CS400 cluster with Intel Broadwell compute nodes available in the Laboratory Computing Resources Center (LCRC) at Argonne National Laboratory.
Configuring Python
Begin by loading the Python 3 Anaconda module:
module load anaconda3
Create a conda virtual environment in which to install libEnsemble and all dependencies:
conda config --add channels intel
conda create --name my_env intelpython3_core python=3
source activate my_env
Installing libEnsemble and Dependencies
Your prompt should indicate that the virtual environment is activated. Start by installing mpi4py in this environment, making sure to reference the preinstalled Intel MPI compiler wrapper. The install command should be similar to the following:
CC=mpiicc MPICC=mpiicc pip install mpi4py --no-binary mpi4py
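As an optional sanity check (not part of the LCRC instructions), you can confirm that mpi4py was built against the Intel MPI library; the reported library version should reference Intel MPI:
# Optional check: print the MPI library that mpi4py was linked against
python -c "from mpi4py import MPI; print(MPI.Get_library_version())"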
libEnsemble can then be installed via pip or conda.
To install via pip:
pip install libensemble
To install via conda:
conda config --add channels conda-forge
conda install -c conda-forge libensemble
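Either way, an optional check confirms that libEnsemble imports from the active environment:
# Optional check: libEnsemble should import and report its version
python -c "import libensemble; print(libensemble.__version__)"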
See the libEnsemble documentation for more information on advanced installation options.
Job Submission
Bebop uses PBS for job submission and management.
Interactive Runs
You can allocate four Broadwell nodes for thirty minutes with the following command:
qsub -I -A <project_id> -l select=4:mpiprocs=4 -l walltime=30:00
Once in the interactive session, you may need to return to your working directory, reload your modules, and reactivate your environment:
cd $PBS_O_WORKDIR
module load anaconda3 gcc openmpi aocl
conda activate my_env
Now run your script with four workers (one for the generator and three for simulations):
python my_libe_script.py --nworkers 4
mpirun should also work. This line launches libEnsemble with a manager and four workers on one allocated compute node, leaving three nodes available for the workers to launch calculations with the Executor or a launch command. This is an example of running in centralized mode; if using the Executor, libEnsemble should be initiated with libE_specs["dedicated_mode"] = True.
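For illustration only, a minimal calling script along the lines of the my_libe_script.py referenced above might resemble the following sketch. It uses the example generator and simulator functions shipped with libEnsemble; parse_args consumes the --nworkers option, and dedicated_mode is set as described:
import numpy as np
from libensemble.libE import libE
from libensemble.tools import parse_args, add_unique_random_streams
from libensemble.gen_funcs.sampling import uniform_random_sample
from libensemble.sim_funcs.six_hump_camel import six_hump_camel

# parse_args reads --nworkers (and related options) from the command line
nworkers, is_manager, libE_specs, _ = parse_args()

# Keep compute nodes free of libEnsemble processes when using the Executor
libE_specs["dedicated_mode"] = True

# Simulator: evaluate f(x); generator: uniform random sampling of a 2D domain
sim_specs = {"sim_f": six_hump_camel, "in": ["x"], "out": [("f", float)]}
gen_specs = {
    "gen_f": uniform_random_sample,
    "out": [("x", float, (2,))],
    "user": {"gen_batch_size": 50, "lb": np.array([-3.0, -2.0]), "ub": np.array([3.0, 2.0])},
}

persis_info = add_unique_random_streams({}, nworkers + 1)
exit_criteria = {"sim_max": 100}

H, persis_info, flag = libE(sim_specs, gen_specs, exit_criteria, persis_info, libE_specs=libE_specs)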
Note
When performing a distributed MPI libEnsemble run and not oversubscribing, specify one more MPI process than the number of allocated nodes. The manager and first worker run together on a node.
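For illustration only, assuming Intel MPI's mpirun and the four-node allocation above, such a distributed run would use five MPI processes (a manager plus four workers); placement flags such as -ppn may be needed so the processes are spread across the nodes with two of them sharing one node:
# Illustrative only: five processes on four nodes (Intel MPI mpirun assumed)
mpirun -np 5 -ppn 1 python my_libe_script.py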
Note
In each new session you will need to reload your modules and reactivate your conda virtual environment. Configuring this routine to occur automatically is recommended.
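For example, lines along the following could be appended to a startup file such as ~/.bashrc (the module list and environment name are placeholders to adjust for your setup):
# Load Python and activate the libEnsemble environment at login
module load anaconda3
conda activate my_env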
Additional Information
See the LCRC Bebop documentation for more information about Bebop.