Multiprocessing with a Simple Sine

This introductory tutorial demonstrates the capability to perform ensembles of calculations in parallel using libEnsemble with Python’s multiprocessing.

The foundation of writing libEnsemble routines is accounting for four components:

  1. The generator function gen_f, which produces values for simulations

  2. The simulator function sim_f, which performs simulations based on values from gen_f

  3. The allocation function alloc_f, which decides which of the previous two functions should be called

  4. The calling script, which defines parameters and information about these functions and the libEnsemble task, then begins execution

libEnsemble initializes a manager process and as many worker processes as the user requests. The manager coordinates data transfer between workers and assigns each unit of work, consisting of a gen_f or sim_f function to run and accompanying data. These functions can perform their work in-line with Python or by launching and controlling user applications with libEnsemble’s executor. Workers then pass results back to the manager.

For this tutorial, we’ll write our gen_f and sim_f entirely in Python without other applications. Our gen_f will produce uniform randomly sampled values, and our sim_f will calculate the sine of each. By default we don’t need to write a new allocation function. All generated and simulated values alongside other parameters are stored in H, the history array.

Getting started

libEnsemble and its functions are written entirely in Python. Let’s make sure Python 3 is installed.

Note: If you have a Python version-specific virtual environment set up (e.g., conda), then python and pip will work in place of python3 and pip3.

$ python3 --version
Python 3.6.0            # This should be >= 3.6

For this tutorial, you need NumPy to perform calculations and (optionally) Matplotlib to visualize your results. Install libEnsemble and these other libraries with

$ pip3 install numpy
$ pip3 install libensemble
$ pip3 install matplotlib # Optional

If your system doesn’t allow you to perform these installations, try adding --user to the end of each command.

Generator function

Let’s begin the coding portion of this tutorial by writing our gen_f, or generator function.

An available libEnsemble worker will call this generator function with the following parameters:

  • H: The history array. Updated by the workers with gen_f and sim_f inputs and outputs, then returned to the user. libEnsemble passes H to the generator function in case the user wants to generate new values based on previous data.

  • persis_info: Dictionary with worker-specific information. In our case this dictionary contains mechanisms called random streams for generating random numbers.

  • gen_specs: Dictionary with user-defined and operational parameters for the gen_f. The user places function-specific parameters such as boundaries and batch sizes within the nested user dictionary, while parameters that libEnsemble depends on to operate the gen_f are placed outside user.

Later on, we’ll populate gen_specs and persis_info in our calling script.
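To preview how the out entries in gen_specs become storage, note that libEnsemble uses them as a NumPy structured dtype. A minimal sketch (the field name 'x', the shape, and the sample values here are purely illustrative):

```python
import numpy as np

# Each gen_specs['out'] entry is a (name, type, shape) tuple that
# together form a NumPy structured dtype for generated values
out_spec = [('x', float, (1,))]

# An array of three rows, each holding one 'x' value
arr = np.zeros(3, dtype=out_spec)
arr['x'] = np.array([[0.5], [1.5], [2.5]])

print(arr['x'].shape)  # (3, 1)
```

Accessing a field by name (arr['x']) returns a view over just that column, which is how the generator and simulator below read and write their values.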

For now, create a new Python file named generator.py (this name matches the import in our calling script later). Write the following:

import numpy as np


def gen_random_sample(H, persis_info, gen_specs, _):
    # Underscore parameter for internal/testing arguments

    # Pull out user parameters to perform calculations
    user_specs = gen_specs['user']

    # Get lower and upper bounds from gen_specs
    lower = user_specs['lower']
    upper = user_specs['upper']

    # Determine how many values to generate
    num = len(lower)
    batch_size = user_specs['gen_batch_size']

    # Create array of 'batch_size' zeros
    out = np.zeros(batch_size, dtype=gen_specs['out'])

    # Replace those zeros with the random numbers
    out['x'] = persis_info['rand_stream'].uniform(lower, upper, (batch_size, num))

    # Send back our output and persis_info
    return out, persis_info

Our function creates batch_size random numbers uniformly distributed between the lower and upper bounds. A random stream from persis_info generates these values, which are then placed into a NumPy array that matches the specification in gen_specs['out'].
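You can exercise the generator outside libEnsemble by passing it hand-built gen_specs and persis_info dictionaries. A quick sketch (the function is repeated here so the snippet runs standalone; the RandomState seed is arbitrary):

```python
import numpy as np

# The generator from above, repeated so this snippet is self-contained
def gen_random_sample(H, persis_info, gen_specs, _):
    user_specs = gen_specs['user']
    lower = user_specs['lower']
    upper = user_specs['upper']
    num = len(lower)
    batch_size = user_specs['gen_batch_size']
    out = np.zeros(batch_size, dtype=gen_specs['out'])
    out['x'] = persis_info['rand_stream'].uniform(lower, upper, (batch_size, num))
    return out, persis_info

# Hand-built inputs mirroring what the calling script will provide
gen_specs = {'out': [('x', float, (1,))],
             'user': {'lower': np.array([-3]),
                      'upper': np.array([3]),
                      'gen_batch_size': 5}}
persis_info = {'rand_stream': np.random.RandomState(0)}  # seed is arbitrary

out, _ = gen_random_sample(None, persis_info, gen_specs, None)
print(out['x'].shape)  # (5, 1) - five sampled values, each of size 1
```

Every value in out['x'] should fall between the lower and upper bounds.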

Simulator function

Next, we’ll write our sim_f or simulator function. Simulator functions perform calculations based on values from the generator function. The only new parameter here is sim_specs, which serves a purpose similar to the gen_specs dictionary.

Create a new Python file named simulator.py (again matching the import in our calling script later). Write the following:

import numpy as np


def sim_find_sine(H, persis_info, sim_specs, _):
    # Underscore parameter for internal/testing arguments

    # Create an output array of a single zero
    out = np.zeros(1, dtype=sim_specs['out'])

    # Set the zero to the sine of the input value stored in H
    out['y'] = np.sin(H['x'])

    # Send back our output and persis_info
    return out, persis_info

Our simulator function is called by a worker for every value in its batch from the generator function. This function calculates the sine of the passed value, then returns it so a worker can log it into H.
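As with the generator, you can test the simulator in isolation by building the one-row history slice a worker would receive. A sketch (the function is repeated so the snippet runs standalone):

```python
import numpy as np

# The simulator from above, repeated so this snippet is self-contained
def sim_find_sine(H, persis_info, sim_specs, _):
    out = np.zeros(1, dtype=sim_specs['out'])
    out['y'] = np.sin(H['x'])
    return out, persis_info

sim_specs = {'out': [('y', float)]}

# Build a one-row history slice like the one a worker passes in
H = np.zeros(1, dtype=[('x', float, (1,))])
H['x'] = np.pi / 2

out, _ = sim_find_sine(H, {}, sim_specs, None)
print(out['y'][0])  # 1.0, the sine of pi/2
```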

Calling script

Now we can write the calling script that configures our generator and simulator functions and calls libEnsemble.

Create an empty Python file for the calling script; we'll refer to it as calling_script.py in the commands below. In this file, we'll start by importing NumPy, libEnsemble, and the generator and simulator functions we just created.

Next, in a dictionary called libE_specs we’ll specify the number of workers and the type of manager/worker communication libEnsemble will use. Our communication method, 'local', refers to Python’s multiprocessing.

import numpy as np
from libensemble.libE import libE
from generator import gen_random_sample
from simulator import sim_find_sine
from libensemble.tools import add_unique_random_streams

nworkers = 4
libE_specs = {'nworkers': nworkers, 'comms': 'local'}

We configure the settings and specifications for our sim_f and gen_f functions in the gen_specs and sim_specs dictionaries, which we saw previously being passed to our functions. These dictionaries also describe to libEnsemble what inputs and outputs from those functions to expect.

gen_specs = {'gen_f': gen_random_sample,   # Our generator function
             'out': [('x', float, (1,))],  # gen_f output (name, type, size)
             'user': {
                 'lower': np.array([-3]),  # Lower boundary for random sampling
                 'upper': np.array([3]),   # Upper boundary for random sampling
                 'gen_batch_size': 5       # Number of x's gen_f generates per call
                 }
             }

sim_specs = {'sim_f': sim_find_sine,       # Our simulator function
             'in': ['x'],                  # Input field names. 'x' from gen_f output
             'out': [('y', float)]}        # sim_f output. 'y' = sine('x')

Recall that each worker is assigned an entry in the persis_info dictionary that, in this tutorial, contains a RandomState() random stream for uniform random sampling. We populate that dictionary here using a utility from the tools module. We then specify the circumstances where libEnsemble should stop execution in exit_criteria.

persis_info = add_unique_random_streams({}, nworkers + 1)  # Worker numbers start at 1

exit_criteria = {'sim_max': 80}  # Stop libEnsemble after 80 simulations
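The result of add_unique_random_streams is a dictionary keyed by worker number, with each entry holding that worker's own random stream. A simplified sketch of the equivalent behavior (the exact seeding libEnsemble uses internally may differ; make_random_streams is just an illustrative stand-in):

```python
import numpy as np

def make_random_streams(persis_info, num_streams):
    # Roughly what add_unique_random_streams does: give each worker
    # number its own independently seeded random stream
    for i in range(num_streams):
        persis_info[i] = {'rand_stream': np.random.RandomState(i)}
    return persis_info

persis_info = make_random_streams({}, 5)  # One manager plus four workers
print(sorted(persis_info))  # [0, 1, 2, 3, 4]

# Each worker's stream can draw samples independently
print(persis_info[1]['rand_stream'].uniform(-3, 3, (2, 1)).shape)  # (2, 1)
```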

Now we're ready to write our libE function call. The returned H is the final version of the history array, and flag should be zero if no errors occur.

H, persis_info, flag = libE(sim_specs, gen_specs, exit_criteria, persis_info,
                            libE_specs=libE_specs)

print([i for i in H.dtype.fields])  # (optional) to visualize our history array

That’s it! Now that these files are complete, we can run our simulation.

$ python3 calling_script.py

If everything ran perfectly and you included the print statement above, you should see something similar to the following output for H (although the columns might be arranged differently).

['y', 'given_time', 'gen_worker', 'sim_worker', 'given', 'returned', 'x', 'allocated', 'sim_id', 'gen_time']
[(-0.37466051, 1.559e+09, 2, 2,  True,  True, [-0.38403059],  True,  0, 1.559e+09)
 (-0.29279634, 1.559e+09, 2, 3,  True,  True, [-2.84444261],  True,  1, 1.559e+09)
 ( 0.29358492, 1.559e+09, 2, 4,  True,  True, [ 0.29797487],  True,  2, 1.559e+09)
 (-0.3783986 , 1.559e+09, 2, 1,  True,  True, [-0.38806564],  True,  3, 1.559e+09)
 (-0.45982062, 1.559e+09, 2, 2,  True,  True, [-0.47779319],  True,  4, 1.559e+09)

In this arrangement, our output values ('y') are listed on the far left, with the generated values ('x') in the fourth column from the right.

Two additional log files should also have been created. ensemble.log contains debugging or informational logging output from libEnsemble, while libE_stats.txt contains a quick summary of all calculations performed.

Here is graphed output using Matplotlib, with entries colored by which worker performed the simulation:

[Figure: scatter plot of sine(x) for the sampled values, colored by worker]

If you want to verify your results through plotting and you installed Matplotlib earlier, copy the following code into the bottom of your calling script and run it again.

import matplotlib.pyplot as plt

colors = ['b', 'g', 'r', 'y', 'm', 'c', 'k', 'w']

for i in range(1, nworkers + 1):
    worker_xy = np.extract(H['sim_worker'] == i, H)
    x = [entry.tolist()[0] for entry in worker_xy['x']]
    y = [entry for entry in worker_xy['y']]
    plt.scatter(x, y, label='Worker {}'.format(i), c=colors[i-1])

plt.title('Sine calculations for a uniformly sampled random distribution')
plt.xlabel('x')
plt.ylabel('sine(x)')
plt.legend(loc='lower right')
plt.savefig('tutorial_sines.png')

Next steps

The following is another learning exercise based on the above code.

libEnsemble with MPI

MPI is a standard interface for parallel computing, implemented in libraries such as MPICH and used at extreme scales. MPI potentially allows libEnsemble’s manager and workers to be distributed over multiple nodes and works in some circumstances where Python’s multiprocessing does not. In this section, we’ll explore modifying the above code to use MPI instead of multiprocessing.

We recommend the MPI distribution MPICH for this tutorial, which is available for a variety of systems. You also need mpi4py, which can be installed with pip3 install mpi4py. If you'd like to use a specific version or distribution of MPI instead of MPICH, configure mpi4py with that MPI at installation with MPICC=<path/to/MPI_C_compiler> pip3 install mpi4py. If this doesn't work, try appending --user to the end of the command. See the mpi4py docs for more information.

Verify that MPI has installed correctly with mpirun --version.

Modifying the calling script

Only a few changes are necessary to make our code MPI-compatible. Modify the top of the calling script as follows:

import numpy as np
from libensemble.libE import libE
from generator import gen_random_sample
from simulator import sim_find_sine
from libensemble.tools import add_unique_random_streams
from mpi4py import MPI

# nworkers = 4                                 # nworkers will come from MPI
libE_specs = {'comms': 'mpi'}                  # 'nworkers' removed, 'comms' now 'mpi'

nworkers = MPI.COMM_WORLD.Get_size() - 1
is_manager = (MPI.COMM_WORLD.Get_rank() == 0)  # Manager process has MPI rank 0

So that only one process executes the graphing and printing portion of our code, modify the bottom of the calling script like this:

H, persis_info, flag = libE(sim_specs, gen_specs, exit_criteria, persis_info,
                            libE_specs=libE_specs)

if is_manager:
    # Some (optional) statements to visualize our history array
    print([i for i in H.dtype.fields])
    print(H)

    import matplotlib.pyplot as plt

    colors = ['b', 'g', 'r', 'y', 'm', 'c', 'k', 'w']

    for i in range(1, nworkers + 1):
        worker_xy = np.extract(H['sim_worker'] == i, H)
        x = [entry.tolist()[0] for entry in worker_xy['x']]
        y = [entry for entry in worker_xy['y']]
        plt.scatter(x, y, label='Worker {}'.format(i), c=colors[i-1])

    plt.title('Sine calculations for a uniformly sampled random distribution')
    plt.xlabel('x')
    plt.ylabel('sine(x)')
    plt.legend(loc='lower right')
    plt.savefig('tutorial_sines.png')

With these changes in place, our libEnsemble code can be run with MPI by

$ mpirun -n 5 python3 calling_script.py

where -n 5 tells mpirun to launch five processes: one runs the libEnsemble manager, and the other four run libEnsemble workers.

This tutorial is only a tiny demonstration of the parallelism capabilities of libEnsemble. libEnsemble has been developed primarily to support research on high-performance computers, with potentially hundreds of workers performing calculations simultaneously. Please read our platform guides for introductions to using libEnsemble on many such machines.

libEnsemble's executor can launch non-Python user applications and simulations across allocated compute resources. Try out this feature with a more complicated libEnsemble use case in our Electrostatic Forces tutorial.