Allocation Functions
Below are example allocation functions available in libEnsemble.
Important
See the API for allocation functions here.
Note
The default allocation function is give_sim_work_first.
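If no allocation function is specified, give_sim_work_first is used. To use a different allocation function, set it in alloc_specs in the calling script. A minimal sketch, assuming the rest of the libE setup (sim_specs, gen_specs, exit_criteria, and so on) is defined elsewhere:

from libensemble.alloc_funcs.start_only_persistent import only_persistent_gens

alloc_specs = {
    "alloc_f": only_persistent_gens,  # replaces the default give_sim_work_first
    "user": {"async_return": True},   # options read by the chosen allocation function
}
# alloc_specs is then passed to libE() along with the other specs.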
give_sim_work_first
- give_sim_work_first.give_sim_work_first(W, H, sim_specs, gen_specs, alloc_specs, persis_info, libE_info)
Decide what should be given to workers. This allocation function gives any available simulation work first, and only when all simulations are completed or running does it start (at most
alloc_specs["user"]["num_active_gens"]
) generator instances.
Allows for an alloc_specs["user"]["batch_mode"] where no generation work is given out unless all entries in H are returned.
Can give points in highest-priority order, if "priority" is a field in H. If alloc_specs["user"]["give_all_with_same_priority"] is set to True, then all points with the same priority value are given as a batch to the sim (a configuration sketch is given after the parameter list below).
Workers performing sims will be assigned the resources given in H["resource_sets"] if this field exists, else defaulting to one. Workers performing gens are assigned the resource_sets given by persis_info["gen_resources"] or zero.
This is the default allocation function if one is not defined.
tags: alloc, default, batch, priority
See also
test_uniform_sampling.py
- Parameters:
W (ndarray[Any, dtype[ScalarType]]) –
H (ndarray[Any, dtype[ScalarType]]) –
sim_specs (dict) –
gen_specs (dict) –
alloc_specs (dict) –
persis_info (dict) –
libE_info (dict) –
- Return type:
Tuple[dict]
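A hypothetical calling-script configuration for the user options described above (values are illustrative only; when alloc_specs["alloc_f"] is not set, this function is used by default):

alloc_specs = {
    "user": {
        "num_active_gens": 2,                 # allow at most two generator instances
        "batch_mode": True,                   # no new gen work until all of H is returned
        "give_all_with_same_priority": True,  # send equal-priority points to the sim as one batch
    },
}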
give_sim_work_first.py
import time
from typing import Tuple

import numpy as np
import numpy.typing as npt

from libensemble.tools.alloc_support import AllocSupport, InsufficientFreeResources


def give_sim_work_first(
    W: npt.NDArray,
    H: npt.NDArray,
    sim_specs: dict,
    gen_specs: dict,
    alloc_specs: dict,
    persis_info: dict,
    libE_info: dict,
) -> Tuple[dict]:
    """
    Decide what should be given to workers. This allocation function gives any
    available simulation work first, and only when all simulations are
    completed or running does it start (at most ``alloc_specs["user"]["num_active_gens"]``)
    generator instances.

    Allows for an ``alloc_specs["user"]["batch_mode"]`` where no generation
    work is given out unless all entries in ``H`` are returned.

    Can give points in highest-priority order, if ``"priority"`` is a field in ``H``.
    If alloc_specs["user"]["give_all_with_same_priority"] is set to True, then
    all points with the same priority value are given as a batch to the sim.

    Workers performing sims will be assigned resources given in H["resource_sets"]
    if this field exists, else defaulting to one. Workers performing gens are
    assigned resource_sets given by persis_info["gen_resources"] or zero.

    This is the default allocation function if one is not defined.

    tags: alloc, default, batch, priority

    .. seealso::
        `test_uniform_sampling.py <https://github.com/Libensemble/libensemble/blob/develop/libensemble/tests/functionality_tests/test_uniform_sampling.py>`_ # noqa
    """

    user = alloc_specs.get("user", {})

    if "cancel_sims_time" in user:
        # Cancel simulations that are taking too long
        rows = np.where(np.logical_and.reduce((H["sim_started"], ~H["sim_ended"], ~H["cancel_requested"])))[0]
        inds = time.time() - H["sim_started_time"][rows] > user["cancel_sims_time"]
        to_request_cancel = rows[inds]
        for row in to_request_cancel:
            H[row]["cancel_requested"] = True

    if libE_info["sim_max_given"] or not libE_info["any_idle_workers"]:
        return {}, persis_info

    # Whether to give all points with the same priority to the sim as one batch
    batch_give = user.get("give_all_with_same_priority", False)
    gen_in = gen_specs.get("in", [])

    manage_resources = libE_info["use_resource_sets"]
    support = AllocSupport(W, manage_resources, persis_info, libE_info)
    gen_count = support.count_gens()
    Work = {}

    points_to_evaluate = ~H["sim_started"] & ~H["cancel_requested"]
    for wid in support.avail_worker_ids():
        if np.any(points_to_evaluate):
            sim_ids_to_send = support.points_by_priority(H, points_avail=points_to_evaluate, batch=batch_give)
            try:
                Work[wid] = support.sim_work(wid, H, sim_specs["in"], sim_ids_to_send, persis_info.get(wid))
            except InsufficientFreeResources:
                break
            points_to_evaluate[sim_ids_to_send] = False
        else:
            # Allow at most num_active_gens active generator instances
            if gen_count >= user.get("num_active_gens", gen_count + 1):
                break

            # Do not start gen instances in batch mode if workers still working
            if user.get("batch_mode") and not support.all_sim_ended(H):
                break

            # Give gen work
            return_rows = range(len(H)) if gen_in else []
            try:
                Work[wid] = support.gen_work(wid, gen_in, return_rows, persis_info.get(wid))
            except InsufficientFreeResources:
                break
            gen_count += 1

    return Work, persis_info
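The listing above also honors one user option not mentioned in the docstring: if alloc_specs["user"]["cancel_sims_time"] is set, any simulation that has been running longer than that many seconds is marked with cancel_requested. A hypothetical setting:

alloc_specs = {"user": {"cancel_sims_time": 300.0}}  # request cancellation of sims running over five minutes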
fast_alloc
- fast_alloc.give_sim_work_first(W, H, sim_specs, gen_specs, alloc_specs, persis_info, libE_info)
This allocation function gives (in order) entries in H to idle workers to evaluate in the simulation function. The fields in sim_specs["in"] are given. If all entries in H have been given out to be evaluated, a worker is told to call the generator function, provided this wouldn't result in more than alloc_specs["user"]["num_active_gens"] active generators.
This fast_alloc variation of give_sim_work_first is useful for cases that simply iterate through H, issuing evaluations in order. In particular, it is likely to be faster when there are many short simulation evaluations, since it performs fewer operations over full columns of H.
tags: alloc, simple, fast
See also
test_fast_alloc.py
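The listing below reads persis_info["next_to_give"] and increments persis_info["total_gen_calls"], so a calling script that selects this function needs those counters initialized alongside the usual per-worker entries. A hypothetical sketch:

from libensemble.alloc_funcs.fast_alloc import give_sim_work_first as fast_alloc_f

alloc_specs = {"alloc_f": fast_alloc_f, "user": {"num_active_gens": 1}}
persis_info["next_to_give"] = 0     # index of the next entry in H to send to a sim
persis_info["total_gen_calls"] = 0  # running count of generator calls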
fast_alloc.py
from libensemble.tools.alloc_support import AllocSupport, InsufficientFreeResources


def give_sim_work_first(W, H, sim_specs, gen_specs, alloc_specs, persis_info, libE_info):
    """
    This allocation function gives (in order) entries in ``H`` to idle workers
    to evaluate in the simulation function. The fields in ``sim_specs["in"]``
    are given. If all entries in `H` have been given out to be evaluated, a worker
    is told to call the generator function, provided this wouldn't result in
    more than ``alloc_specs["user"]["num_active_gens"]`` active generators.

    This fast_alloc variation of give_sim_work_first is useful for cases that
    simply iterate through H, issuing evaluations in order. In particular, it
    is likely to be faster if there will be many short simulation evaluations,
    given that this function performs fewer operations over full columns of H.

    tags: alloc, simple, fast

    .. seealso::
        `test_fast_alloc.py <https://github.com/Libensemble/libensemble/blob/develop/libensemble/tests/regression_tests/test_fast_alloc.py>`_ # noqa
    """

    if libE_info["sim_max_given"] or not libE_info["any_idle_workers"]:
        return {}, persis_info

    user = alloc_specs.get("user", {})
    manage_resources = libE_info["use_resource_sets"]

    support = AllocSupport(W, manage_resources, persis_info, libE_info)

    gen_count = support.count_gens()
    Work = {}
    gen_in = gen_specs.get("in", [])

    for wid in support.avail_worker_ids():
        # Skip any cancelled points
        while persis_info["next_to_give"] < len(H) and H[persis_info["next_to_give"]]["cancel_requested"]:
            persis_info["next_to_give"] += 1

        # Give sim work if possible
        if persis_info["next_to_give"] < len(H):
            try:
                Work[wid] = support.sim_work(wid, H, sim_specs["in"], [persis_info["next_to_give"]], [])
            except InsufficientFreeResources:
                break
            persis_info["next_to_give"] += 1

        elif gen_count < user.get("num_active_gens", gen_count + 1):
            # Give gen work
            return_rows = range(len(H)) if gen_in else []
            try:
                Work[wid] = support.gen_work(wid, gen_in, return_rows, persis_info.get(wid))
            except InsufficientFreeResources:
                break
            gen_count += 1
            persis_info["total_gen_calls"] += 1

    return Work, persis_info
start_only_persistent
- start_only_persistent.only_persistent_gens(W, H, sim_specs, gen_specs, alloc_specs, persis_info, libE_info)
This allocation function will give simulation work if possible, but otherwise start up to alloc_specs["user"]["num_active_gens"] persistent generators (defaulting to one).
By default, evaluation results are given back to the generator once all generated points have been returned from the simulation evaluation. If alloc_specs["user"]["async_return"] is set to True, then any returned points are given back to the generator.
If any workers are marked as zero_resource_workers, then these will only be used for generators.
If any of the persistent generators has exited, then ensemble shutdown is triggered.
User options:
To be provided in the calling script, e.g., alloc_specs["user"]["async_return"] = True (a fuller sketch follows this entry).
- init_sample_size: int, optional
Initial sample size - always return in batch. Default: 0
- num_active_gens: int, optional
Maximum number of persistent generators to start. Default: 1
- async_return: boolean, optional
Return results to gen as they come in (after sample). Default: False (batch return).
- active_recv_gen: boolean, optional
Create gen in active receive mode. If True, the manager does not need to wait for a return from the generator before sending further returned points. Default: False
tags: alloc, batch, async, persistent, priority
See also
test_persistent_sampling.py
test_persistent_sampling_async.py
test_persistent_surmise_calib.py
test_persistent_uniform_gen_decides_stop.py
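A hypothetical calling-script snippet using the options above (the generator's persistent input fields are declared in gen_specs["persis_in"], which the listing below uses when sending results back):

from libensemble.alloc_funcs.start_only_persistent import only_persistent_gens

alloc_specs = {
    "alloc_f": only_persistent_gens,
    "user": {
        "init_sample_size": 100,  # batch return until 100 evaluations have completed
        "num_active_gens": 1,     # start at most one persistent generator
        "async_return": True,     # afterwards, return sim results as they arrive
    },
}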
- start_only_persistent.only_persistent_workers(W, H, sim_specs, gen_specs, alloc_specs, persis_info, libE_info)
This allocation function will give simulation work if possible to any worker not listed as a zero_resource_worker. On the first call, the worker will be placed into a persistent state that will be maintained until libE is exited.
Otherwise, zero resource workers will be given up to a maximum of alloc_specs["user"]["num_active_gens"] persistent generators (defaulting to one).
By default, evaluation results are given back to the generator once all generated points have been returned from the simulation evaluation. If alloc_specs["user"]["async_return"] is set to True, then any returned points are given back to the generator.
If any of the persistent generators has exited, then ensemble shutdown is triggered.
Note that an alternative to using zero resource workers would be to set a fixed number of simulation workers in persistent state at the start, allowing at least one worker for the generator - a minor alteration.
User options:
To be provided in the calling script, e.g., alloc_specs["user"]["async_return"] = True (see the sketch after this entry).
- init_sample_size: int, optional
Initial sample size - always return in batch. Default: 0
- num_active_gens: int, optional
Maximum number of persistent generators to start. Default: 1
- async_return: boolean, optional
Return results to gen as they come in (after sample). Default: False (batch return).
- active_recv_gen: boolean, optional
Create gen in active receive mode. If True, the manager does not need to wait for a return from the generator before sending further returned points. Default: False
See also
test_persistent_gensim_uniform_sampling.py
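A hypothetical calling-script snippet for this variant; it assumes the calling script also marks which workers are zero-resource (those reserved for the generator) via libE_specs:

from libensemble.alloc_funcs.start_only_persistent import only_persistent_workers

alloc_specs = {"alloc_f": only_persistent_workers, "user": {"async_return": True}}
libE_specs["zero_resource_workers"] = [1]  # worker 1 runs the persistent generator; the rest become persistent sim workers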
start_only_persistent.py
import numpy as np

from libensemble.message_numbers import EVAL_GEN_TAG, EVAL_SIM_TAG
from libensemble.tools.alloc_support import AllocSupport, InsufficientFreeResources


def only_persistent_gens(W, H, sim_specs, gen_specs, alloc_specs, persis_info, libE_info):
    """
    This allocation function will give simulation work if possible, but
    otherwise start up to ``alloc_specs["user"]["num_active_gens"]``
    persistent generators (defaulting to one).

    By default, evaluation results are given back to the generator once
    all generated points have been returned from the simulation evaluation.
    If ``alloc_specs["user"]["async_return"]`` is set to True, then any
    returned points are given back to the generator.

    If any workers are marked as zero_resource_workers, then these will only
    be used for generators.

    If any of the persistent generators has exited, then ensemble shutdown
    is triggered.

    **User options**:

    To be provided in calling script: E.g., ``alloc_specs["user"]["async_return"] = True``

    init_sample_size: int, optional
        Initial sample size - always return in batch. Default: 0

    num_active_gens: int, optional
        Maximum number of persistent generators to start. Default: 1

    async_return: boolean, optional
        Return results to gen as they come in (after sample). Default: False (batch return).

    active_recv_gen: boolean, optional
        Create gen in active receive mode. If True, the manager does not need to wait
        for a return from the generator before sending further returned points.
        Default: False

    tags: alloc, batch, async, persistent, priority

    .. seealso::
        `test_persistent_sampling.py <https://github.com/Libensemble/libensemble/blob/develop/libensemble/tests/regression_tests/test_persistent_sampling.py>`_ # noqa
        `test_persistent_sampling_async.py <https://github.com/Libensemble/libensemble/blob/develop/libensemble/tests/regression_tests/test_persistent_sampling_async.py>`_ # noqa
        `test_persistent_surmise_calib.py <https://github.com/Libensemble/libensemble/blob/develop/libensemble/tests/regression_tests/test_persistent_surmise_calib.py>`_ # noqa
        `test_persistent_uniform_gen_decides_stop.py <https://github.com/Libensemble/libensemble/blob/develop/libensemble/tests/regression_tests/test_persistent_uniform_gen_decides_stop.py>`_ # noqa
    """

    if libE_info["sim_max_given"] or not libE_info["any_idle_workers"]:
        return {}, persis_info

    # Initialize alloc_specs["user"] as user.
    user = alloc_specs.get("user", {})
    manage_resources = libE_info["use_resource_sets"]

    active_recv_gen = user.get("active_recv_gen", False)  # Persistent gen can handle irregular communications
    init_sample_size = user.get("init_sample_size", 0)  # Always batch return until this many evals complete
    batch_give = user.get("give_all_with_same_priority", False)

    support = AllocSupport(W, manage_resources, persis_info, libE_info)
    gen_count = support.count_persis_gens()
    Work = {}

    # Asynchronous return to generator
    async_return = user.get("async_return", False) and sum(H["sim_ended"]) >= init_sample_size

    if gen_count < persis_info.get("num_gens_started", 0):
        # When a persistent worker is done, trigger a shutdown (returning exit condition of 1)
        return Work, persis_info, 1

    # Give evaluated results back to a running persistent gen
    for wid in support.avail_worker_ids(persistent=EVAL_GEN_TAG, active_recv=active_recv_gen):
        gen_inds = H["gen_worker"] == wid
        returned_but_not_given = np.logical_and.reduce((H["sim_ended"], ~H["gen_informed"], gen_inds))
        if np.any(returned_but_not_given):
            if async_return or support.all_sim_ended(H, gen_inds):
                point_ids = np.where(returned_but_not_given)[0]
                Work[wid] = support.gen_work(
                    wid,
                    gen_specs["persis_in"],
                    point_ids,
                    persis_info.get(wid),
                    persistent=True,
                    active_recv=active_recv_gen,
                )
                returned_but_not_given[point_ids] = False

    # Now the give_sim_work_first part
    points_to_evaluate = ~H["sim_started"] & ~H["cancel_requested"]
    avail_workers = support.avail_worker_ids(persistent=False, zero_resource_workers=False)
    for wid in avail_workers:
        if not np.any(points_to_evaluate):
            break

        sim_ids_to_send = support.points_by_priority(H, points_avail=points_to_evaluate, batch=batch_give)

        try:
            Work[wid] = support.sim_work(wid, H, sim_specs["in"], sim_ids_to_send, persis_info.get(wid))
        except InsufficientFreeResources:
            break

        points_to_evaluate[sim_ids_to_send] = False

    # Start persistent gens if no worker to give out. Uses zero_resource_workers if defined.
    if not np.any(points_to_evaluate):
        avail_workers = support.avail_worker_ids(persistent=False, zero_resource_workers=True)

        for wid in avail_workers:
            if gen_count < user.get("num_active_gens", 1):
                # Finally, start a persistent generator as there is nothing else to do.
                try:
                    Work[wid] = support.gen_work(
                        wid,
                        gen_specs.get("in", []),
                        range(len(H)),
                        persis_info.get(wid),
                        persistent=True,
                        active_recv=active_recv_gen,
                    )
                except InsufficientFreeResources:
                    break

                persis_info["num_gens_started"] = persis_info.get("num_gens_started", 0) + 1
                gen_count += 1

    return Work, persis_info, 0


def only_persistent_workers(W, H, sim_specs, gen_specs, alloc_specs, persis_info, libE_info):
    """
    This allocation function will give simulation work if possible to any worker
    not listed as a zero_resource_worker. On the first call, the worker will be
    placed into a persistent state that will be maintained until libE is exited.

    Otherwise, zero resource workers will be given up to a maximum of
    ``alloc_specs["user"]["num_active_gens"]`` persistent generators (defaulting to one).

    By default, evaluation results are given back to the generator once
    all generated points have been returned from the simulation evaluation.
    If ``alloc_specs["user"]["async_return"]`` is set to True, then any
    returned points are given back to the generator.

    If any of the persistent generators has exited, then ensemble shutdown
    is triggered.

    Note that an alternative to using zero resource workers would be to set
    a fixed number of simulation workers in persistent state at the start, allowing
    at least one worker for the generator - a minor alteration.

    **User options**:

    To be provided in calling script: E.g., ``alloc_specs["user"]["async_return"] = True``

    init_sample_size: int, optional
        Initial sample size - always return in batch. Default: 0

    num_active_gens: int, optional
        Maximum number of persistent generators to start. Default: 1

    async_return: boolean, optional
        Return results to gen as they come in (after sample). Default: False (batch return).

    active_recv_gen: boolean, optional
        Create gen in active receive mode. If True, the manager does not need to wait
        for a return from the generator before sending further returned points.
        Default: False


    .. seealso::
        `test_persistent_gensim_uniform_sampling.py <https://github.com/Libensemble/libensemble/blob/develop/libensemble/tests/regression_tests/test_persistent_gensim_uniform_sampling.py>`_ # noqa
    """

    if libE_info["sim_max_given"] or not libE_info["any_idle_workers"]:
        return {}, persis_info

    # Initialize alloc_specs["user"] as user.
    user = alloc_specs.get("user", {})
    manage_resources = libE_info["use_resource_sets"]
    active_recv_gen = user.get("active_recv_gen", False)  # Persistent gen can handle irregular communications
    init_sample_size = user.get("init_sample_size", 0)  # Always batch return until this many evals complete
    batch_give = user.get("give_all_with_same_priority", False)

    support = AllocSupport(W, manage_resources, persis_info, libE_info)
    gen_count = support.count_persis_gens()
    Work = {}

    # Asynchronous return to generator
    async_return = user.get("async_return", False) and sum(H["sim_ended"]) >= init_sample_size

    if gen_count < persis_info.get("num_gens_started", 0):
        # When a persistent gen worker is done, trigger a shutdown (returning exit condition of 1)
        return Work, persis_info, 1

    # Give evaluated results back to a running persistent gen
    for wid in support.avail_worker_ids(persistent=EVAL_GEN_TAG, active_recv=active_recv_gen):
        gen_inds = H["gen_worker"] == wid
        returned_but_not_given = np.logical_and.reduce((H["sim_ended"], ~H["gen_informed"], gen_inds))
        if np.any(returned_but_not_given):
            if async_return or support.all_sim_ended(H, gen_inds):
                point_ids = np.where(returned_but_not_given)[0]
                Work[wid] = support.gen_work(
                    wid,
                    gen_specs["persis_in"],
                    point_ids,
                    persis_info.get(wid),
                    persistent=True,
                    active_recv=active_recv_gen,
                )
                returned_but_not_given[point_ids] = False

    # Now the give_sim_work_first part
    points_to_evaluate = ~H["sim_started"] & ~H["cancel_requested"]
    avail_workers = list(
        set(support.avail_worker_ids(persistent=False, zero_resource_workers=False))
        | set(support.avail_worker_ids(persistent=EVAL_SIM_TAG, zero_resource_workers=False))
    )
    for wid in avail_workers:
        if not np.any(points_to_evaluate):
            break

        sim_ids_to_send = support.points_by_priority(H, points_avail=points_to_evaluate, batch=batch_give)
        try:
            # Note that resources will not change if worker is already persistent.
            Work[wid] = support.sim_work(
                wid, H, sim_specs["in"], sim_ids_to_send, persis_info.get(wid), persistent=True
            )
        except InsufficientFreeResources:
            break

        points_to_evaluate[sim_ids_to_send] = False

    # Start persistent gens if no sim work to give out. Uses zero_resource_workers if defined.
    if not np.any(points_to_evaluate):
        avail_workers = support.avail_worker_ids(persistent=False, zero_resource_workers=True)

        for wid in avail_workers:
            if gen_count < user.get("num_active_gens", 1):
                # Finally, start a persistent generator as there is nothing else to do.
                try:
                    Work[wid] = support.gen_work(
                        wid,
                        gen_specs.get("in", []),
                        range(len(H)),
                        persis_info.get(wid),
                        persistent=True,
                        active_recv=active_recv_gen,
                    )
                except InsufficientFreeResources:
                    break
                persis_info["num_gens_started"] = persis_info.get("num_gens_started", 0) + 1
                gen_count += 1
    del support
    return Work, persis_info, 0
start_persistent_local_opt_gens
- libensemble.alloc_funcs.start_persistent_local_opt_gens.start_persistent_local_opt_gens(W, H, sim_specs, gen_specs, alloc_specs, persis_info, libE_info)
This allocation function will do the following:
- Start up a persistent generator that is a local opt run at the first point identified by APOSMM's decide_where_to_start_localopt. Note that it will do this only if at least one worker will be left to perform simulation evaluations.
- If multiple starting points are available, the one with the smallest function value is chosen.
- If no candidate starting points exist, points from existing runs will be evaluated (oldest first).
- If no points are left, call the generation function.
tags: alloc, persistent, aposmm
See also
test_uniform_sampling_then_persistent_localopt_runs.py
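As with the other allocation functions, this one is selected through alloc_specs in the calling script. A hypothetical sketch (the APOSMM-style generator and its gen_specs are assumed to be configured elsewhere):

from libensemble.alloc_funcs.start_persistent_local_opt_gens import start_persistent_local_opt_gens

alloc_specs = {"alloc_f": start_persistent_local_opt_gens, "user": {}}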