Optimizer Return Type

class glompo.optimizers.baseoptimizer.MinimizeResult[source]

The return value of BaseOptimizer classes. The results of an optimization can be accessed through the following attributes:

success

Whether the optimization was successful or not.

Type: bool
x

The optimized parameters.

Type: Sequence[float]
fx

The corresponding function value of x.

Type: float
stats

Dictionary of various statistics related to the optimization.

Type: Dict[str, Any]
origin

Dictionary with configuration details of the optimizer which produced the result.

Type: Dict[str, Any]
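For illustration only, the attributes above can be mirrored with a small stand-in dataclass (the real class lives in glompo.optimizers.baseoptimizer; the default values shown here are assumptions, not documented behaviour):

```python
from dataclasses import dataclass, field
from typing import Any, Dict, Sequence

@dataclass
class MinimizeResultSketch:
    """Illustrative stand-in mirroring the documented MinimizeResult attributes."""
    success: bool = False                                  # whether the optimization succeeded
    x: Sequence[float] = ()                                # optimized parameters
    fx: float = float("inf")                               # function value at x
    stats: Dict[str, Any] = field(default_factory=dict)    # optimization statistics
    origin: Dict[str, Any] = field(default_factory=dict)   # optimizer configuration details

result = MinimizeResultSketch(success=True, x=[0.0, 1.0], fx=-3.2)
```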

Abstract Base Optimizer

class glompo.optimizers.baseoptimizer.BaseOptimizer(_opt_id: Optional[int] = None, _signal_pipe: Optional[multiprocessing.connection.Connection] = None, _results_queue: Optional[glompo.core._backends.ChunkingQueue] = None, _pause_flag: Optional[threading.Event] = None, workers: int = 1, backend: str = 'threads', is_log_detailed: bool = False, **kwargs)[source]

Abstract base class for optimizers used within the GloMPO framework. It cannot be used directly; it must be subclassed by child classes which implement a specific optimization algorithm.

Attention

To ensure GloMPO functionality:

  1. Messages to the GloMPO manager must be sent via message_manager().

  2. Messages from the manager must be read by check_messages() which executes BaseOptimizer methods corresponding to the signals. The defaults provided in the BaseOptimizer class are generally suitable and should not need to be overwritten! The only methods which must be implemented by the user are:

    1. minimize() which is the algorithm specific optimization loop;
    2. callstop() which interrupts the optimization loop.
  3. The statement self._pause_signal.wait() must appear somewhere in the body of the iterative loop to allow the optimizer to be paused by the manager as needed.

  4. Optional: the class should be able to handle resuming an optimization from any point using checkpoint_save() and checkpoint_load().

Tip

The TestSubclassGlompoCompatible test in test_optimizers.py can be used to test that an optimizer meets these criteria and is GloMPO compatible. Simply add your optimizer to AVAILABLE_CLASSES there.
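The requirements above can be sketched structurally without the real glompo package. The class below is a hypothetical random-search stand-in, not a true BaseOptimizer subclass: it shows an iterative minimize() loop that waits on the pause flag each iteration, and a callstop() that interrupts the loop between iterations.

```python
import random
import threading
from typing import Callable, Sequence, Tuple


class RandomSearchSketch:
    """Hypothetical stand-in showing the loop structure GloMPO requires;
    a real implementation would subclass BaseOptimizer instead."""

    def __init__(self) -> None:
        self._pause_signal = threading.Event()
        self._pause_signal.set()  # set = running; the manager clears it to pause
        self._stop = False

    def minimize(self, function: Callable[[Sequence[float]], float],
                 x0: Sequence[float],
                 bounds: Sequence[Tuple[float, float]]):
        best_x, best_fx = list(x0), function(x0)
        for _ in range(100):
            self._pause_signal.wait()  # requirement 3: block here while paused
            if self._stop:             # honour callstop() between iterations
                break
            x = [random.uniform(lo, hi) for lo, hi in bounds]
            fx = function(x)
            if fx < best_fx:
                best_x, best_fx = x, fx
        return best_x, best_fx

    def callstop(self, reason: str) -> None:
        # Interrupt the minimize() loop at the next iteration check.
        self._stop = True


opt = RandomSearchSketch()
x, fx = opt.minimize(lambda v: sum(t * t for t in v), [0.5, 0.5], [(-1, 1), (-1, 1)])
```

In a real subclass, messages would additionally be exchanged via message_manager() and check_messages() inside the loop, as described above.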

Parameters:
  • _opt_id – Unique optimizer identifier.
  • _signal_pipe – Bidirectional pipe used to exchange management messages between the manager and the optimizer.
  • _results_queue – Queue into which iteration results from all optimizers are centralised and sent to the manager.
  • _pause_flag – Event flag which can be used to pause the optimizer between iterations.
  • workers – The number of concurrent calculations used by the optimizer. Defaults to one. The manager will only start the optimizer if there are sufficient slots available for it.
  • backend – The type of concurrency used by the optimizers (processes or threads). This is not necessarily applicable to all optimizers. This will default to 'threads' unless forced to use 'processes' (see GloMPOManager.setup() and Parallelism).
  • is_log_detailed – See is_log_detailed.
  • **kwargs – Optimizer specific initialization arguments.

Notes

The user need not concern themselves with the particulars of the _opt_id, _signal_pipe, _results_queue and _pause_flag parameters. These are automatically generated by the manager.

Important

Make sure to call the superclass initialisation method when creating your own optimizers:

super().__init__(_opt_id,
                 _signal_pipe,
                 _results_queue,
                 _pause_flag,
                 workers,
                 backend,
                 is_log_detailed)
incumbent

Dictionary with keys 'x' and 'fx' which contain the lowest function value and associated parameter vector seen thus far by the optimizer.

Type: Dict[str, Any]
is_log_detailed

If True:

  1. When the task’s __call__() method is called, its detailed_call() method will actually be evaluated.
  2. All the return values from detailed_call() will be added to the log history of the optimizer.
  3. The function itself will only return the function value (as if the __call__() method had been used).

Note

This will not result in a doubling of the computational time as the original call will be intercepted. This setting is useful for cases where optimizers do not need/cannot handle the extra information generated by a detailed call but one would still like the iteration details logged for analysis.

Type: bool
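The interception described above can be sketched as follows. The TaskSketch class, the shape of its detailed_call() return value, and the gradient_norm extra are all hypothetical stand-ins for illustration:

```python
log_history = []  # stands in for the optimizer's iteration log

class TaskSketch:
    """Hypothetical task whose detailed_call() returns extras beyond fx."""
    def detailed_call(self, x):
        fx = sum(t * t for t in x)
        extras = {"gradient_norm": 2 * max(abs(t) for t in x)}  # made-up extra datum
        return fx, extras

def intercepted_call(task, x):
    # With is_log_detailed=True, __call__ is rerouted roughly like this:
    # evaluate detailed_call() once, log everything, return only fx.
    fx, extras = task.detailed_call(x)
    log_history.append({"fx": fx, **extras})
    return fx  # the optimizer still sees a plain function value

value = intercepted_call(TaskSketch(), [1.0, 2.0])
```

Because the single detailed_call() evaluation serves both the log and the returned value, no duplicate computation occurs.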
logger

logging.Logger instance into which status messages may be added.

Type: logging.Logger
workers

Maximum number of threads/processes the optimizer may use for evaluating the objective function.

Type: int
is_restart

True if the optimizer is loaded from a checkpoint.

opt_id

The unique GloMPO generated identification number of the optimizer.

classmethod checkpoint_load(path: Union[pathlib.Path, str], **kwargs) → BaseOptimizer[source]

Recreates an optimizer from a saved snapshot.

Parameters:
  • path – Path to the checkpoint file from which to rebuild the optimizer. It must be a file produced by the corresponding checkpoint_save() method.
  • **kwargs – See __init__.

Notes

This is a basic implementation which should suit most optimizers, but it may need to be overwritten.

minimize(function: Callable[[Sequence[float]], float], x0: Sequence[float], bounds: Sequence[Tuple[float, float]], callbacks: Callable = None, **kwargs) → glompo.optimizers.baseoptimizer.MinimizeResult[source]

Run the optimization algorithm to minimize a function.

Parameters:
  • function – Function to be minimised. See BaseFunction for an API guide.
  • x0 – The initial optimizer starting point.
  • bounds – Min/max boundary limit pairs for each element of the input vector to the minimisation function.
  • callbacks – Code snippets, usually called once per iteration, that are able to signal early termination. Callbacks are leveraged differently by different optimizer implementations; the user is encouraged to consult the child classes for more details. Use of callbacks, however, is strongly discouraged.
check_messages() → List[int][source]

Processes and executes signals from the manager.

Danger

This implementation has been very carefully structured to operate as expected by the manager. It should be suitable for all optimizers and should not be overwritten.

Returns: Signal keys received from the manager during the call.
Return type: List[int]
message_manager(key: int, message: Optional[str] = None)[source]

Sends arguments to the manager.

Caution

Should not be overwritten.

Parameters:
  • key

    Indicates the type of signal sent. The manager recognises the following keys:

    0: The optimizer has terminated normally according to its own internal convergence conditions.

    1: Confirm that a pause signal has been received from the manager and the optimizer has complied with the request.

    9: General message to be appended to the optimizer’s log.

  • message – Message to be appended when sending signal 9.
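A rough sketch of this signalling pattern over a multiprocessing pipe; the (key, message) tuple format shown here is an assumption for illustration, not the documented wire format:

```python
from multiprocessing import Pipe

manager_end, optimizer_end = Pipe()  # stands in for _signal_pipe

def message_manager(pipe, key, message=None):
    # Sketch of the documented signal keys: 0 = normal termination,
    # 1 = pause confirmed, 9 = general log message (carries text).
    pipe.send((key, message))

message_manager(optimizer_end, 9, "Iteration 10 complete")  # signal 9 with text
message_manager(optimizer_end, 0)                           # normal termination

first = manager_end.recv()
second = manager_end.recv()
```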
callstop(reason: str)[source]

Interrupts the minimization loop in minimize().

checkpoint_save(path: Union[pathlib.Path, str], force: Optional[Set[str]] = None)[source]

Save current state, suitable for restarting.

Parameters:
  • path – Path to file into which the object will be dumped. Typically supplied by the manager.
  • force – Set of variable names which will be forced into the dumped file. This is a convenient shortcut, rather than overwriting the method, if the default save fails for a particular optimizer because a certain variable is filtered out of the data dump.

Notes

  1. Only the absolutely critical aspects of the state of the optimizer need to be saved. The manager will resupply multiprocessing parameters when the optimizer is reconstructed.
  2. This method will almost never be called directly by the user. Rather, it will be called (via signals) by the manager.
  3. This is a basic implementation which should suit most optimizers, but it may need to be overwritten.
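The save/load round trip might be sketched as below, assuming a simple pickle-based dump. The contents of the state dictionary are hypothetical, and the real methods additionally filter out multiprocessing primitives, which the manager resupplies on restart:

```python
import os
import pickle
import tempfile

# Hypothetical optimizer state; a real optimizer saves whatever internal
# state its algorithm needs in order to resume.
state = {"incumbent": {"x": [0.3], "fx": 2.0}, "iteration": 42}

path = os.path.join(tempfile.mkdtemp(), "checkpoint.pkl")
with open(path, "wb") as f:   # checkpoint_save(): dump the critical state
    pickle.dump(state, f)

with open(path, "rb") as f:   # checkpoint_load(): rebuild from the file
    restored = pickle.load(f)
```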
inject(x: Sequence[float], fx: float)[source]

Updates the incumbent with a better solution from the manager.
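A sketch of the incumbent bookkeeping this implies; treating inject() as accepting only improving points is an assumption made for illustration:

```python
# Mirrors the documented incumbent dictionary with keys 'x' and 'fx'.
incumbent = {"x": [0.2, 0.8], "fx": 1.5}

def inject(x, fx):
    # Assumed behaviour: adopt the manager-supplied point only when it
    # improves on the stored best function value.
    if fx < incumbent["fx"]:
        incumbent["x"] = list(x)
        incumbent["fx"] = fx

inject([0.1, 0.4], 0.9)   # better point: adopted
inject([9.0, 9.0], 7.0)   # worse point: ignored under this assumption
```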