Description
The `openmc.deplete` module relies on calls to the OpenMC shared library via `openmc.lib` in order to perform neutron transport, update material compositions, etc., which means parallel execution has to be coordinated using `mpi4py` and the Python script itself should be launched with `mpiexec`. The actual solve of the Bateman equations is done using the CRAM method, which is coded entirely in Python (using `scipy.sparse`); the parallelization strategy there is to break up the full list of materials being depleted over MPI processes (via `mpi4py`) and then over threads via `multiprocessing`. The default configurations of `mpi4py` and `multiprocessing` do not play well together due to the following sequence of events (a minimal reproduction is sketched after this list):
- Importing `mpi4py` will result in `MPI_Init` getting called
- For most Python versions, using a multiprocessing pool will create multiple processes via an OS fork
- MPI processes that are already initialized generally should not be forked, as this results in implementation-dependent behavior and can lead to deadlocks
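A minimal sketch of the problematic pattern, assuming a hypothetical `deplete_material` helper standing in for the per-material CRAM solve:

```python
from mpi4py import MPI       # MPI_Init is called as a side effect of this import
import multiprocessing

def deplete_material(mat):
    # hypothetical stand-in for the per-material CRAM solve
    return mat

if __name__ == "__main__":
    comm = MPI.COMM_WORLD
    # each MPI rank takes its slice of the depletable materials
    materials = list(range(100))
    local = materials[comm.rank::comm.size]

    # on Linux the default start method is "fork", so this pool forks
    # processes that have already initialized MPI -- the hazard described above
    with multiprocessing.Pool() as pool:
        results = pool.map(deplete_material, local)
```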
A few workarounds currently are to (see the sketch after this list):
- Disable use of multiprocessing in depletion
- Use a start method other than "fork" in `multiprocessing`
- Disable automatic initialization/finalization from `mpi4py`
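A sketch of the second and third workarounds, using the documented `multiprocessing` "spawn" start method and the `mpi4py.rc.initialize`/`mpi4py.rc.finalize` knobs; the `work` function is just a placeholder:

```python
import multiprocessing

import mpi4py
# workaround: keep mpi4py from calling MPI_Init/MPI_Finalize automatically
mpi4py.rc.initialize = False
mpi4py.rc.finalize = False
from mpi4py import MPI

def work(x):
    # placeholder for the per-material depletion solve
    return x * x

if __name__ == "__main__":
    # workaround: "spawn" launches fresh interpreters instead of forking,
    # so pool workers never inherit an initialized MPI state
    ctx = multiprocessing.get_context("spawn")
    with ctx.Pool() as pool:
        results = pool.map(work, range(8))

    # with automatic initialization disabled, MPI is started explicitly
    MPI.Init()
    print(MPI.COMM_WORLD.rank, results)
    MPI.Finalize()
```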
Further down the line, we may want to consider using free-threading in Python 3.14+, as this should allow us to get thread-based parallelism without creating new processes that cause problems with MPI.
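For context, a sketch of what that could look like on a free-threaded build (PEP 703, available experimentally since Python 3.13), again with a hypothetical `deplete_material` helper:

```python
from concurrent.futures import ThreadPoolExecutor

from mpi4py import MPI

def deplete_material(mat):
    # hypothetical stand-in for the per-material CRAM solve
    return mat

comm = MPI.COMM_WORLD
materials = list(range(100))
local = materials[comm.rank::comm.size]

# threads share the parent's MPI state rather than forking it, so the
# MPI_Init-then-fork hazard never arises; on a free-threaded (no-GIL)
# interpreter the per-material solves can actually run in parallel
with ThreadPoolExecutor() as pool:
    results = list(pool.map(deplete_material, local))
```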