This type of behavior is usually caused by one of the MPI processes deadlocking: a field routine <https://meep.readthedocs.io/en/latest/Python_User_Interface/#field-computations> (e.g., get_field_point) is called at the end of time stepping from inside "if meep.am_master():". These field functions are collective operations that every MPI process must execute, so calling one from the master process alone leaves the other processes blocked forever. This issue is described in Parallel Meep/Technical Details <https://meep.readthedocs.io/en/latest/Parallel_Meep/#technical-details>, in the paragraph beginning "Warning: Most Meep functions operating on the simulation...".
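To illustrate the mechanism (not using Meep itself, since this is just a toy model): a collective operation behaves like a barrier that every rank must reach. The sketch below simulates two "ranks" with Python threads and a threading.Barrier standing in for the collective call; the barrier's timeout stands in for the infinite hang so the script terminates. All names here are illustrative, not Meep API.

```python
import threading

# Toy model of the MPI deadlock described above: a "collective" operation
# (like Meep's get_field_point) requires every rank to participate. If it
# is guarded by an am_master()-style check, the non-master ranks never
# reach the barrier and the master blocks forever.

NUM_RANKS = 2
barrier = threading.Barrier(NUM_RANKS, timeout=0.5)  # timeout models the hang
results = {}

def worker(rank, call_on_all_ranks):
    try:
        if call_on_all_ranks or rank == 0:
            # "Collective" call: blocks until all ranks arrive.
            barrier.wait()
        results[rank] = "ok"
    except threading.BrokenBarrierError:
        # In real MPI there is no timeout -- this rank would hang forever.
        results[rank] = "deadlock"

def run(call_on_all_ranks):
    results.clear()
    barrier.reset()
    threads = [threading.Thread(target=worker, args=(r, call_on_all_ranks))
               for r in range(NUM_RANKS)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return dict(results)

print(run(call_on_all_ranks=True))   # every rank participates: completes
print(run(call_on_all_ranks=False))  # "master-only" call: rank 0 deadlocks
```

The fix in Meep is the same shape: perform the field computation on all processes, and restrict only the I/O (printing, file writing) to the master process.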

However, this doesn't seem to apply in your case.

On 12/2/20 19:13, Luke Durant wrote:
I have a very simple simulation (almost identical to the GaussianBeamSource example), but when I run with MPI (local node only, multi-core) it hangs before returning from 'sim.run(until=60)'.

The simulation appears to proceed correctly until the sim should be complete:
'
...
Meep progress: 45.56/60.0 = 75.9% done in 16.0s, 5.1s to go
on time step 4556 (time=45.56), 0.0034461 s/step
Meep progress: 57.18/60.0 = 95.3% done in 20.0s, 1.0s to go
on time step 5718 (time=57.18), 0.00344415 s/step
' [Hang forever].

It hangs with 2 processes or 32 processes (Ryzen 3990X CPU). Performance otherwise appears to scale correctly through the simulation (2 processes is ~2x serial; 32 is much faster again).


_______________________________________________
meep-discuss mailing list
meep-discuss@ab-initio.mit.edu
http://ab-initio.mit.edu/cgi-bin/mailman/listinfo/meep-discuss
