Hi. I think you are right about the synchronisation. I tried outputting the 
field after the harminv call, and that does make it better; I get the problem 
less often. Perhaps I should do something computationally expensive at the end 
to keep the processes in step. Or does anyone know what the equivalent of the 
all_wait call is in the scheme interface?

Best,
Dries

-------- Original message --------
From: Filip Dominec <filip.domi...@gmail.com> 
Date: 13/03/2014  09:01  (GMT+01:00) 
To: "Oosten, D. van (Dries)" <d.vanoos...@uu.nl>,meep-discuss 
<meep-discuss@ab-initio.mit.edu> 
Subject: Re: [Meep-discuss] meep mpi harminv problem 
 
Hi, it appears to me that harminv may not use the MPI synchronisation
at its end. If this is the case, it could result in a race condition,
where one instance of harminv ends the MPI program while another
instance is still running.

I suggest adding some field-manipulating function at the very end of
your program, so that all processes have to wait for each other and
finish simultaneously. My python-meep scripts have meep.all_wait() as
their last line, and a similar function probably exists in the scheme
interface.
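
For illustration, the tail of my scripts looks roughly like this (the
simulation setup is omitted and the exact import name may differ with
your python-meep build; only the final meep.all_wait() line is the point):

    import meep_mpi as meep   # or "import meep" for the serial build

    # ... build the structure, create the fields, run the time stepping
    # and the harminv analysis here as usual ...

    # Barrier at the very end: every MPI rank waits here, so no rank can
    # exit (and make mpirun kill the job) while another is still busy.
    meep.all_wait()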

Does it help?

Regards,
Filip

2014-03-12 20:32 GMT+01:00, Oosten, D. van (Dries) <d.vanoos...@uu.nl>:
> Sorry guys, but just to clarify, mpirun often says things like
>
> --------------------------------------------------------------------------
> mpirun has exited due to process rank 5 with PID 8389 on
> node workstation exiting without calling "finalize". This may
> have caused other processes in the application to be
> terminated by signals sent by mpirun (as reported here).
> --------------------------------------------------------------------------
>
> does that clarify things for anyone?
>
>
> ________________________________________
> From: Oosten, D. van (Dries)
> Sent: Wednesday, March 12, 2014 8:20 PM
> To: meep-discuss@ab-initio.mit.edu
> Subject: meep mpi harminv problem
>
> Hi guys,
>
> I have been struggling with the following issue. When I use harminv to find
> eigenmodes in meep, it is often unreliable when I run it through mpirun.
> This is especially the case when the workstation we run meep on is under
> heavy load. It seems to me that the processes get killed by mpirun before
> they can report their results. Could this be the case, and if so, what tests
> can I run to track this problem down?
>
> Thanks in advance!
>
> best,
> Dries
>
> _______________________________________________
> meep-discuss mailing list
> meep-discuss@ab-initio.mit.edu
> http://ab-initio.mit.edu/cgi-bin/mailman/listinfo/meep-discuss
>
