> On Wed, 16 Aug 2017, Renato Poli wrote:

>
>> I added this call on the master process (id=0):
>>       vector<double> _sys_solution;  /// in .hxx
>>       sys.solution->localize_to_one( _sys_solution );
>>
>> The system got stuck,
>>
>
> As I'd expect.  The solution only gets filled on proc_id=0, but it's
> still a collective operation: proc 0 is the only one receiving, but
> every other processor is sending.
>

Aha! It worked. Thanks.
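For the archives, here is the pattern that worked for me, as I understood Roy's explanation. Just a sketch -- the `gather_solution` wrapper is my own, and the exact signatures are my paraphrase of the libMesh API:

```cpp
#include "libmesh/system.h"
#include "libmesh/numeric_vector.h"
#include <vector>

// localize_to_one() is collective: every rank must call it, even
// though only processor 0 ends up holding the gathered data.
void gather_solution (libMesh::System & sys,
                      std::vector<double> & sys_solution)
{
  // All ranks enter this call together: ranks != 0 send, rank 0 receives.
  sys.solution->localize_to_one (sys_solution);

  // Only processor 0 should look at the result:
  if (sys.processor_id() == 0)
    {
      // ... use sys_solution serially ...
    }
}
```

The deadlock I saw came from calling this inside an `if (processor_id() == 0)` block, so the other ranks never entered the collective call.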


>
>> with all four processes with 100% processor usage.
>>
>
> That's MPI busy-waiting.  In real use cases, where you have at least
> one core for every MPI rank and you're trying to run as fast as
> possible, I guess it's the lowest-latency thing to do, but I wish I
> knew how to turn it off and make mpich2 or openmpi use blocking waits
> instead.  Sometimes I have one processor stopped by gdb while the others
> are just wasting CPU; sometimes I'd like to use N cores to run N*2 MPI
> ranks for debugging purposes and I'd like them not to step on each
> other's toes...
>

Can these processes simply exit at that point? In my case I do not need
them anymore.

Thanks Roy!
Regards,
Renato
_______________________________________________
Libmesh-users mailing list
Libmesh-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/libmesh-users
