I don't understand one thing, though, and would appreciate your comments.
 
In various interfaces, like network sockets or threads waiting for data from 
somewhere, there are solutions based on _not_ checking the state of the 
socket or of some queue continuously, but instead getting _interrupted_ 
(woken up) when data arrives: readiness calls like select() for sockets, or 
condition variables for threads.
I am not entirely clear on the details, but it seems that in these contexts 
continuous polling is avoided, so actual CPU usage usually stays well below 
100%.
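
As a rough sketch (my own illustration, not from this thread) of the 
condition-variable pattern I mean: the waiting thread blocks inside 
pthread_cond_wait() and uses essentially no CPU until the producer signals 
it.

    /* Minimal condition-variable example: the consumer sleeps instead of
     * polling.  Compile with: cc demo.c -lpthread */
    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
    static int data_ready = 0;

    static void *consumer(void *arg)
    {
        pthread_mutex_lock(&lock);
        while (!data_ready)                  /* guards against spurious wakeups */
            pthread_cond_wait(&cond, &lock); /* blocks; no busy polling */
        pthread_mutex_unlock(&lock);
        printf("consumer: data arrived\n");
        return NULL;
    }

    static void *producer(void *arg)
    {
        pthread_mutex_lock(&lock);
        data_ready = 1;
        pthread_cond_signal(&cond);          /* wakes the sleeping consumer */
        pthread_mutex_unlock(&lock);
        return NULL;
    }

    int main(void)
    {
        pthread_t c, p;
        pthread_create(&c, NULL, consumer, NULL);
        pthread_create(&p, NULL, producer, NULL);
        pthread_join(c, NULL);
        pthread_join(p, NULL);
        return 0;
    }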
 
Why can't something similar be implemented for, e.g., broadcast?
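
(For illustration only, and not something discussed in this thread: with the 
nonblocking collectives added in MPI-3, which postdate this 2010 exchange, an 
application can at least approximate low-CPU waiting itself by testing a 
request and sleeping between tests, trading some wakeup latency for CPU.

    /* Hypothetical sketch assuming an MPI-3 library with MPI_Ibcast.
     * Run with e.g.: mpicc bcast.c && mpirun -np 2 ./a.out */
    #include <mpi.h>
    #include <unistd.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int value = 0, done = 0, rank;
        MPI_Request req;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        if (rank == 0) value = 42;

        MPI_Ibcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD, &req);
        while (!done) {
            MPI_Test(&req, &done, MPI_STATUS_IGNORE); /* drives MPI progress */
            if (!done)
                usleep(1000); /* ~1 ms nap: CPU near 0%, latency up to ~1 ms */
        }
        printf("rank %d got %d\n", rank, value);
        MPI_Finalize();
        return 0;
    })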
 
-----Original Message-----
From: "Jeff Squyres" [jsquy...@cisco.com]
Date: 13/12/2010 03:55 PM
To: "Open MPI Users" 
Subject: Re: [OMPI users] curious behavior during wait for broadcast: 100% cpu

I think there *was* a decision, and it effectively changed how sched_yield() 
operates; it may no longer do what we expect.

See this thread (the discussion of Linux/sched_yield() comes in the later 
messages):

http://www.open-mpi.org/community/lists/users/2010/07/13729.php

I believe there are similar threads in the MPICH mailing list archives; 
that's why Dave posted about it on the OMPI list.

We briefly discussed replacing OMPI's sched_yield() with a usleep(1), but it 
was shot down.
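
To make the trade-off concrete (this is an illustrative sketch, not Open 
MPI's actual source; progress() and request_complete are hypothetical 
stand-ins for the real progress engine), the blocking-wait pattern under 
discussion looks roughly like:

    #include <sched.h>
    #include <unistd.h>

    extern volatile int request_complete; /* set by the progress engine (assumed) */
    extern void progress(void);           /* hypothetical progress call */

    void wait_for_completion(void)
    {
        while (!request_complete) {
            progress();      /* poll the network / shared memory */
            sched_yield();   /* on Linux >= 2.6.23 (CFS), this returns
                                immediately if nothing else is runnable,
                                so the process still burns 100% CPU */
            /* usleep(1);       the proposed alternative: actually sleeps,
                                dropping CPU use but adding wakeup latency */
        }
    }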


