No, we really shouldn't. Having just fought with a program using usleep(1)
that behaved even worse, I think working around this particular inability of the
Linux kernel development team to do something sane will only lead to more pain.
There are no good options, so the best option is to not try
Should we replace OMPI's use of sched_yield() with usleep()?
David -- could you try replacing the call to sched_yield() in
opal/runtime/opal_progress.c (somewhere around line 220) with usleep(1) and see
if that gives the behavior that you want (without twonking a /proc value)? You
might also
That did it. Thanks.
David
On Wed, 2010-07-21 at 15:29 -0500, Dave Goodell wrote:
> On Jul 21, 2010, at 2:54 PM CDT, Jed Brown wrote:
>
> > On Wed, 21 Jul 2010 15:20:24 -0400, David Ronis wrote:
> >> Hi Jed,
> >>
> >> Thanks for the reply and suggestion. I tried
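For context, the change being tested above is roughly of the following shape. This is a minimal sketch only, not the actual opal/runtime/opal_progress.c code; the helper name and the guard macro are invented for illustration.

/* Sketch of the idle-wait swap -- NOT the real opal_progress.c source.
 * When the progress engine finds nothing to do, sleep for ~1 microsecond
 * instead of calling sched_yield(), so an idle rank actually releases
 * its core under the current Linux scheduler. */
#include <sched.h>
#include <unistd.h>

static void idle_wait(void)             /* hypothetical helper name */
{
#ifdef IDLE_USE_USLEEP                  /* hypothetical guard, for illustration */
    usleep(1);                          /* gives the core up for at least 1 us */
#else
    sched_yield();                      /* CFS may hand the CPU straight back */
#endif
}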
On Jul 21, 2010, at 2:54 PM CDT, Jed Brown wrote:
> On Wed, 21 Jul 2010 15:20:24 -0400, David Ronis wrote:
>> Hi Jed,
>>
>> Thanks for the reply and suggestion. I tried adding -mca
>> yield_when_idle 1 (and later mpi_yield_when_idle 1 which is what
>> ompi_info reports
I'm running Linux (Slackware 12.2), Open MPI 1.4.2 and FFTW 3.2.4.
As to the planner always running in parallel, I suspect it isn't. It's
trying to optimize how to split up the FFT computation between different
codelets and different numbers of threads (including none). It tries
something and
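In case it helps to see what that planning phase looks like, threaded FFTW 3.x planning is roughly the following. This is a hedged sketch, not David's code: the transform size and thread count are assumptions, and FFTW_MEASURE is what makes the planner benchmark many codelet/thread decompositions, the minute-long phase during which the other ranks sit idle.

/* Illustration of threaded FFTW planning -- sizes and thread count are
 * assumptions, not taken from the thread.
 * Build with something like:  cc plan.c -lfftw3_threads -lfftw3 -lm -lpthread */
#include <fftw3.h>

int main(void)
{
    fftw_init_threads();               /* enable the threaded planner */
    fftw_plan_with_nthreads(8);        /* 8 threads: assumed for an 8-core box */

    int n = 1 << 20;                   /* illustrative transform size */
    fftw_complex *buf = fftw_malloc(sizeof(fftw_complex) * n);
    for (int i = 0; i < n; ++i) { buf[i][0] = 1.0; buf[i][1] = 0.0; }

    /* With FFTW_MEASURE the planner times candidate decompositions here;
     * this is the slow "planning" step the master spends a minute in. */
    fftw_plan p = fftw_plan_dft_1d(n, buf, buf, FFTW_FORWARD, FFTW_MEASURE);

    fftw_execute(p);
    fftw_destroy_plan(p);
    fftw_free(buf);
    fftw_cleanup_threads();
    return 0;
}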
On Wed, 21 Jul 2010 15:20:24 -0400, David Ronis wrote:
> Hi Jed,
>
> Thanks for the reply and suggestion. I tried adding -mca
> yield_when_idle 1 (and later mpi_yield_when_idle 1 which is what
> ompi_info reports the variable as) but it seems to have had 0 effect.
> My
Hi Jed,
Thanks for the reply and suggestion. I tried adding -mca
yield_when_idle 1 (and later mpi_yield_when_idle 1, which is what
ompi_info reports the variable as), but it seems to have had no effect.
My master goes into fftw planning routines for a minute or so (I see the
threads being created),
Hi David:
On Wed, Jul 21, 2010 at 02:10:53PM -0400, David Ronis wrote:
> I've got an MPI program on an 8-core box that runs in master-slave
> mode. The slaves calculate something, pass data to the master, and
> then call MPI_Bcast waiting for the master to update and return some
> data via a
On Wed, 21 Jul 2010 14:10:53 -0400, David Ronis wrote:
> Is there another MPI routine that polls for data and then gives up its
> time-slice?
You're probably looking for the runtime option -mca yield_when_idle 1.
This slightly increases latency, but allows other
I've got an MPI program on an 8-core box that runs in master-slave
mode. The slaves calculate something, pass data to the master, and
then call MPI_Bcast, waiting for the master to update and return some
data via an MPI_Bcast originating on the master.
One of the things the master does while
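A minimal sketch of the pattern described above, with placeholder sizes, tags, and "work" rather than David's actual program. The point is that while rank 0 is busy (for example in FFTW planning), every other rank is blocked in MPI_Bcast, and with Open MPI's default aggressive progress each of those idle ranks busy-polls a full core.

/* Sketch of the master/slave structure from the thread -- illustrative only.
 * Run with something like:
 *   mpirun -np 8 -mca mpi_yield_when_idle 1 ./a.out
 * (the MCA flag discussed earlier in the thread) */
#include <mpi.h>

#define N 1024                          /* illustrative broadcast size */

int main(int argc, char **argv)
{
    int rank, size;
    double data[N] = {0};

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    for (int iter = 0; iter < 10; ++iter) {
        if (rank != 0) {
            /* slave: compute something and send it to the master */
            double result = rank + iter;
            MPI_Send(&result, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD);
        } else {
            /* master: collect results, then do a long update
             * (the FFTW planning phase in David's description) */
            for (int src = 1; src < size; ++src) {
                double r;
                MPI_Recv(&r, 1, MPI_DOUBLE, src, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
            }
        }

        /* The slaves block here while the master works; by default
         * Open MPI busy-polls inside MPI_Bcast, so each idle slave
         * pins a core unless the yield/sleep behavior is changed. */
        MPI_Bcast(data, N, MPI_DOUBLE, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}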