On 13/02/12 22:11, Matthias Jurenz wrote:
> Do you have any idea? Please help!
Do you see the same bad latency in the old branch (1.4.5) ?
cheers,
Chris
--
Christopher Samuel - Senior Systems Administrator
VLSCI - Victorian Life Sciences Computation Initiative
devel-boun...@open-mpi.org wrote on 27/02/2012 15:53:06:
> From: Ralph Castain
> To: Open MPI Developers
> Date: 27/02/2012 16:17
> Subject: Re: [OMPI devel] Problem with the openmpi-default-hostfile
> (on the trunk)
> Sent by:
I'll see what I can do when next I have access to a slurm machine - hopefully
in a day or two.
Are you sure you are at the top of the trunk? I reviewed the code, and it
clearly detects that the default hostfile is empty and ignores it if so. Like I
said, I'm not seeing this behavior, and
Minor update:
I see some improvement when I set the MCA parameter mpi_yield_when_idle to 0
to enforce the "Aggressive" performance mode:
$ mpirun -np 2 -mca mpi_yield_when_idle 0 -mca btl self,sm -bind-to-core -
cpus-per-proc 2 ./NPmpi_ompi1.5.5 -u 4 -n 10
0: n090
1: n090
Now starting the
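As an aside on the mechanics (standard Open MPI convention, not something stated in the thread): any MCA parameter can equivalently be set through the environment, since Open MPI picks up variables named OMPI_MCA_<param>. A minimal sketch of the same setting as the command line above:

```shell
# Same effect as passing "-mca mpi_yield_when_idle 0" to mpirun:
# Open MPI reads any environment variable named OMPI_MCA_<param>.
export OMPI_MCA_mpi_yield_when_idle=0
echo "$OMPI_MCA_mpi_yield_when_idle"   # prints 0
```

This is convenient in batch scripts where the mpirun invocation is fixed.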
Ron -- Many thanks!
Leif -- can you comment on this? (yes, I'm passing the buck to our ARM Open
MPI representative :-) )
On Feb 26, 2012, at 1:22 PM, Ron Broberg wrote:
> I would like to report the following information regarding compiling OpenMPI
> on Debian ARMv6. I won't submit this as a
When using Open MPI v1.4.5 I get ~1.1us. That's the same result as I get with
Open MPI v1.5.x using mpi_yield_when_idle=0.
So I think there is a bug in Open MPI (v1.5.4 and v1.5.5rc2) regarding the
automatic performance mode selection.
When enabling the degraded performance mode for Open MPI
Hi Ron,
Excellent work! Indeed - simply dropping the DMBs can lead to memory
consistency issues even on ARMv6.
The architectural semantics for memory barriers exist in ARMv6 though - they
just weren't given dedicated mnemonics.
What you could do is to simply replace the inline "dmb" sequences
devel-boun...@open-mpi.org wrote on 28/02/2012 10:54:15:
> From: Ralph Castain
> To: Open MPI Developers
> Date: 28/02/2012 10:54
> Subject: Re: [OMPI devel] Problem with the openmpi-default-hostfile
> (on the trunk)
> Sent by:
Are there any changes we need to make to OMPI?
On Feb 28, 2012, at 7:50 AM, Leif Lindholm wrote:
> Hi Ron,
>
> Excellent work! Indeed - simply dropping the DMBs can lead to memory
> consistency issues even on ARMv6.
>
> The architectural semantics for memory barriers exist in ARMv6 though -
Thanks - I'll fix that bug!
On Feb 28, 2012, at 6:48 AM, pascal.dev...@bull.net wrote:
> devel-boun...@open-mpi.org wrote on 28/02/2012 10:54:15:
>
> > From: Ralph Castain
> > To: Open MPI Developers
> > Date: 28/02/2012 10:54
> > Subject: Re:
I'll look into this...
Thanks
Edgar
On 2/28/2012 8:36 AM, Ralph Castain wrote:
> I tried to build the trunk this morning on a machine where the fcoll
> framework could build and hit this:
>
> mca/fcoll/dynamic/.libs/libmca_fcoll_dynamic.a(fcoll_dynamic_file_write_all.o):
> In function
sorry, should be fixed with the last commit...
Thanks
Edgar
On 2/28/2012 8:37 AM, Edgar Gabriel wrote:
> I'll look into this...
>
> Thanks
> Edgar
>
> On 2/28/2012 8:36 AM, Ralph Castain wrote:
>> I tried to build the trunk this morning on a machine where the fcoll
>> framework could build
Thanks Edgar!
On Feb 28, 2012, at 7:43 AM, Edgar Gabriel wrote:
> sorry, should be fixed with the last commit...
>
> Thanks
> Edgar
>
> On 2/28/2012 8:37 AM, Edgar Gabriel wrote:
>> I'll look into this...
>>
>> Thanks
>> Edgar
>>
>> On 2/28/2012 8:36 AM, Ralph Castain wrote:
>>> I tried to
We'd need a few ifdefs, effectively.
One on the dmb/mcr and one on the 64-bit, depending on v6k or higher.
This would provide ARMv6 support only though - ARMv5 or earlier (like debian
"armel") will still miss out.
> -Original Message-
> From: devel-boun...@open-mpi.org
Ok. We'll rely on your patches. :-)
On Feb 28, 2012, at 11:00 AM, Leif Lindholm wrote:
> We'd need a few ifdefs, effectively.
>
> One on the dmb/mcr and one on the 64-bit, depending on v6k or higher.
>
> This would provide ARMv6 support only though - ARMv5 or earlier (like debian
>
There is a serious chilled water issue at IU right now; all non-essential
servers (including Open MPI's nightly build server) have been turned off. So
we have no new "official" 1.5.5 RC, and no new nightlies will be produced until
further notice.
However, to keep the 1.5.5 release train
Testing 1.5.5rc3 on a "representative sampling" of my many platforms
looks good.
In particular, I've retested various platforms that showed any
significant problems previously and found them to be fixed.
Though minor, I do see that the following patches I've posted are not
applied
+ Add a
By chance I noticed the following in the trunk:
Index: ompi-trunk/orte/mca/rml/oob/rml_oob_component.c
===
--- ompi-trunk/orte/mca/rml/oob/rml_oob_component.c (revision 26069)
+++ ompi-trunk/orte/mca/rml/oob/rml_oob_component.c
On 29/02/12 07:44, Jeffrey Squyres wrote:
> - BlueGene fixes
rc3 fixes the builds on our front end node, thanks!
--
Christopher Samuel - Senior Systems Administrator
VLSCI - Victorian Life Sciences Computation Initiative
Email:
Fixed -- thanks!
On Feb 28, 2012, at 6:54 PM, Paul H. Hargrove wrote:
> By chance I noticed the following in the trunk:
>
> Index: ompi-trunk/orte/mca/rml/oob/rml_oob_component.c
> ===
> ---
On 2/28/2012 5:09 PM, Christopher Samuel wrote:
> On 29/02/12 07:44, Jeffrey Squyres wrote:
>> - BlueGene fixes
> rc3 fixes the builds on our front end node, thanks!
And on a BG/L (not a typo) front-end too, where the same problem existed
in prior versions.
-Paul
--
Paul H. Hargrove
The CS department at IU, our hosting provider for www.open-mpi.org, is having a
serious chilled water issue right now -- all non-essential servers have been
powered off to reduce heat in their machine room. This includes hwloc's
nightly tarball build server.
As such, until the chilled water