Re: [OMPI users] MPI_Comm_spawn leads to pipe leak and other errors

2019-03-16 Thread Ralph H Castain
FWIW: I just ran a cycle of 10,000 spawns on my Mac without a problem using 
OMPI master, so I believe this has been resolved. I don’t know if/when the 
required updates might come into the various release branches.

Ralph


> On Mar 16, 2019, at 1:13 PM, Thomas Pak  wrote:
> 
> Dear Jeff,
> 
> I did find a way to circumvent this issue for my specific application by 
> spawning less frequently. However, I wanted to at least bring attention to 
> this issue for the OpenMPI community, as it can be reproduced with an 
> alarmingly simple program.
> 
> Perhaps the user's mailing list is not the ideal place for this. Would you 
> recommend that I report this issue on the developer's mailing list or open a 
> GitHub issue?
> 
> Best wishes,
> Thomas Pak
> 
> On Mar 16 2019, at 7:40 pm, Jeff Hammond  wrote:
> Is there perhaps a different way to solve your problem that doesn’t spawn so 
> much as to hit this issue?
> 
> I’m not denying there’s an issue here, but in a world of finite human effort 
> and fallible software, sometimes it’s easiest to just avoid the bugs 
> altogether.
> 
> Jeff
> 
> On Sat, Mar 16, 2019 at 12:11 PM Thomas Pak wrote:
> Dear all,
> 
> Does anyone have any clue on what the problem could be here? This seems to be 
> a persistent problem present in all currently supported OpenMPI releases and 
> indicates that there is a fundamental flaw in how OpenMPI handles dynamic 
> process creation.
> 
> Best wishes,
> Thomas Pak
> 
> From: "Thomas Pak"  >
> To: users@lists.open-mpi.org 
> Sent: Friday, 7 December, 2018 17:51:29
> Subject: [OMPI users] MPI_Comm_spawn leads to pipe leak and other errors
> 
> Dear all,
> 
> My MPI application spawns a large number of MPI processes using 
> MPI_Comm_spawn over its total lifetime. Unfortunately, I have experienced 
> that this results in problems for all currently supported OpenMPI versions 
> (2.1, 3.0, 3.1 and 4.0). I have written a short, self-contained program in C 
> (included below) that spawns child processes using MPI_Comm_spawn in an 
> infinite loop, where each child process exits after writing a message to 
> stdout. This short program leads to the following issues:
> 
> In versions 2.1.2 (Ubuntu package) and 2.1.5 (compiled from source), the 
> program leads to a pipe leak where pipes keep accumulating over time until my 
> MPI application crashes because the maximum number of pipes has been reached.
> 
> In versions 3.0.3 and 3.1.3 (both compiled from source), there appears to be 
> no pipe leak, but the program crashes with the following error message:
> PMIX_ERROR: UNREACHABLE in file ptl_tcp_component.c at line 1257
> 
> In version 4.0.0 (compiled from source), I have not been able to test this 
> issue very thoroughly because mpiexec ignores the --oversubscribe 
> command-line flag (as detailed in this GitHub issue: 
> https://github.com/open-mpi/ompi/issues/6130). This prohibits the 
> oversubscription of processor cores, which means that spawning additional 
> processes immediately results in an error because "not enough slots" are 
> available. A fix for this was proposed recently 
> (https://github.com/open-mpi/ompi/pull/6139), but since the v4.0.x developer 
> branch is being actively developed right now, I decided not to go into it.
> 
> I have found one e-mail thread on this mailing list about a similar problem 
> (https://www.mail-archive.com/users@lists.open-mpi.org/msg10543.html). In 
> this thread, Ralph Castain states that this is a known issue and suggests 
> that it is fixed in the then upcoming v1.3.x release. However, version 1.3 is 
> no longer supported and the issue has reappeared, hence this did not solve 
> the issue.
> 
> I have created a GitHub gist that contains the output from "ompi_info --all" 
> of all the OpenMPI installations mentioned here, as well as the config.log 
> files for the OpenMPI installations that I compiled from source: 
> https://gist.github.com/ThomasPak/1003160e396bb88dff27e53c53121e0c.
> 
> I have also attached the code for the short program that demonstrates these 
> issues. For good measure, I have included it directly here as well:
> 
> """
> #include <mpi.h>
> #include <stdio.h>
> 
> int main(int argc, char *argv[]) {
> 
> // Initialize MPI
> MPI_Init(NULL, NULL);
> 
> // Get parent
> MPI_Comm parent;
> MPI_Comm_get_parent(&parent);
> 
> // If the process was not spawned
> if (parent == MPI_COMM_NULL) {
> 
> puts("I was not spawned!");
> 
> // Spawn child process in loop
> char *cmd = argv[0];
> char **cmd_argv = MPI_ARGV_NULL;
> int maxprocs = 1;
> MPI_Info info = MPI_INFO_NULL;
> int root = 0;
> MPI_Comm comm = MPI_COMM_SELF;
> MPI_Comm intercomm;
> int *array_of_e
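> 
> [The message is truncated here in the archive. A minimal sketch of how such a
> spawn loop is typically completed - the lines below are illustrative, not the
> remainder of the original program:
> 
>     // Spawn one child at a time, forever; disconnect after each spawn
>     while (1) {
>         MPI_Comm_spawn(cmd, cmd_argv, maxprocs, info, root, comm,
>                        &intercomm, MPI_ERRCODES_IGNORE);
>         MPI_Comm_disconnect(&intercomm);
>     }
> 
> } else {
>     // Spawned child: print a message, detach from the parent, and exit
>     puts("I was spawned!");
>     MPI_Comm_disconnect(&parent);
> }
> 
> MPI_Finalize();
> return 0;
> }]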

Re: [OMPI users] local rank to rank comms

2019-03-11 Thread Ralph H Castain
OFI uses libpsm2 underneath when OmniPath is detected.

Sent from my iPhone

> On Mar 11, 2019, at 9:06 AM, Gilles Gouaillardet 
>  wrote:
> 
> Michael,
> 
> You can
> 
> mpirun --mca pml_base_verbose 10 --mca btl_base_verbose 10 --mca 
> mtl_base_verbose 10 ...
> 
> It might show that pml/cm and mtl/psm2 are used. In that case, then yes, the 
> OmniPath library is used even for intra node communications. If this library 
> is optimized for intra node, then it will internally use shared memory 
> instead of the NIC.
> 
> 
> You can force
> 
> mpirun --mca pml ob1 ...
> 
> 
> And btl/vader (shared memory) will be used for intra node communications ... 
> unless MPI tasks are from different jobs (read MPI_Comm_spawn())
> 
> Cheers,
> 
> Gilles
> 
> Michael Di Domenico  wrote:
>> i have a user that's claiming when two ranks on the same node want to
>> talk with each other, they're using the NIC to talk rather than just
>> talking directly.
>> 
>> i've never had to test such a scenario.  is there a way for me to
>> prove one way or another whether two ranks are talking through say the
>> kernel (or however it actually works) or using the nic?
>> 
>> i didn't set any flags when i compiled openmpi to change this.
>> 
>> i'm running ompi 3.1, pmix 2.2.1, and slurm 18.05 running atop omnipath
>> ___
>> users mailing list
>> users@lists.open-mpi.org
>> https://lists.open-mpi.org/mailman/listinfo/users
> ___
> users mailing list
> users@lists.open-mpi.org
> https://lists.open-mpi.org/mailman/listinfo/users

___
users mailing list
users@lists.open-mpi.org
https://lists.open-mpi.org/mailman/listinfo/users


Re: [OMPI users] local rank to rank comms

2019-03-11 Thread Ralph H Castain
You are probably using the ofi mtl - it could be that psm2 uses a loopback method?

Sent from my iPhone

> On Mar 11, 2019, at 8:40 AM, Michael Di Domenico  
> wrote:
> 
> i have a user that's claiming when two ranks on the same node want to
> talk with each other, they're using the NIC to talk rather than just
> talking directly.
> 
> i've never had to test such a scenario.  is there a way for me to
> prove one way or another whether two ranks are talking through say the
> kernel (or however it actually works) or using the nic?
> 
> i didn't set any flags when i compiled openmpi to change this.
> 
> i'm running ompi 3.1, pmix 2.2.1, and slurm 18.05 running atop omnipath
> ___
> users mailing list
> users@lists.open-mpi.org
> https://lists.open-mpi.org/mailman/listinfo/users

___
users mailing list
users@lists.open-mpi.org
https://lists.open-mpi.org/mailman/listinfo/users


Re: [OMPI users] IRC/Discord?

2019-03-06 Thread Ralph H Castain
We currently reserve the Slack channel for developers. We might be willing to 
open a channel for users, but we’d have to discuss it - there is a concern that 
we not get overwhelmed :-)


> On Mar 6, 2019, at 3:06 AM, George Marselis  
> wrote:
> 
> Cool! May I please get an invitation? 
> 
> 
> Best Regards,
> 
> 
> George Marselis
> 
> ____
> From: users  on behalf of Ralph H Castain 
> 
> Sent: Tuesday, March 5, 2019 5:12 PM
> To: Open MPI Users
> Subject: Re: [OMPI users] IRC/Discord?
> 
> Not IRC or discord, but we do make significant use of Slack: 
> open-mpi.slack.com
> 
> 
>> On Mar 5, 2019, at 8:04 AM, George Marselis  
>> wrote:
>> 
>> Hey guys,
>> 
>> Sorry to bother you. I was wondering if there is an IRC or discord channel 
>> for this mailing list.
>> 
>> (there is an IRC channel on Freenode under #openmpi but it's like 2 people 
>> in it)
>> 
>> Thank you for your time
>> 
>> Best Regards,
>> 
>> 
>> George Marselis
>> ___
>> users mailing list
>> users@lists.open-mpi.org
>> https://lists.open-mpi.org/mailman/listinfo/users
> 
> ___
> users mailing list
> users@lists.open-mpi.org
> https://lists.open-mpi.org/mailman/listinfo/users
> ___
> users mailing list
> users@lists.open-mpi.org
> https://lists.open-mpi.org/mailman/listinfo/users

___
users mailing list
users@lists.open-mpi.org
https://lists.open-mpi.org/mailman/listinfo/users

Re: [OMPI users] IRC/Discord?

2019-03-05 Thread Ralph H Castain
Not IRC or discord, but we do make significant use of Slack: open-mpi.slack.com


> On Mar 5, 2019, at 8:04 AM, George Marselis  
> wrote:
> 
> Hey guys, 
> 
> Sorry to bother you. I was wondering if there is an IRC or discord channel 
> for this mailing list. 
> 
> (there is an IRC channel on Freenode under #openmpi but it's like 2 people in 
> it)
> 
> Thank you for your time
> 
> Best Regards,
> 
> 
> George Marselis
> ___
> users mailing list
> users@lists.open-mpi.org
> https://lists.open-mpi.org/mailman/listinfo/users

___
users mailing list
users@lists.open-mpi.org
https://lists.open-mpi.org/mailman/listinfo/users


Re: [OMPI users] Building PMIx and Slurm support

2019-03-04 Thread Ralph H Castain


> On Mar 4, 2019, at 5:34 AM, Daniel Letai  wrote:
> 
> Gilles,
> On 3/4/19 8:28 AM, Gilles Gouaillardet wrote:
>> Daniel, 
>> 
>> 
>> On 3/4/2019 3:18 PM, Daniel Letai wrote: 
>>> 
>>>> So unless you have a specific reason not to mix both, you might also give 
>>>> the internal PMIx a try. 
>>> Does this hold true for libevent too? Configure complains if libevent for 
>>> openmpi is different than the one used for the other tools. 
>>> 
>> 
>> I am not exactly sure of which scenario you are running. 
>> 
>> Long story short, 
>> 
>>  - If you use an external PMIx, then you have to use an external libevent 
>> (otherwise configure will fail). 
>> 
>>It must be the same one used by PMIx, but I am not sure configure checks 
>> that. 
>> 
>> - If you use the internal PMIx, then it is up to you. you can either use the 
>> internal libevent, or an external one. 
>> 
> Thanks, that clarifies the issues I've experienced. Since PMIx doesn't have 
> to be the same for server and nodes, I can compile slurm with external PMIx 
> with system libevent, and compile openmpi with internal PMIx and libevent, 
> and that should work. Is that correct?

Yes - that is indeed correct!
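
For illustration, that split could look roughly like the following (install
paths are placeholders, not taken from this thread):

    # external PMIx (picking up the system libevent), used to build Slurm
    ./configure --prefix=/opt/pmix
    ./configure --prefix=/opt/slurm --with-pmix=/opt/pmix

    # Open MPI left with its bundled (internal) PMIx and libevent
    ./configure --prefix=/opt/openmpi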

> 
> BTW, building 4.0.1rc1 completed successfully using external for all, will 
> start testing in near future.
>> 
>> Cheers, 
>> 
>> 
>> Gilles 
>> 
> Thanks,
> Dani_L.
>> ___ 
>> users mailing list 
>> users@lists.open-mpi.org  
>> https://lists.open-mpi.org/mailman/listinfo/users 
>> 
> ___
> users mailing list
> users@lists.open-mpi.org 
> https://lists.open-mpi.org/mailman/listinfo/users 
> 
___
users mailing list
users@lists.open-mpi.org
https://lists.open-mpi.org/mailman/listinfo/users

Re: [OMPI users] Open MPI installation problem

2019-01-23 Thread Ralph H Castain
Your PATH and LD_LIBRARY_PATH setting is incorrect. You installed OMPI into 
$HOME/openmpi, so you should have done:

PATH=$HOME/openmpi/bin:$PATH
LD_LIBRARY_PATH=$HOME/openmpi/lib:$LD_LIBRARY_PATH
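
(A quick sanity check: "which mpirun" and "mpicc --showme" should then point 
into $HOME/openmpi rather than /usr/lib64/mpi/gcc/openmpi.)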

Ralph


> On Jan 23, 2019, at 6:36 AM, Serdar Hiçdurmaz  
> wrote:
> 
> Hi All,
> 
> I try to install Open MPI, which is a prerequisite for LIGGGHTS (DEM software). 
> Some info about my current linux version :
> 
> NAME="SLED"
> VERSION="12-SP3"
> VERSION_ID="12.3"
> PRETTY_NAME="SUSE Linux Enterprise Desktop 12 SP3"
> ID="sled"
> 
> I installed Open MPI 1.6 by typing
> 
> ./configure --prefix=$HOME/openmpi
> make all
> make install
> 
> Here, it is discussed that openmpi 1.6 is compatible with OpenSUSE 12.3: 
> https://public.kitware.com/pipermail/paraview/2014-February/030487.html 
> https://build.opensuse.org/package/show/openSUSE:12.3/openmpi 
> 
> To add OpenMPI to my path and LD_LIBRARY_PATH, I execute the following 
> comands on terminal:
> 
> export PATH=$PATH:/usr/lib64/mpi/gcc/openmpi/bin
> export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/lib64/mpi/gcc/openmpi/lib64
> 
> Then, in /liggghts/src directory, I execute make auto, this appears :
> 
> Creating list of contact models completed.
> make[1]: Entering directory 
> '/home/serdarhd/liggghts/LIGGGHTS-PUBLIC/src/Obj_auto'
> Makefile:456: *** 'Could not compile a simple MPI example. Test was done with 
> MPI_INC="" and MPICXX="mpicxx"'. Stop.
> make[1]: Leaving directory 
> '/home/serdarhd/liggghts/LIGGGHTS-PUBLIC/src/Obj_auto'
> Makefile:106: recipe for target 'auto' failed
> make: *** [auto] Error 2
> 
> Do you have any idea what the problem is here? I went through the "makefile", 
> but it looks quite complicated to a Linux beginner like me.
> 
> Thanks in advance. Regards,
> 
> Serdar
> 
> 
> 
> ___
> users mailing list
> users@lists.open-mpi.org
> https://lists.open-mpi.org/mailman/listinfo/users

___
users mailing list
users@lists.open-mpi.org
https://lists.open-mpi.org/mailman/listinfo/users

Re: [OMPI users] pmix and srun

2019-01-18 Thread Ralph H Castain
Good - thanks!

> On Jan 18, 2019, at 3:25 PM, Michael Di Domenico  
> wrote:
> 
> seems to be better now.  jobs are running
> 
> On Fri, Jan 18, 2019 at 6:17 PM Ralph H Castain  wrote:
>> 
>> I have pushed a fix to the v2.2 branch - could you please confirm it?
>> 
>> 
>>> On Jan 18, 2019, at 2:23 PM, Ralph H Castain  wrote:
>>> 
>>> Aha - I found it. It’s a typo in the v2.2.1 release. Sadly, our Slurm 
>>> plugin folks seem to be off somewhere for awhile and haven’t been testing 
>>> it. Sigh.
>>> 
>>> I’ll patch the branch and let you know - we’d appreciate the feedback.
>>> Ralph
>>> 
>>> 
>>>> On Jan 18, 2019, at 2:09 PM, Michael Di Domenico  
>>>> wrote:
>>>> 
>>>> here's the branches i'm using.  i did a git clone on the repo's and
>>>> then a git checkout
>>>> 
>>>> [ec2-user@labhead bin]$ cd /hpc/src/pmix/
>>>> [ec2-user@labhead pmix]$ git branch
>>>> master
>>>> * v2.2
>>>> [ec2-user@labhead pmix]$ cd ../slurm/
>>>> [ec2-user@labhead slurm]$ git branch
>>>> * (detached from origin/slurm-18.08)
>>>> master
>>>> [ec2-user@labhead slurm]$ cd ../ompi/
>>>> [ec2-user@labhead ompi]$ git branch
>>>> * (detached from origin/v3.1.x)
>>>> master
>>>> 
>>>> 
>>>> attached is the debug out from the run with the debugging turned on
>>>> 
>>>> On Fri, Jan 18, 2019 at 4:30 PM Ralph H Castain  wrote:
>>>>> 
>>>>> Looks strange. I’m pretty sure Mellanox didn’t implement the event 
>>>>> notification system in the Slurm plugin, but you should only be trying to 
>>>>> call it if OMPI is registering a system-level event code - which OMPI 3.1 
>>>>> definitely doesn’t do.
>>>>> 
>>>>> If you are using PMIx v2.2.0, then please note that there is a bug in it 
>>>>> that slipped through our automated testing. I replaced it today with 
>>>>> v2.2.1 - you probably should update if that’s the case. However, that 
>>>>> wouldn’t necessarily explain this behavior. I’m not that familiar with 
>>>>> the Slurm plugin, but you might try adding
>>>>> 
>>>>> PMIX_MCA_pmix_client_event_verbose=5
>>>>> PMIX_MCA_pmix_server_event_verbose=5
>>>>> OMPI_MCA_pmix_base_verbose=10
>>>>> 
>>>>> to your environment and see if that provides anything useful.
>>>>> 
>>>>>> On Jan 18, 2019, at 12:09 PM, Michael Di Domenico 
>>>>>>  wrote:
>>>>>> 
>>>>>> i compilied pmix slurm openmpi
>>>>>> 
>>>>>> ---pmix
>>>>>> ./configure --prefix=/hpc/pmix/2.2 --with-munge=/hpc/munge/0.5.13
>>>>>> --disable-debug
>>>>>> ---slurm
>>>>>> ./configure --prefix=/hpc/slurm/18.08 --with-munge=/hpc/munge/0.5.13
>>>>>> --with-pmix=/hpc/pmix/2.2
>>>>>> ---openmpi
>>>>>> ./configure --prefix=/hpc/ompi/3.1 --with-hwloc=external
>>>>>> --with-libevent=external --with-slurm=/hpc/slurm/18.08
>>>>>> --with-pmix=/hpc/pmix/2.2
>>>>>> 
>>>>>> everything seemed to compile fine, but when i do an srun i get the
>>>>>> below errors, however, if i salloc and then mpirun it seems to work
>>>>>> fine.  i'm not quite sure where the breakdown is or how to debug it
>>>>>> 
>>>>>> ---
>>>>>> 
>>>>>> [ec2-user@labcmp1 linux]$ srun --mpi=pmix_v2 -n 16 xhpl
>>>>>> [labcmp6:18353] PMIX ERROR: NOT-SUPPORTED in file
>>>>>> event/pmix_event_registration.c at line 101
>>>>>> [labcmp6:18355] PMIX ERROR: NOT-SUPPORTED in file
>>>>>> event/pmix_event_registration.c at line 101
>>>>>> [labcmp5:18355] PMIX ERROR: NOT-SUPPORTED in file
>>>>>> event/pmix_event_registration.c at line 101
>>>>>> --
>>>>>> It looks like MPI_INIT failed for some reason; your parallel process is
>>>>>> likely to abort.  There are many reasons that a parallel process can
>>>>>> fail during MPI_INIT; some of which are due to configuration or 
>>>>>> environment

Re: [OMPI users] pmix and srun

2019-01-18 Thread Ralph H Castain
I have pushed a fix to the v2.2 branch - could you please confirm it?


> On Jan 18, 2019, at 2:23 PM, Ralph H Castain  wrote:
> 
> Aha - I found it. It’s a typo in the v2.2.1 release. Sadly, our Slurm plugin 
> folks seem to be off somewhere for awhile and haven’t been testing it. Sigh.
> 
> I’ll patch the branch and let you know - we’d appreciate the feedback.
> Ralph
> 
> 
>> On Jan 18, 2019, at 2:09 PM, Michael Di Domenico  
>> wrote:
>> 
>> here's the branches i'm using.  i did a git clone on the repo's and
>> then a git checkout
>> 
>> [ec2-user@labhead bin]$ cd /hpc/src/pmix/
>> [ec2-user@labhead pmix]$ git branch
>> master
>> * v2.2
>> [ec2-user@labhead pmix]$ cd ../slurm/
>> [ec2-user@labhead slurm]$ git branch
>> * (detached from origin/slurm-18.08)
>> master
>> [ec2-user@labhead slurm]$ cd ../ompi/
>> [ec2-user@labhead ompi]$ git branch
>> * (detached from origin/v3.1.x)
>> master
>> 
>> 
>> attached is the debug out from the run with the debugging turned on
>> 
>> On Fri, Jan 18, 2019 at 4:30 PM Ralph H Castain  wrote:
>>> 
>>> Looks strange. I’m pretty sure Mellanox didn’t implement the event 
>>> notification system in the Slurm plugin, but you should only be trying to 
>>> call it if OMPI is registering a system-level event code - which OMPI 3.1 
>>> definitely doesn’t do.
>>> 
>>> If you are using PMIx v2.2.0, then please note that there is a bug in it 
>>> that slipped through our automated testing. I replaced it today with v2.2.1 
>>> - you probably should update if that’s the case. However, that wouldn’t 
>>> necessarily explain this behavior. I’m not that familiar with the Slurm 
>>> plugin, but you might try adding
>>> 
>>> PMIX_MCA_pmix_client_event_verbose=5
>>> PMIX_MCA_pmix_server_event_verbose=5
>>> OMPI_MCA_pmix_base_verbose=10
>>> 
>>> to your environment and see if that provides anything useful.
>>> 
>>>> On Jan 18, 2019, at 12:09 PM, Michael Di Domenico  
>>>> wrote:
>>>> 
>>>> i compilied pmix slurm openmpi
>>>> 
>>>> ---pmix
>>>> ./configure --prefix=/hpc/pmix/2.2 --with-munge=/hpc/munge/0.5.13
>>>> --disable-debug
>>>> ---slurm
>>>> ./configure --prefix=/hpc/slurm/18.08 --with-munge=/hpc/munge/0.5.13
>>>> --with-pmix=/hpc/pmix/2.2
>>>> ---openmpi
>>>> ./configure --prefix=/hpc/ompi/3.1 --with-hwloc=external
>>>> --with-libevent=external --with-slurm=/hpc/slurm/18.08
>>>> --with-pmix=/hpc/pmix/2.2
>>>> 
>>>> everything seemed to compile fine, but when i do an srun i get the
>>>> below errors, however, if i salloc and then mpirun it seems to work
>>>> fine.  i'm not quite sure where the breakdown is or how to debug it
>>>> 
>>>> ---
>>>> 
>>>> [ec2-user@labcmp1 linux]$ srun --mpi=pmix_v2 -n 16 xhpl
>>>> [labcmp6:18353] PMIX ERROR: NOT-SUPPORTED in file
>>>> event/pmix_event_registration.c at line 101
>>>> [labcmp6:18355] PMIX ERROR: NOT-SUPPORTED in file
>>>> event/pmix_event_registration.c at line 101
>>>> [labcmp5:18355] PMIX ERROR: NOT-SUPPORTED in file
>>>> event/pmix_event_registration.c at line 101
>>>> --
>>>> It looks like MPI_INIT failed for some reason; your parallel process is
>>>> likely to abort.  There are many reasons that a parallel process can
>>>> fail during MPI_INIT; some of which are due to configuration or environment
>>>> problems.  This failure appears to be an internal failure; here's some
>>>> additional information (which may only be relevant to an Open MPI
>>>> developer):
>>>> 
>>>> ompi_interlib_declare
>>>> --> Returned "Would block" (-10) instead of "Success" (0)
>>>> ...snipped...
>>>> [labcmp6:18355] *** An error occurred in MPI_Init
>>>> [labcmp6:18355] *** reported by process [140726281390153,15]
>>>> [labcmp6:18355] *** on a NULL communicator
>>>> [labcmp6:18355] *** Unknown error
>>>> [labcmp6:18355] *** MPI_ERRORS_ARE_FATAL (processes in this
>>>> communicator will now abort,
>>>> [labcmp6:18355] ***and potentially your MPI job)
>>>> [labcmp6:18352] *** An error occurred in M

Re: [OMPI users] Fwd: pmix and srun

2019-01-18 Thread Ralph H Castain
Aha - I found it. It’s a typo in the v2.2.1 release. Sadly, our Slurm plugin 
folks seem to be off somewhere for awhile and haven’t been testing it. Sigh.

I’ll patch the branch and let you know - we’d appreciate the feedback.
Ralph


> On Jan 18, 2019, at 2:09 PM, Michael Di Domenico  
> wrote:
> 
> here's the branches i'm using.  i did a git clone on the repo's and
> then a git checkout
> 
> [ec2-user@labhead bin]$ cd /hpc/src/pmix/
> [ec2-user@labhead pmix]$ git branch
>  master
> * v2.2
> [ec2-user@labhead pmix]$ cd ../slurm/
> [ec2-user@labhead slurm]$ git branch
> * (detached from origin/slurm-18.08)
>  master
> [ec2-user@labhead slurm]$ cd ../ompi/
> [ec2-user@labhead ompi]$ git branch
> * (detached from origin/v3.1.x)
>  master
> 
> 
> attached is the debug out from the run with the debugging turned on
> 
> On Fri, Jan 18, 2019 at 4:30 PM Ralph H Castain  wrote:
>> 
>> Looks strange. I’m pretty sure Mellanox didn’t implement the event 
>> notification system in the Slurm plugin, but you should only be trying to 
>> call it if OMPI is registering a system-level event code - which OMPI 3.1 
>> definitely doesn’t do.
>> 
>> If you are using PMIx v2.2.0, then please note that there is a bug in it 
>> that slipped through our automated testing. I replaced it today with v2.2.1 
>> - you probably should update if that’s the case. However, that wouldn’t 
>> necessarily explain this behavior. I’m not that familiar with the Slurm 
>> plugin, but you might try adding
>> 
>> PMIX_MCA_pmix_client_event_verbose=5
>> PMIX_MCA_pmix_server_event_verbose=5
>> OMPI_MCA_pmix_base_verbose=10
>> 
>> to your environment and see if that provides anything useful.
>> 
>>> On Jan 18, 2019, at 12:09 PM, Michael Di Domenico  
>>> wrote:
>>> 
>>> i compilied pmix slurm openmpi
>>> 
>>> ---pmix
>>> ./configure --prefix=/hpc/pmix/2.2 --with-munge=/hpc/munge/0.5.13
>>> --disable-debug
>>> ---slurm
>>> ./configure --prefix=/hpc/slurm/18.08 --with-munge=/hpc/munge/0.5.13
>>> --with-pmix=/hpc/pmix/2.2
>>> ---openmpi
>>> ./configure --prefix=/hpc/ompi/3.1 --with-hwloc=external
>>> --with-libevent=external --with-slurm=/hpc/slurm/18.08
>>> --with-pmix=/hpc/pmix/2.2
>>> 
>>> everything seemed to compile fine, but when i do an srun i get the
>>> below errors, however, if i salloc and then mpirun it seems to work
>>> fine.  i'm not quite sure where the breakdown is or how to debug it
>>> 
>>> ---
>>> 
>>> [ec2-user@labcmp1 linux]$ srun --mpi=pmix_v2 -n 16 xhpl
>>> [labcmp6:18353] PMIX ERROR: NOT-SUPPORTED in file
>>> event/pmix_event_registration.c at line 101
>>> [labcmp6:18355] PMIX ERROR: NOT-SUPPORTED in file
>>> event/pmix_event_registration.c at line 101
>>> [labcmp5:18355] PMIX ERROR: NOT-SUPPORTED in file
>>> event/pmix_event_registration.c at line 101
>>> --
>>> It looks like MPI_INIT failed for some reason; your parallel process is
>>> likely to abort.  There are many reasons that a parallel process can
>>> fail during MPI_INIT; some of which are due to configuration or environment
>>> problems.  This failure appears to be an internal failure; here's some
>>> additional information (which may only be relevant to an Open MPI
>>> developer):
>>> 
>>> ompi_interlib_declare
>>> --> Returned "Would block" (-10) instead of "Success" (0)
>>> ...snipped...
>>> [labcmp6:18355] *** An error occurred in MPI_Init
>>> [labcmp6:18355] *** reported by process [140726281390153,15]
>>> [labcmp6:18355] *** on a NULL communicator
>>> [labcmp6:18355] *** Unknown error
>>> [labcmp6:18355] *** MPI_ERRORS_ARE_FATAL (processes in this
>>> communicator will now abort,
>>> [labcmp6:18355] ***and potentially your MPI job)
>>> [labcmp6:18352] *** An error occurred in MPI_Init
>>> [labcmp6:18352] *** reported by process [1677936713,12]
>>> [labcmp6:18352] *** on a NULL communicator
>>> [labcmp6:18352] *** Unknown error
>>> [labcmp6:18352] *** MPI_ERRORS_ARE_FATAL (processes in this
>>> communicator will now abort,
>>> [labcmp6:18352] ***and potentially your MPI job)
>>> [labcmp6:18354] *** An error occurred in MPI_Init
>>> [labcmp6:18354] *** reported by process [140726281390153,14]
>>> [labcmp6:18354] *** on a NULL communicator
>>> [labc

Re: [OMPI users] Fwd: pmix and srun

2019-01-18 Thread Ralph H Castain
Looks strange. I’m pretty sure Mellanox didn’t implement the event notification 
system in the Slurm plugin, but you should only be trying to call it if OMPI is 
registering a system-level event code - which OMPI 3.1 definitely doesn’t do.

If you are using PMIx v2.2.0, then please note that there is a bug in it that 
slipped through our automated testing. I replaced it today with v2.2.1 - you 
probably should update if that’s the case. However, that wouldn’t necessarily 
explain this behavior. I’m not that familiar with the Slurm plugin, but you 
might try adding

PMIX_MCA_pmix_client_event_verbose=5
PMIX_MCA_pmix_server_event_verbose=5
OMPI_MCA_pmix_base_verbose=10

to your environment and see if that provides anything useful.

> On Jan 18, 2019, at 12:09 PM, Michael Di Domenico  
> wrote:
> 
> i compilied pmix slurm openmpi
> 
> ---pmix
> ./configure --prefix=/hpc/pmix/2.2 --with-munge=/hpc/munge/0.5.13
> --disable-debug
> ---slurm
> ./configure --prefix=/hpc/slurm/18.08 --with-munge=/hpc/munge/0.5.13
> --with-pmix=/hpc/pmix/2.2
> ---openmpi
> ./configure --prefix=/hpc/ompi/3.1 --with-hwloc=external
> --with-libevent=external --with-slurm=/hpc/slurm/18.08
> --with-pmix=/hpc/pmix/2.2
> 
> everything seemed to compile fine, but when i do an srun i get the
> below errors, however, if i salloc and then mpirun it seems to work
> fine.  i'm not quite sure where the breakdown is or how to debug it
> 
> ---
> 
> [ec2-user@labcmp1 linux]$ srun --mpi=pmix_v2 -n 16 xhpl
> [labcmp6:18353] PMIX ERROR: NOT-SUPPORTED in file
> event/pmix_event_registration.c at line 101
> [labcmp6:18355] PMIX ERROR: NOT-SUPPORTED in file
> event/pmix_event_registration.c at line 101
> [labcmp5:18355] PMIX ERROR: NOT-SUPPORTED in file
> event/pmix_event_registration.c at line 101
> --
> It looks like MPI_INIT failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during MPI_INIT; some of which are due to configuration or environment
> problems.  This failure appears to be an internal failure; here's some
> additional information (which may only be relevant to an Open MPI
> developer):
> 
>  ompi_interlib_declare
>  --> Returned "Would block" (-10) instead of "Success" (0)
> ...snipped...
> [labcmp6:18355] *** An error occurred in MPI_Init
> [labcmp6:18355] *** reported by process [140726281390153,15]
> [labcmp6:18355] *** on a NULL communicator
> [labcmp6:18355] *** Unknown error
> [labcmp6:18355] *** MPI_ERRORS_ARE_FATAL (processes in this
> communicator will now abort,
> [labcmp6:18355] ***and potentially your MPI job)
> [labcmp6:18352] *** An error occurred in MPI_Init
> [labcmp6:18352] *** reported by process [1677936713,12]
> [labcmp6:18352] *** on a NULL communicator
> [labcmp6:18352] *** Unknown error
> [labcmp6:18352] *** MPI_ERRORS_ARE_FATAL (processes in this
> communicator will now abort,
> [labcmp6:18352] ***and potentially your MPI job)
> [labcmp6:18354] *** An error occurred in MPI_Init
> [labcmp6:18354] *** reported by process [140726281390153,14]
> [labcmp6:18354] *** on a NULL communicator
> [labcmp6:18354] *** Unknown error
> [labcmp6:18354] *** MPI_ERRORS_ARE_FATAL (processes in this
> communicator will now abort,
> [labcmp6:18354] ***and potentially your MPI job)
> srun: Job step aborted: Waiting up to 32 seconds for job step to finish.
> slurmstepd: error: *** STEP 24.0 ON labcmp3 CANCELLED AT 2019-01-18T20:03:33 
> ***
> [labcmp5:18358] PMIX ERROR: NOT-SUPPORTED in file
> event/pmix_event_registration.c at line 101
> --
> It looks like MPI_INIT failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during MPI_INIT; some of which are due to configuration or environment
> problems.  This failure appears to be an internal failure; here's some
> additional information (which may only be relevant to an Open MPI
> developer):
> 
>  ompi_interlib_declare
>  --> Returned "Would block" (-10) instead of "Success" (0)
> --
> [labcmp5:18357] PMIX ERROR: NOT-SUPPORTED in file
> event/pmix_event_registration.c at line 101
> [labcmp5:18356] PMIX ERROR: NOT-SUPPORTED in file
> event/pmix_event_registration.c at line 101
> srun: error: labcmp6: tasks 12-15: Exited with exit code 1
> srun: error: labcmp3: tasks 0-3: Killed
> srun: error: labcmp4: tasks 4-7: Killed
> srun: error: labcmp5: tasks 8-11: Killed
> ___
> users mailing list
> users@lists.open-mpi.org
> https://lists.open-mpi.org/mailman/listinfo/users

___
users mailing list
users@lists.open-mpi.org
https://lists.open-mpi.org/mailman/listinfo/users

Re: [OMPI users] Suppress mpirun exit error chatter

2019-01-06 Thread Ralph H Castain
Afraid not. What it says is actually accurate - it didn’t say the application 
called “abort”. It says that the job was aborted. There is a very different 
message when the application itself calls MPI_Abort.
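(For example, that different message would only appear if a rank explicitly 
called something like MPI_Abort(MPI_COMM_WORLD, 1).)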



> On Jan 6, 2019, at 1:19 PM, Jeff Wentworth via users 
>  wrote:
> 
> Hi everyone,
> 
> Is there any way to suppress this bit of mpirun error chatter as shown in the 
> example below?  Since the code (a.out) already issues an error message and 
> error return status, I was hoping to leave it at that.  Not a show stopper by 
> any means, but was just wondering if maybe there was some other setting 
> available.  Also, the code in question gracefully exited and did not call 
> MPI_Abort(), so the portion of the mpirun message mentioning an abort could 
> confuse end users.
> 
> Thanks.
> 
> % mpirun -q -np 2 ./a.out blort.txt
> a.out: blort.txt: No such file or directory
> 
> ---
> Primary job  terminated normally, but 1 process returned
> a non-zero exit code. Per user-direction, the job has been aborted.
> ---
> --
> ___
> users mailing list
> users@lists.open-mpi.org
> https://lists.open-mpi.org/mailman/listinfo/users

___
users mailing list
users@lists.open-mpi.org
https://lists.open-mpi.org/mailman/listinfo/users

Re: [OMPI users] open-mpi.org is DOWN

2018-12-23 Thread Ralph H Castain
The security scanner has apologized for a false positive and fixed their system 
- the site has been restored.

Ralph


> On Dec 22, 2018, at 12:12 PM, Ralph H Castain  wrote:
> 
> Hello all
> 
> Apologies to everyone, but I received an alert this morning that malware has 
> been detected on the www.open-mpi.org site. I have tried to contact the 
> hosting agency and the security scanners, but nobody is around on this 
> pre-holiday weekend.
> 
> Accordingly, I have taken the site OFFLINE for the indeterminate future until 
> we can get this resolved. Sadly, with the holidays upon us, I don’t know how 
> long it will take to get responses from either company. Until we do, the site 
> will remain offline for safety reasons.
> 
> Ralph
> 
> ___
> users mailing list
> users@lists.open-mpi.org
> https://lists.open-mpi.org/mailman/listinfo/users

___
users mailing list
users@lists.open-mpi.org
https://lists.open-mpi.org/mailman/listinfo/users

[OMPI users] open-mpi.org is DOWN

2018-12-22 Thread Ralph H Castain
Hello all

Apologies to everyone, but I received an alert this morning that malware has 
been detected on the www.open-mpi.org site. I have tried to contact the hosting 
agency and the security scanners, but nobody is around on this pre-holiday 
weekend.

Accordingly, I have taken the site OFFLINE for the indeterminate future until 
we can get this resolved. Sadly, with the holidays upon us, I don’t know how 
long it will take to get responses from either company. Until we do, the site 
will remain offline for safety reasons.

Ralph

___
users mailing list
users@lists.open-mpi.org
https://lists.open-mpi.org/mailman/listinfo/users

Re: [OMPI users] singularity support

2018-12-12 Thread Ralph H Castain
FWIW: we also automatically detect that the application is a singularity 
container and do the right stuff
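
(For context - not from this thread - a containerized launch typically looks 
something like "mpirun -np 2 singularity exec ./my_container.sif ./a.out", 
where the image and binary names are placeholders.)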


> On Dec 12, 2018, at 12:25 AM, Gilles Gouaillardet  wrote:
> 
> My understanding is that MPI tasks will be launched inside a singularity 
> container.
> 
> In a typical environment, mpirun spawns an instance of the orted on each 
> node, and then each orted daemon (or mpirun on the local node) fork&exec the 
> MPI tasks (a.out)
> 
> 
> With singularity, orted would fork&exec a container running a.out
> 
> (not sure if it is one container per task, or one container per node)
> 
> 
> Cheers,
> 
> 
> Gilles
> 
> On 12/10/2018 5:00 AM, Johnson, Glenn P wrote:
>> Greetings,
>> 
>> Could someone explain, or point to documentation that explains, what the 
>> --with-singularity option enables with regard to OpenMPI support for 
>> singularity containers?
>> 
>> Thanks.
>> 
>> Glenn Johnson
>> 
>> ___
>> users mailing list
>> users@lists.open-mpi.org
>> https://lists.open-mpi.org/mailman/listinfo/users
> ___
> users mailing list
> users@lists.open-mpi.org
> https://lists.open-mpi.org/mailman/listinfo/users

___
users mailing list
users@lists.open-mpi.org
https://lists.open-mpi.org/mailman/listinfo/users


Re: [OMPI users] Issue with MPI_Init in MPI_Comm_Spawn

2018-11-29 Thread Ralph H Castain
I ran a simple spawn test - you can find it in the OMPI code at 
orte/test/mpi/simple_spawn.c - and it worked fine:
$ mpirun -n 2 ./simple_spawn
[1858076673:0 pid 19909] starting up on node Ralphs-iMac-2.local!
[1858076673:1 pid 19910] starting up on node Ralphs-iMac-2.local!
1 completed MPI_Init
Parent [pid 19910] about to spawn!
0 completed MPI_Init
Parent [pid 19909] about to spawn!
[1858076674:0 pid 19911] starting up on node Ralphs-iMac-2.local!
[1858076674:1 pid 19912] starting up on node Ralphs-iMac-2.local!
[1858076674:2 pid 19913] starting up on node Ralphs-iMac-2.local!
Parent done with spawn
Parent sending message to child
Parent done with spawn
2 completed MPI_Init
Hello from the child 2 of 3 on host Ralphs-iMac-2.local pid 19913
1 completed MPI_Init
Hello from the child 1 of 3 on host Ralphs-iMac-2.local pid 19912
0 completed MPI_Init
Hello from the child 0 of 3 on host Ralphs-iMac-2.local pid 19911
Child 0 received msg: 38
Parent disconnected
Parent disconnected
Child 0 disconnected
Child 1 disconnected
Child 2 disconnected
19910: exiting
19911: exiting
19912: exiting
19913: exiting
19909: exiting
$

I then ran our spawn_multiple test - again, you can find it at 
orte/test/mpi/spawn_multiple.c:
$ mpirun -n 2 ./spawn_multiple
Parent [pid 19946] about to spawn!
Parent [pid 19947] about to spawn!
Parent done with spawn
Parent sending message to children
Parent done with spawn
Hello from the child 1 of 2 on host Ralphs-iMac-2.local pid 19949: argv[1] = bar
Hello from the child 0 of 2 on host Ralphs-iMac-2.local pid 19948: argv[1] = foo
Child 0 received msg: 38
Child 1 received msg: 38
Parent disconnected
Child 1 disconnected
Child 0 disconnected
Parent disconnected
$

How did you configure OMPI, and how were you running your example?


> On Nov 28, 2018, at 9:33 AM, Kiker, Kathleen R  
> wrote:
> 
> Good Afternoon, 
>  
> I’m trying to diagnose an issue I’ve been having with MPI_Comm_Spawn. When I 
> run the simple example program:
>  
> #include "mpi.h"
> #include <stdio.h>
> #include <stdlib.h>
>  
> int main( int argc, char *argv[] )
> {
> int np[2] = { 1, 1 };
> int errcodes[2];
> MPI_Comm parentcomm, intercomm;
> char *cmds[2] = { "spawn_example", "spawn_example" };
> MPI_Info infos[2] = { MPI_INFO_NULL, MPI_INFO_NULL };
>  
> MPI_Init( &argc, &argv );
> MPI_Comm_get_parent( &parentcomm );
> if (parentcomm == MPI_COMM_NULL)
> {
> /* Create 2 more processes - this example must be called 
> spawn_example.exe for this to work. */
> MPI_Comm_spawn_multiple( 2, cmds, MPI_ARGVS_NULL, np, infos, 0, 
> MPI_COMM_WORLD, &intercomm, errcodes );
> printf("I'm the parent.\n");
> }
> else
> {
> printf("I'm the spawned.\n");
> }
> fflush(stdout);
> MPI_Finalize();
> return 0;
> }
>  
> I get the output:
>  
> --
> It looks like MPI_INIT failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during MPI_INIT; some of which are due to configuration or environment
> problems.  This failure appears to be an internal failure; here's some
> additional information (which may only be relevant to an Open MPI
> developer):
>  
>   ompi_dpm_dyn_init() failed
>   --> Returned "Unreachable" (-12) instead of "Success" (0)
> --
>  
> I’m using OpenMPI 3.1.1. I know past versions (like 2.x) had a similar issue, 
> but I believe those were fixed by this version. Is there something else that 
> can cause this?
>  
> Thank you, 
> Kathleen
> ___
> users mailing list
> users@lists.open-mpi.org 
> https://lists.open-mpi.org/mailman/listinfo/users 
> 
___
users mailing list
users@lists.open-mpi.org
https://lists.open-mpi.org/mailman/listinfo/users

Re: [OMPI users] OpenMPI2 + slurm

2018-11-23 Thread Ralph H Castain
Couple of comments. Your original cmd line:

>>   srun -n 2 mpirun MPI-hellow

tells srun to launch two copies of mpirun, each of which is to run as many 
processes as there are slots assigned to the allocation. srun will get an 
allocation of two slots, and so you’ll get two concurrent MPI jobs, each 
consisting of two procs.

Your other cmd line:

>>srun -c 2 mpirun -np 2 MPI-hellow

told srun to get two slots but only run one copy (the default value of the -n 
option) of mpirun, and you told mpirun to launch two procs. So you got one job 
consisting of two procs.

What you probably want to do is what Gilles advised. However, Slurm 16.05 only 
supports PMIx v1, so you’d want to download and build PMIx v1.2.5, and then 
build Slurm against it. OMPI v2.0.2 may have a slightly older copy of PMIx in 
it (I honestly don’t remember) - to be safe, it would be best to configure OMPI 
to use the 1.2.5 you installed for Slurm. You’ll also be required to build OMPI 
against an external copy of libevent and hwloc to ensure OMPI is linked against 
the same versions used by PMIx.

Or you can just build OMPI against the Slurm PMI library - up to you.
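
As a rough sketch of the first route (prefixes are placeholders; the flags are 
illustrative beyond those discussed above):

  ./configure --prefix=/opt/pmix/1.2.5                            # PMIx v1.2.5
  ./configure --prefix=/opt/slurm --with-pmix=/opt/pmix/1.2.5     # Slurm 16.05
  ./configure --prefix=/opt/ompi --with-pmix=/opt/pmix/1.2.5 \
      --with-libevent=external --with-hwloc=external              # Open MPI 2.0.2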

Ralph


> On Nov 23, 2018, at 2:31 AM, Gilles Gouaillardet 
>  wrote:
> 
> Lothar,
> 
> it seems you did not configure Open MPI with --with-pmi=
> 
> If SLURM was built with PMIx support, then an other option is to use that.
> First, srun --mpi=list will show you the list of available MPI
> modules, and then you could
> srun --mpi=pmix_v2 ... MPI_Hellow
> If you believe that should be the default, then you should contact
> your sysadmin that can make that for you.
> 
> You you want to use PMIx, then I recommend you configure Open MPI with
> the same external PMIx that was used to
> build SLURM (e.g. configure --with-pmix=). Though PMIx
> has cross version support, using the same PMIx will avoid you running
> incompatible PMIx versions.
> 
> 
> Cheers,
> 
> Gilles
> On Fri, Nov 23, 2018 at 5:20 PM Lothar Brendel
>  wrote:
>> 
>> Hi guys,
>> 
>> I've always been somewhat at a loss regarding slurm's idea about tasks vs. 
>> jobs. That didn't cause any problems, though, until passing to OpenMPI2 
>> (2.0.2 that is, with slurm 16.05.9).
>> 
>> Running http://mpitutorial.com/tutorials/mpi-hello-world as an example with 
>> just
>> 
>>srun -n 2 MPI-hellow
>> 
>> yields
>> 
>> Hello world from processor node31, rank 0 out of 1 processors
>> Hello world from processor node31, rank 0 out of 1 processors
>> 
>> i.e. the two tasks don't see each other MPI-wise. Well, srun doesn't include 
>> an mpirun.
>> 
>> But running
>> 
>>srun -n 2 mpirun MPI-hellow
>> 
>> produces
>> 
>> Hello world from processor node31, rank 1 out of 2 processors
>> Hello world from processor node31, rank 0 out of 2 processors
>> Hello world from processor node31, rank 1 out of 2 processors
>> Hello world from processor node31, rank 0 out of 2 processors
>> 
>> i.e. I get *two* independent MPI-tasks with 2 processors each. (The same 
>> applies if stating explicitly "mpirun -np 2".)
>> I never could make sense of this squaring, I rather used to run my jobs like
>> 
>>srun -c 2 mpirun -np 2 MPI-hellow
>> 
>> which provided the desired job with *one* task using 2 processors. Passing 
>> from OpenMPI 1.6.5 to 2.0.2 (Debian Jessie -> Stretch), though, I'm getting 
>> the error
>> "There are not enough slots available in the system to satisfy the 2 slots
>> that were requested by the application:
>>  MPI-hellow" now.
>> 
>> The environment on the node contains
>> 
>> SLURM_CPUS_ON_NODE=2
>> SLURM_CPUS_PER_TASK=2
>> SLURM_JOB_CPUS_PER_NODE=2
>> SLURM_NTASKS=1
>> SLURM_TASKS_PER_NODE=1
>> 
>> which looks fine to me, but mpirun infers slots=1 from that (confirmed by 
>> ras_base_verbose 5). In deed, looking into 
>> orte/mca/ras/slurm/ras_slurm_module.c, I find that while 
>> orte_ras_slurm_allocate() reads the value of SLURM_CPUS_PER_TASK into its 
>> local variable cpus_per_task, it doesn't use it anywhere. Rather, the number 
>> of slots is determined from SLURM_TASKS_PER_NODE.
>> 
>> Is this intended behaviour?
>> 
>> What's wrong here? I know that I can use --oversubscribe, but that seems 
>> rather a workaround.
>> 
>> Thanks in advance,
>>Lothar
>> ___
>> users mailing list
>> users@lists.open-mpi.org
>> https://lists.open-mpi.org/mailman/listinfo/users
> ___
> users mailing list
> users@lists.open-mpi.org
> https://lists.open-mpi.org/mailman/listinfo/users

___
users mailing list
users@lists.open-mpi.org
https://lists.open-mpi.org/mailman/listinfo/users

Re: [OMPI users] OMPI 3.1.x, PMIx, SLURM, and mpiexec/mpirun

2018-11-12 Thread Ralph H Castain
mpirun should definitely still work in parallel with srun - they aren’t 
mutually exclusive. OMPI 3.1.2 contains PMIx v2.1.3.

The problem here is that you built Slurm against PMIx v2.0.2, which is not 
cross-version capable. You can see the cross-version situation here: 
https://pmix.org/support/faq/how-does-pmix-work-with-containers/

Your options would be to build OMPI against the same PMIx 2.0.2 you used for 
Slurm, or update the PMIx version you used for Slurm to something that can 
support cross-version operations.
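(Concretely, the first option would simply mean pointing --with-pmix at the same 
/opt/pmix/2.0.2 install in the Open MPI configure line you quoted below.)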

Ralph


> On Nov 11, 2018, at 5:21 PM, Bennet Fauber  wrote:
> 
> I have been having some difficulties getting the right combination of SLURM, 
> PMIx, and OMPI 3.1.x (specifically 3.1.2) to compile in such a way that both 
> the srun method of starting jobs and mpirun/mpiexec will also run.
> 
> If someone has a slurm 18.08 or newer, PMIx, and OMPI 3.x that works with 
> both srun and mpirun and wouldn't mind sending me the version numbers and any 
> tips for getting this to work, I would appreciate it.
> 
> Should mpirun still work?  If that is just off the table and I missed the 
> memo, please let me know.
> 
> I'm asking for both because of programs like OpenFOAM and others where mpirun 
> is built into the application.  I have OMPI 1.10.7 built with similar flags, 
> and it seems to work.
> 
> [bennet@beta-build mpi_example]$ srun ./test_mpi
> The sum = 0.866386
> Elapsed time is:  0.000458
> 
> [bennet@beta-build mpi_example]$ mpirun ./test_mpi
> The sum = 0.866386
> Elapsed time is:  0.000295
> 
> SLURM documentation doesn't seem to list a recommended PMIx that I can find. 
> I can't find where the version of PMIx that is bundled with OMPI is 
> specified.
> 
> I have slurm 18.08.0, which is built against pmix-2.0.2.  We settled on that 
> version with SLURM 17.something prior to SLURM supporting PMIx 2.1.  Is OMPI 
> 3.1.2 balking at too old a PMIx?
> 
> Sorry to be so at sea.
> 
> I built OMPI with
> 
> ./configure \
> --prefix=${PREFIX} \
> --mandir=${PREFIX}/share/man \
> --with-pmix=/opt/pmix/2.0.2 \
> --with-libevent=external \
> --with-hwloc=external \
> --with-slurm \
> --with-verbs \
> --disable-dlopen --enable-shared \
> CC=gcc CXX=g++ FC=gfortran
> 
> I have a simple test program, and it runs with
> 
> [bennet@beta-build mpi_example]$ srun ./test_mpi
> The sum = 0.866386
> Elapsed time is:  0.000573
> 
> but, on a login node, where I just want a few processors on the local node, 
> not to run on the compute nodes of the cluster, mpirun fails with
> 
> [bennet@beta-build mpi_example]$ mpirun -np 2 ./test_mpi
> [beta-build.stage.arc-ts.umich.edu:102541] [[13610,1],0] ORTE_ERROR_LOG: Not found in file base/ess_base_std_app.c at line 219
> [beta-build.stage.arc-ts.umich.edu:102542] [[13610,1],1] ORTE_ERROR_LOG: Not found in file base/ess_base_std_app.c at line 219
> --
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems.  This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
> 
>   store DAEMON URI failed
>   --> Returned value Not found (-13) instead of ORTE_SUCCESS
> --
> [beta-build.stage.arc-ts.umich.edu:102541] [[13610,1],0] ORTE_ERROR_LOG: Not found in file ess_pmi_module.c at line 401
> [beta-build.stage.arc-ts.umich.edu:102542] [[13610,1],1] ORTE_ERROR_LOG: Not found in file ess_pmi_module.c at line 401
> --
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems.  This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
> 
>   orte_ess_init failed
>   --> Returned value Not found (-13) instead of ORTE_SUCCESS
> --
> --
> It looks like MPI_INIT failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during MPI_INIT; some of which are due to configuration or environment
> problems.  This failure appears to be an internal failure; here's some
> additional inform

Re: [OMPI users] Bug with Open-MPI Processor Count

2018-11-01 Thread Ralph H Castain
Hmmm - try adding a value for nprocs instead of leaving it blank. Say “-np 7”
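(i.e., with the command quoted below, add "-np 7" to the mpirun options ahead of 
IMB-MPI1.)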

Sent from my iPhone

> On Nov 1, 2018, at 11:56 AM, Adam LeBlanc  wrote:
> 
> Hello Ralph,
> 
> Here is the output for a failing machine:
> 
> [130_02:44:13_aleblanc@farbauti]{~}$ > mpirun --mca 
> btl_openib_warn_no_device_params_found 0 --mca orte_base_help_aggregate 0 
> --mca btl openib,vader,self --mca pml ob1 --mca btl_openib_receive_queues 
> P,65536,120,64,32 -hostfile /home/soesterreich/ce-mpi-hosts --mca 
> ras_base_verbose 5 IMB-MPI1
> 
> ==   ALLOCATED NODES   ==
>   farbauti: flags=0x11 slots=1 max_slots=0 slots_inuse=0 state=UP
>   hyperion-ce: flags=0x10 slots=1 max_slots=0 slots_inuse=0 state=UNKNOWN
>   io-ce: flags=0x10 slots=1 max_slots=0 slots_inuse=0 state=UNKNOWN
>   jarnsaxa-ce: flags=0x10 slots=1 max_slots=0 slots_inuse=0 state=UNKNOWN
>   rhea-ce: flags=0x10 slots=1 max_slots=0 slots_inuse=0 state=UNKNOWN
>   tarqeq-ce: flags=0x10 slots=1 max_slots=0 slots_inuse=0 state=UNKNOWN
>   tarvos-ce: flags=0x10 slots=1 max_slots=0 slots_inuse=0 state=UNKNOWN
> =
> --
> There are not enough slots available in the system to satisfy the 7 slots
> that were requested by the application:
>   10
> 
> Either request fewer slots for your application, or make more slots available
> for use.
> --
> 
> 
> Here is an output of a passing machine:
> 
> [1_02:54:26_aleblanc@hyperion]{~}$ > mpirun --mca 
> btl_openib_warn_no_device_params_found 0 --mca orte_base_help_aggregate 0 
> --mca btl openib,vader,self --mca pml ob1 --mca btl_openib_receive_queues 
> P,65536,120,64,32 -hostfile /home/soesterreich/ce-mpi-hosts --mca 
> ras_base_verbose 5 IMB-MPI1
> 
> ==   ALLOCATED NODES   ==
>   hyperion: flags=0x11 slots=1 max_slots=0 slots_inuse=0 state=UP
>   farbauti-ce: flags=0x10 slots=1 max_slots=0 slots_inuse=0 state=UNKNOWN
>   io-ce: flags=0x10 slots=1 max_slots=0 slots_inuse=0 state=UNKNOWN
>   jarnsaxa-ce: flags=0x10 slots=1 max_slots=0 slots_inuse=0 state=UNKNOWN
>   rhea-ce: flags=0x10 slots=1 max_slots=0 slots_inuse=0 state=UNKNOWN
>   tarqeq-ce: flags=0x10 slots=1 max_slots=0 slots_inuse=0 state=UNKNOWN
>   tarvos-ce: flags=0x10 slots=1 max_slots=0 slots_inuse=0 state=UNKNOWN
> =
> 
> 
> Yes the hostfile is available on all nodes through an NFS mount for all of 
> our home directories.
> 
>> On Thu, Nov 1, 2018 at 2:44 PM Adam LeBlanc  wrote:
>> 
>> 
>> -- Forwarded message -
>> From: Ralph H Castain 
>> Date: Thu, Nov 1, 2018 at 2:34 PM
>> Subject: Re: [OMPI users] Bug with Open-MPI Processor Count
>> To: Open MPI Users 
>> 
>> 
>> I’m a little under the weather and so will only be able to help a bit at a 
>> time. However, a couple of things to check:
>> 
>> * add -mca ras_base_verbose 5 to the cmd line to see what mpirun thought the 
>> allocation was
>> 
>> * is the hostfile available on every node?
>> 
>> Ralph
>> 
>>> On Nov 1, 2018, at 10:55 AM, Adam LeBlanc  wrote:
>>> 
>>> Hello Ralph,
>>> 
>>> Attached below is the verbose output for a failing machine and a passing 
>>> machine.
>>> 
>>> Thanks,
>>> Adam LeBlanc
>>> 
>>>> On Thu, Nov 1, 2018 at 1:41 PM Adam LeBlanc  wrote:
>>>> 
>>>> 
>>>> -- Forwarded message -
>>>> From: Ralph H Castain 
>>>> Date: Thu, Nov 1, 2018 at 1:07 PM
>>>> Subject: Re: [OMPI users] Bug with Open-MPI Processor Count
>>>> To: Open MPI Users 
>>>> 
>>>> 
>>>> Set rmaps_base_verbose=10 for debugging output 
>>>> 
>>>> Sent from my iPhone
>>>> 
>>>>> On Nov 1, 2018, at 9:31 AM, Adam LeBlanc  wrote:
>>>>> 
>>>>> The version by the way for Open-MPI is 3.1.2.
>>>>> 
>>>>> -Adam LeBlanc
>>>>> 
>>>>>> On Thu, Nov 1, 2018 at 12:05 PM Adam LeBlanc  
>>>>>> wrote:
>>>>>> Hello,
>>>>>> 
>>>>>> I am an employee of the UNH InterOperability Lab, and we are in the 
>>>>>> process of testing OFED-4.17-R

Re: [OMPI users] Bug with Open-MPI Processor Count

2018-11-01 Thread Ralph H Castain
I’m a little under the weather and so will only be able to help a bit at a 
time. However, a couple of things to check:

* add -mca ras_base_verbose 5 to the cmd line to see what mpirun thought the 
allocation was

* is the hostfile available on every node?

Ralph

> On Nov 1, 2018, at 10:55 AM, Adam LeBlanc  wrote:
> 
> Hello Ralph,
> 
> Attached below is the verbose output for a failing machine and a passing 
> machine.
> 
> Thanks,
> Adam LeBlanc
> 
> On Thu, Nov 1, 2018 at 1:41 PM Adam LeBlanc wrote:
> 
> 
> ------ Forwarded message -
> From: Ralph H Castain <r...@open-mpi.org>
> Date: Thu, Nov 1, 2018 at 1:07 PM
> Subject: Re: [OMPI users] Bug with Open-MPI Processor Count
> To: Open MPI Users <users@lists.open-mpi.org>
> 
> 
> Set rmaps_base_verbose=10 for debugging output 
> 
> Sent from my iPhone
> 
> On Nov 1, 2018, at 9:31 AM, Adam LeBlanc <alebl...@iol.unh.edu> wrote:
> 
>> The version by the way for Open-MPI is 3.1.2.
>> 
>> -Adam LeBlanc
>> 
>> On Thu, Nov 1, 2018 at 12:05 PM Adam LeBlanc <alebl...@iol.unh.edu> wrote:
>> Hello,
>> 
>> I am an employee of the UNH InterOperability Lab, and we are in the process 
>> of testing OFED-4.17-RC1 for the OpenFabrics Alliance. We have purchased 
>> some new hardware that has one processor, and noticed an issue when running 
>> mpi jobs on nodes that do not have similar processor counts. If we launch 
>> the MPI job from a node that has 2 processors, it will fail, stating 
>> there are not enough resources, and will not start the run, like so:
>> 
>> --
>> There are not enough slots available in the system to satisfy the 14 slots
>> that were requested by the application:
>>   IMB-MPI1
>> 
>> Either request fewer slots for your application, or make more slots available
>> for use.
>> --
>> 
>> If we launch the MPI job from the node with one processor, without changing 
>> the mpirun command at all, it runs as expected.
>> 
>> Here is the command being run:
>> 
>> mpirun --mca btl_openib_warn_no_device_params_found 0 --mca 
>> orte_base_help_aggregate 0 --mca btl openib,vader,self --mca pml ob1 --mca 
>> btl_openib_receive_queues P,65536,120,64,32 -hostfile 
>> /home/soesterreich/ce-mpi-hosts IMB-MPI1
>> 
>> Here is the hostfile being used:
>> 
>> farbauti-ce.ofa.iol.unh.edu slots=1
>> hyperion-ce.ofa.iol.unh.edu slots=1
>> io-ce.ofa.iol.unh.edu slots=1
>> jarnsaxa-ce.ofa.iol.unh.edu slots=1
>> rhea-ce.ofa.iol.unh.edu slots=1
>> tarqeq-ce.ofa.iol.unh.edu slots=1
>> tarvos-ce.ofa.iol.unh.edu slots=1
>> 
>> This seems like a bug and we would like some help to explain and fix what is 
>> happening. The IBTA plugfest saw similar behaviours, so this should be 
>> reproducible.
>> 
>> Thanks,
>> Adam LeBlanc
>> ___
>> users mailing list
>> users@lists.open-mpi.org
>> https://lists.open-mpi.org/mailman/listinfo/users
> ___
> users mailing list
> users@lists.open-mpi.org
> https://lists.open-mpi.org/mailman/listinfo/users
> ___
> users mailing list
> users@lists.open-mpi.org
> https://lists.open-mpi.org/mailman/listinfo/users

___
users mailing list
users@lists.open-mpi.org
https://lists.open-mpi.org/mailman/listinfo/users

Re: [OMPI users] Bug with Open-MPI Processor Count

2018-11-01 Thread Ralph H Castain
Set rmaps_base_verbose=10 for debugging output 

Sent from my iPhone

> On Nov 1, 2018, at 9:31 AM, Adam LeBlanc  wrote:
> 
> The version by the way for Open-MPI is 3.1.2.
> 
> -Adam LeBlanc
> 
>> On Thu, Nov 1, 2018 at 12:05 PM Adam LeBlanc  wrote:
>> Hello,
>> 
>> I am an employee of the UNH InterOperability Lab, and we are in the process 
>> of testing OFED-4.17-RC1 for the OpenFabrics Alliance. We have purchased 
>> some new hardware that has one processor, and noticed an issue when running 
>> mpi jobs on nodes that do not have similar processor counts. If we launch 
>> the MPI job from a node that has 2 processors, it will fail, stating 
>> there are not enough resources, and will not start the run, like so:
>> 
>> --
>> There are not enough slots available in the system to satisfy the 14 slots
>> that were requested by the application:
>>   IMB-MPI1
>> 
>> Either request fewer slots for your application, or make more slots available
>> for use.
>> --
>> 
>> If we launch the MPI job from the node with one processor, without changing 
>> the mpirun command at all, it runs as expected.
>> 
>> Here is the command being run:
>> 
>> mpirun --mca btl_openib_warn_no_device_params_found 0 --mca 
>> orte_base_help_aggregate 0 --mca btl openib,vader,self --mca pml ob1 --mca 
>> btl_openib_receive_queues P,65536,120,64,32 -hostfile 
>> /home/soesterreich/ce-mpi-hosts IMB-MPI1
>> 
>> Here is the hostfile being used:
>> 
>> farbauti-ce.ofa.iol.unh.edu slots=1
>> hyperion-ce.ofa.iol.unh.edu slots=1
>> io-ce.ofa.iol.unh.edu slots=1
>> jarnsaxa-ce.ofa.iol.unh.edu slots=1
>> rhea-ce.ofa.iol.unh.edu slots=1
>> tarqeq-ce.ofa.iol.unh.edu slots=1
>> tarvos-ce.ofa.iol.unh.edu slots=1
>> 
>> This seems like a bug and we would like some help to explain and fix what is 
>> happening. The IBTA plugfest saw similar behaviours, so this should be 
>> reproducible.
>> 
>> Thanks,
>> Adam LeBlanc
> ___
> users mailing list
> users@lists.open-mpi.org
> https://lists.open-mpi.org/mailman/listinfo/users
___
users mailing list
users@lists.open-mpi.org
https://lists.open-mpi.org/mailman/listinfo/users

[OMPI users] SC'18 PMIx BoF meeting

2018-10-15 Thread Ralph H Castain
Hello all

[I’m sharing this on the OMPI mailing lists (as well as the PMIx one) as PMIx 
has become tightly integrated to the OMPI code since v2.0 was released]

The PMIx Community will once again be hosting a Birds-of-a-Feather meeting at 
SuperComputing. This year, however, will be a little different! PMIx has come a 
long, long way over the last four years, and we are starting to see 
application-level adoption of the various APIs. Accordingly, we will be 
devoting most of this year’s meeting to a tutorial-like review of several 
use-cases, including:

* fault-tolerant OpenSHMEM implementation
* interlibrary resource coordination using OpenMP and MPI
* population modeling and swarm intelligence models running natively in an HPC 
environment
* use of the PMIx_Query interface

The meeting has been shifted to Wed night, 5:15-6:45pm, in room C144. Please 
share this with others who you feel might be interested, and do plan to attend!
Ralph

___
users mailing list
users@lists.open-mpi.org
https://lists.open-mpi.org/mailman/listinfo/users

Re: [OMPI users] opal_pmix_base_select failed for master and 4.0.0

2018-10-12 Thread Ralph H Castain
Hi Siegmar

The patch was merged into the v4.0.0 branch on Oct 10th, so should be available 
in the nightly tarball from that date onward.


> On Oct 6, 2018, at 2:12 AM, Siegmar Gross 
>  wrote:
> 
> Hi Jeff, hi Ralph,
> 
> Great, it works again! Thank you very much for your help. I'm really happy,
> if the undefined references for Sun C are resolved and there are no new
> problems for that compiler :-)). Do you know when the pmix patch will be
> integrated into version 4.0.0?
> 
> 
> Best regards
> 
> Siegmar
> 
> 
> On 10/5/18 4:33 PM, Jeff Squyres (jsquyres) via users wrote:
>> Oops!  We had a typo in yesterday's fix -- fixed:
>> https://github.com/open-mpi/ompi/pull/5847
>> Ralph also put double extra super protection to make triple sure that this 
>> error can't happen again in:
>> https://github.com/open-mpi/ompi/pull/5846
>> Both of these should be in tonight's nightly snapshot.
>> Thank you!
>>> On Oct 5, 2018, at 5:45 AM, Ralph H Castain  wrote:
>>> 
>>> Please send Jeff and me the opal/mca/pmix/pmix4x/pmix/config.log again - 
>>> we’ll need to see why it isn’t building. The patch definitely is not in the 
>>> v4.0 branch, but it should have been in master.
>>> 
>>> 
>>>> On Oct 5, 2018, at 2:04 AM, Siegmar Gross 
>>>>  wrote:
>>>> 
>>>> Hi Ralph, hi Jeff,
>>>> 
>>>> 
>>>> On 10/3/18 8:14 PM, Ralph H Castain wrote:
>>>>> Jeff and I talked and believe the patch in 
>>>>> https://github.com/open-mpi/ompi/pull/5836 should fix the problem.
>>>> 
>>>> 
>>>> Today I've installed openmpi-master-201810050304-5f1c940 and
>>>> openmpi-v4.0.x-201810050241-c079666. Unfortunately, I still get the
>>>> same error for all seven versions that I was able to build.
>>>> 
>>>> loki hello_1 114 mpicc --showme
>>>> gcc -I/usr/local/openmpi-master_64_gcc/include -fexceptions -pthread 
>>>> -std=c11 -m64 -Wl,-rpath -Wl,/usr/local/openmpi-master_64_gcc/lib64 
>>>> -Wl,--enable-new-dtags -L/usr/local/openmpi-master_64_gcc/lib64 -lmpi
>>>> 
>>>> loki hello_1 115 ompi_info | grep "Open MPI repo revision"
>>>>  Open MPI repo revision: v2.x-dev-6262-g5f1c940
>>>> 
>>>> loki hello_1 116 mpicc hello_1_mpi.c
>>>> 
>>>> loki hello_1 117 mpiexec -np 2 a.out
>>>> [loki:25575] [[64603,0],0] ORTE_ERROR_LOG: Not found in file 
>>>> ../../../../../openmpi-master-201810050304-5f1c940/orte/mca/ess/hnp/ess_hnp_module.c
>>>>  at line 320
>>>> --
>>>> It looks like orte_init failed for some reason; your parallel process is
>>>> likely to abort.  There are many reasons that a parallel process can
>>>> fail during orte_init; some of which are due to configuration or
>>>> environment problems.  This failure appears to be an internal failure;
>>>> here's some additional information (which may only be relevant to an
>>>> Open MPI developer):
>>>> 
>>>>  opal_pmix_base_select failed
>>>>  --> Returned value Not found (-13) instead of ORTE_SUCCESS
>>>> --
>>>> loki hello_1 118
>>>> 
>>>> 
>>>> I don't know, if you have already applied your suggested patch or if the
>>>> error message is still from a version without that patch. Do you need
>>>> anything else?
>>>> 
>>>> 
>>>> Best regards
>>>> 
>>>> Siegmar
>>>> 
>>>> 
>>>>>> On Oct 2, 2018, at 2:50 PM, Jeff Squyres (jsquyres) via users 
>>>>>>  wrote:
>>>>>> 
>>>>>> (Ralph sent me Siegmar's pmix config.log, which Siegmar sent to him 
>>>>>> off-list)
>>>>>> 
>>>>>> It looks like Siegmar passed --with-hwloc=internal.
>>>>>> 
>>>>>> Open MPI's configure understood this and did the appropriate things.
>>>>>> PMIX's configure didn't.
>>>>>> 
>>>>>> I think we need to add an adjustment into the PMIx configure.m4 in 
>>>>>> OMPI...
>>>>>> 
>>>>>> 
>>>>>>> On Oct 2, 2018, at 5:25 PM, Ralph H Castain  wrote

Re: [OMPI users] issue compiling openmpi 3.2.1 with pmi and slurm

2018-10-10 Thread Ralph H Castain
mix/flux mca/pmix/pmix2x 
> mca/pmix/s1 mca/pmix/s2'
> MCA_opal_pmix_STATIC_COMPONENTS=''
> MCA_opal_pmix_STATIC_LTLIBS=''
> MCA_opal_pmix_STATIC_SUBDIRS=''
> MCA_orte_ess_ALL_COMPONENTS=' env hnp pmi singleton tool alps lsf slurm tm'
> MCA_orte_ess_ALL_SUBDIRS=' mca/ess/env mca/ess/hnp mca/ess/pmi 
> mca/ess/singleton mca/ess/tool mca/ess/alps mca/ess/lsf mca/ess/slurm 
> mca/ess/tm'
> MCA_orte_ess_DSO_COMPONENTS=' env hnp pmi singleton tool slurm'
> MCA_orte_ess_DSO_SUBDIRS=' mca/ess/env mca/ess/hnp mca/ess/pmi 
> mca/ess/singleton mca/ess/tool mca/ess/slurm'
> OPAL_CONFIGURE_CLI=' \'\''--prefix=/usr/local/\'\'' \'\''--with-cuda\'\'' 
> \'\''--with-slurm\'\'' \'\''--with-pmi=/usr/local/slurm/include/slurm\'\'' 
> \'\''--with-pmi-libdir=/usr/local/slurm/lib64\'\'''
> opal_pmi1_CPPFLAGS=''
> opal_pmi1_LDFLAGS=''
> opal_pmi1_LIBS='-lpmi'
> opal_pmi1_rpath=''
> opal_pmi2_CPPFLAGS=''
> opal_pmi2_LDFLAGS=''
> opal_pmi2_LIBS='-lpmi2'
> opal_pmi2_rpath=''
> opal_pmix_ext1x_CPPFLAGS=''
> opal_pmix_ext1x_LDFLAGS=''
> opal_pmix_ext1x_LIBS=''
> opal_pmix_ext2x_CPPFLAGS=''
> opal_pmix_ext2x_LDFLAGS=''
> opal_pmix_ext2x_LIBS=''
> opal_pmix_pmix2x_CPPFLAGS='-I/usr/local/src/openmpi/openmpi-3.1.2/opal/mca/pmix/pmix2x/pmix/include
>  -I/usr/local/src/openmpi/openmpi-3.1.2/opal/mca/pmix/pmix2x/pmix 
> -I/usr/local/src/openmpi/openmpi-3.1.2/opal/mca/pmix/pmix2x/pmix/include 
> -I/usr/local/src/openmpi/openmpi-3.1.2/opal/mca/pmix/pmix2x/pmix'
> opal_pmix_pmix2x_DEPENDENCIES='/usr/local/src/openmpi/openmpi-3.1.2/opal/mca/pmix/pmix2x/pmix/src/libpmix.la'
> opal_pmix_pmix2x_LDFLAGS=''
> opal_pmix_pmix2x_LIBS='/usr/local/src/openmpi/openmpi-3.1.2/opal/mca/pmix/pmix2x/pmix/src/libpmix.la'
> pmix_alps_CPPFLAGS=''
> pmix_alps_LDFLAGS=''
> pmix_alps_LIBS=''
> pmix_cray_CPPFLAGS=''
> pmix_cray_LDFLAGS=''
> pmix_cray_LIBS=''
>  
> From: users  On Behalf Of Ralph H Castain
> Sent: Wednesday, October 10, 2018 11:26 AM
> To: Open MPI Users 
> Subject: Re: [OMPI users] issue compiling openmpi 3.2.1 with pmi and slurm
>  
> It appears that the CPPFLAGS isn’t getting set correctly as the component 
> didn’t find the Slurm PMI-1 header file. Perhaps it would help if we saw the 
> config.log output so we can see where OMPI thought the file was located.
>  
> 
> 
> On Oct 10, 2018, at 6:44 AM, Ross, Daniel B. via users wrote:
>  
> I have been able to configure without issue using the following options:
> ./configure --prefix=/usr/local/ --with-cuda --with-slurm 
> --with-pmi=/usr/local/slurm/include/slurm 
> --with-pmi-libdir=/usr/local/slurm/lib64
>  
> Everything compiles just fine until I get this error:
>  
> make[3]: Leaving directory 
> `/usr/local/src/openmpi/openmpi-3.1.2/opal/mca/pmix/pmix2x'
> make[2]: Leaving directory 
> `/usr/local/src/openmpi/openmpi-3.1.2/opal/mca/pmix/pmix2x'
> Making all in mca/pmix/s1
> make[2]: Entering directory 
> `/usr/local/src/openmpi/openmpi-3.1.2/opal/mca/pmix/s1'
>   CC   mca_pmix_s1_la-pmix_s1.lo
> pmix_s1.c:29:17: fatal error: pmi.h: No such file or directory
> #include 
>  ^
> compilation terminated.
> make[2]: *** [mca_pmix_s1_la-pmix_s1.lo] Error 1
> make[2]: Leaving directory 
> `/usr/local/src/openmpi/openmpi-3.1.2/opal/mca/pmix/s1'
> make[1]: *** [all-recursive] Error 1
> make[1]: Leaving directory `/usr/local/src/openmpi/openmpi-3.1.2/opal'
> make: *** [all-recursive] Error 1
>  
>  
> any ideas why I am getting this error?
> Thanks
>  

___
users mailing list
users@lists.open-mpi.org
https://lists.open-mpi.org/mailman/listinfo/users

Re: [OMPI users] issue compiling openmpi 3.2.1 with pmi and slurm

2018-10-10 Thread Ralph H Castain
It appears that the CPPFLAGS isn’t getting set correctly as the component 
didn’t find the Slurm PMI-1 header file. Perhaps it would help if we saw the 
config.log output so we can see where OMPI thought the file was located.
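
While gathering that, one hedged guess worth checking: --with-pmi usually wants
the installation prefix rather than the header directory itself. A configure
sketch along those lines (the /usr/local/slurm prefix is only inferred from the
paths in the report below):

  ./configure --prefix=/usr/local/ --with-cuda --with-slurm \
              --with-pmi=/usr/local/slurm \
              --with-pmi-libdir=/usr/local/slurm/lib64

If the s1 component still cannot find pmi.h after that, the config.log will show
which CPPFLAGS it actually ended up with.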


> On Oct 10, 2018, at 6:44 AM, Ross, Daniel B. via users 
>  wrote:
> 
> I have been able to configure without issue using the following options:
> ./configure --prefix=/usr/local/ --with-cuda --with-slurm 
> --with-pmi=/usr/local/slurm/include/slurm 
> --with-pmi-libdir=/usr/local/slurm/lib64
>  
> Everything compiles just fine until I get this error:
>  
> make[3]: Leaving directory 
> `/usr/local/src/openmpi/openmpi-3.1.2/opal/mca/pmix/pmix2x'
> make[2]: Leaving directory 
> `/usr/local/src/openmpi/openmpi-3.1.2/opal/mca/pmix/pmix2x'
> Making all in mca/pmix/s1
> make[2]: Entering directory 
> `/usr/local/src/openmpi/openmpi-3.1.2/opal/mca/pmix/s1'
>   CC   mca_pmix_s1_la-pmix_s1.lo
> pmix_s1.c:29:17: fatal error: pmi.h: No such file or directory
> #include 
>  ^
> compilation terminated.
> make[2]: *** [mca_pmix_s1_la-pmix_s1.lo] Error 1
> make[2]: Leaving directory 
> `/usr/local/src/openmpi/openmpi-3.1.2/opal/mca/pmix/s1'
> make[1]: *** [all-recursive] Error 1
> make[1]: Leaving directory `/usr/local/src/openmpi/openmpi-3.1.2/opal'
> make: *** [all-recursive] Error 1
>  
>  
> any ideas why I am getting this error?
> Thanks
>  
___
users mailing list
users@lists.open-mpi.org
https://lists.open-mpi.org/mailman/listinfo/users

Re: [OMPI users] Unable to spawn MPI processes on multiple nodes with recent version of OpenMPI

2018-10-06 Thread Ralph H Castain
Just FYI: on master (and perhaps 4.0), child jobs do not inherit their parent's 
mapping policy by default. You have to add “-mca rmaps_base_inherit 1” to your 
mpirun cmd line.
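
As a rough sketch (the binary name and process count are placeholders), the
parent job would be launched along these lines so that processes created later
via MPI_Comm_spawn reuse the parent's --map-by policy:

  mpirun -np 1 --map-by node --mca rmaps_base_inherit 1 ./parent.exe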


> On Oct 6, 2018, at 10:00 AM, Andrew Benson  
> wrote:
> 
> Thanks, I'll try this right away.
> 
> Thanks,
> Andrew
> 
> 
> --
> 
> * Andrew Benson: http://users.obs.carnegiescience.edu/abenson/contact.html
> 
> * Galacticus: http://sites.google.com/site/galacticusmodel
> On Sat, Oct 6, 2018, 9:02 AM Ralph H Castain  wrote:
> Sorry for delay - this should be fixed by 
> https://github.com/open-mpi/ompi/pull/5854
> 
> > On Sep 19, 2018, at 8:00 AM, Andrew Benson  wrote:
> > 
> > On further investigation removing the "preconnect_all" option does change 
> > the 
> > problem at least. Without "preconnect_all" I no longer see:
> > 
> > --
> > At least one pair of MPI processes are unable to reach each other for
> > MPI communications.  This means that no Open MPI device has indicated
> > that it can be used to communicate between these processes.  This is
> > an error; Open MPI requires that all MPI processes be able to reach
> > each other.  This error can sometimes be the result of forgetting to
> > specify the "self" BTL.
> > 
> >  Process 1 ([[32179,2],15]) is on host: node092
> >  Process 2 ([[32179,2],0]) is on host: unknown!
> >  BTLs attempted: self tcp vader
> > 
> > Your MPI job is now going to abort; sorry.
> > --
> > 
> > 
> > Instead it hangs for several minutes and finally aborts with:
> > 
> > --
> > A request has timed out and will therefore fail:
> > 
> >  Operation:  LOOKUP: orted/pmix/pmix_server_pub.c:345
> > 
> > Your job may terminate as a result of this problem. You may want to
> > adjust the MCA parameter pmix_server_max_wait and try again. If this
> > occurred during a connect/accept operation, you can adjust that time
> > using the pmix_base_exchange_timeout parameter.
> > --
> > [node091:19470] *** An error occurred in MPI_Comm_spawn
> > [node091:19470] *** reported by process [1614086145,0]
> > [node091:19470] *** on communicator MPI_COMM_WORLD
> > [node091:19470] *** MPI_ERR_UNKNOWN: unknown error
> > [node091:19470] *** MPI_ERRORS_ARE_FATAL (processes in this communicator 
> > will 
> > now abort,
> > [node091:19470] ***and potentially your MPI job)
> > 
> > I've tried increasing both pmix_server_max_wait and 
> > pmix_base_exchange_timeout 
> > as suggested in the error message, but the result is unchanged (it just 
> > takes 
> > longer to time out).
> > 
> > Once again, if I remove "--map-by node" it runs successfully.
> > 
> > -Andrew
> > 
> > 
> > 
> > On Sunday, September 16, 2018 7:03:15 AM PDT Ralph H Castain wrote:
> >> I see you are using “preconnect_all” - that is the source of the trouble. I
> >> don’t believe we have tested that option in years and the code is almost
> >> certainly dead. I’d suggest removing that option and things should work.
> >>> On Sep 15, 2018, at 1:46 PM, Andrew Benson  wrote:
> >>> 
> >>> I'm running into problems trying to spawn MPI processes across multiple
> >>> nodes on a cluster using recent versions of OpenMPI. Specifically, using
> >>> the attached Fortan code, compiled using OpenMPI 3.1.2 with:
> >>> 
> >>> mpif90 test.F90 -o test.exe
> >>> 
> >>> and run via a PBS scheduler using the attached test1.pbs, it fails as can
> >>> be seen in the attached testFAIL.err file.
> >>> 
> >>> If I do the same but using OpenMPI v1.10.3 then it works successfully,
> >>> giving me the output in the attached testSUCCESS.err file.
> >>> 
> >>> From testing a few different versions of OpenMPI it seems that the
> >>> behavior
> >>> changed between v1.10.7 and v2.0.4.
> >>> 
> >>> Is

Re: [OMPI users] Unable to spawn MPI processes on multiple nodes with recent version of OpenMPI

2018-10-06 Thread Ralph H Castain
Sorry for delay - this should be fixed by 
https://github.com/open-mpi/ompi/pull/5854

> On Sep 19, 2018, at 8:00 AM, Andrew Benson  wrote:
> 
> On further investigation removing the "preconnect_all" option does change the 
> problem at least. Without "preconnect_all" I no longer see:
> 
> --
> At least one pair of MPI processes are unable to reach each other for
> MPI communications.  This means that no Open MPI device has indicated
> that it can be used to communicate between these processes.  This is
> an error; Open MPI requires that all MPI processes be able to reach
> each other.  This error can sometimes be the result of forgetting to
> specify the "self" BTL.
> 
>  Process 1 ([[32179,2],15]) is on host: node092
>  Process 2 ([[32179,2],0]) is on host: unknown!
>  BTLs attempted: self tcp vader
> 
> Your MPI job is now going to abort; sorry.
> --
> 
> 
> Instead it hangs for several minutes and finally aborts with:
> 
> --
> A request has timed out and will therefore fail:
> 
>  Operation:  LOOKUP: orted/pmix/pmix_server_pub.c:345
> 
> Your job may terminate as a result of this problem. You may want to
> adjust the MCA parameter pmix_server_max_wait and try again. If this
> occurred during a connect/accept operation, you can adjust that time
> using the pmix_base_exchange_timeout parameter.
> --
> [node091:19470] *** An error occurred in MPI_Comm_spawn
> [node091:19470] *** reported by process [1614086145,0]
> [node091:19470] *** on communicator MPI_COMM_WORLD
> [node091:19470] *** MPI_ERR_UNKNOWN: unknown error
> [node091:19470] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will 
> now abort,
> [node091:19470] ***and potentially your MPI job)
> 
> I've tried increasing both pmix_server_max_wait and 
> pmix_base_exchange_timeout 
> as suggested in the error message, but the result is unchanged (it just takes 
> longer to time out).
> 
> Once again, if I remove "--map-by node" it runs successfully.
> 
> -Andrew
> 
> 
> 
> On Sunday, September 16, 2018 7:03:15 AM PDT Ralph H Castain wrote:
>> I see you are using “preconnect_all” - that is the source of the trouble. I
>> don’t believe we have tested that option in years and the code is almost
>> certainly dead. I’d suggest removing that option and things should work.
>>> On Sep 15, 2018, at 1:46 PM, Andrew Benson  wrote:
>>> 
>>> I'm running into problems trying to spawn MPI processes across multiple
>>> nodes on a cluster using recent versions of OpenMPI. Specifically, using
>>> the attached Fortan code, compiled using OpenMPI 3.1.2 with:
>>> 
>>> mpif90 test.F90 -o test.exe
>>> 
>>> and run via a PBS scheduler using the attached test1.pbs, it fails as can
>>> be seen in the attached testFAIL.err file.
>>> 
>>> If I do the same but using OpenMPI v1.10.3 then it works successfully,
>>> giving me the output in the attached testSUCCESS.err file.
>>> 
>>> From testing a few different versions of OpenMPI it seems that the
>>> behavior
>>> changed between v1.10.7 and v2.0.4.
>>> 
>>> Is there some change in options needed to make this work with newer
>>> OpenMPIs?
>>> 
>>> Output from omp_info --all is attached. config.log can be found here:
>>> 
>>> http://users.obs.carnegiescience.edu/abenson/config.log.bz2
>>> 
>>> Thanks for any help you can offer!
>>> 
>>> -Andrew
>> 
> 
> 
> -- 
> 
> * Andrew Benson: http://users.obs.carnegiescience.edu/abenson/contact.html
> 
> * Galacticus: https://bitbucket.org/abensonca/galacticus
> 

___
users mailing list
users@lists.open-mpi.org
https://lists.open-mpi.org/mailman/listinfo/users

Re: [OMPI users] opal_pmix_base_select failed for master and 4.0.0

2018-10-05 Thread Ralph H Castain
Please send Jeff and me the opal/mca/pmix/pmix4x/pmix/config.log again - we’ll 
need to see why it isn’t building. The patch definitely is not in the v4.0 
branch, but it should have been in master.


> On Oct 5, 2018, at 2:04 AM, Siegmar Gross 
>  wrote:
> 
> Hi Ralph, hi Jeff,
> 
> 
> On 10/3/18 8:14 PM, Ralph H Castain wrote:
>> Jeff and I talked and believe the patch in 
>> https://github.com/open-mpi/ompi/pull/5836 
>> <https://github.com/open-mpi/ompi/pull/5836> should fix the problem.
> 
> 
> Today I've installed openmpi-master-201810050304-5f1c940 and
> openmpi-v4.0.x-201810050241-c079666. Unfortunately, I still get the
> same error for all seven versions that I was able to build.
> 
> loki hello_1 114 mpicc --showme
> gcc -I/usr/local/openmpi-master_64_gcc/include -fexceptions -pthread -std=c11 
> -m64 -Wl,-rpath -Wl,/usr/local/openmpi-master_64_gcc/lib64 
> -Wl,--enable-new-dtags -L/usr/local/openmpi-master_64_gcc/lib64 -lmpi
> 
> loki hello_1 115 ompi_info | grep "Open MPI repo revision"
>  Open MPI repo revision: v2.x-dev-6262-g5f1c940
> 
> loki hello_1 116 mpicc hello_1_mpi.c
> 
> loki hello_1 117 mpiexec -np 2 a.out
> [loki:25575] [[64603,0],0] ORTE_ERROR_LOG: Not found in file 
> ../../../../../openmpi-master-201810050304-5f1c940/orte/mca/ess/hnp/ess_hnp_module.c
>  at line 320
> --
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems.  This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
> 
>  opal_pmix_base_select failed
>  --> Returned value Not found (-13) instead of ORTE_SUCCESS
> --
> loki hello_1 118
> 
> 
> I don't know, if you have already applied your suggested patch or if the
> error message is still from a version without that patch. Do you need
> anything else?
> 
> 
> Best regards
> 
> Siegmar
> 
> 
>>> On Oct 2, 2018, at 2:50 PM, Jeff Squyres (jsquyres) via users 
>>>  wrote:
>>> 
>>> (Ralph sent me Siegmar's pmix config.log, which Siegmar sent to him 
>>> off-list)
>>> 
>>> It looks like Siegmar passed --with-hwloc=internal.
>>> 
>>> Open MPI's configure understood this and did the appropriate things.
>>> PMIX's configure didn't.
>>> 
>>> I think we need to add an adjustment into the PMIx configure.m4 in OMPI...
>>> 
>>> 
>>>> On Oct 2, 2018, at 5:25 PM, Ralph H Castain  wrote:
>>>> 
>>>> Hi Siegmar
>>>> 
>>>> I honestly have no idea - for some reason, the PMIx component isn’t seeing 
>>>> the internal hwloc code in your environment.
>>>> 
>>>> Jeff, Brice - any ideas?
>>>> 
>>>> 
>>>>> On Oct 2, 2018, at 1:18 PM, Siegmar Gross 
>>>>>  wrote:
>>>>> 
>>>>> Hi Ralph,
>>>>> 
>>>>> how can I confirm that HWLOC built? Some hwloc files are available
>>>>> in the built directory.
>>>>> 
>>>>> loki openmpi-master-201809290304-73075b8-Linux.x86_64.64_gcc 111 find . 
>>>>> -name '*hwloc*'
>>>>> ./opal/mca/btl/usnic/.deps/btl_usnic_hwloc.Plo
>>>>> ./opal/mca/hwloc
>>>>> ./opal/mca/hwloc/external/.deps/hwloc_external_component.Plo
>>>>> ./opal/mca/hwloc/base/hwloc_base_frame.lo
>>>>> ./opal/mca/hwloc/base/.deps/hwloc_base_dt.Plo
>>>>> ./opal/mca/hwloc/base/.deps/hwloc_base_maffinity.Plo
>>>>> ./opal/mca/hwloc/base/.deps/hwloc_base_frame.Plo
>>>>> ./opal/mca/hwloc/base/.deps/hwloc_base_util.Plo
>>>>> ./opal/mca/hwloc/base/hwloc_base_dt.lo
>>>>> ./opal/mca/hwloc/base/hwloc_base_util.lo
>>>>> ./opal/mca/hwloc/base/hwloc_base_maffinity.lo
>>>>> ./opal/mca/hwloc/base/.libs/hwloc_base_util.o
>>>>> ./opal/mca/hwloc/base/.libs/hwloc_base_dt.o
>>>>> ./opal/mca/hwloc/base/.libs/hwloc_base_maffinity.o
>>>>> ./opal/mca/hwloc/base/.libs/hwloc_base_frame.o
>>>>> ./opal/mca/hwloc/.libs/libmca_hwloc.la
>>>>> ./opal/mca/hwloc/.libs/libmca_hwloc.a
>>>>> ./opal/mca/hwloc/libmca_hwloc.la
>>>>&

Re: [OMPI users] Cannot run MPI code on multiple cores with PBS

2018-10-03 Thread Ralph H Castain
Actually, I see that you do have the tm components built, but they cannot be 
loaded because you are missing libcrypto from your LD_LIBRARY_PATH
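
For instance, assuming libcrypto.so.0.9.8 lives under /opt/openssl-0.9.8/lib on
your cluster (that path is only a placeholder, adjust it to wherever the library
really is), something like this in the job script before mpirun should let the
tm components load:

  export LD_LIBRARY_PATH=/opt/openssl-0.9.8/lib:$LD_LIBRARY_PATH
  # sanity check: any remaining "not found" lines point at libraries still missing
  ldd <OMPI-install-prefix>/lib/openmpi/mca_plm_tm.so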


> On Oct 3, 2018, at 12:33 PM, Ralph H Castain  wrote:
> 
> Did you configure OMPI --with-tm=? It looks like we didn’t 
> build PBS support and so we only see one node with a single slot allocated to 
> it.
> 
> 
>> On Oct 3, 2018, at 12:02 PM, Castellana Michele  wrote:
>> 
>> Dear all,
>> I am having trouble running an MPI code across multiple cores on a new 
>> computer cluster, which uses PBS. Here is a minimal example, where I want to 
>> run two MPI processes, each on  a different node. The PBS script is 
>> 
>> #!/bin/bash
>> #PBS -l walltime=00:01:00
>> #PBS -l mem=1gb
>> #PBS -l nodes=2:ppn=1
>> #PBS -q batch
>> #PBS -N test
>> mpirun -np 2 ./code.o
>> 
>> and when I submit it with 
>> 
>> $qsub script.sh
>> 
>> I get the following message in the PBS error file
>> 
>> $ cat test.e1234 
>> [shbli040:08879] mca_base_component_repository_open: unable to open 
>> mca_plm_tm: libcrypto.so.0.9.8: cannot open shared object file: No such file 
>> or directory (ignored)
>> [shbli040:08879] mca_base_component_repository_open: unable to open 
>> mca_oob_ud: libibverbs.so.1: cannot open shared object file: No such file or 
>> directory (ignored)
>> [shbli040:08879] mca_base_component_repository_open: unable to open 
>> mca_ras_tm: libcrypto.so.0.9.8: cannot open shared object file: No such file 
>> or directory (ignored)
>> --
>> There are not enough slots available in the system to satisfy the 2 slots
>> that were requested by the application:
>>   ./code.o
>> 
>> Either request fewer slots for your application, or make more slots available
>> for use.
>> —
>> 
>> The PBS version is
>> 
>> $ qstat --version
>> Version: 6.1.2
>> 
>> and here is some additional information on the MPI version
>> 
>> $ mpicc -v
>> Using built-in specs.
>> COLLECT_GCC=/bin/gcc
>> COLLECT_LTO_WRAPPER=/usr/libexec/gcc/x86_64-redhat-linux/4.8.5/lto-wrapper
>> Target: x86_64-redhat-linux
>> […]
>> Thread model: posix
>> gcc version 4.8.5 20150623 (Red Hat 4.8.5-28) (GCC) 
>> 
>> Do you guys know what may be the issue here? 
>> 
>> Thank you
>> Best,
>> 
>> 
>> 
>> 
>> 
>> 
>> 

___
users mailing list
users@lists.open-mpi.org
https://lists.open-mpi.org/mailman/listinfo/users

Re: [OMPI users] Cannot run MPI code on multiple cores with PBS

2018-10-03 Thread Ralph H Castain
Did you configure OMPI --with-tm=? It looks like we didn’t 
build PBS support and so we only see one node with a single slot allocated to 
it.
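
A sketch of the rebuild, with /opt/pbs standing in for wherever tm.h and the
Torque/PBS libraries actually live on your cluster:

  ./configure --with-tm=/opt/pbs [...your usual configure options...]
  make -j8 && make install

Afterwards "ompi_info | grep tm" should list the tm ras/plm components; with
those present, mpirun takes the node list and slot counts straight from the PBS
allocation instead of assuming a single slot on the local node.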


> On Oct 3, 2018, at 12:02 PM, Castellana Michele  
> wrote:
> 
> Dear all,
> I am having trouble running an MPI code across multiple cores on a new 
> computer cluster, which uses PBS. Here is a minimal example, where I want to 
> run two MPI processes, each on  a different node. The PBS script is 
> 
> #!/bin/bash
> #PBS -l walltime=00:01:00
> #PBS -l mem=1gb
> #PBS -l nodes=2:ppn=1
> #PBS -q batch
> #PBS -N test
> mpirun -np 2 ./code.o
> 
> and when I submit it with 
> 
> $qsub script.sh
> 
> I get the following message in the PBS error file
> 
> $ cat test.e1234 
> [shbli040:08879] mca_base_component_repository_open: unable to open 
> mca_plm_tm: libcrypto.so.0.9.8: cannot open shared object file: No such file 
> or directory (ignored)
> [shbli040:08879] mca_base_component_repository_open: unable to open 
> mca_oob_ud: libibverbs.so.1: cannot open shared object file: No such file or 
> directory (ignored)
> [shbli040:08879] mca_base_component_repository_open: unable to open 
> mca_ras_tm: libcrypto.so.0.9.8: cannot open shared object file: No such file 
> or directory (ignored)
> --
> There are not enough slots available in the system to satisfy the 2 slots
> that were requested by the application:
>   ./code.o
> 
> Either request fewer slots for your application, or make more slots available
> for use.
> —
> 
> The PBS version is
> 
> $ qstat --version
> Version: 6.1.2
> 
> and here is some additional information on the MPI version
> 
> $ mpicc -v
> Using built-in specs.
> COLLECT_GCC=/bin/gcc
> COLLECT_LTO_WRAPPER=/usr/libexec/gcc/x86_64-redhat-linux/4.8.5/lto-wrapper
> Target: x86_64-redhat-linux
> […]
> Thread model: posix
> gcc version 4.8.5 20150623 (Red Hat 4.8.5-28) (GCC) 
> 
> Do you guys know what may be the issue here? 
> 
> Thank you
> Best,
> 
> 
> 
> 
> 
> 
> 

___
users mailing list
users@lists.open-mpi.org
https://lists.open-mpi.org/mailman/listinfo/users

Re: [OMPI users] opal_pmix_base_select failed for master and 4.0.0

2018-10-03 Thread Ralph H Castain
Jeff and I talked and believe the patch in 
https://github.com/open-mpi/ompi/pull/5836 should fix the problem.


> On Oct 2, 2018, at 2:50 PM, Jeff Squyres (jsquyres) via users 
>  wrote:
> 
> (Ralph sent me Siegmar's pmix config.log, which Siegmar sent to him off-list)
> 
> It looks like Siegmar passed --with-hwloc=internal.
> 
> Open MPI's configure understood this and did the appropriate things.
> PMIX's configure didn't.
> 
> I think we need to add an adjustment into the PMIx configure.m4 in OMPI...
> 
> 
>> On Oct 2, 2018, at 5:25 PM, Ralph H Castain  wrote:
>> 
>> Hi Siegmar
>> 
>> I honestly have no idea - for some reason, the PMIx component isn’t seeing 
>> the internal hwloc code in your environment.
>> 
>> Jeff, Brice - any ideas?
>> 
>> 
>>> On Oct 2, 2018, at 1:18 PM, Siegmar Gross 
>>>  wrote:
>>> 
>>> Hi Ralph,
>>> 
>>> how can I confirm that HWLOC built? Some hwloc files are available
>>> in the built directory.
>>> 
>>> loki openmpi-master-201809290304-73075b8-Linux.x86_64.64_gcc 111 find . 
>>> -name '*hwloc*'
>>> ./opal/mca/btl/usnic/.deps/btl_usnic_hwloc.Plo
>>> ./opal/mca/hwloc
>>> ./opal/mca/hwloc/external/.deps/hwloc_external_component.Plo
>>> ./opal/mca/hwloc/base/hwloc_base_frame.lo
>>> ./opal/mca/hwloc/base/.deps/hwloc_base_dt.Plo
>>> ./opal/mca/hwloc/base/.deps/hwloc_base_maffinity.Plo
>>> ./opal/mca/hwloc/base/.deps/hwloc_base_frame.Plo
>>> ./opal/mca/hwloc/base/.deps/hwloc_base_util.Plo
>>> ./opal/mca/hwloc/base/hwloc_base_dt.lo
>>> ./opal/mca/hwloc/base/hwloc_base_util.lo
>>> ./opal/mca/hwloc/base/hwloc_base_maffinity.lo
>>> ./opal/mca/hwloc/base/.libs/hwloc_base_util.o
>>> ./opal/mca/hwloc/base/.libs/hwloc_base_dt.o
>>> ./opal/mca/hwloc/base/.libs/hwloc_base_maffinity.o
>>> ./opal/mca/hwloc/base/.libs/hwloc_base_frame.o
>>> ./opal/mca/hwloc/.libs/libmca_hwloc.la
>>> ./opal/mca/hwloc/.libs/libmca_hwloc.a
>>> ./opal/mca/hwloc/libmca_hwloc.la
>>> ./opal/mca/hwloc/hwloc201
>>> ./opal/mca/hwloc/hwloc201/.deps/hwloc201_component.Plo
>>> ./opal/mca/hwloc/hwloc201/hwloc201_component.lo
>>> ./opal/mca/hwloc/hwloc201/hwloc
>>> ./opal/mca/hwloc/hwloc201/hwloc/include/hwloc
>>> ./opal/mca/hwloc/hwloc201/hwloc/hwloc
>>> ./opal/mca/hwloc/hwloc201/hwloc/hwloc/libhwloc_embedded.la
>>> ./opal/mca/hwloc/hwloc201/hwloc/hwloc/.deps/hwloc_pci_la-topology-pci.Plo
>>> ./opal/mca/hwloc/hwloc201/hwloc/hwloc/.deps/hwloc_gl_la-topology-gl.Plo
>>> ./opal/mca/hwloc/hwloc201/hwloc/hwloc/.deps/hwloc_cuda_la-topology-cuda.Plo
>>> ./opal/mca/hwloc/hwloc201/hwloc/hwloc/.deps/hwloc_xml_libxml_la-topology-xml-libxml.Plo
>>> ./opal/mca/hwloc/hwloc201/hwloc/hwloc/.deps/hwloc_opencl_la-topology-opencl.Plo
>>> ./opal/mca/hwloc/hwloc201/hwloc/hwloc/.deps/hwloc_nvml_la-topology-nvml.Plo
>>> ./opal/mca/hwloc/hwloc201/hwloc/hwloc/.libs/libhwloc_embedded.la
>>> ./opal/mca/hwloc/hwloc201/hwloc/hwloc/.libs/libhwloc_embedded.a
>>> ./opal/mca/hwloc/hwloc201/.libs/hwloc201_component.o
>>> ./opal/mca/hwloc/hwloc201/.libs/libmca_hwloc_hwloc201.la
>>> ./opal/mca/hwloc/hwloc201/.libs/libmca_hwloc_hwloc201.a
>>> ./opal/mca/hwloc/hwloc201/libmca_hwloc_hwloc201.la
>>> ./orte/mca/rtc/hwloc
>>> ./orte/mca/rtc/hwloc/rtc_hwloc.lo
>>> ./orte/mca/rtc/hwloc/.deps/rtc_hwloc.Plo
>>> ./orte/mca/rtc/hwloc/.deps/rtc_hwloc_component.Plo
>>> ./orte/mca/rtc/hwloc/mca_rtc_hwloc.la
>>> ./orte/mca/rtc/hwloc/.libs/mca_rtc_hwloc.so
>>> ./orte/mca/rtc/hwloc/.libs/mca_rtc_hwloc.la
>>> ./orte/mca/rtc/hwloc/.libs/rtc_hwloc.o
>>> ./orte/mca/rtc/hwloc/.libs/rtc_hwloc_component.o
>>> ./orte/mca/rtc/hwloc/.libs/mca_rtc_hwloc.soT
>>> ./orte/mca/rtc/hwloc/.libs/mca_rtc_hwloc.lai
>>> ./orte/mca/rtc/hwloc/rtc_hwloc_component.lo
>>> loki openmpi-master-201809290304-73075b8-Linux.x86_64.64_gcc 112
>>> 
>>> And some files are available in the install directory.
>>> 
>>> loki openmpi-master_64_gcc 116 find . -name '*hwloc*'
>>> ./share/openmpi/help-orte-rtc-hwloc.txt
>>> ./share/openmpi/help-opal-hwloc-base.txt
>>> ./lib64/openmpi/mca_rtc_hwloc.so
>>> ./lib64/openmpi/mca_rtc_hwloc.la
>>> loki openmpi-master_64_gcc 117
>>> 
>>> I don't see any unavailable libraries so that the only available
>>> hwloc library should work.
>>> 
>>> loki openmpi 126 ldd -v mca_r

Re: [OMPI users] opal_pmix_base_select failed for master and 4.0.0

2018-10-02 Thread Ralph H Castain
so.6
>libc.so.6 (GLIBC_2.3.2) => /lib64/libc.so.6
>libc.so.6 (GLIBC_2.2.5) => /lib64/libc.so.6
>libc.so.6 (GLIBC_PRIVATE) => /lib64/libc.so.6
>/lib64/libc.so.6:
>ld-linux-x86-64.so.2 (GLIBC_2.3) => /lib64/ld-linux-x86-64.so.2
>ld-linux-x86-64.so.2 (GLIBC_PRIVATE) => 
> /lib64/ld-linux-x86-64.so.2
>/usr/local/gcc-8.2.0/lib64/libgcc_s.so.1:
>libc.so.6 (GLIBC_2.14) => /lib64/libc.so.6
>libc.so.6 (GLIBC_2.2.5) => /lib64/libc.so.6
>/lib64/libselinux.so.1:
>libdl.so.2 (GLIBC_2.2.5) => /lib64/libdl.so.2
>ld-linux-x86-64.so.2 (GLIBC_2.3) => /lib64/ld-linux-x86-64.so.2
>libc.so.6 (GLIBC_2.14) => /lib64/libc.so.6
>libc.so.6 (GLIBC_2.8) => /lib64/libc.so.6
>libc.so.6 (GLIBC_2.4) => /lib64/libc.so.6
>libc.so.6 (GLIBC_2.7) => /lib64/libc.so.6
>libc.so.6 (GLIBC_2.3) => /lib64/libc.so.6
>libc.so.6 (GLIBC_2.2.5) => /lib64/libc.so.6
>libc.so.6 (GLIBC_2.3.4) => /lib64/libc.so.6
>/lib64/libcap.so.2:
>libc.so.6 (GLIBC_2.3.4) => /lib64/libc.so.6
>libc.so.6 (GLIBC_2.8) => /lib64/libc.so.6
>libc.so.6 (GLIBC_2.4) => /lib64/libc.so.6
>libc.so.6 (GLIBC_2.3) => /lib64/libc.so.6
>libc.so.6 (GLIBC_2.2.5) => /lib64/libc.so.6
>/lib64/libresolv.so.2:
>libc.so.6 (GLIBC_2.14) => /lib64/libc.so.6
>libc.so.6 (GLIBC_2.4) => /lib64/libc.so.6
>libc.so.6 (GLIBC_PRIVATE) => /lib64/libc.so.6
>libc.so.6 (GLIBC_2.2.5) => /lib64/libc.so.6
>libc.so.6 (GLIBC_2.3) => /lib64/libc.so.6
>/usr/lib64/libpcre.so.1:
>libpthread.so.0 (GLIBC_2.2.5) => /lib64/libpthread.so.0
>libc.so.6 (GLIBC_2.14) => /lib64/libc.so.6
>libc.so.6 (GLIBC_2.3.4) => /lib64/libc.so.6
>libc.so.6 (GLIBC_2.4) => /lib64/libc.so.6
>libc.so.6 (GLIBC_2.2.5) => /lib64/libc.so.6
>libc.so.6 (GLIBC_2.3) => /lib64/libc.so.6
> loki openmpi 127
> 
> Hopefully that helps to find the problem. I will answer your emails
> tommorrow if you need anything else.
> 
> 
> Best regards
> 
> Siegmar
> 
> 
> Am 02.10.2018 um 19:48 schrieb Ralph H Castain:
>> So the problem is here when configuring the internal PMIx code:
>> configure:3383: === HWLOC
>> configure:36189: checking for hwloc in
>> configure:36201: result: Could not find internal/lib or internal/lib64
>> configure:36203: error: Can not continue
>> Can you confirm that HWLOC built? I believe we require it, but perhaps 
>> something is different about this environment.
>>> On Oct 2, 2018, at 6:36 AM, Ralph H Castain  wrote:
>>> 
>>> Looks like PMIx failed to build - can you send the config.log?
>>> 
>>>> On Oct 2, 2018, at 12:00 AM, Siegmar Gross 
>>>>  wrote:
>>>> 
>>>> Hi,
>>>> 
>>>> yesterday I've installed openmpi-v4.0.x-201809290241-a7e275c and
>>>> openmpi-master-201805080348-b39bbfb on my "SUSE Linux Enterprise Server
>>>> 12.3 (x86_64)" with Sun C 5.15, gcc 6.4.0, Intel icc 18.0.3, and Portland
>>>> Group pgcc 18.4-0. Unfortunately, I get the following error for all seven
>>>> installed versions (Sun C couldn't built master as I mentioned in another
>>>> email).
>>>> 
>>>> 
>>>> loki hello_1 118 mpiexec -np 4 --host loki:2,nfs2:2 hello_1_mpi
>>>> [loki:11423] [[45859,0],0] ORTE_ERROR_LOG: Not found in file 
>>>> ../../../../../openmpi-v4.0.x-201809290241-a7e275c/orte/mca/ess/hnp/ess_hnp_module.c
>>>>  at line 321
>>>> --
>>>> It looks like orte_init failed for some reason; your parallel process is
>>>> likely to abort.  There are many reasons that a parallel process can
>>>> fail during orte_init; some of which are due to configuration or
>>>> environment problems.  This failure appears to be an internal failure;
>>>> here's some additional information (which may only be relevant to an
>>>> Open MPI developer):
>>>> 
>>>> opal_pmix_base_select failed
>>>> --> Returned value Not found (-13) instead of ORTE_SUCCESS
>>>> --
>>>> loki hello_1 119
>>>> 
>>>> 
>>>> 
>>>> I would be grateful, if somebody can fix the problem. Do you need anything
>>>> else? Thank you very much for any help in advance.
>>>> 
>>>> 
>>>> Kind regards
>>>> 
>>>> Siegmar

___
users mailing list
users@lists.open-mpi.org
https://lists.open-mpi.org/mailman/listinfo/users

Re: [OMPI users] opal_pmix_base_select failed for master and 4.0.0

2018-10-02 Thread Ralph H Castain
So the problem is here when configuring the internal PMIx code:

configure:3383: === HWLOC
configure:36189: checking for hwloc in
configure:36201: result: Could not find internal/lib or internal/lib64
configure:36203: error: Can not continue

Can you confirm that HWLOC built? I believe we require it, but perhaps 
something is different about this environment.


> On Oct 2, 2018, at 6:36 AM, Ralph H Castain  wrote:
> 
> Looks like PMIx failed to build - can you send the config.log?
> 
>> On Oct 2, 2018, at 12:00 AM, Siegmar Gross 
>>  wrote:
>> 
>> Hi,
>> 
>> yesterday I've installed openmpi-v4.0.x-201809290241-a7e275c and
>> openmpi-master-201805080348-b39bbfb on my "SUSE Linux Enterprise Server
>> 12.3 (x86_64)" with Sun C 5.15, gcc 6.4.0, Intel icc 18.0.3, and Portland
>> Group pgcc 18.4-0. Unfortunately, I get the following error for all seven
>> installed versions (Sun C couldn't built master as I mentioned in another
>> email).
>> 
>> 
>> loki hello_1 118 mpiexec -np 4 --host loki:2,nfs2:2 hello_1_mpi
>> [loki:11423] [[45859,0],0] ORTE_ERROR_LOG: Not found in file 
>> ../../../../../openmpi-v4.0.x-201809290241-a7e275c/orte/mca/ess/hnp/ess_hnp_module.c
>>  at line 321
>> --
>> It looks like orte_init failed for some reason; your parallel process is
>> likely to abort.  There are many reasons that a parallel process can
>> fail during orte_init; some of which are due to configuration or
>> environment problems.  This failure appears to be an internal failure;
>> here's some additional information (which may only be relevant to an
>> Open MPI developer):
>> 
>> opal_pmix_base_select failed
>> --> Returned value Not found (-13) instead of ORTE_SUCCESS
>> --
>> loki hello_1 119
>> 
>> 
>> 
>> I would be grateful, if somebody can fix the problem. Do you need anything
>> else? Thank you very much for any help in advance.
>> 
>> 
>> Kind regards
>> 
>> Siegmar

___
users mailing list
users@lists.open-mpi.org
https://lists.open-mpi.org/mailman/listinfo/users


Re: [OMPI users] opal_pmix_base_select failed for master and 4.0.0

2018-10-02 Thread Ralph H Castain
Looks like PMIx failed to build - can you send the config.log?

> On Oct 2, 2018, at 12:00 AM, Siegmar Gross 
>  wrote:
> 
> Hi,
> 
> yesterday I've installed openmpi-v4.0.x-201809290241-a7e275c and
> openmpi-master-201805080348-b39bbfb on my "SUSE Linux Enterprise Server
> 12.3 (x86_64)" with Sun C 5.15, gcc 6.4.0, Intel icc 18.0.3, and Portland
> Group pgcc 18.4-0. Unfortunately, I get the following error for all seven
> installed versions (Sun C couldn't built master as I mentioned in another
> email).
> 
> 
> loki hello_1 118 mpiexec -np 4 --host loki:2,nfs2:2 hello_1_mpi
> [loki:11423] [[45859,0],0] ORTE_ERROR_LOG: Not found in file 
> ../../../../../openmpi-v4.0.x-201809290241-a7e275c/orte/mca/ess/hnp/ess_hnp_module.c
>  at line 321
> --
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems.  This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
> 
>  opal_pmix_base_select failed
>  --> Returned value Not found (-13) instead of ORTE_SUCCESS
> --
> loki hello_1 119
> 
> 
> 
> I would be grateful, if somebody can fix the problem. Do you need anything
> else? Thank you very much for any help in advance.
> 
> 
> Kind regards
> 
> Siegmar

___
users mailing list
users@lists.open-mpi.org
https://lists.open-mpi.org/mailman/listinfo/users


Re: [OMPI users] mpirun noticed that process rank 5 with PID 0 on node localhost exited on signal 9 (Killed).

2018-09-28 Thread Ralph H Castain
Ummm…looks like you have a problem in the input deck for that application. Not 
sure what we can say about it…


> On Sep 28, 2018, at 9:47 AM, Zeinab Salah  wrote:
> 
> Hi everyone,
> I use openmpi-3.0.2 and I want to run chimere model with 8 processors, but in 
> the step of parallel mode, the run stopped with the following error message,
> Please could you help me? 
> Thank you in advance
> Zeinab
> 
>  +++ CHIMERE RUNNING IN PARALLEL MODE +++
>   MPI SUB-DOMAINS :
> rank  izstart  izend  nzcount  imstart imend  nmcount i   j
> 
>1   1  14  14   1  22  22   1   1
>2  15  27  13   1  22  22   2   1
>3  28  40  13   1  22  22   3   1
>4  41  53  13   1  22  22   4   1
>5   1  14  14  23  43  21   1   2
>6  15  27  13  23  43  21   2   2
>7  28  40  13  23  43  21   3   2
>8  41  53  13  23  43  21   4   2
>  Sub domain dimensions:   14  22
> 
>  boundary conditions: 
> /home/dream/CHIMERE/chimere2017r4/../BIGFILES/OUTPUTS/Test/../INIBOUN.10/BOUN_CONCS.2009030700_2009030900_Test.list
>3  boundary conditions file(s) found
>  Opening 
> /home/dream/CHIMERE/chimere2017r4/../BIGFILES/OUTPUTS/Test/../INIBOUN.10/BOUN_CONCS.2009030700_2009030900_Test.nc-gas
>  Opening 
> /home/dream/CHIMERE/chimere2017r4/../BIGFILES/OUTPUTS/Test/../INIBOUN.10/BOUN_CONCS.2009030700_2009030900_Test.nc-aer
>  Opening 
> /home/dream/CHIMERE/chimere2017r4/../BIGFILES/OUTPUTS/Test/../INIBOUN.10/BOUN_CONCS.2009030700_2009030900_Test.nc-dust
>  Opening 
> /home/dream/CHIMERE/chimere2017r4/../BIGFILES/OUTPUTS/Test/../INIBOUN.10/BOUN_CONCS.2009030700_2009030900_Test.nc-gas
>  Opening 
> /home/dream/CHIMERE/chimere2017r4/../BIGFILES/OUTPUTS/Test/../INIBOUN.10/BOUN_CONCS.2009030700_2009030900_Test.nc-aer
>  Opening 
> /home/dream/CHIMERE/chimere2017r4/../BIGFILES/OUTPUTS/Test/../INIBOUN.10/BOUN_CONCS.2009030700_2009030900_Test.nc-dust
> ---
> Primary job  terminated normally, but 1 process returned
> a non-zero exit code. Per user-direction, the job has been aborted.
> ---
> --
> mpirun noticed that process rank 5 with PID 0 on node localhost exited on 
> signal 9 (Killed).
> --
> 
> real  3m51.733s
> user  0m5.044s
> sys   1m8.617s
> Abnormal termination of step2.sh
> 

___
users mailing list
users@lists.open-mpi.org
https://lists.open-mpi.org/mailman/listinfo/users

Re: [OMPI users] Unable to spawn MPI processes on multiple nodes with recent version of OpenMPI

2018-09-16 Thread Ralph H Castain
I see you are using “preconnect_all” - that is the source of the trouble. I 
don’t believe we have tested that option in years and the code is almost 
certainly dead. I’d suggest removing that option and things should work.


> On Sep 15, 2018, at 1:46 PM, Andrew Benson  wrote:
> 
> I'm running into problems trying to spawn MPI processes across multiple nodes 
> on a cluster using recent versions of OpenMPI. Specifically, using the 
> attached 
> Fortan code, compiled using OpenMPI 3.1.2 with:
> 
> mpif90 test.F90 -o test.exe
> 
> and run via a PBS scheduler using the attached test1.pbs, it fails as can be 
> seen in the attached testFAIL.err file. 
> 
> If I do the same but using OpenMPI v1.10.3 then it works successfully, giving 
> me the output in the attached testSUCCESS.err file.
> 
> From testing a few different versions of OpenMPI it seems that the behavior 
> changed between v1.10.7 and v2.0.4. 
> 
> Is there some change in options needed to make this work with newer OpenMPIs?
> 
> Output from omp_info --all is attached. config.log can be found here:
> 
> http://users.obs.carnegiescience.edu/abenson/config.log.bz2
> 
> Thanks for any help you can offer!
> 
> -Andrew

___
users mailing list
users@lists.open-mpi.org
https://lists.open-mpi.org/mailman/listinfo/users

Re: [OMPI users] No network interfaces were found for out-of-band communications.

2018-09-12 Thread Ralph H Castain
Just looking at the code, we do require that at least the loopback device be 
available. So you need to “activate” the Ethernet support, but you can restrict 
it to only loopback, which should meet your security requirement.
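
A minimal sketch of that restriction (assuming the loopback interface is named
"lo" on your CentOS 7 box and is actually up):

  mpirun --mca oob_tcp_if_include lo --mca btl_tcp_if_include lo -np 96 ./a.out

or, to make it the default for every run, in <prefix>/etc/openmpi-mca-params.conf:

  oob_tcp_if_include = lo
  btl_tcp_if_include = lo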


> On Sep 12, 2018, at 8:10 AM, Jeff Squyres (jsquyres) via users 
>  wrote:
> 
> Can you send all the information listed here:
> 
>https://www.open-mpi.org/community/help/
> 
> 
> 
>> On Sep 12, 2018, at 11:03 AM, Greg Russell  wrote:
>> 
>> OpenMPI-3.1.2
>> 
>> Sent from my iPhone
>> 
>> On Sep 12, 2018, at 10:50 AM, Ralph H Castain  wrote:
>> 
>>> What OMPI version are we talking about here?
>>> 
>>> 
>>>> On Sep 11, 2018, at 6:56 PM, Greg Russell  wrote:
>>>> 
>>>> I have a single machine w 96 cores.  It runs CentOS7 and is not connected 
>>>> to any network as it needs to isolated for security.
>>>> 
>>>> I attempted the standard install process and upon attempting to run 
>>>> ./mpirun I find the error message
>>>> 
>>>> "No network interfaces were found for out-of-band communications. We 
>>>> require at least one available network for out-of-band messaging."
>>>> 
>>>> I'm a rookie with openMPI so I'm guessing maybe some configuration flags 
>>>> might fix the whole problem?  Any ideas are very much appreciated.
>>>> 
>>>> Thank you,
>>>> Russell
> 
> 
> -- 
> Jeff Squyres
> jsquy...@cisco.com
> 

___
users mailing list
users@lists.open-mpi.org
https://lists.open-mpi.org/mailman/listinfo/users

Re: [OMPI users] No network interfaces were found for out-of-band communications.

2018-09-12 Thread Ralph H Castain
What OMPI version are we talking about here?


> On Sep 11, 2018, at 6:56 PM, Greg Russell  wrote:
> 
> I have a single machine w 96 cores.  It runs CentOS7 and is not connected to 
> any network as it needs to isolated for security.
> 
> I attempted the standard install process and upon attempting to run ./mpirun 
> I find the error message
> 
> "No network interfaces were found for out-of-band communications. We require 
> at least one available network for out-of-band messaging."
> 
> I'm a rookie with openMPI so I'm guessing maybe some configuration flags 
> might fix the whole problem?  Any ideas are very much appreciated.
> 
> Thank you,
> Russell
___
users mailing list
users@lists.open-mpi.org
https://lists.open-mpi.org/mailman/listinfo/users

Re: [OMPI users] stdout/stderr question

2018-09-10 Thread Ralph H Castain
Looks like there is a place in orte/mca/state/state_base_fns.c:850 that also 
outputs to orte_clean_output instead of using show_help. Outside of those two 
places, everything else seems to go to show_help.


> On Sep 10, 2018, at 8:58 PM, Gilles Gouaillardet  wrote:
> 
> It seems I got it wrong :-(
> 
> 
> Can you please give the attached patch a try ?
> 
> 
> FWIW, another option would be to opal_output(orte_help_output, ...) but we 
> would have to make orte_help_output "public" first.
> 
> 
> Cheers,
> 
> 
> Gilles
> 
> 
> 
> 
> On 9/11/2018 11:14 AM, emre brookes wrote:
>> Gilles Gouaillardet wrote:
>>> I investigated a this a bit and found that the (latest ?) v3 branches have 
>>> the expected behavior
>>> 
>>> (e.g. the error messages is sent to stderr)
>>> 
>>> 
>>> Since it is very unlikely Open MPI 2.1 will ever be updated, I can simply 
>>> encourage you to upgrade to a newer Open MPI version.
>>> 
>>> The latest fully supported versions are currently 3.1.2 and 3.0.2.
>>> 
>>> 
>>> 
>>> Cheers,
>>> 
>>> Gilles
>>> 
>>> 
>> So you tested 3.1.2 or something newer with this error?
>> 
>>> But the originally reported error still goes to stdout:
>>> 
>>> $ /src/ompi-3.1.2/bin/mpicxx test_without_mpi_abort.cpp
>>> $ /src/ompi-3.1.2/bin/mpirun -np 2 a.out > stdout
>>> -- 
>>> mpirun detected that one or more processes exited with non-zero status, 
>>> thus causing
>>> the job to be terminated. The first process to do so was:
>>> 
>>>   Process name: [[22380,1],0]
>>>   Exit code:255
>>> -- 
>>> $ cat stdout
>>> hello from 0
>>> hello from 1
>>> ---
>>> Primary job  terminated normally, but 1 process returned
>>> a non-zero exit code. Per user-direction, the job has been aborted.
>>> ---
>>> $
>> -Emre
>> 
>> 
>> 
>>> 
>>> On 9/11/2018 2:27 AM, Ralph H Castain wrote:
>>>> I’m not sure why this would be happening. These error outputs go through 
>>>> the “show_help” functionality, and we specifically target it at stderr:
>>>> 
>>>>  /* create an output stream for us */
>>>>  OBJ_CONSTRUCT(&lds, opal_output_stream_t);
>>>>  lds.lds_want_stderr = true;
>>>>  orte_help_output = opal_output_open(&lds);
>>>> 
>>>> Jeff: is it possible the opal_output system is ignoring the request and 
>>>> pushing it to stdout??
>>>> Ralph
>>>> 
>>>> 
>>>>> On Sep 5, 2018, at 4:11 AM, emre brookes  wrote:
>>>>> 
>>>>> Thanks Gilles,
>>>>> 
>>>>> My goal is to separate openmpi errors from the stdout of the MPI program 
>>>>> itself so that errors can be identified externally (in particular in an 
>>>>> external framework running MPI jobs from various developers).
>>>>> 
>>>>> My not so "well written MPI program" was doing this:
>>>>>MPI_Finalize();
>>>>>exit( errorcode );
>>>>> Which I assume you are telling me was bad practice & will replace with
>>>>>MPI_Abort( MPI_COMM_WORLD, errorcode );
>>>>>MPI_Finalize();
>>>>>exit( errorcode );
>>>>> I was previously a bit put off of MPI_Abort due to the vagueness of the 
>>>>> man page:
>>>>>> _Description_
>>>>>> This routine makes a "best attempt" to abort all tasks in the group of 
>>>>>> comm. This function does not require that the invoking environment take 
>>>>>> any action with the error code. However, a UNIX or POSIX environment 
>>>>>> should handle this as a return errorcode from the main program or an 
>>>>>> abort (errorcode).
>>>>> & I didn't really have an MPI issue to "Abort", but had used this for a 
>>>>> user input or parameter issue.
>>>>> Nevertheless, I accept your best practice recommendation.
&

Re: [OMPI users] stdout/stderr question

2018-09-10 Thread Ralph H Castain
I’m not sure why this would be happening. These error outputs go through the 
“show_help” functionality, and we specifically target it at stderr:

/* create an output stream for us */
OBJ_CONSTRUCT(&lds, opal_output_stream_t);
lds.lds_want_stderr = true;
orte_help_output = opal_output_open(&lds);

Jeff: is it possible the opal_output system is ignoring the request and pushing 
it to stdout??
Ralph


> On Sep 5, 2018, at 4:11 AM, emre brookes  wrote:
> 
> Thanks Gilles,
> 
> My goal is to separate openmpi errors from the stdout of the MPI program 
> itself so that errors can be identified externally (in particular in an 
> external framework running MPI jobs from various developers).
> 
> My not so "well written MPI program" was doing this:
>   MPI_Finalize();
>   exit( errorcode );
> Which I assume you are telling me was bad practice & will replace with
>   MPI_Abort( MPI_COMM_WORLD, errorcode );
>   MPI_Finalize();
>   exit( errorcode );
> I was previously a bit put off of MPI_Abort due to the vagueness of the man 
> page:
>> _Description_
>> This routine makes a "best attempt" to abort all tasks in the group of comm. 
>> This function does not require that the invoking environment take any action 
>> with the error code. However, a UNIX or POSIX environment should handle this 
>> as a return errorcode from the main program or an abort (errorcode). 
> & I didn't really have an MPI issue to "Abort", but had used this for a user 
> input or parameter issue.
> Nevertheless, I accept your best practice recommendation.
> 
> It was not only the originally reported message, other messages went to 
> stdout.
> Initially used the Ubuntu 16 LTS  "$ apt install openmpi-bin libopenmpi-dev" 
> which got me version (1.10.2),
> but this morning compiled and tested 2.1.5, with the same behavior, e.g.:
> 
> $ /src/ompi-2.1.5/bin/mpicxx test_using_mpi_abort.cpp
> $ /src/ompi-2.1.5/bin/mpirun -np 2 a.out > stdout
> [domain-name-embargoed:26078] 1 more process has sent help message 
> help-mpi-api.txt / mpi-abort
> [domain-name-embargoed:26078] Set MCA parameter "orte_base_help_aggregate" to 
> 0 to see all help / error messages
> $ cat stdout
> hello from 0
> hello from 1
> --
> MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
> with errorcode -1.
> 
> NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
> You may or may not see output from other processes, depending on
> exactly when Open MPI kills them.
> --
> $
> 
> Tested 3.1.2, where this has been *somewhat* fixed:
> 
> $ /src/ompi-3.1.2/bin/mpicxx test_using_mpi_abort.cpp
> $ /src/ompi-3.1.2/bin/mpirun -np 2 a.out > stdout
> --
> MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
> with errorcode -1.
> 
> NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
> You may or may not see output from other processes, depending on
> exactly when Open MPI kills them.
> --
> [domain-name-embargoed:19784] 1 more process has sent help message 
> help-mpi-api.txt / mpi-abort
> [domain-name-embargoed:19784] Set MCA parameter "orte_base_help_aggregate" to 
> 0 to see all help / error messages
> $ cat stdout
> hello from 1
> hello from 0
> $
> 
> But the originally reported error still goes to stdout:
> 
> $ /src/ompi-3.1.2/bin/mpicxx test_without_mpi_abort.cpp
> $ /src/ompi-3.1.2/bin/mpirun -np 2 a.out > stdout
> --
> mpirun detected that one or more processes exited with non-zero status, thus 
> causing
> the job to be terminated. The first process to do so was:
> 
>  Process name: [[22380,1],0]
>  Exit code:255
> --
> $ cat stdout
> hello from 0
> hello from 1
> ---
> Primary job  terminated normally, but 1 process returned
> a non-zero exit code. Per user-direction, the job has been aborted.
> ---
> $
> 
> Summary:
> 1.10.2, 2.1.5 both send most openmpi generated messages to stdout.
> 3.1.2 sends at least one type of openmpi generated messages to stdout.
> I'll continue with my "wrapper" strategy for now, as it seems safest and
> most broadly deployable [e.g. on compute resources where I need to use admin 
> installed versions of MPI],
> but it would be nice for openmpi to ensure all generated messages end up in 
> stderr.
> 
> -Emre
> 
> Gilles Gouaillardet wrote:
>> Open MPI should likely write this message on stderr, I will have a look at 
>> that.
>> 
>> 
>> That being said, and though I have no intention to dodge the question, this 
>> case should not happen.
>> 
>> A well w

Re: [OMPI users] What happened to orte-submit resp. DVM?

2018-08-29 Thread Ralph H Castain


> On Aug 29, 2018, at 1:59 AM, Reuti  wrote:
> 
>> 
>> On 29.08.2018 at 04:46, Ralph H Castain wrote:
>> You must have some stale code because those tools no longer exist.
> 
> Aha. This code is then still in 3.1.2 by accident:
> 
> $ find openmpi-3.1.2 -name "*submit*"
> openmpi-3.1.2/orte/orted/orted_submit.h
> openmpi-3.1.2/orte/orted/orted_submit.c
> openmpi-3.1.2/orte/orted/.deps/liborted_mpir_la-orted_submit.Plo

No, that code is correct - but it isn’t a tool. It’s just some internal code we 
moved into a file of that name.

> 
> -- Reuti
> 
> 
>> Note that we are (gradually) replacing orte-dvm with PRRTE:
>> 
>> https://github.com/pmix/prrte 
>> 
>> See the “how-to” guides for PRRTE towards the bottom of this page: 
>> https://pmix.org/support/how-to/
>> 
>> If you still want to use the orte-dvm tool in OMPI, then you start 
>> applications against it using the “prun” tool - works the same as in PRRTE
>> 
>> Ralph
>> 
>> 
>>> On Aug 28, 2018, at 1:38 PM, Reuti  wrote:
>>> 
>>> Hi,
>>> 
>>> Should orte-submit/ompi-submit still be available in 3.x.y? I can spot the 
>>> source, but it's neither build, nor any man page included.
>>> 
>>> -- Reuti
>>> ___
>>> users mailing list
>>> users@lists.open-mpi.org
>>> https://lists.open-mpi.org/mailman/listinfo/users
>> 
>> ___
>> users mailing list
>> users@lists.open-mpi.org
>> https://lists.open-mpi.org/mailman/listinfo/users
> 
> ___
> users mailing list
> users@lists.open-mpi.org
> https://lists.open-mpi.org/mailman/listinfo/users
___
users mailing list
users@lists.open-mpi.org
https://lists.open-mpi.org/mailman/listinfo/users

Re: [OMPI users] What happened to orte-submit resp. DVM?

2018-08-28 Thread Ralph H Castain
You must have some stale code because those tools no longer exist. Note that we 
are (gradually) replacing orte-dvm with PRRTE:

https://github.com/pmix/prrte  

See the “how-to” guides for PRRTE towards the bottom of this page: 
https://pmix.org/support/how-to/ 

If you still want to use the orte-dvm tool in OMPI, then you start applications 
against it using the “prun” tool - works the same as in PRRTE

Ralph


> On Aug 28, 2018, at 1:38 PM, Reuti  wrote:
> 
> Hi,
> 
> Should orte-submit/ompi-submit still be available in 3.x.y? I can spot the 
> source, but it's neither build, nor any man page included.
> 
> -- Reuti
> ___
> users mailing list
> users@lists.open-mpi.org
> https://lists.open-mpi.org/mailman/listinfo/users

___
users mailing list
users@lists.open-mpi.org
https://lists.open-mpi.org/mailman/listinfo/users

Re: [OMPI users] cannot run openmpi 2.1

2018-08-11 Thread Ralph H Castain
Put "oob=^usock” in your default mca param file, or add OMPI_MCA_oob=^usock to 
your environment
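
Concretely, either form might look like the following (the per-user parameter
file is normally $HOME/.openmpi/mca-params.conf; adjust the path to your
install):

# in $HOME/.openmpi/mca-params.conf
oob = ^usock

# or, equivalently, in the shell environment / job script
export OMPI_MCA_oob=^usock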

> On Aug 11, 2018, at 5:54 AM, Kapetanakis Giannis  
> wrote:
> 
> Hi,
> 
> I'm struggling to get 2.1.x to work with our HPC.
> 
> Version 1.8.8 and 3.x works fine.
> 
> In 2.1.3 and 2.1.4 I get errors and segmentation faults. The builds are with 
> infiniband and slurm support.
> mpirun locally works fine. Any help to debug this?
> 
> [node39:20090] [[50526,1],2] usock_peer_recv_connect_ack: received unexpected 
> process identifier [[50526,0],0] from [[50526,0],1]
> [node39:20053] [[50526,0],0]-[[50526,1],2] mca_oob_usock_peer_recv_handler: 
> invalid socket state(1)
> [node39:20053] [[50526,0],0]-[[50526,1],2] mca_oob_usock_peer_recv_handler: 
> invalid socket state(1)
> [node39:20088] [[50526,1],0] usock_peer_recv_connect_ack: received unexpected 
> process identifier [[50526,0],0] from [[50526,0],1]
> [node39:20053] [[50526,0],0]-[[50526,1],2] mca_oob_usock_peer_recv_handler: 
> invalid socket state(1)
> [node39:20053] [[50526,0],0]-[[50526,1],0] mca_oob_usock_peer_recv_handler: 
> invalid socket state(1)
> [node39:20053] [[50526,0],0]-[[50526,1],2] mca_oob_usock_peer_recv_handler: 
> invalid socket state(1)
> [node39:20053] [[50526,0],0]-[[50526,1],0] mca_oob_usock_peer_recv_handler: 
> invalid socket state(1)
> [node39:20096] [[50526,1],8] usock_peer_recv_connect_ack: received unexpected 
> process identifier [[50526,0],0] from [[50526,0],1]
> [node39:20053] [[50526,0],0]-[[50526,1],2] mca_oob_usock_peer_recv_handler: 
> invalid socket state(1)
> [node39:20053] [[50526,0],0]-[[50526,1],0] mca_oob_usock_peer_recv_handler: 
> invalid socket state(1)
> [node39:20053] [[50526,0],0]-[[50526,1],8] mca_oob_usock_peer_recv_handler: 
> invalid socket state(1)
> [node39:20053] [[50526,0],0]-[[50526,1],2] mca_oob_usock_peer_recv_handler: 
> invalid socket state(1)
> [node39:20053] [[50526,0],0]-[[50526,1],0] mca_oob_usock_peer_recv_handler: 
> invalid socket state(1)
> [node39:20053] [[50526,0],0]-[[50526,1],8] mca_oob_usock_peer_recv_handler: 
> invalid socket state(1)
> [node39:20053] [[50526,0],0]-[[50526,1],2] mca_oob_usock_peer_recv_handler: 
> invalid socket state(1)
> [node39:20053] [[50526,0],0]-[[50526,1],0] mca_oob_usock_peer_recv_handler: 
> invalid socket state(1)
> [node39:20053] [[50526,0],0]-[[50526,1],8] mca_oob_usock_peer_recv_handler: 
> invalid socket state(1)
> [node39:20053] [[50526,0],0]-[[50526,1],2] mca_oob_usock_peer_recv_handler: 
> invalid socket state(1)
> [node39:20053] [[50526,0],0]-[[50526,1],0] mca_oob_usock_peer_recv_handler: 
> invalid socket state(1)
> [node39:20053] [[50526,0],0]-[[50526,1],8] mca_oob_usock_peer_recv_handler: 
> invalid socket state(1)
> [node39:20053] [[50526,0],0]-[[50526,1],6] mca_oob_usock_peer_recv_handler: 
> invalid socket state(1)
> [node39:20053] [[50526,0],0]-[[50526,1],2] mca_oob_usock_peer_recv_handler: 
> invalid socket state(1)
> [node39:20053] [[50526,0],0]-[[50526,1],0] mca_oob_usock_peer_recv_handler: 
> invalid socket state(1)
> [node39:20053] [[50526,0],0]-[[50526,1],8] mca_oob_usock_peer_recv_handler: 
> invalid socket state(1)
> [node39:20053] [[50526,0],0]-[[50526,1],6] mca_oob_usock_peer_recv_handler: 
> invalid socket state(1)
> [node39:20094] [[50526,1],6] usock_peer_recv_connect_ack: received unexpected 
> process identifier [[50526,0],0] from [[50526,0],1]
> [node39:20053] [[50526,0],0]-[[50526,1],2] mca_oob_usock_peer_recv_handler: 
> invalid socket state(1)
> [node39:20053] [[50526,0],0]-[[50526,1],0] mca_oob_usock_peer_recv_handler: 
> invalid socket state(1)
> [node39:20053] [[50526,0],0]-[[50526,1],8] mca_oob_usock_peer_recv_handler: 
> invalid socket state(1)
> [node39:20053] [[50526,0],0]-[[50526,1],6] mca_oob_usock_peer_recv_handler: 
> invalid socket state(1)
> [node39:20097] [[50526,1],9] usock_peer_recv_connect_ack: received unexpected 
> process identifier [[50526,0],0] from [[50526,0],1]
> [node39:20092] [[50526,1],4] usock_peer_recv_connect_ack: received unexpected 
> process identifier [[50526,0],0] from [[50526,0],1]
> 
> 
> a part from debug:
> 
> [node39:20515] mca:oob:select: Inserting component
> [node39:20515] mca:oob:select: Found 3 active transports
> [node39:20515] [[50428,1],9]: set_addr to uri 
> 3304849408.1;usock;tcp://192.168.20.113,10.1.7.69:37147;ud://181895.60.1
> [node39:20515] [[50428,1],9]:set_addr checking if peer [[50428,0],1] is 
> reachable via component usock
> [node39:20515] [[50428,1],9]:[oob_usock_component.c:349] connect to 
> [[50428,0],1]
> [node39:20515] [[50428,1],9]: peer [[50428,0],1] is reachable via component 
> usock
> [node39:20515] [[50428,1],9]:set_addr checking if peer [[50428,0],1] is 
> reachable via component tcp
> [node39:20515] [[50428,1],9] oob:tcp: ignoring address usock
> [node39:20515] [[50428,1],9] oob:tcp: working peer [[50428,0],1] address 
> tcp://192.168.20.113,10.1.7.69:37147
> [node39:20515] [[50428,1],9] PASSING ADDR 192.168.20.113 TO MODULE
> [node39:20515] [[50

Re: [OMPI users] local communicator and crash of the code

2018-08-03 Thread Ralph H Castain
Those two command lines look exactly the same to me - what am I missing?


> On Aug 3, 2018, at 10:23 AM, Diego Avesani  wrote:
> 
> Dear all,
> 
> I am experiencing a strange error.
> 
> In my code I use three group communications:
> MPI_COMM_WORLD
> MPI_MASTERS_COMM
> LOCAL_COMM
> 
> which have in common some CPUs.
> 
> when I run my code as 
>  mpirun -np 4 --oversubscribe ./MPIHyperStrem
> 
> I have no problem, while when I run it as
>  
>  mpirun -np 4 --oversubscribe ./MPIHyperStrem
> 
> sometimes it crashes and sometimes not.
> 
> It seems that all is linked to 
> CALL MPI_REDUCE(QTS(tstep,:), QTS(tstep,:), nNode, MPI_DOUBLE_PRECISION, 
> MPI_SUM, 0, MPI_LOCAL_COMM, iErr)
> 
> which operates within the local communicator.
> 
> What do you think? Can you please suggest some debug test?
> Is a problem related to local communications?
> 
> Thanks
> 
> 
> 
> Diego
> 
> ___
> users mailing list
> users@lists.open-mpi.org
> https://lists.open-mpi.org/mailman/listinfo/users

___
users mailing list
users@lists.open-mpi.org
https://lists.open-mpi.org/mailman/listinfo/users

Re: [OMPI users] Settings oversubscribe as default?

2018-08-03 Thread Ralph H Castain
The equivalent MCA param is rmaps_base_oversubscribe=1. You can add 
OMPI_MCA_rmaps_base_oversubscribe to your environ, or set 
rmaps_base_oversubscribe in your default MCA param file.
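
For example (the per-user file location is illustrative):

# in $HOME/.openmpi/mca-params.conf
rmaps_base_oversubscribe = 1

# or per shell / job script
export OMPI_MCA_rmaps_base_oversubscribe=1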


> On Aug 3, 2018, at 1:24 AM, Florian Lindner  wrote:
> 
> Hello,
> 
> I can use --oversubscribe to enable oversubscribing. What is OpenMPI way to 
> set this as a default, e.g. through a config file option or an environment 
> variable?
> 
> Thanks,
> Florian
> ___
> users mailing list
> users@lists.open-mpi.org
> https://lists.open-mpi.org/mailman/listinfo/users

___
users mailing list
users@lists.open-mpi.org
https://lists.open-mpi.org/mailman/listinfo/users


Re: [OMPI users] Comm_connect: Data unpack would read past end of buffer

2018-08-03 Thread Ralph H Castain
The buffer being overrun isn’t anything to do with you - it’s an internal 
buffer used as part of creating the connections. It indicates a problem in OMPI.

The 1.10 series is out of the support window, but if you want to stick with it 
you should at least update to the last release in that series - believe that is 
1.10.7.

The OMPI v2.x series had problems that break support for dynamics, so you should 
skip that one. If you want to come all the way forward, you should take the 
OMPI v3.x series.

Ralph


> On Aug 3, 2018, at 3:40 AM, Florian Lindner  wrote:
> 
> Hello,
> 
> I have this piece of code:
> 
> MPI_Comm icomm;
> INFO << "Accepting connection on " << portName;
> MPI_Comm_accept(portName.c_str(), MPI_INFO_NULL, 0, MPI_COMM_SELF, &icomm);
> 
> and sometimes (like in 1 of 5 runs), I get:
> 
> [helium:33883] [[32673,1],0] ORTE_ERROR_LOG: Data unpack would read past end 
> of buffer in file dpm_orte.c at line 406
> [helium:33883] *** An error occurred in MPI_Comm_accept
> [helium:33883] *** reported by process [2141257729,0]
> [helium:33883] *** on communicator MPI_COMM_SELF
> [helium:33883] *** MPI_ERR_UNKNOWN: unknown error
> [helium:33883] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will 
> now abort,
> [helium:33883] ***and potentially your MPI job)
> [helium:33883] [0] 
> func:/usr/lib/libopen-pal.so.13(opal_backtrace_buffer+0x33) [0x7fc1ad0ac6e3]
> [helium:33883] [1] func:/usr/lib/libmpi.so.12(ompi_mpi_abort+0x365) 
> [0x7fc1af4955e5]
> [helium:33883] [2] 
> func:/usr/lib/libmpi.so.12(ompi_mpi_errors_are_fatal_comm_handler+0xe2) 
> [0x7fc1af487e72]
> [helium:33883] [3] func:/usr/lib/libmpi.so.12(ompi_errhandler_invoke+0x145) 
> [0x7fc1af4874b5]
> [helium:33883] [4] func:/usr/lib/libmpi.so.12(MPI_Comm_accept+0x262) 
> [0x7fc1af4a90e2]
> [helium:33883] [5] func:./mpiports() [0x41e43d]
> [helium:33883] [6] 
> func:/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf0) [0x7fc1ad7a1830]
> [helium:33883] [7] func:./mpiports() [0x41b249]
> 
> 
> Before that I check for the length of portName
> 
>  DEBUG << "COMM ACCEPT portName.size() = " << portName.size();
>  DEBUG << "MPI_MAX_PORT_NAME = " << MPI_MAX_PORT_NAME;
> 
> which both return 1024.
> 
> I am completely puzzled as to how I can get a buffer issue, unless something 
> is faulty with std::string portName.
> 
> Any clues?
> 
> Launch command: mpirun -n 4 -mca opal_abort_print_stack 1 
> OpenMPI 1.10.2 @ Ubuntu 16.
> 
> Thanks,
> Florian
> ___
> users mailing list
> users@lists.open-mpi.org
> https://lists.open-mpi.org/mailman/listinfo/users

___
users mailing list
users@lists.open-mpi.org
https://lists.open-mpi.org/mailman/listinfo/users

Re: [OMPI users] hwloc, OpenMPI and unsupported OSes and toolchains

2018-03-21 Thread Ralph H Castain
I don’t see how Open MPI can operate without pthreads

> On Mar 19, 2018, at 3:23 PM, Gregory (tim) Kelly  wrote:
> 
> Hello Everyone,
> I'm inquiring to find someone that can answer some multi-part questions about 
> hwloc, OpenMPI and an alternative OS and toolchain.  I have a project as part 
> of my PhD work, and it's not a simple, one-part question.  For brevity, I am 
> omitting details about the OS and toolchain, other than that neither are 
> supported.  If forced to choose between OpenMPI and the OS/toolchain, I am 
> likely to choose the OS/toolchain and pursue other avenues for 
> parallelization.  That's part of what I am trying to determine with my 
> inquiry.
> 
> To summarize some of the question areas:
> 
> 1) The OS I am working with does not support MP
> 2) nor does it support pthreads
> 3) the hardware is quad-core SoC with an integrated memory controller
> 4) I'd like to see if it possible to utilize hwloc and shmem to build an 
> asymmetric multi-processing system where only one core has I/O but the other 
> three can run the executable
> 
> This is a fairly dedicated system to be used for analyzing ODEs (disease 
> models).  The hardware is cheap ($200) and uses very little power (can run 
> off a 12v battery), and the toolchain and OS are all BSD-licensed (and 
> everything will be published under that license).
> 
> If someone is available for off-line discussion (to minimize unnecessary 
> traffic to the list), I'd be more than willing to summarize the conversation 
> and contribute it to the online documentation.
> 
> Thank you,
> tim
> -- 
> 
> "Nuclear power is a hell of a way to boil water."  -- Albert Einstein
> ___
> users mailing list
> users@lists.open-mpi.org
> https://lists.open-mpi.org/mailman/listinfo/users

___
users mailing list
users@lists.open-mpi.org
https://lists.open-mpi.org/mailman/listinfo/users

Re: [OMPI users] Low performance of Open MPI-1.3 over Gigabit

2009-03-04 Thread Ralph H. Castain
It would also help to have some idea how you installed and ran this -
e.g., did you set mpi_paffinity_alone so that the processes would bind to
their processors? That could explain the cpu vs. elapsed time since it
keeps the processes from being swapped out as much.
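
A sketch of such a run, with the hostfile name and process count as
placeholders:

mpirun --mca mpi_paffinity_alone 1 -np 24 --hostfile hosts ./app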

Ralph


> Your Intel processors are I assume not the new Nehalem/I7 ones? The older
> quad-core ones are seriously memory bandwidth limited when running a
> memory
> intensive application. That might explain why using all 8 cores per node
> slows down your calculation.
>
> Why do you get such a difference between cpu time and elapsed time? Is
> your
> code doing any file IO or maybe waiting for one of the processors? Do you
> use
> non-blocking communication wherever possible?
>
> Regards,
>
> Mattijs
>
> On Wednesday 04 March 2009 05:46, Sangamesh B wrote:
>> Hi all,
>>
>> Now LAM-MPI is also installed, and I have tested the fortran application by
>> running it with LAM-MPI.
>>
>> But LAM-MPI is performing still worse than Open MPI
>>
>> No of nodes:3 cores per node:8  total core: 3*8=24
>>
>>CPU TIME :1 HOURS 51 MINUTES 23.49 SECONDS
>>ELAPSED TIME :7 HOURS 28 MINUTES  2.23 SECONDS
>>
>> No of nodes:6  cores used per node:4  total core: 6*4=24
>>
>>CPU TIME :0 HOURS 51 MINUTES 50.41 SECONDS
>>ELAPSED TIME :6 HOURS  6 MINUTES 38.67 SECONDS
>>
>> Any help/suggetsions to diagnose this problem.
>>
>> Thanks,
>> Sangamesh
>>
>> On Wed, Feb 25, 2009 at 12:51 PM, Sangamesh B 
>> wrote:
>> > Dear All,
>> >
>> >    A fortran application is installed with Open MPI-1.3 + Intel
>> > compilers on a Rocks-4.3 cluster with Intel Xeon Dual socket Quad core
>> > processor @ 3GHz (8cores/node).
>> >
>> >    The time consumed for different tests over a Gigabit connected
>> > nodes are as follows: (Each node has 8 GB memory).
>> >
>> > No of Nodes used:6  No of cores used/node:4 total mpi processes:24
>> >       CPU TIME :    1 HOURS 19 MINUTES 14.39 SECONDS
>> >   ELAPSED TIME :    2 HOURS 41 MINUTES  8.55 SECONDS
>> >
>> > No of Nodes used:6  No of cores used/node:8 total mpi processes:48
>> >       CPU TIME :    4 HOURS 19 MINUTES 19.29 SECONDS
>> >   ELAPSED TIME :    9 HOURS 15 MINUTES 46.39 SECONDS
>> >
>> > No of Nodes used:3  No of cores used/node:8 total mpi processes:24
>> >       CPU TIME :    2 HOURS 41 MINUTES 27.98 SECONDS
>> >   ELAPSED TIME :    4 HOURS 21 MINUTES  0.24 SECONDS
>> >
>> > But the same application performs well on another Linux cluster with
>> > LAM-MPI-7.1.3
>> >
>> > No of Nodes used:6  No of cores used/node:4 total mpi processes:24
>> > CPU TIME :    1hours:30min:37.25s
>> > ELAPSED TIME  1hours:51min:10.00S
>> >
>> > No of Nodes used:12  No of cores used/node:4 total mpi processes:48
>> > CPU TIME :    0hours:46min:13.98s
>> > ELAPSED TIME  1hours:02min:26.11s
>> >
>> > No of Nodes used:6  No of cores used/node:8 total mpi processes:48
>> > CPU TIME :     1hours:13min:09.17s
>> > ELAPSED TIME  1hours:47min:14.04s
>> >
>> > So there is a huge difference between CPU TIME & ELAPSED TIME for Open
>> > MPI jobs.
>> >
>> > Note: On the same cluster Open MPI gives better performance for
>> > InfiniBand nodes.
>> >
>> > What could be the problem for Open MPI over Gigabit?
>> > Any flags need to be used?
>> > Or is it not that good to use Open MPI on Gigabit?
>> >
>> > Thanks,
>> > Sangamesh
>>
>> ___
>> users mailing list
>> us...@open-mpi.org
>> http://www.open-mpi.org/mailman/listinfo.cgi/users
>
> --
>
> Mattijs Janssens
>
> OpenCFD Ltd.
> 9 Albert Road,
> Caversham,
> Reading RG4 7AN.
> Tel: +44 (0)118 9471030
> Email: m.janss...@opencfd.co.uk
> URL: http://www.OpenCFD.co.uk
>
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users
>



Re: [OMPI users] Query regarding OMPI_MCA_ns_nds_vpid env variable

2008-07-11 Thread Ralph H Castain



On 7/11/08 8:33 AM, "Ashley Pittman" 
wrote:

> On Fri, 2008-07-11 at 08:01 -0600, Ralph H Castain wrote:
>>>> I believe this is partly what motivated the creation of the MPI envars - to
>>>> create a vehicle that -would- be guaranteed stable for just these purposes.
>>>> The concern was that users were doing things that accessed internal envars
>>>> which we changed from version to version. The new envars will remain fixed.
>>> 
>>> Absolutely, these are useful time and time again so should be part of
>>> the API and hence stable.  Care to mention what they are and I'll add it
>>> to my note as something to change when upgrading to 1.3 (we are looking
>>> at testing a snapshot in the near future).
>> 
>> Surely:
>> 
>> OMPI_COMM_WORLD_SIZE#procs in the job
>> OMPI_COMM_WORLD_LOCAL_SIZE  #procs in this job that are sharing the node
>> OMPI_UNIVERSE_SIZE  total #slots allocated to this user
>> (across all nodes)
>> OMPI_COMM_WORLD_RANKproc's rank
>> OMPI_COMM_WORLD_LOCAL_RANK  local rank on node - lowest rank'd proc on
>> the node is given local_rank=0
>> 
>> If there are others that would be useful, now is definitely the time to
>> speak up!
> 
> The only other one I'd like to see is some kind of global identifier for
> the job but as far as I can see I don't believe that openmpi has such a
> concept.

Not really - of course, many environments have a jobid they assign at time
of allocation. We could create a unified identifier from that to ensure a
consistent name was always available, but the problem would be that not all
environments provide it (e.g., rsh). To guarantee that the variable would
always be there, we would have to make something up in those cases.

That could easily be done, I suppose - let me raise the question
internally and see the response.

Thanks!
Ralph

> 
> Ashley Pittman.
> 
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users




Re: [OMPI users] Query regarding OMPI_MCA_ns_nds_vpid env variable

2008-07-11 Thread Ralph H Castain



On 7/11/08 7:50 AM, "Ashley Pittman" 
wrote:

> On Fri, 2008-07-11 at 07:42 -0600, Ralph H Castain wrote:
>> 
>> 
>> On 7/11/08 7:32 AM, "Ashley Pittman" 
>> wrote:
>> 
>>> On Fri, 2008-07-11 at 07:20 -0600, Ralph H Castain wrote:
>>>> This variable is only for internal use and has no applicability to a user.
>>>> Basically, it is used by the local daemon to tell an application process
>>>> its
>>>> rank when launched.
>>>> 
>>>> Note that it disappears in v1.3...so I wouldn't recommend looking for it.
>>>> Is
>>>> there something you are trying to do with it?
>>> 
>>> Recently on this list I recommended somebody use it for their needs.
>>> 
>>> http://www.open-mpi.org/community/lists/users/2008/06/5983.php
>> 
>> Ah - yeah, that one slid by me. I'll address it directly.
> 
> I was quite surprised that openmpi didn't have a command option for this
> actually, it's quite a common thing to use.

Nobody asked... ;-)

>  
>>>> Reason I ask: some folks wanted to know things like the MPI rank prior to
>>>> calling MPI_Init, so we added a few MPI envar's that are available from
>>>> beginning of process execution, if that is what you are looking for.
>>> 
>>> It's also essential for Valgrind support which can use it to name
>>> logfiles according to rank using the --log-file=valgrind.out.%
>>> q{OMPI_MCA_ns_nds_vpid} option.
>> 
>> Well, it won't hurt for now - but it won't work with 1.3 or beyond. It's
>> always risky to depend upon a code's internal variables as developers feel
>> free to change those as circumstances dictate since users aren't supposed to
>> be affected.
>> 
>> I believe this is partly what motivated the creation of the MPI envars - to
>> create a vehicle that -would- be guaranteed stable for just these purposes.
>> The concern was that users were doing things that accessed internal envars
>> which we changed from version to version. The new envars will remain fixed.
> 
> Absolutely, these are useful time and time again so should be part of
> the API and hence stable.  Care to mention what they are and I'll add it
> to my note as something to change when upgrading to 1.3 (we are looking
> at testing a snapshot in the near future).

Surely:

OMPI_COMM_WORLD_SIZE#procs in the job
OMPI_COMM_WORLD_LOCAL_SIZE  #procs in this job that are sharing the node
OMPI_UNIVERSE_SIZE  total #slots allocated to this user
(across all nodes)
OMPI_COMM_WORLD_RANKproc's rank
OMPI_COMM_WORLD_LOCAL_RANK  local rank on node - lowest rank'd proc on
the node is given local_rank=0

If there are others that would be useful, now is definitely the time to
speak up!
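
As a quick illustration of how these can be consumed before MPI_Init, here is
a minimal wrapper sketch (the script and application names are hypothetical):

#!/bin/sh
# print the 1.3-style envars, then exec the real application
echo "rank ${OMPI_COMM_WORLD_RANK} of ${OMPI_COMM_WORLD_SIZE} (local rank ${OMPI_COMM_WORLD_LOCAL_RANK})"
exec "$@"

launched as, e.g.: mpirun -np 4 ./print_rank.sh ./a.out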

> 
> Ashley Pittman.
> 
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users




Re: [OMPI users] Outputting rank and size for all outputs.

2008-07-11 Thread Ralph H Castain
Not until next week's meeting, but I would guess we would simply prepend the
rank. The issue will be how often to tag the output since we write it in
fragments to avoid blocking - so do we tag the fragment, look for newlines
and tag each line, etc.

We'll figure something out... ;-)
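
Assuming it does end up as a simple prepend, the usage should be no more than
a single mpirun flag - roughly (this is how it eventually surfaced in the 1.3
series, as --tag-output):

mpirun --tag-output -np 4 ./a.out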


On 7/11/08 7:52 AM, "Mark Dobossy"  wrote:

> That sounds great Ralph!  Do you have any more details about how the
> process rank would be added?
> 
> And thanks for the other suggestions from Ashley and Galen.  Both
> methods look like they would work great, and are probably a little
> nicer than my current setup.
> 
> -Mark
> 
> 
> On Jul 11, 2008, at 9:46 AM, Ralph H Castain wrote:
> 
>> Adding the ability to tag stdout/err with the process rank is fairly
>> simple.
>> We are going to talk about this next week at a design meeting - we
>> have
>> several different tagging schemes that people have requested, so we
>> want to
>> define a way to meet them all that doesn't create too much ugliness
>> in the
>> code.
>> 
>> Will get back to you on this one. Regardless, the earliest version
>> it could
>> show up in would be 1.3 (which is a tight question given current
>> release
>> plans).
>> 
>> 
>> On 6/24/08 9:36 AM, "Ashley Pittman"
>> wrote:
>> 
>>> 
>>> If you are using the openmpi mpirun then you can put the following
>>> in a
>>> wrapper script which will prefix stdout in a manner similar to what
>>> you
>>> appear to want.  Simply add the wrapper script before the name of
>>> your
>>> application.
>>> 
>>> Is this the kind of thing you were aiming for?  I'm quite surprised
>>> mpirun doesn't have an option for this actually, it's a fairly common
>>> thing to want.
>>> 
>>> Ashley Pittman.
>>> 
>>> #!/bin/sh
>>> 
>>> $@ | sed "s/^/\[rk:$OMPI_MCA_ns_nds_vpid,sz:$OMPI_MCA_ns_nds_num_procs\]/"
>>> 
>>> On Tue, 2008-06-24 at 11:06 -0400, Mark Dobossy wrote:
>>>> Lately I have been doing a great deal of MPI debugging.  I have,
>>>> on an
>>>> occasion or two, fallen into the trap of "Well, that error MUST be
>>>> coming from rank X.  There is no way it could be coming from any
>>>> other
>>>> rank..."  Then proceeding to debug what's happening at rank X,
>>>> only to
>>>> find out a few frustrating hours later that rank Y is throwing the
>>>> output (I'm sure no one else out there has fallen into this
>>>> trap).  It
>>>> was at that point, I decided to write up some code to automatically
>>>> (sort of) output the rank and size of my domain with every
>>>> output.  I
>>>> write mostly in C++, and this is what I came up with:
>>>> 
>>>> #include <mpi.h>
>>>> #include <iostream>
>>>> 
>>>> std::ostream &mpi_info(std::ostream &s) {
>>>> int rank, size;
>>>> rank = MPI::COMM_WORLD.Get_rank();
>>>> size = MPI::COMM_WORLD.Get_size();
>>>> s << "[rk:" << rank << ",sz:" << size << "]: ";
>>>> return s;
>>>> }
>>>> 
>>>> Then in my code, I have changed:
>>>> 
>>>> std::cerr << "blah" << std::endl;
>>>> 
>>>> to:
>>>> 
>>>> std::cerr << mpi_info << "blah" << std::endl;
>>>> 
>>>> (or cout, or file stream, etc...)
>>>> 
>>>> where "blah" is some amazingly informative error message.
>>>> 
>>>> Are there other ways people do this?  Simpler ways perhaps?
>>>> 
>>>> -Mark
>>>> ___
>>>> users mailing list
>>>> us...@open-mpi.org
>>>> http://www.open-mpi.org/mailman/listinfo.cgi/users
>>> 
>>> ___
>>> users mailing list
>>> us...@open-mpi.org
>>> http://www.open-mpi.org/mailman/listinfo.cgi/users
>> 
>> 
>> ___
>> users mailing list
>> us...@open-mpi.org
>> http://www.open-mpi.org/mailman/listinfo.cgi/users
> 
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users




Re: [OMPI users] Outputting rank and size for all outputs.

2008-07-11 Thread Ralph H Castain
Adding the ability to tag stdout/err with the process rank is fairly simple.
We are going to talk about this next week at a design meeting - we have
several different tagging schemes that people have requested, so we want to
define a way to meet them all that doesn't create too much ugliness in the
code.

Will get back to you on this one. Regardless, the earliest version it could
show up in would be 1.3 (which is a tight question given current release
plans).


On 6/24/08 9:36 AM, "Ashley Pittman" 
wrote:

> 
> If you are using the openmpi mpirun then you can put the following in a
> wrapper script which will prefix stdout in a manner similar to what you
> appear to want.  Simply add the wrapper script before the name of your
> application.
> 
> Is this the kind of thing you were aiming for?  I'm quite surprised
> mpirun doesn't have an option for this actually, it's a fairly common
> thing to want.
> 
> Ashley Pittman.
> 
> #!/bin/sh
> 
> $@ | sed "s/^/\[rk:$OMPI_MCA_ns_nds_vpid,sz:$OMPI_MCA_ns_nds_num_procs\]/"
> 
> On Tue, 2008-06-24 at 11:06 -0400, Mark Dobossy wrote:
>> Lately I have been doing a great deal of MPI debugging.  I have, on an
>> occasion or two, fallen into the trap of "Well, that error MUST be
>> coming from rank X.  There is no way it could be coming from any other
>> rank..."  Then proceeding to debug what's happening at rank X, only to
>> find out a few frustrating hours later that rank Y is throwing the
>> output (I'm sure no one else out there has fallen into this trap).  It
>> was at that point, I decided to write up some code to automatically
>> (sort of) output the rank and size of my domain with every output.  I
>> write mostly in C++, and this is what I came up with:
>> 
>> #include <mpi.h>
>> #include <iostream>
>> 
>> std::ostream &mpi_info(std::ostream &s) {
>> int rank, size;
>> rank = MPI::COMM_WORLD.Get_rank();
>> size = MPI::COMM_WORLD.Get_size();
>> s << "[rk:" << rank << ",sz:" << size << "]: ";
>> return s;
>> }
>> 
>> Then in my code, I have changed:
>> 
>> std::cerr << "blah" << std::endl;
>> 
>> to:
>> 
>> std::cerr << mpi_info << "blah" << std::endl;
>> 
>> (or cout, or file stream, etc...)
>> 
>> where "blah" is some amazingly informative error message.
>> 
>> Are there other ways people do this?  Simpler ways perhaps?
>> 
>> -Mark
>> ___
>> users mailing list
>> us...@open-mpi.org
>> http://www.open-mpi.org/mailman/listinfo.cgi/users
> 
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users




Re: [OMPI users] Query regarding OMPI_MCA_ns_nds_vpid env variable

2008-07-11 Thread Ralph H Castain



On 7/11/08 7:32 AM, "Ashley Pittman" 
wrote:

> On Fri, 2008-07-11 at 07:20 -0600, Ralph H Castain wrote:
>> This variable is only for internal use and has no applicability to a user.
>> Basically, it is used by the local daemon to tell an application process its
>> rank when launched.
>> 
>> Note that it disappears in v1.3...so I wouldn't recommend looking for it. Is
>> there something you are trying to do with it?
> 
> Recently on this list I recommended somebody use it for their needs.
> 
> http://www.open-mpi.org/community/lists/users/2008/06/5983.php

Ah - yeah, that one slid by me. I'll address it directly.

> 
>> Reason I ask: some folks wanted to know things like the MPI rank prior to
>> calling MPI_Init, so we added a few MPI envar's that are available from
>> beginning of process execution, if that is what you are looking for.
> 
> It's also essential for Valgrind support which can use it to name
> logfiles according to rank using the --log-file=valgrind.out.%
> q{OMPI_MCA_ns_nds_vpid} option.

Well, it won't hurt for now - but it won't work with 1.3 or beyond. It's
always risky to depend upon a code's internal variables as developers feel
free to change those as circumstances dictate since users aren't supposed to
be affected.

I believe this is partly what motivated the creation of the MPI envars - to
create a vehicle that -would- be guaranteed stable for just these purposes.
The concern was that users were doing things that accessed internal envars
which we changed from version to version. The new envars will remain fixed.

Of course, that only applies to 1.3 and beyond... ;-)
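
With those envars in place, the Valgrind log-naming trick quoted above can be
expressed against the stable names (the binary name is a placeholder), e.g.:

mpirun -np 4 valgrind --log-file=valgrind.out.%q{OMPI_COMM_WORLD_RANK} ./a.out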

> 
> Ashley,
> 
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users




Re: [OMPI users] Query regarding OMPI_MCA_ns_nds_vpid env variable

2008-07-11 Thread Ralph H Castain
This variable is only for internal use and has no applicability to a user.
Basically, it is used by the local daemon to tell an application process its
rank when launched.

Note that it disappears in v1.3...so I wouldn't recommend looking for it. Is
there something you are trying to do with it?

Reason I ask: some folks wanted to know things like the MPI rank prior to
calling MPI_Init, so we added a few MPI envar's that are available from
beginning of process execution, if that is what you are looking for.

Ralph



On 7/11/08 7:05 AM, "Aditya Vasal"  wrote:

> Hi,
>  
> I would be glad to receive some information regarding the environment variable
> OMPI_MCA_ns_nds_vpid
> 1> Its importance
> 2> Its use
>  
> Thanks,
> Aditya Vasal
>  
> 
> 
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users






Re: [OMPI users] ORTE_ERROR_LOG timeout

2008-07-08 Thread Ralph H Castain
Several thins are going on here. First, this error message:

> mpirun noticed that job rank 1 with PID 9658 on node mac1 exited on signal
> 6 (Aborted).
> 2 additional processes aborted (not shown)

indicates that your application procs are aborting for some reason. The
system is then attempting to shutdown and somehow got itself "hung", hence
the timeout error message.

I'm not sure that increasing the timeout value will help in this situation.
Unfortunately, 1.2.x has problems with this scenario (1.3 is -much- better!
;-)). If you want to try adjusting the timeout anyway, you can do so with:

mpirun -mca orte_abort_timeout x ...

where x is the specified timeout in seconds.
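
For example, to allow a 60 second shutdown window (the value is purely
illustrative):

mpirun -mca orte_abort_timeout 60 -np 3 ./a.out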

Hope that helps.
Ralph



On 7/8/08 8:55 AM, "Alastair Basden"  wrote:

> Hi,
> I've got some code that uses openmpi, and sometimes, it crashes, after
> printing somthing like:
> 
> [mac1:09654] [0,0,0] ORTE_ERROR_LOG: Timeout in file
> base/pls_base_orted_cmds.c at line 275
> [mac1:09654] [0,0,0] ORTE_ERROR_LOG: Timeout in file pls_rsh_module.c at
> line 1166
> [mac1:09654] [0,0,0] ORTE_ERROR_LOG: Timeout in file errmgr_hnp.c at line
> 90
> mpirun noticed that job rank 1 with PID 9658 on node mac1 exited on signal
> 6 (Aborted).
> 2 additional processes aborted (not shown)
> [mac1:09654] [0,0,0] ORTE_ERROR_LOG: Timeout in file
> base/pls_base_orted_cmds.c at line 188
> [mac1:09654] [0,0,0] ORTE_ERROR_LOG: Timeout in file pls_rsh_module.c at
> line 1198
> --
> mpirun was unable to cleanly terminate the daemons for this job. Returned
> value Timeout instead of ORTE_SUCCESS.
> --
> 
> In this case, all processes were running on the same machine, so its not a
> connection problem.  Is this a bug, or something else wrong?  Is there a
> way to increase the timeout time?
> 
> Thanks...
> 
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users




Re: [OMPI users] mpirun w/ enable-mpi-threads spinning up cputime when app path is invalid

2008-07-02 Thread Ralph H Castain
Sorry - went to one of your links to get that info.

We know OMPI 1.2.x isn't thread safe. This is unfortunately another example
of it. Hopefully, 1.3 will be better.

Ralph



On 7/2/08 11:01 AM, "Ralph H Castain"  wrote:

> Out of curiosity - what version of OMPI are you using?
> 
> 
> On 7/2/08 10:46 AM, "Steve Johnson"  wrote:
> 
>> If mpirun is given an application that isn't in the PATH, then instead of
>> exiting it prints the error that it failed to find the executable and then
> proceeds to spin up CPU time.  strace shows an endless stream of sched_yield().
>> 
>> For example, if "blah" doesn't exist:
>> mpirun -np 16 blah
>> Ditto if ./blah doesn't exist and mpirun is called as
>> mpirun -np 16 ./blah
>> 
>> OS: CentOS 5.1
>> Kernel: 2.6.18-92.1.1.el5.centos.plus
>> Arch: x86_64
>> glibc/pthread: glibc-2.5-18.el5_1.1
>> GCC: 4.1.2-14.el5
>> 
>> CC=gcc
>> CXX=g++
>> F77=gfortran
>> FC=gfortran
>> ./configure --with-tm --prefix=$HOME/openmpi --libdir=$HOME/openmpi/lib64
>> --enable-mpi-threads
>> 
>> A qsig -s 15 will terminate the mpirun processes.
>> 
>> ompi_info is at http://isc.tamu.edu/~steve/ompi_info.txt
>> config.log.bz is at http://isc.tamu.edu/~steve/ompi_config.log.bz2
>> 
>> Also confirmed this on openSUSE 10.2, 2.6.18.8-0.9-default, x86_64,
>> glibc-2.5-34.7, gcc-4.1.3-29.
>> 
>> // Steve
>> ___
>> users mailing list
>> us...@open-mpi.org
>> http://www.open-mpi.org/mailman/listinfo.cgi/users
> 
> 
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users




Re: [OMPI users] mpirun w/ enable-mpi-threads spinning up cputime when app path is invalid

2008-07-02 Thread Ralph H Castain
Out of curiosity - what version of OMPI are you using?


On 7/2/08 10:46 AM, "Steve Johnson"  wrote:

> If mpirun is given an application that isn't in the PATH, then instead of
> exiting it prints the error that it failed to find the executable and then
> proceeds to spin up CPU time.  strace shows an endless stream of sched_yield().
> 
> For example, if "blah" doesn't exist:
> mpirun -np 16 blah
> Ditto if ./blah doesn't exist and mpirun is called as
> mpirun -np 16 ./blah
> 
> OS: CentOS 5.1
> Kernel: 2.6.18-92.1.1.el5.centos.plus
> Arch: x86_64
> glibc/pthread: glibc-2.5-18.el5_1.1
> GCC: 4.1.2-14.el5
> 
> CC=gcc
> CXX=g++
> F77=gfortran
> FC=gfortran
> ./configure --with-tm --prefix=$HOME/openmpi --libdir=$HOME/openmpi/lib64
> --enable-mpi-threads
> 
> A qsig -s 15 will terminate the mpirun processes.
> 
> ompi_info is at http://isc.tamu.edu/~steve/ompi_info.txt
> config.log.bz is at http://isc.tamu.edu/~steve/ompi_config.log.bz2
> 
> Also confirmed this on openSUSE 10.2, 2.6.18.8-0.9-default, x86_64,
> glibc-2.5-34.7, gcc-4.1.3-29.
> 
> // Steve
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users




Re: [OMPI users] Need some help regarding Linpack execution

2008-07-02 Thread Ralph H Castain
You also might want to resend this to the MPICH mailing list - this is the
Open MPI mailing list

;-)


On 7/2/08 8:03 AM, "Swamy Kandadai"  wrote:

> Hi:
> May be you do not have 12 entries in your machine.list file. You need to have
> at least np lines in your machine.list
> 
> Dr. Swamy N. Kandadai
> IBM Senior Certified Executive IT Specialist
> STG WW  Modular Systems Benchmark Center
> STG WW HPC and BI CoC Benchmark Center
> Phone:( 845) 433 -8429 (8-293) Fax:(845)432-9789
> sw...@us.ibm.com
> http://w3.ibm.com/sales/systems/benchmarks
> 
> 
> 
> 
> 
> "Aditya Vasal" 
> 
> 
> "Aditya Vasal" 
> Sent by: users-boun...@open-mpi.org 07/02/2008 07:36 AM Please respond to
> Open MPI Users 
> To 
> 
> 
> 
> cc 
> 
> 
> Subject 
> 
> [OMPI users] Need some help regarding Linpack execution
> 
> Hi, 
>  
> I want to perform LINPACK test on my m/c, I have only 1 GB RAM on the m/c
> where I want to run 12 parallel Linpack processes on SLES 10.
> I am using Mpich-1.2.7p1 (Mpich is built with the -rsh=ssh option)
> I have modified HPL.dat accordingly,
> P = 3
> Q = 4(so as to make PxQ = 12)
> N = 8640 (so as to make use of only 56% of available memory and
> leave rest for host processes)
> NB = 11520
> I have also set ulimit -l unlimited.
> Created a machine.list file by specifying my m/c's IP address 12 times. (So as 
> to execute all 12 processes on the same m/c) and using GotoBLAS for the
> Linpack execution
>  
> Execution command:
> mpirun -np 12 -machinefile machine.list xhpl
>  
> Upon execution, I get following error:
>  
> HPL ERROR from process # 0, on line 419 of function HPL_pdinfo:
 >>> Need at least 12 processes for these tests <<<
>  
> Please guide me where am I going wrong
>  
>  
> Best Regards,
> Aditya  Vasal 
> 
> Software Engg | Semiconductor Solutions Group |KPIT Cummins Infosystems Ltd. |
> +91 99 70 168 581 |aditya.va...@kpitcummins.com
>   |www.kpitcummins.com
> http://www.kpitcummins.com/> 
>  ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users
> 
> 
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users




Re: [OMPI users] mca parameters: meaning and use

2008-06-26 Thread Ralph H Castain
Actually, I suspect the requestor was hoping for an explanation somewhat
more illuminating than the terse comments output by ompi_info. ;-)

Bottom line is "no". We have talked numerous times about the need to do
this, but unfortunately there has been little accomplished. I doubt it will
happen anytime soon.

If there are specific areas you would like help with, you can post the
questions here and people will try to explain.
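
As a practical starting point, the full ompi_info dump mentioned below can be
narrowed to a single framework or component, which keeps the output readable,
e.g.:

ompi_info --param btl tcp
ompi_info --param rmaps all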

Ralph


On 6/26/08 4:55 AM, "Adrian Knoth"  wrote:

> On Thu, Jun 26, 2008 at 12:32:14PM +0200, jody wrote:
> 
>> Hi
> 
> Hi!
> 
>> As the FAQ only contains explanations for a small subset of all MCA
>> parameters, I wondered whether there is a list explaining the meaning
>> and use of them...
> 
> ompi_info --param all all
> 
> HTH




Re: [OMPI users] Displaying Selected MCA Modules

2008-06-23 Thread Ralph H Castain
I can guarantee bproc support isn't broken in 1.2 - we use it on several
production machines every day, and it works fine. I heard of only one
potential problem having to do with specifying multiple app_contexts on a
cmd line, but we are still trying to confirm that it wasn't operator error.

In the 1.2 series, we don't pass mca params back to the orteds. The reason
this was done is that there are s many mca params that could be set that
we would frequently overrun the system limit on cmd line length. Remember,
those params can be in a system-level file, a user-level file, the
environment, and/or on the cmd line!

This restriction has been lifted in 1.3, but we didn't back-port it to the
1.2 series. So I'm afraid that the orted is going to pick the environment it
senses.

Of more interest would be understanding why your build isn't working in
bproc. Could you send me the error you are getting? I'm betting that the
problem lies in determining the node allocation as that is the usual place
we hit problems - not much is "standard" about how allocations are
communicated in the bproc world, though we did try to support a few of the
more common methods.

Ralph



On 6/23/08 2:12 PM, "Joshua Bernstein" 
wrote:

> 
> 
> Ralph Castain wrote:
>> Hi Joshua
>> 
>> Again, forwarded by the friendly elf - so include me directly in any reply.
>> 
>> I gather from Jeff that you are attempting to do something with bproc -
>> true? If so, I will echo what Jeff said: bproc support in OMPI is being
>> dropped with the 1.3 release due to lack of interest/support. Just a "heads
>> up".
> 
> Understood.
> 
>> If you are operating in a bproc environment, then I'm not sure why you are
>> specifying that the system use the rsh launcher. Bproc requires some very
>> special handling which is only present in the bproc launcher. You can run
>> both MPI and non-MPI apps with it, but bproc is weird and so OMPI some
>> -very- different logic in it to make it all work.
> 
> Well, I'm trying to determine how broken, if at all, the bproc support
> is in OpenMPI. So considering out of the gate it wasn't working, I
> thought I'd try to disable the built in BProc stuff and fall back to RSH.
> 
>> I suspect the problem you are having is that all of the frameworks are
>> detecting bproc and trying to run accordingly. This means that the orted is
>> executing process startup procedures for bproc - which are totally different
>> than for any other environment (e.g., rsh). If mpirun is attempting to
>> execute an rsh launch, and the orted is expecting a bproc launch, then I can
>> guarantee that no processes will be launched and you will hang.
> 
> Exactly, what I'm seeing now...
> 
>> I'm not sure there is a way in 1.2 to tell the orteds to ignore the fact
>> that they see bproc and do something else. I can look, but would rather wait
>> to hear if that is truly what you are trying to do, and why.
> 
> I would really appreciate it if you wouldn't mind looking. From reading
> the documentation I didn't realize that mpirun and the orted were doing
> two different things. I thought the --mca parameter applied to both.
> 
> -Joshua Bernstein
> Software Engineer
> Penguin Computing




Re: [OMPI users] null characters in output

2008-06-19 Thread Ralph H Castain
No, I haven't seen that - if you can provide an example, we can take a look
at it.

Thanks
Ralph



On 6/19/08 8:15 AM, "Sacerdoti, Federico"
 wrote:

> Ralph, another issue perhaps you can shed some light on.
> 
> When launching with orterun, we sometimes see null characters in the
> stdout output. These do not show up on a terminal, but when piped to a
> file they are visible in an editor. They also can show up in the middle
> of a line, and so can interfere with greps on the output, etc.
> 
> Have you seen this before? I am working on a simple test case, but
> unfortunately have not found one that is deterministic so far.
> 
> Thanks,
> Federico 
> 
> -Original Message-
> From: Ralph H Castain [mailto:r...@lanl.gov]
> Sent: Tuesday, June 17, 2008 1:09 PM
> To: Sacerdoti, Federico; Open MPI Users 
> Subject: Re: [OMPI users] SLURM and OpenMPI
> 
> I can believe 1.2.x has problems in that regard. Some of that has
> nothing to
> do with slurm and reflects internal issues with 1.2.
> 
> We have made it much more resistant to those problems in the upcoming
> 1.3
> release, but there is no plan to retrofit those changes to 1.2. Part of
> the
> problem was that we weren't using the --kill-on-bad-exit flag when we
> called
> srun internally, which has been fixed for 1.3.
> 
> BTW: we actually do use srun to launch the daemons - we just call it
> internally from inside orterun. The only real difference is that we use
> orterun to setup the cmd line and then tell the daemons what they need
> to
> do. The issues you are seeing relate to our ability to detect that srun
> has
> failed, and/or that one or more daemons have failed to launch or do
> something they were supposed to do. The 1.2 system has problems in that
> regard, which was one motivation for the 1.3 overhaul.
> 
> I would argue that slurm allowing us to attempt to launch on a
> no-longer-valid allocation is a slurm issue, not OMPI's. As I said, we
> use
> srun to launch the daemons - the only reason we hang is that srun is not
> returning with an error. I've seen this on other systems as well, but
> have
> no real answer - if slurm doesn't indicate an error has occurred, I'm
> not
> sure what I can do about it.
> 
> We are unlikely to use srun to directly launch jobs (i.e., to have slurm
> directly launch the job from an srun cmd line without mpirun) anytime
> soon.
> It isn't clear there is enough benefit to justify the rather large
> effort,
> especially considering what would be required to maintain scalability.
> Decisions on all that are still pending, though, which means any
> significant
> change in that regard wouldn't be released until sometime next year.
> 
> Ralph
> 
> On 6/17/08 10:39 AM, "Sacerdoti, Federico"
>  wrote:
> 
>> Ralph,
>> 
>> I was wondering what the status of this feature was (using srun to
>> launch orted daemons)? I have two new bug reports to add from our
>> experience using orterun from 1.2.6 on our 4000 CPU infiniband
> cluster.
>> 
>> 1. Orterun will happily hang if it is asked to run on an invalid slurm
>> job, e.g. if the job has exceeded its timelimit. This would be
> trivially
>> fixed if you used srun to launch, as they would fail with non-zero
> exit
>> codes.
>> 
>> 2. A very simple orterun invocation hangs instead of exiting with an
>> error. In this case the executable does not exist, and we would expect
>> orterun to exit non-zero. This has caused
>> headaches with some workflow management script that automatically
> start
>> jobs.
>> 
>> salloc -N2 -p swdev orterun dummy-binary-I-dont-exist
>> [hang]
>> 
>> orterun dummy-binary-I-dont-exist
>> [hang]
>> 
>> Thanks,
>> Federico
>> 
>> -Original Message-
>> From: Sacerdoti, Federico
>> Sent: Friday, March 21, 2008 5:41 PM
>> To: 'Open MPI Users'
>> Subject: RE: [OMPI users] SLURM and OpenMPI
>> 
>> 
>> Ralph wrote:
>> "I don't know if I would say we "interfere" with SLURM - I would say
>> that we
>> are only lightly integrated with SLURM at this time. We use SLURM as a
>> resource manager to assign nodes, and then map processes onto those
>> nodes
>> according to the user's wishes. We chose to do this because srun
> applies
>> its
>> own load balancing algorithms if you launch processes directly with
> it,
>> which leaves the user with little flexibility to specify their desired
>> rank/slot mapping. We chose to support the greater flexibility."
>>  
>> Ralp

Re: [OMPI users] SLURM and OpenMPI

2008-06-19 Thread Ralph H Castain
Well, if the only system I cared about was slurm, there are some things I
could possibly do to make things better, but at the expense of our support
for other environments - which is unacceptable.

There are a few technical barriers to doing this without the orteds on
slurm, and a major licensing issue that prohibits us from calling any slurm
APIs. How all that gets resolved is unclear.

Frankly, one reason we don't put more emphasis on it is that we don't see a
significant launch time difference between the two modes, and we truly do
want to retain the ability to utilize different error response strategies
(which slurm will not allow - you can only follow theirs).

So I would say we simply have different objectives than what you stated, and
different concerns that make a deeper slurm integration less favorable. May
still happen, but not anytime soon.

Ralph



On 6/19/08 8:08 AM, "Sacerdoti, Federico"
 wrote:

> Ralph thanks for your quick response.
> 
> Regarding your fourth paragraph, slurm will not let you run on a
> no-longer-valid allocation, an srun will correctly exit non-zero with a
> useful failure reason. So perhaps openmpi 1.3 with your changes will
> just work, I look forward to testing it.
> 
> E.g.
> $ srun hostname
> srun: error: Unable to confirm allocation for job 745346: Invalid job id
> specified
> srun: Check SLURM_JOBID environment variable for expired or invalid job.
> 
> 
> Regarding srun to launch the jobs directly (no orteds), I am sad to hear
> the idea is not in favor. We have found srun to be extremely scalable
> (tested up to 4096 MPI processes) and very good at cleaning up after an
> error or node failure. It seems you could simplify orterun quite a bit
> by relying on slurm (or whatever  resource manager) to handle job
> cleanup after failures; it is their responsibility after all, and they
> have better knowledge about the health and availability of nodes than
> any launcher can hope for.
> 
> I helped write an mvapich launcher used internally called mvrun, which
> was used for several years. I wrote a lot of logic to run down and stop
> all processes when one had failed, which I understand you have as well.
> We came to the conclusion that slurm was in a better position to handle
> such failures, and in fact did it more effectively. For example if slurm
> detects a node has failed, it will stop the job, allocate an additional
> free node to make up the deficit, then relaunch. It more difficult (to
> put it mildly) for a job launcher to do that.
> 
> Thanks again,
> Federico
> 
> -Original Message-
> From: Ralph H Castain [mailto:r...@lanl.gov]
> Sent: Tuesday, June 17, 2008 1:09 PM
> To: Sacerdoti, Federico; Open MPI Users 
> Subject: Re: [OMPI users] SLURM and OpenMPI
> 
> I can believe 1.2.x has problems in that regard. Some of that has
> nothing to
> do with slurm and reflects internal issues with 1.2.
> 
> We have made it much more resistant to those problems in the upcoming
> 1.3
> release, but there is no plan to retrofit those changes to 1.2. Part of
> the
> problem was that we weren't using the --kill-on-bad-exit flag when we
> called
> srun internally, which has been fixed for 1.3.
> 
> BTW: we actually do use srun to launch the daemons - we just call it
> internally from inside orterun. The only real difference is that we use
> orterun to setup the cmd line and then tell the daemons what they need
> to
> do. The issues you are seeing relate to our ability to detect that srun
> has
> failed, and/or that one or more daemons have failed to launch or do
> something they were supposed to do. The 1.2 system has problems in that
> regard, which was one motivation for the 1.3 overhaul.
> 
> I would argue that slurm allowing us to attempt to launch on a
> no-longer-valid allocation is a slurm issue, not OMPI's. As I said, we
> use
> srun to launch the daemons - the only reason we hang is that srun is not
> returning with an error. I've seen this on other systems as well, but
> have
> no real answer - if slurm doesn't indicate an error has occurred, I'm
> not
> sure what I can do about it.
> 
> We are unlikely to use srun to directly launch jobs (i.e., to have slurm
> directly launch the job from an srun cmd line without mpirun) anytime
> soon.
> It isn't clear there is enough benefit to justify the rather large
> effort,
> especially considering what would be required to maintain scalability.
> Decisions on all that are still pending, though, which means any
> significant
> change in that regard wouldn't be released until sometime next year.
> 
> Ralph
> 
> On 6/17/08 10:39 AM, "Sacerdoti, Federico"
>  wrote:
> 
>> Ralph,
>

Re: [OMPI users] SLURM and OpenMPI

2008-06-17 Thread Ralph H Castain
I can believe 1.2.x has problems in that regard. Some of that has nothing to
do with slurm and reflects internal issues with 1.2.

We have made it much more resistant to those problems in the upcoming 1.3
release, but there is no plan to retrofit those changes to 1.2. Part of the
problem was that we weren't using the --kill-on-bad-exit flag when we called
srun internally, which has been fixed for 1.3.

BTW: we actually do use srun to launch the daemons - we just call it
internally from inside orterun. The only real difference is that we use
orterun to setup the cmd line and then tell the daemons what they need to
do. The issues you are seeing relate to our ability to detect that srun has
failed, and/or that one or more daemons have failed to launch or do
something they were supposed to do. The 1.2 system has problems in that
regard, which was one motivation for the 1.3 overhaul.

I would argue that slurm allowing us to attempt to launch on a
no-longer-valid allocation is a slurm issue, not OMPI's. As I said, we use
srun to launch the daemons - the only reason we hang is that srun is not
returning with an error. I've seen this on other systems as well, but have
no real answer - if slurm doesn't indicate an error has occurred, I'm not
sure what I can do about it.

We are unlikely to use srun to directly launch jobs (i.e., to have slurm
directly launch the job from an srun cmd line without mpirun) anytime soon.
It isn't clear there is enough benefit to justify the rather large effort,
especially considering what would be required to maintain scalability.
Decisions on all that are still pending, though, which means any significant
change in that regard wouldn't be released until sometime next year.

Ralph

On 6/17/08 10:39 AM, "Sacerdoti, Federico"
 wrote:

> Ralph,
> 
> I was wondering what the status of this feature was (using srun to
> launch orted daemons)? I have two new bug reports to add from our
> experience using orterun from 1.2.6 on our 4000 CPU infiniband cluster.
> 
> 1. Orterun will happily hang if it is asked to run on an invalid slurm
> job, e.g. if the job has exceeded its timelimit. This would be trivially
> fixed if you used srun to launch, as they would fail with non-zero exit
> codes.
> 
> 2. A very simple orterun invocation hangs instead of exiting with an
> error. In this case the executable does not exist, and we would expect
> orterun to exit non-zero. This has caused
> headaches with some workflow management script that automatically start
> jobs.
> 
> salloc -N2 -p swdev orterun dummy-binary-I-dont-exist
> [hang]
> 
> orterun dummy-binary-I-dont-exist
> [hang]
> 
> Thanks,
> Federico
> 
> -Original Message-
> From: Sacerdoti, Federico
> Sent: Friday, March 21, 2008 5:41 PM
> To: 'Open MPI Users'
> Subject: RE: [OMPI users] SLURM and OpenMPI
> 
> 
> Ralph wrote:
> "I don't know if I would say we "interfere" with SLURM - I would say
> that we
> are only lightly integrated with SLURM at this time. We use SLURM as a
> resource manager to assign nodes, and then map processes onto those
> nodes
> according to the user's wishes. We chose to do this because srun applies
> its
> own load balancing algorithms if you launch processes directly with it,
> which leaves the user with little flexibility to specify their desired
> rank/slot mapping. We chose to support the greater flexibility."
>  
> Ralph, we wrote a launcher for mvapich that uses srun to launch but
> keeps tight control of where processes are started. The way we did it
> was to force srun to launch a single process on a particular node.
> 
> The launcher calls many of these:
>  srun --jobid $JOBID -N 1 -n 1 -w host005 CMD ARGS
> 
> Hope this helps (and we are looking forward to a tighter orterun/slurm
> integration as you know).
> 
> Regards,
> Federico
> 
> -Original Message-
> From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On
> Behalf Of Ralph Castain
> Sent: Thursday, March 20, 2008 6:41 PM
> To: Open MPI Users 
> Cc: Ralph Castain
> Subject: Re: [OMPI users] SLURM and OpenMPI
> 
> Hi there
> 
> I am no slurm expert. However, it is our understanding that
> SLURM_TASKS_PER_NODE means the number of slots allocated to the job, not
> the
> number of tasks to be executed on each node. So the 4(x2) tells us that
> we
> have 4 slots on each of two nodes to work with. You got 4 slots on each
> node
> because you used the -N option, which told slurm to assign all slots on
> that
> node to this job - I assume you have 4 processors on your nodes. OpenMPI
> parses that string to get the allocation, then maps the number of
> specified
> processes against it.
> 
> It is possible that the interpretation of SLURM_TASKS_PER_NODE is
> different
> when used to allocate as opposed to directly launch processes. Our
> typical
> usage is for someone to do:
> 
> srun -N 2 -A
> mpirun -np 2 helloworld
> 
> In other words, we use srun to create an allocation, and then run mpirun
> separately within it.
> 
> 
> I am t

Re: [OMPI users] Application Context and OpenMPI 1.2.4

2008-06-17 Thread Ralph H Castain
Hi Pat

A friendly elf forwarded this to me, so please be sure to explicitly include
me on any reply.

Was that the only error message you received? I would have expected a trail
of "error_log" outputs that would help me understand where this came from.
If not, I can give you some debug flags to set once I know the environment.

Usually, that error indicates a mismatch between the backend daemon and
mpirun - one is from 1.2.4, for example, while another was from some other
build - but it is hard to tell without seeing some more error output.

I assume this is using ssh as a launch environment?

Thanks
Ralph



On 6/16/08 7:37 AM, "pat.o'bry...@exxonmobil.com"
 wrote:

> 
> I am having a problem using  an "application context" with OpenMPI
> 1.2.4.
> My invocation of "mpirun" is shown below along with the "--app" file.
> 
> Invocation:
>  export LD_LIBRARY_PATH="/usr/local/openmpi-1.2.4/gnu/lib"
>  /usr/local/openmpi-1.2.4/gnu/bin/mpirun --app /my_id/appschema
> 
> Contents of "--app" file:
> -np 1 /my_id/Gnu/hello_mpi
> 
> Ldd of hello_mpi:
>  ldd hello_mpi
> libm.so.6 => /lib64/tls/libm.so.6 (0x002a9566c000)
> libmpi.so.0 => /usr/local/openmpi-1.2.4/gnu/lib/libmpi.so.0
> (0x002a957f3000)
> libopen-rte.so.0 => /usr/local/openmpi-1.2.4
> /gnu/lib/libopen-rte.so.0 (0x002a9598e000)
> libopen-pal.so.0 => /usr/local/openmpi-1.2.4
> /gnu/lib/libopen-pal.so.0 (0x002a95ae9000)
> libdl.so.2 => /lib64/libdl.so.2 (0x002a95c47000)
> libnsl.so.1 => /lib64/libnsl.so.1 (0x002a95d4b000)
> libutil.so.1 => /lib64/libutil.so.1 (0x002a95e62000)
> libpthread.so.0 => /lib64/tls/libpthread.so.0
> (0x002a95f65000)
> libc.so.6 => /lib64/tls/libc.so.6 (0x002a9607b000)
> /lib64/ld-linux-x86-64.so.2 (0x002a95556000)
> 
> Error message:
>ORTE_ERROR_LOG: Data unpack had inadequate space in file
> dss/dss_unpack.c at line 90
> 
> Any help would be greatly appreciated.
> 
> Thanks,
> Pat O'Bryant
> 
> 
> 
> J.W. (Pat) O'Bryant,Jr.
> Business Line Infrastructure
> Technical Systems, HPC
> Office: 713-431-7022
> 
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users




Re: [OMPI users] specifying hosts in mpi_spawn()

2008-06-02 Thread Ralph H Castain
Appreciate the clarification. Unfortunately, the answer is "no" for any of
our current releases. We only use the "host" info argument to tell us which
nodes to use - the info has no bearing on the eventual mapping of ranks to
nodes. Repeated entries are simply ignored.

I was mainly asking for the version to check if you were working with our
svn trunk. The upcoming 1.3 release does support mapping such as you
describe. However, it currently only supports it for entries in a hostfile,
not as specified via -host or in the host info argument.

Historically, we have maintained a direct correspondence between hostfile
and -host operations - i.e., whatever you can do with a hostfile could also
be done via -host. I'll have to discuss with the developers whether or not
to extend this to sequential mapping of ranks.

The short answer, therefore, is that we don't support what you are
requesting at this time, and may not support it in 1.3 (though you could get
around that perhaps by putting the ordering in a file).

Ralph
 


On 5/30/08 11:32 AM, "Bruno Coutinho"  wrote:

> I'm using open mpi 1.2.6 from the open mpi site, but I can switch to another
> version if necessary.
> 
> 
> 2008/5/30 Ralph H Castain :
>> I'm afraid I cannot answer that question without first knowing what version
>> of Open MPI you are using. Could you provide that info?
>> 
>> Thanks
>> Ralph
>> 
>> 
>> 
>> On 5/29/08 6:41 PM, "Bruno Coutinho"  wrote:
>> 
>>> > How mpi handles the host string passed in the info argument to
>>> > mpi_comm_spawn() ?
>>> >
>>> > if I set host to:
>>> > "host1,host2,host3,host2,host2,host1"
>>> >
>>> > then ranks 0 and 5 will run in host1, ranks 1,3,4 in host 2 and rank 3
>>> > in host3?
>>> > ___
>>> > users mailing list
>>> > us...@open-mpi.org
>>> > http://www.open-mpi.org/mailman/listinfo.cgi/users
>> 
>> 
>> ___
>> users mailing list
>> us...@open-mpi.org
>> http://www.open-mpi.org/mailman/listinfo.cgi/users
>> 
> 




Re: [OMPI users] specifying hosts in mpi_spawn()

2008-05-30 Thread Ralph H Castain
I'm afraid I cannot answer that question without first knowing what version
of Open MPI you are using. Could you provide that info?

Thanks
Ralph



On 5/29/08 6:41 PM, "Bruno Coutinho"  wrote:

> How mpi handles the host string passed in the info argument to
> mpi_comm_spawn() ?
> 
> if I set host to:
> "host1,host2,host3,host2,host2,host1"
> 
> then ranks 0 and 5 will run in host1, ranks 1,3,4 in host 2 and rank 3
> in host3?
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users




Re: [OMPI users] Proper use of sigaction in Open MPI?

2008-04-24 Thread Ralph H Castain
I have never tested this before, so I could be wrong. However, my best guess
is that the following is happening:

1. you trap the signal and do your cleanup. However, when your proc now
exits, it does not exit with a status of "terminated-by-signal". Instead, it
exits normally.

2. the local daemon sees the proc exit, but since it exit'd normally, it
takes no action to abort the job. Hence, mpirun has no idea that anything
"wrong" has happened, nor that it should do anything about it.

3. if you re-raise the signal, the proc now exits with
"terminated-by-signal", so the abort procedure works as intended.

Since you call mpi_finalize before leaving, even the upcoming 1.3 release
would be "fooled" by this behavior. It will again think that the proc exit'd
normally, and happily wait for all the procs to "complete".

Now, if -all- of your procs receive this signal and terminate, then the
system should shutdown. But I gather from your note that this isn't the case
- that only a subset, perhaps only one, of the procs is taking this action?

If all of the procs are exiting, then it is possible that there is a bug in
the 1.2 release that is getting confused by the signals. Mpirun does trap
SIGTERM to order a clean abort of all procs, so it is possible that a race
condition is getting activated and causing mpirun to hang. Unfortunately,
that can happen in the 1.2 series. The 1.3 release should be more robust in
that regard.

I don't think what you are doing will cause any horrid problems. Like I
said, I have never tried something like this, so I might be surprised.

But if you job cleans up the way you want, I certainly wouldn't worry about
it. At the worst, there might be some dangling tmp files from Open MPI.

Ralph



On 4/24/08 8:51 AM, "Jeff Squyres (jsquyres)"  wrote:

> Thoughts?
> 
> Is this a "fixed in 1.3" issue?
> 
> -jms
> Sent from my PDA.  No type good.
> 
>  -Original Message-
> From:   Keller, Jesse [mailto:jesse.kel...@roche.com]
> Sent:   Thursday, April 24, 2008 09:35 AM Eastern Standard Time
> To: us...@open-mpi.org
> Subject:[OMPI users] Proper use of sigaction in Open MPI?
> 
> Hello, all -
> 
> 
> 
> I have an OpenMPI application that generates a file while it runs.  No big
> deal.  However, I'd like to delete the partial file if the job is aborted via
> a user signal.  In a non-MPI application, I'd use sigaction to intercept the
> SIGTERM and delete the open files there.  I'd then call the "old" signal
> handler.   When I tried this with my OpenMPI program, the signal was caught,
> the files deleted, the processes exited, but the MPI exec command as a whole
> did not exit.   This is the technique, by the way, that was described in this
> IBM MPI document:
> 
> 
> 
> http://publib.boulder.ibm.com/infocenter/clresctr/vxrx/index.jsp?topic=/com.ib
> m.cluster.pe.doc/pe_linux42/am106l0037.html
> 
> 
> 
> My question is, what is the "right" way to do this under OpenMPI?  The only
> way I got the thing to work was by resetting the sigaction to the old handler
> and re-raising the signal.  It seems to work, but I want to know if I am going
> to get "bit" by this.  Specifically, am I "closing" MPI correctly by doing
> this?
> 
> 
> 
> I am running OpenMPI 1.2.5 under Fedora 8 on Linux in a x86_64 environment.
> My compiler is gcc 4.1.2.  This behavior happens when all processes are
> running on the same node using shared memory and between nodes when using TCP
> transport.  I don't have access to any other transport.
> 
> 
> 
> Thanks for your help.
> 
> 
> 
> Jesse Keller
> 
> 454 Life Sciences
> 
> 
> 
> Here's a code snippet to demonstrate what I'm talking about.
> 
> 
> 
> ----------------------------------------------------------------------
> 
> #include <mpi.h>
> #include <signal.h>
> 
> /* Defined elsewhere in the application. */
> void UnlinkOpenedFiles();      /* deletes partial output files */
> void doSomeMPIComputation();
> 
> struct sigaction sa_old_term;  /* Global. */
> 
> void
> SIGTERM_handler(int signal, siginfo_t *siginfo, void *a)
> {
>     UnlinkOpenedFiles(); /* Global function to delete partial files. */
> 
>     /* The commented code doesn't work. */
>     //if (sa_old_term.sa_sigaction)
>     //{
>     //  sa_old_term.sa_flags = SA_SIGINFO;
>     //  (*sa_old_term.sa_sigaction)(signal, siginfo, a);
>     //}
> 
>     /* Restore the previous handler and re-raise the signal. */
>     sigaction(SIGTERM, &sa_old_term, NULL);
>     raise(signal);
> }
> 
> int main(int argc, char **argv)
> {
>     MPI::Init(argc, argv);
> 
>     struct sigaction sa_term;
>     sigemptyset(&sa_term.sa_mask);
>     sa_term.sa_flags = SA_SIGINFO;
>     sa_term.sa_sigaction = SIGTERM_handler;
>     sigaction(SIGTERM, &sa_term, &sa_old_term);
> 
>     doSomeMPIComputation();
>     MPI::Finalize();
>     return 0;
> }
> 
> 
> 
> 
> 
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users






[OMPI users] FW: problems with hostfile when doing MPMD

2008-04-14 Thread Ralph H Castain
Hi Jody

I believe this was intended for the Users mailing list, so I'm sending the
reply there.

We do plan to provide more explanation on these in the 1.3 release - believe
me, you are not alone in puzzling over all the configuration params! Many of
us in the developer community also sometimes wonder what they all do.

Sorry it is confusing - hopefully, it will become clearer soon

Ralph



-- Forwarded Message
> From: jody 
> Date: Mon, 14 Apr 2008 10:23:00 +0200
> To: Ralph Castain 
> Subject: Re: [OMPI users] problems with hostfile when doing MPMD
> 
> Ralph, Rolf
> 
> Thanks for your suggestion.
> After i rebuilt --enable-heterogeneous it did indeed work.
> Fortunately i have only 8 machines in my cluster otherwise
> listing all the nodes in the commandline would be quite exhausting.
> 
> BTW: is there a sort of overview over all the parameters one can pass
> to configure?
> In the FAQ some of them are spread across all questions, and searching
> on the MPI
> also did not work.
> Given an open-mpi installation, are all parameters given to configure
> listed there?
> I found "Heterogeneous support : yes", and "Prefix : /opt/openmpi"
> which are the ones
> i used. As to the others, i assume they are default settings. But if i
> would want to change
> any of those settings, it would be difficult to find the parameter name for
> it.
> 
> Jody
> 
> On Mon, Apr 14, 2008 at 1:14 AM, Ralph Castain  wrote:
>> I believe this -should- work, but can't verify it myself. The most important
>>  thing is to be sure you built with --enable-heterogeneous or else it will
>>  definitely fail.
>> 
>>  Ralph
>> 
>> 
>> 
>> 
>> 
>>  On 4/10/08 7:17 AM, "Rolf Vandevaart"  wrote:
>> 
>>> 
>>> On a CentOS Linux box, I see the following:
>>> 
 grep 113 /usr/include/asm-i386/errno.h
>>> #define EHOSTUNREACH 113 /* No route to host */
>>> 
>>> I have also seen folks do this to figure out the errno.
>>> 
 perl -e 'die$!=113'
>>> No route to host at -e line 1.
>>> 
>>> I am not sure why this is happening, but you could also check the Open
>>> MPI User's Mailing List Archives where there are other examples of
>>> people running into this error.  A search of "113" had a few hits.
>>> 
>>> http://www.open-mpi.org/community/lists/users
>>> 
>>> Also, I assume you would see this problem with or without the
>>> MPI_Barrier if you add this parameter to your mpirun line:
>>> 
>>>  --mca mpi_preconnect_all 1
>>> 
>>> The MPI_Barrier is causing the bad behavior because by default
>>> connections are setup up lazily. Therefore only when the MPI_Barrier
>>> call is made and we start communicating and establishing connections do
>>> we start seeing the communication problems.
>>> 
>>> Rolf
>>> 
>>> jody wrote:
 Rolf,
 I was able to run hostname on the two noes that way,
 and also a simplified version of my testprogram (without a barrier)
 works. Only MPI_Barrier shows bad behaviour.
 
 Do you know what this message means?
 [aim-plankton][0,1,2][btl_tcp_endpoint.c:
 572:mca_btl_tcp_endpoint_complete_connect]
 connect() failed with errno=113
 Does it give an idea what could be the problem?
 
 Jody
 
 On Thu, Apr 10, 2008 at 2:20 PM, Rolf Vandevaart
  wrote:
> This worked for me although I am not sure how extensive our 32/64
> interoperability support is.  I tested on Solaris using the TCP
> interconnect and a 1.2.5 version of Open MPI.  Also, we configure
> with
> the --enable-heterogeneous flag which may make a difference here.
> Also
> this did not work for me over the sm btl.
> 
> By the way, can you run a simple /bin/hostname across the two nodes?
> 
> 
>  burl-ct-v20z-4 61 =>/opt/SUNWhpc/HPC7.1/bin/mpicc -m32 simple.c -o
> simple.32
>  burl-ct-v20z-4 62 =>/opt/SUNWhpc/HPC7.1/bin/mpicc -m64 simple.c -o
> simple.64
>  burl-ct-v20z-4 63 =>/opt/SUNWhpc/HPC7.1/bin/mpirun -gmca
> btl_tcp_if_include bge1 -gmca btl sm,self,tcp -host burl-ct-v20z-4 -
> np 3
> simple.32 : -host burl-ct-v20z-5 -np 3 simple.64
> [burl-ct-v20z-4]I am #0/6 before the barrier
> [burl-ct-v20z-5]I am #3/6 before the barrier
> [burl-ct-v20z-5]I am #4/6 before the barrier
> [burl-ct-v20z-4]I am #1/6 before the barrier
> [burl-ct-v20z-4]I am #2/6 before the barrier
> [burl-ct-v20z-5]I am #5/6 before the barrier
> [burl-ct-v20z-5]I am #3/6 after the barrier
> [burl-ct-v20z-4]I am #1/6 after the barrier
> [burl-ct-v20z-5]I am #5/6 after the barrier
> [burl-ct-v20z-5]I am #4/6 after the barrier
> [burl-ct-v20z-4]I am #2/6 after the barrier
> [burl-ct-v20z-4]I am #0/6 after the barrier
>  burl-ct-v20z-4 64 =>/opt/SUNWhpc/HPC7.1/bin/mpirun -V mpirun (Open
> MPI) 1.2.5r16572
> 
> Report bugs to http://www.open-mpi.org/community/help/
>  burl-ct-v20z-4 65 =>
> 
> 
> 
> 
> jody wrote:
>> i narrowed it down:
>> The majority of processes get s

Re: [OMPI users] Need explanation for the following ORTE error message

2008-01-23 Thread Ralph H Castain



On 1/23/08 8:26 AM, "David Gunter"  wrote:

> A user of one of our OMPI 1.2.3 builds encountered the following error
> message during an MPI job run:
> 
> ORTE_ERROR_LOG: File read failure in file
> util/universe_setup_file_io.c at line 123

It means that at some point in the past, an mpirun attempted to startup,
started to write a file that includes info on its name and contact info, and
then was aborted. The user subsequently restarted the job, it saw the file
and attempted to read it, but the info in the file was incomplete.

This can be ignored - we eliminated that handshake from future versions, so
you'll never see it after 1.2.

Ralph

> 
> He reported that the job ran normally other than that but we are
> wondering what this message means.
> 
> Thanks,
> david
> --
> David Gunter
> HPC-3: Parallel Tools Team
> Los Alamos National Laboratory
> 
> 
> 
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users




Re: [OMPI users] orte in persistent mode

2008-01-02 Thread Ralph H Castain
Hi Neeraj

No, we still don't support having a persistent set of daemons acting as some
kind of "virtual machine" like LAM/MPI did. We at one time had talked about
adding it. However, our most recent efforts have actually taken us away from
supporting that mode of operation. As a result, I very much doubt we will
support it anytime in the foreseeable future.

Just to clarify: this isn't a "problem" with the existing system. It was a
design decision not to support that mode of operation. We had considered
revising that decision, but other considerations have led us further from a
design that would support it. It seems doubtful that we will do so anytime
soon.

Ralph




On 12/31/07 1:26 AM, "Neeraj Chourasia"  wrote:

> Dear All,
> 
>  I am wondering if ORTE can be run in persistent mode. It has
> already been raised in Mailing list (
> http://www.open-mpi.org/community/lists/users/2006/03/0939.php)
> ,  where it was said that the problem is still there. I just want to
> know, if its fixed or being fixed ?
> 
> Reason, why i am looking at is in large clusters, mpirun takes lot
> of time starting orted (by ssh) on remote nodes. If orte is already
> running, hopefully we can save considerable time.
> Any comments is appreciated.
> 
> -Neeraj
> 
> 
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users




Re: [OMPI users] Torque and OpenMPI 1.2

2007-12-20 Thread Ralph H Castain
For anyone truly interested, the revised hostfile behavior to be supported
beginning with release 1.3 is described on the Open MPI wiki:

https://svn.open-mpi.org/trac/ompi/wiki/HostFilePlan

Many thanks to the folks from Sun for providing that summary!
Ralph


On 12/19/07 3:17 PM, "Ralph H Castain"  wrote:

> It is fully implemented, but on my development branch at the moment. We hope
> to bring that over to the trunk late Jan - primarily need to complete some
> work on MPI-2 dynamic process management and give Josh a chance to repair
> the checkpoint/restart functionality before we bring it over.
> 
> Ralph
> 
> 
> 
> On 12/19/07 3:07 PM, "Adams, Brian M"  wrote:
> 
>> Ralph,
>> 
>> Thanks for the clarification as I'm dealing with workarounds for this at
>> Sandia as well...
>> 
>> I might have missed this earlier in the dialog, but is this capability
>> in the SVN trunk right now, or still on the TODO list?
>> 
>> Brian
>> 
>> Brian M. Adams, PhD (bria...@sandia.gov)
>> Optimization and Uncertainty Estimation
>> Sandia National Laboratories
>> P.O. Box 5800, Mail Stop 1318
>> Albuquerque, NM 87185-1318
>> Voice: 505-284-8845, FAX: 505-284-2518
>> 
>>> -Original Message-
>>> From: users-boun...@open-mpi.org
>>> [mailto:users-boun...@open-mpi.org] On Behalf Of Ralph H Castain
>>> Sent: Wednesday, December 19, 2007 2:35 PM
>>> To: Open MPI Users ; pat.o'bry...@exxonmobil.com
>>> Cc: Castain, Ralph H. (LANL)
>>> Subject: Re: [OMPI users] Torque and OpenMPI 1.2
>>> 
>>> 
>>> Open MPI 1.3 will support use of the hostfile and the tm
>>> launcher simultaneously. It will work slightly differently,
>>> though, with respect to the hostfile:
>>> 
>>> 1. PBS_NODEFILE will be read to obtain a complete list of
>>> what has been allocated to us
>>> 
>>> 2. you will be allowed to provide a hostfile for each
>>> app_context as a separate entry to define the hosts to be
>>> used for that specific app_context.
>>> The hosts in your hostfile, however, must be included in the
>>> PBS_NODEFILE.
>>> 
>>> Basically, the hostfile argument will serve as a filter to
>>> the hosts provided via PBS_NODEFILE. We will use the TM
>>> launcher (unless, of course, you tell us to do otherwise), so
>>> the issues I mentioned before will go away.
>>> 
>>> There will be a FAQ entry describing the revised hostfile
>>> behavior in some detail. We think the change will help
>>> rationalize the behavior so it is more consistent across all
>>> the different use-cases people have invented. ;-)
>>> 
>>> Hope that helps
>>> Ralph
>>> ___
>>> users mailing list
>>> us...@open-mpi.org
>>> http://www.open-mpi.org/mailman/listinfo.cgi/users
>>> 
>>> 
>> 
> 
> 




Re: [OMPI users] mpirun: specify multiple install prefixes

2007-12-20 Thread Ralph H Castain
I'm afraid not - nor is it in the plans for 1.3 either. I'm afraid it fell
through the cracks as the needs inside the developer community moved into
other channels.

I'll raise the question internally and see if people feel we should do this.
It wouldn't be hard to put it into 1.3 at this point, but will be very hard
to do so if not done very soon.

Thanks for the reminder!
Ralph



On 12/14/07 9:45 AM, "Pignot Geoffroy"  wrote:

> Hi,
> 
> I just would like to known if this functionality (a prefix field in
> hostfile if i understand well ) has been integrated in the 1.2.4 ??
> 
> Thanks for your answer
> 
> --- On Mar 22, 2007, at 10:38 AM, Ralph Castain wrote:
> We had a nice chat about this on the OpenRTE telecon this morning. The
> question of what to do with multiple prefix's has been a long-running
> issue,
> most recently captured in bug trac report #497. The problem is that
> prefix
> is intended to tell us where to find the ORTE/OMPI executables, and
> therefore is associated with a node - not an app_context. What we
> haven't
> been able to define is an appropriate notation that a user can exploit
> to
> tell us the association.
> This issue has arisen on several occasions where either (a) users have
> heterogeneous clusters with a common file system, so the prefix must be
> adjusted on each *type* of node to point to the correct type of
> binary; and
> (b) for whatever reason, typically on rsh/ssh clusters, users have
> installed
> the binaries in different locations on some of the nodes. In this latter
> case, the reports have been from homogeneous clusters, so the *type* of
> binary was never the issue - it just wasn't located where we expected.
> Sun's solution is (I believe) what most of us would expect - they locate
> their executables in the same relative location on all their nodes. The
> binary in that location is correct for that local architecture. This
> requires, though, that the "prefix" location not be on a common file
> system.
> Unfortunately, that isn't the case with LANL's roadrunner, nor can we
> expect
> that everyone will follow that sensible approach :-). So we need a
> notation
> to support the "exception" case where someone needs to truly specify
> prefix
> versus node(s).
> We discussed a number of options, including auto-detecting the local
> arch
> and appending it to the specified "prefix" and several others. After
> discussing them, those of us on the call decided that adding a field
> to the
> hostfile that specifies the prefix to use on that host would be the best
> solution. This could be done on a cluster-level basis, so - although
> it is
> annoying to create the data file - at least it would only have to be
> done
> once.
> Again, this is the exception case, so requiring a little inconvenience
> seems
> a reasonable thing to do.
> Anyone have heartburn and/or other suggestions? If not, we might start
> to
> play with this next week. We would have to do some small modifications
> to
> the RAS, RMAPS, and PLS components to ensure that any multi-prefix
> info gets
> correctly propagated and used across all platforms for consistent
> behavior.
> Ralph
> 
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users




Re: [OMPI users] Torque and OpenMPI 1.2

2007-12-19 Thread Ralph H Castain
It is fully implemented, but on my development branch at the moment. We hope
to bring that over to the trunk late Jan - primarily need to complete some
work on MPI-2 dynamic process management and give Josh a chance to repair
the checkpoint/restart functionality before we bring it over.

Ralph



On 12/19/07 3:07 PM, "Adams, Brian M"  wrote:

> Ralph,
> 
> Thanks for the clarification as I'm dealing with workarounds for this at
> Sandia as well...
> 
> I might have missed this earlier in the dialog, but is this capability
> in the SVN trunk right now, or still on the TODO list?
> 
> Brian
> 
> Brian M. Adams, PhD (bria...@sandia.gov)
> Optimization and Uncertainty Estimation
> Sandia National Laboratories
> P.O. Box 5800, Mail Stop 1318
> Albuquerque, NM 87185-1318
> Voice: 505-284-8845, FAX: 505-284-2518
> 
>> -Original Message-
>> From: users-boun...@open-mpi.org
>> [mailto:users-boun...@open-mpi.org] On Behalf Of Ralph H Castain
>> Sent: Wednesday, December 19, 2007 2:35 PM
>> To: Open MPI Users ; pat.o'bry...@exxonmobil.com
>> Cc: Castain, Ralph H. (LANL)
>> Subject: Re: [OMPI users] Torque and OpenMPI 1.2
>> 
>> 
>> Open MPI 1.3 will support use of the hostfile and the tm
>> launcher simultaneously. It will work slightly differently,
>> though, with respect to the hostfile:
>> 
>> 1. PBS_NODEFILE will be read to obtain a complete list of
>> what has been allocated to us
>> 
>> 2. you will be allowed to provide a hostfile for each
>> app_context as a separate entry to define the hosts to be
>> used for that specific app_context.
>> The hosts in your hostfile, however, must be included in the
>> PBS_NODEFILE.
>> 
>> Basically, the hostfile argument will serve as a filter to
>> the hosts provided via PBS_NODEFILE. We will use the TM
>> launcher (unless, of course, you tell us to do otherwise), so
>> the issues I mentioned before will go away.
>> 
>> There will be a FAQ entry describing the revised hostfile
>> behavior in some detail. We think the change will help
>> rationalize the behavior so it is more consistent across all
>> the different use-cases people have invented. ;-)
>> 
>> Hope that helps
>> Ralph
>> ___
>> users mailing list
>> us...@open-mpi.org
>> http://www.open-mpi.org/mailman/listinfo.cgi/users
>> 
>> 
> 




Re: [OMPI users] Torque and OpenMPI 1.2

2007-12-19 Thread Ralph H Castain

Open MPI 1.3 will support use of the hostfile and the tm launcher
simultaneously. It will work slightly differently, though, with respect to
the hostfile:

1. PBS_NODEFILE will be read to obtain a complete list of what has been
allocated to us

2. you will be allowed to provide a hostfile for each app_context as a
separate entry to define the hosts to be used for that specific app_context.
The hosts in your hostfile, however, must be included in the PBS_NODEFILE.

Basically, the hostfile argument will serve as a filter to the hosts
provided via PBS_NODEFILE. We will use the TM launcher (unless, of course,
you tell us to do otherwise), so the issues I mentioned before will go away.

There will be a FAQ entry describing the revised hostfile behavior in some
detail. We think the change will help rationalize the behavior so it is more
consistent across all the different use-cases people have invented. ;-)

Hope that helps
Ralph


On 12/19/07 2:05 PM, "pat.o'bry...@exxonmobil.com"
 wrote:

> Ralph,
> Thanks for the information. I am assuming OpenMPI 1.3 will support
> the
> "-hostfile" without the extra parms. Will 1.3 also carry the same
> restrictions you list below?
>   Pat
> 
> J.W. (Pat) O'Bryant,Jr.
> Business Line Infrastructure
> Technical Systems, HPC
> Office: 713-431-7022
> 
> 
> 
> 
> From: Ralph H Castain (sent by users-bounces@open-mpi.org)
> Date: 12/19/07 10:10 AM
> To: Open MPI Users
> Cc: Ralph H Castain
> Subject: Re: [OMPI users] Torque and OpenMPI 1.2
> Please respond to: Open MPI Users
> 
> Just to be clear: what this does is tell Open MPI to launch using the
> SSH
> launcher. This will work okay, but means that Torque doesn't know
> about the
> children and cannot monitor them. It also won't work on clusters (such
> as
> the ones we have here) that do not allow you to ssh procs onto the
> backend
> nodes.
> 
> If you are going this route, you actually don't need the --with-tm
> configure
> option. Your command line basically tells the system to ignore anything
> associated with tm anyway - you are operating just as if you were in an
> ssh-only cluster.
> 
> If it works for you, that is great - just be aware of the limitations
> and
> disclaimers. I would only suggest it be used as a temporary workaround
> as
> opposed to a general practice.
> 
> Ralph
> 
>> 
>>> From: "Caird, Andrew J" 
>>> Date: December 19, 2007 9:40:27 AM EST
>>> To: "Open MPI Users" 
>>> Subject: Re: [OMPI users] Torque and OpenMPI 1.2
>>> Reply-To: Open MPI Users 
>>> 
>>> 
>>> Glad to hear that worked for you.
>>> 
>>> Full credit goes to Brock Palen who told me about this.  It turns
>>> out we also have a user who wanted to do that.  And meta-credit goes
>>> to the OMPI developers for making a consistent and flexible set of
>>> MPI tools and libraries.
>>> 
>>> --andy
>>> 
>>> 
>>>> -Original Message-
>>>> From: users-boun...@open-mpi.org
>>>> [mailto:users-boun...@open-mpi.org] On Behalf Of
>>>> pat.o'bry...@exxonmobil.com
>>>> Sent: Wednesday, December 19, 2007 9:37 AM
>>>> To: Open MPI Users
>>>> Subject: Re: [OMPI users] Torque and OpenMPI 1.2
>>>> 
>>>> Andrew,
>>>>That worked like a champ. Now my users can have it both
>>>> ways. For the
>>>> record, my control statements looked like the following:
>>>> 
>>>> /opt/openmpi-1.2.4/bin/mpirun -mca pls ^tm -np $NP -hostfile
>>>> $PBS_NODEFILE
>>>> $my_binary_path
>>>> 
>>>> My job works just fine and reports no errors. This version of
>>>> OpenMPI was
>>>> built with "--with-tm=/usr/local/pbs".
>>>> 
>>>> Thanks for your help,
>>>>  Pat
>>>> 
>>>> 
>>>> J.W. (Pat) O'Bryant,Jr.
>>>> Business Line Infrastructure
>>>> Technical Systems, HPC
>>>> Office: 713-431-7022
>>>> 
&

Re: [OMPI users] Torque and OpenMPI 1.2

2007-12-19 Thread Ralph H Castain
>>>>> Your suggestion worked. So long as I specifically state
>>>>> "--without-tm",
>>>>> the OpenMPI 1.2.4 build allows the use of "-hostfile".
>>>> Apparently, by
>>>>> default, OpenMPI 1.2.4 will incorporate Torque if it
>>>> exists, so it is
>>>>> necessary to specifically request "no Torque support".  I
>>>>> used the normal
>>>>> Torque processes to submit the job and specified "-hostfile
>>>>> $PBS_NODEFILE".
>>>>> Everything worked.
>>>>>   Thanks for your help,
>>>>>Pat
>>>>> 
>>>>> J.W. (Pat) O'Bryant,Jr.
>>>>> Business Line Infrastructure
>>>>> Technical Systems, HPC
>>>>> Office: 713-431-7022
>>>>> 
>>>>> 
>>>>> 
>>>>> 
>>>>> 
>>>>> From: Terry Frankcombe (sent by users-bounces@open-mpi.org)
>>>>> Date: 12/18/07 01:45 PM
>>>>> To: Open MPI Users
>>>>> Subject: Re: [OMPI users] Torque and OpenMPI 1.2
>>>>> Please respond to: Open MPI Users
>>>>> 
>>>>> On Tue, 2007-12-18 at 11:59 -0700, Ralph H Castain wrote:
>>>>>> Hate to be a party-pooper, but the answer is "no" in
>>>> OpenMPI 1.2. We
>>>>> don't
>>>>>> allow the use of a hostfile in a Torque environment in
>>>> that version.
>>>>>> 
>>>>>> We have changed this for v1.3, but you'll have to wait for
>>>>> that release.
>>>>> 
>>>>> 
>>>>> Can one not build OpenMPI without tm support and spawn remote
>>>>> jobs using
>>>>> the other mechanisms, using only $PBS_NODEFILE (or a
>>>> derivative of the
>>>>> file that that points to) in the script?
>>>>> 
>>>>> Ciao
>>>>> Terry
>>>>> 
>>>>> 
>>>>> --
>>>>> Dr Terry Frankcombe
>>>>> Physical Chemistry, Department of Chemistry
>>>>> Göteborgs Universitet
>>>>> SE-412 96 Göteborg Sweden
>>>>> Ph: +46 76 224 0887   Skype: terry.frankcombe
>>>>> 
>>>>> 
>>>>> ___
>>>>> users mailing list
>>>>> us...@open-mpi.org
>>>>> http://www.open-mpi.org/mailman/listinfo.cgi/users
>>>>> 
>>>>> 
>>>>> ___
>>>>> users mailing list
>>>>> us...@open-mpi.org
>>>>> http://www.open-mpi.org/mailman/listinfo.cgi/users
>>>>> 
>>>> 
>>>> ___
>>>> users mailing list
>>>> us...@open-mpi.org
>>>> http://www.open-mpi.org/mailman/listinfo.cgi/users
>>>> 
>>> 
>>> ___
>>> users mailing list
>>> us...@open-mpi.org
>>> http://www.open-mpi.org/mailman/listinfo.cgi/users
>>> 
>>> 
>>> 
>>> ___
>>> users mailing list
>>> us...@open-mpi.org
>>> http://www.open-mpi.org/mailman/listinfo.cgi/users
>>> 
>> 
>> ___
>> users mailing list
>> us...@open-mpi.org
>> http://www.open-mpi.org/mailman/listinfo.cgi/users
> 





Re: [OMPI users] Torque and OpenMPI 1.2

2007-12-18 Thread Ralph H Castain
Hate to be a party-pooper, but the answer is "no" in OpenMPI 1.2. We don't
allow the use of a hostfile in a Torque environment in that version.

We have changed this for v1.3, but you'll have to wait for that release.

Sorry
Ralph



On 12/18/07 11:12 AM, "pat.o'bry...@exxonmobil.com"
 wrote:

> Tim,
>  Will OpenMPI 1.2.1 allow the use of a "hostfile"?
>  Thanks,
>   Pat
> 
> J.W. (Pat) O'Bryant,Jr.
> Business Line Infrastructure
> Technical Systems, HPC
> Office: 713-431-7022
> 
> 
> 
> 
> From: Tim Prins (sent by users-bounces@open-mpi.org)
> Date: 12/18/07 11:57 AM
> To: Open MPI Users
> Subject: Re: [OMPI users] Torque and OpenMPI 1.2
> Please respond to: Open MPI Users
> 
> Open MPI v1.2 had some problems with the TM configuration code which was
> fixed
> in v1.2.1. So any version v1.2.1 or later should work fine (and, as you
> indicate, 1.2.4 works fine).
> 
> Tim
> 
> On Tuesday 18 December 2007 12:48:40 pm pat.o'bry...@exxonmobil.com
> wrote:
>> Jeff,
>>Here is the result of the "pbs-config". By the way, I have
> successfully
>> built OpenMPI 1.2.4 on this same system. The "config.log" for OpenMPI
> 1.2.4
>> shows the correct Torque path. That is not surprising since the
> "configure"
>> script for OpenMPI 1.2.4 uses "pbs-config" while the configure
>> script for
>> OpenMPI 1.2 does not.
>> 
> ---
>> - # pbs-config --libs
>> -L/usr/local/pbs/x86_64/lib -ltorque -Wl,--rpath
>> -Wl,/usr/local/pbs/x86_64/lib
>> 
> ---
>> -
>> 
>>Now, to address your concern about the nodes, my users are not
> "adding
>> nodes" to those provided by Torque. They are using a "proper subset"
>> of
> the
>> nodes.  Also,  I believe I read this comment on the OpenMPI web site
> which
>> seems to imply an oversight as far as the "-hostfile" is concerned.
>> 
> ---
>> 
> 
>> - Can I specify a hostfile
>> or use
>> the --host option to mpirun when running in a Torque / PBS
>> environment?
>> As of version v1.2.1, no.
>> Open MPI will fail to launch processes properly when a hostfile is
> specifed
>> on the mpirun command line, or if the mpirun [--host] option is used.
>> 
>> 
>> We're working on correcting the error. A future version of Open MPI
>> will
>> likely launch on the hosts specified either in the hostfile or via the
>> --host option as long as they are a proper subset of the hosts
>> allocated
> to
>> the Torque / PBS Pro job.
>> 
> ---
>> 
> 
>> - Thanks,
>> 
>> J.W. (Pat) O'Bryant,Jr.
>> Business Line Infrastructure
>> Technical Systems, HPC
>> Office: 713-431-7022
>> 
>> ___
>> users mailing list
>> us...@open-mpi.org
>> http://www.open-mpi.org/mailman/listinfo.cgi/users
> 
> 
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users
> 
> 
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users




Re: [OMPI users] MPI::Intracomm::Spawn and cluster configuration

2007-12-18 Thread Ralph H Castain



On 12/18/07 7:35 AM, "Elena Zhebel"  wrote:

> Thanks a lot! Now it works!
> The solution is to use mpirun -n 1 -hostfile my.hosts *.exe and pass MPI_Info
> Key to the Spawn function!
> 
> One more question: is it necessary to start my "master" program with
> mpirun -n 1 -hostfile my_hostfile -host my_master_host my_master.exe ?

No, it isn't necessary - assuming that my_master_host is the first host
listed in your hostfile! If you are only executing one my_master.exe (i.e.,
you gave -n 1 to mpirun), then we will automatically map that process onto
the first host in your hostfile.

If you want my_master.exe to go on someone other than the first host in the
file, then you have to give us the -host option.

> 
> Are there other possibilities for easy start?
> I would say just to run ./my_master.exe , but then the master process doesn't
> know about the available in the network hosts.

You can set the hostfile parameter in your environment instead of on the
command line. Just set OMPI_MCA_rds_hostfile_path = my.hosts.

You can then just run ./my_master.exe on the host where you want the master
to reside - everything should work the same.

Just as an FYI: the name of that environmental variable is going to change
in the 1.3 release, but everything will still work the same.

Hope that helps
Ralph


>  
> Thanks and regards,
> Elena
> 
> 
> -Original Message-
> From: Ralph H Castain [mailto:r...@lanl.gov]
> Sent: Monday, December 17, 2007 5:49 PM
> To: Open MPI Users ; Elena Zhebel
> Cc: Ralph H Castain
> Subject: Re: [OMPI users] MPI::Intracomm::Spawn and cluster configuration
> 
> 
> 
> 
> On 12/17/07 8:19 AM, "Elena Zhebel"  wrote:
> 
>> Hello Ralph,
>> 
>> Thank you for your answer.
>> 
>> I'm using OpenMPI 1.2.3. , compiler glibc232, Linux Suse 10.0.
>> My "master" executable runs only on the one local host, then it spawns
>> "slaves" (with MPI::Intracomm::Spawn).
>> My question was: how to determine the hosts where these "slaves" will be
>> spawned?
>> You said: "You have to specify all of the hosts that can be used by
>> your job
>> in the original hostfile". How can I specify the host file? I can not
>> find it
>> in the documentation.
> 
> Hmmm...sorry about the lack of documentation. I always assumed that the MPI
> folks in the project would document such things since it has little to do
> with the underlying run-time, but I guess that fell through the cracks.
> 
> There are two parts to your question:
> 
> 1. how to specify the hosts to be used for the entire job. I believe that is
> somewhat covered here:
> http://www.open-mpi.org/faq/?category=running#simple-spmd-run
> 
> That FAQ tells you what a hostfile should look like, though you may already
> know that. Basically, we require that you list -all- of the nodes that both
> your master and slave programs will use.
> 
> 2. how to specify which nodes are available for the master, and which for
> the slave.
> 
> You would specify the host for your master on the mpirun command line with
> something like:
> 
> mpirun -n 1 -hostfile my_hostfile -host my_master_host my_master.exe
> 
> This directs Open MPI to map that specified executable on the specified host
> - note that my_master_host must have been in my_hostfile.
> 
> Inside your master, you would create an MPI_Info key "host" that has a value
> consisting of a string "host1,host2,host3" identifying the hosts you want
> your slave to execute upon. Those hosts must have been included in
> my_hostfile. Include that key in the MPI_Info array passed to your Spawn.
> 
> We don't currently support providing a hostfile for the slaves (as opposed
> to the host-at-a-time string above). This may become available in a future
> release - TBD.
> 
> Hope that helps
> Ralph
> 
>> 
>> Thanks and regards,
>> Elena
>> 
>> -Original Message-
>> From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On
>> Behalf Of Ralph H Castain
>> Sent: Monday, December 17, 2007 3:31 PM
>> To: Open MPI Users 
>> Cc: Ralph H Castain
>> Subject: Re: [OMPI users] MPI::Intracomm::Spawn and cluster
>> configuration
>> 
>> On 12/12/07 5:46 AM, "Elena Zhebel"  wrote:
>>> 
>>> 
>>> Hello,
>>> 
>>> I'm working on a MPI application where I'm using OpenMPI instead of
>>> MPICH.
>>> 
>>> In my "master" program I call the function MPI::Intracomm::Spawn which
>> spawns
>>> "slave" processes. It is not clear for me how to sp

Re: [OMPI users] MPI::Intracomm::Spawn and cluster configuration

2007-12-17 Thread Ralph H Castain



On 12/17/07 8:19 AM, "Elena Zhebel"  wrote:

> Hello Ralph,
> 
> Thank you for your answer.
> 
> I'm using OpenMPI 1.2.3. , compiler glibc232, Linux Suse 10.0.
> My "master" executable runs only on the one local host, then it spawns
> "slaves" (with MPI::Intracomm::Spawn).
> My question was: how to determine the hosts where these "slaves" will be
> spawned?
> You said: "You have to specify all of the hosts that can be used by
> your job
> in the original hostfile". How can I specify the host file? I can not
> find it
> in the documentation.

Hmmm...sorry about the lack of documentation. I always assumed that the MPI
folks in the project would document such things since it has little to do
with the underlying run-time, but I guess that fell through the cracks.

There are two parts to your question:

1. how to specify the hosts to be used for the entire job. I believe that is
somewhat covered here:
http://www.open-mpi.org/faq/?category=running#simple-spmd-run

That FAQ tells you what a hostfile should look like, though you may already
know that. Basically, we require that you list -all- of the nodes that both
your master and slave programs will use.

2. how to specify which nodes are available for the master, and which for
the slave.

You would specify the host for your master on the mpirun command line with
something like:

mpirun -n 1 -hostfile my_hostfile -host my_master_host my_master.exe

This directs Open MPI to map that specified executable on the specified host
- note that my_master_host must have been in my_hostfile.

Inside your master, you would create an MPI_Info key "host" that has a value
consisting of a string "host1,host2,host3" identifying the hosts you want
your slave to execute upon. Those hosts must have been included in
my_hostfile. Include that key in the MPI_Info array passed to your Spawn.
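
For concreteness, a minimal sketch of what that might look like inside the
master (the slave executable name, host names, and process count below are
just placeholders):

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Comm slaves;
        MPI_Info info;
        int errcodes[3];

        MPI_Init(&argc, &argv);

        /* These hosts must also appear in my_hostfile. */
        MPI_Info_create(&info);
        MPI_Info_set(info, "host", "host1,host2,host3");

        /* Spawn 3 slave processes, restricted to the hosts named above. */
        MPI_Comm_spawn("my_slave.exe", MPI_ARGV_NULL, 3, info,
                       0, MPI_COMM_SELF, &slaves, errcodes);
        MPI_Info_free(&info);

        /* ... communicate with the slaves over the "slaves" intercomm ... */

        MPI_Comm_disconnect(&slaves);
        MPI_Finalize();
        return 0;
    }

The spawned processes can retrieve the intercommunicator back to the master
with MPI_Comm_get_parent.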

We don't currently support providing a hostfile for the slaves (as opposed
to the host-at-a-time string above). This may become available in a future
release - TBD.

Hope that helps
Ralph

> 
> Thanks and regards,
> Elena
> 
> -Original Message-
> From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On
> Behalf Of Ralph H Castain
> Sent: Monday, December 17, 2007 3:31 PM
> To: Open MPI Users 
> Cc: Ralph H Castain
> Subject: Re: [OMPI users] MPI::Intracomm::Spawn and cluster
> configuration
> 
> On 12/12/07 5:46 AM, "Elena Zhebel"  wrote:
>> 
>> 
>> Hello,
>> 
>> I'm working on a MPI application where I'm using OpenMPI instead of
>> MPICH.
>> 
>> In my "master" program I call the function MPI::Intracomm::Spawn which
> spawns
>> "slave" processes. It is not clear for me how to spawn the "slave"
> processes
>> over the network. Currently "master" creates "slaves" on the same
>> host.
>> 
>> If I use 'mpirun --hostfile openmpi.hosts' then processes are spawn
>> over
> the
>> network as expected. But now I need to spawn processes over the
>> network
> from
>> my own executable using MPI::Intracomm::Spawn, how can I achieve it?
>> 
> 
> I'm not sure from your description exactly what you are trying to do,
> nor in
> what environment this is all operating within or what version of Open
> MPI
> you are using. Setting aside the environment and version issue, I'm
> guessing
> that you are running your executable over some specified set of hosts,
> but
> want to provide a different hostfile that specifies the hosts to be
> used for
> the "slave" processes. Correct?
> 
> If that is correct, then I'm afraid you can't do that in any version
> of Open
> MPI today. You have to specify all of the hosts that can be used by
> your job
> in the original hostfile. You can then specify a subset of those hosts
> to be
> used by your original "master" program, and then specify a different
> subset
> to be used by the "slaves" when calling Spawn.
> 
> But the system requires that you tell it -all- of the hosts that are
> going
> to be used at the beginning of the job.
> 
> At the moment, there is no plan to remove that requirement, though
> there has
> been occasional discussion about doing so at some point in the future.
> No
> promises that it will happen, though - managed environments, in
> particular,
> currently object to the idea of changing the allocation on-the-fly. We
> may,
> though, make a provision for purely hostfile-based environments (i.e.,
> unmanaged) at some time in the future.
> 
> Ralph
> 
>> 
>> 
>> Thanks in advance for any help.
>> 
>> Elena
>> 
>> 
>> ___
>> users mailing list
>> us...@open-mpi.org
>> http://www.open-mpi.org/mailman/listinfo.cgi/users
> 
> 
> 
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users
> 
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users




Re: [OMPI users] MPI::Intracomm::Spawn and cluster configuration

2007-12-17 Thread Ralph H Castain



On 12/12/07 5:46 AM, "Elena Zhebel"  wrote:

>  
>  
> Hello,
>  
>  
>  
> I'm working on a MPI application where I'm using OpenMPI instead of MPICH.
>  
> In my "master" program I call the function MPI::Intracomm::Spawn which spawns
> "slave" processes. It is not clear for me how to spawn the "slave" processes
> over the network. Currently "master" creates "slaves" on the same host.
>  
> If I use 'mpirun --hostfile openmpi.hosts' then processes are spawn over the
> network as expected. But now I need to spawn processes over the network from
> my own executable using MPI::Intracomm::Spawn, how can I achieve it?
>  

I'm not sure from your description exactly what you are trying to do, nor in
what environment this is all operating within or what version of Open MPI
you are using. Setting aside the environment and version issue, I'm guessing
that you are running your executable over some specified set of hosts, but
want to provide a different hostfile that specifies the hosts to be used for
the "slave" processes. Correct?

If that is correct, then I'm afraid you can't do that in any version of Open
MPI today. You have to specify all of the hosts that can be used by your job
in the original hostfile. You can then specify a subset of those hosts to be
used by your original "master" program, and then specify a different subset
to be used by the "slaves" when calling Spawn.

But the system requires that you tell it -all- of the hosts that are going
to be used at the beginning of the job.

At the moment, there is no plan to remove that requirement, though there has
been occasional discussion about doing so at some point in the future. No
promises that it will happen, though - managed environments, in particular,
currently object to the idea of changing the allocation on-the-fly. We may,
though, make a provision for purely hostfile-based environments (i.e.,
unmanaged) at some time in the future.

Ralph

>  
>  
> Thanks in advance for any help.
>  
> Elena
>  
>  
>  ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users





Re: [OMPI users] ORTE_ERROR_LOG: Data unpack had inadequate space in file gpr_replica_cmd_processor.c at line 361

2007-12-14 Thread Ralph H Castain
You can always run locally as it doesn't startup a new daemon - hence, there
are no communications involved, which is what is causing the error message.

Check the remote nodes (and your path on those nodes) to make sure that the
Open MPI version you would pickup is the same as the one on your head node.
I know you believe you are running with the same version, but you can be
surprised - people remove the other source, for example, but forget to
remove the libraries and binaries. Or their path when we ssh the daemons
points to a place where a different version is installed (remember, the path
is often different for a login vs ssh).

What environment are you operating in - are you using rsh to launch on the
remote nodes? Are the remote nodes the same architecture as the head node?

Ralph



On 12/14/07 9:59 AM, "Qiang Xu"  wrote:

> Ralph:
> 
> I did first install OpenMPI-1.2.3 and got the same error message.
> ORTE_ERROR_LOG: Data unpack had inadequate space in file dss/dss_unpack.c at
> line 90
> ORTE_ERROR_LOG: Data unpack had inadequate space in file
> gpr_replica_cmd_processor.c at line 361
> 
> And after I reading the mailing list, I upgraded to OpenMPI-1.2.4.
> I remove the OpenMPI-1.2.3, but still show the same error message.
> 
> Now I also upgraded to gcc4.1.1, so gfortran is the fortran compiler.
> 
> ./configure --prefix=/home/qiang/OpenMPI-1.2.4/ CC=gcc F77=gfortran
> F90=gfortran
> 
> [qiang@grid11 ~]$ gcc -v
> Using built-in specs.
> Target: i386-redhat-linux
> Configured with: 
> ../configure --prefix=/usr --mandir=/usr/share/man --infodir=/usr/share/info
>  --enable-shared --enable-threads=posix --enable-checking=release
> --with-system-zlib
>  --enable-__cxa_atexit --disable-libunwind-exceptions
> --with-gxx-include-dir=/usr/include/c++/3.4.3
>  --enable-libgcj-multifile --enable-languages=c,c++,java,f95
> --enable-java-awt=gtk
>  --disable-dssi --with-java-home=/usr/lib/jvm/java-1.4.2-gcj-1.4.2.0/jre
> --with-cpu=generic
>  --host=i386-redhat-linux
> Thread model: posix
> gcc version 4.1.1 20070105 (Red Hat 4.1.1-53)
> 
> Still the problem is there. But I can run the NAS benchmark locally without
> specifying the machinefile.
>  [qiang@compute-0-1 bin]$ mpirun -n 4 mg.B.4
> 
> 
>  NAS Parallel Benchmarks 2.3 -- MG Benchmark
> 
>  No input file. Using compiled defaults
>  Size: 256x256x256  (class B)
>  Iterations:  20
>  Number of processes: 4
> 
>  Initialization time:   6.783 seconds
> 
>  Benchmark completed
>  VERIFICATION SUCCESSFUL
>  L2 Norm is   0.180056440136E-05
>  Error is 0.351679609371E-16
> 
> 
>  MG Benchmark Completed.
>  Class   =B
>  Size=  256x256x256
>  Iterations  =   20
>  Time in seconds =47.19
>  Total processes =4
>  Compiled procs  =4
>  Mop/s total =   412.43
>  Mop/s/process   =   103.11
>  Operation type  =   floating point
>  Verification=   SUCCESSFUL
>  Version =  2.3
>  Compile date=  13 Dec 2007
> 
>  Compile options:
> MPIF77   = mpif77
> FLINK= mpif77
> FMPI_LIB = -L~/MyMPI/lib -lmpi_f77
> FMPI_INC = -I~/MyMPI/include
> FFLAGS   = -O3
> FLINKFLAGS   = (none)
> RAND = (none)
> 
> 
>  Please send the results of this run to:
> 
>  NPB Development Team
>  Internet: n...@nas.nasa.gov
> 
>  If email is not available, send this to:
> 
>  MS T27A-1
>  NASA Ames Research Center
>  Moffett Field, CA  94035-1000
> 
>  Fax: 415-604-3957
> 
> 
> If I try to use multiple nodes, I got the error messages:
> ORTE_ERROR_LOG: Data unpack had inadequate space in file dss/dss_unpack.c at
> line 90
> ORTE_ERROR_LOG: Data unpack had inadequate space in file
> gpr_replica_cmd_processor.c at line 361
> 
>  But only OpenMPI-1.2.4 was installed? Did I miss something?
> 
> Qiang
> 
> 
> 
> 
> 
> 
> - Original Message -
> From: "Ralph H Castain" 
> To: ; "Qiang Xu" 
> Sent: Friday, December 14, 2007 7:34 AM
> Subject: Re: [OMPI users] ORTE_ERROR_LOG: Data unpack had inadequate space
> in file gpr_replica_cmd_processor.c at line 361
> 
> 
>> Hi Qiang
>> 
>> This error message usually indicates that you have more than one Open MPI
>> installation around, and that the backend nodes are picking up a different
>> version than mpirun is using. Check to make sure that you have a
>> consistent
>> version across all the nodes.
>> 
>&g

Re: [OMPI users] ORTE_ERROR_LOG: Data unpack had inadequate space in file gpr_replica_cmd_processor.c at line 361

2007-12-14 Thread Ralph H Castain
Hi Qiang

This error message usually indicates that you have more than one Open MPI
installation around, and that the backend nodes are picking up a different
version than mpirun is using. Check to make sure that you have a consistent
version across all the nodes.

I also noted you were building with --enable-threads. As you've probably
seen on our discussion lists, remember that Open MPI isn't really thread
safe yet. I don't think that is the problem here, but wanted to be sure you
were aware of the potential for problems.

Ralph



On 12/13/07 5:31 PM, "Qiang Xu"  wrote:

> I installed OpenMPI-1.2.4 on our cluster.
> Here is the compute node infor
>  
> [qiang@compute-0-1 ~]$ uname -a
> Linux compute-0-1.local 2.6.9-42.0.2.ELsmp #1 SMP Wed Aug 23 00:17:26 CDT 2006
> i686 i686 i386 GNU/Linux
> [qiang@compute-0-1 bin]$ gcc -v
> Reading specs from /usr/lib/gcc/i386-redhat-linux/3.4.6/specs
> Configured with: ../configure --prefix=/usr --mandir=/usr/share/man
> --infodir=/usr/share/info --enable-shared --enable-threads=posix
> --disable-checking --with-system-zlib --enable-__cxa_atexit
> --disable-libunwind-exceptions --enable-java-awt=gtk --host=i386-redhat-linux
> Thread model: posix
> gcc version 3.4.6 20060404 (Red Hat 3.4.6-8)
>  
> Then I compiled NAS bechmarks, got some warning but went through.
> [qiang@compute-0-1 NPB2.3-MPI]$ make suite
> make[1]: Entering directory `/home/qiang/NPB2.3/NPB2.3-MPI'
>=
>=  NAS Parallel Benchmarks 2.3  =
>=  MPI/F77/C=
>=
>  
> cd MG; make NPROCS=16 CLASS=B
> make[2]: Entering directory `/home/qiang/NPB2.3/NPB2.3-MPI/MG'
> make[3]: Entering directory `/home/qiang/NPB2.3/NPB2.3-MPI/sys'
> cc -g  -o setparams setparams.c
> make[3]: Leaving directory `/home/qiang/NPB2.3/NPB2.3-MPI/sys'
> ../sys/setparams mg 16 B
> make.def modified. Rebuilding npbparams.h just in case
> rm -f npbparams.h
> ../sys/setparams mg 16 B
> mpif77 -c -I~/MyMPI/include  mg.f
> mg.f: In subroutine `zran3':
> mg.f:1001: warning:
>  call mpi_allreduce(rnmu,ss,1,dp_type,
>   1
> mg.f:2115: (continued):
> call mpi_allreduce(jg(0,i,1), jg_temp,4,MPI_INTEGER,
>  2
> Argument #1 of `mpi_allreduce' is one type at (2) but is some other type at
> (1) [info -f g77 M GLOBALS]
> mg.f:1001: warning:
>  call mpi_allreduce(rnmu,ss,1,dp_type,
>   1
> mg.f:2115: (continued):
> call mpi_allreduce(jg(0,i,1), jg_temp,4,MPI_INTEGER,
>  2
> Argument #2 of `mpi_allreduce' is one type at (2) but is some other type at
> (1) [info -f g77 M GLOBALS]
> mg.f:1001: warning:
>  call mpi_allreduce(rnmu,ss,1,dp_type,
>   1
> mg.f:2139: (continued):
> call mpi_allreduce(jg(0,i,0), jg_temp,4,MPI_INTEGER,
>  2
> Argument #1 of `mpi_allreduce' is one type at (2) but is some other type at
> (1) [info -f g77 M GLOBALS]
> mg.f:1001: warning:
>  call mpi_allreduce(rnmu,ss,1,dp_type,
>   1
> mg.f:2139: (continued):
> call mpi_allreduce(jg(0,i,0), jg_temp,4,MPI_INTEGER,
>  2
> Argument #2 of `mpi_allreduce' is one type at (2) but is some other type at
> (1) [info -f g77 M GLOBALS]
> cd ../common; mpif77 -c -I~/MyMPI/include  print_results.f
> cd ../common; mpif77 -c -I~/MyMPI/include  randdp.f
> cd ../common; mpif77 -c -I~/MyMPI/include  timers.f
> mpif77  -o ../bin/mg.B.16 mg.o ../common/print_results.o ../common/randdp.o
> ../common/timers.o -L~/MyMPI/lib -lmpi_f77
> make[2]: Leaving directory `/home/qiang/NPB2.3/NPB2.3-MPI/MG'
> make[1]: Leaving directory `/home/qiang/NPB2.3/NPB2.3-MPI'
> make[1]: Entering directory `/home/qiang/NPB2.3/NPB2.3-MPI'
> But when I tried to run it, I got the following error messages:
> [qiang@compute-0-1 bin]$ mpirun -machinefile m8 -n 16 mg.C.16
> [compute-0-1.local:11144] [0,0,0] ORTE_ERROR_LOG: Data unpack had inadequate
> space in file dss/dss_unpack.c at line 90
> [compute-0-1.local:11144] [0,0,0] ORTE_ERROR_LOG: Data unpack had inadequate
> space in file gpr_replica_cmd_processor.c at line 361
> I found some info on the mailling list, but it doesn't help for my case.
> Could anyone give me some advice? Or I have to upgrade the GNU compiler?
>  
> Thanks.
>  
> Qiang
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users
> 





Re: [OMPI users] Q: Problems launching MPMD applications? ('mca_oob_tcp_peer_try_connect' error 103)

2007-12-06 Thread Ralph H Castain



On 12/5/07 8:47 AM, "Brian Dobbins"  wrote:

> Hi Josh,
> 
>> I believe the problem is that you are only applying the MCA
>> parameters to the first app context instead of all of them:
> 
>   Thank you very much... applying the parameters with -gmca works fine with the
> test case (and I'll try the actual one soon).  However, and this is minor
> since it works with method (1),...
>  
>> There are two main ways of doing this:
>>  2) Alternatively you can duplicate the MCA parameters for each app context:
> 
>   .. This actually doesn't work.  I had thought of that and tried it, and I
> still get the same connection problems.  I just rechecked this again to be
> sure. 

That is correct - the root problem here is that the command line MCA params
are not propagated to the remote daemons when we launch in 1.2. So launch of
the remote daemons fails as they are not looking at the correct interface to
link themselves into the system.

The apps themselves would have launched okay given the duplicate MCA params
as we store the params for each app_context and pass them along when the
daemon spawns them - you just can't get them launched because the daemons
fail first.

The aggregated MCA params flow through a different mechanism altogether,
which is why they work.

We have fixed this on our development trunk so the command line params get
passed - should work fine in future releases.

Ralph

> 
>   Again, many thanks for the help!
> 
>   With best wishes,
>   - Brian
> 
> 
> Brian Dobbins
> Yale University HPC
> 
> 
>  ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users





Re: [OMPI users] mca_oob_tcp_peer_try_connect problem

2007-11-29 Thread Ralph H Castain
Hi Bob

I'm afraid the person most familiar with the oob subsystem recently left the
project, so we are somewhat hampered at the moment. I don't recognize the
"Software caused connection abort" error message - it doesn't appear to be
one of ours (at least, I couldn't find it anywhere in our code base, though
I can't swear it isn't there in some dark corner), and I don't find it in my
own sys/errno.h file.

With those caveats, all I can say is that something appears to be blocking
the connection from your remote node back to the head node. Are you sure
both nodes are available on IPv4 (since you disabled IPv6)? Can you try
ssh'ing to the remote node and doing a ping to the head node using the IPv4
interface?

Do you have another method you could use to check and see if max14 will
accept connections from max15? If I interpret the error message correctly,
it looks like something in the connect handshake is being aborted. We try a
couple of times, but then give up and try other interfaces - since no other
interface is available, you get that other error message and we abort.
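
For instance, from max15 you could try something along these lines (the
address and port are just the ones from your debug output; that port only
exists while the job is up):

  ping -c 3 192.168.1.14
  telnet 192.168.1.14 38852

If the ping or the TCP connect fails on the 192.168.1.x interface, that would
point at routing or firewall trouble rather than anything inside Open MPI.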

Sorry I can't be more help - like I said, this is now a weak spot in our
coverage that needs to be rebuilt.

Ralph



On 11/28/07 2:41 PM, "Bob Soliday"  wrote:

> I am new to openmpi and have a problem that I cannot seem to solve.
> I am trying to run the hello_c example and I can't get it to work.
> I compiled openmpi with:
> 
> ./configure --prefix=/usr/local/software/openmpi-1.2.4 --disable-ipv6
> --with-openib
> 
> The hostname file contains the local host and one other node. When I
> run it I get:
> 
> 
> [soliday@max14 mpi-ex]$ /usr/local/software/openmpi-1.2.4/bin/mpirun --
> debug-daemons -mca oob_tcp_debug 1000 -machinefile hostfile -np 2
> hello_c
> [max14:31465] [0,0,0] accepting connections via event library
> [max14:31465] [0,0,0] mca_oob_tcp_init: calling orte_gpr.subscribe
> [max14:31466] [0,0,1] accepting connections via event library
> [max14:31466] [0,0,1] mca_oob_tcp_init: calling orte_gpr.subscribe
> [max14:31466] [0,0,1]-[0,0,0] mca_oob_tcp_send: tag 2
> [max14:31466] [0,0,1]-[0,0,0] mca_oob_tcp_peer_try_connect: connecting
> port 55152 to: 192.168.2.14:38852
> [max14:31466] [0,0,1]-[0,0,0] mca_oob_tcp_peer_complete_connect:
> sending ack, 0
> [max14:31465] [0,0,0] mca_oob_tcp_accept: 192.168.2.14:37255
> [max14:31465] [0,0,0]-[0,0,1] accepted: 192.168.2.14 - 192.168.2.14
> nodelay 1 sndbuf 262142 rcvbuf 262142 flags 0802
> [max14:31466] [0,0,1]-[0,0,0] connected: 192.168.2.14 - 192.168.2.14
> nodelay 1 sndbuf 262142 rcvbuf 262142 flags 0802
> [max14:31466] [0,0,1]-[0,0,0] mca_oob_tcp_recv: tag 2
> [max14:31466] [0,0,1]-[0,0,0] mca_oob_tcp_send: tag 2
> [max14:31466] [0,0,1]-[0,0,0] mca_oob_tcp_recv: tag 2
> Daemon [0,0,1] checking in as pid 31466 on host max14
> [max14:31466] [0,0,1]-[0,0,0] mca_oob_tcp_send: tag 2
> [max14:31466] [0,0,1]-[0,0,0] mca_oob_tcp_recv: tag 2
> [max15:28222] [0,0,2]-[0,0,0] mca_oob_tcp_peer_try_connect: connect to
> 192.168.1.14:38852 failed: Software caused connection abort (103)
> [max15:28222] [0,0,2]-[0,0,0] mca_oob_tcp_peer_try_connect: connect to
> 192.168.1.14:38852 failed: Software caused connection abort (103)
> [max15:28222] [0,0,2]-[0,0,0] mca_oob_tcp_peer_try_connect: connect to
> 192.168.1.14:38852 failed, connecting over all interfaces failed!
> [max15:28222] OOB: Connection to HNP lost
> [max14:31466] [0,0,1] orted_recv_pls: received message from [0,0,0]
> [max14:31466] [0,0,1] orted_recv_pls: received kill_local_procs
> [max14:31466] [0,0,1]-[0,0,0] mca_oob_tcp_send: tag 15
> [max14:31465] [0,0,0] ORTE_ERROR_LOG: Timeout in file base/
> pls_base_orted_cmds.c at line 275
> [max14:31465] [0,0,0] ORTE_ERROR_LOG: Timeout in file pls_rsh_module.c
> at line 1166
> [max14:31465] [0,0,0] ORTE_ERROR_LOG: Timeout in file errmgr_hnp.c at
> line 90
> [max14:31465] ERROR: A daemon on node max15 failed to start as expected.
> [max14:31465] ERROR: There may be more information available from
> [max14:31465] ERROR: the remote shell (see above).
> [max14:31465] ERROR: The daemon exited unexpectedly with status 1.
> [max14:31466] [0,0,1] orted_recv_pls: received message from [0,0,0]
> [max14:31466] [0,0,1] orted_recv_pls: received exit
> [max14:31466] [0,0,1]-[0,0,0] mca_oob_tcp_send: tag 15
> [max14:31465] [0,0,0]-[0,0,1] mca_oob_tcp_msg_recv: peer closed
> connection
> [max14:31465] [0,0,0]-[0,0,1] mca_oob_tcp_peer_close(0x523100) sd 6
> state 4
> [max14:31465] [0,0,0] ORTE_ERROR_LOG: Timeout in file base/
> pls_base_orted_cmds.c at line 188
> [max14:31465] [0,0,0] ORTE_ERROR_LOG: Timeout in file pls_rsh_module.c
> at line 1198
> --
> mpirun was unable to cleanly terminate the daemons for this job.
> Returned value Timeout instead of ORTE_SUCCESS.
> --
> 
> 
> 
> I can see that the orted daemon program is starting on both computers
> but i

Re: [OMPI users] Job does not quit even when the simulation dies

2007-11-07 Thread Ralph H Castain
As Jeff indicated, the degree of capability has improved over time - I'm not
sure which version this represents.

The type of failure also plays a major role in our ability to respond. If a
process actually segfaults or dies, we usually pick that up pretty well and
abort the rest of the job (certainly, that seems to be working pretty well
in the 1.2 series and beyond).

If an MPI communication fails, I'm not sure what the MPI layer does - I
believe it may retry for awhile, but I don't know how robust the error
handling is in that layer. Perhaps someone else could address that question.

If an actual node fails, then we don't handle that very well at all, even in
today's development version. The problem is that we need to rely on the
daemon on that node to tell us that the local procs died - if the node dies,
then the daemon can't do that, so we never know it happened.

We are working on solutions to that problem. Hopefully, we will have at
least a preliminary version in the next release.

Ralph



On 11/7/07 6:44 AM, "Jeff Squyres"  wrote:

> Support for failure scenarios is something that is getting better over
> time in Open MPI.
> 
> It looks like the version you are using either didn't properly catch
> that there was a failure and/or then cleanly exit all MPI processes.
> 
> 
> On Nov 6, 2007, at 9:01 PM, Teng Lin wrote:
> 
>> Hi,
>> 
>> 
>> Just realize I have a job run for a long time, while some of the nodes
>> already die. Is there any way to ask other nodes to quit ?
>> 
>> 
>> [kyla-0-1.local:09741] mca_btl_tcp_frag_send: writev failed with
>> errno=104
>> [kyla-0-1.local:09742] mca_btl_tcp_frag_send: writev failed with
>> errno=104
>> 
>> The FAQ does mention it is related  to :
>>  Connection reset by peer: These types of errors usually occur after
>> MPI_INIT has completed, and typically indicate that an MPI process has
>> died unexpectedly (e.g., due to a seg fault). The specific error
>> message indicates that a peer MPI process tried to write to the now-
>> dead MPI process and failed.
>> 
>> Thanks,
>> Teng
> 




Re: [OMPI users] Circumvent --host or dynamically read host info?

2007-08-30 Thread Ralph H Castain
I take it you are running in an rsh/ssh environment (as opposed to a managed
environment like SLURM)?

I'm afraid that you have to tell us -all- of the nodes that will be utilized
in your job at the beginning (i.e., to mpirun). This requirement is planned
to be relaxed in a later version, but that won't be out for some time.

At the moment, there is no workaround.
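
In practice that means the hostfile (or --host list) you give mpirun up front
has to already name every blade a spawned child might land on, even though
only the parent starts there - a rough sketch with made-up hostnames:

  # hosts
  blade01 slots=4
  blade02 slots=4

  mpirun --hostfile hosts -np 1 ./parent

The parent can then Spawn() children onto blade02 because that node was part
of the original allocation.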

Ralph



On 8/30/07 9:51 AM, "Murat Knecht"  wrote:

> 
> Hi,
> I have a question regarding the --host(file) option of mpirun. Whenever I
> try to fork a process on another node using Spawn(), I get the following
> message:
> 
> Verify that you have mapped the allocated resources properly using the
> --host specification.
> 
> I understand this can be fixed by providing the hostnames which will be
> used either by --host or by using a hostfile containing the names and
> possibly the slots available.
> This may be an acceptable solution, if one wants to start the same process
> on several blades, but what about starting a parent process which then
> initiates different child processes on other blades?
> In this scenario mpirun initially does not need the information of which
> other blades exist, but is only supposed to start the parent process
> locally. Surely, there must be a way not to previously specify blades, but
> to load this information at runtime, especially in a changing landscape
> where nodes are added at runtime.
> Is there a way to avoid this --host option?
> 
> I'm using the latest version of OpenMPI (1.2.3).
> 
> Best regards,
> Murat
> 




Re: [OMPI users] memory leaks on solaris

2007-08-06 Thread Ralph H Castain
Hmmm...just to clarify as I think there may be some confusion here.

Orte-clean will kill any outstanding Open MPI daemons (which should kill
their local apps) and will clean up their associated temporary file systems.
If you are having problems with zombied processes or stale daemons, then
this will hopefully help (it isn't perfect, but it helps).

However, orte-clean will not do anything about releasing memory that has
been "leaked" by Open MPI. We don't have any tools for doing that, I'm
afraid.


On 8/6/07 8:08 AM, "Don Kerr"  wrote:

> Glenn,
> 
> With CT7 there is a utility which can be used to clean up left over
> cruft from stale  MPI processes.
> 
> % man -M /opt/SUNWhpc/man -s 1 orte-clean
> 
> Achtung: This will remove current running jobs as well. Use of "-v" for
> verbose recommended.
> 
> I would be curious if this helps.
> 
> -DON
> p.s. orte-clean does not exist in the ompi v1.2 branch; it is in the
> trunk, but I think there is an issue with it currently
>  
> Ralph H Castain wrote:
> 
>> 
>> On 8/5/07 6:35 PM, "Glenn Carver"  wrote:
>> 
>>  
>> 
>>> I'd appreciate some advice and help on this one.  We're having
>>> serious problems running parallel applications on our cluster.  After
>>> each batch job finishes, we lose a certain amount of available
>>> memory. Additional jobs cause free memory to gradually go down until
>>> the machine starts swapping and becomes unusable or hangs. Taking the
>>> machine to single user mode doesn't restore the memory, only a reboot
>>> returns all available memory. This happens on all our nodes.
>>> 
>>> We've been doing some testing to try to pin the problems down,
>>> although we still don't fully know where the problem is coming from.
>>> We have ruled out our applications (fortran codes); we see the same
>>> behaviour with  Intel's IMB. We know it's not a network issue as a
>>> parallel job running solely on the 4 cores on each node produces the
>>> same effect. All nodes have been brought up to the very latest OS
>>> patches and we still see the same problem.
>>> 
>>> Details: we're running Solaris 10/06, Sun Studio 12, Clustertools 7
>>> (open-mpi 1.2.1) and Sun Gridengine 6.1. Hardware is Sun X4100/X4200.
>>> Kernel version: SunOS 5.10 Generic_125101-10 on all nodes.
>>> 
>>> I read in the release notes that a number of memory leaks were fixed
>>> for the 1.2.1 release but none have been noticed since so I'm not
>>> sure where the problem might be.
>>>
>>> 
>> 
>> I'm not sure where that claim came from, but it is certainly not true that
>> we haven't noticed any leaks since 1.2.1. We know we have quite a few memory
>> leaks in the code base, many of which are small in themselves but can add up
>> depending upon exactly what the application does (i.e., which code paths it
>> travels). Running a simple hello_world app under valgrind will show
>> significant unreleased memory.
>> 
>> I doubt you will see much, if any, improvement in 1.2.4. There have probably
>> been a few patches applied, but a comprehensive effort to eradicate the
>> problem has not been made. It is something we are trying to clean up over
>> time, but hasn't been a crash priority as most OS's do a fairly good job of
>> cleaning up when the app completes.
>> 
>>  
>> 
>>> My next move is to try the very latest release (probably
>>> 1.2.4pre-release). As CT7 is built with sun studio 11 rather than 12
>>> which we're using, I might also try downgrading. At the moment we're
>>> rebooting our cluster nodes every day to keep things going. So any
>>> suggestions are appreciated.
>>> 
>>> Thanks,Glenn
>>> 
>>> 
>>> 
>>> 
>>> $ ompi_info
>>> Open MPI: 1.2.1r14096-ct7b030r1838
>>>Open MPI SVN revision: 0
>>> Open RTE: 1.2.1r14096-ct7b030r1838
>>>Open RTE SVN revision: 0
>>> OPAL: 1.2.1r14096-ct7b030r1838
>>>OPAL SVN revision: 0
>>>   Prefix: /opt/SUNWhpc/HPC7.0
>>>  Configured architecture: i386-pc-solaris2.10
>>>Configured by: root
>>>Configured on: Fri Mar 30 13:40:12 EDT 2007
>>>   Configure host: burpen-csx10-0
>>> Built by: root
>>> Built on: Fri Mar 30 13:57:25 EDT 2007
>

Re: [OMPI users] memory leaks on solaris

2007-08-06 Thread Ralph H Castain



On 8/5/07 6:35 PM, "Glenn Carver"  wrote:

> I'd appreciate some advice and help on this one.  We're having
> serious problems running parallel applications on our cluster.  After
> each batch job finishes, we lose a certain amount of available
> memory. Additional jobs cause free memory to gradually go down until
> the machine starts swapping and becomes unusable or hangs. Taking the
> machine to single user mode doesn't restore the memory, only a reboot
> returns all available memory. This happens on all our nodes.
> 
> We've been doing some testing to try to pin the problems down,
> although we still don't fully know where the problem is coming from.
> We have ruled out our applications (fortran codes); we see the same
> behaviour with  Intel's IMB. We know it's not a network issue as a
> parallel job running solely on the 4 cores on each node produces the
> same effect. All nodes have been brought up to the very latest OS
> patches and we still see the same problem.
> 
> Details: we're running Solaris 10/06, Sun Studio 12, Clustertools 7
> (open-mpi 1.2.1) and Sun Gridengine 6.1. Hardware is Sun X4100/X4200.
> Kernel version: SunOS 5.10 Generic_125101-10 on all nodes.
> 
> I read in the release notes that a number of memory leaks were fixed
> for the 1.2.1 release but none have been noticed since so I'm not
> sure where the problem might be.

I'm not sure where that claim came from, but it is certainly not true that
we haven't noticed any leaks since 1.2.1. We know we have quite a few memory
leaks in the code base, many of which are small in themselves but can add up
depending upon exactly what the application does (i.e., which code paths it
travels). Running a simple hello_world app under valgrind will show
significant unreleased memory.
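
(If you want to see this for yourself, something along the lines of

  mpirun -np 2 valgrind --leak-check=full ./hello_world

will report the blocks still allocated at exit; the exact totals will vary
with whichever code paths your run happens to exercise.)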

I doubt you will see much, if any, improvement in 1.2.4. There have probably
been a few patches applied, but a comprehensive effort to eradicate the
problem has not been made. It is something we are trying to clean up over
time, but hasn't been a crash priority as most OS's do a fairly good job of
cleaning up when the app completes.

> 
> My next move is to try the very latest release (probably
> 1.2.4pre-release). As CT7 is built with sun studio 11 rather than 12
> which we're using, I might also try downgrading. At the moment we're
> rebooting our cluster nodes every day to keep things going. So any
> suggestions are appreciated.
> 
> Thanks,Glenn
> 
> 
> 
> 
> $ ompi_info
>  Open MPI: 1.2.1r14096-ct7b030r1838
> Open MPI SVN revision: 0
>  Open RTE: 1.2.1r14096-ct7b030r1838
> Open RTE SVN revision: 0
>  OPAL: 1.2.1r14096-ct7b030r1838
> OPAL SVN revision: 0
>Prefix: /opt/SUNWhpc/HPC7.0
>   Configured architecture: i386-pc-solaris2.10
> Configured by: root
> Configured on: Fri Mar 30 13:40:12 EDT 2007
>Configure host: burpen-csx10-0
>  Built by: root
>  Built on: Fri Mar 30 13:57:25 EDT 2007
>Built host: burpen-csx10-0
>C bindings: yes
>  C++ bindings: yes
>Fortran77 bindings: yes (all)
>Fortran90 bindings: yes
>   Fortran90 bindings size: trivial
>C compiler: cc
>   C compiler absolute: /ws/ompi-tools/SUNWspro/SOS11/bin/cc
>  C++ compiler: CC
> C++ compiler absolute: /ws/ompi-tools/SUNWspro/SOS11/bin/CC
>Fortran77 compiler: f77
>Fortran77 compiler abs: /ws/ompi-tools/SUNWspro/SOS11/bin/f77
>Fortran90 compiler: f95
>Fortran90 compiler abs: /ws/ompi-tools/SUNWspro/SOS11/bin/f95
>   C profiling: yes
> C++ profiling: yes
>   Fortran77 profiling: yes
>   Fortran90 profiling: yes
>C++ exceptions: yes
>Thread support: no
>Internal debug support: no
>   MPI parameter check: runtime
> Memory profiling support: no
> Memory debugging support: no
>   libltdl support: yes
> Heterogeneous support: yes
>   mpirun default --prefix: yes
> MCA backtrace: printstack (MCA v1.0, API v1.0, Component v1.2.1)
> MCA paffinity: solaris (MCA v1.0, API v1.0, Component v1.2.1)
> MCA maffinity: first_use (MCA v1.0, API v1.0, Component v1.2.1)
> MCA timer: solaris (MCA v1.0, API v1.0, Component v1.2.1)
> MCA allocator: basic (MCA v1.0, API v1.0, Component v1.0)
> MCA allocator: bucket (MCA v1.0, API v1.0, Component v1.0)
>  MCA coll: basic (MCA v1.0, API v1.0, Component v1.2.1)
>  MCA coll: self (MCA v1.0, API v1.0, Component v1.2.1)
>  MCA coll: sm (MCA v1.0, API v1.0, Component v1.2.1)
>  MCA coll: tuned (MCA v1.0, API v1.0, Component v1.2.1)
>MCA io: romio (MCA v1.0, API v1.0, Component v1.2.1)
> MCA mpool: sm (MCA v1.0, API v1.0, Component v1.2.1)
>

Re: [OMPI users] mpi daemon

2007-08-02 Thread Ralph H Castain
The daemon's name is "orted" - one will be launched on each remote node as
the application is started, but they only live for as long as the
application is executing. Then they go away.
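
While a job is running you can see them with something like

  ps -C orted -o pid,ppid,args

on each node involved; once mpirun returns, those entries should be gone.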


On 8/2/07 12:47 PM, "Reuti"  wrote:

> Am 02.08.2007 um 18:32 schrieb Francesco Pietra:
> 
>> I compiled successfully the MD suite Amber9 on openmpi-1.2.3,
>> installed on
>> Debian Linux amd64 etch.
>> 
>> Although all tests for parallel amber9 passed successfully, when I run
>> 
>> ps -aux
>> 
>> I don't see any daemon referring to mpi. How is that daemon
>> identified, or how
>> should it be started?
> 
> The output of:
> 
> ps f -eo pid,ppid,pgrp,user,group,command
> 
> might be more informative.
> 
> -- Reuti
> 
> 
>> Thanks
>> 
>> francesco pietra
>> 
>> 




Re: [OMPI users] orterun --bynode/--byslot problem

2007-07-23 Thread Ralph H Castain
Yes...it would indeed.


On 7/23/07 9:03 AM, "Kelley, Sean"  wrote:

> Would this logic be in the bproc pls component?
> Sean
> 
> 
> From: users-boun...@open-mpi.org on behalf of Ralph H Castain
> Sent: Mon 7/23/2007 9:18 AM
> To: Open MPI Users 
> Subject: Re: [OMPI users] orterun --bynode/--byslot problem
> 
> No, byslot appears to be working just fine on our bproc clusters (it is the
> default mode). As you probably know, bproc is a little strange in how we
> launch - we have to launch the procs in "waves" that correspond to the
> number of procs on a node.
> 
> In other words, the first "wave" launches a proc on all nodes that have at
> least one proc on them. The second "wave" then launches another proc on all
> nodes that have at least two procs on them, but doesn't launch anything on
> any node that only has one proc on it.
> 
> My guess here is that the system for some reason is insisting that your head
> node be involved in every wave. I confess that we have never tested (to my
> knowledge) a mapping that involves "skipping" a node somewhere in the
> allocation - we always just map from the beginning of the node list, with
> the maximum number of procs being placed on the first nodes in the list
> (since in our machines, the nodes are all the same, so who cares?). So it is
> possible that something in the code objects to skipping around nodes in the
> allocation.
> 
> I will have to look and see where that dependency might lie - will try to
> get to it this week.
> 
> BTW: that patch I sent you for head node operations will be in 1.2.4.
> 
> Ralph
> 
> 
> 
> On 7/23/07 7:04 AM, "Kelley, Sean"  wrote:
> 
>> > Hi,
>> > 
>> >  We are experiencing a problem with the process allocation on our Open
>> MPI
>> > cluster. We are using Scyld 4.1 (BPROC), the OFED 1.2 Topspin Infiniband
>> > drivers, Open MPI 1.2.3 + patch (to run processes on the head node). The
>> > hardware consists of a head node and N blades on private ethernet and
>> > infiniband networks.
>> > 
>> > The command run for these tests is a simple MPI program (called 'hn') which
>> > prints out the rank and the hostname. The hostname for the head node is
>> 'head'
>> > and the compute nodes are '.0' ... '.9'.
>> > 
>> > We are using the following hostfiles for this example:
>> > 
>> > hostfile7
>> > -1 max_slots=1
>> > 0 max_slots=3
>> > 1 max_slots=3
>> > 
>> > hostfile8
>> > -1 max_slots=2
>> > 0 max_slots=3
>> > 1 max_slots=3
>> > 
>> > hostfile9
>> > -1 max_slots=3
>> > 0 max_slots=3
>> > 1 max_slots=3
>> > 
>> > running the following commands:
>> > 
>> > orterun --hostfile hostfile7 -np 7 ./hn
>> > orterun --hostfile hostfile8 -np 8 ./hn
>> > orterun --byslot --hostfile hostfile7 -np 7 ./hn
>> > orterun --byslot --hostfile hostfile8 -np 8 ./hn
>> > 
>> > causes orterun to crash. However,
>> > 
>> > orterun --hostfile hostfile9 -np 9 ./hn
>> > orterun --byslot --hostfile hostfile9 -np 9 ./hn
>> >
>> > works, outputting the following:
>> > 
>> > 0 head
>> > 1 head
>> > 2 head
>> > 3 .0
>> > 4 .0
>> > 5 .0
>> > 6 .0
>> > 7 .0
>> > 8 .0
>> > 
>> > However, running the following:
>> > 
>> > orterun --bynode --hostfile hostfile7 -np 7 ./hn
>> > 
>> > works, outputting the following
>> > 
>> > 0 head
>> > 1 .0
>> > 2 .1
>> > 3 .0
>> > 4 .1
>> > 5 .0
>> > 6 .1
>> > 
>> > Is the '--byslot' crash a known problem? Does it have something to do with
>> > BPROC? Thanks in advance for any assistance!
>> > 
>> > Sean
>> > 




Re: [OMPI users] orterun --bynode/--byslot problem

2007-07-23 Thread Ralph H Castain
No, byslot appears to be working just fine on our bproc clusters (it is the
default mode). As you probably know, bproc is a little strange in how we
launch - we have to launch the procs in "waves" that correspond to the
number of procs on a node.

In other words, the first "wave" launches a proc on all nodes that have at
least one proc on them. The second "wave" then launches another proc on all
nodes that have at least two procs on them, but doesn't launch anything on
any node that only has one proc on it.

My guess here is that the system for some reason is insisting that your head
node be involved in every wave. I confess that we have never tested (to my
knowledge) a mapping that involves "skipping" a node somewhere in the
allocation - we always just map from the beginning of the node list, with
the maximum number of procs being placed on the first nodes in the list
(since in our machines, the nodes are all the same, so who cares?). So it is
possible that something in the code objects to skipping around nodes in the
allocation.
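
To make that concrete with hostfile7 (1 slot on the head node, 3 each on .0
and .1, -np 7), the byslot mapping should come out as:

  wave 1:  head  .0  .1
  wave 2:        .0  .1
  wave 3:        .0  .1

i.e. the head node drops out after the first wave - which is exactly the
"skipping" case I suspect is being mishandled.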

I will have to look and see where that dependency might lie - will try to
get to it this week.

BTW: that patch I sent you for head node operations will be in 1.2.4.

Ralph



On 7/23/07 7:04 AM, "Kelley, Sean"  wrote:

> Hi,
>  
>  We are experiencing a problem with the process allocation on our Open MPI
> cluster. We are using Scyld 4.1 (BPROC), the OFED 1.2 Topspin Infiniband
> drivers, Open MPI 1.2.3 + patch (to run processes on the head node). The
> hardware consists of a head node and N blades on private ethernet and
> infiniband networks.
>  
> The command run for these tests is a simple MPI program (called 'hn') which
> prints out the rank and the hostname. The hostname for the head node is 'head'
> and the compute nodes are '.0' ... '.9'.
>  
> We are using the following hostfiles for this example:
>  
> hostfile7
> -1 max_slots=1
> 0 max_slots=3
> 1 max_slots=3
>  
> hostfile8
> -1 max_slots=2
> 0 max_slots=3
> 1 max_slots=3
>  
> hostfile9
> -1 max_slots=3
> 0 max_slots=3
> 1 max_slots=3
>  
> running the following commands:
>  
> orterun --hostfile hostfile7 -np 7 ./hn
> orterun --hostfile hostfile8 -np 8 ./hn
> orterun --byslot --hostfile hostfile7 -np 7 ./hn
> orterun --byslot --hostfile hostfile8 -np 8 ./hn
>  
> causes orterun to crash. However,
>  
> orterun --hostfile hostfile9 -np 9 ./hn
> orterun --byslot --hostfile hostfile9 -np 9 ./hn
> 
> works, outputting the following:
>  
> 0 head
> 1 head
> 2 head
> 3 .0
> 4 .0
> 5 .0
> 6 .0
> 7 .0
> 8 .0
>  
> However, running the following:
>  
> orterun --bynode --hostfile hostfile7 -np 7 ./hn
>  
> works, outputting the following
>  
> 0 head
> 1 .0
> 2 .1
> 3 .0
> 4 .1
> 5 .0
> 6 .1
>  
> Is the '--byslot' crash a known problem? Does it have something to do with
> BPROC? Thanks in advance for any assistance!
>  
> Sean
>  




Re: [OMPI users] OpenMPI start up problems

2007-07-19 Thread Ralph H Castain
I gather you are running under TM since you have a PBS_NODEFILE? If so, in
1.2 we set up to read that file directly - you cannot specify it on the
command line.

We will fix this in 1.3 so you can do both, but for now - under TM - you
have to leave that "-machinefile $PBS_NODEFILE" off of the command line.
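
In other words, under TM the launch line from your mail would just become
something like:

  mpirun -np $np -mca coll self,basic,tuned -mca mpi_paffinity_alone 1 \
         -mca coll_basic_crossover 4 $HOME/cpmd/cpmd.x_o $1 >> $2

and mpirun picks the node list up from $PBS_NODEFILE on its own.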

Hope that helps
Ralph



On 7/19/07 9:27 AM, "Konstantin Kudin"  wrote:

>  All,
> 
>  I've run across a somewhat difficult code for OpenMPI to handle
> (CPMD).
> 
>  Here is the report on the versions I tried:
>  1.1.4 - mostly does not start
>  1.1.5 - works
>  1.2.3 - does not start
> 
> The machine has dual Opterons, with Gigabit. The running command with
> 4x2 cpus is:
> mpirun -np $np -machinefile $PBS_NODEFILE \
> -mca coll self,basic,tuned -mca  \
> mpi_paffinity_alone 1  -mca coll_basic_crossover 4 \
> $HOME/cpmd/cpmd.x_o $1 >> $2
> 
> Now, onto specific errors.
> 
> 1.1.4 :
> occasionally starts, occasionally gives the error:
>   PML add procs failed
>   --> Returned "Error" (-1) instead of "Success" (0)
> --
> *** An error occurred in MPI_Init
> *** before MPI was initialized
> *** MPI_ERRORS_ARE_FATAL (goodbye)
> 
> 1.1.5 :
> works fine
> 
> 1.2.3 :
> [node041:04866] pls:tm: failed to poll for a spawned proc, return
> status = 17002
> [node041:04866] [0,0,0] ORTE_ERROR_LOG: In errno in file rmgr_urm.c at
> line 462
> [node041:04866] mpirun: spawn failed with errno=-11
> 
>  Kostya
> 
> 
> 




Re: [OMPI users] mpirun hanging followup

2007-07-18 Thread Ralph H Castain
Hooray! Glad we could help track this down - sorry it was so hard to do so.

To answer your questions:

1. Yes - ORTE should bail out gracefully. It definitely should not hang. I
will log the problem and investigate. I believe I know where the problem
lies, and it may already be fixed on our trunk, but the fix may not get into
the 1.2 family (have to see what it would entail).

2. I will definitely commit that debug code and ensure it is in future
releases.

3. I'll see if we can add something about --debug-daemons to the FAQ -
thanks for pointing out that oversight.

Thanks
Ralph



On 7/18/07 12:19 PM, "Bill Johnstone"  wrote:

> 
> --- Ralph Castain  wrote:
> 
>> Unfortunately, we don't have more debug statements internal to that
>> function. I'll have to create a patch for you that will add some so
>> we can
>> better understand why it is failing - will try to send it to you on
>> Wed.
> 
> Thank you for the patch you sent.
> 
> I solved the problem.  It was a head-slapper of an error.  Turned out
> that I had forgotten -- the permissions on the filesystem override the
> permissions of the mount point.  As I mentioned, these machines have an
> NFS root filesystem.  In that filesystem, tmp has permissions 1777.
> However, when each node mounts its local temp partition to /tmp, the
> permissions on that filesystem are the permissions the mount point
> takes on.
> 
> In this case, I had forgotten to apply permissions 1777 to /tmp after
> mounting on each machine.  As a result, /tmp really did not have the
> appropriate permissions for mpirun to write to it as necessary.
> 
> Your patch helped me figure this out.  Technically, I should have been
> able to figure it out from the messages you'd already sent to the
> mailing list, but it wasn't until I saw the line in session_dir.c where
> the error was occurring that I realized it had to be some kind of
> permissions error.
> 
> I've attached the new debug output below:
> 
> [node5.x86-64:11511] [0,0,1] ORTE_ERROR_LOG: Error in file
> util/session_dir.c at line 108
> [node5.x86-64:11511] [0,0,1] ORTE_ERROR_LOG: Error in file
> util/session_dir.c at line 391
> [node5.x86-64:11511] [0,0,1] ORTE_ERROR_LOG: Error in file
> runtime/orte_init_stage1.c at line 626
> --
> It looks like orte_init failed for some reason; your parallel process
> is
> likely to abort.  There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems.  This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
> 
>   orte_session_dir failed
>   --> Returned value -1 instead of ORTE_SUCCESS
> 
> --
> [node5.x86-64:11511] [0,0,1] ORTE_ERROR_LOG: Error in file
> runtime/orte_system_init.c at line 42
> [node5.x86-64:11511] [0,0,1] ORTE_ERROR_LOG: Error in file
> runtime/orte_init.c at line 52
> Open RTE was unable to initialize properly.  The error occured while
> attempting to orte_init().  Returned value -1 instead of ORTE_SUCCESS.
> 
> Starting at line 108 of session_dir.c, is:
> 
> if (ORTE_SUCCESS != (ret = opal_os_dirpath_create(directory, my_mode)))
> {
>     ORTE_ERROR_LOG(ret);
> }
> 
> Three further points:
> 
> -Is there some reason ORTE can't bail out gracefully upon this error,
> instead of hanging like it was doing for me?
> 
> -I think leaving in the extra debug logging code you sent me in the
> patch for future Open MPI versions would be a good idea to help
> troubleshoot problems like this.
> 
> -It would be nice to see "--debug-daemons" added to the Troubleshooting
> section of the FAQ on the web site.
> 
> Thank you very very much for your help Ralph and everyone else that replied.
> 
> 




Re: [OMPI users] mpirun hanging followup

2007-07-18 Thread Ralph H Castain



On 7/18/07 11:46 AM, "Bill Johnstone"  wrote:

> --- Ralph Castain  wrote:
> 
>> No, the session directory is created in the tmpdir - we don't create
>> anything anywhere else, nor do we write any executables anywhere.
> 
> In the case where the TMPDIR env variable isn't specified, what is the
> default assumed by Open MPI/orte?

It rattles through a logic chain:

1. ompi mca param value

2. TMPDIR in environ

3. TMP in environ

4. default to /tmp just so we have something to work with...
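
So, for example, something as simple as

  export TMPDIR=/local/scratch

before invoking mpirun will move the session directories there - with the
caveat that the variable (or the mca param) has to be visible in the
environment each daemon actually sees, and the path has to exist and be
writable on every node involved.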

> 
>> Just out of curiosity: although I know you have different arch's on
>> your
>> nodes, the tests you are running are all executing on the same arch,
>> correct???
> 
> Yes, tests all execute on the same arch, although I am led to another
> question.  Can I use a headnode of a particular arch, but in my mpirun
> hostfile, specify only nodes of another arch, and launch from the
> headnode?  In other words, no computation is done on the headnode of
> arch A, all computation is done on nodes of arch B, but the job is
> launched from the headnode -- would that be acceptable?

As long as the prefix is set such that the correct binary executables can be
found, then you should be fine.
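
For example, something along the lines of

  mpirun --prefix /opt/openmpi-archB --hostfile archB_hosts -np 8 ./a.out

where the prefix points at the Open MPI installation built for the compute
nodes' arch (the paths and file names here are made up, of course).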

> 
> I should be clear that for the problem you are helping me with, *all*
> the nodes involved are running the same arch, OS, compiler, system
> libraries, etc.  The multiple arch question is for edification for the
> future.

No problem - I just wanted to eliminate one possible complication for now.

Thanks
Ralph

> 
> 
> 




Re: [OMPI users] orte_pls_base_select fails

2007-07-18 Thread Ralph H Castain
Tim has proposed a clever fix that I had not thought of - just be aware that
it could cause unexpected behavior at some point. Still, for what you are
trying to do, that might meet your needs.

Ralph


On 7/18/07 11:44 AM, "Tim Prins"  wrote:

> Adam C Powell IV wrote:
>> As mentioned, I'm running in a chroot environment, so rsh and ssh won't
>> work: "rsh localhost" will rsh into the primary local host environment,
>> not the chroot, which will fail.
>> 
>> [The purpose is to be able to build and test MPI programs in the Debian
>> unstable distribution, without upgrading the whole machine to unstable.
>> Though most machines I use for this purpose run Debian stable or
>> testing, the machine I'm currently using runs a very old Fedora, for
>> which I don't think OpenMPI is available.]
> 
> All right, I understand what you are trying to do now. To be honest, I
> don't think we have ever really thought about this use case. We always
> figured that to test Open MPI people would simply install it in a
> different directory and use it from there.
> 
>> 
>> With MPICH, mpirun -np 1 just runs the new process in the current
>> context, without rsh/ssh, so it works in a chroot.  Does OpenMPI not
>> support this functionality?
> 
> Open MPI does support this functionality. First, a bit of explanation:
> 
> We use 'pls' (process launching system) components to handle the
> launching of processes. There are components for slurm, gridengine, rsh,
> and others. At runtime we open each of these components and query them
> as to whether they can be used. The original error you posted says that
> none of the 'pls' components can be used because they all detected that
> they could not run in your setup. The slurm one excluded itself because
> there were no environment variables set indicating it is running under
> SLURM. Similarly, the gridengine pls said it cannot run as well. The
> 'rsh' pls said it cannot run because neither 'ssh' nor 'rsh' are
> available (I assume this is the case, though you did not explicitly say
> they were not available).
> 
> But in this case, you do want the 'rsh' pls to be used. It will
> automatically fork any local processes, and will use rsh/ssh to launch
> any remote processes. Again, I don't think we ever imagined the use case
> of a UNIX-like system where there are no launchers like SLURM available
> and rsh/ssh isn't available either (Open MPI is, after all, primarily
> concerned with multi-node operation).
> 
> So, there are several ways around this:
> 
> 1. Make rsh or ssh available, even though they will not be used.
> 
> 2. Tell the 'rsh' pls component to use a dummy program such as
> /bin/false by adding the following to the command line:
> -mca pls_rsh_agent /bin/false
> 
> 3. Create a dummy 'rsh' executable that is available in your path.
> 
> For instance:
> 
> [tprins@odin ~]$ which ssh
> /usr/bin/which: no ssh in
> (/u/tprins/usr/ompia/bin:/u/tprins/usr/bin:/usr/local/bin:/bin:/usr/X11R6/bin)
> [tprins@odin ~]$ which rsh
> /usr/bin/which: no rsh in
> (/u/tprins/usr/ompia/bin:/u/tprins/usr/bin:/usr/local/bin:/bin:/usr/X11R6/bin)
> [tprins@odin ~]$ mpirun -np 1  hostname
> [odin.cs.indiana.edu:18913] [0,0,0] ORTE_ERROR_LOG: Error in file
> runtime/orte_init_stage1.c at line 317
> --
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems.  This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
> 
>orte_pls_base_select failed
>--> Returned value Error (-1) instead of ORTE_SUCCESS
> 
> --
> [odin.cs.indiana.edu:18913] [0,0,0] ORTE_ERROR_LOG: Error in file
> runtime/orte_system_init.c at line 46
> [odin.cs.indiana.edu:18913] [0,0,0] ORTE_ERROR_LOG: Error in file
> runtime/orte_init.c at line 52
> [odin.cs.indiana.edu:18913] [0,0,0] ORTE_ERROR_LOG: Error in file
> orterun.c at line 399
> 
> [tprins@odin ~]$ mpirun -np 1 -mca pls_rsh_agent /bin/false  hostname
> odin.cs.indiana.edu
> 
> [tprins@odin ~]$ touch usr/bin/rsh
> [tprins@odin ~]$ chmod +x usr/bin/rsh
> [tprins@odin ~]$ mpirun -np 1  hostname
> odin.cs.indiana.edu
> [tprins@odin ~]$
> 
> 
> I hope this helps,
> 
> Tim
> 
>> 
>> Thanks,
>> Adam
>> 
>> On Wed, 2007-07-18 at 11:09 -0400, Tim Prins wrote:
>>> This is strange. I assume that you what to use rsh or ssh to launch the
>>> processes?
>>> 
>>> If you want to use ssh, does "which ssh" find ssh? Similarly, if you
>>> want to use rsh, does "which rsh" find rsh?
>>> 
>>> Thanks,
>>> 
>>> Tim
>>> 
>>> Adam C Powell IV wrote:
 On Wed, 2007-07-18 at 09:50 -0400, Tim Prins wrote:
> Adam C Powell IV wrote:
>> Greetings,
>> 
>> I'm running the Debi

Re: [OMPI users] orte_pls_base_select fails

2007-07-18 Thread Ralph H Castain



On 7/18/07 9:49 AM, "Adam C Powell IV"  wrote:

> As mentioned, I'm running in a chroot environment, so rsh and ssh won't
> work: "rsh localhost" will rsh into the primary local host environment,
> not the chroot, which will fail.
> 
> [The purpose is to be able to build and test MPI programs in the Debian
> unstable distribution, without upgrading the whole machine to unstable.
> Though most machines I use for this purpose run Debian stable or
> testing, the machine I'm currently using runs a very old Fedora, for
> which I don't think OpenMPI is available.]
> 
> With MPICH, mpirun -np 1 just runs the new process in the current
> context, without rsh/ssh, so it works in a chroot.  Does OpenMPI not
> support this functionality?

Yes - and no. OpenMPI will launch on a local node without using rsh/ssh.
However, and it is a big however, our init code requires that we still
identify a working launcher that could be used to launch on remote nodes.
Frankly, we never considered the case you describe.

We could (and perhaps should) modify the code to allow it to continue even
if it doesn't find a viable launcher. I believe our initial thinking was
that something that launched only on the local node wasn't much use to MPI
and therefore that scenario probably represents an error condition.

We'll discuss it and see what we think should be done. Meantime, the answer
would have to be "no, we don't support that"

Ralph

> 
> Thanks,
> Adam
> 
> On Wed, 2007-07-18 at 11:09 -0400, Tim Prins wrote:
>> This is strange. I assume that you what to use rsh or ssh to launch the
>> processes?
>> 
>> If you want to use ssh, does "which ssh" find ssh? Similarly, if you
>> want to use rsh, does "which rsh" find rsh?
>> 
>> Thanks,
>> 
>> Tim
>> 
>> Adam C Powell IV wrote:
>>> On Wed, 2007-07-18 at 09:50 -0400, Tim Prins wrote:
 Adam C Powell IV wrote:
> Greetings,
> 
> I'm running the Debian package of OpenMPI in a chroot (with /proc
> mounted properly), and orte_init is failing as follows:
> [snip]
> What could be wrong?  Does orterun not run in a chroot environment?
> What more can I do to investigate further?
 Try running mpirun with the added options:
 -mca orte_debug 1 -mca pls_base_verbose 20
 
 Then send the output to the list.
>>> 
>>> Thanks!  Here's the output:
>>> 
>>> $ orterun -mca orte_debug 1 -mca pls_base_verbose 20 -np 1 uptime
>>> [new-host-3:19201] mca: base: components_open: Looking for pls components
>>> [new-host-3:19201] mca: base: components_open: distilling pls components
>>> [new-host-3:19201] mca: base: components_open: accepting all pls components
>>> [new-host-3:19201] mca: base: components_open: opening pls components
>>> [new-host-3:19201] mca: base: components_open: found loaded component
>>> gridengine[new-host-3:19201] mca: base: components_open: component
>>> gridengine open function successful
>>> [new-host-3:19201] mca: base: components_open: found loaded component proxy
>>> [new-host-3:19201] mca: base: components_open: component proxy open function
>>> successful
>>> [new-host-3:19201] mca: base: components_open: found loaded component rsh
>>> [new-host-3:19201] mca: base: components_open: component rsh open function
>>> successful
>>> [new-host-3:19201] mca: base: components_open: found loaded component slurm
>>> [new-host-3:19201] mca: base: components_open: component slurm open function
>>> successful
>>> [new-host-3:19201] orte:base:select: querying component gridengine
>>> [new-host-3:19201] pls:gridengine: NOT available for selection
>>> [new-host-3:19201] orte:base:select: querying component proxy
>>> [new-host-3:19201] orte:base:select: querying component rsh
>>> [new-host-3:19201] orte:base:select: querying component slurm
>>> [new-host-3:19201] [0,0,0] ORTE_ERROR_LOG: Error in file
>>> runtime/orte_init_stage1.c at line 312
>>> --
>>> It looks like orte_init failed for some reason; your parallel process is
>>> likely to abort.  There are many reasons that a parallel process can
>>> fail during orte_init; some of which are due to configuration or
>>> environment problems.  This failure appears to be an internal failure;
>>> here's some additional information (which may only be relevant to an
>>> Open MPI developer):
>>> 
>>>   orte_pls_base_select failed
>>>   --> Returned value -1 instead of ORTE_SUCCESS
>>> 
>>> --
>>> [new-host-3:19201] [0,0,0] ORTE_ERROR_LOG: Error in file
>>> runtime/orte_system_init.c at line 42
>>> [new-host-3:19201] [0,0,0] ORTE_ERROR_LOG: Error in file runtime/orte_init.c
>>> at line 52
>>> --
>>> Open RTE was unable to initialize properly.  The error occured while
>>> attempting to orte_init().  Returned value -1 instead of ORTE_SUCCESS.
>>> 

Re: [OMPI users] Recursive use of "orterun" (Ralph H Castain)

2007-07-11 Thread Ralph H Castain
Hmmm...well, what that indicates is that your application program is losing
the connection to orterun, but that orterun is still alive and kicking (it
is alive enough to send the [0,0,1] daemon a message ordering it to exit).
So the question is: why is your application program dropping the connection?

I haven't tried doing embedded orterun commands, so there could be a
conflict there that causes the OOB connection to drop. Best guess is that
there is confusion over which orterun it is supposed to connect to. I can
give it a try and see - this may not be a mode we can support.

Alternatively, you could start a persistent daemon and then just allow both
orterun instances to report to it. Our method for doing that isn't as
convenient as we would like (we hope to improve it soon), but it does work.
What you have to do is:

1. to start the persistent daemon, type:

"orted --seed --persistent --scope public --universe foo"

where foo can be whatever name you like.

2. when you execute your application, use:

orterun -np 1 --universe foo python ./test.py

where the "foo" matches the name given above.

3. in your os.system command, you'll need that same "--universe foo" option
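
For example, the os.system line from your script would then read something
like:

  os.system('orterun --universe foo -np 2 nwchem.x nwchem.inp > nwchem.out')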

That may solve the problem (let me know if it does). Meantime, I'll take a
look at the embedded approach without the persistent daemon...may take me
awhile as I'm in the middle of something, but I will try to get to it
shortly.

Ralph


On 7/11/07 1:40 PM, "Lev Gelb"  wrote:

> 
> OK, I've added the debug flags - when I add them to the
> os.system instance of orterun, there is no additional input,
> but when I add them to the orterun instance controlling the
> python program, I get the following:
> 
>> orterun -np 1 --debug-daemons -mca odls_base_verbose 1 python ./test.py
> Daemon [0,0,1] checking in as pid 18054 on host druid.wustl.edu
> [druid.wustl.edu:18054] [0,0,1] orted: received launch callback
> [druid.wustl.edu:18054] odls: setting up launch for job 1
> [druid.wustl.edu:18054] odls: overriding oversubscription
> [druid.wustl.edu:18054] odls: oversubscribed set to false want_processor
> set to true
> [druid.wustl.edu:18054] odls: preparing to launch child [0, 1, 0]
> Pypar (version 1.9.3) initialised MPI OK with 1 processors
> [druid.wustl.edu:18057] OOB: Connection to HNP lost
> [druid.wustl.edu:18054] odls: child process terminated
> [druid.wustl.edu:18054] odls: child process [0,1,0] terminated normally
> [druid.wustl.edu:18054] [0,0,1] orted_recv_pls: received message from
> [0,0,0]
> [druid.wustl.edu:18054] [0,0,1] orted_recv_pls: received exit
> [druid.wustl.edu:18054] [0,0,1] odls_kill_local_proc: working on job -1
> [druid.wustl.edu:18054] [0,0,1] odls_kill_local_proc: checking child
> process [0,1,0]
> [druid.wustl.edu:18054] [0,0,1] odls_kill_local_proc: child is not alive
> 
> (the Pypar output is from loading that module; the next thing in
> the code is the os.system call to start orterun with 2 processors.)
> 
> Also, there is absolutely no output from the second orterun-launched
> program (even the first line does not execute.)
> 
> Cheers,
> 
> Lev
> 
> 
> 
>> Message: 5
>> Date: Wed, 11 Jul 2007 13:26:22 -0600
>> From: Ralph H Castain 
>> Subject: Re: [OMPI users] Recursive use of "orterun"
>> To: "Open MPI Users " 
>> Message-ID: 
>> Content-Type: text/plain; charset="US-ASCII"
>> 
>> I'm unaware of any issues that would cause it to fail just because it is
>> being run via that interface.
>> 
>> The error message is telling us that the procs got launched, but then
>> orterun went away unexpectedly. Are you seeing your procs complete? We do
>> sometimes see that message due to a race condition between the daemons
>> spawned to support the application procs and orterun itself (see other
>> recent notes in this forum).
>> 
>> If your procs are not completing, then it would mean that either the
>> connecting fabric is failing for some reason, or orterun is terminating
>> early. If you could add --debug-daemons -mca odls_base_verbose 1 to the
>> os.system command, the output from that might help us understand why it is
>> failing.
>> 
>> Ralph
>> 
>> 
>> 
>> On 7/11/07 10:49 AM, "Lev Gelb"  wrote:
>> 
>>> 
>>> Hi -
>>> 
>>> I'm trying to port an application to use OpenMPI, and running
>>> into a problem.  The program (written in Python, parallelized
>>> using either of "pypar" or "pyMPI") itself invokes "mpirun"
>>> in order to manage external, parallel processes, via something like:
>>> 
>>> orterun -

Re: [OMPI users] Recursive use of "orterun"

2007-07-11 Thread Ralph H Castain
I'm unaware of any issues that would cause it to fail just because it is
being run via that interface.

The error message is telling us that the procs got launched, but then
orterun went away unexpectedly. Are you seeing your procs complete? We do
sometimes see that message due to a race condition between the daemons
spawned to support the application procs and orterun itself (see other
recent notes in this forum).

If your procs are not completing, then it would mean that either the
connecting fabric is failing for some reason, or orterun is terminating
early. If you could add --debug-daemons -mca odls_base_verbose 1 to the
os.system command, the output from that might help us understand why it is
failing.

Ralph



On 7/11/07 10:49 AM, "Lev Gelb"  wrote:

> 
> Hi -
> 
> I'm trying to port an application to use OpenMPI, and running
> into a problem.  The program (written in Python, parallelized
> using either of "pypar" or "pyMPI") itself invokes "mpirun"
> in order to manage external, parallel processes, via something like:
> 
> orterun -np 2 python myapp.py
> 
> where myapp.py contains:
> 
> os.system('orterun -np 2 nwchem.x nwchem.inp > nwchem.out')
> 
> I have this working under both LAM-MPI and MPICH on a variety
> of different machines.  However, with OpenMPI,  all I get is an
> immediate return from the system call and the error:
> 
> "OOB: Connection to HNP lost"
> 
> I have verified that the command passed to os.system is correct,
> and even that it runs correctly if "myapp.py" doesn't invoke any
> MPI calls of its own.
> 
> I'm testing openMPI on a single box, so there's no machinefile-stuff currently
> active.  The system is running Fedora Core 6 x86-64, I'm using the latest
> openmpi-1.2.3-1.src.rpm rebuilt on the machine in question,
> I can provide additional configuration details if necessary.
> 
> Thanks, in advance, for any help or advice,
> 
> Lev
> 
> 
> --
> Lev Gelb Associate Professor Department of Chemistry, Washington University in
> St. Louis, St. Louis, MO 63130  USA
> 
> email: g...@wustl.edu
> phone: (314)935-5026 fax:   (314)935-4481
> 
> http://www.chemistry.wustl.edu/~gelb
> --
> 




Re: [OMPI users] Connection to HNP lost

2007-07-10 Thread Ralph H Castain



On 7/10/07 11:08 AM, "Glenn Carver"  wrote:

> Hi,
> 
> I'd be grateful if someone could explain the meaning of this error
> message to me and whether it indicates a hardware problem or
> application software issue:
> 
> [node2:11881] OOB: Connection to HNP lost
> [node1:09876] OOB: Connection to HNP lost

This message is nothing to be concerned about - all it indicates is that
mpirun exited before our daemon on your backend nodes did. It's relatively
harmless and probably should be eliminated in some future version (except
when developers are running in debug mode).

The message can appear when the timing changes between front and backend
nodes. What happens is:

1. mpirun detects that your processes have all completed. It then orders the
shutdown of the daemons on your backend nodes.

2. each daemon does an orderly shutdown. Just before it terminates, it tells
mpirun that it is done cleaning up and is about to exit

3. when mpirun hears that all daemons are done cleaning up, it exits itself.
This is where the timing issue comes into play - if mpirun exits before the
daemon, then you get that error message as the daemon is terminating.

So it's all a question of whether mpirun completes the last few steps to
exit before the daemons do. In most cases, the daemons complete first as
they have less to do. Sometimes, mpirun manages to get out first, and you
get the message.

I doubt it has anything to do with your hardware issues. Personally, I would
just ignore the message - I'll see it gets removed in later releases to
avoid unnecessary confusion.

Hope that helps
Ralph


> 
> I have a small cluster which until last week was just fine.
> Unfortunately we were hit by a sudden power dip which brought the
> cluster down and did significant damage to other servers (blew power
> supplies and disk).  Although the cluster machines and the Infiniband
> link is up and running jobs I am now getting these errors in user
> applications which we've never had before.
> 
> The system messages file reports (for node2):
> Jul  5 12:08:28 node1 genunix: [ID 408789 kern.notice] NOTICE:
> tavor0: fault cleared external to device; service available
> Jul  5 12:08:28 node1 genunix: [ID 451854 kern.notice] NOTICE:
> tavor0: port 1 up
> Jul  7 16:18:32 node1 genunix: [ID 408114 kern.info]
> /pci@1,0/pci1022,7450@2/pci15b3,5a46@1/pci15b3,5a44@0 (tavor0) online
> Jul  7 16:18:32 node1 ib: [ID 842868 kern.info] IB device: daplt@0, daplt0
> Jul  7 16:18:32 node1 genunix: [ID 936769 kern.info] daplt0 is /ib/daplt@0
> Jul  7 16:18:32 node1 genunix: [ID 408114 kern.info] /ib/daplt@0
> (daplt0) online
> Jul  7 16:18:32 node1 genunix: [ID 834635 kern.info] /ib/daplt@0
> (daplt0) multipath status: degraded, path
> /pci@1,0/pci1022,7450@2/pci15
> b3,5a46@1/pci15b3,5a44@0 (tavor0) to target address: daplt,0 is
> online Load balancing: round-robin
> 
> I wonder if this messages are indicative of a hardware problem,
> possibly on the Infiniband switch or the host adapters on the cluster
> machines.  The cluster software has not been altered but there have
> been small changes to the application codes. But I want to rule out
> hardware issues because of the power dip first.
> 
> Anyone seen this message before and know whether to investigate
> hardware first?  I did check the archives but it didn't help. More
> info provided below.
> 
> Any help appreciate, thanks.
> 
>   Glenn
> 
> --
> Details:
> Cluster uses mix of Sun's X4100/X4200 machines linked with Sun
> supplied Infiniband and host adapters. All machines are running
> Solaris 10_x86 (11/06) with latest kernel patches
> Software is Sun Clustertools 7.
> 
> Node2 $ ifconfig ibd1
> ibd1: flags=1000843 mtu 2044 index 3
>  inet 192.168.50.202 netmask ff00 broadcast 192.168.50.255
> 
> Node1 $ ifconfig ibd1
> ibd1: flags=1000843 mtu 2044 index 3
>  inet 192.168.50.201 netmask ff00 broadcast 192.168.50.255
> 
> 
> ompi_info -a
>  Open MPI: 1.2.1r14096-ct7b030r1838
> Open MPI SVN revision: 0
>  Open RTE: 1.2.1r14096-ct7b030r1838
> Open RTE SVN revision: 0
>  OPAL: 1.2.1r14096-ct7b030r1838
> OPAL SVN revision: 0
> MCA backtrace: printstack (MCA v1.0, API v1.0, Component v1.2.1)
> MCA paffinity: solaris (MCA v1.0, API v1.0, Component v1.2.1)
> MCA maffinity: first_use (MCA v1.0, API v1.0, Component v1.2.1)
> MCA timer: solaris (MCA v1.0, API v1.0, Component v1.2.1)
> MCA allocator: basic (MCA v1.0, API v1.0, Component v1.0)
> MCA allocator: bucket (MCA v1.0, API v1.0, Component v1.0)
>  MCA coll: basic (MCA v1.0, API v1.0, Component v1.2.1)
>  MCA coll: self (MCA v1.0, API v1.0, Component v1.2.1)
>  MCA coll: sm (MCA v1.0, API v1.0, Component v1.2.1)
>  MCA coll: tuned (MCA v1.0, API v1.0, Component v1.2.1)
>MCA io: romio (MCA v1.0, AP

Re: [OMPI users] mpirun hanging when processes started on head node

2007-06-12 Thread Ralph H Castain
Hi Sean

> [Sean] I'm working through the strace output to follow the progression on the
> head node. It looks like mpirun consults '/bpfs/self' and determines that the
> request is to be run on the local machine so it fork/execs 'orted' which then
> runs 'hostname'. 'mpirun' didn't consult '/bpfs' or utilize 'rsh' after the
> determination to run on the local machine was made. When the 'hostname'
> command completes, 'orted' receives the SIGCHLD signal, performs some work and
> then both 'mpirun' and 'orted' go into what appears to be a poll() waiting for
> events.

This is the core of the problem - I confess to being blown away that mpirun
is fork/exec'ing that local orted. I will have to go through the code and
try to figure that one out - we have never seen that behavior. There should
be no way at all for that to happen.

The problem is that, if the code fork/exec's that local orted, then the
bproc code components have no idea it exists. Hence, the system doesn't know
it should shutdown when complete because (a) there is still a lingering
orted out there, but (b) the dominant component (bproc, in this case) has no
earthly idea where it is or how to tell it to go away.

FWIW, this problem will vanish in 1.3 due to a major change in the way we
handle orteds. However, the idea that we could fork/exec an orted under
bproc is something we definitely will have to fix.

Sorry for the problem. I'll have to see if there is a fix for 1.2 - it may
require too much code change and have to wait for 1.3. I'll advise as soon
as I figure this one out.

Ralph

> 
> 
> Hope that helps at least a little.
> 
> [Sean] I appreciate the help. We are running processes on the head node
> because the head node is the only node which can access external resources
> (storage devices).
> 
> 
> Ralph
> 
> 
> 
> 
> 
> On 6/11/07 1:04 PM, "Kelley, Sean"  wrote:
> 
>> I forgot to add that we are using 'bproc'. Launching processes on the compute
>> nodes using bproc works well; I'm not sure if bproc is involved when
>> processes are launched on the local node.
>> 
>> Sean
>> 
>> 
>> From: users-boun...@open-mpi.org on behalf of Kelley, Sean
>> Sent: Mon 6/11/2007 2:07 PM
>> To: us...@open-mpi.org
>> Subject: [OMPI users] mpirun hanging when processes started on head node
>> 
>> Hi,
>>   We are running the OFED 1.2rc4 distribution containing openmpi-1.2.2 on
>> a RedhatEL4U4 system with Scyld Clusterware 4.1. The hardware configuration
>> consists of a DELL 2950 as the headnode and 3 DELL 1950 blades as compute
>> nodes using Cisco TopSpin InfiniBand HCAs and switches for the interconnect.
>> 
>>   When we use 'mpirun' from the OFED/Open MPI distribution to start
>> processes on the compute nodes, everything works correctly. However, when we
>> try to start processes on the head node, the processes appear to run
>> correctly but 'mpirun' hangs and does not terminate until killed. The
>> attached 'run1.tgz' file contains detailed information from running the
>> following command:
>> 
>>  mpirun --hostfile hostfile1 --np 1 --byslot --debug-daemons -d hostname
>> 
>> where 'hostfile1' contains the following:
>> 
>> -1 slots=2 max_slots=2
>> 
>> The 'run.log' is the output of the above line. The 'strace.out.0' is the
>> result of 'strace -f' on the mpirun process (and the 'hostname' child process
>> since mpirun simply forks the local processes). The child process (pid 23415
>> in this case) runs to completion and exits successfully. The parent process
>> (mpirun) doesn't appear to recognize that the child has completed and hangs
>> until killed (with a ^c).
>> 
>> Additionally, when we run a set of processes which span the headnode and the
>> compute nodes, the processes on the head node complete successfully, but the
>> processes on the compute nodes do not appear to start. mpirun again appears
>> to hang.
>> 
>> Do I have a configuration error or is there a problem that I have
>> encountered? Thank you in advance for your assistance or suggestions
>> 
>> Sean
>> 
>> --
>> Sean M. Kelley
>> sean.kel...@solers.com
>> 
>>  
>> 
>> 
>> ___
>> users mailing list
>> us...@open-mpi.org
>> http://www.open-mpi.org/mailman/listinfo.cgi/users
> 
> 
> 
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users





Re: [OMPI users] mpirun hanging when processes started on head node

2007-06-11 Thread Ralph H Castain
Hi Sean

Could you please clarify something? I'm a little confused by your comments
about where things are running. I'm assuming that you mean everything works
fine if you type the mpirun command on the head node and just let it launch
on your compute nodes - that the problems only occur when you specifically
tell mpirun you want processes on the head node as well (or exclusively). Is
that correct?

There are several possible sources of trouble, if I have understood your
situation correctly. Our bproc support is somewhat limited at the moment -
you may be encountering one of those limits. We currently have bproc support
focused on the configuration here at Los Alamos National Lab as (a) that is
where the bproc-related developers are working, and (b) it is the only
regular test environment we have to work with for bproc. We don't normally
use bproc in combination with hostfiles, so I'm not sure if there is a
problem in that combination. I can investigate that a little later this
week.
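
If the hostfile turns out to be the trigger, one quick way to isolate it is
to try a hostfile that names only compute nodes. The lines below are just a
sketch - they assume bproc's usual numbering (compute nodes 0, 1, 2, with -1
reserved for the master) and reuse the slot counts from your hostfile1:

  # hostfile2 - compute nodes only (node numbers are assumptions, adjust as needed)
  0 slots=2 max_slots=2
  1 slots=2 max_slots=2
  2 slots=2 max_slots=2

  mpirun --hostfile hostfile2 --np 2 --byslot --debug-daemons -d hostname

If that launches and terminates cleanly, the problem is specific to the
head-node (-1) entry rather than to hostfile handling under bproc in general.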

Similarly, we require that all the nodes being used be accessible via the
same launch environment. It sounds like we may be able to launch processes
on your head node via rsh, but not necessarily bproc. You might check to
ensure that the head node will allow bproc-based process launch (I know ours
don't - all jobs are run solely on the compute nodes, and I believe that is
generally the case). We don't currently support mixed environments, and I
honestly don't expect that to change anytime soon.
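
If you want to experiment in the meantime, the 1.2 series lets you pin
process launch to a single component through the MCA 'pls' framework.
Something along these lines would restrict selection to the rsh launcher -
this is only a sketch, and it assumes rsh/ssh access to every node, which a
Scyld/bproc cluster may well not provide:

  mpirun --mca pls rsh --hostfile hostfile1 --np 1 --byslot --debug-daemons -d hostname

Conversely, forcing the bproc component instead (--mca pls bproc) and
watching the --debug-daemons output should show whether the head-node orted
is being launched through bproc at all.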

Hope that helps at least a little.
Ralph





On 6/11/07 1:04 PM, "Kelley, Sean"  wrote:

> I forgot to add that we are using 'bproc'. Launching processes on the compute
> nodes using bproc works well; I'm not sure if bproc is involved when processes
> are launched on the local node.
>  
> Sean
> 
> 
> From: users-boun...@open-mpi.org on behalf of Kelley, Sean
> Sent: Mon 6/11/2007 2:07 PM
> To: us...@open-mpi.org
> Subject: [OMPI users] mpirun hanging when processes started on head node
> 
> Hi,
>   We are running the OFED 1.2rc4 distribution containing openmpi-1.2.2 on
> a RedhatEL4U4 system with Scyld Clusterware 4.1. The hardware configuration
> consists of a DELL 2950 as the headnode and 3 DELL 1950 blades as compute
> nodes using Cisco TopSpin InfiniBand HCAs and switches for the interconnect.
>  
>When we use 'mpirun' from the OFED/Open MPI distribution to start
> processes on the compute nodes, everything works correctly. However, when we
> try to start processes on the head node, the processes appear to run correctly
> but 'mpirun' hangs and does not terminate until killed. The attached
> 'run1.tgz' file contains detailed information from running the following
> command:
>  
>   mpirun --hostfile hostfile1 --np 1 --byslot --debug-daemons -d hostname
>  
> where 'hostfile1' contains the following:
>  
> -1 slots=2 max_slots=2
>  
> The 'run.log' is the output of the above line. The 'strace.out.0' is the
> result of 'strace -f' on the mpirun process (and the 'hostname' child process
> since mpirun simply forks the local processes). The child process (pid 23415
> in this case) runs to completion and exits successfully. The parent process
> (mpirun) doesn't appear to recognize that the child has completed and hangs
> until killed (with a ^c).
>  
> Additionally, when we run a set of processes which span the headnode and the
> compute nodes, the processes on the head node complete successfully, but the
> processes on the compute nodes do not appear to start. mpirun again appears to
> hang.
>  
> Do I have a configuration error or is there a problem that I have encountered?
> Thank you in advance for your assistance or suggestions
>  
> Sean
>  
> --
> Sean M. Kelley
> sean.kel...@solers.com
>  
>  
> 
> 
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users



