Re: [OMPI users] running openmpi with specified lib path

2013-05-07 Thread Duke Nguyen
On 5/7/13 9:48 PM, Ralph Castain wrote: On May 7, 2013, at 7:39 AM, Duke Nguyen <duke.li...@gmx.com> wrote: No, I didn't (and that may be the reason; I am not really sure I was correct when installing these things). What I did wa

Re: [OMPI users] running openmpi with specified lib path

2013-05-07 Thread Duke Nguyen
enmpi/mca_ess_slurmd.so: undefined symbol: orte_pmap_t_class (ignored) mpirun: symbol lookup error: /usr/local/lib/openmpi/mca_ess_singleton.so: undefined symbol: orte_util_setup_local_nidmap_entries gave the same error. On May 7, 2013, at 7:11 AM, Duke Nguyen <duke.li..
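The undefined-symbol errors above are what stale MCA plugins look like: a 1.7.2 mpirun dlopens 1.6.3 components from /usr/local/lib/openmpi and hits missing ORTE symbols. A minimal sketch of pointing everything at one install (the /opt/apps prefix is assumed from the thread):

```shell
# Sketch, assuming the 1.7.2 tree lives under /opt/apps as in the thread.
# Both the runtime loader and Open MPI's own component search path must
# point at the same install, or mpirun picks up the old 1.6.3 plugins.
OMPI_PREFIX=/opt/apps/openmpi/openmpi-1.7.2
export PATH="$OMPI_PREFIX/bin:$PATH"
export LD_LIBRARY_PATH="$OMPI_PREFIX/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
export OPAL_PREFIX="$OMPI_PREFIX"   # relocates the MCA component search path too
echo "${LD_LIBRARY_PATH%%:*}"       # first loader entry should be 1.7.2's lib
```

With OPAL_PREFIX set, ompi_info and mpirun resolve their plugin directory relative to the intended install rather than the configure-time default.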

Re: [OMPI users] running openmpi with specified lib path

2013-05-07 Thread Duke Nguyen
ed once the module is called. D. On May 7, 2013, at 7:01 AM, Duke Nguyen <duke.li...@gmx.com> wrote: On 5/7/13 7:02 PM, Reuti wrote: Hi, On 07.05.2013 at 13:36, Duke Nguyen wrote: I am testing our

Re: [OMPI users] running openmpi with specified lib path

2013-05-07 Thread Duke Nguyen
On 5/7/13 8:10 PM, Jeff Squyres (jsquyres) wrote: On May 7, 2013, at 7:36 AM, Duke Nguyen <duke.li...@gmx.com> wrote: So apparently openmpi 1.7.2 looks for the old 1.6.3 library at /usr/local/lib/openmpi instead of at /opt/apps/openmpi/openmpi-1.7.

Re: [OMPI users] running openmpi with specified lib path

2013-05-07 Thread Duke Nguyen
On 5/7/13 7:02 PM, Reuti wrote: Hi, On 07.05.2013 at 13:36, Duke Nguyen wrote: I am testing our cluster with module environment, and am having a headache trying to understand openmpi 1.7.2!!! So our system currently has openmpi 1.6.3 (at default locatio

[OMPI users] running openmpi with specified lib path

2013-05-07 Thread Duke Nguyen
Hi folks, I am testing our cluster with module environment, and am having a headache trying to understand openmpi 1.7.2!!! So our system currently has openmpi 1.6.3 (at default location /usr/local), 1.6.4 and 1.7.2 compiled with Intel compilers (installed at /opt/apps). In order to use openmpi 1.7.2 for
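For the module setup described above, a minimal Tcl modulefile sketch; the prefix is taken from the thread, but the module layout and names are hypothetical:

```tcl
#%Module1.0
## openmpi/1.7.2 -- hypothetical modulefile; install prefix assumed from the post
set prefix /opt/apps/openmpi/openmpi-1.7.2
prepend-path PATH            $prefix/bin
prepend-path LD_LIBRARY_PATH $prefix/lib
prepend-path MANPATH         $prefix/share/man
# OPAL_PREFIX keeps mpirun's plugin search inside this install as well
setenv       OPAL_PREFIX     $prefix
```

After `module load openmpi/1.7.2`, `which mpirun` should report the /opt/apps binary rather than /usr/local/bin/mpirun.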

Re: [OMPI users] memory per core/process

2013-04-02 Thread Duke Nguyen
On 4/2/13 11:03 PM, Gus Correa wrote: On 04/02/2013 11:40 AM, Duke Nguyen wrote: On 3/30/13 8:46 PM, Patrick Bégou wrote: Ok, so your problem is identified as a stack size problem. I ran into these limitations using Intel Fortran compilers on large data problems. First, it seems you can

Re: [OMPI users] memory per core/process

2013-04-02 Thread Duke Nguyen
fading memory - what version of OMPI are you using? I'll backport a patch for you. It's openmpi-1.6.3-x86_64, if that helps... On Apr 2, 2013, at 8:40 AM, Duke Nguyen <duke.li...@gmx.com> wrote: On 3/30/13 8:46 PM, Patrick Bégou wrote: Ok, so your problem is identified as a stack size p

Re: [OMPI users] memory per core/process

2013-04-02 Thread Duke Nguyen
can only access 1/12 of the node's memory. If he needs more memory he has to request 2 cores, even if he runs a sequential code. This avoids crashing other users' jobs on the same node with memory requirements. But this is not configured on your node. Duke Nguyen wrote: On 3/30/13 3:13 PM,
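Under the per-core memory policy Patrick describes, the user-side workaround is to request extra cores with the job. A dry-run sketch of such a Torque request (the script name and limits are hypothetical):

```shell
# Print the qsub line a user would submit when a serial job needs
# roughly two cores' worth of a node's memory (values hypothetical).
CORES=2
CMD="qsub -l nodes=1:ppn=$CORES -l mem=4gb job.sh"
echo "$CMD"
```

The job still runs one process; the second core slot only reserves its memory share so the scheduler does not co-locate another memory-hungry job.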

Re: [OMPI users] memory per core/process

2013-04-02 Thread Duke Nguyen
posts were moderated since I am a newcomer, but it seems nobody is managing those forums, so my posts never got through...). I wish they were as active as this forum... D. -- Reuti Duke Nguyen wrote: On 3/30/13 3:13 PM, Patrick Bégou wrote: I do not know about your co

Re: [OMPI users] memory per core/process

2013-04-02 Thread Duke Nguyen
On 4/2/13 6:42 PM, Reuti wrote: /usr/local/bin/mpirun -npernode 1 -tag-output sh -c "ulimit -a" You are right :) $ /usr/local/bin/mpirun -npernode 1 -tag-output sh -c "ulimit -a" [1,0]:core file size (blocks, -c) 0 [1,0]:data seg size (kbytes, -d) unlimited

Re: [OMPI users] memory per core/process

2013-04-02 Thread Duke Nguyen
all the outputs will be interleaved; this will help you identify what came from each node. On Mar 31, 2013, at 11:30 PM, Duke Nguyen <duke.li...@gmx.com> wrote: On 3/31/13 12:20 AM, Duke Nguyen wrote: I should really have asked earlier. Thanks for all the help. I think I was excited too soon :).

Re: [OMPI users] memory per core/process

2013-04-01 Thread Duke Nguyen
On 3/31/13 12:20 AM, Duke Nguyen wrote: I should really have asked earlier. Thanks for all the help. I think I was excited too soon :). Increasing the stack size does help if I run a job on a dedicated server. Today I tried to modify the cluster (/etc/security/limits.conf, /etc/init.d/pbs_mom
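The cluster-wide change Duke describes usually needs both pieces below; the exact values are assumptions, and pbs_mom must be restarted so the running daemon (and the jobs it spawns) inherits the new limit:

```
# /etc/security/limits.conf -- raise the stack limit for batch users (values assumed)
*   soft   stack   unlimited
*   hard   stack   unlimited

# /etc/init.d/pbs_mom -- add before the daemon is started, so spawned jobs inherit it
ulimit -s unlimited
```

Note that limits.conf is applied by PAM at login; daemons started at boot bypass it, which is why the pbs_mom init script needs its own ulimit line.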

Re: [OMPI users] memory per core/process

2013-03-30 Thread Duke Nguyen
ia.edu> wrote: On Mar 30, 2013, at 10:02 AM, Duke Nguyen wrote: On 3/30/13 8:20 PM, Reuti wrote: On 30.03.2013 at 13:26, Tim Prince wrote: On 03/30/2013 06:36 AM, Duke Nguyen wrote: On 3/30/13 5:22 PM, Duke Nguyen wrote: On 3/30/13 3:13 PM, Patrick Bégou wrote: I do not know about your code

Re: [OMPI users] memory per core/process

2013-03-30 Thread Duke Nguyen
On 3/30/13 8:20 PM, Reuti wrote: On 30.03.2013 at 13:26, Tim Prince wrote: On 03/30/2013 06:36 AM, Duke Nguyen wrote: On 3/30/13 5:22 PM, Duke Nguyen wrote: On 3/30/13 3:13 PM, Patrick Bégou wrote: I do not know about your code but: 1) did you check stack limitations? Typically Intel

Re: [OMPI users] memory per core/process

2013-03-30 Thread Duke Nguyen
On 3/30/13 5:22 PM, Duke Nguyen wrote: On 3/30/13 3:13 PM, Patrick Bégou wrote: I do not know about your code but: 1) did you check stack limitations? Typically Intel Fortran codes need a large amount of stack when the problem size increases. Check ulimit -a. First time I heard of stack
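Patrick's first check can be scripted directly in the job. A minimal sketch; the 8192 kB figure is the common Linux default soft limit, stated here as an assumption:

```shell
# Inspect and set the soft stack limit before launching the MPI job.
# 8192 kB is the usual Linux default; Intel Fortran codes with large
# automatic arrays often need far more (or "unlimited").
ulimit -S -s 8192        # set the soft limit explicitly (kbytes)
ulimit -S -s             # print it back to confirm
```

In a real job script the value would typically be raised (e.g. `ulimit -s unlimited`) rather than pinned to the default.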

Re: [OMPI users] memory per core/process

2013-03-30 Thread Duke Nguyen
for a job? I don't really understand (also the first time I've heard of fake NUMA), but I am pretty sure we do not have such things. The server I tried was a dedicated server with 2 x5420 CPUs and 16GB physical memory. Patrick Duke Nguyen wrote: Hi folks, I am sorry if this question had been asked

[OMPI users] memory per core/process

2013-03-30 Thread Duke Nguyen
Hi folks, I am sorry if this question has been asked before, but after ten days of searching/working on the system, I surrender :(. We try to use mpirun to run abinit (abinit.org), which in turn reads an input file to run some simulation. The command to run is pretty simple $ mpirun -np
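A dry-run sketch of the truncated command above; the process count and file names are hypothetical, and abinit conventionally reads a ".files" list on stdin:

```shell
# Build the launch line and print it instead of executing it here
# (abinit and the input files are not present in this sketch).
NP=8                                            # hypothetical process count
CMD="mpirun -np $NP abinit < run.files > run.log 2>&1"
echo "$CMD"
```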

Re: [OMPI users] control openmpi or force to use pbs?

2013-02-18 Thread Duke Nguyen
the helps/suggestions/comments, D. On 2/6/13 10:58 PM, Reuti wrote: On 06.02.2013 at 16:45, Duke Nguyen wrote: On 2/6/13 10:06 PM, Jeff Squyres (jsquyres) wrote: On Feb 6, 2013, at 5:11 AM, Reuti <re...@staff.uni-marburg.de> wrote:

Re: [OMPI users] control openmpi or force to use pbs?

2013-02-06 Thread Duke Nguyen
On 2/6/13 10:06 PM, Jeff Squyres (jsquyres) wrote: On Feb 6, 2013, at 5:11 AM, Reuti wrote: Thanks Reuti and Jeff, you are right, users should not be allowed to ssh to all nodes, which is how our cluster was set up: users can even password-less ssh to

Re: [OMPI users] control openmpi or force to use pbs?

2013-02-06 Thread Duke Nguyen
On 2/5/13 11:20 PM, John Hearns wrote: LART your users. It's the only way. They will thank you for it, eventually. www.catb.org/jargon/html/L/LART.html Not sure if I get this right (first time I've heard of LART anyway)... Do you mean

Re: [OMPI users] control openmpi or force to use pbs?

2013-02-06 Thread Duke Nguyen
On 2/6/13 1:03 AM, Gus Correa wrote: On 02/05/2013 08:52 AM, Jeff Squyres (jsquyres) wrote: To add to what Reuti said, if you enable PBS support in Open MPI, when users "mpirun ..." in a PBS job, Open MPI will automatically use the PBS native launching mechanism, which won't let you run

[OMPI users] control openmpi or force to use pbs?

2013-02-05 Thread Duke Nguyen
Hi all, please advise me how to force our users to use PBS instead of "mpirun --hostfile"? Or how do I control mpirun so that any user using "mpirun --hostfile" will not overload the cluster? We have OpenMPI installed with Torque/Maui and we can control users' limits (total number of procs,
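As the replies in this thread note, the usual fix is to build Open MPI against Torque's tm library, so mpirun launches through pbs_mom and stays inside the job's allocation. A dry-run sketch with assumed paths (printed rather than executed here):

```shell
# Assumed locations: adjust to where Torque and the target prefix live.
# With tm support, mpirun uses the PBS-native launcher and ignores
# --hostfile entries outside the nodes granted in $PBS_NODEFILE.
PREFIX=/opt/apps/openmpi/openmpi-1.6.3   # assumed install prefix
TM=/usr/local                            # assumed Torque install location
CMD="./configure --prefix=$PREFIX --with-tm=$TM"
echo "$CMD"                              # then: make && make install
```

Afterwards `ompi_info` should list tm components among the ras/plm frameworks, confirming the Torque integration was compiled in.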