Re: [OMPI users] Fwd: Fwd: problem in cluster

2018-04-25 Thread John Hearns via users
That's fine. But in your job script, ppn=2. Also check ldd cgles on the compute servers themselves. Are all the libraries available in your path? On 25 April 2018 at 11:43, Ankita m wrote: > i have 16 cores per one node. I usually use 4 node each node has 16 cores > so
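John's library check can be run on each compute node. A minimal sketch, assuming the executable is named cgles (as in the attachment later in this thread) and that Open MPI may live in a non-default prefix such as /opt/openmpi (hypothetical path):

```shell
# Run this on each compute node (e.g. via ssh or an interactive job).
# Any line containing "not found" means a shared library the binary
# needs is missing from the runtime linker path on that node.
ldd ./cgles | grep "not found"

# If Open MPI is installed in a non-default prefix, make its libraries
# visible before launching the job (path is an assumption):
export LD_LIBRARY_PATH=/opt/openmpi/lib:$LD_LIBRARY_PATH
```

If the grep prints nothing, all shared libraries resolved on that node; differences between nodes usually point at an inconsistent install or environment.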

Re: [OMPI users] Fwd: Fwd: problem in cluster

2018-04-25 Thread Ankita m
Can you please tell me whether to use the mpicc compiler or any other compiler for Open MPI programs? On Wed, Apr 25, 2018 at 3:13 PM, Ankita m wrote: > i have 16 cores per one node. I usually use 4 node each node has 16 cores > so total 64 processes. > > On Wed, Apr 25, 2018
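For Open MPI, mpicc is the standard wrapper for C programs (mpicxx for C++, mpif90 for Fortran); it invokes the underlying compiler with the correct MPI include and link flags added. A sketch, assuming the source file is cgles.c (hypothetical name, based on the cgles executable mentioned in this thread):

```shell
# Compile with the Open MPI wrapper; it supplies the -I/-L/-l
# flags for MPI automatically, so no manual paths are needed.
mpicc -O2 -o cgles cgles.c

# To inspect exactly what the wrapper passes to the real compiler:
mpicc --showme
```

Using the wrapper rather than calling gcc directly avoids mismatches between the headers you compile against and the MPI libraries you link at run time.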

Re: [OMPI users] Fwd: Fwd: problem in cluster

2018-04-25 Thread Ankita m
I have 16 cores per node. I usually use 4 nodes; each node has 16 cores, so 64 processes in total. On Wed, Apr 25, 2018 at 2:57 PM, John Hearns via users < users@lists.open-mpi.org> wrote: > I do not see much wrong with that. > However nodes=4 ppn=2 makes 8 processes in all. > You are using

Re: [OMPI users] Fwd: Fwd: problem in cluster

2018-04-25 Thread John Hearns via users
I do not see much wrong with that. However, nodes=4 ppn=2 makes 8 processes in all, yet you are using mpirun -np 64. Actually it is better practice to use the PBS-supplied environment variables during the job, rather than hard-wiring 64. I don't have access to a PBS cluster from my desk at the
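The PBS-supplied variables John mentions let the job script derive the process count from the actual allocation instead of hard-wiring 64. A sketch of a job script under that approach (the resource request and program name are assumptions based on this thread):

```shell
#!/bin/bash
#PBS -l nodes=4:ppn=16
#PBS -N cgles

# PBS starts the job in the home directory; move to where qsub was run.
cd "$PBS_O_WORKDIR"

# $PBS_NODEFILE lists one line per allocated core, so counting its
# lines always matches the -l request (here 64: 4 nodes x 16 cores).
NPROCS=$(wc -l < "$PBS_NODEFILE")

mpirun -np "$NPROCS" ./cgles
```

With this pattern, changing the #PBS -l line is enough; the mpirun process count follows automatically, which avoids the nodes=4 ppn=2 versus -np 64 mismatch discussed above.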

[OMPI users] Fwd: Fwd: problem in cluster

2018-04-25 Thread Ankita m
> > > while using openmpi-1.4.5 the program ended by showing this error file > (in the attachment) > I am using a PBS file. Below you can find the script that I am using to run my program. cgles.err Description: Binary data run.pbs Description: Binary data