Re: [OMPI users] Fwd: Fwd: problem in cluster

2018-04-25 Thread John Hearns via users
That's fine. But in your job script you have ppn=2, so the job only gets 4 x 2 = 8 slots.

Also check   ldd cgles   on the compute servers themselves.
Are all the libraries available in your path on those nodes?
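
As a rough sketch (assuming the executable is called cgles, it sits on a
shared filesystem, and passwordless ssh to the allocated nodes works, as is
typical on a PBS cluster), something like this inside the job script would
report any unresolved libraries on each compute node:

    # Hypothetical check: report libraries that fail to resolve on every
    # node allocated to the job.
    for node in $(sort -u "$PBS_NODEFILE"); do
        echo "=== $node ==="
        ssh "$node" "cd $PBS_O_WORKDIR && ldd ./cgles | grep 'not found'" || true
    done

Any line reading "not found" means that library is missing from the loader
path (e.g. LD_LIBRARY_PATH) on that node.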


On 25 April 2018 at 11:43, Ankita m  wrote:

> I have 16 cores per node. I usually use 4 nodes, each with 16 cores,
> so 64 processes in total.
>
> On Wed, Apr 25, 2018 at 2:57 PM, John Hearns via users <
> users@lists.open-mpi.org> wrote:
>
>> I do not see much wrong with that.
>> However, nodes=4 ppn=2 makes 8 processes in all,
>> yet you are using mpirun -np 64.
>>
>> Actually, it is better practice to use the PBS-supplied environment
>> variables during the job rather than hard-wiring 64.
>> I don't have access to a PBS cluster from my desk at the moment.
>> You could also investigate using mpiprocs=2. Then, I think, with Open MPI
>> (if it has been compiled with PBS support) all you would have to do is
>> run mpirun without -np.
>>
>> Are you sure your compute servers only have two cores?
>>
>> I also see that you are commenting out the module load openmpi-3.0.1. I
>> would guess you want the default Open MPI, which is OK.
>>
>> First thing I would do, before the mpirun line in that job script:
>>
>> which mpirun   (check that you are picking up an Open MPI version)
>>
>> ldd ./cgles   (check you are bringing in the libraries that you should)
>>
>> Also run mpirun with the verbose flag -v.
>>
>> On 25 April 2018 at 11:10, Ankita m  wrote:
>>
>>>
 while using openmpi-1.4.5 the program ended by showing this error file
 (in the attachment)

>>>
>>>  I am using a PBS file. Below you can find the script that I am using to
>>> run my program.
>>>
___
users mailing list
users@lists.open-mpi.org
https://lists.open-mpi.org/mailman/listinfo/users

Re: [OMPI users] Fwd: Fwd: problem in cluster

2018-04-25 Thread John Hearns via users
Ankita, please read here: https://www.open-mpi.org/faq/?category=mpi-apps
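
That FAQ covers building MPI applications. In short, Open MPI ships wrapper
compilers (mpicc for C, mpicxx/mpic++ for C++, mpifort for Fortran) that add
the correct include and library flags for you, so a minimal build could look
like this (the source and executable names are only placeholders):

    # Hypothetical example: build an MPI C program with the Open MPI
    # wrapper compiler, then launch it.
    mpicc -O2 -o cgles cgles.c
    mpirun -np 4 ./cgles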

On 25 April 2018 at 11:44, Ankita m  wrote:

> Can you please tell me whether to use the mpicc compiler or any other compiler
> for Open MPI programs?
>
> On Wed, Apr 25, 2018 at 3:13 PM, Ankita m  wrote:
>
>> I have 16 cores per node. I usually use 4 nodes, each with 16 cores,
>> so 64 processes in total.
>>
>> On Wed, Apr 25, 2018 at 2:57 PM, John Hearns via users <
>> users@lists.open-mpi.org> wrote:
>>
>>> I do not see much wrong with that.
>>> However, nodes=4 ppn=2 makes 8 processes in all,
>>> yet you are using mpirun -np 64.
>>>
>>> Actually, it is better practice to use the PBS-supplied environment
>>> variables during the job rather than hard-wiring 64.
>>> I don't have access to a PBS cluster from my desk at the moment.
>>> You could also investigate using mpiprocs=2. Then, I think, with Open MPI
>>> (if it has been compiled with PBS support) all you would have to do is
>>> run mpirun without -np.
>>>
>>> Are you sure your compute servers only have two cores?
>>>
>>> I also see that you are commenting out the module load openmpi-3.0.1. I
>>> would guess you want the default Open MPI, which is OK.
>>>
>>> First thing I would do, before the mpirun line in that job script:
>>>
>>> which mpirun   (check that you are picking up an Open MPI version)
>>>
>>> ldd ./cgles   (check you are bringing in the libraries that you should)
>>>
>>> Also run mpirun with the verbose flag -v.
>>>
>>> On 25 April 2018 at 11:10, Ankita m  wrote:
>>>

> while using openmpi-1.4.5 the program ended by showing this error
> file (in the attachment)
>

  I am using a PBS file. Below you can find the script that I am using to
 run my program.

___
users mailing list
users@lists.open-mpi.org
https://lists.open-mpi.org/mailman/listinfo/users

Re: [OMPI users] Fwd: Fwd: problem in cluster

2018-04-25 Thread Ankita m
Can you please tell me whether to use the mpicc compiler or any other compiler
for Open MPI programs?

On Wed, Apr 25, 2018 at 3:13 PM, Ankita m  wrote:

> I have 16 cores per node. I usually use 4 nodes, each with 16 cores,
> so 64 processes in total.
>
> On Wed, Apr 25, 2018 at 2:57 PM, John Hearns via users <
> users@lists.open-mpi.org> wrote:
>
>> I do not see much wrong with that.
>> However, nodes=4 ppn=2 makes 8 processes in all,
>> yet you are using mpirun -np 64.
>>
>> Actually, it is better practice to use the PBS-supplied environment
>> variables during the job rather than hard-wiring 64.
>> I don't have access to a PBS cluster from my desk at the moment.
>> You could also investigate using mpiprocs=2. Then, I think, with Open MPI
>> (if it has been compiled with PBS support) all you would have to do is
>> run mpirun without -np.
>>
>> Are you sure your compute servers only have two cores?
>>
>> I also see that you are commenting out the module load openmpi-3.0.1. I
>> would guess you want the default Open MPI, which is OK.
>>
>> First thing I would do, before the mpirun line in that job script:
>>
>> which mpirun   (check that you are picking up an Open MPI version)
>>
>> ldd ./cgles   (check you are bringing in the libraries that you should)
>>
>> Also run mpirun with the verbose flag -v.
>>
>> On 25 April 2018 at 11:10, Ankita m  wrote:
>>
>>>
 while using openmpi-1.4.5 the program ended by showing this error file
 (in the attachment)

>>>
>>>  I am using a PBS file. Below you can find the script that I am using to
>>> run my program.
>>>
___
users mailing list
users@lists.open-mpi.org
https://lists.open-mpi.org/mailman/listinfo/users

Re: [OMPI users] Fwd: Fwd: problem in cluster

2018-04-25 Thread Ankita m
I have 16 cores per node. I usually use 4 nodes, each with 16 cores,
so 64 processes in total.
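
For that layout, the matching request and launch line would be along these
lines (a rough sketch of a typical PBS/Torque script only; the walltime is
illustrative):

    #PBS -l nodes=4:ppn=16          # 4 nodes x 16 cores = 64 slots
    #PBS -l walltime=01:00:00
    cd $PBS_O_WORKDIR
    mpirun -np 64 ./cgles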

On Wed, Apr 25, 2018 at 2:57 PM, John Hearns via users <
users@lists.open-mpi.org> wrote:

> I do not see much wrong with that.
> However, nodes=4 ppn=2 makes 8 processes in all,
> yet you are using mpirun -np 64.
>
> Actually, it is better practice to use the PBS-supplied environment
> variables during the job rather than hard-wiring 64.
> I don't have access to a PBS cluster from my desk at the moment.
> You could also investigate using mpiprocs=2. Then, I think, with Open MPI
> (if it has been compiled with PBS support) all you would have to do is
> run mpirun without -np.
>
> Are you sure your compute servers only have two cores?
>
> I also see that you are commenting out the module load openmpi-3.0.1. I
> would guess you want the default Open MPI, which is OK.
>
> First thing I would do, before the mpirun line in that job script:
>
> which mpirun   (check that you are picking up an Open MPI version)
>
> ldd ./cgles   (check you are bringing in the libraries that you should)
>
> Also run mpirun with the verbose flag -v.
>
> On 25 April 2018 at 11:10, Ankita m  wrote:
>
>>
>>> while using openmpi-1.4.5 the program ended by showing this error file
>>> (in the attachment)
>>>
>>
>>  I am using a PBS file. Below you can find the script that I am using to run
>> my program.
>>
___
users mailing list
users@lists.open-mpi.org
https://lists.open-mpi.org/mailman/listinfo/users

Re: [OMPI users] Fwd: Fwd: problem in cluster

2018-04-25 Thread John Hearns via users
I do not see much wrong with that.
However, nodes=4 ppn=2 makes 8 processes in all,
yet you are using mpirun -np 64.

Actually, it is better practice to use the PBS-supplied environment
variables during the job rather than hard-wiring 64.
I don't have access to a PBS cluster from my desk at the moment.
You could also investigate using mpiprocs=2. Then, I think, with Open MPI
(if it has been compiled with PBS support) all you would have to do is
run mpirun without -np, as sketched below.
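
As a rough sketch (not a definitive recipe; the resource line and executable
name are only illustrative), the process count can be taken from what PBS
actually allocated instead of being hard-wired:

    #PBS -l nodes=4:ppn=16
    cd $PBS_O_WORKDIR
    NP=$(wc -l < "$PBS_NODEFILE")    # one line per allocated slot
    mpirun -np "$NP" ./cgles
    # If Open MPI was built with PBS/Torque (tm) support, a plain
    # "mpirun ./cgles" should pick up the allocation by itself.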

Are you sure your compute servers only have two cores?

I also see that you are commenting out the module load openmpi-3.0.1. I
would guess you want the default Open MPI, which is OK.

First thing I would do, before the mpirun line in that job script:

which mpirun   (check that you are picking up an Open MPI version)

ldd ./cgles   (check you are bringing in the libraries that you should)

Also run mpirun with the verbose flag -v.
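
Concretely, a minimal sanity-check block just before the launch line might
look like this (only a sketch; use whatever module your site actually
provides):

    module load openmpi-3.0.1   # or your site's default Open MPI module
    which mpirun                # confirm which Open MPI you are picking up
    mpirun --version
    ldd ./cgles                 # confirm all shared libraries resolve
    mpirun -v -np 64 ./cgles    # or use the PBS-derived count shown above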
On 25 April 2018 at 11:10, Ankita m  wrote:

>
>> while using openmpi-1.4.5 the program ended by showing this error file
>> (in the attachment)
>>
>
>  I am using a PBS file. Below you can find the script that I am using to run
> my program.
>
___
users mailing list
users@lists.open-mpi.org
https://lists.open-mpi.org/mailman/listinfo/users