That's fine. But in your job script you have ppn=2.
Also check ldd cgles on the compute servers themselves.
Are all the libraries available in your path?
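A sketch of that check (assuming the binary is called cgles, as in the attachment name, and that the Open MPI install path is /opt/openmpi — adjust for your site):

```shell
# Run this on a compute node (e.g. inside an interactive job), not only
# on the login node, since the library environment can differ between them.
# Any line reporting "not found" points to a shared library the runtime
# linker cannot resolve on that node.
ldd ./cgles | grep "not found"

# If something is missing, extend LD_LIBRARY_PATH in the job script,
# e.g. with the Open MPI library directory (the path below is an assumption):
export LD_LIBRARY_PATH=/opt/openmpi/lib:$LD_LIBRARY_PATH
```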
On 25 April 2018 at 11:43, Ankita m wrote:
> I have 16 cores per node. I usually use 4 nodes, each with 16 cores,
> so 64 processes in total.
>
Ankita, please read here: https://www.open-mpi.org/faq/?category=mpi-apps
On 25 April 2018 at 11:44, Ankita m wrote:
> Can you please tell me whether to use the mpicc compiler or another
> compiler for Open MPI programs?
>
> On Wed, Apr 25, 2018 at 3:13 PM, Ankita m wrote:
>
>> I have 16 cores per node. I usually use 4 nodes, each with 16 cores,
>> so 64 processes in total.
Can you please tell me whether to use the mpicc compiler or another
compiler for Open MPI programs?
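For Open MPI programs, the usual answer is to use the wrapper compilers that ship with Open MPI (mpicc for C, mpicxx for C++, mpifort for Fortran): they invoke the underlying system compiler and add the MPI include and link flags for you. A minimal sketch (hello.c is a hypothetical source file name):

```shell
# Compile a C MPI program with the Open MPI wrapper compiler.
mpicc -o hello hello.c

# --showme prints the underlying compiler command the wrapper would run,
# which is useful for checking which compiler and flags are in effect.
mpicc --showme
```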
On Wed, Apr 25, 2018 at 3:13 PM, Ankita m wrote:
> I have 16 cores per node. I usually use 4 nodes, each with 16 cores,
> so 64 processes in total.
>
> On Wed, Apr 25, 2018 at 2:57 PM, John Hearns via users <
> users@lists.open-mpi.org> wrote:
I have 16 cores per node. I usually use 4 nodes, each with 16 cores,
so 64 processes in total.
On Wed, Apr 25, 2018 at 2:57 PM, John Hearns via users <
users@lists.open-mpi.org> wrote:
> I do not see much wrong with that.
> However nodes=4 ppn=2 makes 8 processes in all.
> You are using mpirun -np 64
I do not see much wrong with that.
However, nodes=4 ppn=2 makes 8 processes in all,
yet you are using mpirun -np 64.
Actually it is better practice to use the PBS-supplied environment
variables during the job, rather than hard-wiring 64.
I don't have access to a PBS cluster from my desk at the moment.
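A sketch of that approach, assuming the executable name and resource request from this thread (cgles, 4 nodes with 16 cores each): PBS writes one line per allocated core to $PBS_NODEFILE, so its line count gives the total process count without hard-wiring it.

```shell
#!/bin/bash
#PBS -l nodes=4:ppn=16
#PBS -N cgles

cd "$PBS_O_WORKDIR"

# Count one line per allocated core in the PBS node file; with
# nodes=4:ppn=16 this yields 64, without hard-coding the number.
NP=$(wc -l < "$PBS_NODEFILE")
mpirun -np "$NP" ./cgles
```

With this script, changing the `#PBS -l` request automatically changes the mpirun process count to match.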
>
>
> While using Open MPI 1.4.5 the program ended with this error file
> (see the attachment).
>
I am using a PBS file. Below you can find the script that I am using to run
my program.
Attachment: cgles.err (binary data)
Attachment: run.pbs (binary data)