That's what I suspected. I suggest you talk to your sysadmin about how PBS is 
configured; it looks like you are only getting one node allocated despite your 
request for two. Probably something in the config needs adjusting.
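
For example, a quick check your admin could run (just a sketch, assuming 
qmgr access; the current defaults may be there for a reason):

  # show the current server and queue settings
  qmgr -c 'print server'

  # server-wide defaults like these can silently shrink a nodes=2:ppn=4
  # request down to a single node; clearing them is one possible fix
  qmgr -c 'unset server resources_default.nodes'
  qmgr -c 'unset server resources_default.nodect'
  qmgr -c 'unset server resources_default.neednodes'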

On Jun 15, 2010, at 7:20 AM, Govind Songara wrote:

> I added $PBS_NODEFILE to the script in my last email below.
> It shows only one node; here is the output:
> ===============
> node47.beowulf.cluster node47.beowulf.cluster node47.beowulf.cluster 
> node47.beowulf.cluster
> This job has allocated 4 nodes
> Hello World! from process 1 out of 4 on node47.beowulf.cluster
> Hello World! from process 2 out of 4 on node47.beowulf.cluster
> Hello World! from process 3 out of 4 on node47.beowulf.cluster
> Hello World! from process 0 out of 4 on node47.beowulf.cluster
> ===============
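> 
> For comparison, if the nodes=2:ppn=4 request were honored, $PBS_NODEFILE
> would normally contain eight lines, four per host, along the lines of
> (the second hostname is hypothetical):
> node47.beowulf.cluster
> node47.beowulf.cluster
> node47.beowulf.cluster
> node47.beowulf.cluster
> node48.beowulf.cluster
> node48.beowulf.cluster
> node48.beowulf.cluster
> node48.beowulf.cluster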
> 
> On 15 June 2010 13:41, Ralph Castain <r...@open-mpi.org> wrote:
> Look at the contents of $PBS_NODEFILE and see how many nodes it contains.
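> 
> For example, assuming standard coreutils, this prints each distinct host
> together with its slot count:
> 
>   sort $PBS_NODEFILE | uniq -c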
> 
> On Jun 15, 2010, at 3:56 AM, Govind Songara wrote:
> 
>> Hi,
>> 
>> I am using an Open MPI build with tm support.
>> When I run a job requesting two nodes, it runs only on a single node.
>> Here is my script:
>> >cat mpipbs-script.sh
>> #PBS -N mpipbs-script
>> #PBS -q short
>> ### Number of nodes: resources per node
>> ### (4 cores/node, so ppn=4 is ALL resources on the node)
>> #PBS -l nodes=2:ppn=4
>> 
>> echo `cat $PBS_NODEFILE`
>> NPROCS=`wc -l < $PBS_NODEFILE`
>> echo This job has allocated $NPROCS nodes
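>> # note: $PBS_NODEFILE has one line per allocated processor slot, so
>> # $NPROCS is really a slot count, not a node count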
>> 
>> /opt/openmpi-1.4.2/bin/mpirun /scratch0/gsongara/mpitest/hello
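>> # (with tm support, mpirun should pick up the host list and process count
>> # directly from Torque; without tm, something like this would be needed:
>> #   mpirun -np $NPROCS -machinefile $PBS_NODEFILE /scratch0/gsongara/mpitest/hello)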
>> 
>> 
>> Torque config:
>> set queue short resources_max.nodes = 4
>> set queue short resources_default.nodes = 1
>> set server resources_default.neednodes = 1
>> set server resources_default.nodect = 1
>> set server resources_default.nodes = 1
>> 
>> Can someone please advise if I am missing anything here?
>> 
>> Regards,
>> Govind