Hi Lukasz
Thank you! I have just added this code to
$GLOBUS_LOCATION/lib/perl/Globus/GRAM/JobManager/pbs.pm (around line 250):
#
# Set 'nodes and ppn' - fixed
#
if (defined $description->host_count())
{
    if (defined $description->count())
    {
        print JOB '#PBS -l ',
                  'nodes=', $description->host_count(),
                  ':ppn=', $description->count(), "\n";
    }
    else
    {
        print JOB '#PBS -l ',
                  'nodes=', $description->host_count(), "\n";
    }
}
else
{
    print JOB '#PBS -l ',
              'nodes=', $description->count(), "\n";
}
#
# Set 'nodes' - default
#
#if (defined $description->nodes())
#{
#    # Generated by ExtensionsHandler.pm from resourceAllocationGroup elements
#    print JOB '#PBS -l nodes=', $description->nodes(), "\n";
#}
#elsif ($description->host_count() != 0)
#{
#    print JOB '#PBS -l nodes=', $description->host_count(), "\n";
#}
#elsif ($cluster && $cpu_per_node != 0)
#{
#    print JOB '#PBS -l nodes=',
#        myceil($description->count() / $cpu_per_node), "\n";
#}
--
So I can run a job, for example
&(executable=mpiprog)
(job_type=mpi)
(host_count=2)
(count=4)
and it will run on 2 nodes with 4 CPUs on each of them.
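With those values, the patched block above should emit the resource request

#PBS -l nodes=2:ppn=4

into the generated PBS job script, which is what the scheduler then sees.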
2010/3/27 Lukasz Lacinski <[email protected]>
> Hi Alexey,
>
> You have to look into $GLOBUS_LOCATION/lib/perl/Globus/GRAM/JobManager/pbs.pm
> and investigate a bit how PBS job scripts are created. In your case you will
> need to change the value of $cpu_per_node from 1 to 8. It is probably defined
> around line 35.
>
> Regards,
> Lukasz
>
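(As an aside: the $cpu_per_node edit Lukasz suggests above is presumably a
one-line change near the top of pbs.pm, something along the lines of

my $cpu_per_node = 8;    # assumed declaration syntax; Lukasz notes the stock value is 1

though the exact line and how the value is configured may differ between GT
versions. The nodes/ppn block above sidesteps it by taking both values
directly from the RSL.)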
> On Mar 24, 2010, at 10:22 AM, Alexey Paznikov wrote:
>
> Hi Nikolay
>
> 2010/3/23 Nikolay Kutovskiy <[email protected]>
>
>> Hi Alexey,
>>
>> > I want to start a program that corresponds to the following TORQUE job
>> > script:
>> >
>> > #PBS -l nodes=4:ppn=8
>> > cd $PBS_O_WORKDIR
>> >
>> > mpiexec ./pi_mpi
>> Did you manage to run your job that way? If you did, could you please
>> share your experience?
>> I am also interested in running MPI jobs on GT5.
>>
>> Thanks!
>> Nikolay.
>>
>
> Unfortunately I have no idea about that so far. When I find out
> something, I will share the information with you, and I ask you to do the
> same.
>
> Alexey
>
>
>
Alexey