$ qconf -sc
h_vmem              h_vmem      MEMORY   <=   FORCED   YES
--
$ qconf -sp mpi30-lb
pe_name            mpi30-lb
slots              9999
user_lists         NONE
xuser_lists        NONE
start_proc_args    /opt/gridengine/mpi/startmpi.sh $pe_hostfile
stop_proc_args     /opt/gridengine/mpi/stopmpi.sh
allocation_rule    30
control_slaves     TRUE
job_is_first_task  TRUE
urgency_slots      min
accounting_summary TRUE
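(Note: the only difference between the two PEs is the allocation_rule. A fixed rule of 30 places exactly 30 slots on each host the job uses, so a 60-slot job always lands on exactly two hosts; $fill_up, used by mpifill below, instead takes as many free slots as each host offers before moving on to the next.)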
--
$ qconf -sp mpifill
pe_name            mpifill
slots              9999
user_lists         NONE
xuser_lists        NONE
start_proc_args    /opt/gridengine/mpi/startmpi.sh $pe_hostfile
stop_proc_args     /opt/gridengine/mpi/stopmpi.sh
allocation_rule    $fill_up
control_slaves     TRUE
job_is_first_task  TRUE
urgency_slots      min
accounting_summary TRUE

On Mon, Jul 23, 2012 at 12:11 PM, William Hay <[email protected]> wrote:
> On 23 July 2012 08:19, mahbube rustaee <[email protected]> wrote:
> > Hi,
> > I defined pe mpi30-lb with allocation_rule 30 and mpifill with
> > allocation_rule $fill_up.
> > h_vmem is set to real memory.
> >
> > I submitted a job with these SGE options:
> >
> > #$ -S /bin/bash
> > #$ -N jobname
> > #$ -cwd
> > #$ -l h_vmem=2G
> > #$ -j y
> > #$ -pe mpifill 60
> >
> > No problem, the job runs.
> >
> > I changed h_vmem and the pe like this:
> >
> > #$ -l h_vmem=2G
> > #$ -j y
> > #$ -pe mpi30-lb 60
> >
> > The job is deleted immediately (killed) because of the h_vmem value.
> >
> > Is h_vmem a per-slot memory request in the two scripts above?
>
> That depends on whether you've marked it consumable, I believe.
>
> > Any hint?
>
> Perhaps I'm just being blind, but I don't see a difference in the
> h_vmem requests above. Given that the only difference is the pe
> requested, details of the pe config might help us understand. I'm
> wondering if something in start_proc_args for mpi30-lb uses more than
> 2G of h_vmem and this triggers the kill. Alternatively, if the PEs are
> on different types of node, this might cause whatever code you are
> running to request different amounts of memory.
>
> William
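A rough back-of-the-envelope check, on the assumption that both PEs draw on the same hosts: h_vmem is consumable (YES in the qconf -sc output above), so the 2G request is counted per slot, and the total reserved and enforced on a host is slots-on-that-host x 2G. For the 60-slot job that works out to:

  mpi30-lb (allocation_rule 30):       30 slots/host -> 30 x 2G = 60G per host
  mpifill  (allocation_rule $fill_up): per-host total depends on free slots per host

If 60G exceeds a node's real memory (to which h_vmem is set), that difference alone could explain why only the mpi30-lb job dies.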
