Please check the attached archive in the original post. As mentioned, I will post the "subgms_export" for the latest GAMESS ASAP. - Reuti
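PS: To give a rough idea of what it will contain: the file just exports the variables which rungms would normally set, derived from JOB, CUSTOM_TMPDIR, BINARY_LOCATION and EXTERNAL_BASISSET as defined in the job scripts below. A hypothetical excerpt (the real file sets many more of the GAMESS file channels, and the extensions here are only illustrative):

====
# subgms_export (sketch) - to be sourced by the job script after it
# set JOB, CUSTOM_TMPDIR, BINARY_LOCATION and (optionally)
# EXTERNAL_BASISSET. Variable names follow rungms; the file
# extensions are illustrative only.
export ERICFMT=$BINARY_LOCATION/auxdata/ericfmt.dat
export EXTBAS=${EXTERNAL_BASISSET:-/dev/null}
export IRCDATA=$CUSTOM_TMPDIR/$JOB.irc
export PUNCH=$CUSTOM_TMPDIR/$JOB.dat
export SIMEN=$CUSTOM_TMPDIR/$JOB.simen
export SIMCOR=$CUSTOM_TMPDIR/$JOB.simcor
====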
On 20.01.2012, at 12:54, Semion Chernin wrote:

> Thanks so much. Can you please also send these scripts:
> startgamess.sh
> stopgamess.sh
>
> ----- Original Message -----
> From: Reuti <re...@staff.uni-marburg.de>
> Date: Friday, January 20, 2012 13:42
> Subject: Re: [gridengine users] How I can integrate GAMESS under SGE?
> To: Semion Chernin <s...@bgu.ac.il>
> Cc: Mazouzi <mazo...@gmail.com>, "users@gridengine.org" <users@gridengine.org>
>
> > So, the idea is to prepare a list of hosts in the start_proc_args
> > which can directly be used by ddikick.x.
> >
> > Please find attached both start/stop_proc_args scripts. They are
> > meant to be put in /usr/sge/cluster - otherwise please check all
> > the directories inside and adjust them. Everything which I don't
> > want to have destroyed by an update of SGE I put in /usr/sge/cluster
> > (like some prolog and epilog scripts). It is important that you
> > compiled GAMESS to use plain `rsh` for remote calls (i.e. left it
> > untouched), so that SGE's wrapper can catch it and route it to any
> > startup method you like. But it's also possible to put "export
> > DDI_RSH=rsh" into the job script in case you have something else
> > compiled in.
> >
> > ====
> > $ qconf -sp gamess
> > pe_name            gamess
> > slots              128
> > user_lists         NONE
> > xuser_lists        NONE
> > start_proc_args    /usr/sge/cluster/gamess/startgamess.sh -catch_rsh \
> >                    $pe_hostfile
> > stop_proc_args     /usr/sge/cluster/gamess/stopgamess.sh
> > allocation_rule    $round_robin
> > control_slaves     TRUE
> > job_is_first_task  TRUE
> > urgency_slots      min
> > accounting_summary FALSE
> > ====
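> >
> > In essence, the startgamess.sh works along these lines (just a rough
> > sketch modeled on SGE's stock $SGE_ROOT/mpi/startmpi.sh - the actual
> > script is in the attached archive):
> >
> > ====
> > #!/bin/sh
> > # startgamess.sh (sketch) - invoked by SGE as:
> > #   startgamess.sh -catch_rsh $pe_hostfile
> > pe_hostfile=$2
> >
> > # Write one "host:cpus=n" entry per granted host into
> > # $TMPDIR/machines, the node format ddikick.x accepts.
> > while read host nslots queue rest; do
> >     echo "$host:cpus=$nslots"
> > done < $pe_hostfile > $TMPDIR/machines
> >
> > # Let `rsh` resolve to SGE's wrapper (qrsh -inherit under the
> > # hood), as the stock startmpi.sh does for -catch_rsh.
> > ln -s $SGE_ROOT/mpi/rsh $TMPDIR/rsh
> > ====
> >
> > The machines file is then picked up by the parallel job script below.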
> >
> > Then you have a jobscript:
> >
> > ====
> > #!/bin/sh
> > #$ -N foobar
> > #$ -o /home/reuti/err/foobar.stdout_$JOB_ID -e /home/reuti/err/foobar.stderr_$JOB_ID
> > #$ -m ea
> > #$ -A gamess_serial
> > export LD_LIBRARY_PATH=/opt/chemsoft/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}
> > cd $TMPDIR
> > cp /home/reuti/foobar.in foobar.F05
> > ln -s /home/reuti/foobar.orb
> > JOB=foobar
> > CUSTOM_TMPDIR=/home/reuti/scr
> > BINARY_LOCATION=/opt/chemsoft/$ARC/GAMESS_OCT012010R3
> > EXTERNAL_BASISSET=/dev/null
> > . /home/soft/scripts/subgms_export
> > unset JOB
> > unset CUSTOM_TMPDIR
> > unset BINARY_LOCATION
> > unset EXTERNAL_BASISSET
> > rm -f $IRCDATA
> > rm -f $PUNCH
> > rm -f $SIMEN
> > rm -f $SIMCOR
> > HOSTFILE=`hostname`
> > /opt/chemsoft/$ARC/GAMESS_OCT012010R3/ddikick.x \
> >     /opt/chemsoft/$ARC/GAMESS_OCT012010R3/gamess.00.x foobar \
> >     -ddi 1 1 $HOSTFILE -scr $TMPDIR < /dev/null > /home/reuti/foobar.out
> > ====
> >
> > And for parallel:
> >
> > ====
> > #!/bin/sh
> > #$ -N foobar
> > #$ -o /home/reuti/err/foobar.stdout_$JOB_ID -e /home/reuti/err/foobar.stderr_$JOB_ID
> > #$ -m ea
> > #$ -A gamess_parallel
> > #$ -R y
> > #$ -pe gamess 4
> > export LD_LIBRARY_PATH=/opt/chemsoft/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}
> > JOB=foobar
> > CUSTOM_TMPDIR=/home/reuti/scr
> > BINARY_LOCATION=/opt/chemsoft/$ARC/GAMESS_OCT012010R3
> > cd $TMPDIR
> > cp /home/reuti/foobar.in foobar.F05
> > . /home/soft/scripts/subgms_export
> > unset JOB
> > unset CUSTOM_TMPDIR
> > unset BINARY_LOCATION
> > rm -f $IRCDATA
> > rm -f $PUNCH
> > rm -f $SIMEN
> > rm -f $SIMCOR
> > HOSTFILE=`cat $TMPDIR/machines`
> > /opt/chemsoft/$ARC/GAMESS_OCT012010R3/ddikick.x \
> >     /opt/chemsoft/$ARC/GAMESS_OCT012010R3/gamess.00.x foobar \
> >     -ddi $NHOSTS 4 $HOSTFILE -scr $TMPDIR < /dev/null > /home/reuti/foobar.out
> > ====
> >
> > The installation of GAMESS then needs only:
> >
> > reuti@foobar:/opt/chemsoft/lx24-amd64> ls GAMESS_OCT012010R3
> > auxdata  ddikick.x  gamess.00.x
> > reuti@foobar:/opt/chemsoft/lx24-amd64> ls GAMESS_OCT012010R3/auxdata/
> > BASES  MCP  QUANPOL  ericfmt.dat
> >
> > I'll prepare the "subgms_export" now; it will just set all the
> > variables which are normally set in rungms. I'll post it ASAP.
> >
> > -- Reuti
> >
> > PS: Both job scripts are in fact created by a wrapper for the users,
> > but I think for a first test it's better to start with a plain job
> > script to avoid an additional layer of trouble. I could send you the
> > wrapper later, once it's working in principle.
> >
> >
> > On 19.01.2012, at 20:33, Semion Chernin wrote:
> >
> > > Please send me qconf -sp gamess and the start & stop scripts.
> > >
> > > ----- Original Message -----
> > > From: Mazouzi <mazo...@gmail.com>
> > > Date: Thursday, January 19, 2012 17:32
> > > Subject: Re: [gridengine users] How I can integrate GAMESS under SGE?
> > > To: Reuti <re...@staff.uni-marburg.de>
> > > Cc: Semi <s...@bgu.ac.il>, "users@gridengine.org" <users@gridengine.org>
> > >
> > > > Hi,
> > > >
> > > > In our cluster, we use GAMESS with a special PE (gamess) using
> > > > allocation_rule 2.
> > > >
> > > > The user MUST request an even number of slots, i.e. 2*n.
> > > >
> > > > The command line is:
> > > >
> > > > gamess -n $NSLOTS my-job.inp >& my-job.out
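> > > >
> > > > The PE is defined along these lines (a sketch only - apart from
> > > > allocation_rule 2 the values shown are site-specific):
> > > >
> > > > ====
> > > > $ qconf -sp gamess
> > > > pe_name            gamess
> > > > slots              999
> > > > user_lists         NONE
> > > > xuser_lists        NONE
> > > > start_proc_args    /bin/true
> > > > stop_proc_args     /bin/true
> > > > allocation_rule    2
> > > > control_slaves     TRUE
> > > > job_is_first_task  TRUE
> > > > urgency_slots      min
> > > > ====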
> > > >
> > > > Hope this will help.
> > > >
> > > > On Thu, Jan 19, 2012 at 4:01 PM, Reuti <re...@staff.uni-marburg.de> wrote:
> > > >
> > > > > Hi,
> > > > >
> > > > > On 19.01.2012, at 14:22, Semi wrote:
> > > > >
> > > > > > How I can integrate GAMESS under SGE?
> > > > > >
> > > > > > I found some info about this in a file called gms, but without
> > > > > > the PE, the start & stop scripts, or an example of how to use
> > > > > > this wrapper.
> > > > > >
> > > > > > # SGE job submission:
> > > > > > #    A 'parallel environment' named 'ddi' was set up on ISU's cluster,
> > > > > > #    this SGE prolog file creates the SGE directory $TMPDIR on every node,
> > > > > > #    and this epilog script erases $TMPDIR, to be sure the scratch disk is
> > > > > > #    always cleaned up, and to remove dead semaphores.
> > > > > > #
> > > > > > #    SGE command 'qconf -sp ddi' shows the details of this environment,
> > > > > > #    including pathnames to prolog/epilog scripts. Also, 'qconf -spl'.
> > > > > > #    Other useful SGE commands: 'qconf -sc' shows config for resources.
> > > > > > #
> > > > > > #    Mirabile dictu! SGE allows you to pass args to a job script by
> > > > > > #    just placing them behind the script name. In all my living days,
> > > > > > #    I've never seen a batch program that permitted this. Glory be!
> > > > > > #
> > > > > > if ($SCHED == SGE) then
> > > > > >    qsub -cwd -o $LOGFILE -j yes -pe ddi $NNODES -N $JOBNAME $SGE_RESOURCES \
> > > > > >        ~/scr/$JOB.script $JOB $VERNO $NCPUS $PPN
> > > > > > endif
> > > > >
> > > > > I think this is something set up by your site already; I've never
> > > > > seen it before.
> > > > >
> > > > > The rungms they provide with GAMESS seems to target in the first
> > > > > place the clusters of the author, which is understandable. But I
> > > > > would really have liked them to provide a file which you only have
> > > > > to source to set all their environment variables, so that you can
> > > > > then just issue ddikick.x with the correct parameters and you are
> > > > > done.
> > > > >
> > > > > I could only send you my scripts to set it up from scratch. The
> > > > > export list of variables I would have to refresh, as right now I
> > > > > have it only for the October 01, 2010 R3 version.
> > > > >
> > > > > -- Reuti
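> > > > >
> > > > > PS: What I have in mind is that running GAMESS would then boil
> > > > > down to something like this (hypothetical sketch - the file name
> > > > > "gamess_env" and the variable $HOSTLIST are made up for
> > > > > illustration):
> > > > >
> > > > > ====
> > > > > #!/bin/sh
> > > > > JOB=foobar                  # input is $JOB.F05 in the scratch dir
> > > > > . /some/path/gamess_env     # exports PUNCH, IRCDATA, ERICFMT, ...
> > > > > ddikick.x gamess.00.x $JOB \
> > > > >     -ddi $NHOSTS $NSLOTS $HOSTLIST -scr $TMPDIR > $JOB.out
> > > > > ====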