gadmin 8851 4992 0 May14 ? 00:00:00 sge_shepherd-1434388 -bg
The actual process is also there.
On Thu, Jun 20, 2013 at 2:14 PM, Reuti re...@staff.uni-marburg.de wrote:
On 12.06.2013 at 06:19, Vamsi Krishna wrote:
h_rt    h_rt    TIME    <=    YES    NO    0:0:00
s_rt    s_rt    TIME    <=    YES    NO    0:0:00

defined in queue as

queue1:
s_rt    INFINITY
h_rt
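Since h_rt and s_rt are marked requestable (YES) in the complex above, a job can ask for its own wall-clock limit at submission time. A minimal sketch (the script name is a placeholder):

```shell
# Request a 30-minute hard run-time limit for this job;
# once exceeded, SGE kills the job.
qsub -l h_rt=0:30:0 job.sh
```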
Hi,
I have a function inside a bash script:

#!/bin/bash
test(){
    echo "$HOSTNAME"
}
qrsh -V -now no -pty y test

Can I submit a function this way, using qrsh interactively?
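qrsh starts a fresh shell on the execution host, so a function defined only in the local script is not visible there. A sketch of one workaround, assuming plain bash on the remote side: serialize the function with declare -f and inline it into the remote command (the function is renamed from "test" to avoid shadowing the shell builtin).

```shell
#!/bin/bash
myfunc() {
    echo "$HOSTNAME"
}
# Inline the function definition into the remote bash invocation,
# so the remote shell defines it before calling it:
qrsh -now no -pty y bash -c "$(declare -f myfunc); myfunc"
```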
Regards
PVK
___
users mailing list
users@gridengine.org
Regards,
Marco
On 05/27/2013 06:32 AM, Vamsi Krishna wrote:
Yes, it is there and it is accepting; it just took some time to read the
settings.
qconf -sc | grep h_vmem
h_vmem h_vmem
Is your h_vmem=64G set? Do you get it via a load sensor, or is it set up
for each execution host?
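If h_vmem is meant to come from a per-host declaration rather than a load sensor, it has to appear in each execution host's complex_values. A sketch, with the node name as a placeholder:

```shell
# Declare 64G of h_vmem on execution host nodeA:
qconf -mattr exechost complex_values h_vmem=64G nodeA
# Verify the setting:
qconf -se nodeA | grep complex_values
```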
libsepol.so.1 is not a gridengine library. There are many pages available
regarding this issue. It seems to be RH related.
Regards,
Marco
On 05/27/2013 06:32 AM, Vamsi Krishna wrote:
yes
Hi,
I have three queues: queue1.q, queue2.q and queue3.q. nodeA is part of
queue3.q. h_vmem is configured to keep users from overcommitting memory,
using the following settings, but the job is never submitted in either
interactive or batch mode.
qconf -sc | grep h_vmem
h_vmem
qconf -sq queue3.q | grep h_vmem
h_vmem                INFINITY
On Mon, May 27, 2013 at 12:13 AM, Vamsi Krishna vkpolise...@gmail.com wrote:
Hi,
i have three queues queue1.q, queue2.q and queue3.q. I have nodeA part of
queue3.q. h_vmem is configured to restrict user not to overcomit the job
in your qrsh
command?
Regards
Marco
Vamsi Krishna vkpolise...@gmail.com wrote:
qconf -sq queue3.q | grep h_vmem
h_vmem                INFINITY
On Mon, May 27, 2013 at 12:13 AM, Vamsi Krishna vkpolise...@gmail.com wrote:
Hi,
i have three queues queue1.q, queue2.q and queue3.q. I have
Yes, it was configured. Thanks for the clarification.
-pvk
On Tue, Apr 16, 2013 at 8:50 PM, Reuti re...@staff.uni-marburg.de wrote:
On 16.04.2013 at 17:04, Vamsi Krishna wrote:
How do I kill jobs that have been running for more than one hour? I have
configured h_rt 01:00:00 in the queue.
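With h_rt set in the queue configuration, SGE enforces the limit itself; no manual killing is needed. A sketch of setting it non-interactively (the queue name all.q is an assumption):

```shell
# Set a one-hour hard wall-clock limit on the queue; running jobs
# that exceed it are killed by the execution daemon.
qconf -rattr queue h_rt 01:00:00 all.q
# Verify:
qconf -sq all.q | grep h_rt
```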
Hi,
is there a way to find the memory used by a running job using grid
commands?
Regards
VK
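For a running job, the "usage" line of qstat -j reports the current rusage; after the job ends, qacct has the peak. A sketch with a placeholder job id:

```shell
# While the job runs: cpu, mem, io, vmem and maxvmem so far
qstat -j 1234 | grep usage
# After it finished: peak virtual memory from accounting
qacct -j 1234 | grep maxvmem
```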
$ qstat -j job_id | sed -n -e "/^context: */s///p" | tr "," "\n" | grep SGE_STDOUT_PATH
SGE_STDOUT_PATH=/home/reuti/foobar_5849
(We put even more entries in the job context, hence the `grep` here.)
-- Reuti
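One way such a context entry could be populated (an assumption about the setup, not necessarily what Reuti's jobs do) is for the job script itself to record its output path, since SGE exports SGE_STDOUT_PATH and JOB_ID inside the job:

```shell
# Inside the job script: store the output path in the job context
# so "qstat -j" can report it later.
qalter -ac SGE_STDOUT_PATH="$SGE_STDOUT_PATH" "$JOB_ID"
```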
Regards
On Mon, Mar 18, 2013 at 8:22 AM, Vamsi
hi
I would like to stream the output log file, similar to tail -f
output.file. Is there a way to automatically open the -o 'output.file' in
the same terminal or in a new terminal when the job is submitted to the grid?
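A minimal sketch of one way to do this (script and file names are placeholders): capture the job id with qsub -terse, wait for the output file to appear, then follow it.

```shell
#!/bin/bash
# Submit, then stream the job's stdout as it is written.
jobid=$(qsub -terse -o output.file -j y job.sh)
echo "submitted job $jobid"
# Wait until the execution host creates the output file:
until [ -f output.file ]; do sleep 1; done
tail -f output.file
```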
1347219 10.35000 batch_test  user1  r   03/15/2013 01:21:33  batch.q@node1  12
1347220  1.6     batch_test  user1  qw  03/15/2013 01:20:36                 12
On Fri, Mar 8, 2013 at 9:29 PM, Vamsi Krishna vkpolise...@gmail.com
Thanks Reuti, Progressing well.
- V
On Wed, Mar 6, 2013 at 2:10 AM, Christopher Heiny che...@synaptics.com wrote:
On 03/05/2013 10:35 AM, Reuti wrote:
Hi,
On 05.03.2013 at 18:41, Vamsi Krishna wrote:
Hi,
is there any way to run make -j <slots> on a node when we submit to the
batch environment, so as to use all the slots granted with -pe on that node?
Regards
PVK
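SGE exports the number of granted slots as $NSLOTS inside the job, so the job script can pass it straight to make. A sketch, assuming a parallel environment named "smp" exists:

```shell
#!/bin/bash
#$ -pe smp 8
#$ -cwd
# NSLOTS holds the slot count granted by -pe:
make -j "$NSLOTS"
```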
hi
is there any grid engine command to find out the actual memory utilization
of a submitted job?
Regards
PVK
Thanks Jesse; yes, it always wraps around at 4G. Seems like a bug.
On Wed, Dec 12, 2012 at 1:08 AM, Jesse Becker becker...@mail.nih.gov wrote:
On Fri, Dec 07, 2012 at 09:38:26AM +0530, Vamsi Krishna wrote:
hi
is there any grid engine command to find out the actual memory utilization
of job submitted
was also killed. Does the qmaster need a restart?
On Thu, Sep 27, 2012 at 9:39 PM, Reuti re...@staff.uni-marburg.de wrote:
On 26.09.2012 at 13:48, Vamsi Krishna wrote:
*Exit code 140:* The job exceeded the wall clock time limit; h_rt is
set to infinity.
submit with -notify by default
in default/spool/`hostname`/messages
starting up SGE 6.2u5 (lx24-amd64)
Regards
PVK
On Thu, Sep 27, 2012 at 11:50 PM, Reuti re...@staff.uni-marburg.de wrote:
On 27.09.2012 at 19:41, Vamsi Krishna wrote:
those were inputs for debugging.
job 1058200.1 failed on host assumedly after job because: job
Hi,
some of the batch jobs are killed, and qacct -j of the job id shows:

failed       100 : assumedly after job
exit_status  140

What could be the reason?
Regards
PVK
--PVK
On Wed, Sep 26, 2012 at 12:46 PM, Reuti re...@staff.uni-marburg.de wrote:
On 26.09.2012 at 08:53, Vamsi Krishna wrote:
some of the batch jobs are killed and qacct -j
for event 779087.
EVENT DEL JOB 552060.1 failed
On Fri, Feb 3, 2012 at 2:46 AM, Reuti re...@staff.uni-marburg.de wrote:
On 02.02.2012 at 17:27, vamsi krishna wrote:
qhost -F mem_token
Host Resource(s): hc:mem_token=45.209G
qhost -F mem_free
node:lx24-amd64 8
, Reuti re...@staff.uni-marburg.de wrote:
On 02.02.2012 at 17:04, vamsi krishna wrote:
Case1: qconf -sc
mem_free    mf          MEMORY    <=    YES    NO    0    0
mem_token   mem_token   MEMORY    <=    YES
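With mem_token defined as a requestable consumable as above, jobs draw from the host's pool at submission time. A sketch with placeholder values:

```shell
# Reserve 4G of the mem_token consumable for this job:
qsub -l mem_token=4G job.sh
# Watch the remaining pool on each host:
qhost -F mem_token
```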
Hi
is there any other parameter for load_thresholds?
my configuration:

load_thresholds        np_load_avg=1.75
suspend_thresholds     NONE

i have the following sched configuration:

queue_sort_method      seqno
job_load_adjustments   np_load_avg=0.50
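load_thresholds is not limited to np_load_avg; any load value or consumable can appear there, comma-separated. A sketch adding a free-memory threshold (the queue name all.q and the 1G figure are arbitrary examples):

```shell
# Stop scheduling to the queue when free memory on a host drops below 1G:
qconf -rattr queue load_thresholds np_load_avg=1.75,mem_free=1G all.q
```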
Hi Experts,
I have an interactive queue that submits jobs to only a few machines, even
though there are available slots on other machines. How can I distribute
the jobs depending on memory and free slots?
i was having the following:

algorithm              default
schedule_interval
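One common way to spread jobs instead of stacking them on a few hosts (a sketch of the scheduler settings involved, not the only approach): sort queues by load rather than sequence number, and keep a load adjustment so freshly started jobs count against their host immediately.

```shell
# Edit via "qconf -msconf"; the relevant lines would read:
#   queue_sort_method       load
#   load_formula            np_load_avg
#   job_load_adjustments    np_load_avg=0.50
# Inspect the current values:
qconf -ssconf | grep -E 'queue_sort_method|load_formula|job_load_adjustments'
```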