I ran again with the time command in front of g09.
The console output is
Wed Mar 14 09:15:58 EDT 2018
real    32m14.136s
user    53m56.946s
sys     2m17.855s
Wed Mar 14 09:48:12 EDT 2018
So the wall-clock time is roughly 32 minutes.
g09 says
Job cpu time: 0 days 0 hours 47 minutes 56.0
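A quick sanity check on those two numbers (just arithmetic; the figures below are copied from the output above) is to divide CPU time by wall time — the ratio approximates how many cores the job kept busy:

```shell
# CPU time / wall time ~= number of cores kept busy.
# Figures copied from the output above.
wall_s=$(awk 'BEGIN { print 32*60 + 14.136 }')   # real    32m14.136s
cpu_s=$(awk 'BEGIN { print 47*60 + 56.0 }')      # g09: 47 minutes 56.0
awk -v c="$cpu_s" -v w="$wall_s" 'BEGIN { printf "~%.1f cores busy\n", c/w }'
```

A ratio around 1.5 would fit a 2-core run that is not fully CPU-bound.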
Hi fellow slurm users,
Today I noticed that scontrol returns 0 when it denies a drain request
because no reason was supplied.
It seems to me that this is wrong behavior; it should return 1 or some
other error code so that scripts will know that the node was not actually
drained.
Thanks,
Eli
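A minimal sketch of the scripting problem Eli describes. `fake_scontrol` is a stand-in (so the example runs without Slurm) that mimics the reported behaviour: it prints an error but still exits 0, so the exit status cannot be trusted and a script has to fall back on checking stderr:

```shell
# fake_scontrol is a stand-in mimicking the reported bug: it refuses
# the drain request but still exits 0. The error text is made up.
fake_scontrol() {
    echo "scontrol: error: a reason is required to drain a node" >&2
    return 0
}

# Workaround: since the exit status lies, inspect stderr instead.
err=$(fake_scontrol update NodeName=node01 State=DRAIN 2>&1 >/dev/null)
if [ -n "$err" ]; then
    echo "drain request refused: $err"
else
    echo "node drained"
fi
```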
Gaussian reports CPU time; sacct reports wall time here. Was Gaussian set up to
run with 2 CPU cores?
Best,
Shenglong
> On Mar 14, 2018, at 8:04 AM, Mahmood Naderan wrote:
>
> Hi,
> I see that slurm reports a 35 min duration for a completed job (g09) like this
>
>
Hi,
I see that slurm reports a 35 min duration for a completed job (g09) like this
[mahmood@rocks7 ~]$ sacct -j 30 --format=start,end,elapsed,time
              Start                 End    Elapsed  Timelimit
------------------- ------------------- ---------- ----------
2018-03-14T06:07:17
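When scripting around this, a small helper like the one below (a sketch; Elapsed uses sacct's `HH:MM:SS` or `D-HH:MM:SS` form) converts the Elapsed column to seconds so it can be compared directly with Gaussian's CPU time:

```shell
# Convert sacct Elapsed (HH:MM:SS or D-HH:MM:SS) to seconds.
elapsed_to_s() {
    echo "$1" | awk -F'[-:]' '{
        if (NF == 4) print (($1*24 + $2)*60 + $3)*60 + $4
        else         print ($1*60 + $2)*60 + $3
    }'
}

elapsed_to_s 00:35:12      # a roughly 35-minute job
elapsed_to_s 1-02:00:00    # one day, two hours
```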
On Wednesday, 14 March 2018 9:14:45 PM AEDT Mahmood Naderan wrote:
> Thank you very much.
My pleasure, so glad it helped!
--
Chris Samuel : http://www.csamuel.org/ : Melbourne, VIC
On Wednesday, 7 March 2018 12:53:30 AM AEDT Ron Golberg wrote:
> I have many jobs divided into job arrays, which makes me cross Slurm's
> 67 million job ID limit.
This is a consequence of the addition of federation support; you can see why
here:
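For context (my understanding, not stated in the thread): federation reserves the upper bits of the 32-bit job ID for identifying the cluster, leaving 26 bits for the local job ID, which is where the ~67 million figure comes from:

```shell
# 26 bits remain for the local job ID once federation takes the rest:
awk 'BEGIN { printf "%d\n", 2^26 - 1 }'
```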
Jessica Nettelblad writes:
> Maybe look into if scontrol top is of any help. It was introduced in
> 16.05 and changed in 17.11 to take a job list. Note that NEWS and the
> man page have slightly different information on how to enable it, maybe
> the man page is only
>If you put this in your script rather than the g09 command what does it say?
>
>ulimit -a
That was a very good hint. I first ran ssh to compute-0-1 and saw
"unlimited" value for "max memory size" and "virtual memory". Then I
submitted the job with --mem=2000M and put the command in the slurm
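For reference, the check as it would sit inside the batch script rather than on a login node (a sketch; the job name and the commented-out g09 line are examples, not from the thread):

```shell
#!/bin/bash
#SBATCH --mem=2000M
#SBATCH --job-name=g09-test   # example name, not from the thread

ulimit -a        # print the limits the job step actually runs under
# g09 ...        # the real workload would follow here
```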
On Wednesday, 14 March 2018 7:37:19 PM AEDT Mahmood Naderan wrote:
> I tried with --mem=2000M in the slurm script and put strace command in front
> of g09. Please see some last lines
Gaussian is trying to allocate more than 2GB of RAM in that case.
Unfortunately your strace doesn't show
You can use the "nice" features to "rearrange" jobs.
/M
On 2018-03-14 10:07, Loris Bennett wrote:
Hi,
I seem to remember reading something about users being able to change
the priorities within a group of their own jobs. So if a user suddenly
decided that a job submitted later that a
Hi,
I seem to remember reading something about users being able to change
the priorities within a group of their own jobs. So if a user suddenly
decided that a job submitted later was more urgent than a similar job
submitted earlier, he/she would be able to switch the priorities.
Is it just wishful thinking on
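The nice-based switch /M mentions can be done by the user on their own jobs (a sketch; `run` is an echo stand-in so the example works without Slurm, and the job ID is made up — note that users may only *increase* Nice, i.e. lower priority, without admin rights):

```shell
# 'run' just echoes the command so this works without a Slurm install;
# on a real cluster you would call scontrol directly.
run() { echo "would run: $*"; }

# Lower the priority of the earlier job so the later one overtakes it:
run scontrol update JobId=1001 Nice=100
```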