Title: Re: [slurm-dev] Re: Exceeded job memory limit problem

sbatch --mem=18000 -D $(pwd) submit_job.sh
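If submit_job.sh doesn't already request memory itself, the same limit can
be carried in the script; a minimal sketch (the job name and program below
are placeholders, not from the original thread):

    #!/bin/bash
    #SBATCH --mem=18000        # memory per node, in MB
    #SBATCH -D .               # working directory for the job
    #SBATCH -J svm_predict     # hypothetical job name
    srun ./predict_svm         # hypothetical SVM prediction step

Options given on the sbatch command line override the #SBATCH directives
in the script.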
On 06/09/17 17:38, Sema Atasever wrote:
> I tried the line of code you recommended, but unfortunately it still
> generates an error.

We've seen issues where using:

JobAcctGatherType=jobacct_gather/linux

gathers incorrect values for jobs (in our experience MPI ones). We
constrain memory with cgroups instead.
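A cgroup-based setup might look roughly like this (a sketch only; the
exact plugin mix and cgroup options depend on the site):

    # slurm.conf
    JobAcctGatherType=jobacct_gather/cgroup
    TaskPlugin=task/cgroup

    # cgroup.conf
    ConstrainRAMSpace=yes
    ConstrainSwapSpace=yes

With task/cgroup constraining RAM, the kernel enforces the limit directly
rather than relying on polled accounting values.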
Do you have enough memory on your nodes? What's the output of:

sinfo -N -O nodelist,memory:20

You might not have enough memory on the nodes for the dataset.
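For example (node names and sizes here are illustrative, not from the
original thread):

    $ sinfo -N -O nodelist,memory:20
    NODELIST            MEMORY
    node01              64000
    node02              64000

MEMORY is the usable memory per node in MB; a job whose --mem request
exceeds it can never be scheduled, and one whose real usage exceeds its
request gets killed by slurmstepd.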
Sema,

You could temporarily disable the enforcement of memory limits to see what
the daemons are reporting the job(s) as using. Simply update the parameter
"MemLimitEnforce" in your slurm.conf. I'd only recommend doing this for a
very short period though; you definitely don't want jobs left free to
exhaust node memory for long.
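Concretely, that would be something like the following (a sketch; revert
it as soon as the measurements are done):

    # slurm.conf -- stop killing jobs that exceed their memory request
    MemLimitEnforce=no

    # then push the change to the daemons without a restart:
    scontrol reconfigure

While enforcement is off, compare the peak usage that sacct or sstat
report (e.g. the MaxRSS field) against what the job requested.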
On Wed, 2017-08-23 at 01:26 -0600, Sema Atasever wrote:
> Computing predictions by SVM...
> slurmstepd: Job 3469 exceeded memory limit (4235584 > 2048000), being
> killed
> slurmstepd: Exceeded job memory limit
>
> How can I fix this problem?

Error messages often give useful information: the figures are in KB, so
the job's measured usage (4235584 KB, roughly 4 GB) was about double its
limit (2048000 KB, i.e. 2000 MB). Either raise the job's memory request or
reduce what the SVM step actually uses.
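For instance, rounding up from the observed peak (the value here is
illustrative):

    sbatch --mem=4500 -D $(pwd) submit_job.sh

--mem is in MB by default, so 4500 comfortably covers the ~4136 MB
(4235584 KB) the job was seen using.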