Hi,
We have been using the burst buffer plugin to build our own staging layer
at LANL, and we are wondering if there will be any big changes to the burst
buffer plugin in the future?
Thanks
Lei
I see - yes, to clarify, we are specifying memory for each of these jobs,
and there is enough memory on the nodes for both types of jobs to be
running simultaneously.
On Fri, Nov 1, 2019 at 1:59 PM Brian Andrus wrote:
> I ask if you are specifying it, because if not, slurm will assume a job
>
I don’t know from experience if Slurm behaves unexpectedly with “unlimited”
versus some large number, like 30 days; but barring something unexpected, it
seems like time limits shouldn’t be the problem.
John
From: slurm-users on behalf of c b
Reply-To: Slurm User Community List
Date:
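For reference, a long but finite limit can be written in days-hours:minutes:seconds
form, e.g. roughly 30 days instead of "unlimited" (the script name is a placeholder):

  sbatch -t 30-00:00:00 big_job.sh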
On Friday, 01 November 2019, at 10:41:26 (-0700),
Brian Andrus wrote:
> That's pretty much how I did it too.
>
> But...
>
> When you try to run slurmd, it chokes on the missing symbols issue.
I don't yet have a full RHEL8 cluster to test on, and this isn't
really my area of expertise, but have
I ask if you are specifying it because, if not, Slurm will assume a job
will use all the memory available.
So without specifying, your big job gets allocated 100% of the memory, and
nothing else can be sent to that node. Same if you don't specify for the
little jobs: each would want 100%, but if
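If jobs routinely omit a memory request, one common approach (a hedged sketch,
not something prescribed in this thread) is to set a modest cluster-wide default
in slurm.conf so that an unspecified job no longer claims the whole node:

  DefMemPerCPU=2048    # default MB of memory per allocated CPU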
Yes, there is enough memory for each of these jobs, and there is enough
memory to run the high-resource and low-resource jobs at the same time.
On Fri, Nov 1, 2019 at 1:37 PM Brian Andrus wrote:
> Are you specifying memory for each of the jobs?
>
> Can't run a small job if there isn't enough
On Friday, 01 November 2019, at 11:37:37 (-0600),
Michael Jennings wrote:
> I build with Mezzanine, but the equivalent would roughly be this:
>
> rpmbuild -ts slurm-19.05.3-2.tar.bz2
> cat the_above_diff.patch | (cd ~/rpmbuild/SPECS ; patch -p0)
> rpmbuild --with x11 --with lua --with pmix
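For context, a hedged reading of those steps; the last command is truncated in
the message, so the spec-file argument below is an assumption:

  rpmbuild -ts slurm-19.05.3-2.tar.bz2                       # build the source RPM from the release tarball
  cd ~/rpmbuild/SPECS && patch -p0 < the_above_diff.patch    # patch the spec (assumes slurm.spec is in SPECS)
  rpmbuild -ba --with x11 --with lua --with pmix slurm.spec  # rebuild with the optional features enabled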
On Tuesday, 29 October 2019, at 15:11:38 (+),
Christopher Benjamin Coffey wrote:
> Brian, I've actually just started attempting to build slurm 19 on
> centos 8 yesterday. As you say, there are packages missing now from
> repos like:
They're not missing; they're just harder to get at now, for
Are you specifying memory for each of the jobs?
Can't run a small job if there isn't enough memory available for it.
Brian Andrus
On 11/1/2019 7:42 AM, c b wrote:
I have:
SelectType=select/cons_res
SelectTypeParameters=CR_CPU_Memory
On Fri, Nov 1, 2019 at 10:39 AM Mark Hahn
I tried setting a 5-minute time limit on some low-resource jobs, and one
hour on the high-resource jobs, but my 5-minute jobs are still waiting behind
the hour-long jobs.
Can you suggest some combination of time limits that would work here?
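One way to see what is actually holding the short jobs back is the Reason column
for pending jobs (a standard squeue invocation, nothing site-specific):

  squeue --states=PENDING --format="%.18i %.9P %.8u %.12l %R"

Reasons such as Resources or Priority point at scheduling/backfill rather than at
the memory requests themselves.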
On Fri, Nov 1, 2019 at 11:08 AM c b wrote:
> On my low
On my low-resource jobs I'm setting the time to 1 hour, and on my large
ones I'm setting time=unlimited.
Is the unlimited part the problem? I have that setting because in my
cluster there are some machines that come in and out during the day via
reservations, and I want to keep these larger jobs
Are you setting realistic job run times (sbatch -t)?
Slurm won't backfill low-priority jobs (with low resource requirements) in
front of a high-priority job (blocked waiting on high resource requirements)
if it thinks the low-priority jobs will delay the eventual start of the
high-priority job.
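A quick sanity check (generic commands, not specific to this site) is to confirm
the backfill scheduler is active and that the short jobs carry realistic limits
(small_job.sh is a placeholder):

  scontrol show config | grep -E 'SchedulerType|SchedulerParameters'
  sbatch -t 5 small_job.sh    # 5-minute limit, so backfill can slot it in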
I have:
SelectType=select/cons_res
SelectTypeParameters=CR_CPU_Memory
On Fri, Nov 1, 2019 at 10:39 AM Mark Hahn wrote:
> > In theory, these small jobs could slip in and run alongside the large
> jobs,
>
> what are your SelectType and SelectTypeParameters settings?
> ExclusiveUser=YES on
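For reference, a minimal slurm.conf sketch that allows jobs to share a node under
cons_res (node and partition names here are hypothetical):

  SelectType=select/cons_res
  SelectTypeParameters=CR_CPU_Memory
  NodeName=node[01-10] CPUs=8 RealMemory=64000
  PartitionName=batch Nodes=node[01-10] Default=YES MaxTime=INFINITE State=UP ExclusiveUser=NO OverSubscribe=NO

Settings such as ExclusiveUser=YES or OverSubscribe=EXCLUSIVE on the partition
would restrict how nodes are shared.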
Hi,
Apologies for the weird subject line... I don't know how else to describe
what I'm seeing.
Suppose my cluster has machines with 8 cores each. I have many large,
high-priority jobs that each require 6 cores, so each machine in my cluster
runs one of these jobs at a time. However, I
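Concretely, the scenario described above would correspond to submissions along
these lines (script names, memory sizes, and time limits are placeholders):

  sbatch -c 6 --mem=40G -t 60 big_job.sh      # uses 6 of the 8 cores
  sbatch -c 2 --mem=4G  -t 5  small_job.sh    # should fit in the remaining 2 cores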