srun --cpu_bind=cores
On Wed, Feb 8, 2017 at 1:08 PM, Brendan Moloney wrote:
> Hi,
>
> I want to allocate at the level of logical cores (each serial job gets one
> thread on a hyperthreading system), which seems to be achievable only by
> not setting threads_per_core
(clumsy fingers)
Not sure if I understand your question correctly, but maybe:
srun --cpu_bind=threads
On Wed, Feb 8, 2017 at 4:02 PM, andrealphus <andrealp...@gmail.com> wrote:
> srun --cpu_bind=cores
>
> On Wed, Feb 8, 2017 at 1:08 PM, Brendan Moloney <moloney.bren...@gmail.com
Hi all,
Long time Torque user, first time SLURM user. I'm running version
15.08 from APT on Ubuntu Xenial (on an 18-core Xeon E5-2697 v4).
I'm trying to figure out the proper slurm.conf configuration, and
script parameters to run a job array on a single node/server
workstation, with more
spoke too soon, so for posterity
need to set, in the conf:
SelectType=select/cons_res
SelectTypeParameters=CR_CPU
and in the script:
#SBATCH --threads-per-core=1
and DefMemPerCPU did not matter...
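For posterity as well, a sketch of how those pieces might fit together in an array-job submission script (the job name, array range, and task command are placeholders, not from this thread):

```shell
#!/bin/bash
#SBATCH --job-name=serial-array   # placeholder name
#SBATCH --array=1-36              # placeholder range
#SBATCH --ntasks=1                # one CPU per serial job
#SBATCH --threads-per-core=1      # the script setting described above

# placeholder command; SLURM_ARRAY_TASK_ID selects this task's input
./my_serial_task "$SLURM_ARRAY_TASK_ID"
```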
On Tue, Sep 6, 2016 at 3:08 PM, andrealphus <andrealp...@gmail.com> wrote:
>
> SelectTypeParameters=CR_CPU
>
> then you should be able to get to all 36.
>
> cheers
> L.
>
> --
> The most dangerous phrase in the language is, "We've always done it this
> way."
>
> - Grace Hopper
>
> On 7 September 2016 at 10:22, andrealphus <andrealp...@g
>
> On 7 September 2016 at 10:39, andrealphus <andrealp...@gmail.com> wrote:
>>
>>
>> Thanks Lachlan, took threads-per-core out and same behavior, still
>> limited to 18.
>>
>> On Tu
one more follow up
This seems to be limited to the number of cores. Any way to change it so
that I can run up to the thread limit (18x2) concurrently?
Thanks!
On Tue, Sep 6, 2016 at 3:21 PM, andrealphus <andrealp...@gmail.com> wrote:
>
> spoke too soon, so for posterity
>
Thanks Christopher!
On Wed, Sep 14, 2016 at 4:30 PM, Christopher Samuel
<sam...@unimelb.edu.au> wrote:
>
> On 15/09/16 05:20, andrealphus wrote:
>
>> On a side note, any idea if there is a parameter to not
>> have it use a particular cpu? This is a single node workstation
p.s. same issue on v16
On Wed, Sep 7, 2016 at 9:57 AM, andrealphus <andrealp...@gmail.com> wrote:
>
> p.s. it's listing 36 processors with sinfo, and that they're all being
> used, but it's only running 18 jobs. So it looks like while it can see
> the 36 "processors" its
Is there an environment variable available in sbatch that holds the
CPU(s) the current job is being run on? (Not the number of CPUs, but a
CPU identifier.)
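As far as I know there is no dedicated Slurm variable for the CPU IDs themselves (variables like SLURM_JOB_CPUS_PER_NODE are counts), but on Linux the kernel exposes the affinity mask of the process, which reflects whatever cpu binding was applied. A sketch, run from inside the job:

```shell
# Print the logical CPU IDs this process is allowed to run on,
# as seen by the kernel (reflects Slurm's cpu_bind, if any).
grep Cpus_allowed_list /proc/self/status
```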
On Mon, Sep 12, 2016 at 9:46 AM, alex straza <strza123...@gmail.com> wrote:
> Thanks Ashton, your solution worked.
>
> Uwe - don't know if your solution would have worked since I tried this
> first, but thanks in any case!
>
> On Mon, Sep 12, 2016 at 12:12 PM, andrealphus <a
Hi Alex! Please try this solution first, as I just went through this
exact same issue over the past two weeks, and believe setting anything
related to cores/sockets/threads will not work.
Leave:
SelectType=select/cons_res
SelectTypeParameters=CR_CPU
But in the node configuration you need to
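One plausible reading of that node-configuration advice (a sketch, not the author's exact lines; the hostname and CPU count assume the 18-core/36-thread machine from this thread) is to state only the CPU count and skip the socket/core/thread topology:

```
SelectType=select/cons_res
SelectTypeParameters=CR_CPU
NodeName=localhost CPUs=36 State=UNKNOWN
```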
Wait, out of curiosity:
"Sockets=2 CoresPerSocket=10 ThreadsPerCore=2
CPUs are set to 40 and SelectTypeParameters=CR_CPU"
this is actually allowing you to run 40 processes? When I tried it, it
showed 40 available cpus, but would only allocate at the core level,
not thread level, and hence was a
, andrealphus <andrealp...@gmail.com> wrote:
>
> Ah, I'll give that a try. Thanks Lachlan, feel better!
>
> On Tue, Sep 6, 2016 at 6:49 PM, Lachlan Musicman <data...@gmail.com> wrote:
>> No, sorry, I meant that your config file line needs to change:
>>
>>
sinfo -o %C
CPUS(A/I/O/T)
36/0/0/36
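The %C field packs allocated/idle/other/total CPU counts into one token; a small sketch of splitting it in a script. The value is hard-coded from the output above so the snippet runs without a Slurm install; in a real script it would come from `sinfo -h -o %C`:

```shell
# A/I/O/T = allocated / idle / other / total CPUs
cpus="36/0/0/36"        # hard-coded sample from the output above
IFS=/ ; set -- $cpus    # split the token on "/"
alloc=$1; idle=$2; other=$3; total=$4
echo "allocated=$alloc idle=$idle total=$total"
# → allocated=36 idle=0 total=36
```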
On Wed, Sep 7, 2016 at 9:41 AM, andrealphus <andrealp...@gmail.com> wrote:
>
> I tried changing the CPU flag in the compute node section of the conf
> file to 36, but it didn't make a difference, still limited to 18. Also
> tried r
did you install munge?
On Mon, Oct 31, 2016 at 7:11 AM, Peixin Qiao wrote:
> Hi Lachlan,
>
> My slurm.conf is as follows:
>
> # slurm.conf file generated by configurator easy.html.
> # Put this file on all nodes of your cluster.
> # See the slurm.conf man page for more
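(Re the munge question above: a quick way to confirm munge is installed and its daemon is healthy, using the standard munge/unmunge tools that ship with it:)

```
munge -n | unmunge          # round-trip a credential; should report Success
systemctl status munge      # the daemon must be running on every node
```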
and of course right after sending a message, I found it
MaxArraySize
On Fri, Oct 28, 2016 at 5:27 PM, andrealphus <andrealp...@gmail.com> wrote:
>
> I must be missing a setting somewhere, but when I try to submit a job
> array with several thousand jobs in it, it fails with;
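For posterity, MaxArraySize is a slurm.conf parameter; job array indices must stay below it (the default is 1001, so indices top out at 1000). A sketch of checking and raising it — the new value here is an arbitrary example:

```
# check the current limit:
#   scontrol show config | grep -i MaxArraySize
# then in slurm.conf, e.g.:
MaxArraySize=10001
```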
Looks like over the weekend our building lost power, and I'm
assuming my backup didn't keep my single node workstation up and
running very long.
When I came in this morning, I saw that the machine had restarted and
that slurm must have restarted the failed jobs (?) (slurm/munge is set
up to