Alejandro,

Thanks for that description.

Right now we have two partitions, but the user only submitted to one 
(lowmem: 32 nodes with 128 GB RAM each; highmem: 8 nodes with 256 GB 
RAM each). She submitted 18,500 jobs, and it got to around 16,000 
before it started throwing the error. We found an entry in slurm.conf 
that sets max jobs = 5000, but it was commented out. We had assumed 
that meant "unlimited"; we now believe that with the value commented 
out, the system default of 10,000 applies.
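
If that's right, we'll probably just set it explicitly (MaxJobCount, 
presumably) rather than leave it commented out, with something like 
this in slurm.conf (the 20000 is only a guess at a sane ceiling for 
our load, not a recommendation):

    MaxJobCount=20000

My understanding is that this one only takes effect on a slurmctld 
restart rather than an "scontrol reconfigure", but correct me if 
that's wrong.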

Which version of Slurm are you running that handles mass job 
submission better? We're really anxious to see what 2.6 provides once 
it's out of RC so we can arrange array jobs and such.
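
From what we've read about 2.6, that would turn her 18,500 sbatch 
calls into a single array submission, something like the following 
(wrapper.sh is just a stand-in for her job script, which would read 
$SLURM_ARRAY_TASK_ID to pick its input; we'd presumably also need to 
raise MaxArraySize to allow a range that large):

    sbatch --array=1-18500 --output=job_%A_%a.out wrapper.sh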

AC

On 06/13/2013 04:07 AM, Alejandro Lucero Palau wrote:
> Slurmctld is multithreaded, but that does not mean it can handle any load.
>
> Each time someone connects to slurmctld (sbatch, srun, squeue, sinfo,
> ...) a new thread is created. There are the main threads as well, which
> live for the whole slurmctld execution. And there are the agent
> threads which slurmctld uses to communicate with the nodes.
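>
> (For a quick OS-level view of the total, assuming a single slurmctld
> process on the controller, something like this works:
>
>     ps -o nlwp= -p $(pidof slurmctld)
>
> where nlwp is just the number of threads in the process.)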
>
> Under a heavy load you can see how many threads are active with sdiag:
>
> Server thread count: 14
> Agent queue size:    10
>
> As you can see, information about the agents is also given. Sdiag can
> also tell you whether the scheduling itself is the problem:
>
> Main schedule statistics (microseconds):
>      Last cycle:   78973
>      Max cycle:    1801057
>      Total cycles: 1526
>
> If you see a Max cycle larger than a couple of seconds, you have a
> problem under heavy load, since the main scheduler holds the job queue
> lock while it runs.
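>
> To watch that while the jobs are coming in, something simple like this
> is enough (resetting the counters first so Max cycle reflects the
> current load; the reset needs SlurmUser or root privileges):
>
>     sdiag --reset
>     watch -n 30 sdiag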
>
> The main reason for the message
>
> "Slurm temporarily unable to accept job, sleeping and retrying."
>
> is (probably) a high number of jobs being submitted or a high number
> completing. Any event like a job submission or a job completion creates
> a new thread, and by default those new threads call the scheduler. Even
> with the scheduler taking just a couple of seconds, hundreds or
> thousands of threads can "lock" the system. You can avoid this
> behaviour by deferring the scheduler calls (the call is still made, but
> it only tries to schedule the first job).
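>
> In slurm.conf that is the "defer" scheduling option, e.g. (check the
> SchedulerParameters entry in the slurm.conf man page for your 2.5
> release, since the available options vary by version):
>
>     SchedulerParameters=defer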
>
> Also, depending on your design in terms of partitions and queues, the
> scheduler in the Slurm version you are using (2.5.4) can take too long
> under some circumstances.
>
> We have been tuning an HTC cluster with Slurm and it now supports heavy
> loads like the one you describe. I have some tweaks (hardcoded) for
> improving scheduling when several partitions are actively used and when
> jobs can be submitted to more than one partition.
>
> On 06/12/2013 06:55 PM, Alan V. Cowles wrote:
>> Under the Data Objects section on the following page
>> http://slurm.schedmd.com/selectplugins.html we find the statement:
>>
>> "Slurmctld is a multi-threaded program with independent read and write
>> locks on each data structure type."
>>
>> Which is what led me to believe the multithreading is there, and that
>> we perhaps missed a configuration option.
>>
>> AC
>>
>>
>>
>> On 06/12/2013 12:43 PM, Paul Edmon wrote:
>>    
>>> I'm also interested in this, as I've only ever seen a single slurmctld
>>> process, and only ever pegged at 100% of one CPU.  It would be good if
>>> making Slurm multithreaded were on the roadmap.  We will have hundreds
>>> of thousands of jobs in flight with our config, so it would be good to
>>> have something that can take that load.
>>>
>>> -Paul Edmon-
>>>
>>> On 06/12/2013 12:30 PM, Alan V. Cowles wrote:
>>>      
>>>> Hey Guys,
>>>>
>>>> I've seen a few references to slurmctld being a multithreaded process,
>>>> but it doesn't seem that way in practice.
>>>>
>>>> We had a user submit 18,000 jobs to our cluster (512 slots). It shows
>>>> all 512 slots fully loaded, shows those jobs running, and shows about
>>>> 9,800 currently pending, but partway through her submission it started
>>>> throwing errors at around job 16,500.
>>>>
>>>> Submitted batch job 16589
>>>> Submitted batch job 16590
>>>> Submitted batch job 16591
>>>> sbatch: error: Slurm temporarily unable to accept job, sleeping and
>>>> retrying.
>>>> sbatch: error: Batch job submission failed: Resource temporarily
>>>> unavailable.
>>>>
>>>> The thing we noticed at the time on our master host is that slurmctld
>>>> was regularly pegging one CPU at 100% and had 16 GB of virtual memory
>>>> paged, while all the other CPUs were completely idle.
>>>>
>>>> We wondered whether the control daemon pegging out like this is what
>>>> led to the submission failure, as we haven't found any limits set
>>>> anywhere for any specific job or user, and wondered if perhaps we
>>>> missed a configuration option when we did our original install.
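>>>>
>>>> Is there somewhere besides slurm.conf we should be checking for such
>>>> a limit, e.g. something like:
>>>>
>>>>     scontrol show config | grep -i max
>>>>
>>>> or is it all driven from the config file?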
>>>>
>>>> Any thoughts or ideas? We're running Slurm 2.5.4 on RHEL6.
>>>>
>>>> AC
>>>>        
