That information is highly dependent upon the hardware, configuration, and workload. Those results were obtained on a small cluster with 8 cores per node and trivial single-core jobs (sbatch of a script running "srun /bin/true"). There was not a significant job backlog.
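For reference, a test of this kind can be reproduced with a trivial batch script and a submission loop, roughly as sketched below. The file names, job count, and timing logic are illustrative, not taken from the original measurement:

  #!/bin/bash
  # true.sh: trivial single-task batch script; the job body just runs /bin/true
  #SBATCH --ntasks=1
  #SBATCH --output=/dev/null
  srun /bin/true

  #!/bin/bash
  # submit_loop.sh: submit NJOBS copies of true.sh and report the submission rate
  NJOBS=1000                                # illustrative job count
  start=$(date +%s)
  for i in $(seq 1 "$NJOBS"); do
      sbatch true.sh > /dev/null
  done
  elapsed=$(( $(date +%s) - start ))
  echo "Submitted $NJOBS jobs in ${elapsed}s (~$(( NJOBS * 3600 / (elapsed > 0 ? elapsed : 1) )) jobs/hour)"

The guide referenced below covers the configuration side; an illustrative slurm.conf fragment in the same spirit appears at the end of this thread.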
There's a guide here that should help you:
https://computing.llnl.gov/linux/slurm/high_throughput.html

________________________________________
From: [email protected] [[email protected]]
On Behalf Of Glanfield, Wayne (Oxford) [[email protected]]
Sent: Wednesday, March 23, 2011 7:55 AM
To: [email protected]
Subject: RE: [slurm-dev] design limits for 2.2? SLURM scalability

Hi Moe,

I'd be very interested in how the work on scaling the job throughput rate goes; this has been an issue for us. We are moving to a modestly larger system in the next few weeks, which should actually help us with submission rate, but I would be curious to know what rates you are expecting to be able to achieve; currently ~120k/hour, but with system size, backlog, etc. not specified?

Thanks
Wayne

-----Original Message-----
From: [email protected] [mailto:[email protected]] On Behalf Of Jette, Moe
Sent: 21 March 2011 16:09
To: [email protected]
Cc: [email protected]
Subject: RE: [slurm-dev] design limits for 2.2? SLURM scalability

By this time next year, SLURM should be running on some much larger systems than those listed below, including systems with slurmd daemons on each compute node. The scalability issues we see are mostly related to the rate of job submission rather than system size, and we're working on that now.

Moe

________________________________________
From: [email protected] [[email protected]]
On Behalf Of Rayson Ho [[email protected]]
Sent: Monday, March 21, 2011 8:56 AM
To: [email protected]
Subject: Re: [slurm-dev] design limits for 2.2? SLURM scalability

It seems that SLURM daemons will not be running on each node of Sequoia; if I read this presentation correctly, slurmd will run on the I/O nodes but not on the compute nodes:

Multi-Petascale Computing on the Sequoia Architecture:
https://hpcrd.lbl.gov/scidac09/talks/Seager-Sequoia4SciDACv1.pdf

Nevertheless, the installations Jette listed are really massive! The largest known Grid Engine installation is Sun's Ranger at TACC, which has only 62,976 processor cores in 3,936 nodes. As a developer and maintainer of a Grid Engine fork (Oracle ended development of the open-source SGE code base in 2010, so we forked the code and started the purely open-source project called "Open Grid Scheduler"), I think Grid Engine won't be able to scale to those numbers in the near or not-so-near future! :-(

Rayson

On Sat, Nov 20, 2010 at 1:49 PM, Jette, Moe <[email protected]> wrote:
> I believe that SLURM can manage any machine that HP can build and a
> customer can pay for ;-)
>
> We have not seen any scaling issues, and some of the machines running SLURM today include:
> Tianhe-1A in China with 186,368 cores,
> Tera-100 at CEA with 138,368 cores, and
> a BlueGene/L at LLNL with 212,992 cores.
>
> We plan to run SLURM on LLNL's 20 PFlop BlueGene/Q system next year
> with 1.6 million processors
> (http://www-304.ibm.com/jct03004c/press/us/en/pressrelease/26599.wss),
> and I am not expecting any scalability problems, although task launch
> on the BlueGene systems differs from that on typical Linux systems.
>
> At the other end of the spectrum, Intel is using SLURM on their 48-core "cluster on a chip"
> (http://www.hpcwire.com/features/Intel-Unveils-48-Core-Research-Chip-78378487.html).
> SLURM's architecture, with a multitude of plugin options, gives it tremendous flexibility.
>
> Moe
> ________________________________________
> From: [email protected] [[email protected]]
> On Behalf Of Andy Riebs [[email protected]]
> Sent: Friday, November 19, 2010 8:14 AM
> To: [email protected]
> Subject: [slurm-dev] design limits for 2.2?
>
> How large a cluster should one expect to be able to support with Slurm
> 2.2? (One suspects that the number is getting rather large!)
>
> Thanks!
> Andy
>
> --
> Andy Riebs
> Hewlett-Packard Company
> SCI Solutions
> +1-786-263-9743
> My opinions are not necessarily those of HP
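The high-throughput guide linked at the top of this thread largely amounts to trading optional features for scheduler speed in slurm.conf. The fragment below only illustrates the kind of settings involved; the parameter values are examples, not recommendations taken from the guide or from this thread:

  # Illustrative slurm.conf fragment for a throughput-oriented setup.
  # Values are examples only; consult the high-throughput guide for real tuning.
  SchedulerType=sched/builtin                    # simple FIFO scheduler, minimal overhead
  SelectType=select/linear                       # whole-node allocation, cheapest node selection
  ProctrackType=proctrack/pgid                   # lightweight process tracking
  JobAcctGatherType=jobacct_gather/none          # skip per-job accounting collection
  AccountingStorageType=accounting_storage/none  # no accounting database traffic
  MinJobAge=10                                   # purge completed job records quickly (seconds)
  MaxJobCount=100000                             # allow a deep queue of pending jobs
  SlurmctldDebug=3                               # modest logging; verbose logging slows the daemon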
