My 2 cents here (and I'm aware they won't help Eric directly ... apologies in advance for the rant, it's a long-term heartfelt topic ...):

A DRM benchmark would be nice to have. Benchmarking in general is almost a vain endeavor: you have to be very prescriptive about the boundary conditions to achieve comparable results, and such narrow boundary conditions can almost never reflect reality. So all benchmarks are up for interpretation and up for debate.

But benchmarks are still a useful means of providing at least some orientation. As Chris has stated, the variability in the use case scenarios of workload managers is certainly even greater than in classical performance benchmarks such as SPEC or Linpack. You also have to be careful about what you are measuring: the underlying HW, network & storage performance? The efficiency of the SW? The ability to tune the workload management system - in itself and in combination with the HW & SW underneath? Or the suitability of the workload management system for a specific application case?

So I suspect a suite of benchmarks would be needed, maybe akin to SPEC, to provide at least a roughly representative picture. And you'd either have to standardize on the HW, e.g. take 100 Amazon dedicated servers and run with that, or you'd have to do it like Linpack and say: "I don't care what you use and how much of it, but report the resulting throughput vs time numbers on these use cases." In other words, how fast can you possibly get - something like a Top500 for workload management environments.

For many companies and institutions the workload manager has become the central workhorse - the conveyor belt of the data center. If it stops, everything stops. If you can make it run quicker, you get your results sooner. If it is flexible enough, you can respond much faster to changing demands. So it's almost ironic that large computing centers benchmark individual server performance, run something like Linpack to advertise their peak performance, and create their own site-specific application benchmark suites for selecting new HW - but they often do not benchmark with the workload management system in the picture, even though it is the component which later, in combination with tuning and the rest of the environment, will define the efficiency of the data center.

So a benchmark for DRMs would be a highly useful tool. I've always wondered how to get an initiative started that would lead to such a benchmark ...

Any ideas?

Cheers,

Fritz


On 16.02.11 22:38, Chris Dagdigian wrote:

What exactly are you trying to benchmark? Job types and workflows are
far too variable to produce a usable generic reference.

The real benchmark is "does it do what I need?" and there are many
people on this list who can help you zero in on answering that question.

SGE is used on anything from single-node servers to the 60,000+ CPU
cores on the RANGER cluster over at TACC.

The devil is in the details of what you are trying to do of course!

-Chris



Eric Kaufmann wrote:
I am fairly new to SGE. I am interested in getting some benchmark
information from SGE.

Are there any tools for this etc?

Thanks,

Eric


_______________________________________________
users mailing list
[email protected]
https://gridengine.org/mailman/listinfo/users


