Is your application really strictly CPU-bound, or is there a memory/I/O 
component?

You might be setting your bar a bit low. You could potentially get a 
full rack of 1U servers for less than that (40 1U servers * 2 CPUs * 
8 cores = 640 cores).

I'd recommend looking at the Supermicro 2U Twin² if you want to stick 
to a half-rack configuration. You'll get better power efficiency and 
screwless maintenance/replacement. Dell has a similar product for a 
similar price, but it's not as easily maintainable. The Twin² puts 4 
servers in 2U, and each is on an independent sled. I have a favorite 
Supermicro dealer who will rack the whole thing, cable it to my 
specifications, do an acceptance test, and ship it as one unit for a 
relatively low install fee. (Shipping an entire rack as one unit is 
optional.) They even label the machines and cables and color-code them 
according to our specs. Finally, they give us a spreadsheet with all of 
the IPs, MAC addresses, etc. as part of the deliverable. Let me know if 
you want a vendor reference.

Also, we use ROCKS too.

If you plan to run a CPU-intensive load, you can tell them to run 
something like Linpack across all nodes for, say, 4 hours before 
delivery. We did something similar for an Infiniband cluster. We also 
specified that no node should score more than ~15% below the cluster's 
average Linpack numbers. You can specify similar memory benchmarks. 
Linpack is a good, general-purpose HPC benchmark.
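
That per-node criterion is easy to check mechanically once you have the 
per-node results. A minimal sketch, assuming you've collected each 
node's Linpack (HPL) GFLOPS into a dict (the node names, numbers, and 
the find_failing_nodes helper are all made up for illustration):

```python
def find_failing_nodes(results, tolerance=0.15):
    """Return nodes scoring more than `tolerance` below the cluster mean.

    results: dict mapping node name -> per-node Linpack GFLOPS.
    """
    avg = sum(results.values()) / len(results)
    cutoff = avg * (1 - tolerance)  # e.g. 15% below the average
    return sorted(node for node, gflops in results.items() if gflops < cutoff)

# Hypothetical per-node HPL results (GFLOPS):
results = {
    "node01": 98.2,
    "node02": 97.5,
    "node03": 75.0,  # suspect node
    "node04": 99.1,
}

print(find_failing_nodes(results))  # -> ['node03']
```

Any node this flags is a candidate for a reseat/reimage and a rerun, or 
rejection under the acceptance terms.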


On 11/16/2010 7:22 PM, Hugh Brown wrote:
> Hi everyone -- I work at a university in a smallish department, and
> I'm helping a faculty member craft an RFP for a smallish cluster:
> about $250k budget, taking up half a rack or so, and around 500 cores.
> We plan to manage/install the cluster using ROCKS.  I expect bids from
> the usual medium/big vendors.  It isn't huge, but it's a couple of
> firsts for me:
>
> * First cluster (woot)
> * First RFP of this size
>
> I'm concerned about acceptance testing, especially after hearing about
> the problems with the Intrepid cluster[1] at the latest LISA
> conference, and I wanted to ask what other admins have done about
> this.
>
> My questions:
>
> * Have you included acceptance testing for large hardware purchases
> before?  Why or why not?
>
> * What have you specified in the tests?  Do you have a template you
> can share?  How specific did you have to be in your RFP about what
> constituted failure?
>
> * What resistance, if any, have you found from the vendors?  What
> about management?  (I heard stories at LISA about acceptance testing
> being whittled down in negotiations in return for concessions on price
> or other things.)  If you had to negotiate, what did you give
> up/trade?
>
> * Have you had to reject anything, or send anything back to be fixed?
> What reaction did you get from the vendor?
>
> I've found one paper[2] on this subject -- very helpful -- but I'm
> hoping that someone can chime in on the larger/squishier (to use Adam
> Moskowitz' term) issues around this.
>
> I realize these are basic questions, but I hope I'm on the right side
> of the "how do I do this" vs. "please do my work for me" line.
>
> Thanks,
> Hugh
>
> Footnotes:
> [1]  http://www.usenix.org/events/lisa10/tech/full_papers/Wilson.pdf
> [2]  http://www2.cs.uh.edu/~openuh/hpcc07/papers/10-Muller.pdf
> --
> Hugh Brown
> http://saintaardvarkthecarpeted.com
> Because the plural of Anecdote is Myth.
>

_______________________________________________
Discuss mailing list
[email protected]
https://lists.lopsa.org/cgi-bin/mailman/listinfo/discuss
This list provided by the League of Professional System Administrators
 http://lopsa.org/
