Hugh Brown <[email protected]> writes:

> Agreed; I was kind of lazy in my summary, which was based on blades.
> (Half a rack of blades will just about max out our power budget per rack.)
When speccing this out to a vendor, I'd be careful not to imply that I needed blades. Specify the number of cores, the power budget, and the physical space available. (Most data centers have a watts-per-square-foot limit, so from that and from the power budget you can work out how many racks you have to play with.)

Quite often, blades are more expensive than 1U servers, and because of the power-per-square-foot limits at most data centers, you generally have to pay for just as much floor space even if the blades fit in half a rack while the 1U servers take a full rack.

In my experience, blade servers quite often don't deliver on their power promises. While it's true that you usually gain a few percentage points of power supply efficiency by going with a blade chassis, blades often give you less component flexibility, and that sometimes dwarfs those few percent of savings. For example, a while back a client of mine was evaluating $bigname blade servers against another $bigname's blade servers. Both brands had essentially the same components inside (low-power Xeons with full-power FB-DIMMs). I pointed out that they could save a whole lot of power by switching to low-power registered ECC DDR2 with the same low-power Xeons, but the blade vendors in question only supported FB-DIMMs at the time. They ended up getting one of the vendors to build some 1U servers with registered ECC DDR2 and the same CPUs (which benchmarked /exactly the same/) that used a whole lot less power. The inefficiency of FB-DIMMs at the time dwarfed the power supply efficiency savings of the expensive blade servers. (The 1U servers ended up being cheaper in capital costs, too.)

Now, sometimes blades are cheaper; I have some of the Supermicro 2-in-1U servers from my Socket F days, not because I needed the density (though sometimes it's nice to have when I have a rack almost full of low-power 3U servers with power to spare but only 1U of free space),
but because they usually come out cheaper than two 1U servers in terms of capital cost.

Another downside to blades (and the reason I now plan more carefully and avoid the 2-in-1U servers) is local disk I/O. Blades, due to their size, have fewer slots for local disks, or they only support 2.5" disks rather than 3.5". (For some use cases 2.5" makes sense; for example, if you have expensive power and need the IOPS you'd get by running a bunch of 500 GiB disks rather than fewer larger disks. For me, though, the 2.5x price premium on 2.5" disks isn't worth saving the three or four watts.) Of course, if you don't use much local storage, this might not be a big deal for you, in which case the Supermicro 2-in-1U stuff might be just perfect.

I'm just saying: if you start with your needs rather than with a particular form factor, you can quite often come up with a solution that better meets those needs.

_______________________________________________
Discuss mailing list
[email protected]
https://lists.lopsa.org/cgi-bin/mailman/listinfo/discuss
This list provided by the League of Professional System Administrators
http://lopsa.org/
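P.S. A back-of-envelope sketch of the two calculations discussed above: racks from a watts-per-square-foot power budget, and memory power (FB-DIMM vs. registered ECC DDR2) versus blade power supply efficiency. Every number here is a made-up illustrative assumption, not a figure from this message or from any vendor:

```python
# Back-of-envelope capacity planning. All input values are hypothetical
# assumptions chosen only to show the shape of the arithmetic.

from math import ceil

# --- racks from the power budget ---
servers_needed = 64            # assumed fleet size (1U servers)
watts_per_server = 250         # assumed wall draw per server
watts_per_sqft = 150           # assumed data center limit
sqft_per_rack = 20             # assumed footprint incl. aisle share
usable_u_per_rack = 40         # assumed usable rack units

watts_per_rack = watts_per_sqft * sqft_per_rack              # 3000 W
racks_by_power = ceil(servers_needed * watts_per_server / watts_per_rack)
racks_by_space = ceil(servers_needed / usable_u_per_rack)
racks = max(racks_by_power, racks_by_space)
# Here power, not physical space, is the binding constraint, which is
# why blade density often doesn't reduce the floor space you pay for.

# --- memory power vs. PSU efficiency, per server ---
dimms = 8                      # assumed DIMMs per server
fbdimm_w, ddr2_w = 10, 4       # assumed watts per module
memory_savings = dimms * (fbdimm_w - ddr2_w)                 # watts saved
psu_savings = watts_per_server * 0.03                        # assumed ~3% better PSU

print(racks_by_power, racks_by_space, racks)                 # 6 2 6
print(memory_savings, psu_savings)                           # 48 7.5
```

With these made-up numbers the memory swap saves roughly six times what the better power supplies do, which is the shape of the result I described above.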
