Let's return to the "getting started" theme by looking at
specifications for the first blade server chassis and blades which can
be used as the basis for learning, experimentation, testing and initial
production pilots.

  Sample specs for that first chassis follow - but please start by
trying very hard to do this collaboratively!  I obtained these specs
from my friend and colleague Eric Sills.

  These refer to the IBM BladeCenter
products, which we started using for our campus distributed-memory
parallel HPC facility and then put into service to run the
VCL's "Desktop Augmentation" service.

  Following the concept of "scaling up": we already knew this hardware
product, we were already using IBM's xCAT software (mentioned before),
and so it made a lot of sense to continue with this same base
infrastructure.  We've continued to be very satisfied with this choice
for a number of reasons, including capability, density, power
efficiency, and reliability.

one BladeCenter E chassis

chassis power supplies 3 & 4 (needed if using more than six blade servers)

two chassis Ethernet switch modules (we use BNT layer 2/3 copper
switch modules)

one (or two) chassis I/O module(s) to directly attach the blade used as
the VCL management node to storage - these may be SAS, iSCSI, or Fibre
Channel (we have used both FC and iSCSI, with optical or copper
pass-through modules)

four to fourteen blade servers.  We have been using Intel blades (HS2x)
with dual Xeon processors (most recently quad-core), about 2GB of
memory per core, and a SAS disk drive.  For running VMs on a hypervisor
you may want to choose one of the larger disks, but for bare-metal
loads a 73GB disk will be more than sufficient.
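The figures above imply a simple capacity calculation.  As a rough
sizing sketch (the dual quad-core Xeons and ~2GB/core come from the
spec above; the blade counts are the four-to-fourteen range given):

```python
# Rough capacity sketch for one BladeCenter E chassis, using the
# figures above: two quad-core Xeons per blade, ~2 GB memory per core.
CORES_PER_BLADE = 2 * 4   # dual quad-core Xeons
GB_PER_CORE = 2           # approximate memory per core

def chassis_capacity(blades):
    """Return (total cores, total memory in GB) for `blades` blades."""
    cores = blades * CORES_PER_BLADE
    return cores, cores * GB_PER_CORE

print(chassis_capacity(4))   # minimal starter config: (32, 64)
print(chassis_capacity(14))  # full chassis: (112, 224)
```

So even a minimal four-blade pilot gives a few dozen cores to work
with, and a full chassis roughly a hundred.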

This system needs to be rack mounted and will need 208VAC power.  The E
chassis has four power supplies with cords that connect to C19 outlets,
so you will need a rack PDU with at least four C19 outlets to connect
the chassis.  We don't totally fill each rack, in part to limit power
density in the room, and also because there may be a need to put some
other devices in the rack.
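The PDU math scales with the number of chassis per rack.  A minimal
sketch - the four C19 cords per chassis is from the text above, while
the chassis-per-rack and outlets-per-PDU values are assumptions for
illustration only:

```python
# Sketch of rack PDU planning.  Each BladeCenter E chassis has four
# power cords needing C19 outlets (per the text); chassis-per-rack and
# outlets-per-PDU values below are illustrative assumptions.
CORDS_PER_CHASSIS = 4

def pdus_needed(chassis_per_rack, c19_outlets_per_pdu):
    """Minimum number of PDUs to power all chassis cords in a rack."""
    cords = chassis_per_rack * CORDS_PER_CHASSIS
    return -(-cords // c19_outlets_per_pdu)  # ceiling division

print(pdus_needed(1, 4))  # one chassis, 4-outlet PDU  -> 1
print(pdus_needed(3, 8))  # three chassis, 8-outlet PDUs -> 2
```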

  As scaling up is done, serious attention should be paid to machine
room design.  Good hot-aisle/cold-aisle layouts will help improve
cooling and reduce the power required for cooling.  Improved cooling
also extends hardware life.  There are also some very interesting and
effective cooling products which attach to the back of the racks or
between the racks, and which promise to be more effective than
hot/cold aisles.
--henry schaffer
