On 13-2-2008 22:06 Tobias Brox wrote:
What I'm told is that the state-of-the-art SAN allows for
an "insane amount" of hard disks to be installed, much more than what
would fit into any decent database server.  We've ended up buying a SAN,
the physical installation was done last week, and I will be able to tell
in some months if it was a good idea after all, or not.

Your SAN-pusher should have a look at the HP submissions for TPC-C... The recent Xeon systems all run without a SAN and are still able to connect hundreds of SAS disks.

This one has 2+28+600 hard drives connected to it:
http://tpc.org/results/individual_results/HP/hp_ml370g5_2p_X5460_tpcc_080107_es.pdf

Long story short: using SAS you can theoretically connect up to 64k disks to a single system. In the HP example they connected 26 external enclosures (MSA70) to 8 internal controllers with external SAS ports. I.e. they ended up with 28+600 hard drives spread out over 16 external 4-lane (x4) SAS connectors, with a bandwidth of 12Gbit per connector...
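
As a quick back-of-the-envelope check (assuming the 3Gbit/s per SAS lane those controllers run at):

   4 lanes x 3 Gbit/s      = 12 Gbit per x4 connector
  16 connectors x 12 Gbit  = 192 Gbit/s aggregate, i.e. roughly 24GB/s of raw
                             link bandwidth to the external enclosures
                             (less in practice after protocol overhead).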

Obviously it's a bit difficult to share those 628 hard drives amongst several systems, but the argument your colleagues have for a SAN isn't a very good one. All major hardware vendors nowadays have external SAS enclosures which hold 12-25 hard drives (and can often be daisy-chained two or three enclosures deep) and can be connected to ordinary internal PCIe SAS RAID cards. Those controllers commonly have two external ports and can be combined with other controllers in the same system to present all the connected enclosures as one or more logical drives, or you can run software LVM/RAID on top of them.
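
Just to illustrate that last option, a minimal sketch of the software-RAID/LVM variant on Linux could look like the lines below. The device names (/dev/sdb, /dev/sdc) and sizes are made up; the real layout depends on how your controllers export their arrays:

  # stripe across the arrays exported by two controllers (each controller
  # already does the RAID inside its own enclosures)
  mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc

  # put LVM on top so volumes can be carved out and grown later
  pvcreate /dev/md0
  vgcreate data /dev/md0
  lvcreate -L 500G -n pgdata data
  mkfs.xfs /dev/data/pgdata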

Anyway, the common physical limit of 6-16 disks in a single server enclosure isn't very relevant anymore, so it's no longer a good argument for buying a SAN.

Best regards,

Arjen
