On Sun, 2010-03-28 at 15:36 -0700, Craig Dunwoody wrote:
> I'd be interested to hear from anyone who has suggestions about
> optimizing the hardware config of an HDD-based OSD node for Ceph, using
> currently available COTS hardware components.

Craig, list,

while this does not match your G1, G2 or G3, there is a G4 absolutely
worth considering IMO:

  Maximize (storage capacity and transfer speed) divided by (hardware
  investment + MRU, i.e. recurring monthly costs).

Then G5:

  Optimize for performance per node,

and G6:

  Optimize for performance of the storage network,

matter too.  Both must be weighed against not only the hardware
investment, but also the MRU due to space, cooling and power
consumption.

I've done some raw calculations for G4, and what I found was that if
you don't mind installing COTS hardware that isn't exactly your
standard data-center make and model, you stand to gain a lot by simply
deploying many fairly low-power devices with 4-5 SATA ports each, IMO.
But it wholly depends on what you are after.  I believe it is very
interesting for a data-warehousing application of Ceph.
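
To illustrate the shape of that calculation, here is a minimal Python
sketch.  Every figure in it (prices, drive counts, throughputs, the
36-month amortization window) is an invented assumption, and I'm
reading G4 as capacity times throughput over total cost, which is just
one possible reading of the metric above:

  def g4_score(capacity_tb, xfer_mb_s, hw_cost, mru_month, months=36):
      # Capacity x throughput per unit of total cost over the
      # amortization window (hardware investment + accumulated MRU).
      total_cost = hw_cost + mru_month * months
      return (capacity_tb * xfer_mb_s) / float(total_cost)

  # Hypothetical low-power box: 4 x 2 TB SATA drives, small CPU.
  print(g4_score(capacity_tb=8, xfer_mb_s=400, hw_cost=450, mru_month=4))

  # Hypothetical standard 2U server: 12 x 2 TB drives.
  print(g4_score(capacity_tb=24, xfer_mb_s=1200, hw_cost=4500, mru_month=30))

With these made-up numbers the small box edges out the 2U server, but
real quotes and measured power draw are what decide it.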

Potentially, I must add.  I haven't tried it.  :)
But for any sizable installation, I believe the storage network itself
will, as you scale up, become the limit on performance.  I.e., you
might hit the storage network's performance ceiling soon enough anyway,
at least if you're using front ends to interface with it.
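
To see why, a hypothetical back-of-envelope in Python, assuming one
GigE link per node and a 20 Gbit/s core (both made-up figures):

  nodes = 50
  disks_per_node = 4
  disk_mb_s = 100            # assumed sustained per-HDD streaming rate
  link_mb_s = 1000 / 8.0     # one GigE port per node = 125 MB/s
  core_gbit = 20             # assumed inter-switch/core capacity

  node_disk = disks_per_node * disk_mb_s       # 400 MB/s of disk per node
  print("per node: %d MB/s disk vs %d MB/s link" % (node_disk, link_mb_s))

  cluster_disk_gb = nodes * node_disk / 1000.0 # 20 GB/s aggregate disk
  core_gb = core_gbit / 8.0                    # 2.5 GB/s core
  print("cluster: %.1f GB/s disk vs %.1f GB/s core"
        % (cluster_disk_gb, core_gb))

Under those assumptions both the per-node link and the core saturate
long before the disks do.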

Unresolved in the above equation are MDS/OSD performance (and their
ratio) and actual per-OSD performance.  Power consumption is quite easy
to get ballpark max/min/avg figures for.
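
E.g., a tiny sketch along these lines, with assumed per-node wattages,
PUE and electricity tariff:

  nodes = 50
  watts = {"min": 30, "avg": 45, "max": 70}   # per node, assumed
  pue = 1.6                                   # cooling/overhead factor
  price = 0.10                                # per kWh, pick your tariff
  hours = 24 * 30

  for label in ("min", "avg", "max"):
      kwh = nodes * watts[label] * pue * hours / 1000.0
      print("%s: %5.0f kWh/month, %6.2f per month"
            % (label, kwh, kwh * price))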


I think you have to figure out what it is you need done for your
specific application, and work backwards from there, because there is
no single optimal configuration of a distributed file system such as
Ceph for all applications.

Cheers,
-- 
Martin Millnert <mar...@millnert.se>
