We use dual E5-2660 v2 CPUs with 56 x 6TB drives and performance has not been
an issue.  A node like that will easily saturate the 40G interfaces and the
spindle I/O.
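
As a rough sanity check (assuming ~150 MB/s sustained per 6TB spindle, which
is a typical figure rather than a measurement from our boxes):

    56 drives x ~150 MB/s ~= 8.4 GB/s aggregate spindle bandwidth
    40GbE                 ~= 5 GB/s

so on large sequential workloads the network fills up before the spindles do;
random/mixed I/O will hit the spindles first.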

And yes, you can run dual servers attached to 30 disks each.  This gives you
lots of density.  Your failure domain remains the individual server.  The
only thing shared is the quad power supplies.
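
If you want CRUSH to be aware of the shared chassis, you can model it
explicitly.  A minimal sketch (node-a, node-b and chassis1 are hypothetical
names; the commands are standard Ceph CLI):

    # declare the chassis bucket and hang both hosts off it
    ceph osd crush add-bucket chassis1 chassis
    ceph osd crush move chassis1 root=default
    ceph osd crush move node-a chassis=chassis1
    ceph osd crush move node-b chassis=chassis1

Rules (or an EC profile) can then use chassis rather than host as the failure
domain, so no PG keeps all its shards behind one set of power supplies.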

Tyler Bishop 
Chief Technical Officer 
513-299-7108 x10 



tyler.bis...@beyondhosting.net 



----- Original Message -----
From: "Nick Fisk" <n...@fisk.me.uk>
To: "Василий Ангапов" <anga...@gmail.com>, "Tyler Bishop" 
<tyler.bis...@beyondhosting.net>
Cc: ceph-users@lists.ceph.com
Sent: Tuesday, February 16, 2016 8:24:33 AM
Subject: RE: [ceph-users] Recommendations for building 1PB RadosGW with Erasure 
Code

> -----Original Message-----
> From: Василий Ангапов [mailto:anga...@gmail.com]
> Sent: 16 February 2016 13:15
> To: Tyler Bishop <tyler.bis...@beyondhosting.net>
> Cc: Nick Fisk <n...@fisk.me.uk>; <ceph-users@lists.ceph.com> <ceph-
> us...@lists.ceph.com>
> Subject: Re: [ceph-users] Recommendations for building 1PB RadosGW with
> Erasure Code
> 
> 2016-02-16 17:09 GMT+08:00 Tyler Bishop
> <tyler.bis...@beyondhosting.net>:
> > With UCS you can run dual servers and split the disks.  30 drives per node.
> > Better density and easier to manage.
> I don't think I got your point. Can you please explain it in more detail?

I think he means that the 60 bays can be zoned, so you end up with one physical 
JBOD split into two 30-bay logical JBODs, each connected to a different server. 
What this does to your failure domains is another question.
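
For example, with an EC profile of k=8 m=3 (illustrative numbers) and the
failure domain set to host, the pool tolerates three simultaneous host
failures; a chassis-level fault takes out two "hosts" at once, spending two of
those three in a single event. Whether that trade is acceptable depends on how
much you trust the shared chassis and power supplies.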

> 
> And again - are dual Xeons powerful enough for a 60-disk node with Erasure Code?

I would imagine yes, but you would most likely need to go for the 12-18 core 
versions with a high clock. These are serious $$$$. I don't know at what point 
this becomes more expensive than 12-disk nodes with "cheap" Xeon-D's or Xeon 
E3's.
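
For reference, the kind of profile this thread is sizing for might look like
the sketch below (the k/m values, pool name and PG counts are illustrative,
and the failure-domain option name varies by release):

    ceph osd erasure-code-profile set ec-8-3 k=8 m=3 ruleset-failure-domain=host
    ceph osd pool create rgw-data 2048 2048 erasure ec-8-3

With k=8 m=3 the raw overhead is (8+3)/8 = 1.375x, so 1PB usable needs roughly
1.4PB raw, i.e. about four of the 60 x 6TB chassis discussed above.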