I'm using 2x replication on that pool for storing RBD volumes. Our workload is
pretty heavy; I'd imagine object storage on EC would be light in comparison.
Tyler Bishop
Chief Technical Officer
513-299-7108 x10
[email protected]
If you are not the intended recipient of this transmission you are notified
that disclosing, copying, distributing or taking any action in reliance on the
contents of this information is strictly prohibited.
From: "John Hogenmiller" <[email protected]>
To: "Tyler Bishop" <[email protected]>
Cc: "Nick Fisk" <[email protected]>, [email protected]
Sent: Wednesday, February 17, 2016 7:50:11 AM
Subject: Re: [ceph-users] Recomendations for building 1PB RadosGW with Erasure
Code
Tyler,
The E5-2660 V2 is a 10-core, 2.2GHz part, so a dual-socket node gives you roughly
44GHz, or about 0.78GHz per OSD. That falls in line with Nick's "golden rule" of
0.5GHz - 1GHz per OSD.
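(For anyone following along, the arithmetic behind that ratio is simple enough to sketch. The figures below come from this thread; the helper name is my own.)

```python
# Sketch of the "0.5-1.0 GHz per OSD" sizing rule discussed in this thread.
# Core counts, clocks, and OSD count are taken from the messages; the
# function itself is just illustrative.

def ghz_per_osd(sockets, cores_per_socket, base_clock_ghz, osd_count):
    """Aggregate base-clock GHz divided across the OSD daemons on a node."""
    total_ghz = sockets * cores_per_socket * base_clock_ghz
    return total_ghz / osd_count

# Dual E5-2660 V2: 2 sockets x 10 cores x 2.2 GHz = 44 GHz, spread over 56 OSDs.
ratio = ghz_per_osd(sockets=2, cores_per_socket=10, base_clock_ghz=2.2, osd_count=56)
print(round(ratio, 2))  # ~0.79 GHz per OSD, inside the 0.5-1.0 rule of thumb
```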
Are you doing EC or Replication? If EC, what profile? Could you also provide an
average of CPU utilization?
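(As background to the EC-vs-replication question, the raw-space overhead of each scheme is easy to work out. The profiles below, 4+2 and 8+3, are common examples I picked for illustration, not necessarily what Tyler is running.)

```python
# Hedged sketch: raw bytes stored per usable byte for replication vs. a
# couple of example erasure-code profiles. The profile choices are
# illustrative assumptions, not taken from this thread.

def raw_overhead(k, m):
    """Raw-to-usable ratio for an EC profile with k data and m coding chunks.
    Replication with size r is the special case k=1, m=r-1."""
    return (k + m) / k

print(raw_overhead(1, 1))            # 2x replication -> 2.0
print(raw_overhead(4, 2))            # EC 4+2 -> 1.5
print(round(raw_overhead(8, 3), 3))  # EC 8+3 -> 1.375
```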
I'm still researching, but so far, the ratio seems to be pretty realistic.
-John
On Tue, Feb 16, 2016 at 9:22 AM, Tyler Bishop <[email protected]> wrote:
We use dual E5-2660 V2 CPUs with 56 x 6TB drives and performance has not been an
issue. It will easily saturate the 40G interfaces and the spindle I/O.
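(A quick back-of-the-envelope check of that 40G claim. The ~180 MB/s per-spindle sequential figure is my assumption for a 6TB nearline drive, not a number from this thread.)

```python
# Rough aggregate spindle bandwidth for a 56-drive node, under an assumed
# per-drive sequential rate. Shows why a 40 Gbit link is the bottleneck.

def aggregate_gbit_per_s(drive_count, mb_per_s_each):
    """Aggregate sequential bandwidth in Gbit/s (1 MB/s = 8/1000 Gbit/s)."""
    return drive_count * mb_per_s_each * 8 / 1000

print(round(aggregate_gbit_per_s(56, 180)))  # ~81 Gbit/s of raw spindle bandwidth
```

So even at half that per-drive rate, the spindles can still fill a 40G interface.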
And yes, you can run dual servers attached to 30 disks each. This gives you lots
of density. Your failure domain remains the individual servers; the only thing
shared is the quad power supplies.
Tyler Bishop
Chief Technical Officer
513-299-7108 x10
[email protected]
----- Original Message -----
From: "Nick Fisk" <[email protected]>
To: "Василий Ангапов" <[email protected]>, "Tyler Bishop" <[email protected]>
Cc: [email protected]
Sent: Tuesday, February 16, 2016 8:24:33 AM
Subject: RE: [ceph-users] Recomendations for building 1PB RadosGW with Erasure
Code
> -----Original Message-----
> From: Василий Ангапов [mailto:[email protected]]
> Sent: 16 February 2016 13:15
> To: Tyler Bishop <[email protected]>
> Cc: Nick Fisk <[email protected]>; [email protected]
> Subject: Re: [ceph-users] Recomendations for building 1PB RadosGW with
> Erasure Code
>
> 2016-02-16 17:09 GMT+08:00 Tyler Bishop
> < [email protected] >:
> > With ucs you can run dual server and split the disk. 30 drives per node.
> > Better density and easier to manage.
> I don't think I got your point. Can you please explain it in more detail?
I think he means that the 60 bays can be zoned, so you end up with one physical
JBOD split into two logical 30-disk JBODs, each connected to a different server.
What this does to your failure domains is another question.
>
> And again - are dual Xeons powerful enough for a 60-disk node with Erasure Code?
I would imagine yes, but you would most likely need to go for the 12-18 core
versions with a high clock. These are serious $$$$. I don't know at what point
this becomes more expensive than 12-disk nodes with "cheap" Xeon-Ds or Xeon
E3s.
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com