From: ceph-users-boun...@lists.ceph.com [ceph-users-boun...@lists.ceph.com]
on behalf of Lincoln Bryant [linco...@uchicago.edu]
Sent: Thursday, January 16, 2014 1:10 PM
To: Cedric Lemarchand
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Ceph / Dell hardware recommendation
For our ~400 TB Ceph deployment, we bought:
(2) R720s w/ dual X5660s and 96 GB of RAM
(1) 10Gb NIC (2 interfaces per card)
(4) MD1200s per machine
...and a boatload of 4 TB disks!
In retrospect, I would almost certainly have gotten more servers. During
heavy […]
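Lincoln's parts list above works out to roughly the quoted raw capacity. A back-of-the-envelope check, assuming the standard 12 drive bays per MD1200 enclosure (the bay count is not stated in the mail):

```python
# Raw-capacity estimate for the deployment described above.
# Assumption (not stated in the mail): each MD1200 holds 12 x 3.5" drives.
servers = 2       # R720 head nodes
enclosures = 4    # MD1200s per server
bays = 12         # drive bays per MD1200
drive_tb = 4      # 4 TB disks

raw_tb = servers * enclosures * bays * drive_tb
print(raw_tb)  # 384 TB raw, close to the quoted ~400 TB
```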
On 16/01/2014 10:16, NEVEU Stephane wrote:
Thank you all for the comments.
So to sum up a bit, it's a reasonable compromise to buy:
2 x R720 with 2 x Intel E5-2660v2 (2.2 GHz, 25M cache), 48 GB RAM, 2 x 146 GB
SAS 6Gbps 2.5-in 15K RPM hot-plug drives in the Flex Bay for the OS, and 24 x
1.2 TB SAS 6G […]
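As a sanity check on the summarized configuration, a quick capacity estimate; the 2-way replication between the two datacenters is an assumption carried over from the original requirements, not stated in this summary:

```python
# Capacity estimate for 2 x R720, each with 24 x 1.2 TB SAS data drives
# (the OS lives on the two 146 GB Flex Bay disks, so all 24 bays hold data).
servers = 2
drives_per_server = 24
drive_tb = 1.2

raw_tb = servers * drives_per_server * drive_tb  # 57.6 TB raw
usable_tb = raw_tb / 2                           # assuming 2-way replication
print(raw_tb, usable_tb)  # well above the ~3.75 TB per-site target
```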
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Ceph / Dell hardware recommendation
On 01/15/2014 12:42 PM, Derek Yarnell wrote:
...
> I think this is more a configuration Dell has been
> unwilling to sell, is all.
Ah.
Every once in a while they make their BIOS complain when it finds a
non-"Dell approved" disk. Once enough customers start screaming, they
release a "BIOS update" […]
On 01/15/2014 10:53 AM, Alexandre DERUMIER wrote:
>>> From what I understand, the "flexbay" is inside the box, typically
>>> useful for OS (SSD) drives; it then lets you use all the front hot-plug
>>> slots for larger platter drives.
>
> Yes, it's inside the box.
>
> I ask the question because […]
"They currently give me a hard time about trying to mix and
match SSDs though on the 12-bay back-plane, which is not a technical
problem but a Dell problem."
----- Original Message -----
From: "Cedric Lemarchand"
To: ceph-users@lists.ceph.com
Sent: Wednesday, January 15, 2014 1[…]
On 15/01/2014 17:34, Alexandre DERUMIER wrote:
Hi Derek,
Thanks for the information about the R720xd.
It seems that a 24-drive chassis is also available.
What is the advantage of using the flexbay for SSDs? Bypassing the back-plane?
From what I understand, the "flexbay" is inside the box, typically
useful […]
On 1/15/14, 9:20 AM, Mark Nelson wrote:
> I guess I'd probably look at the R520 in an 8-bay configuration with an
> E5-2407 and 4 x 1 TB data disks per chassis (along with whatever OS disk
> setup you want). That gives you 4 PCIe slots for the extra network
> cards, the option for a hardware RAID controller […]
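Mark's R520 sizing can be turned into a rough node count for the original poster's ~3.75 TB-per-site requirement; the in-cluster 2-way replication factor below is an assumption for illustration, not something Mark specified:

```python
import math

# Per-chassis raw capacity in the suggested R520 configuration.
data_disks = 4
disk_tb = 1.0
raw_per_chassis = data_disks * disk_tb  # 4 TB

# Chassis needed for ~3.75 TB usable per site, assuming (not stated
# in the thread) 2-way replication inside each cluster.
usable_needed = 3.75
replication = 2
chassis = math.ceil(usable_needed * replication / raw_per_chassis)
print(chassis)  # 2 chassis per site in this best-case sketch
```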
Hi all,
I have to build a new Ceph storage architecture replicated between two
datacenters (for a Disaster Recovery Plan), so basically 2 x 30 terabits
(2 x 3.75 terabytes).
I can only buy Dell servers.
I planned to use 2 x 1 Gb (LACP) for the replication network and also
2 x 1 Gb (LACP) for the production network […]
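A rough transfer-time estimate for the proposed 2 x 1 Gb LACP replication link; this is a best-case sketch that assumes the full bonded 2 Gbps is usable, which LACP only approaches when traffic is spread across multiple flows:

```python
# Time to move one site's ~3.75 TB over a 2 x 1 GbE LACP bond.
data_terabits = 30  # 3.75 TB ~= 30 terabits, as stated in the post
link_gbps = 2       # 2 x 1 GbE, best case

seconds = data_terabits * 1000 / link_gbps  # 15000 s
hours = seconds / 3600
print(round(hours, 1))  # ~4.2 hours for a full initial sync
```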