On 05/08/2013 07:08 AM, Barry O'Rourke wrote:
> Hi,
>
> I've been doing some numbers today and it looks like our choice is
> between 6 x R515s or 6 x R410s, depending upon whether we want to allow
> for the possibility of adding more OSDs at a later date.

Yeah, tough call. I would expect that R410s [...]

Hi,

I'm looking to purchase a production cluster of 3 Dell PowerEdge R515s,
which I intend to run with 3x replication. I've opted for the following
configuration:

2 x 6-core processors
32GB RAM
H700 controller (1GB cache)
2 x SAS OS disks (in RAID 1)
2 x 1GbE (bonded for cluster network)
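
For what it's worth, the replication factor and the network split map
directly onto ceph.conf; a minimal sketch (the subnets below are
placeholders, not our real addressing):

    [global]
        # keep three copies of every object (3x replication)
        osd pool default size = 3
        # client traffic vs. replication/recovery traffic
        public network  = 192.168.1.0/24
        cluster network = 192.168.2.0/24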

Hi,

> I'd be interested to hear from anyone running a similar configuration.

I'm running a somewhat similar configuration here. I'm wondering why you
have left out SSDs for the journals? I gather they would be quite
important for achieving the level of performance needed to host 100
virtual machines.
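
Pointing a journal at an SSD is a one-line change per OSD; a rough
sketch of what I mean (the partition labels are hypothetical):

    [osd]
        # journal on a partition of a shared SSD rather than the data disk
        osd journal = /dev/disk/by-partlabel/journal-$id
        # journal size in MB
        osd journal size = 10240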

FWIW, here is what I have for my Ceph cluster:

4 x HP DL180 G6
12GB RAM
P411 with 512MB battery-backed cache
10GigE
4 x HP MSA60 with 12 x 1TB 7.2k SAS and SATA drives (bought at different
times, so there is a mix)
2 x HP D2600 with 12 x 3TB 7.2k SAS drives

I'm currently running 79 qemu/kvm VMs.

Hi,

> I'm running a somewhat similar configuration here. I'm wondering why
> you have left out SSDs for the journals?

I can't go into exact prices due to our NDA, but I can say that getting
a couple of decent SSD disks from Dell will increase the cost per server
by a four-figure sum, and we're [...]

May 7, 2013, 9:17:24 AM
Subject: Re: [ceph-users] Dell R515 performance and specification question

Hi,

With so few disks and the inability to do 10GbE, you may want to
consider doing something like 5-6 R410s or R415s and just using the
on-board controller with a couple of SATA disks and 1 SSD for the
journal. That should give you better aggregate performance, since in
your case you [...]
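
As a rough back-of-the-envelope on the aggregate point (assuming each
node keeps a 2 x 1GbE bond, call it ~200 MB/s usable per node, with
clients spread across nodes):

    3 nodes x ~200 MB/s = ~600 MB/s aggregate
    6 nodes x ~200 MB/s = ~1.2 GB/s aggregate

Replication traffic will eat into both numbers, but the ratio between
them holds.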

Hi,

On Tue, 2013-05-07 at 21:07 +0300, Igor Laskovy wrote:
> If I understand the idea correctly, when this one SSD fails, the whole
> node with that SSD will fail. Correct?

Only the OSDs that use that SSD for their journal will fail, as they
will lose any writes still in the journal. If I only have 2 OSDs [...]
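
Such a failure stays survivable as long as replicas land on different
hosts; a sketch of the knob that enforces that (1 = the host level in
the default CRUSH hierarchy):

    [global]
        # spread each object's replicas across distinct hosts, so losing
        # every OSD behind one journal SSD costs one copy, not the data
        osd crush chooseleaf type = 1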

Hi,

Here are some quick performance numbers at various block sizes on a host
with one public 1GbE link and one 1GbE link on the same VLAN as the Ceph
cluster.
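
(If anyone wants to reproduce this kind of sweep, rados bench takes a
block-size flag; the pool name here is just a placeholder:)

    # 30-second write tests at 4KB and 4MB block sizes, 16 concurrent ops
    rados -p testpool bench 30 write -b 4096 -t 16
    rados -p testpool bench 30 write -b 4194304 -t 16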

Thanks for taking the time to look into this for me; I'll compare it
with my existing set-up in the morning.
Thanks,
Barry