Christian and everyone else have expertly covered the SSD capabilities,
pros, and cons, so I'll skip that. I believe you were saying that it would
be risky to swap your existing journals out to a new journal device. That
is actually a very simple operation that can be scripted to take only minutes.
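Per OSD, the sequence looks something like this (just a sketch, assuming
FileStore OSDs with the standard journal symlink under /var/lib/ceph;
$ID and $NEW_PART are placeholders for your OSD id and the new journal
partition):

  ceph osd set noout                  # keep CRUSH from rebalancing while the OSD is down
  systemctl stop ceph-osd@$ID
  ceph-osd -i $ID --flush-journal     # drain the old journal to the data disk
  ln -sf /dev/disk/by-partuuid/$NEW_PART /var/lib/ceph/osd/ceph-$ID/journal
  ceph-osd -i $ID --mkjournal         # initialize the journal on the new device
  systemctl start ceph-osd@$ID
  ceph osd unset noout

Loop that over the OSDs on a node and the whole swap is done in minutes.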
On 22-6-2017 03:59, Christian Balzer wrote:
>> Agreed. On the topic of journals and double bandwidth, am I correct in
>> thinking that btrfs (as insane as it may be) does not require double
>> bandwidth like xfs? Furthermore with bluestore being close to stable, will
>> my architecture need to
Hi,
One of the benefits of PCIe NVMe is that it does not take a disk slot,
resulting in a higher density. For example, a 6048R-E1CR36N with 3x PCIe
NVMe yields 36 OSDs per server (12 OSDs per NVMe), whereas it yields only
30 OSDs per server if using SATA SSDs for journals (6 OSDs per SSD), since
the journal SSDs themselves occupy drive bays.
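The bay math works out like this (assuming the journal SSDs have to sit in
the same 36 bays):

  NVMe journals: 36 bays = 36 OSDs (journals on the 3 PCIe cards)
  SATA journals: 36 bays = 30 OSDs + 5 journal SSDs, with 1 bay left over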
Since you say that you used
Hello,
Hmm, gmail client not grokking quoting these days?
On Wed, 21 Jun 2017 20:40:48 -0500 Brady Deetz wrote:
> On Jun 21, 2017 8:15 PM, "Christian Balzer" wrote:
>
> On Wed, 21 Jun 2017 19:44:08 -0500 Brady Deetz wrote:
>
> > Hello,
> > I'm expanding my 288 OSD, primarily
On Jun 21, 2017 8:15 PM, "Christian Balzer" wrote:
On Wed, 21 Jun 2017 19:44:08 -0500 Brady Deetz wrote:
> Hello,
> I'm expanding my 288 OSD, primarily cephfs, cluster by about 16%. I have 12
> osd nodes with 24 osds each. Each osd node has 2 P3700 400GB NVMe PCIe
> drives
On Wed, 21 Jun 2017 19:44:08 -0500 Brady Deetz wrote:
> Hello,
> I'm expanding my 288 OSD, primarily cephfs, cluster by about 16%. I have 12
> osd nodes with 24 osds each. Each osd node has 2 P3700 400GB NVMe PCIe
> drives providing 10GB journals for groups of 12 6TB spinning rust drives
> and 2x
Hello,
I'm expanding my 288 OSD, primarily cephfs, cluster by about 16%. I have 12
osd nodes with 24 osds each. Each osd node has 2 P3700 400GB NVMe PCIe
drives providing 10GB journals for groups of 12 6TB spinning rust drives
and 2x LACP 40Gbps Ethernet.
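For reference, the 10GB journals translate to something like this in
ceph.conf (a sketch; osd journal size is specified in MB):

  [osd]
  osd journal size = 10240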
Our hardware provider is recommending