Hi Roos,
I will try with the configuration, thank you very much!
Best Regards,
Dave Chen
-----Original Message-----
From: Marc Roos
Sent: Wednesday, November 14, 2018 4:37 PM
To: ceph-users; Chen2, Dave
Subject: RE: [ceph-users] Benchmark performance when using SSD as the journal
Thanks Mokhtar! This is what I am looking for, thanks for your explanation!
Best Regards,
Dave Chen
From: Maged Mokhtar
Sent: Wednesday, November 14, 2018 3:36 PM
To: Chen2, Dave; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Benchmark performance when using SSD as the journal
Thanks Martin for your suggestion!
I will definitely try BlueStore later. The version of Ceph I am using is
v10.2.10 Jewel; do you think BlueStore is stable enough on Jewel, or
should I upgrade Ceph to Luminous?
Best Regards,
Dave Chen
From: Martin Verges
Sent: Wednesday, November
Thanks Merrick!
I haven’t tried BlueStore yet, but I believe what you said. I tried again with
“rbd bench-write” on filestore, and the result shows a more than 50% performance
increase with the SSD as the journal, so I still cannot understand why
“rados bench” does not show any difference.
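For reference, the two benchmarks being compared can be run roughly as follows. This is a sketch against a live cluster; the pool name "rbd" and image name "bench-img" are placeholders, not from the original thread:

```shell
# Image-level write benchmark through librbd (Jewel-era syntax);
# "bench-img" is a placeholder image created just for the test.
rbd create bench-img --size 10240 --pool rbd
rbd bench-write bench-img --pool rbd --io-size 4096 --io-threads 16

# Object-level benchmark on the same pool, for comparison.
rados bench -p rbd 60 write --no-cleanup
rados -p rbd cleanup
```

Note that `rbd bench-write` exercises the full librbd write path (including the journal on every replica), while `rados bench` issues whole-object writes, so the two can respond differently to journal placement.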
Thanks Merrick!
I checked the Intel spec [1]; the performance figures Intel quotes are:
• Sequential Read (up to) 500 MB/s
• Sequential Write (up to) 330 MB/s
• Random Read (100% Span) 72000 IOPS
• Random Write (100% Span) 2 IOPS
I think these figures should be much better than a general HDD, and
Hi all,
We want to compare the performance of an HDD partition as the journal (inline,
on the OSD disk) versus an SSD partition as the journal. Here is what we have
done: we have 3 nodes used as Ceph OSD hosts, each with 3 OSDs on it. First, we
created the OSDs with the journal on an OSD-disk partition, and ran
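The two OSD layouts described above are usually prepared like this under filestore (a sketch; the device names are placeholders, and `ceph-disk` was the standard tool in the Jewel era):

```shell
# Filestore OSD with the journal co-located on the data disk
# (a journal partition is carved out of /dev/sdb itself):
ceph-disk prepare /dev/sdb

# Filestore OSD with the journal on a dedicated SSD/NVMe device:
ceph-disk prepare /dev/sdb /dev/nvme0n1

# Activate the data partition in either case:
ceph-disk activate /dev/sdb1
```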
Hi all,
I have been trying to migrate the journal to an SSD partition for a while;
basically I followed the guide here [1]. I have the configuration below
defined in ceph.conf:
[osd.0]
osd_journal = /dev/disk/by-partlabel/journal-1
and then created the journal this way:
# ceph-osd -i 0
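For context, the commonly documented sequence for moving a filestore journal is shown below. This is a sketch, not the exact steps from the guide cited above; the systemd service name assumes a standard deployment:

```shell
# Stop the OSD before touching its journal.
systemctl stop ceph-osd@0

# Drain any pending writes from the old journal.
ceph-osd -i 0 --flush-journal

# After pointing osd_journal at the SSD partition in ceph.conf,
# initialize the new journal and restart the OSD.
ceph-osd -i 0 --mkjournal
systemctl start ceph-osd@0
```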
I saw this statement at this link (
http://docs.ceph.com/docs/master/rados/operations/crush-map/ ); is that the
reason that leads to the warning?
" This, combined with the default CRUSH failure domain, ensures that replicas
or erasure code shards are separated across hosts and a single
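The failure domain the quote refers to can be checked directly. A sketch, assuming the default replicated rule; the rule name "replicated_osd" below is a hypothetical example:

```shell
# Dump the CRUSH rules; a step like
#   "op": "chooseleaf_firstn", "type": "host"
# means each replica must land on a distinct host.
ceph osd crush rule dump

# For a small lab cluster, a rule with an OSD-level failure domain
# can be created instead (replicas may then share a host):
ceph osd crush rule create-simple replicated_osd default osd
```

With a host failure domain and 3 replicas, any pool needs at least 3 hosts holding OSDs, which matches the warnings discussed in this thread.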
Hi Burkhard,
Thanks for your explanation. I created a new 2TB OSD on another node, and it
indeed solved the issue; the status of the Ceph cluster is "health HEALTH_OK"
now.
Another question: if three homogeneous OSDs are spread across 2 nodes, I still
get the warning message, and the status
Hi all,
I have set up a Ceph cluster in my lab recently. The configuration should, per
my understanding, be okay: 4 OSDs across 3 nodes, 3 replicas. But a couple of
PGs are stuck in state "active+undersized+degraded"; I think this should be a
fairly generic issue, could anyone help me out?
Here is the
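A minimal triage for PGs stuck in this state looks roughly like the following (a sketch against a live cluster):

```shell
# Names the PGs that are undersized/degraded and why.
ceph health detail

# Lists stuck PGs together with their acting OSD sets.
ceph pg dump_stuck unclean

# Shows how the 4 OSDs map onto the 3 hosts; with 3 replicas and a
# host-level failure domain, each PG needs OSDs on 3 distinct hosts,
# so an uneven OSD distribution can leave some PGs undersized.
ceph osd tree
```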
Hi,
Our Ceph version is Kraken, and each storage node has up to 90 hard disks that
can be used for OSDs. We configured the messenger type as "simple"; I noticed
that the "simple" type may create lots of threads and hence occupy a lot of
resources. We observed the configuration will
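The thread pressure described above comes from the simple messenger using two threads per connection; the async messenger uses a fixed worker pool instead. A sketch of the switch, as a ceph.conf fragment (verify the async messenger's maturity on Kraken before relying on it in production):

```shell
# Append to ceph.conf on each node, then restart the daemons.
cat >> /etc/ceph/ceph.conf <<'EOF'
[global]
ms_type = async
EOF
```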