These drives are running as OSDs, not as journals.
What I can't understand is why the performance of rados bench
with 1 thread is 3 times slower. Ceph osd bench shows good results.
In my opinion it should be about 20% slower, because of software overhead.
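(For reference, a minimal sketch of the two benchmarks being compared; the pool
name and OSD id are placeholders, not taken from the original mail:

    rados bench -p testpool 60 write -t 1
    ceph tell osd.0 bench

rados bench with -t 1 sends one 4 MB write at a time through the whole
librados/network/OSD path, while ceph tell osd.N bench only writes locally on a
single OSD, which is part of why the osd bench numbers look better.)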
I read the blog post
You should do your reference test with dd with oflag=direct,dsync.
direct will only bypass the cache, while dsync will fsync on every
block, which is much closer to the reality of what Ceph is doing, afaik.
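A minimal sketch of such a reference test, assuming a scratch file on a mounted
filesystem (the path and sizes below are placeholders):

    dd if=/dev/zero of=/mnt/test/ddfile bs=4k count=10000 oflag=direct,dsync

oflag=direct bypasses the page cache and dsync forces a flush after every 4k
block, so the result reflects synchronous write latency rather than cached
throughput.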
On Thu, Jan 4, 2018 at 9:54 PM, Rafał Wądołowski wrote:
> Hi folks,
> From: Rafał Wądołowski [mailto:rwadolow...@cloudferro.com]
> Sent: Thursday, 4 January 2018 16:56
> To: c...@elchaka.de; ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] Performance issues on Luminous
>
> I have size of 2.
>
> We know about this risk and we accept it, but we still don't know why
> the performance is so bad.
They are configured with bluestore.
The network, CPU and disks are doing nothing. I was observing with atop,
iostat and top.
I have a similar hardware configuration on Jewel (with filestore), and
there it performs well.
Cheers,
Rafał Wądołowski
On 04.01.2018 17:05, Luis Periquito wrote:
you never said if it was bluestore or filestore?
Can you look in the server to see which component is being stressed
(network, cpu, disk)? Utilities like atop are very handy for this.
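For example (iostat is in the sysstat package; the 1-second interval is just an
illustration):

    iostat -x 1
    atop 1

iostat -x shows per-device %util and await, and atop highlights whichever
resource (disk, CPU, network) is busiest while the benchmark runs.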
Regarding those specific SSDs: they are particularly bad when running
for some time without trimming - performance
I have size of 2.
We know about this risk and we accept it, but we still don't know why
the performance is so bad.
Cheers,
Rafał Wądołowski
On 04.01.2018 16:51, c...@elchaka.de wrote:
I assume you have size of 3; then divide your expected 400 by 3 and
you are not far away from what you get...
I assume you have size of 3; then divide your expected 400 by 3 and you are
not far away from what you get...
In addition, you should never use consumer-grade SSDs for Ceph, as they will
reach their DWPD limit very soon...
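As a rough sanity check of that arithmetic (units assumed to be MB/s): every
client write is stored size-many times, so

    400 / 3 ≈ 133   (what you would see with size=3)
    400 / 2 = 200   (what you would see with size=2)

i.e. the replication factor alone explains roughly a 3x (or 2x) gap versus the
raw expectation.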
On 4 January 2018 09:54:55 CET, "Rafał Wądołowski" wrote:
Hi folks,
I am currently benchmarking my cluster because of a performance issue, and I
have no idea what is going on. I am using these devices in qemu.
Ceph version 12.2.2
Infrastructure:
3 x Ceph-mon
11 x Ceph-osd
Each Ceph-osd node has 22 x 1TB Samsung SSD 850 EVO
96GB RAM
2x E5-2650 v4
4x10G