Hello Ladies and Gentlemen;-)
The reason for the problem was the lack of a battery-backed cache. After
we installed it, the load is even across all OSDs.
Thanks
Pawel
---
Paweł Orzechowski
pawel.orzechow...@budikom.net
On 09/03/2014 04:34 PM, pawel.orzechow...@budikom.net wrote:
Hello Ladies and Gentlemen ;-)
The reason for the problem was the lack of a battery-backed cache. After
we installed it, the load is even across all OSDs.
Glad to hear it was that simple! :)
Mark
Irrelevant, but I need to say this: Cephers aren't only men, you know... :-)
Cheers,
Patrycja
2014-08-26 12:58 GMT+02:00 pawel.orzechow...@budikom.net:
Hello Gentlemen :-)
Let me point out one important aspect of this low-performance problem: of
all 4 nodes of our Ceph cluster only one node
Move the journals onto the SSD and you will immediately increase
performance; you lose roughly 50% of performance to journal writes. Also,
with three replicas, more than 5 hosts are recommended.
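For reference, a minimal sketch of what moving the journal to an SSD could look like in ceph.conf. This assumes a FileStore OSD and a dedicated SSD partition per OSD; the partition path is hypothetical:

```ini
; hedged sketch, not a verified production config:
; point each OSD's journal at an SSD partition instead of the
; default journal file on the data disk
[osd]
osd journal size = 5120                              ; journal size in MB

[osd.0]
osd journal = /dev/disk/by-partlabel/journal-osd0    ; assumed SSD partition
```

After changing the journal location, the OSD must be stopped, the old journal flushed, and a new one created before restarting.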
2014-08-26 12:17 GMT+04:00 Mateusz Skała mateusz.sk...@budikom.net:
Hi, thanks for the reply.
You mean to move /var/log/ceph/* to the SSD disk?
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
I'm sorry, of course I meant journals :)
2014-08-26 13:16 GMT+04:00 Mateusz Skała mateusz.sk...@budikom.net:
You mean to move /var/log/ceph/* to the SSD disk?
Hello Gentlemen :-)
Let me point out one important aspect of this low-performance problem:
of all 4 nodes of our Ceph cluster only one node shows bad metrics,
that is, very high latency on its OSDs (200-600 ms), while the other
three nodes behave normally, that is, the latency of their OSDs is low.
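Per-OSD latency like this can be inspected with `ceph osd perf`, which reports commit and apply latency in milliseconds. A hedged sketch of filtering its output for slow OSDs; the 200 ms threshold and the canned sample data below are assumptions, not values from the cluster in question:

```shell
#!/bin/sh
# Flag OSDs whose apply latency (3rd column of `ceph osd perf`)
# exceeds 200 ms. In real use, pipe the live command in:
#   ceph osd perf | flag_slow_osds
flag_slow_osds() {
    awk 'NR > 1 && $3 > 200 { print "osd." $1 " apply latency " $3 "ms" }'
}

# Canned sample standing in for real `ceph osd perf` output:
flag_slow_osds <<'EOF'
osd fs_commit_latency(ms) fs_apply_latency(ms)
  0                     2                    3
  1                   240                  412
  2                     1                    2
EOF
```

Running this against the sample prints only the outlier, `osd.1 apply latency 412ms`, which is the pattern described above: one node's OSDs far above the rest.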
I had a similar problem once. I traced it to a failed battery on my
RAID card, which disabled write caching. One of the many things I need
to add to monitoring.
On Tue, Aug 26, 2014 at 3:58 AM, pawel.orzechow...@budikom.net wrote:
Hello Gentlemen :-)
Let me point out one important