Hi all!
We have an annoying problem - when we launch intensive reading with rbd, the
client on which the image is mounted hangs in this state:
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sda 0.00 0.00
Try lowering filestore max sync interval and filestore min sync
interval. It looks like data is being flushed from some overly large
buffer during the hang.
If this does not help, you can monitor perf stats on the OSDs to see if
some queue is unusually large.
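A minimal sketch of what that tuning might look like in ceph.conf, under the [osd] section; the values here are illustrative, not recommendations:

```ini
[osd]
# Defaults in this era of Ceph are roughly 0.01 (min) and 5 (max) seconds.
# Lowering the max interval makes the filestore flush smaller amounts of
# data more often, instead of one large burst.
filestore min sync interval = 0.01
filestore max sync interval = 1
```

The change can also be injected at runtime with `ceph tell osd.* injectargs`, though a config-file change plus OSD restart is the more conservative route.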
--
Tomasz Kuzemko
On Thu, Dec 11, 2014 at 7:57 PM, reistlin87 79026480...@yandex.ru wrote:
> Hi all!
> We have an annoying problem - when we launch intensive reading with rbd, the
> client on which the image is mounted hangs in this state:
> Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz
On Mon, Dec 15, 2014 at 4:11 PM, Tomasz Kuzemko tomasz.kuze...@ovh.net wrote:
> Try lowering filestore max sync interval and filestore min sync
> interval. It looks like during the hanged period data is flushed from
> some overly big buffer.
> If this does not help you can monitor perf stats on OSDs
We tried the default configuration, without additional parameters, but it still hangs.
How can we see the OSD queue?
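Not authoritative, but one common way to inspect OSD queues is the perf counters exposed over the admin socket: on the OSD host, `ceph daemon osd.0 perf dump` prints them as JSON. The sketch below filters queue-related fields from a saved dump; the sample data and the exact counter names are assumptions and may differ by version:

```shell
# On the OSD host (requires access to the admin socket):
#   ceph daemon osd.0 perf dump > /tmp/perf_dump.json
# Sample data stands in for a real dump here:
cat > /tmp/perf_dump.json <<'EOF'
{"filestore": {"op_queue_ops": 12, "op_queue_bytes": 483328, "journal_queue_ops": 3}}
EOF

# Print the queue-depth counters from the filestore section:
python - <<'EOF'
import json
fs = json.load(open('/tmp/perf_dump.json')).get('filestore', {})
for key in ('op_queue_ops', 'op_queue_bytes', 'journal_queue_ops'):
    print(key, '=', fs.get(key))
EOF
```

Watching these counters across several dumps during a hang should show whether one queue keeps growing while reads stall.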
15.12.2014, 16:11, Tomasz Kuzemko tomasz.kuze...@ovh.net:
> Try lowering filestore max sync interval and filestore min sync
> interval. It looks like during the hanged period data is
No, there is nothing in dmesg about hangs.
Here are the versions of software:
root@ceph-esx-conv03-001:~# uname -a
Linux ceph-esx-conv03-001 3.17.0-ceph #1 SMP Sun Oct 5 19:47:51 UTC 2014 x86_64
x86_64 x86_64 GNU/Linux
root@ceph-esx-conv03-001:~# ceph --version
ceph version 0.87
On Mon, Dec 15, 2014 at 7:05 PM, reistlin87 79026480...@yandex.ru wrote:
> No, there is nothing in dmesg about hangs
Not necessarily about hangs. Are there any "socket closed" messages? Can you
pastebin the entire kernel log for me?
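As a sketch of what to look for (the log line below is a fabricated sample standing in for real kernel output), the kernel rbd client logs connection events under the `libceph` prefix, so grepping the saved log is a quick first check before pastebin-ing it:

```shell
# Sample lines standing in for real kernel output; on the client, just run:
#   dmesg > kern.log
printf '%s\n' \
  'libceph: osd3 10.0.0.3:6800 socket closed (con state OPEN)' \
  'usb 1-1: new high-speed USB device number 2' > /tmp/kern.log

# Pull out socket-close events before sharing the full log:
grep -i 'socket closed' /tmp/kern.log
```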
> Here is the versions of software:
> root@ceph-esx-conv03-001:~# uname -a
> Linux