Re: [ceph-users] slow requests and high i/o / read rate on bluestore osds after upgrade 12.2.8 -> 12.2.10

2019-01-19 Thread Mykola Golub
On Fri, Jan 18, 2019 at 11:06:54AM -0600, Mark Nelson wrote:
> IE even though you guys set bluestore_cache_size to 1GB, it is being
> overridden by bluestore_cache_size_ssd.
Isn't it vice versa [1]?
[1] https://github.com/ceph/ceph/blob/luminous/src/os/bluestore/BlueStore.cc#L3976
-- Mykola
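
A quick way to see which of these values a running OSD is actually applying (a minimal sketch; osd.0 is just a placeholder id, and per the Luminous code referenced in [1] a non-zero bluestore_cache_size appears to take precedence over the ssd/hdd-specific options):

    # ask the OSD over its admin socket which cache settings are in effect
    ceph daemon osd.0 config get bluestore_cache_size
    ceph daemon osd.0 config get bluestore_cache_size_ssd
    ceph daemon osd.0 config get bluestore_cache_size_hdd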

Re: [ceph-users] slow requests and high i/o / read rate on bluestore osds after upgrade 12.2.8 -> 12.2.10

2019-01-18 Thread Mark Nelson
[quoted message from Stefan Priebe - Profihost AG, 15 January 2019 10:26, to ceph-users@lists.ceph.com, Cc n.fahldi...@profihost.ag] Hello list, I also tested the current upstream/luminous branch and it happens as well.

Re: [ceph-users] slow requests and high i/o / read rate on bluestore osds after upgrade 12.2.8 -> 12.2.10

2019-01-18 Thread Nils Fahldieck - Profihost AG
[truncated OSD debug log excerpt] ... r=1318472 pi=[1278145,1318472)/1 rops=4 crt=1318474'61584855 mlcod 1318356'61576253 active+recovering+degraded m=183 snaptrimq=[ec1a0~1,ec808~1] mbc={255={(2+0)=184,(3+0)=3}}] _update_calc_stats ml 3 u

Re: [ceph-users] slow requests and high i/o / read rate on bluestore osds after upgrade 12.2.8 -> 12.2.10

2019-01-17 Thread Mark Nelson
[quoting Stefan Priebe] Hello list, I also tested the current upstream/luminous branch and it happens as well. A clean install works fine. It only happens on upgraded bluestore OSDs. Greets, Stefan. On 14.01.19 at 20:35 Stefan Priebe - Profihost AG wrote:

Re: [ceph-users] slow requests and high i/o / read rate on bluestore osds after upgrade 12.2.8 -> 12.2.10

2019-01-17 Thread Stefan Priebe - Profihost AG
On 16.01.19 at 09:12 Stefan Priebe - Profihost AG wrote:
> Hi,
> no, OK, it was not. The bug is still present. It was only working because the osdmap was so far away that it had started backfill instead of recovery.

Re: [ceph-users] slow requests and high i/o / read rate on bluestore osds after upgrade 12.2.8 -> 12.2.10

2019-01-17 Thread Stefan Priebe - Profihost AG
> ... it was only working because the osdmap was so far away that it had started backfill instead of recovery.
> So it happens only in the recovery case.
> Greets,
> Stefan

Re: [ceph-users] slow requests and high i/o / read rate on bluestore osds after upgrade 12.2.8 -> 12.2.10

2019-01-17 Thread Mark Nelson
> ... solved this issue. Greets, Stefan
[quoted message from Stefan Priebe - Profihost AG, 15 January 2019 10:26]

Re: [ceph-users] slow requests and high i/o / read rate on bluestore osds after upgrade 12.2.8 -> 12.2.10

2019-01-16 Thread Stefan Priebe - Profihost AG
> > I upgraded this weekend from 12.2.8 to 12.2.10 without such issues (OSDs are idle).
> It turns out this was a kernel bug. Updating to a newer kernel has solved this issue.
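
If the kernel is suspected, the OSD metadata tracked by the monitors already records both the kernel and the Ceph release per daemon, so hosts can be compared without logging in to each one (a rough sketch; osd id 2 is a placeholder):

    # kernel and ceph version recorded for one OSD
    ceph osd metadata 2 | grep -e kernel_version -e ceph_version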

Re: [ceph-users] slow requests and high i/o / read rate on bluestore osds after upgrade 12.2.8 -> 12.2.10

2019-01-16 Thread Mark Nelson
[quoted message from Stefan Priebe - Profihost AG, 15 January 2019 10:26] Hello list, I also tested the current upstream/luminous branch and it happens as well.

Re: [ceph-users] slow requests and high i/o / read rate on bluestore osds after upgrade 12.2.8 -> 12.2.10

2019-01-16 Thread Stefan Priebe - Profihost AG
> ... solved this issue.
> Greets,
> Stefan
[quoted message from Stefan Priebe - Profihost AG, 15 January 2019 10:26]

Re: [ceph-users] slow requests and high i/o / read rate on bluestore osds after upgrade 12.2.8 -> 12.2.10

2019-01-16 Thread Stefan Priebe - Profihost AG
[quoted message from Stefan Priebe - Profihost AG, 15 January 2019 10:26, to ceph-users@lists.ceph.com]

Re: [ceph-users] slow requests and high i/o / read rate on bluestore osds after upgrade 12.2.8 -> 12.2.10

2019-01-16 Thread Stefan Priebe - Profihost AG
[quoted message from Stefan Priebe - Profihost AG, 15 January 2019 10:26, to ceph-users@lists.ceph.com]

Re: [ceph-users] slow requests and high i/o / read rate on bluestore osds after upgrade 12.2.8 -> 12.2.10

2019-01-15 Thread Mark Nelson
[quoted message from Stefan Priebe - Profihost AG, 15 January 2019 10:26] Hello list, I also tested the current upstream/luminous branch and it happens as well. A clean install works fine.

Re: [ceph-users] slow requests and high i/o / read rate on bluestore osds after upgrade 12.2.8 -> 12.2.10

2019-01-15 Thread Stefan Priebe - Profihost AG
[quoted message from Stefan Priebe - Profihost AG, 15 January 2019 10:26, to ceph-users@lists.ceph.com]

Re: [ceph-users] slow requests and high i/o / read rate on bluestore osds after upgrade 12.2.8 -> 12.2.10

2019-01-15 Thread Marc Roos
[quoting Stefan Priebe] Hello list, I also tested the current upstream/luminous branch and it happens as well. A clean install works fine. It only happens on upgraded bluestore OSDs. Greets, Stefan. On 14.01.19 at 20:35 Stefan Priebe - Profihost AG wrote:

Re: [ceph-users] slow requests and high i/o / read rate on bluestore osds after upgrade 12.2.8 -> 12.2.10

2019-01-15 Thread Stefan Priebe - Profihost AG
Hello list, I also tested the current upstream/luminous branch and it happens as well. A clean install works fine. It only happens on upgraded bluestore OSDs. Greets, Stefan. On 14.01.19 at 20:35 Stefan Priebe - Profihost AG wrote:
> While trying to upgrade a cluster from 12.2.8 to 12.2.10 I'm experiencing issues with bluestore OSDs.
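
To narrow down which OSDs are affected, the same OSD metadata can be used to list the object store type and running release for every OSD, since only upgraded bluestore OSDs seem to show the problem (a minimal sketch; the grep patterns are only illustrative):

    # per-OSD object store type and version; upgraded bluestore OSDs will show
    # osd_objectstore "bluestore" together with the 12.2.10 ceph_version
    ceph osd metadata | grep -e '"id"' -e osd_objectstore -e ceph_version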

Re: [ceph-users] slow requests and high i/o / read rate on bluestore osds after upgrade 12.2.8 -> 12.2.10

2019-01-14 Thread Mark Nelson
Hi Stefan,
Any idea if the reads are constant or bursty? One cause of heavy reads is when rocksdb is compacting and has to read SST files from disk. It's also possible you could see heavy read traffic during writes if data has to be read from SST files rather than cache. It's possible this
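
One way to check whether compaction is behind the read spikes (a rough sketch; osd.2 is a placeholder id and the exact counter names may differ between releases) is to watch the rocksdb perf counters over the admin socket:

    # dump the OSD perf counters and pick out the rocksdb/compaction entries
    ceph daemon osd.2 perf dump | grep -i -e rocksdb -e compact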

Re: [ceph-users] slow requests and high i/o / read rate on bluestore osds after upgrade 12.2.8 -> 12.2.10

2019-01-14 Thread Stefan Priebe - Profihost AG
Hi Paul,
On 14.01.19 at 21:39 Paul Emmerich wrote:
> What's the output of "ceph daemon osd.<id> status" on one of the OSDs while it's starting?
{ "cluster_fsid": "b338193d-39e0-40e9-baba-4965ef3868a3", "osd_fsid": "d95d0e3b-7441-4ab0-869c-fe0551d3bd52", "whoami": 2, "state":

Re: [ceph-users] slow requests and high i/o / read rate on bluestore osds after upgrade 12.2.8 -> 12.2.10

2019-01-14 Thread Paul Emmerich
What's the output of "ceph daemon osd.<id> status" on one of the OSDs while it's starting? Is the OSD crashing and being restarted all the time? Anything weird in the log files? Was there recovery or backfill during the upgrade? Paul
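
To gather those answers on an affected node (a minimal sketch; osd id 2, the systemd unit name and the default log path are assumptions about the deployment):

    # startup state of the OSD via its admin socket
    ceph daemon osd.2 status

    # is the daemon flapping? (systemd-managed deployments)
    systemctl status ceph-osd@2

    # anything suspicious in the OSD log (default log location)
    grep -i -e error -e 'slow request' /var/log/ceph/ceph-osd.2.log

    # cluster-wide view of recovery/backfill activity
    ceph -s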

[ceph-users] slow requests and high i/o / read rate on bluestore osds after upgrade 12.2.8 -> 12.2.10

2019-01-14 Thread Stefan Priebe - Profihost AG
Hi,
while trying to upgrade a cluster from 12.2.8 to 12.2.10 I'm experiencing issues with bluestore OSDs, so I canceled the upgrade and all bluestore OSDs are stopped now. After starting a bluestore OSD I'm seeing a lot of slow requests caused by very high read rates. Device: rrqm/s
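
The truncated "Device: rrqm/s ..." header looks like extended iostat output; a sketch of watching the slow requests and the read load side by side while an OSD starts (osd id 2 and the 1-second interval are placeholders):

    # which requests are currently flagged as slow, and on which OSDs
    ceph health detail

    # recent slow/expensive ops as seen by one affected OSD
    ceph daemon osd.2 dump_historic_ops

    # per-device extended read/write statistics (needs sysstat)
    iostat -x 1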