Hi,

No, OK, it was not. The bug is still present. It only appeared to work
because the osdmap was so far behind that the OSD started backfill
instead of recovery.

So it happens only in the recovery case.

Greets,
Stefan

Am 15.01.19 um 16:02 schrieb Stefan Priebe - Profihost AG:
> 
> Am 15.01.19 um 12:45 schrieb Marc Roos:
>>  
>> I upgraded this weekend from 12.2.8 to 12.2.10 without such issues
>> (the OSDs are idle).
> 
> 
> It turns out this was a kernel bug; updating to a newer kernel
> solved the issue.
> 
> Greets,
> Stefan
> 
> 
>> -----Original Message-----
>> From: Stefan Priebe - Profihost AG [mailto:[email protected]] 
>> Sent: 15 January 2019 10:26
>> To: [email protected]
>> Cc: [email protected]
>> Subject: Re: [ceph-users] slow requests and high i/o / read rate on 
>> bluestore osds after upgrade 12.2.8 -> 12.2.10
>>
>> Hello list,
>>
>> I also tested the current upstream/luminous branch, and it happens there as
>> well. A clean install works fine; it only happens on upgraded bluestore OSDs.
>>
>> Greets,
>> Stefan
>>
>> Am 14.01.19 um 20:35 schrieb Stefan Priebe - Profihost AG:
>>> While trying to upgrade a cluster from 12.2.8 to 12.2.10 I experienced
>>> issues with bluestore OSDs, so I cancelled the upgrade and all bluestore
>>> OSDs are stopped now.
>>>
>>> After starting a bluestore OSD I'm seeing a lot of slow requests caused
>>> by very high read rates.
>>>
>>>
>>> Device:  rrqm/s  wrqm/s    r/s   w/s     rkB/s   wkB/s avgrq-sz avgqu-sz  await r_await w_await  svctm  %util
>>> sda       45,00  187,00 767,00 39,00 482040,00 8660,00  1217,62    58,16  74,60   73,85   89,23   1,24 100,00
>>>
>>> It reads permanently at ~500 MB/s from the disk and can't service client
>>> requests. The overall client read rate is only 10.9 MiB/s.
>>>
>>> I can't reproduce this with 12.2.8. Is this a known bug / regression?
>>>
>>> Greets,
>>> Stefan
>>>
>> _______________________________________________
>> ceph-users mailing list
>> [email protected]
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
>>
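A quick back-of-the-envelope check (a sketch, using only the iostat and client figures quoted in the report above) shows how far the on-disk read rate outstrips what clients actually receive:

```python
# Figures taken from the thread: iostat reported rkB/s = 482040 for sda,
# while the cluster served only ~10.9 MiB/s of client reads.
disk_read_kb_per_s = 482040.0   # rkB/s for sda, from the quoted iostat sample
client_read_mib_per_s = 10.9    # client read rate quoted in the report

disk_read_mib_per_s = disk_read_kb_per_s / 1024   # roughly the "500MB/s" observed
amplification = disk_read_mib_per_s / client_read_mib_per_s

print(f"disk reads: {disk_read_mib_per_s:.0f} MiB/s")
print(f"read amplification vs. client traffic: ~{amplification:.0f}x")
```

That is on the order of a 40x read amplification against client traffic, which is consistent with the OSD being saturated by internal (recovery) reads rather than client I/O.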
