Re: [ceph-users] New cephfs cluster performance issues- Jewel - cachepressure, capability release, poor iostat await avg queue size

2016-10-19 Thread mykola.dvornik
Not sure if related, but I see the same issue on very different hardware/configuration. In particular, on large data transfers OSDs become slow and blocking. Iostat await on spinners can go up to 6(!) s (the journal is on an SSD). Looking closer at those spinners with blktrace suggests that
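High await values like those described above can be flagged with a quick awk filter over `iostat -x` output. A minimal sketch, using sample data in place of live iostat output; the column layout (await in field 10) follows the classic sysstat format and may differ between versions, and the 1000 ms threshold is illustrative:

```shell
# Flag devices whose average await exceeds a threshold (here 1000 ms).
# The here-doc stands in for real `iostat -x 1` output; field 10 is
# await in the classic sysstat layout (assumption -- column order
# varies between sysstat versions).
awk -v thresh=1000 'NR > 1 && $10+0 > thresh { print $1, $10 }' <<'EOF'
Device  rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util
sda     0.00   1.20   5.0 40.0 200.0 8000.0 160.0    12.5    6200.0 4.0  98.0
sdb     0.00   0.50   2.0 10.0 100.0 1000.0  90.0     0.4      12.0 1.0  10.0
EOF
```

In this sample only `sda` is reported (`sda 6200.0`), matching the multi-second await symptom; blktrace/blkparse would then be the next step for the flagged device.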

Re: [ceph-users] cephfs slow delete

2016-10-14 Thread mykola.dvornik
I was doing parallel deletes up to the point when there are >1M objects in the stry. Then the delete fails with a 'no space left' error. If one deep-scrubs the PGs containing the corresponding metadata, they turn out to be inconsistent. In the worst case one gets virtually empty folders that have

Re: [ceph-users] cephfs slow delete

2016-10-14 Thread mykola.dvornik
If you are running 10.2.3 on your cluster, then I would strongly recommend NOT deleting files in parallel, as you might hit http://tracker.ceph.com/issues/17177 -Mykola From: Heller, Chris Sent: Saturday, 15 October 2016 03:36 To: Gregory Farnum Cc: ceph-users@lists.ceph.com Subject: Re:
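The workaround implied above is to serialize deletes instead of issuing them concurrently. A minimal sketch of a sequential delete loop; the throwaway directory and file names are illustrative, not taken from the thread:

```shell
# Delete files one at a time rather than in parallel, so stray
# entries are processed serially. The temp dir and file names are
# purely illustrative.
dir=$(mktemp -d)
for i in 1 2 3; do touch "$dir/file$i"; done

# Sequential deletion: one rm per file, in order.
find "$dir" -type f -print | while read -r f; do
    rm -- "$f"
done

ls "$dir" | wc -l   # no files remain
rmdir "$dir"
```

A parallel variant (e.g. `xargs -P`) is exactly what the advice warns against on 10.2.3.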

Re: [ceph-users] CephFS: No space left on device

2016-10-06 Thread mykola.dvornik
Is there any way to repair pgs/cephfs gracefully? -Mykola From: Yan, Zheng Sent: Thursday, 6 October 2016 04:48 To: Mykola Dvornik Cc: John Spray; ceph-users Subject: Re: [ceph-users] CephFS: No space left on device On Wed, Oct 5, 2016 at 2:27 PM, Mykola Dvornik