[ceph-users] Re: Splitting PGs not happening on Nautilus 14.2.2

2019-10-30 Thread Bryan Stillwell
Responding to myself to follow up with what I found. While going over the release notes for 14.2.3/14.2.4 I found this was a known problem that has already been fixed. Upgrading the cluster to 14.2.4 fixed the issue. Bryan > On Oct 30, 2019, at 10:33 AM, Bryan Stillwell wrote: > > This
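For anyone tracking the same problem, a quick way to confirm that all daemons actually picked up the fixed release and that PG splitting is progressing again (pool name below is a placeholder):

  # Report which version each daemon is running after the upgrade
  ceph versions

  # pgp_num should catch up to pg_num once splitting resumes
  ceph osd pool get <pool-name> pg_num
  ceph osd pool get <pool-name> pgp_num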

[ceph-users] Re: rgw recovering shards

2019-10-30 Thread Frank R
So, I ended up checking all datalog shards with: radosgw-admin data sync status --shard-id=XY --source-zone=us-east-1 and found one with a few hundred references to a bucket that had been deleted. I ended up shutting down HAProxy on both ends and running radosgw-admin data sync init This
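For reference, the per-shard check described above can be scripted; a minimal sketch, assuming the default of 128 datalog shards (adjust the range and source zone to your setup):

  # Print sync status for every datalog shard and look for ones that
  # still reference the deleted bucket
  for shard in $(seq 0 127); do
    echo "=== shard $shard ==="
    radosgw-admin data sync status --shard-id=$shard --source-zone=us-east-1
  done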

[ceph-users] Re: Dirlisting hangs with cephfs

2019-10-30 Thread Kári Bertilsson
Not sure if this is related to the dirlisting issue since the deep-scrubs have always been way behind schedule. But let's see if clearing this warning has any effect. It seems I can only deep-scrub 5 PGs at a time. How can I increase this? On Wed, Oct 30, 2019 at 6:53 AM Lars Täuber
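A sketch of one way to raise that limit, assuming the cap comes from the per-OSD scrub concurrency setting osd_max_scrubs (default 1) rather than from the schedule itself:

  # Allow more simultaneous scrubs per OSD (runtime injection)
  ceph tell osd.* injectargs '--osd_max_scrubs 3'

  # Kick off a deep-scrub on a specific PG by hand
  ceph pg deep-scrub <pg-id>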

[ceph-users] Re: Lower mem radosgw config?

2019-10-30 Thread Thomas Bennett
Hey Dan, We've got three rgws with the following configuration: - We're running 12.2.12 with civetweb. - 3 RGWs with HAProxy round robin - 32 GiB RAM (handles = 4, thread pool = 512) - We run mon+mgr+rgw on the same hardware. Looking at our grafana dashboards, I don't see us
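For context, a hedged ceph.conf sketch of the settings referenced above (section name and port are illustrative; option names are the Luminous-era ones):

  [client.rgw.gateway1]
    # civetweb frontend with the thread count mentioned above
    rgw frontends = civetweb port=7480 num_threads=512
    # RGW worker thread pool and number of RADOS handles
    rgw thread pool size = 512
    rgw num rados handles = 4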

[ceph-users] ceph: build_snap_context 100020859dd ffff911cca33b800 fail -12

2019-10-30 Thread Marc Roos
I am getting these since the Nautilus upgrade: [Wed Oct 30 01:32:09 2019] ceph: build_snap_context 100020859dd 911cca33b800 fail -12 [Wed Oct 30 01:32:09 2019] ceph: build_snap_context 100020859d2 911d3eef5a00 fail -12 [Wed Oct 30 01:32:09 2019] ceph: build_snap_context 100020859d9
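A short note on the error code: -12 is -ENOMEM, so the kernel CephFS client is failing a memory allocation while building the snapshot context. To confirm the errno mapping on a Linux box:

  # ENOMEM is defined as 12
  grep -w ENOMEM /usr/include/asm-generic/errno-base.h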

[ceph-users] Re: Correct Migration Workflow Replicated -> Erasure Code

2019-10-30 Thread Paul Emmerich
We've solved this off-list (because I already got access to the cluster). For the list: copying at the rados level is possible, but requires shutting down radosgw to get a consistent copy. This wasn't feasible here due to the size and performance requirements. We've instead added a second zone where the placement
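For readers following the thread, a hedged sketch of the second-zone approach: create an EC data pool and point the new zone's placement target at it, then let multisite sync copy the data over. Zone, pool, and profile names below are illustrative:

  # EC data pool for the new zone (pg counts and profile are examples)
  ceph osd pool create us-east-2.rgw.buckets.data 256 256 erasure myprofile

  # Point the zone's default placement target at the EC pool
  radosgw-admin zone placement modify --rgw-zone=us-east-2 \
      --placement-id=default-placement \
      --data-pool=us-east-2.rgw.buckets.data
  radosgw-admin period update --commit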

[ceph-users] Re: rgw recovering shards

2019-10-30 Thread Konstantin Shalygin
On 10/29/19 10:56 PM, Frank R wrote: oldest incremental change not applied: 2019-10-22 00:24:09.0.720448s Maybe the zone period is not the same on both sides? k
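One rough way to check that: compare the current period on both sides and pull from the master if they differ (URL and credentials below are placeholders):

  # Run on each side and compare the period id and epoch
  radosgw-admin period get

  # If they differ, pull the latest period from the master zone
  radosgw-admin period pull --url=http://master-rgw:8080 \
      --access-key=<key> --secret=<secret>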