Responding to myself to follow up with what I found.
While going over the release notes for 14.2.3/14.2.4, I found this was a known
problem that had already been fixed. Upgrading the cluster to 14.2.4 resolved
the issue.
Bryan
> On Oct 30, 2019, at 10:33 AM, Bryan Stillwell wrote:
>
> This
So, I ended up checking all datalog shards with:
radosgw-admin data sync status --shard-id=XY --source-zone=us-east-1
and found one with a few hundred references to a bucket that had been
deleted.
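For anyone hitting the same thing, checking every shard one by one can be scripted. A rough sketch (128 is the default rgw_data_log_num_shards and is an assumption for your cluster, as is the source zone name):

```shell
# Print the sync status of every datalog shard so stuck ones stand out.
# 128 shards and the zone name "us-east-1" are assumptions here.
for shard in $(seq 0 127); do
    echo "=== shard $shard ==="
    radosgw-admin data sync status --shard-id="$shard" --source-zone=us-east-1
done
```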
I ended up shutting down HAProxy on both ends and running
radosgw-admin data sync init
This
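The sequence was roughly the following (a sketch only; the service unit names are assumptions for our setup, and `data sync init` schedules a full resync, so expect extra load afterwards):

```shell
# Stop client traffic on both ends before re-initializing sync.
systemctl stop haproxy                 # run on both sites
radosgw-admin data sync init --source-zone=us-east-1
# Restart the gateways so the full sync actually starts, then
# bring the load balancer back.
systemctl restart ceph-radosgw.target
systemctl start haproxy
```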
Not sure if this is related to the dirlisting issue, since the deep-scrubs
have always been way behind schedule.
But let's see if it has any effect on clearing this warning. It seems I can
only deep-scrub 5 PGs at a time. How can I increase this?
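For reference, the concurrency is bounded per OSD by osd_max_scrubs (default 1), so the cluster-wide number of scrubbing PGs roughly tracks the number of OSDs that currently have a scrub slot free. Raising it (the value 2 below is only an example) lets more PGs deep-scrub in parallel, at the cost of extra I/O:

```shell
# Nautilus and later: use the central config database.
ceph config set osd osd_max_scrubs 2
# Older releases: inject the setting into running OSDs instead.
ceph tell 'osd.*' injectargs '--osd_max_scrubs=2'
```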
On Wed, Oct 30, 2019 at 6:53 AM Lars Täuber wrote:
Hey Dan,
We've got three rgws with the following configuration:
- We're running 12.2.12 with civetweb.
- 3 RGW's with haproxy round robin
- 32 GiB RAM (handles = 4, thread pool = 512)
- We run mon+mgr+rgw on the same hardware.
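For context, a civetweb frontend with those numbers would look roughly like this in ceph.conf (the section name and port are assumptions; rgw num rados handles is the Luminous-era setting):

```
[client.rgw.gateway1]
rgw frontends = civetweb port=7480 num_threads=512
rgw num rados handles = 4
rgw thread pool size = 512
```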
Looking at our grafana dashboards, I don't see us
I am getting these since the Nautilus upgrade:
[Wed Oct 30 01:32:09 2019] ceph: build_snap_context 100020859dd
911cca33b800 fail -12
[Wed Oct 30 01:32:09 2019] ceph: build_snap_context 100020859d2
911d3eef5a00 fail -12
[Wed Oct 30 01:32:09 2019] ceph: build_snap_context 100020859d9
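The "fail -12" is a negative errno, i.e. ENOMEM (the kernel client failed to allocate memory while building the snap context). A quick way to decode such codes:

```shell
# Decode a negative kernel errno; -12 is ENOMEM ("Cannot allocate memory").
python3 -c 'import errno, os; print(errno.errorcode[12], "-", os.strerror(12))'
```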
We've solved this off-list (because I already got access to the cluster)
For the list:
Copying at the RADOS level is possible, but it requires shutting down radosgw
to get a consistent copy. That wasn't feasible here due to the size and
performance requirements.
We've instead added a second zone where the placement
On 10/29/19 10:56 PM, Frank R wrote:
oldest incremental change not applied: 2019-10-22 00:24:09.0.720448s
Maybe the zone period is not the same on both sides?
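You can compare the current period on each side; the period id and epoch should match. A sketch (the master endpoint URL and credentials are placeholders):

```shell
# Run on each zone and compare the output.
radosgw-admin period get | grep -E '"id"|"epoch"'
# If the secondary is behind, pull the latest period from the master:
radosgw-admin period pull --url=http://master-rgw:8080 \
    --access-key=ACCESS_KEY --secret=SECRET_KEY
```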
k
___
ceph-users mailing list -- ceph-users@ceph.io