Re: [ceph-users] Upgrading and lost OSDs

2019-11-22 Thread Brent Kennedy
I just ran into this today with a server we rebooted. The server was upgraded to Nautilus 14.2.2 a few months ago; it was originally installed as Jewel, then upgraded to Luminous and then to Nautilus. I have a whole server where all 12 OSDs have empty folders. I recreated the keyring file
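The "empty folders" symptom can at least be enumerated before touching keyrings. As a hypothetical helper (the path and the idea of scanning for unactivated OSD data directories are my assumptions, not something the poster ran), a small sketch that lists which OSD directories came up empty after the reboot:

```python
import os

def find_empty_osd_dirs(osd_root="/var/lib/ceph/osd"):
    """Return OSD data directories under osd_root that are empty,
    i.e. likely not activated/mounted after the reboot."""
    empty = []
    if not os.path.isdir(osd_root):
        return empty
    for name in sorted(os.listdir(osd_root)):
        path = os.path.join(osd_root, name)
        if os.path.isdir(path) and not os.listdir(path):
            empty.append(path)
    return empty
```

Running this right after boot distinguishes "all 12 directories are empty" (activation never ran) from "some directories have contents but the daemons still fail" (a keyring or metadata problem).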

Re: [ceph-users] Dynamic bucket index resharding bug? - rgw.none showing unreal number of objects

2019-11-22 Thread Paul Emmerich
On Fri, Nov 22, 2019 at 9:09 PM J. Eric Ivancich wrote:
> 2^64 (2 to the 64th power) is 18446744073709551616, which is 13 greater
> than your value of 18446744073709551603. So this likely represents the
> value of -13, but displayed in an unsigned format.

I've seen this with values between -2
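Eric's arithmetic can be checked directly: a 64-bit unsigned counter cannot hold -13, so subtracting past zero wraps modulo 2^64, and the stat prints as a value just under 18446744073709551616. A minimal sketch of the two directions of that reinterpretation (helper names are mine):

```python
def as_unsigned_64(n):
    """Interpret a (possibly negative) integer as an unsigned 64-bit value,
    the way an underflowed counter is displayed."""
    return n % 2**64

def as_signed_64(n):
    """Recover the signed 64-bit value behind a huge unsigned counter."""
    return n - 2**64 if n >= 2**63 else n
```

So `as_unsigned_64(-13)` gives exactly the 18446744073709551603 from the bucket stats, and `as_signed_64` turns such a reading back into the small negative number the resharder is actually seeing.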

Re: [ceph-users] Dynamic bucket index resharding bug? - rgw.none showing unreal number of objects

2019-11-22 Thread J. Eric Ivancich
On 11/22/19 11:50 AM, David Monschein wrote:
> Hi all. Running an Object Storage cluster with Ceph Nautilus 14.2.4.
>
> We are running into what appears to be a serious bug that is affecting
> our fairly new object storage cluster. While investigating some
> performance issues -- seeing

Re: [ceph-users] RBD Mirror DR Testing

2019-11-22 Thread Jason Dillaman
On Fri, Nov 22, 2019 at 11:16 AM Vikas Rana wrote:
> Hi All,
>
> We have an XFS filesystem on the prod side, and when we try to mount the
> DR copy, we get a superblock error:
>
> root@:~# rbd-nbd map nfs/dir
> /dev/nbd0
> root@:~# mount /dev/nbd0 /mnt
> mount: /dev/nbd0: can't read superblock

Re: [ceph-users] Dynamic bucket index resharding bug? - rgw.none showing unreal number of objects

2019-11-22 Thread Paul Emmerich
I originally reported the linked issue. I've seen this problem with negative stats on several S3 setups, but I could never figure out how to reproduce it. I haven't seen the resharder act on these stats, though; that seems like a particularly bad case :( Paul -- Paul Emmerich Looking for

[ceph-users] Dynamic bucket index resharding bug? - rgw.none showing unreal number of objects

2019-11-22 Thread David Monschein
Hi all. Running an Object Storage cluster with Ceph Nautilus 14.2.4. We are running into what appears to be a serious bug that is affecting our fairly new object storage cluster. While investigating some performance issues -- seeing abnormally high IOPS, extremely slow bucket stat listings (over

Re: [ceph-users] RBD Mirror DR Testing

2019-11-22 Thread Vikas Rana
Hi All,

We have an XFS filesystem on the prod side, and when we try to mount the DR copy, we get a superblock error:

root@:~# rbd-nbd map nfs/dir
/dev/nbd0
root@:~# mount /dev/nbd0 /mnt
mount: /dev/nbd0: can't read superblock

Any suggestions to test the DR copy any other way, or if I'm doing
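One quick sanity check before digging into rbd-mirror state (demotion/promotion, replay status, and so on, none of which is shown in this preview): does the mapped device even begin with an XFS superblock? XFS stores the magic bytes "XFSB" at offset 0, so a "can't read superblock" often means the non-primary copy simply doesn't contain readable filesystem data yet. A minimal sketch (the function name is mine; point it at whatever device rbd-nbd gave you, e.g. /dev/nbd0):

```python
def looks_like_xfs(device_path):
    """Return True if the first four bytes at offset 0 match the
    XFS superblock magic ("XFSB", i.e. 0x58465342)."""
    with open(device_path, "rb") as dev:
        return dev.read(4) == b"XFSB"
```

If this returns False on the DR side while returning True on prod, the problem is with the replicated image itself, not with XFS mount options.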

Re: [ceph-users] dashboard hangs

2019-11-22 Thread Oliver Freyermuth
Hi,

On 2019-11-20 15:55, thoralf schulze wrote:
> hi,
> we were able to track this down to the auto balancer: disabling the auto
> balancer and cleaning out old (and probably not very meaningful)
> upmap-entries via ceph osd rm-pg-upmap-items brought back stable mgr
> daemons and a usable dashboard.
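The cleanup thoralf describes can be scripted rather than done PG by PG: `ceph osd dump --format json` lists the current upmap exceptions, and each one can be dropped with `ceph osd rm-pg-upmap-items <pgid>`. A minimal sketch that builds the command list from the dump (I'm assuming the JSON field is named `pg_upmap_items` with a `pgid` per entry, as in the Nautilus-era output; verify against your own dump before running anything):

```python
import json

def rm_upmap_commands(osd_dump_json):
    """From `ceph osd dump --format json` output, build one
    `ceph osd rm-pg-upmap-items <pgid>` command per upmap entry."""
    dump = json.loads(osd_dump_json)
    return ["ceph osd rm-pg-upmap-items %s" % item["pgid"]
            for item in dump.get("pg_upmap_items", [])]
```

Review the generated list before executing it; removing upmap entries will let data move back to its CRUSH-computed placement.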