[ceph-users] Re: The always welcomed large omap

2021-05-31 Thread Szabo, Istvan (Agoda)
So the bucket has been deleted on the master zone, and it has been removed from the other zones as well. On the master zone the large omap warning disappeared after a deep scrub, but on the secondary zone it's still there. There were 3 warnings at the beginning; after I scrubbed the affected OSDs (not just the PGs) I have 6.
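For reference, a minimal sketch of the commands this kind of check typically involves (the PG ID and OSD number below are placeholders, not values taken from the thread):

    # List the details of the current warnings, including large omap objects
    ceph health detail

    # Deep-scrub a single PG (placeholder PG ID)
    ceph pg deep-scrub 7.1a

    # Deep-scrub all PGs on a given OSD (placeholder OSD id)
    ceph osd deep-scrub 12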

[ceph-users] Re: Fwd: Re: Ceph osd will not start.

2021-05-31 Thread Marco Pizzolo
David, What I can confirm is that if this fix is already in 16.2.4 and 15.2.13, then there's another issue resulting in the same situation, as it continues to happen in the latest available images. We are going to try and see if we can install a 15.2.x release and subsequently upgrade using a

[ceph-users] Re: Fwd: Re: Ceph osd will not start.

2021-05-31 Thread David Orman
Does the image we built fix the problem for you? That's how we worked around it. Unfortunately, it even bites you with fewer OSDs if you have DB/WAL on other devices; we have 24 rotational drives/OSDs, but split the DB/WAL onto multiple NVMes. We're hoping the remoto fix (since it's merged upstream and

[ceph-users] Re: [Suspicious newsletter] Re: The always welcomed large omap

2021-05-31 Thread Szabo, Istvan (Agoda)
Yeah, I found a bucket whose deletion is currently in progress; I will preshard it.

[ceph-users] Re: Fwd: Re: Ceph osd will not start.

2021-05-31 Thread Marco Pizzolo
Unfortunately Ceph 16.2.4 is still not working for us. We continue to have issues where the 26th OSD is not fully created and started. We've confirmed that we do get the flock as described in: https://tracker.ceph.com/issues/50526 - *I have verified in our labs a way to reproduce easily

[ceph-users] Re: The always welcomed large omap

2021-05-31 Thread Matt Vandermeulen
All the index data will be in OMAP, which you can see a per-OSD listing of with `ceph osd df tree`. Do you have large buckets (many, many objects in a single bucket) with few shards? You may have to reshard one (or some) of your buckets. It'll take some reading if you're using multisite, in order to
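A hedged sketch of the commands being described here (the bucket name and shard count are placeholders; on a multisite setup, manual resharding needs extra care, as noted above):

    # Per-OSD omap usage
    ceph osd df tree

    # Buckets that exceed the per-shard object limit
    radosgw-admin bucket limit check

    # Object count and current shard count for one bucket (placeholder name)
    radosgw-admin bucket stats --bucket=big-bucket

    # Queue and run a manual reshard (placeholder shard count)
    radosgw-admin reshard add --bucket=big-bucket --num-shards=101
    radosgw-admin reshard process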

[ceph-users] Re: [Suspicious newsletter] Bucket creation on RGW Multisite env.

2021-05-31 Thread Szabo, Istvan (Agoda)
Yeah, this would be interesting for me as well.

[ceph-users] Re: [Suspicious newsletter] Bucket creation on RGW Multisite env.

2021-05-31 Thread Szabo, Istvan (Agoda)
The bucket is created, but if no sync rule is set, the data will not be synced across.
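A small, hedged example of how one might verify whether data for a given bucket is actually being replicated (the bucket name is a placeholder):

    # Per-bucket sync status between zones (placeholder bucket name)
    radosgw-admin bucket sync status --bucket=mybucket

    # Overall multisite sync status for the current zone
    radosgw-admin sync status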

[ceph-users] Re: [Suspicious newsletter] Bucket creation on RGW Multisite env.

2021-05-31 Thread Soumya Koduri
On 5/31/21 3:02 PM, mhnx wrote: Yes you're right. I have a Global sync rule in the zonegroup: "sync_from_all": "true", "sync_from": [], "redirect_zone": "" If I need to stop/start the sync after creation I use the command: radosgw-admin bucket sync

[ceph-users] The always welcomed large omap

2021-05-31 Thread Szabo, Istvan (Agoda)
Hi, Is there any way to clean up large omap objects in the index pool? A PG deep_scrub didn't help. I know how to clean them up in the log pool, but I have no idea how to in the index pool :/ It's an Octopus deployment, 15.2.10. Thank you

[ceph-users] Re: [Suspicious newsletter] Bucket creation on RGW Multisite env.

2021-05-31 Thread mhnx
Yes you're right. I have a Global sync rule in the zonegroup: "sync_from_all": "true", "sync_from": [], "redirect_zone": "" If I need to stop/start the sync after creation I use the command: radosgw-admin bucket sync enable/disable --bucket=$newbucket I
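Spelled out, the workflow quoted above would look roughly like this (a sketch based on the command named in the message, not a verbatim excerpt):

    # Stop replicating a freshly created bucket
    radosgw-admin bucket sync disable --bucket=$newbucket

    # Re-enable replication later, once the bucket should start syncing
    radosgw-admin bucket sync enable --bucket=$newbucket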

[ceph-users] Bucket creation on RGW Multisite env.

2021-05-31 Thread mhnx
Hello. I have a multisite RGW environment. When I create a new bucket, it is immediately created on both the master and the secondary. If I don't want to sync a bucket, I have to stop the sync after creation. Is there any global option like "Do not sync by default, only start syncing when I want to"?

[ceph-users] Re: Nautilus CentOS-7 rpm dependencies

2021-05-31 Thread Wolfgang Lendl
Hi, CentOS 7 is only partially supported for Octopus: "Note that the dashboard, prometheus, and restful manager modules will not work on the CentOS 7 build due to Python 3 module dependencies that are missing in CentOS 7." https://docs.ceph.com/en/latest/releases/octopus/ cheers wolfgang

[ceph-users] Re: Nautilus CentOS-7 rpm dependencies

2021-05-31 Thread Fabrice Bacchella
I had a similar problem with Pacific when using the build from CentOS; I switched to the rpms directly from Ceph and it went fine. > On 31 May 2021 at 10:29, Andreas Haupt wrote: > > Dear all, > > ceph-mgr-dashboard-15.2.13-0.el7.noarch contains three rpm dependencies > that cannot be resolved
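A minimal sketch of what "the rpms directly from Ceph" could mean in practice, assuming the upstream Octopus el7 repository on download.ceph.com (the repo ID and exact paths are illustrative, not taken from the thread):

    # Hypothetical repo file pointing at the upstream Octopus packages for el7
    cat > /etc/yum.repos.d/ceph.repo <<'EOF'
    [ceph-noarch]
    name=Ceph noarch packages
    baseurl=https://download.ceph.com/rpm-octopus/el7/noarch
    enabled=1
    gpgcheck=1
    gpgkey=https://download.ceph.com/keys/release.asc
    EOF

    yum clean all
    yum install ceph-mgr-dashboard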

[ceph-users] Nautilus CentOS-7 rpm dependencies

2021-05-31 Thread Andreas Haupt
Dear all, ceph-mgr-dashboard-15.2.13-0.el7.noarch contains three rpm dependencies that cannot be resolved here (not part of CentOS & EPEL 7): python3-cherrypy, python3-routes, python3-jwt. Does anybody know where they are expected to come from? Thanks, Andreas