[ceph-users] Re: rbd online sparsify image

2023-01-29 Thread Jiatong Shen
Hello Ilya, Thank you very much for the clarification! Another question: for some historical reasons, there are still some Luminous clients around. Is it dangerous to sparsify an image that is still being used by a Luminous client? Thank you very much for informing me that N/O are both r
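
A minimal sketch, with placeholder pool/image names (not taken from the thread), of how one might check which clients currently hold the image open and what release the connected clients report before sparsifying:

    # List current watchers of the image (clients that have it open)
    rbd status rbd/myimage

    # Show the feature/release breakdown of clients connected to the cluster
    ceph features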

[ceph-users] Re: Debian update to 16.2.11-1~bpo11+1 failing

2023-01-29 Thread maebi
This problem has been fixed by the Ceph team in the meantime; Pacific upgrades and installations on Debian are now working as expected!

[ceph-users] Real memory usage of the osd(s)

2023-01-29 Thread Szabo, Istvan (Agoda)
Hello, If buffered_io is enabled, is there a way to know exactly how much physical memory each OSD is really using? What I've found is dump_mempools, whose last entries are the following, but are these bytes the real physical memory usage? "total": { "items": 60005205,
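
A rough sketch of ways to compare the OSD's own accounting with what the kernel reports; osd.0 is a placeholder ID and the commands assume access to the cluster and the OSD host:

    # Mempool accounting as seen by the OSD itself
    ceph tell osd.0 dump_mempools

    # tcmalloc heap statistics, closer to what the allocator actually holds
    ceph tell osd.0 heap stats

    # Resident set size of the OSD process on the host running it
    ps -o rss,cmd -C ceph-osd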

[ceph-users] Re: Very slow snaptrim operations blocking client I/O

2023-01-29 Thread Victor Rodriguez
Looks like this is going to take a few days. I hope to manage the performance available to VMs with osd_snap_trim_sleep_ssd. I'm wondering: after that long snaptrim process you went through, was your cluster stable again, and did snapshots/snaptrims work properly? On 1/29/23 16:01, M
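
For reference, a small sketch of throttling snaptrim at runtime; the value of 5 seconds and the osd.0 ID are placeholders, not recommendations from the thread:

    # Slow down snap trimming on SSD-backed OSDs cluster-wide
    ceph config set osd osd_snap_trim_sleep_ssd 5

    # Verify the value a given OSD is actually using
    ceph config show osd.0 osd_snap_trim_sleep_ssd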

[ceph-users] Re: Replacing OSD with containerized deployment

2023-01-29 Thread David Orman
What does "ceph orch osd rm status" show before you try the zap? Is your cluster still backfilling to the other OSDs for the PGs that were on the failed disk? David On Fri, Jan 27, 2023, at 03:25, mailing-lists wrote: > Dear Ceph-Users, > > i am struggling to replace a disk. My ceph-cluster is

[ceph-users] Re: All pgs unknown

2023-01-29 Thread Josh Baergen
This often indicates that something is up with your mgr process. Based on ceph status, it looks like both the mgr and mon had recently restarted. Is that expected? Josh On Sun, Jan 29, 2023 at 3:36 AM Daniel Brunner wrote: > > Hi, > > my ceph cluster started to show HEALTH_WARN, there are no hea
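
A minimal sketch of how one might check the mgr in this situation; these are generic commands, not steps given in the thread:

    # Which mgr is active, and are standbys available?
    ceph mgr stat
    ceph orch ps --daemon-type mgr

    # If the active mgr appears wedged, fail over to a standby and watch whether PG state returns
    ceph mgr fail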

[ceph-users] Re: Very slow snaptrim operations blocking client I/O

2023-01-29 Thread Matt Vandermeulen
I should have explicitly stated that during the recovery, it was still quite bumpy for customers. Some snaptrims were very quick, some took what felt like a really long time. This was however a cluster with a very large number of volumes and a long, long history of snapshots. I'm not sure wh

[ceph-users] Re: rbd online sparsify image

2023-01-29 Thread Ilya Dryomov
On Sun, Jan 29, 2023 at 11:29 AM Jiatong Shen wrote: > > Hello community experts, > >I would like to know the status of rbd image sparsify. From the website, > it should be added at Nautilus ( > https://docs.ceph.com/en/latest/releases/nautilus/ from pr (26226 >
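
For reference, a minimal invocation of the online sparsify discussed in this thread; the pool and image names are placeholders:

    # Reclaim unused (zeroed) extents of an image in place
    rbd sparsify rbd/myimage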

[ceph-users] rbd-mirror replication speed is very slow - but initial replication is fast

2023-01-29 Thread ankit raikwar
Hello Team, please help me. I deployed two Ceph clusters, each with a 6-node configuration and almost 800 TB of capacity, and configured them in a DC-DR setup for data high availability. I enabled RGW and RBD block device mirroring for replication of the data. We have a 10 Gbps fiber replication
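
A small sketch of how one might inspect replication progress while investigating the slowness; the pool and image names are placeholders:

    # Overall replication state and per-image progress for a mirrored pool
    rbd mirror pool status rbd --verbose

    # Detailed state of a single image, including its replay/sync status
    rbd mirror image status rbd/myimage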

[ceph-users] All pgs unknown

2023-01-29 Thread Daniel Brunner
Hi, my ceph cluster started to show HEALTH_WARN, there are no healthy pgs left, all are unknown, but it seems my cephfs is still readable. How can I investigate this further? $ sudo ceph -s cluster: id: ddb7ebd8-65b5-11ed-84d7-22aca0408523 health: HEALTH_WARN failed to

[ceph-users] Replacing OSD with containerized deployment

2023-01-29 Thread Ken D
Dear Ceph-Users, I am struggling to replace a disk. My ceph-cluster is not replacing the old OSD even though I did: ceph orch osd rm 232 --replace The OSD 232 is still shown in the osd list, but the new hdd will be placed as a new OSD. This wouldn't bother me much, if the OSD was also placed on t
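
A brief sketch of how the --replace flow can be verified; the commands are generic and not taken from the thread:

    # An OSD scheduled with --replace should remain in the CRUSH tree marked
    # "destroyed" so its ID can be reused by the replacement disk
    ceph osd tree | grep destroyed

    # Check whether the removal is still pending or draining
    ceph orch osd rm status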

[ceph-users] Replacing OSD with containerized deployment

2023-01-29 Thread mailing-lists
Dear Ceph-Users, I am struggling to replace a disk. My ceph-cluster is not replacing the old OSD even though I did: ceph orch osd rm 232 --replace The OSD 232 is still shown in the osd list, but the new hdd will be placed as a new OSD. This wouldn't bother me much, if the OSD was also placed o

[ceph-users] rbd online sparsify image

2023-01-29 Thread Jiatong Shen
Hello community experts, I would like to know the status of rbd image sparsify. From the website, it should have been added in Nautilus (https://docs.ceph.com/en/latest/releases/nautilus/, from PR 26226) but on mimic it is related again https://docs.ceph