[ceph-users] Re: Is it a bug that OSD crashed when it's full?

2022-10-31 Thread Tony Liu
Hi Zizon, I know I ran out of space. I thought that the full ratio would prevent me from getting here. I tried a few of the ceph-*-tool utilities; they crash the same way. I guess they need RocksDB to start? Any recommendations on how I can restore it, copy the data out, or copy the volume to another, bigger disk? Thanks!
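
For what it's worth, one commonly suggested way to get a full BlueStore OSD past this is to grow the underlying device rather than copy objects out. A minimal sketch, assuming an LVM-backed OSD; the VG/LV names and the data path are placeholders, not values from this cluster:
# vgextend <osd-vg> /dev/<new-bigger-disk>          (add a larger disk to the OSD's volume group)
# lvextend -l +100%FREE <osd-vg>/<osd-block-lv>     (grow the OSD's block LV into the new space)
# ceph-bluestore-tool bluefs-bdev-expand --path <osd-data-dir>   (let BlueStore pick up the larger device before starting the OSD again)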

[ceph-users] Re: Is it a bug that OSD crashed when it's full?

2022-10-31 Thread Tony Liu
Hi Steven, Thanks for your reply! I tried the list op and it crashed, with what looks like the same backtrace as the OSD. # ceph-objectstore-tool --data-path /var/lib/ceph/fa771070-a975-11ec-86c7-e4434be9cb2e/osd.16 --op list /home/jenkins-build/build/workspace/ceph-build/ARCH/x8
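
For reference, the usual export/import flow with this tool looks roughly like the following; it assumes the tool can actually open the store (which is exactly what is failing here), and the PG id and file paths are placeholders:
# ceph-objectstore-tool --data-path <osd-data-dir> --op list-pgs                                    (list the PGs held by this OSD)
# ceph-objectstore-tool --data-path <osd-data-dir> --pgid <pgid> --op export --file /other/disk/<pgid>.export   (dump one PG to a file on a disk with free space)
# ceph-objectstore-tool --data-path <dest-osd-data-dir> --op import --file /other/disk/<pgid>.export            (run against the destination OSD, with that OSD stopped)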

[ceph-users] Is it a bug that OSD crashed when it's full?

2022-10-31 Thread Tony Liu
Hi, Based on the doc, Ceph prevents you from writing to a full OSD so that you don’t lose data. In my case, with v16.2.10, the OSD crashed when it got full. Is this expected, or is it a bug? I'd expect a write failure instead of an OSD crash. It keeps crashing when I try to bring it up. Is there any way to bring
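
The thresholds the documentation refers to can be inspected, and temporarily raised to regain a little admin headroom, roughly like this (a sketch; 0.97 is just an example value):
# ceph osd dump | grep ratio        (shows full_ratio, backfillfull_ratio and nearfull_ratio; full_ratio defaults to 0.95)
# ceph osd set-full-ratio 0.97      (temporarily raise the full threshold; lower it back once space has been freed)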

[ceph-users] Re: 16.2.11 branch

2022-10-31 Thread Gregory Farnum
On Fri, Oct 28, 2022 at 8:51 AM Laura Flores wrote: > Hi Christian, > > There also is https://tracker.ceph.com/versions/656 which seems to be tracking the open issues tagged for this particular point release. > Yes, thank you for providing the link. > If you don't mind me asking

[ceph-users] Re: [**SPAM**] Re: cephadm node-exporter extra_container_args for textfile_collector

2022-10-31 Thread Lee Carney
Much appreciated. From: Adam King Sent: 28 October 2022 19:25 To: Lee Carney Cc: Wyll Ingersoll; ceph-users@ceph.io Subject: [**SPAM**] [ceph-users] Re: cephadm node-exporter extra_container_args for textfile_collector We had actually considered adding an `ext
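
The kind of spec being discussed would look roughly like this, assuming extra_container_args is honored for the node-exporter service in the cephadm version in use; the mount path is a placeholder, and node_exporter still has to be pointed at it via --collector.textfile.directory:
# ceph orch apply -i - <<EOF
service_type: node-exporter
service_name: node-exporter
placement:
  host_pattern: '*'
extra_container_args:
  - "-v"
  - "/var/lib/node_exporter/textfile_collector:/var/lib/node_exporter/textfile_collector:Z"
EOF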

[ceph-users] Re: 750GB SSD ceph-osd using 42GB RAM

2022-10-31 Thread Stefan Kooman
On 10/30/22 07:19, Nico Schottelius wrote: Good morning ceph-users, we currently have one OSD, based on a SATA SSD (750 GB raw), that consumes around 42 GB of RAM. The cluster status is HEALTH_OK, with no rebalancing or PG changes. I can go ahead and just kill it, but I was wondering if there is an easy
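
A few commands that are often used to see where an OSD's memory is going (a sketch; run them where the OSD's admin socket is reachable, e.g. inside the container for cephadm deployments):
# ceph config get osd.<id> osd_memory_target     (the size the OSD's priority cache tries to stay under; default is 4 GiB)
# ceph daemon osd.<id> dump_mempools             (per-subsystem memory accounting inside the OSD)
# ceph tell osd.<id> heap stats                  (tcmalloc heap statistics, useful to spot memory that is freed but not released)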