[ceph-users] ceph-volume lvm zap destroys up+in OSD

2022-11-22 Thread Frank Schilder
Hi all, on our octopus-latest cluster I accidentally destroyed an up+in OSD with the command ceph-volume lvm zap /dev/DEV. It executed the dd command and then failed at the lvm commands with "device busy". Problem number one is that the OSD continued working fine. Hence, there is no
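For context, a minimal pre-flight check before zapping a device might look like the sketch below; the device path and OSD id are placeholders, not taken from the original post:
```
# Check whether the device still backs a deployed OSD
ceph-volume lvm list /dev/DEV

# Confirm the corresponding OSD is stopped and out of the cluster
ceph osd tree | grep osd.ID
systemctl status ceph-osd@ID

# Only then wipe the LVM metadata and data on the device
ceph-volume lvm zap --destroy /dev/DEV
```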

[ceph-users] Re: CephFS performance

2022-11-22 Thread Gregory Farnum
In addition to not having resiliency by default, my recollection is that BeeGFS also doesn't guarantee metadata durability in the event of a crash or hardware failure like CephFS does. There's not really a way for us to catch up to their "in-memory metadata IOPS" with our "on-disk metadata IOPS".

[ceph-users] osd encryption is failing due to device-mapper

2022-11-22 Thread Ali Akil
Hello folks, I am deploying a Quincy Ceph cluster 17.2.0 on an OpenStack VM with Ubuntu 22.04 minimal using cephadm. I was able to bootstrap the cluster and add the hosts and the mons, but when I apply the OSD spec with the encryption option enabled, it fails ```     service_type: osd    
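For reference, a complete OSD spec with encryption enabled, applied via the orchestrator, might look roughly like the sketch below; the service id, placement and device filter are assumptions, not the poster's actual spec:
```
cat > osd-encrypted.yml <<'EOF'
service_type: osd
service_id: encrypted_osds
placement:
  host_pattern: '*'
spec:
  data_devices:
    all: true
  encrypted: true
EOF
ceph orch apply -i osd-encrypted.yml
```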

[ceph-users] Re: CephFS performance

2022-11-22 Thread David C
My understanding is BeeGFS doesn't offer data redundancy by default; you have to configure mirroring. You've not said how your Ceph cluster is configured, but my guess is you have the recommended 3x replication - I wouldn't be surprised if BeeGFS was much faster than Ceph in this case. I'd be
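To verify the replication factor in question, the pool settings can be queried like this (the pool name is an example):
```
# Number of data copies kept per object
ceph osd pool get cephfs_data size

# Minimum number of copies that must be available for I/O
ceph osd pool get cephfs_data min_size
```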

[ceph-users] Re: cephadm found duplicate OSD, how to resolve?

2022-11-22 Thread Stefan Kooman
On 11/22/22 12:43, Eugen Block wrote: Hi, there was a similar thread recently (https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/message/WSVIAZBFJQKBXA47RKISA5U4J7BHX6DK/). I did not find that thread unfortunately; I need to up my search skills. Check the output of 'cephadm ls' on the

[ceph-users] Fwd: Scheduled RBD volume snapshots without mirroring (-schedule)

2022-11-22 Thread Tobias Bossert
Hi Ilya, thank you very much for the clarification. I created a cronjob-based script which is available here: https://github.com/oposs/rbd_snapshot_manager Tobias - Original Message - From: "Ilya Dryomov" To: "Tobias Bossert" CC: "ceph-users" Sent: Sunday, 20 November 2022
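The linked script is not reproduced here; a bare-bones cron-based equivalent, with pool and image names as placeholders, could look like this:
```
# /etc/cron.d/rbd-snapshots -- daily date-stamped snapshot of a single image
# (pool/image names are examples; % must be escaped in cron entries)
0 2 * * * root rbd snap create rbd/myimage@daily-$(date +\%Y-\%m-\%d)
```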

[ceph-users] Re: cephadm found duplicate OSD, how to resolve?

2022-11-22 Thread Eugen Block
Hi, there was a similar thread recently (https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/message/WSVIAZBFJQKBXA47RKISA5U4J7BHX6DK/). Check the output of 'cephadm ls' on the node where the OSD is not running and remove it with 'cephadm rm-daemon --name osd.3'. If there's an empty
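Put together, the suggested cleanup is roughly the following; the daemon name comes from the thread, everything else is a sketch:
```
# On the host where the stale osd.3 entry lives, list what cephadm knows about
cephadm ls

# Remove the stopped duplicate daemon entry
# (depending on the setup, --fsid <cluster-fsid> may also be required)
cephadm rm-daemon --name osd.3

# Verify the orchestrator no longer reports a duplicate
ceph orch ps --daemon-type osd
```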

[ceph-users] cephadm found duplicate OSD, how to resolve?

2022-11-22 Thread Stefan Kooman
Hi, I'm in the process of re-provisioning OSDs on a test cluster with cephadm. One of the OSD IDs that supposedly previously lived on host3 is now alive on host2, and cephadm is not happy about that: "Found duplicate OSDs: osd.3 in status running on host2, osd.3 in status stopped on

[ceph-users] Re: hw failure, osd lost, stale+active+clean, pool size 1, recreate lost pgs?

2022-11-22 Thread Jelle de Jong
Hello everybody, Can someone point me in the right direction? Kind regards, Jelle On 11/21/22 17:14, Jelle de Jong wrote: Hello everybody, I had a HW failure and had to take an OSD out; however, I now have stale+active+clean PGs. I am okay with having zeros as a replacement for the lost
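With pool size 1 and the backing OSD gone, one possible path is sketched below; OSD and PG ids are placeholders, and the affected objects are irrecoverably replaced with empty ones:
```
# Identify the stuck PGs
ceph pg dump_stuck stale

# Declare the failed OSD's data permanently lost
ceph osd lost 12 --yes-i-really-mean-it

# Recreate an empty PG in place of a lost one
ceph osd force-create-pg 2.5 --yes-i-really-mean-it
```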

[ceph-users] Re: RBD Images with namespace and K8s

2022-11-22 Thread Ilya Dryomov
On Tue, Nov 22, 2022 at 9:20 AM Marcus Müller wrote: > > Hi Ilya, > > thanks. This looks like a general setting for all RBD images, not for some > specific ones, right? Right. > > Is a more specific definition possible, so you can have multiple rbd images > in different ceph namespaces? It
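For illustration, per-tenant RBD namespaces and a credential scoped to one of them might be set up as follows; pool, namespace and client names are examples, not from the thread:
```
# Create namespaces inside one pool
rbd namespace create rbd/team-a
rbd namespace create rbd/team-b

# Images are addressed as pool/namespace/image
rbd create rbd/team-a/vol1 --size 10G

# Restrict a client key to a single namespace
ceph auth get-or-create client.team-a \
    mon 'profile rbd' \
    osd 'profile rbd pool=rbd namespace=team-a'
```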

[ceph-users] Re: RBD Images with namespace and K8s

2022-11-22 Thread Marcus Müller
Hi Ilya, thanks. This looks like a general setting for all RBD images, not for some specific ones, right? Is a more specific definition possible, so you can have multiple rbd images in different ceph namespaces? Regards Marcus > On 21.11.2022 at 22:22, Ilya Dryomov wrote: > > On Mon, Nov