[ceph-users] Re: iscsi deprecation

2022-09-30 Thread Ilya Dryomov
On Fri, Sep 30, 2022 at 7:36 PM Filipe Mendes wrote:
>
> Hello!
>
> I'm considering switching my current storage solution to Ceph. Today we use
> iSCSI as the communication protocol, and we use several different hypervisors:
> VMware, Hyper-V, XCP-ng, etc.

Hi Filipe,

Ceph's main hypervisor
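
For context on the reply above: KVM/QEMU-based hypervisors can consume RBD images natively through librbd, with no iSCSI gateway in between. A minimal sketch, assuming a hypothetical pool "vms" and image "disk01" (names not taken from the thread):

    # create a pool and an RBD image for a VM disk
    ceph osd pool create vms
    rbd pool init vms
    rbd create vms/disk01 --size 100G

    # QEMU can open the image directly via librbd ...
    qemu-img info rbd:vms/disk01

    # ... or the kernel client can expose it as a local block device
    rbd map vms/disk01

VMware and Hyper-V cannot talk to librbd directly, which is why a gateway (iSCSI or NVMe-oF) comes up at all in this thread.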

[ceph-users] iscsi deprecation

2022-09-30 Thread Filipe Mendes
Hello! I'm considering switching my current storage solution to Ceph. Today we use iSCSI as the communication protocol, and we use several different hypervisors: VMware, Hyper-V, XCP-ng, etc. I was reading that the current version of Ceph has discontinued iSCSI support in favor of RBD or NVMe-oF.

[ceph-users] Re: quincy v17.2.4 QE Validation status

2022-09-30 Thread Casey Bodley
On Thu, Sep 29, 2022 at 12:40 PM Neha Ojha wrote:
>
> On Mon, Sep 19, 2022 at 9:38 AM Yuri Weinstein wrote:
>>
>> Update:
>>
>> Remaining =>
>> upgrade/octopus-x - Neha pls review/approve
>
> Both the failures in

[ceph-users] cephfs mount fails

2022-09-30 Thread Daniel Kovacs
Hello!

I have a client with Ubuntu 22.04 where I'd like to mount a CephFS volume. I get the error:

mount error 1 = Operation not permitted

When I run the mount command in verbose mode I see an extra field (key) in the mount options. As far as I can see, this field has exactly the same value as the name field. What's this
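
For reference, a typical CephFS kernel mount on a client like the one described above; the monitor address, CephX user and secret below are hypothetical placeholders, not values from the thread:

    # secret passed on the command line
    mount -t ceph 192.168.1.10:6789:/ /mnt/cephfs -o name=myuser,secret=AQD...

    # or, keeping the key off the command line
    mount -t ceph 192.168.1.10:6789:/ /mnt/cephfs -o name=myuser,secretfile=/etc/ceph/myuser.secret

When no secret option is given, the mount helper (mount.ceph) normally looks the key up in /etc/ceph/ceph.client.<name>.keyring, which is worth checking when the mount fails with "Operation not permitted".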

[ceph-users] Re: strange osd error during add disk

2022-09-30 Thread Satish Patel
Hi Dominique,

How do I check using cephadm shell? I am new to cephadm :)

https://paste.opendev.org/show/b4egkEdAkCWSkT3VRyO9/

On Fri, Sep 30, 2022 at 6:20 AM Dominique Ramaekers <dominique.ramaek...@cometal.be> wrote:
>
> Ceph.conf isn't available on that node/container.
>
> What happens if
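
Leaving the paste aside, a minimal sketch of the "how do I check" part (assuming cephadm was deployed the usual way, so the host carries the cluster config and keyring that the shell mounts in):

    # open a container that has /etc/ceph/ceph.conf and the keyring available
    cephadm shell
    ceph -s

    # or run a single command without an interactive shell
    cephadm shell -- ceph osd tree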

[ceph-users] Re: strange osd error during add disk

2022-09-30 Thread Satish Patel
Hi Alvaro,

I have seen this error on every node, even on functional and working nodes, so I assume it's not important:

"ceph_volume.exceptions.ConfigurationError: Unable to load expected Ceph config at: /etc/ceph/ceph.conf"

Maybe cephadm runs inside Docker and that is why it's just giving this warning.
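
If a host-side tool really does need /etc/ceph/ceph.conf outside the containers, a hedged sketch of producing one is below; this is not something suggested in the thread itself:

    # write a minimal config for this cluster onto the host
    cephadm shell -- ceph config generate-minimal-conf > /etc/ceph/ceph.conf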

[ceph-users] RDMAConnectedSocketImpl.cc: 223: FAILED

2022-09-30 Thread Serkan KARCI
Hello all,

We have been trying to implement RDMA for the Ceph public + cluster networks. However, in terms of troubleshooting, we can't get any further than this. Do you have any idea how I can dig deeper to reveal the underlying issue?

Thanks and regards,

ms_type=async+rdma
ms_cluster_type =
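
The message cuts off mid-configuration; for orientation only, a hedged sketch of RDMA messenger options that are commonly set together is shown below. The device name is a placeholder and option availability varies between Ceph releases:

    [global]
    ms_type = async+rdma
    ms_cluster_type = async+rdma
    # hypothetical RNIC name; must match the RDMA device visible to the daemons
    ms_async_rdma_device_name = mlx5_0

RDMA setups also generally need the daemons to run with an unlimited memlock limit (ulimit -l), a common cause of messenger-level assertion failures.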

[ceph-users] Re: Same location for wal.db and block.db

2022-09-30 Thread Janne Johansson
> I used to create Bluestore OSDs using commands such as this one:
>
> ceph-volume lvm create --bluestore --data ceph-block-50/block-50 --block.db ceph-db-50-54/db-50
>
> with the goal of having block.db and wal.db co-located on the same LV
> (ceph-db-50-54/db-5 in my example, which is on a SSD
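
When only --block.db is given, BlueStore keeps the WAL on the same device as the DB, so no separate --block.wal is needed for that layout. The resulting placement can be inspected after the fact; a small sketch, reusing the volume names from the quoted command:

    # show devices, DB/WAL placement and tags for one OSD ...
    ceph-volume lvm list ceph-block-50/block-50

    # ... or for every OSD on the host
    ceph-volume lvm list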

[ceph-users] Re: Ceph quincy cephadm orch daemon stop osd.X not working

2022-09-30 Thread Eugen Block
What is your cluster status (ceph -s)? I assume that either your cluster is not healthy or your CRUSH rules don't cover an OSD failure. Sometimes it helps to fail the active mgr (ceph mgr fail). Can you also share your 'ceph osd tree'? Do you use the default replicated_rule or any
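
The checks mentioned above, collected in one place (output is cluster-specific; osd.X stands for the daemon from the subject line):

    ceph -s                        # overall health, look for degraded or stuck PGs
    ceph osd tree                  # where the OSD sits and whether it is up/in
    ceph mgr fail                  # fail over the active mgr if the orchestrator seems stuck
    ceph orch daemon stop osd.X    # then retry the stop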

[ceph-users] Re: Slow OSD startup and slow ops

2022-09-30 Thread Gauvain Pocentek
Hi Stefan,

Thanks for your feedback!

On Thu, Sep 29, 2022 at 10:28 AM Stefan Kooman wrote:
> On 9/26/22 18:04, Gauvain Pocentek wrote:
> >
> > We are running a Ceph Octopus (15.2.16) cluster with similar
> > configuration. We have *a lot* of slow ops when starting OSDs. Also
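
The quoted message breaks off here; for readers hitting the same symptom, a hedged sketch of digging into slow ops on a single OSD via its admin socket (the OSD id is a placeholder, not one from the thread):

    ceph daemon osd.12 dump_ops_in_flight
    ceph daemon osd.12 dump_historic_slow_ops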