[ceph-users] Re: Adding new server to existing ceph cluster - with separate block.db on NVME

2023-03-28 Thread Robert W. Eckert
Thanks - I am on 17.2.5. I was able to get there via cephadm shell, a few zaps and deletes of the /dev/sdX devices, and ceph-volume lvm prepare. I did miss seeing the db_devices part for ceph orch apply - that would have saved a lot of effort. Does the osds_per_device create the partitions on the
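
A minimal sketch of the db_devices approach with ceph orch apply, assuming a hypothetical host "newhost" with rotational data disks and one NVMe device for block.db (all names and paths below are placeholders, not taken from this thread):

  # osd-spec.yml (drive group spec; adjust host, devices and counts to the hardware)
  service_type: osd
  service_id: osd_with_nvme_db
  placement:
    hosts:
      - newhost
  spec:
    data_devices:
      rotational: 1
    db_devices:
      paths:
        - /dev/nvme0n1
    # osds_per_device: 2   # optional; ceph-volume creates that many LVs per data device (not partitions)

  # preview what the orchestrator would create, then apply it
  ceph orch apply -i osd-spec.yml --dry-run
  ceph orch apply -i osd-spec.yml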

[ceph-users] Re: quincy v17.2.6 QE Validation status

2023-03-28 Thread Neha Ojha
upgrades approved! Thanks, Neha

On Tue, Mar 28, 2023 at 12:09 PM Radoslaw Zarzynski wrote:
> rados: approved!
>
> On Mon, Mar 27, 2023 at 7:02 PM Laura Flores wrote:
>
>> Rados review, second round:
>>
>> Failures:
>> 1. https://tracker.ceph.com/issues/58560
>> 2. https://tracker.ceph.

[ceph-users] s3-select introduction blog / Trino integration

2023-03-28 Thread Gal Salomon
Hi, https://ceph.io/en/news/blog/2022/s3select-intro/ Recently I published a blog on s3-select. The blog discusses what it is and why it is required. The last paragraph discusses Trino (an analytic SQL utility) and its integration with Ceph/s3select. That integration is still a work in progress, and it is
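
For context, s3-select pushes a SQL expression down to the object store so that only the matching rows travel back to the client. A sketch of what a query looks like through the AWS CLI against an RGW endpoint (endpoint, bucket, key and columns are made-up examples; Ceph's s3select implements a subset of this SQL):

  # run the SQL server-side against a headerless CSV object and stream the result
  aws --endpoint-url http://rgw.example.com:8000 s3api select-object-content \
      --bucket mybucket \
      --key data.csv \
      --expression "SELECT s._1, s._2 FROM S3Object s WHERE CAST(s._3 AS INT) > 100" \
      --expression-type SQL \
      --input-serialization '{"CSV": {"FileHeaderInfo": "NONE"}, "CompressionType": "NONE"}' \
      --output-serialization '{"CSV": {}}' \
      /dev/stdout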

[ceph-users] Re: Ceph cluster out of balance after adding OSDs

2023-03-28 Thread Pat Vaughan
Yes, this is an EC pool, and it was created automatically via the dashboard. Will this help to correct my current situation? Currently, there are 3 OSDs out of 12 that are about 90% full. One of them just crashed and will not come back up with "bluefs _allocate unable to allocate 0x8 on bdev
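
A few read-only commands that help show where the imbalance sits before deciding on a fix (a generic sketch; the pool name is the one mentioned elsewhere in this thread):

  # per-OSD utilization and PG counts; the ~90% full OSDs stand out here
  ceph osd df tree
  # is the balancer enabled, and in which mode (upmap vs crush-compat)?
  ceph balancer status
  # which crush rule the data pool actually uses
  ceph osd pool get charlotte.rgw.buckets.data crush_rule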

[ceph-users] Re: Ceph Mgr/Dashboard Python dependencies: a new approach

2023-03-28 Thread Ernesto Puerta
Hey Ken, This change doesn't involve any further internet access beyond what is already required for the "make dist" stage (e.g., npm packages). That said, where feasible, I also prefer to keep the current approach for a minor version. Kind Regards, Ernesto On Mon, Mar 27, 2023 at 9:06 PM K

[ceph-users] Re: Adding new server to existing ceph cluster - with separate block.db on NVME

2023-03-28 Thread Robert Sander
Hi, On 28.03.23 05:42, Robert W. Eckert wrote: "I am trying to add a new server to an existing cluster, but cannot get the OSDs to create correctly. When I try cephadm ceph-volume lvm create, it returns nothing but the container info." You are running a containerized cluster with the cephadm o
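
In a cephadm-managed cluster the OSDs are normally created through the orchestrator rather than by calling ceph-volume directly; a rough sketch (host and device names are assumptions):

  # make the orchestrator rescan the disks on the new host
  ceph orch device ls --refresh
  # create a single OSD on one specific device
  ceph orch daemon add osd newhost:/dev/sdb
  # if ceph-volume is needed for inspection, run it inside the cephadm shell
  cephadm shell -- ceph-volume lvm list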

[ceph-users] Re: Unexpected slow read for HDD cluster (good write speed)

2023-03-28 Thread Marc
Yes, it pays off to know what to do before you do it, instead of after. If you complain about speed, is it a general unfounded complaint, or did you compare Ceph with similar solutions? I really have no idea what the standards are for these types of solutions. I can remember asking at such a seminar

[ceph-users] Re: Ceph cluster out of balance after adding OSDs

2023-03-28 Thread Robert Sander
On 27.03.23 23:13, Pat Vaughan wrote: "Looking at the pools, there are 2 crush rules. Only one pool has a meaningful amount of data, the charlotte.rgw.buckets.data pool. This is the crush rule for that pool." So that pool uses the device class ssd explicitly, whereas the other pools do not care
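
A sketch of how the rule-to-device-class mapping can be checked (pool name taken from the thread, the rule name is a placeholder):

  # list the rules and see which one the data pool uses
  ceph osd crush rule ls
  ceph osd pool get charlotte.rgw.buckets.data crush_rule
  # the dump shows which device class the rule draws from (e.g. "default~ssd")
  ceph osd crush rule dump <rule-name>
  # and which OSDs carry each device class
  ceph osd crush class ls
  ceph osd crush tree --show-shadow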