If your cluster is managed by cephadm, don't interfere with ceph-volume
outside of a cephadm shell. You won't get the results you're hoping
for. For this operation it doesn't really matter whether you use
cephadm version 19.2.3 or 19.2.0.
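For the archive, the invocation pattern being recommended can be sketched as a dry run (the hostname and OSD id are placeholders, and the run() wrapper only prints each step so nothing here actually talks to a cluster):

```shell
# Sketch only: cephadm shell runs *on* the target host, so the pattern is
# ssh first, then open the containerized shell, then use ceph-volume inside
# it. run() echoes each command instead of executing it (dry run).
run() { echo "+ $*"; }
run ssh osd-node-1          # log in to the host that owns the OSD (placeholder name)
run cephadm shell           # containerized shell matching the cluster's ceph version
run ceph-volume lvm list 0  # inspect the OSD from inside that shell
```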
Quoting Steven Vacaroaia <[email protected]>:
Thanks for the link and the details
I have not installed cephadm or ceph-volume on any host other than the
controller.
I am running Ubuntu 24.04.2 and noticed that the versions available for
both cephadm and ceph-volume are either 19.2.3 or 19.2.0 - there is no
19.2.2.
On the controller, I have 19.2.2.
Should I install 19.2.3 or 19.2.0?
Steven
On Fri, 20 Feb 2026 at 08:29, Dario Graña via ceph-users <[email protected]>
wrote:
I used the approach recommended by Eugen a few months ago and it worked
well.
I made a small change to the LV names: instead of osd.0 in
# Create Logical Volume
ceph:~ # lvcreate -L 5G -n ceph-osd0-db ceph-db
I used
ceph:~ # lvcreate -L 5G -n ceph-db-${OSD_FSID} ceph-db
where ${OSD_FSID} is obtained as follows:
# Get the OSD's FSID
[ceph: root@ceph /]# OSD_FSID=$(ceph-volume lvm list 0 | awk '/osd fsid/ {print $3}')
fb69ba54-4d56-4c90-a855-6b350d186df5
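The awk filter above can be checked locally against a hypothetical fragment of `ceph-volume lvm list 0` output (the sample text below is an assumption based on the fields quoted in this thread, not real cluster output):

```shell
# Hypothetical sample of `ceph-volume lvm list 0` output; the awk filter
# matches the "osd fsid" line and prints its third whitespace-separated
# field, i.e. the FSID itself.
sample='====== osd.0 =======
      osd id          0
      osd fsid        fb69ba54-4d56-4c90-a855-6b350d186df5'
OSD_FSID=$(printf '%s\n' "$sample" | awk '/osd fsid/ {print $3}')
echo "$OSD_FSID"
```

Note that the "osd id" line is not matched, because only the fsid line contains the literal string "osd fsid".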
On Thu, Feb 19, 2026 at 8:35 PM Eugen Block via ceph-users <
[email protected]> wrote:
> Maybe this helps:
>
>
>
https://heiterbiswolkig.blogs.nde.ag/2025/02/05/cephadm-migrate-db-wal-to-new-device/
>
> Note that cephadm shell is always executed locally on the host, not
> remotely; there's no "--host" parameter.
>
> Quoting Steven Vacaroaia via ceph-users <[email protected]>:
>
> > Hi,
> >
> > I have a 7-node cluster deployed using cephadm with a mix of HDD, SSD
> > and NVMe.
> >
> > I would like to reconfigure some of the HDD OSDs and put their DB on
> > NVMe. There are only 6 x 18 TB OSDs, and there is a dedicated, "empty"
> > 1.6 TB NVMe installed for this purpose.
> >
> > The cluster is configured as EC 4+2 and is not under heavy use
> >
> > I would appreciate it if you could advise on the best way of achieving
> > that.
> >
> > As far as I was able to find, there are these 2 options
> >
> > 1. remove and redeploy OSDs
> > 2. use ceph-volume
> >
> > Option 1 is pretty straightforward but more time-consuming.
> >
> > For Option 2, though, I was not able to find any clear instructions.
> >
> > The first thing I stumbled on was how to use ceph-volume in a
> > cephadm managed cluster on a host that does not have it
> >
> > ( "cephadm shell --host HOSTNAME -- ceph-volume ls"
> > is complaining about "unrecognized arguments --host")
> >
> > Any help, links, or suggestions will be greatly appreciated.
> >
> > Steven
> > _______________________________________________
> > ceph-users mailing list -- [email protected]
> > To unsubscribe send an email to [email protected]
>
>