In my lab Ceph is deployed with cephadm - so it's a containerized
environment. When I do:
-> $ ceph-bluestore-tool bluefs-bdev-sizes --path /var/lib/ceph/9f4f9dba-72c7-11f0-8052-525400519d29/osd.9/
...
I get a coredump - on each node, for each OSD. While digging around
the net I read somewhere that an OSD must be stopped for
'ceph-bluestore-tool' to operate on it?
That doesn't quite make sense to me, but hey...
or maybe I'm doing something forbidden there.
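For what it's worth, the workaround I was going to try next - just a
guess on my part, untested, and assuming 'cephadm shell --name' really
does mount the daemon's data dir at /var/lib/ceph/osd/ceph-9 inside
the container - would be roughly:

# keep the cluster from rebalancing while the OSD is down
-> $ ceph osd set noout
# stop just this one daemon via the orchestrator
-> $ ceph orch daemon stop osd.9
# enter a shell inside that daemon's container, with its config/keyring/data dir
-> $ cephadm shell --name osd.9
# inside the shell, point the tool at the mounted data dir
-> $ ceph-bluestore-tool bluefs-bdev-sizes --path /var/lib/ceph/osd/ceph-9
-> $ exit
# bring the OSD back and clear the flag
-> $ ceph orch daemon start osd.9
-> $ ceph osd unset noout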
I don't think a "migration" - loosely defined - applies to my
scenario, as there is nothing to migrate onto; the disks which Ceph
uses simply "got" more space - no new/additional disks/devices.
Ceph sees those changes already:
-> $ ceph orch device ls
podster1.mine.priv  /dev/vdc  hdd  400G  No  13m ago  Has a FileSystem, LVM detected
...
podster3.mine.priv  /dev/vdc  hdd  300G  No  13m ago  Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected
The first one above is easy to guess - it got an extra 100GB. I
actually did fiddle with it "outside" of Ceph: I ran 'pvresize' (on
the PV which Ceph itself created via LVM), which did not cause any
trouble, apparently.
So I was itching to do the 'lvresize' too, but thought Ceph's own way,
if one exists, should be preferred - I can imagine that even in
_production_ people do use partitions/LVM, as opposed to whole
disks/drives... no?
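For completeness, this is the manual sequence I had in mind for
growing an OSD in place - again untested, and the VG/LV names below
are just placeholders for whatever 'lvs' reports on the node:

# on the admin node: avoid rebalancing, then stop the daemon
-> $ ceph osd set noout
-> $ ceph orch daemon stop osd.9
# on podster1: identify the LV backing osd.9, then grow PV and LV
-> $ lvs
-> $ pvresize /dev/vdc
-> $ lvresize -l +100%FREE /dev/<ceph-vg>/<osd-block-lv>
# let BlueStore/BlueFS claim the new space, from inside the OSD's container
-> $ cephadm shell --name osd.9
-> $ ceph-bluestore-tool bluefs-bdev-expand --path /var/lib/ceph/osd/ceph-9
-> $ exit
# restart the daemon and clear the flag
-> $ ceph orch daemon start osd.9
-> $ ceph osd unset noout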