I recommend crush-reweighting your OSDs to reflect their actual sizes
in TB. Based on your crush tree output, these would be the commands:
ceph osd crush reweight osd.0 0.15
ceph osd crush reweight osd.1 0.15
ceph osd crush reweight osd.9 0.15
ceph osd crush reweight osd.4 0.4
ceph osd crush reweight osd.5 0.4
ceph osd crush reweight osd.10 0.4
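Before and after reweighting you can verify the weights and the PG
distribution per OSD with (assuming a reasonably recent Ceph release;
both commands have been around for a long time):

ceph osd df tree     # CRUSH weight, size and PG count per OSD
ceph osd crush tree  # the resulting crush hierarchy and weights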
I expect this to have a positive effect on PG distribution, but it
will cause some data movement, of course.
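If you want to limit the impact of that data movement, one option (a
sketch, assuming a release with the `ceph config` interface, i.e.
Mimic or later) is to pause rebalancing while you apply all six
reweights and throttle backfill afterwards:

ceph osd set norebalance                  # pause rebalancing during the reweights
# ... run the six crush reweight commands above ...
ceph config set osd osd_max_backfills 1   # optionally throttle concurrent backfills
ceph osd unset norebalance                # let the cluster rebalance

On older releases you would inject osd_max_backfills via
'ceph tell osd.* injectargs' instead of 'ceph config set'.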
Quoting listy via ceph-users <[email protected]>:
I don't understand - doesn't what I already showed reveal the
underlying disk devices' sizes clearly enough?
Perhaps the ceph tools/commands report it incorrectly - is that why
you are asking about 10GB? Or were you asking jokingly?
Does it matter why I use something, or why anybody runs whatever? I
already answered these questions - mate(s), if you ever work a
proverbial "front desk" (facing customers), then here is some friendly
advice (if you take any): teach yourself to resist the urge...
It was, and still is, a small lab - that's all I've shown.
No, I do _not_ use 10GB drives - perhaps long ago when the lab was
first spun up; I don't remember.
I've not touched _crush_ at all - what should I do next?
Poke the cluster somehow to "fix" this? Make some changes/tweaks manually?
That would be a great thing to have - a more seamless path to resizing
(mostly extending) OSD disks. In the ubiquitous cloud with IaaS, but
not only there: in-house, many people virtualize too (yes, with ceph
on top, not only underneath it). Many probably _add_ drives, but
extending a drive is not uncommon either. Great - I can only upvote
it, if @devel reads here.
_______________________________________________
ceph-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]