On 19. 11. 25 at 20:41, matthew patton wrote:
If the tool knew 'which areas' are mapped (the thin-pool target knows this
internally), then it would only need to copy those blocks.

no doubt. but if lvmthin is basically a private implementation that LVM (aka 
thick) doesn't actually know anything about and is just being used as a 
pass-thru to thin API, I'm not sure we want to expose thin internals to the 
caller. I obviously haven't read the code implementing either thick or thin, 
but if thick does a thin_read(one 4MB extent) then thin should just return a 
buffer with 4MB populated with all the data converted into a thick+linear 
representation that Thick is expecting. Then the traditional workflow can 
resume with the extent written out to its destination PV. In other words you're 
hydrating a thin representation into a thick representation. Could you take 
that buffer and thin_write(dest thin pool)? I don't see why not.



Clearly, a tiny thin-pool can provide space for a virtual volume of some massive size - i.e. you can have a 10TiB LV using just a couple of MiB of real physical storage in a VG.
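To make that concrete, here is a minimal sketch (VG/LV names and sizes are purely illustrative):

   # small thin-pool, hugely over-provisioned thinLV
   lvcreate --type thin-pool -L 128M -n pool vg
   lvcreate -n thinlv -V 10T --thinpool vg/pool

   # 'lvs' shows the 10TiB virtual size while the pool's Data% stays near zero
   lvs -o lv_name,lv_size,data_percent vg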

So if I were an admin 'unaware' of how this all works, I'd kind of naively expect that when I 'pvmove' such a thinLV, where the original LV takes just those 'couple' of MiB, the copied volume would also take approximately the 'same' space and I would be copying a couple of MiB instead of having an operation running for possibly even days.

We can 'kind of script' these things nowadays offline - but making this 'online' requires a lot of new kernel code...
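Just for illustration - not the exact scripts we have in mind, and names here are made up - one rough offline approach is to create a matching thinLV in the destination pool and copy it with a sparse-aware tool, so that regions reading back as zeros do not get provisioned on the target:

   # 'offline' meaning nothing is using the data during the copy
   lvcreate -n thinlv_new -V 10T --thinpool vg_new/pool_new
   dd if=/dev/vg_old/thinlv of=/dev/vg_new/thinlv_new bs=1M conv=sparse status=progress

Of course 'conv=sparse' only skips blocks that read back as zeros and still has to read the whole virtual size, so it merely approximates 'copy only the mapped areas'.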

So while a naive/dumb pvmove may possibly have some minor 'usage', it has many 'weak' points - plain 'dd' can likely copy the data faster - and as said, I'm not aware of users ever requesting such an operation until today.

Typically admins move the whole storage to the faster hardware - so the thin-pool with its data & metadata is moved to the new drive - which is fully supported online.
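For completeness, that supported flow is just the usual pvmove of the whole PV - the pool's data and metadata sub-LVs are moved along with everything else (device names below are illustrative):

   vgextend vg /dev/nvme0n1p1        # add the new fast PV to the VG
   pvmove /dev/sdb1 /dev/nvme0n1p1   # moves all extents, incl. pool _tdata/_tmeta, online
   vgreduce vg /dev/sdb1             # drop the old PV once it is empty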

Or you could just punt and say pvmove() of a thin is necessarily a hydrate 
operation and can only be written out onto a non-thin PV, tough luck. Use 
offline `dd` if you want more.

I haven't been particularly impressed by LVM caching (it has its uses, don't 
get me wrong), but I find layering open-cas to be more intuitive, and it gives 
me a degree of freedom.


It's worth noting that the dm-cache target is fully customizable - so anyone can come up with 'policies' that fit their needs - yet somehow this doesn't happen and users mostly stick with the default 'smq' policy - which is usually 'good enough', but it's a hot-spot cache. There is also 'writecache', which is targeted at heavy write loads....
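As a rough sketch of how that looks with lvm2 (LV and device names are illustrative), a writecache is attached much like dm-cache, and for a dm-cache LV the policy and its settings can be tuned afterwards:

   # attach a write-oriented cache using a small LV on the fast device
   lvcreate -n fast -L 10G vg /dev/nvme0n1p1
   lvconvert --type writecache --cachevol fast vg/lv

   # on a dm-cache cached LV, policy/settings can be adjusted later, e.g.:
   lvchange --cachepolicy smq --cachesettings 'migration_threshold=8192' vg/cached_lv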

If someone likes OpenCas more :) we obviously can't change their mind. We've tried to make 'caching' simple for lvm2 users, aka:
   'lvcreate --type cache -L100G  vg/lv /dev/nvme0p1'
(assuming vg was already vgextended with /dev/nvme0p1)
but surely there are many ways to skin a cat...

Regards

Zdenek

