On 13.9.2017 at 20:43, Xen wrote:

There is something else though.

You cannot set a max size for thin snapshots?


We are moving in the right direction here.

Yes - current thin-provisioning does not let you limit the maximum number of blocks an individual thinLV can address (and a snapshot is an ordinary thinLV).

Every thinLV can address at most LVsize/ChunkSize blocks.
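For illustration (my numbers, assuming the common 64KiB chunk size): a 1TiB thinLV can consume at most

  1 TiB / 64 KiB = 16,777,216 chunks

from the pool, and there is no way to cap it below that.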


This is part of the problem: you cannot calculate in advance what will happen. By design, mayhem should not ensue, but what if your predictions are off?

Great - 'prediction' - we are getting on the same page - prediction is a big problem...

Being able to set a maximum snapshot size before it gets dropped could be very nice.

You can't do that IN THE KERNEL.

The only tool which is able to calculate real occupancy is the user-space thin_ls tool.

So all you need to do is use this tool in user space for this task.
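A rough sketch of such a query on a live pool (the device names and the exact thin_ls options/fields here are my assumptions - check thin_ls(8) and dmsetup(8) on your system):

  # dmsetup message vg-pool-tpool 0 reserve_metadata_snap
  # thin_ls --metadata-snap --format "DEV,MAPPED_BLOCKS,EXCLUSIVE_BLOCKS,SHARED_BLOCKS" /dev/mapper/vg-pool_tmeta
  # dmsetup message vg-pool-tpool 0 release_metadata_snap

Multiply the reported block counts by the pool chunk size to get the real per-thinLV occupancy.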

This behaviour (dropping a snapshot once it exceeds its maximum size) is very safe on non-thin.

It is inherently risky on thin.



(I know there are already some listed in this
thread, but I’m wondering about those folks that think the script is
insufficient and believe this should be more standard.)

You really want to be able to set some minimum free space per volume.

Suppose I have three volumes of 10GB, 20GB and 3GB.

I may want the 20GB volume to be the least important, the 3GB volume the most important, and the 10GB volume in between.

I want at least 100MB free for the 3GB volume.

When free space on the thin pool drops below ~120MB, I want the 20GB and 10GB volumes to be frozen: no new extents for those two volumes.

I want at least 500MB free for the 10GB volume.

When free space on the thin pool drops below ~520MB, I want the 20GB volume to be frozen: no new extents for the 20GB volume.


So I would get 2 thresholds and actions:

- a threshold for the 3GB volume, causing all other volumes to be frozen
- a threshold for the 10GB volume, causing the 20GB volume to be frozen

This is easily scriptable as a custom thing.
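As a very rough sketch of what such a custom script could look like (everything here - the VG/LV names, the byte thresholds, and using 'lvchange --permission r' as the 'freeze' action - is my own illustration, not something LVM provides out of the box):

#!/usr/bin/python3
# Illustrative sketch: freeze less-important thinLVs when the pool runs low.
# Names and thresholds below are assumptions taken from the example above.
import subprocess

POOL = "vg/pool"
POLICY = [
    # (free-space threshold in bytes, thinLVs to freeze below it)
    (520 * 1024 * 1024, ["vg/lv20g"]),              # keep ~500MB for the 10GB LV
    (120 * 1024 * 1024, ["vg/lv20g", "vg/lv10g"]),  # keep ~100MB for the 3GB LV
]

def pool_free_bytes(pool):
    # lvs reports the pool size and how many percent of its data area is used
    out = subprocess.check_output(
        ["lvs", "--noheadings", "--units", "b", "--nosuffix",
         "-o", "lv_size,data_percent", pool], text=True)
    size, used_pct = out.split()
    return float(size) * (100.0 - float(used_pct)) / 100.0

def freeze(lv):
    # One possible "freeze": make the LV read-only so it allocates no new
    # chunks; a remount-ro or fsfreeze of the filesystem are alternatives.
    # check=False so an LV that is already read-only does not abort the run.
    subprocess.run(["lvchange", "--permission", "r", lv], check=False)

free = pool_free_bytes(POOL)
for threshold, victims in POLICY:
    if free < threshold:
        for lv in victims:
            freeze(lv)

Run it periodically (e.g. from cron) or hook it into whatever monitoring you already use; the point is only that the decision logic lives in user space.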

But it would be nice if you could set this threshold in LVM per volume?

This is the main issue - these 'data' are pretty expensive to 'mine' out of data structures.

That's the reason why the thin-pool is so fast and memory-efficient inside the kernel - it does not need to track all those details about how much data each thinLV eats from the thin-pool - the kernel target simply does not care - it only cares about referenced chunks.

It's the user-space utility which is able to 'parse' all the structures
and take a 'global' picture. But of course it takes CPU and TIME and it's not 'byte accurate' - that's why you need to start acting early, at some threshold.


But the most important thing is to freeze or drop snapshots, I think.

And to ensure that this is default behaviour?

Why do you think this should be the default?

The default is to auto-extend thin-data & thin-metadata when needed, if you set the threshold below 100%.

We can discuss whether it's a good idea to enable auto-extending by default - as we don't know if the free space in the VG is meant to be used for the thin-pool, or whether the admin has some other plan for it...
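For completeness, that auto-extend policy is controlled from lvm.conf; for example (the numbers are only an illustration):

  activation {
      # when pool data usage crosses 70%, grow the pool by 20%
      thin_pool_autoextend_threshold = 70
      thin_pool_autoextend_percent = 20
  }

With the threshold left at 100 (the shipped default), no auto-extension happens - which is exactly what the 'should it be default' question is about.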


Regards

Zdenek

