I don't recall seeing an actual, practical, real-world example of why this 
issue got broached again. So here goes.

Create a thin LV on KVM dom0, put XFS/EXT4 on it, lay down (sparse) files as 
KVM virtual disk files.
Create and launch VMs and configure them to suit: for example, a dedicated VM 
each for a web server, a Tomcat server, and a database. Let's call it a 'Stack'.
You're done configuring it.
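A minimal sketch of that provisioning step. The VG name (vg0), sizes, mount 
point, and image names are all assumptions for illustration, not anything from 
this thread:

```shell
# Create a thin pool and a thin LV inside it (vg0 and all sizes are assumed)
lvcreate --type thin-pool -L 500G -n tpool vg0
lvcreate --type thin -V 200G --thinpool vg0/tpool -n stack01 vg0

# Put a filesystem on it and mount it where the VM disk files will live
mkfs.xfs /dev/vg0/stack01
mount /dev/vg0/stack01 /var/lib/libvirt/images/stack01

# Lay down sparse disk images, one per VM in the Stack
qemu-img create -f qcow2 /var/lib/libvirt/images/stack01/web.qcow2 40G
qemu-img create -f qcow2 /var/lib/libvirt/images/stack01/tomcat.qcow2 40G
qemu-img create -f qcow2 /var/lib/libvirt/images/stack01/db.qcow2 80G
```

Note the double layer of sparseness: the thin LV only consumes pool space as 
blocks are written, and the qcow2 files are themselves sparse on the filesystem.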

You take a snapshot as a "restore point".
Then you present your developers (or customers) with "drive-by" clones 
(snapshots) of the LV. The changes each clone accrues are typically quite 
limited, but could run to a full capacity's worth of overwrites, depending on 
how much they test/play with it. You could have 500 such copies resident. Thin 
LV clones are damn convenient, mostly "free", and attractive for that purpose.
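Both the restore point and the drive-by clones are one-liners with thin 
snapshots; a sketch, assuming the thin LV from above is vg0/stack01 (the clone 
name is hypothetical):

```shell
# The "restore point": a thin snapshot, nearly free until blocks diverge
lvcreate -s -n stack01-restore vg0/stack01

# A per-developer clone of the same LV. Thin snapshots carry the
# activation-skip flag by default, so -K is needed to activate one.
lvcreate -s -n stack01-dev42 vg0/stack01
lvchange -ay -K vg0/stack01-dev42
```

Each clone only consumes pool space for the blocks its user actually 
overwrites, which is what makes 500 resident copies plausible.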

At some point one of those snapshots gets launched as, or converted into, a 
production instance. Or, if you prefer, a customer purchases it, and now you 
must be able to guarantee that it can do a full overwrite of its space and that 
any interaction with the underlying thin pool trumps all the other ankle-biters 
(demo, dev, qa, trial) that might also be resident. Lesser snapshots will 
necessarily be evicted (destroyed) until the pool reaches some pre-defined 
level of reserved space, which is then solely used for quick point-in-time 
restore points of the remaining instances. These snaps are retained for some 
amount of time and likely spooled off to a backup location. If thin-pool 
pressure gets too high, the oldest restore points (snapshots) get destroyed.
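A rough sketch of that pressure-driven eviction, not a definitive 
implementation: the VG/pool names, the 80% threshold, and the assumption that 
restore points are named "*-restore-&lt;epoch&gt;" (so a lexical sort finds the 
oldest) are all mine, not from this thread:

```shell
#!/bin/sh
# Evict the oldest restore-point snapshots while thin-pool data usage
# exceeds a threshold. All names and the threshold are assumptions.
VG=vg0
POOL=vg0/tpool
THRESHOLD=80   # percent of pool data space

# Pure helper: exit 0 when usage (possibly fractional) is above threshold
over_threshold() {
    awk -v u="$1" -v t="$2" 'BEGIN { exit !(u + 0 > t + 0) }'
}

evict_until_ok() {
    usage=$(lvs --noheadings -o data_percent "$POOL" | tr -d ' ')
    while over_threshold "$usage" "$THRESHOLD"; do
        # Names embed an epoch timestamp, so a lexical sort finds the oldest
        oldest=$(lvs --noheadings -o lv_name "$VG" | tr -d ' ' \
                 | grep -- '-restore-' | sort | head -n 1)
        [ -n "$oldest" ] || break   # nothing left to evict
        lvremove -f "$VG/$oldest"
        usage=$(lvs --noheadings -o data_percent "$POOL" | tr -d ' ')
    done
}

# Guarded entry point, so the helpers can be sourced without lvm2 present
if [ "${1:-}" = "run" ]; then
    evict_until_ok
fi
```

In practice you would run something like this from cron or from a dmeventd 
thin-pool threshold hook, rather than polling by hand.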

In any given ThinPool there may be multiple Stacks or flavors/versions of same.

I believe the pseudo-script provided earlier this afternoon suffices to 
implement the above.

_______________________________________________
linux-lvm mailing list
[email protected]
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/
