On Thu, Jan 29, 2004 at 03:47:48PM +1100, Jamie Wilkinson wrote:
> Meanwhile, the rest of the system think it's still writing to /home when
> it needs to, and the kernel effectively journals all those writes to the
> unallocated space in volume 4; which is effectively layered on top of
> volume 3 for everyone but your dump process reading the snapshot.
>
> At the end of it, you unsnapshot the volume and the kernel writes all
> the journalled data back into volume 3, and removes the snapshot block
> device, and all the while your applications didn't even know.

This is not exactly how LVM snapshots work.

Basically, a snapshot is an exact *read-only* image of a block device at
a point in time. To maintain this view, LVM uses reserved free space to
copy blocks on the original volume before they are overwritten
("copy-before-write"). Snapshots are always read-only.

e.g.

  # df -h /usr/src
  Filesystem            Size  Used Avail Use% Mounted on
  /dev/vg00/src         1.0G  575M  450M  57% /usr/src
  # lvcreate -s -L256M -n src-snap /dev/vg00/src
  lvcreate -- INFO: using default snapshot chunk size of 64 KB for "/dev/vg00/src-snap"
  lvcreate -- doing automatic backup of "vg00"
  lvcreate -- logical volume "/dev/vg00/src-snap" successfully created

  # mount /dev/vg00/src-snap /mnt/hd/
  mount: block device /dev/vg00/src-snap is write-protected, mounting read-only

While the snapshot is active, any blocks about to be modified on
/dev/vg00/src will first be copied to the 256M pool allocated for the
snapshot before being written to /dev/vg00/src.

When a block is read from the snapshot, LVM checks whether the block is
in its "dirty" list of modified blocks. If it is, it returns the saved
copy it has allocated; otherwise it returns the original block from
/dev/vg00/src.

When the snapshot is removed, its blocks are simply returned to the free
pool.
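The whole mechanism is easy to sketch in a few lines of Python. This is
a toy model of the idea, not LVM's actual implementation: blocks are
just dict entries here, and real LVM tracks copied chunks in an on-disk
exception table per snapshot.

```python
# Toy model of copy-before-write snapshots: a Volume is a fake block
# device, a Snapshot is a read-only point-in-time view of it.

class Snapshot:
    """Read-only view of an origin volume at snapshot time."""
    def __init__(self, origin):
        self.origin = origin
        self.pool = {}  # the "dirty" list: old copies of modified blocks

    def read(self, n):
        # Return the saved copy if block n changed since the snapshot
        # was taken; otherwise fall through to the origin.
        return self.pool.get(n, self.origin.blocks[n])

    def remove(self):
        # Removing the snapshot just frees its pool; origin is untouched.
        self.pool.clear()

class Volume:
    """A trivial 'block device': block number -> contents."""
    def __init__(self, blocks):
        self.blocks = dict(blocks)

    def write(self, n, data, snapshot=None):
        # Copy-before-write: if a snapshot is active and this block has
        # not been saved yet, copy the *old* contents into the
        # snapshot's pool before overwriting it on the origin.
        if snapshot is not None and n not in snapshot.pool:
            snapshot.pool[n] = self.blocks[n]
        self.blocks[n] = data

origin = Volume({0: "old-a", 1: "old-b"})
snap = Snapshot(origin)
origin.write(0, "new-a", snapshot=snap)
print(origin.blocks[0])  # new-a  (origin sees the new data)
print(snap.read(0))      # old-a  (snapshot still sees the old data)
print(snap.read(1))      # old-b  (unmodified block, read from origin)
```

Note that only the first write to a given block costs an extra copy;
later writes to the same block go straight to the origin, which is why
a mostly-idle filesystem needs only a small snapshot pool.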

Regards,
Alec

--
Evolution: Taking care of those too stupid to take care of themselves.
-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html
