On Thu, Nov 13, 2008 at 03:37:28AM -0800, szhocker wrote:
> Hi,
> I have been testing zumastor since release 0.4 and there is a problem
> with the current release.
> I've checked zumastor's behaviour when overfilling the snapshot space.
> The data volume size is 20GB and the snapshot device size is 1GB.
> The test looks like this:
> 1. start zumastor with a volume "zumatest" and the sizes given above
> 2. create a 200MB file on volume zumatest
> 3. take a snapshot and check the md5 of the file in the snapshot and
> on the volume
> 4. repeat steps 2 and 3 three more times, so there are four 200MB
> files, then create one 400MB file
> 5. after changing the contents of the first file (which overflows the
> space for snapshots), taking a 5th snapshot, and waiting about
> 1-2 min, the files from the 1st snapshot still appear in the mount
> point, but any kind of read (e.g. md5sum) gives an I/O error from
> ddsnap
> 6. so now the 1st snapshot is unavailable, but zumastor does not
> report this to the user, so I have no way of knowing that this
> snapshot is now unavailable
> 7. after restarting the zumastor daemon the 1st snapshot is still
> unavailable, but now zumastor reports this by removing the mount point
> directory of the 1st snapshot and marking the symlink as unavailable


Sorry for the delay in responding, but most of the zumastor team is busily
working on Tux3 right now (www.tux3.org).
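
Just to make sure I follow, your test comes out to roughly the script
below.  This is only a sketch: the volume name is from your report, but
the mount paths and the "hourly" snapshot name are assumptions based on
a default install, so adjust them to match your setup.

  #!/bin/sh
  VOL=zumatest
  MNT=/var/run/zumastor/mount/$VOL      # assumed default mount point

  # Steps 2-4: write four 200MB files, snapshotting after each one,
  # then add a 400MB file.
  for i in 1 2 3 4; do
      dd if=/dev/urandom of=$MNT/file$i bs=1M count=200
      md5sum $MNT/file$i
      zumastor snapshot $VOL hourly
  done
  dd if=/dev/urandom of=$MNT/big bs=1M count=400

  # Step 5: rewrite file1 in place, forcing its old blocks to be
  # copied out and overflowing the 1GB snapshot device.
  dd if=/dev/urandom of=$MNT/file1 bs=1M count=200 conv=notrunc
  zumastor snapshot $VOL hourly
  sleep 120

  # Reads from the oldest snapshot now fail with an I/O error from
  # ddsnap; the snapshot path naming here is also an assumption.
  md5sum /var/run/zumastor/snapshot/$VOL/hourly.4/file1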

What you are experiencing seems to be the expected behavior with what we
call "squashing" snapshots.  The problem is that zumastor is trying to
save more data in the snapshot store than there is room for, so
something has to give.  In the default case, the oldest snapshot is
squashed: its data is dropped and reads from it fail, exactly as you saw.
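
You can watch this happening from the ddsnap side.  A rough sketch (the
server socket path here is an assumption based on a default install):

  # Show snapshot store usage and the snapshots the server still has.
  ddsnap status /var/run/zumastor/servers/zumatest
  ddsnap list /var/run/zumastor/servers/zumatest

Run before and after the 5th snapshot, the oldest snapshot should drop
out of the list once it has been squashed.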

It is possible to make a snapshot "un-squashable" by setting the
priority on it to the highest value (127).  As you would expect, the
snapshots of higher priority will be removed last, and those with the
highest possible value will never be squashed (I/O to the origin fails
rather than allowing the snapshot to be squashed).
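
With ddsnap that should look something like the following (again, the
socket path and the snapshot tag are assumptions for your setup):

  # Pin snapshot tag 0 at the maximum priority (127) so it is never
  # squashed; writes to the origin fail instead when the snapshot
  # store fills up.  Socket path and tag are assumptions.
  ddsnap priority /var/run/zumastor/servers/zumatest 0 127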

Another issue (especially if this is a new filesystem) is that you may
be snapshotting what is "free space" as far as the filesystem is
concerned.  We have had some discussion about adding hooks so that
blocks the filesystem considers free are not copied out, which would
save snapshot space.

A better solution is sharing free space between the origin and the
snapshot store (which is one of the things Tux3 is designed to do), and
that technology should eventually be ported back to zumastor.  It is
also a performance optimization, because the origin is "re-mapped".
The downside is that you won't be able to disable zumastor and access
the origin as you normally would.

Do you have any thoughts on how we could make zumastor more
user-friendly in this respect (squashing snapshots, how to notify the
user about them, and so on)?

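In the meantime, a crude external watchdog is one way to get notified.
A sketch only (the socket path is an assumption, and it merely detects
that the snapshot list changed, not why):

  #!/bin/sh
  # Hypothetical watchdog: log a message whenever the set of
  # snapshots changes, e.g. because one was squashed.
  SOCK=/var/run/zumastor/servers/zumatest   # assumed default path
  while true; do
      ddsnap list "$SOCK" > /tmp/snaps.now
      if [ -f /tmp/snaps.last ] && ! cmp -s /tmp/snaps.now /tmp/snaps.last
      then
          logger -t zumastor "snapshot list changed on zumatest"
      fi
      mv /tmp/snaps.now /tmp/snaps.last
      sleep 60
  done
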
Thanks for the report!
