Re: Zumastor snapshot problem

2008-11-20 Thread gato




> On Wed, Nov 19, 2008 at 02:10:05AM -0800, gato wrote:
>
> > On 18 Nov, 20:29, Shapor Naghibzadeh <[EMAIL PROTECTED]> wrote:
> > > On Thu, Nov 13, 2008 at 03:37:28AM -0800, szhocker wrote:
>
> > > > Hi,
> > > > I have been testing zumastor since release 0.4 and there is a problem
> > > > with the current release.
> > > > I've checked zumastor's behaviour when the snapshot space is overfilled.
> > > > The data volume size is 20GB and the snapshot device size is 1GB.
> > > > The test looks like this:
> > > > 1. Start zumastor with volume "zumatest" and the sizes given above.
> > > > 2. Create a 200MB file on volume zumatest.
> > > > 3. Take a snapshot and check the md5 of the file in the snapshot and
> > > > on the volume.
> > > > 4. Repeat steps 2 and 3 three more times to get 4 files of 200MB each,
> > > > then create one 400MB file.
> > > > 5. After changing the contents of the first file (which overflows the
> > > > snapshot space), taking a 5th snapshot, and waiting about 1-2 minutes,
> > > > the files from the 1st snapshot still appear in the mount point, but
> > > > any read such as md5sum gives an I/O error from ddsnap.
> > > > 6. So now the 1st snapshot is unavailable, but zumastor does not
> > > > report this to the user, so there is no way to know that it is gone.
> > > > 7. After restarting the zumastor daemon the 1st snapshot is still
> > > > unavailable, but now zumastor reports it by removing the 1st
> > > > snapshot's mountpoint directory and marking the symlink as unavailable.
>
> > > Hi,
>
> > > Sorry for the delay in response, but most of the zumastor team is busily
> > > working on Tux3 right now (www.tux3.org).
>
> > > What you are experiencing seems to be the expected behavior with what we
> > > call "squashing" snapshots.  The problem is that zumastor is trying to
> > > save more data in the snapshot store than there is room for, so
> > > something has to give.  In the default case, the oldest snapshot is
> > > removed.
>
> > > It is possible to make a snapshot "un-squashable" by setting the
> > > priority on it to the highest value (127).  As you would expect, the
> > > snapshots of higher priority will be removed last.  And those with the
> > > highest possible value will never be squashed (failing IO to the origin
> > > rather than allowing squashing).
>
> > > Another issue (especially if this is a new filesystem) is that you may
> > > be snapshotting what is "free space" as far as the filesystem is
> > > concerned.  We had some discussion about adding some hooks to avoid
> > > doing this copy out to save snapshot space.
>
> > > A better solution is sharing free space between the origin and snapshot
> > > (which is one of the things Tux3 is designed to do), and that technology
> > > should eventually be ported back to zumastor.  This is also a
> > > performance optimization because the origin is "re-mapped".  There is
> > > the downside that you won't be able to disable zumastor and access the
> > > origin as you normally would.
>
> > > Do you have any thoughts on how we could make zumastor more
> > > user-friendly in this respect? (squashing snapshots, how to notify user,
> > > etc).
>
> > > Thanks for the report!
> > > Shapor
>
> > Hi,
> > Thanks for your answer. Could you tell me how to set the priority for
> > a snapshot? I can't find it in the documentation.
> > I think that to make Zumastor more user-friendly it should notify the
> > user when the snapshot store is full and remove unavailable snapshots
> > without requiring a daemon restart.
> > I'm going to continue testing Zumastor.
> > Best regards
> > gato
>
> The priority feature is documented in the ddsnap man page "man ddsnap".
> It is not exposed via the zumastor script.  I think it would make sense
> to add a "prefer snapshots" kind of option to "define master" so you can
> inform zumastor to set snapshots of a particular rotation to
> "unsquashable".
>
> The ddsnap syntax would be something like:
> ddsnap priority /var/run/zumastor/servers/  
>
> Do you have any thoughts on how we could make it more useful, perhaps:
> zumastor define volume  --nosquash
> or for a particular rotation:
> zumastor define master ... --daily 7 --nosquash
>
> Regards,
> Shapor

Hi,
I have been testing ddsnap priority and it doesn't work.
I used the test from my friend shocker, but I set priority 127 on
snapshot0 and priority 1 (or 0) on the subsequent snapshots.
After changing the contents of the first file (which overflows the
snapshot space) and taking the 5th snapshot, I got I/O errors and
other messages such as: "incoming: Snapshot 8 read failure" and
"replied_rw:Unable to handle pending IO Serverr 270ff80".
The mount point of the logical volume managed by zumastor is lost,
and after restarting zumastor all snapshots and all data on the
logical volume disappear.
Do you have any idea?
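
(As an illustration only: the server socket path, the snapshot
identifiers and the argument order of "ddsnap priority" below are
assumptions based on the man page, not the actual commands from this
test.)

    SOCK=/var/run/zumastor/servers/zumatest   # hypothetical server socket
    ddsnap priority $SOCK 0 127               # first snapshot: never squash
    ddsnap priority $SOCK 1 1                 # later snapshots: squash first
    ddsnap priority $SOCK 2 1
    ddsnap priority $SOCK 3 1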

I think something like this would be more useful:
zumastor define volume zumatest ... --nosquash
and something to change it on the fly:
zumastor snapshot zumatest ... --squash

or the other way around.

I have another idea. In Zumastor should

Regards,
gato

Re: Zumastor snapshot problem

2008-11-19 Thread Shapor Naghibzadeh

On Wed, Nov 19, 2008 at 02:10:05AM -0800, gato wrote:
> 
> 
> 
> > On 18 Nov, 20:29, Shapor Naghibzadeh <[EMAIL PROTECTED]> wrote:
> > On Thu, Nov 13, 2008 at 03:37:28AM -0800, szhocker wrote:
> >
> > > Hi,
> > > I have been testing zumastor since release 0.4 and there is a problem
> > > with the current release.
> > > I've checked zumastor's behaviour when the snapshot space is overfilled.
> > > The data volume size is 20GB and the snapshot device size is 1GB.
> > > The test looks like this:
> > > 1. Start zumastor with volume "zumatest" and the sizes given above.
> > > 2. Create a 200MB file on volume zumatest.
> > > 3. Take a snapshot and check the md5 of the file in the snapshot and
> > > on the volume.
> > > 4. Repeat steps 2 and 3 three more times to get 4 files of 200MB each,
> > > then create one 400MB file.
> > > 5. After changing the contents of the first file (which overflows the
> > > snapshot space), taking a 5th snapshot, and waiting about 1-2 minutes,
> > > the files from the 1st snapshot still appear in the mount point, but
> > > any read such as md5sum gives an I/O error from ddsnap.
> > > 6. So now the 1st snapshot is unavailable, but zumastor does not
> > > report this to the user, so there is no way to know that it is gone.
> > > 7. After restarting the zumastor daemon the 1st snapshot is still
> > > unavailable, but now zumastor reports it by removing the 1st
> > > snapshot's mountpoint directory and marking the symlink as unavailable.
> >
> > Hi,
> >
> > Sorry for the delay in response, but most of the zumastor team is busily
> > working on Tux3 right now (www.tux3.org).
> >
> > What you are experiencing seems to be the expected behavior with what we
> > call "squashing" snapshots.  The problem is that zumastor is trying to
> > save more data in the snapshot store than there is room for, so
> > something has to give.  In the default case, the oldest snapshot is
> > removed.
> >
> > It is possible to make a snapshot "un-squashable" by setting the
> > priority on it to the highest value (127).  As you would expect, the
> > snapshots of higher priority will be removed last.  And those with the
> > highest possible value will never be squashed (failing IO to the origin
> > rather than allowing squashing).
> >
> > Another issue (especially if this is a new filesystem) is that you may
> > be snapshotting what is "free space" as far as the filesystem is
> > concerned.  We had some discussion about adding some hooks to avoid
> > doing this copy out to save snapshot space.
> >
> > A better solution is sharing free space between the origin and snapshot
> > (which is one of the things Tux3 is designed to do), and that technology
> > should eventually be ported back to zumastor.  This is also a
> > performance optimization because the origin is "re-mapped".  There is
> > the downside that you won't be able to disable zumastor and access the
> > origin as you normally would.
> >
> > Do you have any thoughts on how we could make zumastor more
> > user-friendly in this respect? (squashing snapshots, how to notify user,
> > etc).
> >
> > Thanks for the report!
> > Shapor
> 
> Hi,
> Thanks for your answer. Could you tell me how to set the priority for
> a snapshot? I can't find it in the documentation.
> I think that to make Zumastor more user-friendly it should notify the
> user when the snapshot store is full and remove unavailable snapshots
> without requiring a daemon restart.
> I'm going to continue testing Zumastor.
> Best regards
> gato

The priority feature is documented in the ddsnap man page "man ddsnap".
It is not exposed via the zumastor script.  I think it would make sense
to add a "prefer snapshots" kind of option to "define master" so you can
inform zumastor to set snapshots of a particular rotation to
"unsquashable".

The ddsnap syntax would be something like:
ddsnap priority /var/run/zumastor/servers/  

Do you have any thoughts on how we could make it more useful, perhaps:
zumastor define volume  --nosquash
or for a particular rotation:
zumastor define master ... --daily 7 --nosquash
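
(Purely hypothetical sketch: if such a --nosquash flag existed, the
snapshot-rotation hook could simply raise the priority of each new
snapshot in that rotation. The socket path, variable names and argument
order here are assumptions, not existing zumastor behaviour.)

    VOL=examplevol      # hypothetical volume name
    SNAP=5              # hypothetical id of the snapshot just created
    # mark the new snapshot in this rotation unsquashable, so overflow
    # fails origin writes instead of discarding it
    ddsnap priority /var/run/zumastor/servers/$VOL $SNAP 127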

Regards,
Shapor




Re: Zumastor snapshot problem

2008-11-19 Thread gato



On 18 Nov, 20:29, Shapor Naghibzadeh <[EMAIL PROTECTED]> wrote:
> On Thu, Nov 13, 2008 at 03:37:28AM -0800, szhocker wrote:
>
> > Hi,
> > I have been testing zumastor since release 0.4 and there is a problem
> > with the current release.
> > I've checked zumastor's behaviour when the snapshot space is overfilled.
> > The data volume size is 20GB and the snapshot device size is 1GB.
> > The test looks like this:
> > 1. Start zumastor with volume "zumatest" and the sizes given above.
> > 2. Create a 200MB file on volume zumatest.
> > 3. Take a snapshot and check the md5 of the file in the snapshot and
> > on the volume.
> > 4. Repeat steps 2 and 3 three more times to get 4 files of 200MB each,
> > then create one 400MB file.
> > 5. After changing the contents of the first file (which overflows the
> > snapshot space), taking a 5th snapshot, and waiting about 1-2 minutes,
> > the files from the 1st snapshot still appear in the mount point, but
> > any read such as md5sum gives an I/O error from ddsnap.
> > 6. So now the 1st snapshot is unavailable, but zumastor does not
> > report this to the user, so there is no way to know that it is gone.
> > 7. After restarting the zumastor daemon the 1st snapshot is still
> > unavailable, but now zumastor reports it by removing the 1st
> > snapshot's mountpoint directory and marking the symlink as unavailable.
>
> Hi,
>
> Sorry for the delay in response, but most of the zumastor team is busily
> working on Tux3 right now (www.tux3.org).
>
> What you are experiencing seems to be the expected behavior with what we
> call "squashing" snapshots.  The problem is that zumastor is trying to
> save more data in the snapshot store than there is room for, so
> something has to give.  In the default case, the oldest snapshot is
> removed.
>
> It is possible to make a snapshot "un-squashable" by setting the
> priority on it to the highest value (127).  As you would expect, the
> snapshots of higher priority will be removed last.  And those with the
> highest possible value will never be squashed (failing IO to the origin
> rather than allowing squashing).
>
> Another issue (especially if this is a new filesystem) is that you may
> be snapshotting what is "free space" as far as the filesystem is
> concerned.  We had some discussion about adding some hooks to avoid
> doing this copy out to save snapshot space.
>
> A better solution is sharing free space between the origin and snapshot
> (which is one of the things Tux3 is designed to do), and that technology
> should eventually be ported back to zumastor.  This is also a
> performance optimization because the origin is "re-mapped".  There is
> the downside that you won't be able to disable zumastor and access the
> origin as you normally would.
>
> Do you have any thoughts on how we could make zumastor more
> user-friendly in this respect? (squashing snapshots, how to notify user,
> etc).
>
> Thanks for the report!
> Shapor

Hi,
Thanks for your answer. Could you tell me how to set the priority for
a snapshot? I can't find it in the documentation.
I think that to make Zumastor more user-friendly it should notify the
user when the snapshot store is full and remove unavailable snapshots
without requiring a daemon restart.
I'm going to continue testing Zumastor.
Best regards
gato





Re: Zumastor snapshot problem

2008-11-18 Thread Shapor Naghibzadeh

On Thu, Nov 13, 2008 at 03:37:28AM -0800, szhocker wrote:
> 
> Hi,
> I have been testing zumastor since release 0.4 and there is a problem
> with the current release.
> I've checked zumastor's behaviour when the snapshot space is overfilled.
> The data volume size is 20GB and the snapshot device size is 1GB.
> The test looks like this:
> 1. Start zumastor with volume "zumatest" and the sizes given above.
> 2. Create a 200MB file on volume zumatest.
> 3. Take a snapshot and check the md5 of the file in the snapshot and
> on the volume.
> 4. Repeat steps 2 and 3 three more times to get 4 files of 200MB each,
> then create one 400MB file.
> 5. After changing the contents of the first file (which overflows the
> snapshot space), taking a 5th snapshot, and waiting about 1-2 minutes,
> the files from the 1st snapshot still appear in the mount point, but
> any read such as md5sum gives an I/O error from ddsnap.
> 6. So now the 1st snapshot is unavailable, but zumastor does not
> report this to the user, so there is no way to know that it is gone.
> 7. After restarting the zumastor daemon the 1st snapshot is still
> unavailable, but now zumastor reports it by removing the 1st
> snapshot's mountpoint directory and marking the symlink as unavailable.
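
(A rough way the test above might be scripted. The mount point and the
"zumastor snapshot" invocation are assumptions, not commands taken from
the report.)

    VOL=zumatest
    MNT=/var/run/zumastor/mount/$VOL        # hypothetical mount point
    for i in 1 2 3 4; do
        dd if=/dev/urandom of=$MNT/file$i bs=1M count=200
        md5sum $MNT/file$i                  # compare later with the snapshot copy
        zumastor snapshot $VOL hourly       # assumed manual-snapshot syntax
    done
    dd if=/dev/urandom of=$MNT/file5 bs=1M count=400
    # overwrite file1 so the 1GB snapshot store overflows, then snapshot again
    dd if=/dev/urandom of=$MNT/file1 bs=1M count=200 conv=notrunc
    zumastor snapshot $VOL hourly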

Hi,

Sorry for the delay in response, but most of the zumastor team is busily
working on Tux3 right now (www.tux3.org).

What you are experiencing seems to be the expected behavior with what we
call "squashing" snapshots.  The problem is that zumastor is trying to
save more data in the snapshot store than there is room for, so
something has to give.  In the default case, the oldest snapshot is
removed.

It is possible to make a snapshot "un-squashable" by setting the
priority on it to the highest value (127).  As you would expect, the
snapshots of higher priority will be removed last.  And those with the
highest possible value will never be squashed (failing IO to the origin
rather than allowing squashing).
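
(A minimal concrete sketch, assuming "ddsnap priority" takes the server
socket, a snapshot id and the new priority. The socket path and the
snapshot id here are placeholders.)

    # pin snapshot 0 at the maximum priority so it is never squashed;
    # on overflow, writes to the origin will fail instead
    ddsnap priority /var/run/zumastor/servers/examplevol 0 127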

Another issue (especially if this is a new filesystem) is that you may
be snapshotting what is "free space" as far as the filesystem is
concerned.  We had some discussion about adding some hooks to avoid
doing this copy out to save snapshot space.

A better solution is sharing free space between the origin and snapshot
(which is one of the things Tux3 is designed to do), and that technology
should eventually be ported back to zumastor.  This is also a
performance optimization because the origin is "re-mapped".  There is
the downside that you won't be able to disable zumastor and access the
origin as you normally would.

Do you have any thoughts on how we could make zumastor more
user-friendly in this respect? (squashing snapshots, how to notify user,
etc).

Thanks for the report!
Shapor
