> I did those tests and here are the results:
> 
> r...@sl-node01:~# zfs list
> NAME                            USED  AVAIL  REFER  MOUNTPOINT
> mypool01                       91.9G   136G    23K  /mypool01
> mypool01/storage01             91.9G   136G  91.7G  /mypool01/storage01
> mypool01/storage01@30032010-1      0      -  91.9G  -
> mypool01/storage01@30032010-2      0      -  91.9G  -
> mypool01/storage01@30032010-3  2.15M      -  91.7G  -
> mypool01/storage01@30032010-4    41K      -  91.7G  -
> mypool01/storage01@30032010-5  1.17M      -  91.7G  -
> mypool01/storage01@30032010-6      0      -  91.7G  -
> mypool02                       91.9G   137G    24K  /mypool02
> mypool02/copies                  23K   137G    23K  /mypool02/copies
> mypool02/storage01             91.9G   137G  91.9G  /mypool02/storage01
> mypool02/storage01@30032010-1      0      -  91.9G  -
> mypool02/storage01@30032010-2      0      -  91.9G  -
> 
> As you can see, I have differences for snapshots 4, 5, and 6, as you
> suggested for the test. But I can also see changes on snapshot no. 3
> - that is the snapshot I was asking about, because I could not see any
> differences on it last night! Now it shows them.

Well, the first thing you should know is this:  Suppose you take a snapshot
and then create some files.  The snapshot still occupies no disk space;
everything is in the current filesystem.  The only time a snapshot occupies
disk space is when it contains data that is missing from the current
filesystem.  That is, if you "rm" or overwrite some files in the current
filesystem, then you will see the size of the snapshot growing.
Make sense?
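
Here is a quick way to watch that happen.  This is just a sketch - the file
name, size, and snapshot name below are made up, so substitute your own:

  # mkfile 100m /mypool01/storage01/testfile
  # zfs snapshot mypool01/storage01@before
  # zfs list -t snapshot
      (@before shows 0 USED - the file still lives in the current filesystem)
  # rm /mypool01/storage01/testfile
  # zfs list -t snapshot
      (@before now shows roughly 100M USED, because the snapshot is the only
       thing still referencing that data)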

That brings up a question, though.  If you ran the commands as I wrote them,
you would have created a 1G file, taken a snapshot, and rm'd the file.
Therefore your snapshot should contain at least 1G.  I am confused by the
fact that you only have 1-2M in your snapshots.  Maybe I messed up the
command I gave you, or you mistyped it on the system, and you only created
a 1M file instead of a 1G file?
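
If you want to double-check, you could redo the test with an explicit size.
Again just a sketch - the file and snapshot names are placeholders, and I
may be misremembering exactly what I suggested before:

  # mkfile 1g /mypool01/storage01/bigfile
  # zfs snapshot mypool01/storage01@sizetest
  # rm /mypool01/storage01/bigfile
  # zfs list -t snapshot

Afterward, @sizetest should show about 1G USED.  If "mkfile 1m" was typed
instead of "mkfile 1g" the first time, that would explain the 1-2M you are
seeing now.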


> What is still strange: snapshots 1 and 2 are the oldest, but they are
> still equal to zero! After the changes and snapshots 3, 4, 5, and 6 I
> would expect that snapshots 1 and 2 would also be "recording" changes
> on the storage01 file system, but they are not... could it be that
> snapshots 1 and 2 are somehow "broken"?

If some file existed during all of the old snapshots and you destroy your
later snapshots, then the space occupied by the later snapshots will start
to fall onto the older snapshots, until you destroy the oldest snapshot that
contained the data.  At that point, the data is truly gone from all of the
snapshots.
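
For example (the snapshot names here are invented, not yours):  Suppose a 1G
file existed when both @old and @new were taken, and you then rm'd it from
the live filesystem.  Neither snapshot shows that 1G as USED, because the
space is shared between them rather than unique to either one.  Then:

  # zfs destroy mypool01/storage01@new
  # zfs list -t snapshot
      (@old's USED jumps by about 1G - it is now the only snapshot
       referencing that data)
  # zfs destroy mypool01/storage01@old
  # zfs list mypool01
      (only now is the 1G actually returned to the pool)

That shared-versus-unique accounting is probably also why your snapshots 1
and 2 can legitimately show 0.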
