> Snapshots are not on a per-pool basis but a
> per-file-system basis.  Thus, when you took a
> snapshot of "testpol", you didn't actually snapshot
> the pool; rather, you took a snapshot of the top
> level file system (which has an implicit name
> matching that of the pool).
> 
> Thus, you haven't actually affected file systems fs1
> or fs2 at all.
> 
> However, apparently you were able to roll back the
> file system, which either unmounted or broke the
> mounts to fs1 and fs2.  This probably shouldn't have
> been allowed.  (I wonder what would happen with an
> explicit non-ZFS mount to a ZFS directory which is
> removed by a rollback?)

Yes, taking a snapshot directly on the pool (i.e. on its top-level file system) should probably not be allowed.
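For what it's worth, here is a hedged sketch of the distinction the quoted reply is making, using the dataset names from this thread ("testpol", "fs1", "fs2" are taken from the earlier posts, not verified here):

```shell
# Snapshots only the top-level file system "testpol", NOT fs1 or fs2
# (this is what happened in the original post):
zfs snapshot testpol@snap1

# Snapshots testpol AND every descendant file system recursively,
# which is usually what people mean by "snapshot the pool":
zfs snapshot -r testpol@snap1

# List the snapshots that actually exist, to confirm:
zfs list -t snapshot
```

With `-r`, each child gets its own `@snap1` snapshot, so a later rollback can be done per file system instead of on the top-level dataset.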

> Your fs1 and fs2 file systems still exist, but
> they're not attached to their old names any more.
> Maybe they got unmounted. You could probably mount
> them, either on the fs1 directory and on a new fs2
> directory if you create one, or at a different point
> in your file system hierarchy.
> 
You are right, they got unmounted.
zfs get mounted testpol/fs1 ---------> says no
zfs get mounted testpol/fs2 ---------> says no

I understand that the "mounted" attribute is a read-only property of a ZFS 
file system.
I tried to mount fs1 and fs2, but I was unsuccessful.
Is there a specific way to mount ZFS file systems?
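As a sketch of the usual approach (dataset names are the ones from this thread; exact behavior after a rollback may differ):

```shell
# Mount a specific file system at its configured mountpoint:
zfs mount testpol/fs1
zfs mount testpol/fs2

# Or mount every mountable ZFS file system in one go:
zfs mount -a

# If mounting fails, check where each one is supposed to mount
# and whether it is allowed to mount at all:
zfs get mountpoint,canmount testpol/fs1 testpol/fs2
```

If the rollback left a plain directory in the way of a mountpoint, `zfs mount` will typically refuse to mount over a non-empty directory, which could explain the failure.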

I have observed another strange behavior. I created the same pool structure 
as discussed in my previous post.
When I roll back the snapshot for the first time, everything seems to work 
perfectly: file systems fs1 and fs2 are not affected.

However, when I roll back the snapshot a second time, the file systems are 
unmounted.
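For reference, this is the sequence as I understand it (a hedged sketch; snapshot and dataset names are illustrative):

```shell
# First rollback of the top-level file system: fs1/fs2 stay mounted.
zfs rollback testpol@snap1

# Second rollback of the same snapshot: fs1/fs2 end up unmounted.
zfs rollback testpol@snap1

# Check the state and try to recover the mounts:
zfs get mounted testpol/fs1 testpol/fs2
zfs mount -a
```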

Any ideas?
-- 
This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
