Yes, it seems that mounting and unmounting it with the zfs command clears
the condition and allows the dataset to be destroyed. This seems to be a bug
in zfs, or at least an annoyance. I verified with fuser that no processes
were using the file system.
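
For reference, this is roughly the sequence I used; the dataset name and
mountpoint below are just placeholders, not the real ones:

    # confirm nothing has the file system open
    fuser -c /tank/mydataset

    # mount and unmount with the zfs command, then destroy
    zfs mount tank/mydataset
    zfs unmount tank/mydataset
    zfs destroy tank/mydataset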

Now, what I'd really like to know is: what causes a dataset to get into this
state?
 
 