It looks like I have some leftovers of old clones that I cannot delete.
The clone name is tank/WinSrv/Latest.
I'm trying:
zfs destroy -f -R tank/WinSrv/Latest
cannot unshare 'tank/WinSrv/Latest': path doesn't exist: unshare(1M) failed
Please help me to get rid of this garbage.
Thanks a lot.
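A hedged sketch of the usual escape route: the unshare(1M) failure generally means a stale share entry is blocking the destroy, so clearing the share properties first may let it proceed. The property values below are assumptions; only the dataset names come from the post.

```shell
# Assumption: the destroy is blocked by a stale NFS/SMB share entry,
# not by a dependent clone. List dependents first to be sure:
zfs list -t all -r tank/WinSrv

# Clear any share properties so ZFS does not try to unshare a path
# that no longer exists:
zfs set sharenfs=off tank/WinSrv/Latest
zfs set sharesmb=off tank/WinSrv/Latest

# Retry the recursive destroy:
zfs destroy -R tank/WinSrv/Latest
```

These commands need a live pool to run, so treat them as a sketch, not a tested recipe.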
--
This is the situation:
I've got an error on one of the drives in 'zpool status' output:
# zpool status tank
  pool: tank
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are unaffected.
action:
I have a zpool like this:
  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          raidz3-0  ONLINE       0     0     0
            c6t0d0  ONLINE       0     0     0
            c6t1d0  ONLINE       0     0
Thank you very much for the answer
Yeah, that's what I was afraid of.
There is something I really cannot understand about zpool structuring...
What role do these 4 drives play in the tank pool with the current
configuration?
If they are not part of the raidz3 array, what is the point for Solaris to
I have a pool with a zvol (OpenSolaris b134).
When I try 'zpool destroy tank' I get "pool is busy":
# zpool destroy -f tank
cannot destroy 'tank': pool is busy
When I try to destroy the zvol first I get "dataset is busy":
# zfs destroy -f tank/macbook0-data
cannot destroy 'tank/macbook0-data': dataset is busy
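A sketch of the usual suspects, assuming the zvol is "busy" because something still has it open: swap/dump devices and COMSTAR iSCSI logical units are the common holders on b134. The device path below is an example built from the dataset name.

```shell
# Is the zvol in use as a swap or dump device?
swap -l
dumpadm

# If it shows up as swap, remove it (example path):
swap -d /dev/zvol/dsk/tank/macbook0-data

# Is it exported as a COMSTAR iSCSI LU (a Mac initiator would fit the name)?
stmfadm list-lu -v
# ...if so, delete the LU with: stmfadm delete-lu <GUID>

# With the holders gone, the destroys should go through:
zfs destroy tank/macbook0-data
zpool destroy tank
```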
Is there any way to run a start-up script before a non-root pool is mounted?
For example, I'm trying to use a ramdisk as a ZIL device (ramdiskadm).
So I need to create the ramdisk before the actual pool is mounted, otherwise it
complains that the log device is missing :)
For sure I can manually remove and re-add it by
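The manual remove/re-add workaround can be sketched like this (the device name and size are made up; log-device removal has been supported since b125, so it applies on b134):

```shell
# Before shutdown: drop the volatile log device so the pool does not
# expect it at the next import.
zpool remove tank /dev/ramdisk/zil0

# After boot (e.g. from an rc script or a custom SMF start method):
ramdiskadm -a zil0 1g                  # recreate the ramdisk
zpool add tank log /dev/ramdisk/zil0   # re-attach it as the ZIL
```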
Any reasoning why?
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Thanks... Now I think I understand...
Let me summarize it, and let me know if I'm wrong.
Disabling the ZIL converts all synchronous calls to asynchronous ones, which
makes ZFS acknowledge data before it has actually been written to stable
storage. That improves performance but might cause recently acknowledged
writes to be lost after a crash or power failure.
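For what it's worth, builds later than b134 expose the same trade-off as a reversible per-dataset property; shown here only to illustrate the semantics (the dataset name is an example):

```shell
zfs set sync=disabled tank/scratch   # sync writes acknowledged from RAM
zfs set sync=standard tank/scratch   # restore normal sync semantics
```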
Thanks. Everything is clear now.
I had a pool on an external drive. Recently the drive failed, but the pool
still shows up when I run 'zpool status'.
Any attempt to remove/delete/export the pool ends in unresponsiveness (the
system is still up and running perfectly; it's just that this specific command
kind of hangs, so I have to open a new ssh
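A hedged sketch of how people usually get unstuck here: first see what the pool is waiting on, and if export/destroy hang forever, keep the dead pool from being opened at boot by moving the cache file aside. The cache path is the standard one; this affects auto-import of every pool, so healthy pools must be re-imported afterwards.

```shell
# See which pool is unhealthy and look at recent I/O fault events:
zpool status -x
fmdump -eV | tail -20

# If 'zpool export -f' and 'zpool destroy -f' both hang, move the cache
# file and reboot; the dead pool is then no longer auto-imported.
mv /etc/zfs/zpool.cache /etc/zfs/zpool.cache.bad
init 6

# After the reboot, re-import the pools you still want:
# zpool import <healthy-pool>
```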