I believe I'm in a very similar situation to yours.
Have you figured something out?
Good idea.
I will keep this test in mind. I'd do it immediately, except that connecting power to
the drives would be somewhat awkward given the design of my chassis, but I'm sure I
can figure something out if it comes to it...
You should look at your disk I/O patterns, which will likely lead you to find unset
I/O queues in sd.conf. Look at
http://blogs.sun.com/chrisg/entry/latency_bubble_in_your_io as a place to start.
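For reference, a minimal sketch of what such an sd.conf entry can look like; the
vendor/product string and the values are placeholders, and the exact syntax varies
between Solaris/OpenSolaris releases:

  # /kernel/drv/sd.conf -- example only; adjust vendor/product string and values for your hardware
  sd-config-list = "SEAGATE ST3500320NS", "throttle-max:32, disksort:false";

After editing, the driver configuration can be re-read with update_drv -vf sd (or a
reboot) before re-checking the I/O pattern.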
Any idea why I would get this message from the dtrace script?
(I'm new to dtrace /
I'm about to do some testing with that dtrace script.
However, in the meantime I've disabled the primary cache (set primarycache=none),
since I noticed it was easily caching /dev/zero and I wanted to run some tests within
the OS rather than over FC.
I am getting the same results through dd.
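For context, a minimal sketch of the commands involved (the dataset name, file name
and sizes here are placeholders):

  # stop the ARC from caching this dataset so reads hit the disks
  zfs set primarycache=none tank/testfs

  # rough sequential write, then read back, with dd
  dd if=/dev/zero of=/tank/testfs/testfile bs=1M count=4096
  dd if=/tank/testfs/testfile of=/dev/null bs=1M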
In general, ZFS can detect device changes, but we recommend
exporting the pool before you move hardware around.
You might try exporting and importing this pool to see if
ZFS recognizes this device again.
Make sure you have a good backup of this data before you
export it, because it's hard to tell
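A minimal sketch of that export/import sequence (the pool name tank is a placeholder):

  zpool export tank
  zpool import          # with no arguments, lists importable pools and their device status
  zpool import tank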
Thanks for the update, Robert.
Currently I have a failed zpool with its slog missing, which I was unable to
recover, although I was able to find out what the GUID was for the slog
device (below is the output of the zpool import command).
I couldn't compile the logfix binary either, so I've run out of ideas.
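For reference, a rough sketch of the commands that usually come up in this situation;
whether they help depends on the pool version and OS build, and the pool name and
device path below are placeholders:

  zpool import                  # shows the pool state and the GUID of the missing log device
  zdb -l /dev/dsk/c0t0d0s0      # dumps the ZFS labels (including GUIDs) stored on a device
  zpool import -m tank          # on builds that support it, import despite a missing log device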