I am in a rather unique situation. I've inherited a zpool composed of
two vdevs. One vdev is roughly 9TB on one RAID 5 array, and the
other vdev is roughly 2TB on a different RAID 5 array. The 9TB
array crashed and was sent to a data recovery firm, and they've given
me a dd image. I've been trying to import the pool from that image.
UPDATE (for those following along at home)...
After patching to the latest and greatest Solaris 10 kernel update,
and getting the firmware on both OS drives (72 GB SAS) and the server
updated to latest and greatest, Oracle has now officially declared it
a bug (CR#7082249). No word yet on when I'll hear more.
Hi Kelsey,

I haven't had to do this myself, so someone who has done this
before might have a better suggestion.

I wonder if you need to make links from the original device
names to the new device names.

You can see from the zdb -l output below that the device paths
are pointing to the original device names (really long ones).
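In concrete terms, that suggestion would presumably look something like
the sketch below. The image path is taken from later in this thread;
the directory and the original device name c1t2d0s0 are hypothetical
stand-ins for whatever zdb -l actually reports.

# mkdir /var/tmp/olddevs
# ln -s /jbod1-diskbackup/restore/deep_Lun0.dd /var/tmp/olddevs/c1t2d0s0   # c1t2d0s0 is hypothetical
# zpool import -d /var/tmp/olddevs

(As it turns out further down the thread, plain file links are not
enough here, because the vdevs are recorded as type 'disk'.)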
On 24 August, 2011 - Kelsey Damas sent me these 1,8K bytes:
> On Wed, Aug 24, 2011 at 1:23 PM, Cindy Swearingen
> <cindy.swearin...@oracle.com> wrote:
>> I wonder if you need to make links from the original device
>> names to the new device names.
Just for fun, try an absolute path.
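Presumably something along these lines (a sketch; the pool name tank
is a hypothetical stand-in):

# zpool import -d /jbod1-diskbackup/restore tank   # tank is hypothetical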
Thank you again for the suggestions. I was able to make this work
by using lofiadm to mount the images. Then, be sure to give zpool the
-d flag so it scans /dev/lofi:
# lofiadm -a /jbod1-diskbackup/restore/deep_Lun0.dd
/dev/lofi/1
# lofiadm -a
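With both images attached, the import step is then presumably along
these lines (a sketch; the pool name tank is hypothetical, since only
deep_Lun0.dd appears above):

# zpool import -d /dev/lofi        # scan the lofi devices instead of /dev/dsk
# zpool import -d /dev/lofi tank   # import by name; tank is hypothetical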
markm wrote:
> Because the vdev tree is calling them 'disk', zfs is attempting to
> open them using disk i/o instead of file i/o.

This was correct, thank you. lofiadm was useful for loopback-mounting
the image files to provide disk i/o.
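A quick way to verify the disk-i/o path is working (a sketch, assuming
the /dev/lofi/1 device from the lofiadm output above):

# zdb -l /dev/lofi/1   # dump the four vdev labels through the block device

If the labels print cleanly here, zpool import -d /dev/lofi should be
able to find the vdev.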
ZFS has much more opportunity to recover from device failure when it
manages the redundancy itself (mirrors or raidz) than when each vdev
is a single LUN on a hardware RAID 5 array.