Hi all,
Great news - by attaching an identically sized RDM to the server and then grabbing
the first 128K using the command you specified, Ross:
dd if=/dev/rdsk/c8t4d0p0 of=~/disk.out bs=512 count=256
we then injected this into the faulted RDM and, lo and behold, the
volume recovered!
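For the archives, here's the same round trip sketched on scratch files rather than the real /dev/rdsk device nodes (the /tmp file names are made up for illustration; on the real system the if/of arguments were the RDM devices):

```shell
# a healthy 1 MiB "disk" and a zeroed-out "faulted" one
dd if=/dev/urandom of=/tmp/good.img bs=512 count=2048 2>/dev/null
dd if=/dev/zero of=/tmp/faulted.img bs=512 count=2048 2>/dev/null

# grab the first 128K (256 sectors of 512 bytes), as in the command above
dd if=/tmp/good.img of=/tmp/disk.out bs=512 count=256 2>/dev/null

# inject it into the start of the faulted disk; conv=notrunc keeps dd
# from truncating the target after the first 128K
dd if=/tmp/disk.out of=/tmp/faulted.img bs=512 count=256 conv=notrunc 2>/dev/null
```

When testing on plain files, conv=notrunc is the important flag on the restore side; writing to a raw device node it's harmless.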
Hi again,
Out of interest, could this problem have been avoided if the ZFS configuration
didn't rely on a single disk, i.e. RAIDZ, etc.?
Thanks
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
Ok,
The fault appears to have occurred regardless of the attempts to move to
vSphere as we've now moved the host back to ESX 3.5 from whence it came and the
problem still exists.
Looks to me like the fault occurred as a result of a reboot.
Any help and advice would be greatly appreciated.
On Mar 11, 2010, at 8:27 AM, Andrew <acmcomput...@hotmail.com> wrote:
> Ok,
> The fault appears to have occurred regardless of the attempts to
> move to vSphere as we've now moved the host back to ESX 3.5 from
> whence it came and the problem still exists.
> Looks to me like the fault occurred as a
Hi Ross,
Thanks for your advice.
I've tried presenting it as both Virtual and Physical, but sadly to no avail. I'm
guessing that if it was going to work, then a quick zpool import or zpool status
should at the very least show me the data pool that's gone missing.
The RDM is from a FC SAN so unfortunately I can't
Hi Ross,
Ok - as a Solaris newbie, I'm going to need your help.
Format produces the following:
c8t4d0 (VMware-Virtualdisk-1.0 cyl 65268 alt 2 hd 255 sec 126)
/p...@0,0/pci15ad,1...@10/s...@4,0
What dd command do I need to run to reference this disk? I've tried
/dev/rdsk/c8t4d0 and
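(For anyone searching the archives: on x86 Solaris the whole-disk raw node gets a p0 suffix under /dev/rdsk, which matches the /dev/rdsk/c8t4d0p0 path used in the working dd command elsewhere in this thread. A trivial sketch of the mapping:)

```shell
# 'format' shows the disk as c8t4d0; the raw whole-disk node on x86
# Solaris lives under /dev/rdsk with a p0 (whole disk) suffix.
disk=c8t4d0
rawdev="/dev/rdsk/${disk}p0"
echo "$rawdev"
```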
On Mar 11, 2010, at 12:31 PM, Andrew <acmcomput...@hotmail.com> wrote:
> Hi Ross,
> Ok - as a Solaris newbie.. i'm going to need your help.
> Format produces the following:-
> c8t4d0 (VMware-Virtualdisk-1.0 cyl 65268 alt 2 hd 255 sec 126) /
> p...@0,0/pci15ad,1...@10/s...@4,0
> what dd command do I