On Mar 11, 2010, at 8:27 AM, Andrew <acmcomput...@hotmail.com> wrote:

Ok,

The fault appears to have occurred regardless of the attempts to move to vSphere, as we've now moved the host back to the ESX 3.5 box it came from and the problem still exists.

Looks to me like the fault occurred as a result of a reboot.

Any help and advice would be greatly appreciated.

It appears the RDM might have had something to do with this.

Try a different RDM setting than physical, such as virtual compatibility mode. Or try mounting the disk via the iSCSI initiator inside the VM instead of via RDM.

If you've tried fiddling with the ESX RDM options and it still doesn't work... Inside the Solaris VM, dump the first 128k of the disk to a file using dd, then use a hex editor to find which LBA contains the MBR. It should be LBA 0, but I suspect it will be offset. The GPT will then run from MBR LBA + 1 through MBR LBA + 33. See the Wikipedia entry for the MBR; the boot signature 0x55 0xAA in the last two bytes of the sector is the identifier to search for.
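For illustration, here's a rough Python sketch of that signature search over a dd dump (my own function names; assumes 512-byte sectors, which is what you'd normally have here):

```python
SECTOR = 512  # assumed sector size

def find_mbr_lba(data: bytes) -> int:
    """Scan a raw dump sector by sector for the MBR boot signature
    0x55 0xAA in the last two bytes of a sector. Returns the LBA of
    the first match, or -1 if none is found."""
    for lba in range(len(data) // SECTOR):
        sector = data[lba * SECTOR:(lba + 1) * SECTOR]
        if sector[510:512] == b'\x55\xaa':
            return lba
    return -1

def find_gpt_lba(data: bytes) -> int:
    """Scan for the GPT header signature 'EFI PART' at the start of a
    sector; normally this sits at MBR LBA + 1."""
    for lba in range(len(data) // SECTOR):
        if data[lba * SECTOR:lba * SECTOR + 8] == b'EFI PART':
            return lba
    return -1
```

You'd feed it the file from something like `dd if=/dev/rdsk/... of=disk-head.bin bs=512 count=256`; if find_mbr_lba returns anything other than 0, that return value is your offset.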

There is also a backup GPT in the last 33 sectors of the disk.

Once you find the offset, it is best to dump those 34 sectors (LBA 0-33) to another file. Edit each MBR and GPT entry to account for the offset, then copy those 34 sectors into the first 34 sectors of the disk, and the last 33 sectors of the file to the last 33 sectors of the disk. Rescan, and hopefully it will see the disk.
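The "edit each entry" step means shifting every LBA field in the GPT header by the offset and recomputing the header CRC32, or the kernel will reject it. A rough Python sketch of that rebasing for the header sector (my own helper, assuming 512-byte sectors; the partition entry array and protective MBR entry would need the same treatment):

```python
import struct
import zlib

SECTOR = 512  # assumed sector size

def rebase_gpt_header(hdr: bytes, shift: int) -> bytes:
    """Subtract `shift` sectors from every LBA field of a GPT header
    sector and recompute the header CRC32."""
    h = bytearray(hdr)
    assert h[0:8] == b'EFI PART', 'not a GPT header'
    hdr_size = struct.unpack_from('<I', h, 12)[0]
    # LBA fields: current (offset 24), backup (32), first usable (40),
    # last usable (48), partition entry array start (72)
    for off in (24, 32, 40, 48, 72):
        lba = struct.unpack_from('<Q', h, off)[0]
        struct.pack_into('<Q', h, off, lba - shift)
    struct.pack_into('<I', h, 16, 0)            # zero the CRC field
    crc = zlib.crc32(bytes(h[:hdr_size])) & 0xffffffff
    struct.pack_into('<I', h, 16, crc)          # store the new CRC
    return bytes(h)
```

So if the signature search found the MBR at LBA 63, you'd rebase everything by 63 before writing the 34 sectors back to the start of the disk.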

If the offset is in the other direction, then the disk has been padded, probably with metadata, and you will need to get rid of the RDM and use the iSCSI initiator in the Solaris VM to mount the volume. See how the first 34 sectors look, and if they are damaged, use the backup GPT to reconstruct the primary GPT and recreate the MBR.

-Ross

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss