Raw Device Mapping (RDM) is a feature of ESX 2.5 and above that allows a guest OS
direct access to a LUN on a Fibre Channel or iSCSI SAN.
See http://www.vmware.com/pdf/esx25_rawdevicemapping.pdf for more details.
You may be able to do something similar with raw disks under VMware Workstation.
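If the RDM-backed LUN shows up inside the guest as an ordinary SCSI disk, a pool can be built directly on it. A minimal sketch, assuming a hypothetical device name `c2t0d0` and pool name `tank`:

```shell
# List the disks the guest can see; the RDM LUN should appear
# as a regular disk device
echo | format

# Give the whole RDM-backed disk to ZFS
zpool create tank c2t0d0
```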
I am seeing the same problem using a separate virtual disk for the pool.
This is happening with Solaris 10 U3, U4, and U5.
SCSI reservations are known to be an issue with clustered Solaris guests:
http://blogs.sun.com/SC/entry/clustering_solaris_guests_that_run
I wonder if this is the same problem. Maybe
I added a vdev using RDM, and that seems to be stable across reboots;
however, the pools based on a virtual disk now also seem to be stable after
doing an export and import -f.
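For reference, the export/import workaround described above is roughly the following (the pool name `tank` is a placeholder):

```shell
# Export the pool cleanly, then force-import it so ZFS rewrites
# the device paths recorded in the vdev labels
zpool export tank
zpool import -f tank

# Verify the pool came back healthy
zpool status tank
```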
This message posted from opensolaris.org
___
zfs-discuss mailing list
Hello, I'm having exactly the same situation on one VM but not on another VM on
the same infrastructure.
The only difference is that on the failing VM I initially created the pool with
one name and then changed the mountpoint to another name.
Did you find a solution to the issue?
Should I consider
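For what it's worth, the sequence described above would look something like this (pool name, device, and path are hypothetical):

```shell
# Create the pool under its original name; by default ZFS
# mounts the top-level dataset at /mypool
zpool create mypool c1t1d0

# Later, point the top-level dataset at a different mountpoint
zfs set mountpoint=/data mypool

# The pool name is unchanged; only the mount path differs
zfs get mountpoint mypool
```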
I have a test-bed S10U5 system running under VMware ESX that has a weird
problem.
I have a single virtual disk, with some slices allocated as UFS filesystems
for the operating system, and s7 as a ZFS pool.
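A layout like that would typically have been set up along these lines (the device name `c0d0s7` and pool name `tank` are assumptions):

```shell
# The other slices hold UFS for the OS; slice 7 is
# given to ZFS as the sole vdev of the pool
zpool create tank c0d0s7

# Confirm the pool is online
zpool status tank
```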
Whenever I reboot, the pool fails to open:
May 8 17:32:30 niblet fmd: [ID 441519