Re: [zfs-discuss] ZFS Problems under vmware
Raw Device Mapping (RDM) is a feature of ESX 2.5 and above that allows a guest OS direct access to a LUN on a Fibre Channel or iSCSI SAN. See http://www.vmware.com/pdf/esx25_rawdevicemapping.pdf for more details. You may be able to do something similar with raw disks under Workstation; see http://www.vmware.com/support/reference/linux/osonpartition_linux.html

Since I added the RDM to one of my guest OSes, all of them have started working with virtual disks after running:

# zpool export tank
# zpool import -f tank

Maybe adding the RDM changed some behaviour of ESX, or maybe I just got lucky.

This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Re: [zfs-discuss] ZFS Problems under vmware
I am seeing the same problem using a separate virtual disk for the pool. This is happening with Solaris 10 U3, U4 and U5. SCSI reservations are known to be an issue with clustered Solaris: http://blogs.sun.com/SC/entry/clustering_solaris_guests_that_run I wonder if this is the same problem. Maybe we have to use Raw Device Mapping (RDM) to get ZFS to work under VMware.

Anthony Worrall
Re: [zfs-discuss] ZFS Problems under vmware
I added a vdev using RDM, and that seems to be stable over reboots. However, the pools based on a virtual disk now also seem to be stable after doing an export and import -f.
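For anyone following along, the sequence described in this thread can be sketched as below. The pool name tank comes from the earlier post; the device path c1t1d0 is a placeholder for whatever device name the RDM LUN gets inside the guest, so substitute your own. Note that on Solaris 10 a vdev added with zpool add cannot be removed again, so check the device name carefully first.

```shell
# Force-import the pool that lives on a VMware virtual disk.
# -f overrides the "pool may be in use on another system" check.
zpool export tank
zpool import -f tank

# Add a vdev backed by the RDM LUN (c1t1d0 is a hypothetical device name;
# confirm the real one with format(1M) before running this).
zpool add tank c1t1d0

# Verify that all vdevs show as ONLINE.
zpool status tank
```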