I tried that, and Workstation does not support it; only Server does.

Detlef
petya wrote:
> Hi,
>
> It will work. You have to create a SCSI vmdk with the independent and
> persistent reservation options, and add it to both of your VMs.
> It is supported in VMware Server 2 (which is free), but I think
> Workstation can do this too; it may involve some manual .vmx editing.
>
> petya
>
> On Wed, 2008-10-29 at 18:09 +0100, Detlef Ulherr wrote:
>
>> kevin wrote:
>>
>>> I am learning parallel programming with MPI these days, so I want to
>>> build a cluster for testing the examples in my textbook.
>>>
>>> I installed two virtual machines running OpenSolaris on VMware
>>> Workstation on my laptop, and I am trying to build a two-node cluster
>>> with them.
>>>
>>> It seems pretty hard for me. Does anyone have any experience or any
>>> useful information on this?
>>>
>> Hi Kevin,
>>
>> You will fail on the shared storage for a two-node cluster, because
>> VMware Workstation does not support SCSI reservations. A single-node
>> cluster works perfectly with VMware Workstation: you can fail over
>> data services between containers. I have been developing this way for
>> several years, and I can tell you from experience that it works.
>>
>> Cheers
>> Detlef
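For anyone who wants to try petya's suggestion anyway, the following is a rough
sketch of the kind of .vmx entries it involves, assuming a second SCSI
controller (scsi1) and a preallocated vmdk at a placeholder path
(shared/quorum.vmdk). Option names and behaviour vary between VMware
Workstation, Server, and ESX, so treat this as an illustration rather than a
verified recipe:

    # Added to the .vmx file of *both* virtual machines (placeholder path and
    # controller number; option support varies by VMware product and version).
    scsi1.present = "TRUE"
    scsi1.virtualDev = "lsilogic"
    scsi1.sharedBus = "virtual"              # share the virtual SCSI bus
    scsi1:0.present = "TRUE"
    scsi1:0.fileName = "shared/quorum.vmdk"
    scsi1:0.mode = "independent-persistent"
    scsi1:0.deviceType = "disk"
    disk.locking = "FALSE"                   # let both VMs open the same vmdk
    diskLib.dataCacheMaxSize = "0"           # write through, no host caching

Even with entries like these, Detlef's point stands: if the product does not
emulate SCSI reservations, the cluster framework's fencing will not work, so on
Workstation a single-node cluster with failover between containers remains the
practical choice.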