I am now finally able to reboot nodes in my cluster and they aren't stuck at 'waiting for quorum'.
What works correctly is this: set up the 'public facing' VLANs and IP addresses, plus one VLAN called 'vmltx2', on the on-motherboard NIC (rge0). Leave e1000g0 unconfigured, plugged into an untagged VLAN on the switch.

There was some frustrating difficulty with scinstall NOT running correctly in interactive mode on nodes which had never had a second Intel NIC installed, but once I got the command-line syntax from ONE node I was able to apply it to all nodes.

My command for the first node (mltstore1), as generated by scinstall interactive mode:

    scinstall -i \
      -C mltcluster0 \
      -F \
      -G lofi \
      -T node=mltstore1,node=mltstore0,node=mltproc0,node=mltproc1,authtype=sys \
      -w netaddr=172.16.0.0,netmask=255.255.240.0,maxnodes=64,maxprivatenets=10,numvirtualclusters=12 \
      -A trtype=dlpi,name=e1000g0 -A trtype=dlpi,name=vmltx2 \
      -B type=switch,name=switch1 -B type=switch,name=switch2 \
      -m endpoint=:e1000g0,endpoint=switch1 \
      -m endpoint=:vmltx2,endpoint=switch2 \
      -P task=quorum,state=INIT

And my command for all subsequent nodes, as generated by scinstall interactive mode:

    scinstall -i \
      -C mltcluster0 \
      -N mltstore1 \
      -G lofi \
      -A trtype=dlpi,name=e1000g0 -A trtype=dlpi,name=vmltx2 \
      -m endpoint=:e1000g0,endpoint=switch1 \
      -m endpoint=:vmltx2,endpoint=switch2

In short: DLPI support appears to be required for the private interconnect, but is NOT required on the first boot when joining the cluster; further, DLPI is not required to get TCP/IP connectivity; further, I can't figure out how to get the 'private interconnect over VLAN' piece working. I don't understand at all why 3 of 4 nodes would work and reboot correctly when using the 'non-DLPI' devices.

Of course, perhaps I'm completely misunderstanding all of this... but for the moment my nodes reboot and rejoin the cluster, so I'm going to proceed.

-- This message posted from opensolaris.org
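For anyone retracing these steps, here is a sketch of how I'd create the tagged VLAN link and then sanity-check membership and quorum after a reboot. This assumes OpenSolaris with Crossbow-style dladm and Sun Cluster 3.2 CLI tools; the VLAN ID of 2 is my guess from the 'vmltx2' name and must be adjusted to match your switch configuration:

    # Create the tagged VLAN link on the on-board NIC before running scinstall.
    # (Assumes VLAN ID 2 on rge0 -- substitute your actual VLAN ID and NIC.)
    dladm create-vlan -l rge0 -v 2 vmltx2

    # Confirm the VLAN link exists.
    dladm show-vlan
    dladm show-link vmltx2

    # After a node reboots and joins the cluster, verify membership and quorum.
    clnode status       # every node should report Online
    clquorum status     # quorum votes: Present must meet or exceed Needed
    scstat -q           # older-style equivalent quorum report

If a rebooted node hangs at 'waiting for quorum', the clquorum output from a surviving node is usually the quickest way to see which votes are missing.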