I just gave a remote HVM install a go: dom0 = snv_83, domU = snv_81.
Here's what I did:

First, you need to set a couple of properties in xend:

svccfg -s xvm/xend setprop config/vncpasswd = astring: \"changeme\"
svccfg -s xvm/xend setprop config/vnc-listen = astring: \"0.0.0.0\"
svcadm refresh xvm/xend; svcadm restart xvm/xend
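
To double-check that both values actually landed in the SMF repository,
svcprop can read them back (svcprop is standard SMF tooling; these are the
same property names set above):

svcprop -p config/vncpasswd xvm/xend
svcprop -p config/vnc-listen xvm/xend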

Now I just used virt-install:

virt-install -n nv-81-1 --hvm --vnc \
    -f /dev/zvol/dsk/storage/xvm-nv-81-instance-1 -r 1024 \
    -c /export/xen/install/iso/sol-nv-bld81-x86-dvd.iso
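
For reference, here's what those flags mean (going from virt-install's usage
of that era; the long forms are --name, --file, --ram, and --cdrom):

  -n nv-81-1   name of the new domain
  --hvm        full virtualization (HVM) rather than paravirtualized
  --vnc        export the guest's graphical console over VNC
  -f <path>    disk to install onto (a zvol here)
  -r 1024      guest memory in MB
  -c <iso>     installation media (virtual CD)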

Now, virt-install did complain with an error at this point. But I noticed 
(via netstat -an | grep LISTEN) that port 5900 was open for listening, so I 
tried pointing a remote TightVNC viewer (running on XP) at it, and sure 
enough, the domain was booting.
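
If you're running more than one guest and aren't sure which VNC port a given
domain got (they start at 5900 and count up per guest), virsh can usually
tell you. A quick sketch, assuming the domain name from above:

virsh vncdisplay nv-81-1    # prints a display like :0, i.e. TCP port 5900
netstat -an | grep LISTEN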

Hope this helps.

  --joe


dean ross-smith wrote:
> Hello all.
>
> We're a pretty traditional, old-school Sun shop with a few dozen Sun boxes 
> from 1999-2004.
> We're doing a try-and-buy on a Sun Blade 6000 with a couple of 6250 Intel 
> blades and a 6320 blade, looking to use multiple chassis to do a 
> consolidation, using LDoms and xVM Server with xVM Ops Center to manage it 
> all.
> At least that's our forward-looking plan: to move all our Unix and 
> Windows hosts onto this platform and have as close to a unified way of 
> viewing and managing the servers as possible.
>
> So we're working on the xVM Server side of this test implementation and not 
> having any luck.
> One of the 6250 blades has nv_b82 on it and is running xVM Server. As 
> documented elsewhere, we have issues with the clock running fast (and for 
> now we're OK with this).
> But we're really having issues with accessing the consoles of domUs.
> We've created a Solaris 10 x86 U4 domU from the ISO CD as an HVM guest. The 
> ISO and the disk image are stored on an NFS mount, and the domU was created 
> using virt-manager running in an X11 session on a MacBook. (Our initial 
> attempt with virt-install failed because we didn't specify the nfs://path 
> for the ISO directory properly.)
> After the domU was created and tried to start, we got messages about the 
> VNC password not being set. We searched and fixed that issue, and our 
> first little machine was alive... but we can't connect to it, i.e. there's 
> no console or screen access, CLI or GUI.
> After googling around, I found that we can do a "virsh dumpxml" to look at 
> the settings of a domU, and that somewhere in the file we're supposed to add 
> a line like <serial="telnet:0.0.0.0:5999,server"/>, but the config file also 
> has a line for vnc. Do I do both, or only one? After adding the "serial=" 
> line, virsh complains when reloading the config file.
>
> Later today we found a document showing how to set up VNC on the xVM server 
> (blogs.sun.com/awenas), and one of the other admins is currently attempting 
> to install Windows 2003 Server (we understand it may be slow, but we need a 
> proof of concept that at some point this year it'll work properly). The 
> second time around he used the legacy install method via qemu and 
> "xm create" with a .py file, and it appears to be working (he's at the 
> screen asking for a product key).
> A first attempt asked for a VNC password and he didn't get any further.
>
> If we are doing a non-local (to the xVM server) install via ssh or X11, what 
> is the best way to set up the domUs for Solaris or Windows so that we have 
> console access to the virtual servers, via telnet or a VNC GUI? There seem 
> to be lots of little snippets of info indicating either that it "just works" 
> or that a few tweaks to an XML file will do it. We haven't experienced that 
> yet.
>
> Thanks for reading :+)
>
> Dean Ross-Smith
> Unix Admin
> [EMAIL PROTECTED]

_______________________________________________
xen-discuss mailing list
[email protected]