Attila Nagy wrote:
>> Performance will not be very good until you have
>> Windows PV disk/net drivers. I'm not sure when we
>> will have publicly available Windows PV drivers
> 
> Performance is not _that_ bad, at least that's how it feels to me. This Windows
> boots up fine, in about 30 seconds. Okay, far from perfect :)), but still, not bad.
> Do you have at least an estimate of when PV drivers might arrive? (Say 2008 Q1,
> or maybe in 6 months, or maybe sooner? Just curious.)

Hopefully soon'ish, but I really don't know...



>> I would not recommend moving to zfs with only 1G
>> of ram to be shared between dom0 and any guests.
>> It's really meant for larger systems.
> 
> Yeah, _I_ know that, but the owner wants everything but gives... a little
> less :) He's even talking about running some 30 instances of MSSQL on that HVM
> Windows... I recommended that he buy a few 8GB ECC modules :))
> 
>> On a small system like this, you might want to
>> start with Solaris Volume Manager (SVM) and then
>> migrate to zfs when you go to a bigger system.
>> zfs gives you a lot more functionality, but SVM
>> is small and runs as fast as a native slice/disk
>> in dom0. You should be able to dd the disk image
>> from a svm volume to a disk file to a zvol and
>> move it around at will as long as the size
>> matches (I haven't done it myself though).
> 
> Okay, I'll give it a shot. I'd stick to zfs because of its simplicity and
> features, but it needs more memory, I know.
> Somehow I've always had hw raid, or zfs (lately). I don't have much experience
> with svm, but I don't think it's _that_ complicated :))
> 
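For what it's worth, the copy between SVM and zfs really is just a dd once
the target is at least as big as the source. Something along these lines
should work (pool, zvol and metadevice names below are placeholders, and I
haven't run this exact sequence myself):

  # create a zvol the same size as (or larger than) the SVM volume
  zfs create -V 10g tank/windows-disk
  # raw copy of the SVM metadevice onto the zvol
  dd if=/dev/md/dsk/d10 of=/dev/zvol/rdsk/tank/windows-disk bs=1024k

Copying to or from a plain disk file is the same idea, just point if= or
of= at the file instead.
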
>> You should also be shutting everything down
>> that you can on dom0. e.g. X windows, etc.
> 
> Sure.
> I probably missed something, but how can I get to the HVM's console from, say,
> another Windows box on the same LAN? VNC to dom0? I don't think so. VNC to the domU?
> It isn't trivial to administer Windows from the command line... :)

Yep, as of b76 or 77 we include a VNC server which you can
use for dom0 if needed.
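On the earlier point about trimming dom0, something like this is the usual
approach (service and host names below are placeholders and vary a bit
between builds, so check with svcs first):

  # stop the local X login/desktop in dom0 to free up memory
  svcadm disable cde-login
  # the bundled VNC server is inetd-driven; enable it if you want remote X
  svcadm enable xvnc-inetd
  # then, from another box on the LAN
  vncviewer dom0-host:0
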

You can VNC into the domUs for their console... If you're
using a modern version of Windows, you should enable
the RDP stuff and use that.
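Roughly like this (host names and the display number are placeholders;
the display comes from the vnc settings in your guest config):

  # HVM guest console, exported over VNC from dom0
  vncviewer dom0-host:0

  # once networking is up in the guest, RDP is nicer: enable Remote
  # Desktop inside Windows, then from another Windows box on the LAN
  mstsc /v:winguest-host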



MRJ


>> On a side note, I have been playing with a
>> script which builds a minimal dom0 which runs
>> out of a ramdisk. ramdisk is about 80M compressed
>> and takes about ~ 280M of memory for the disk
>> when running. Good for booting off a USB stick,
>> compact flash, etc. Other than the ramdisk, it
>> has a pretty small memory footprint so it almost
>> makes up for the ramdisk.. I need to clean it
>> up and send it out for folks to play with/improve...
> 
> Ummm... sounds interesting!
> Think of me as a volunteer! :)
> 
>> You would have to do similar tricks to what you do today
>> with windows when moving to a bigger disk.. e.g. create
>> a new larger disk, copy the old disk to the larger
>> disk, use partition magic to grow the partition.
> 
> Yes, that was my idea too. Okay.
> 
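If the guest disk is on a zvol, the whole dance looks roughly like this
(names and sizes are placeholders, and I haven't tested this exact
sequence):

  # make a bigger zvol and raw-copy the old one onto it
  zfs create -V 30g tank/windisk-new
  dd if=/dev/zvol/rdsk/tank/windisk of=/dev/zvol/rdsk/tank/windisk-new bs=1024k
  # point the guest config at the new device, boot, then grow the NTFS
  # partition from inside Windows (partition magic, diskpart, etc.)
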
>> It doesn't hurt having multiple disks...
> 
> Not at all!
> 
>> In the near'ish future, it may even make sense to
>> export a zfs filesystem via cifs from the dom0
>> and have the windows domain net mount some of the
>> disks depending on your performance requirements.
>> You would need PV drivers of course.
> 
> I was actually considering something similar, only with samba :)
> 
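Either should work. With a recent enough build the in-kernel CIFS server
can share a zfs filesystem directly; a rough sketch with placeholder names
(you would still need to set up the smb/server service and passwords):

  zfs create tank/winshare
  zfs set sharesmb=on tank/winshare
  svcadm enable smb/server

samba on top of a plain zfs filesystem works just as well if that's what
you're already comfortable with.
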
> 
>> See this thread..
>>
>> http://www.opensolaris.org/jive/thread.jspa?messageID=166817
>>
> 
> Great thread, thanks! My plan was to have the server compress the win image
> during the night hours and put it in a samba-shared directory; from there a
> win client machine pulls it down and maybe writes it to DVD (or directly to
> DVD-RAM). This way the data leaves the server.
> It's almost the same idea as a zfs send/receive! Whoa, I invented the wheel!
> :))
> 
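That's pretty much it. If the image lives on zfs you can let snapshots do
the heavy lifting; a rough sketch with placeholder names:

  # nightly: snapshot the dataset holding the windows image and dump a
  # compressed stream into the shared directory
  zfs snapshot tank/windisk@nightly
  zfs send tank/windisk@nightly | gzip -c > /tank/winshare/windisk-nightly.gz
  # the client pulls the .gz down over the share and burns it; restoring
  # is just gunzip -c ... | zfs receive tank/windisk-restored

Incremental sends (zfs send -i) would cut the size down a lot once a full
stream is safely on the other side.
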
>> Thanks,
>>
>> MRJ
>>
>>
> 
> Thanks for the great support, and all the work!
> 
> Attila
>  
>  