Martin wrote:
> Please excuse that my comment is cross-posted to both the Xen and zfs
> discuss forums; I didn't notice it on the Xen forum until I had posted.
> Anyway, here goes.  For any info, testing, etc., please contact me.
> 
> 
> Regarding the following issue, which I also hit, see
> http://www.opensolaris.org/jive/thread.jspa?messageID=180995
> If any further details or tests are required, I would be happy to assist.


 > I set /etc/system's zfs:zfs_arc_max = 0x10000000 and it seems better now.
 >
 > I had previously tried setting it to 2 GB rather than the 256 MB above,
 > without success... I should have tried much lower!
 >
 > It "seems" that when I perform I/O through a Windows XP HVM, I get a
 > "reasonable" I/O rate, but I'm not sure at this point in time. When a write
 > is made from within the HVM guest, should I expect the same DMA issue to
 > arise? (I can't really tell either way at the moment because it's not
 > super fast anyway.)


A couple of things here.

I have found that ~2 GB is a good amount of memory for a dom0 that
uses zfs. If you're using zfs in dom0, you should add a dom0_mem
entry to grub's menu.lst to fix the amount of memory dom0 starts
with.

e.g. my entry looks like...

title 64-bit dom0
root (hd0,0,d)
kernel$ /boot/$ISADIR/xen.gz dom0_mem=2048M com1=9600,8n1 console=com1
module$ /platform/i86xpv/kernel/$ISADIR/unix /platform/i86xpv/kernel/$ISADIR/unix -k
module$ /platform/i86pc/$ISADIR/boot_archive


Then, make sure you don't auto-balloon dom0 down (by creating
guests that would take some of that memory from dom0). The best
way to do this is to set config/dom0-min-mem on the xvm/xend
service (you can see the current value with svcprop xvm/xend;
a sketch follows below). zfs and auto-ballooning down don't seem
to work well together (though I haven't done a lot of testing to
characterize this).
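
For example, a rough sketch of how to check and pin that property
(I'm assuming config/dom0-min-mem already exists on your xend
instance and takes a value in MB, so verify with svcprop first):

# show the current setting
svcprop -p config/dom0-min-mem xvm/xend

# pin dom0's minimum memory to match dom0_mem (2048 here just
# mirrors my dom0_mem=2048M), then have xend pick up the change
svccfg -s xvm/xend setprop config/dom0-min-mem=2048
svcadm refresh xvm/xend
svcadm restart xvm/xend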

If you do want to auto-balloon, or want to use less memory in
a zfs-based dom0, setting zfs_arc_max to a low value seems
to work well.
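
For reference, that tuning goes in /etc/system and needs a reboot
to take effect; the value below is the 0x10000000 (256 MB) cap
Martin reported success with:

* cap the ZFS ARC (zfs_arc_max is in bytes; 0x10000000 = 256 MB)
set zfs:zfs_arc_max = 0x10000000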


I/O performance in Windows HVM guests is not good at this
point, and it won't be until we have Windows PV drivers available.



Thanks,

MRJ
