Re: optimal kernel options for VMWARE guest system

2006-10-11 Thread Jeff Dickens

Jeff Dickens wrote:

John Nielsen wrote:

On Tuesday 03 October 2006 12:58, Jeff Dickens wrote:
 
I have some FreeBSD systems that are running as VMware guests.  I'd 
like

to configure their kernels so as to minimize the overhead on the VMware
host system.  After reading and partially digesting the white paper on
timekeeping in VMware virtual machines
(http://www.vmware.com/pdf/vmware_timekeeping.pdf) it appears that I
might want to make some changes.

Has anyone addressed this issue?



I haven't read the white paper (yet; thanks for the link), but I've 
had good results with recent -STABLE VM's running under ESX server 3. 
Some thoughts:


As I do on most of my installs, I trimmed down GENERIC to include 
just the drivers I use. In this case that was mpt for the disk and le 
for the network (although I suspect forcing the VM to present e1000 
hardware and then using the em driver would work as well if not better).


The VMware tools package that comes with ESX server does a poor job 
of getting itself to run, but it can be made to work without too much 
difficulty. Don't use the port, run the included install script to 
install the files, ignore the custom network driver and compile the 
memory management module from source (included). If using X.org, use 
the built-in vmware display driver, and copy the vmmouse driver .o 
file from the VMware tools dist to the appropriate dir under 
/usr/X11. Even though the included file is for X.org 6.8, it works 
fine with 6.9/7.0 (X.org 7.1 should include the vmmouse driver.) Run 
the VMware tools config script from a non-X terminal (and you can 
ignore the warning about running it remotely if you're using SSH), so 
it won't mess with your X display (it doesn't do anything not 
accomplished above). Then run the rc.d script to start the VMware tools.
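
For illustration, the copy boils down to something like this; the 
destination is the usual X.org 6.x module location, but treat both 
paths as guesses and adjust for your install:

   cp vmmouse_drv.o /usr/X11R6/lib/modules/input/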


I haven't noticed any timekeeping issues so far.

JN
What is the advantage of using the e1000 hardware, and is this 
documented somewhere?  I got the vxn network driver working without 
issues; I just had to edit the .vmx file manually (I'm using the free 
VMware Server 1 rather than ESX Server):


  ethernet0.virtualDev=vmxnet

I've got timekeeping running stably on these.  I turn on time sync via 
vmware tools in the .vmx file:


 tools.syncTime = TRUE

and in the guest's rc.conf start ntpd with flags -Aqgx, so it 
just syncs once at boot and exits.
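
In /etc/rc.conf that amounts to something like this minimal sketch:

   ntpd_enable="YES"
   ntpd_flags="-Aqgx"   # -A: no authentication, -q: set once and exit,
                        # -g: allow a large first correction, -x: slew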


I'm not using X on these.  They're supposed to be clean & lean 
to run such things as djbdns and qmail.  And they do work well. 
My main goal is to reduce the background load on the VMware host 
system so that it isn't spending more time than it has to simulating 
interrupt controllers for the guests.  I'm wondering about the 
disable ACPI boot option.  I suppose I first should figure out how 
to even roughly measure the effect of any changes I might make.
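
If the no-ACPI experiment pans out, one way to make it stick (a 
sketch; on FreeBSD 5/6 this is the same hint the loader menu's 
"disable ACPI" entry sets) is a line in /boot/loader.conf:

   hint.acpi.0.disabled="1"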


Well, I've done some pseudo-scientific measurement on this.  I 
currently have five FreeBSD virtual systems running, and one CentOS 4 
(Linux 2.6).  This command gives some info on the background CPU usage:


(The host is a CentOS 3 system, Linux 2.4.)

[EMAIL PROTECTED] root]# ps auxww | head -1
USER   PID %CPU %MEM   VSZ  RSS TTY  STAT START   TIME COMMAND
[EMAIL PROTECTED] root]# ps auxww | grep vmx
root 18031 12.7  1.5 175440 39916 ?  S   Oct09 345:50 
/usr/lib/vmware/bin/vmware-vmx -C /var/lib/vmware/Virtual 
Machines/Goose/freebsd-6.1-i386.vmx -@ 
root 18058 12.9  1.4 174772 36916 ?  S   Oct09 351:01 
/usr/lib/vmware/bin/vmware-vmx -C /var/lib/vmware/Virtual 
Machines/Duck/freebsd-6.1-i386.vmx -@ 
root 18072 16.2  5.5 246372 141776 ? S   Oct09 440:16 
/usr/lib/vmware/bin/vmware-vmx -C /var/lib/vmware/Virtual 
Machines/BlueJay/freebsd-6.1-i386.vmx -@ 
root 18086 12.9  1.4 174688 38464 ?  S   Oct09 351:47 
/usr/lib/vmware/bin/vmware-vmx -C /var/lib/vmware/Virtual 
Machines/Heron/freebsd-6.1-i386.vmx -@ 
root 18100  9.4  4.1 385712 107348 ? S   Oct09 256:25 
/usr/lib/vmware/bin/vmware-vmx -C /var/lib/vmware/Virtual 
Machines/Newt/freebsd-6.1-i386.vmx -@ 
root 18139 12.2  2.5 299388 65132 ?  S   Oct09 330:35 
/usr/lib/vmware/bin/vmware-vmx -C /var/lib/vmware/Virtual 
Machines/Centos4/Centos4.vmx -@ 

root 28930  0.0  0.0  3680  672 pts/3S14:08   0:00 grep vmx
[EMAIL PROTECTED] root]#
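
To pull out just the interesting columns, something like this should 
also work with the procps ps on the CentOS host (an untested sketch):

   ps -C vmware-vmx -o pid,pcpu,time,args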


As one can see, the one called Newt is consistently lower in the 
%CPU column.  Curiously enough, this *is* the one I built a custom 
kernel for.

The config file I used is posted below.  Besides commenting out devices 
I wasn't using & NFS, etc., I commented out the apic and pmtimer 
devices.  Do you think I'm on the right track for reducing interrupt 
frequency?
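
To give the flavor of those edits, a hypothetical excerpt (not the 
actual file, which follows below):

   #device         apic            # I/O APIC, commented out
   #device         pmtimer         # commented out as well
   device          mpt             # LSI-Logic MPT-Fusion (VMware's disk)
   device          le              # AMD Lance/PCnet (VMware's default NIC)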


Also, if I wanted to move this kernel to other FreeBSD systems, how 
much has to move?  The whole /boot/kernel directory?
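
Assuming the same FreeBSD version and architecture on both ends, the 
whole /boot/kernel directory (kernel plus modules) should cover it; 
e.g., an untested sketch, with "otherbox" standing in for the target:

   newt# tar cf - -C /boot kernel | ssh otherbox 'tar xf - -C /boot'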


Finally, I did have to re-run the vmware-config-tools.pl script after 
rebuilding the kernel.



newt# cat 

Re: optimal kernel options for VMWARE guest system

2006-10-04 Thread Jeff Dickens

John Nielsen wrote:

On Tuesday 03 October 2006 12:58, Jeff Dickens wrote:
  

I have some FreeBSD systems that are running as VMware guests.  I'd like
to configure their kernels so as to minimize the overhead on the VMware
host system.  After reading and partially digesting the white paper on
timekeeping in VMware virtual machines
(http://www.vmware.com/pdf/vmware_timekeeping.pdf) it appears that I
might want to make some changes.

Has anyone addressed this issue?



I haven't read the white paper (yet; thanks for the link), but I've had good 
results with recent -STABLE VM's running under ESX server 3. Some thoughts:


As I do on most of my installs, I trimmed down GENERIC to include just the 
drivers I use. In this case that was mpt for the disk and le for the network 
(although I suspect forcing the VM to present e1000 hardware and then using 
the em driver would work as well if not better).


The VMware tools package that comes with ESX server does a poor job of getting 
itself to run, but it can be made to work without too much difficulty. Don't 
use the port, run the included install script to install the files, ignore 
the custom network driver and compile the memory management module from 
source (included). If using X.org, use the built-in vmware display driver, 
and copy the vmmouse driver .o file from the VMware tools dist to the 
appropriate dir under /usr/X11. Even though the included file is for X.org 
6.8, it works fine with 6.9/7.0 (X.org 7.1 should include the vmmouse 
driver.) Run the VMware tools config script from a non-X terminal (and you 
can ignore the warning about running it remotely if you're using SSH), so it 
won't mess with your X display (it doesn't do anything not accomplished 
above). Then run the rc.d script to start the VMware tools.


I haven't noticed any timekeeping issues so far.

JN
What is the advantage of using the e1000 hardware, and is this 
documented somewhere?  I got the vxn network driver working without 
issues; I just had to edit the .vmx file manually (I'm using the free 
VMware Server 1 rather than ESX Server):


  ethernet0.virtualDev=vmxnet

I've got timekeeping running stably on these.  I turn on time sync via 
vmware tools in the .vmx file:


 tools.syncTime = TRUE

and in the guest's rc.conf start ntpd with flags -Aqgx, so it 
just syncs once at boot and exits.


I'm not using X on these.  They're supposed to be clean & lean systems 
to run such things as djbdns and qmail.  And they do work well.  

My main goal is to reduce the background load on the VMware host system 
so that it isn't spending more time than it has to simulating interrupt 
controllers for the guests.  I'm wondering about the disable ACPI boot 
option.  I suppose I first should figure out how to even roughly measure 
the effect of any changes I might make.


Re: optimal kernel options for VMWARE guest system

2006-10-04 Thread John Nielsen
On Wednesday 04 October 2006 10:48, Jeff Dickens wrote:
 John Nielsen wrote:
  On Tuesday 03 October 2006 12:58, Jeff Dickens wrote:
  I have some FreeBSD systems that are running as VMware guests.  I'd like
  to configure their kernels so as to minimize the overhead on the VMware
  host system.  After reading and partially digesting the white paper on
  timekeeping in VMware virtual machines
  (http://www.vmware.com/pdf/vmware_timekeeping.pdf) it appears that I
  might want to make some changes.
 
  Has anyone addressed this issue?
 
  I haven't read the white paper (yet; thanks for the link), but I've had
  good results with recent -STABLE VM's running under ESX server 3. Some
  thoughts:
 
  As I do on most of my installs, I trimmed down GENERIC to include just
  the drivers I use. In this case that was mpt for the disk and le for the
  network (although I suspect forcing the VM to present e1000 hardware and
  then using the em driver would work as well if not better).
 
  The VMware tools package that comes with ESX server does a poor job of
  getting itself to run, but it can be made to work without too much
  difficulty. Don't use the port, run the included install script to
  install the files, ignore the custom network driver and compile the
  memory management module from source (included). If using X.org, use the
  built-in vmware display driver, and copy the vmmouse driver .o file from
  the VMware tools dist to the appropriate dir under /usr/X11. Even though
  the included file is for X.org 6.8, it works fine with 6.9/7.0 (X.org 7.1
  should include the vmmouse driver.) Run the VMware tools config script
  from a non-X terminal (and you can ignore the warning about running it
  remotely if you're using SSH), so it won't mess with your X display (it
  doesn't do anything not accomplished above). Then run the rc.d script to
  start the VMware tools.
 
  I haven't noticed any timekeeping issues so far.
 
  JN

 What is the advantage of using the e1000 hardware, and is this
 documented somewhere?  I got the vxn network driver working without
 issues; I just had to edit the .vmx file manually (I'm using the free
 VMware Server 1 rather than ESX Server):

ethernet0.virtualDev=vmxnet

Not documented, just my opinion that the em(4) driver is probably a better 
performer than le(4), and the former has awareness of media speeds, etc. I 
actually haven't tried using the vxn network driver yet. My view could be 
tainted by old experiences with VMware Workstation 3 and the lnc(4) driver, 
though.
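
If you want to experiment with that, the .vmx knob is presumably the 
same one you used for vmxnet; a guess on my part, and I haven't 
verified that VMware Server 1 accepts it:

   ethernet0.virtualDev = "e1000"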

 I've got timekeeping running stably on these.  I turn on time sync via
 vmware tools in the .vmx file:

   tools.syncTime = TRUE

 and in the guest's rc.conf start ntpd with flags -Aqgx, so it
 just syncs once at boot and exits.

 I'm not using X on these.  They're supposed to be clean & lean systems
 to run such things as djbdns and qmail.  And they do work well.

 My main goal is to reduce the background load on the VMware host system
 so that it isn't spending more time than it has to simulating interrupt
 controllers for the guests.  I'm wondering about the disable ACPI boot
 option.  I suppose I first should figure out how to even roughly measure
 the effect of any changes I might make.

So far I'm just experimenting with FreeBSD VM's in my spare time. Our 
only production VM's at the moment are Windows and a Fedora instance or 
two. It'd be nice if there were a central repository for some of these tips 
and other info. (Maybe there are threads on VMTN, I haven't really looked).

JN


optimal kernel options for VMWARE guest system

2006-10-03 Thread Jeff Dickens
I have some FreeBSD systems that are running as VMware guests.  I'd like 
to configure their kernels so as to minimize the overhead on the VMware 
host system.  After reading and partially digesting the white paper on 
timekeeping in VMware virtual machines 
(http://www.vmware.com/pdf/vmware_timekeeping.pdf) it appears that I 
might want to make some changes.


Has anyone addressed this issue?




Re: optimal kernel options for VMWARE guest system

2006-10-03 Thread John Nielsen
On Tuesday 03 October 2006 12:58, Jeff Dickens wrote:
 I have some FreeBSD systems that are running as VMware guests.  I'd like
 to configure their kernels so as to minimize the overhead on the VMware
 host system.  After reading and partially digesting the white paper on
 timekeeping in VMware virtual machines
 (http://www.vmware.com/pdf/vmware_timekeeping.pdf) it appears that I
 might want to make some changes.

 Has anyone addressed this issue?

I haven't read the white paper (yet; thanks for the link), but I've had good 
results with recent -STABLE VM's running under ESX server 3. Some thoughts:

As I do on most of my installs, I trimmed down GENERIC to include just the 
drivers I use. In this case that was mpt for the disk and le for the network 
(although I suspect forcing the VM to present e1000 hardware and then using 
the em driver would work as well if not better).

The VMware tools package that comes with ESX server does a poor job of getting 
itself to run, but it can be made to work without too much difficulty. Don't 
use the port, run the included install script to install the files, ignore 
the custom network driver and compile the memory management module from 
source (included). If using X.org, use the built-in vmware display driver, 
and copy the vmmouse driver .o file from the VMware tools dist to the 
appropriate dir under /usr/X11. Even though the included file is for X.org 
6.8, it works fine with 6.9/7.0 (X.org 7.1 should include the vmmouse 
driver.) Run the VMware tools config script from a non-X terminal (and you 
can ignore the warning about running it remotely if you're using SSH), so it 
won't mess with your X display (it doesn't do anything not accomplished 
above). Then run the rc.d script to start the VMware tools.
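
Condensed into a rough shell sequence (the installer and rc.d script 
names are from memory, so treat the paths as guesses; the memory 
module gets built from the bundled source between the first two steps):

   ./vmware-install.pl          # install the files; skip the network driver
   ./vmware-config-tools.pl     # configure, from a non-X terminal
   /usr/local/etc/rc.d/vmware-tools.sh start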

I haven't noticed any timekeeping issues so far.

JN