Re: [gentoo-user] vmware-server performance

2010-05-03 Thread Stefan G. Weichinger
On 30.04.2010 18:55, Stefan G. Weichinger wrote:

> It's not THAT bad here, but the XP guest takes a while to boot, yes.
> Right now I simply don't shut down the guest and suspend the whole
> Linux box to RAM.

I moved the VM from an LV formatted with XFS to another LV formatted with
ext4 (both mounted with noatime).
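
Roughly, such a move looks like this (the VG/LV names, sizes, and mount
points are only examples, not the actual ones):

# lvcreate -L 40G -n vm-ext4 vg0            # create the new LV
# mkfs.ext4 /dev/vg0/vm-ext4
# mount -o noatime /dev/vg0/vm-ext4 /mnt/vm-new
# rsync -a /var/lib/vmware/ /mnt/vm-new/    # copy the VM files over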

It seems to help a bit: the VM boots faster and also runs more smoothly.

iotop shows less load for kdmflush as well.

Just for the record ...

Stefan



Re: [gentoo-user] vmware-server performance

2010-04-30 Thread Florian Philipp
On 29.04.2010 20:22, Stefan G. Weichinger wrote:
> On 18.03.2010 22:16, Stefan G. Weichinger wrote:
>> On 13.03.2010 19:25, Stefan G. Weichinger wrote:

>>>> If you are on Linux soft-RAID, you might check your disks for errors
>>>> with smartmontools. Other than that, the only thing I can think of is
>>>> something like a performance regression in the IDE/SCSI/SATA
>>>> controller (on host or virtual) or mdadm on the host. If the host
>>>> system is bogged down before starting vmware instances, I would
>>>> suspect the former (host controller or mdadm).

>>> The disks look good so far ...

>> Just to bump this one up again ...

>> Hard disks OK; the long SMART self-tests came back completely clean.

>> Still that high I/O load from kdmflush.

> No change since then.

> What do you guys use? RAID1, RAID0? LVM? Specific filesystems?
> I could also transfer it to another box using NFSv4 ... but that didn't
> make much of a difference back then.

> I would like to hear your thoughts. Thanks, Stefan
 

Hi!

I just want to tell you that I experience similar problems with
vmware-player. I'm currently on kernel 2.6.32. The guest system is an
Ubuntu with an Oracle Express database (used for a database lecture I'm
taking).

It feels as if the complete host system gets swapped out when I switch
to the guest system and vice versa, although there is plenty of free
memory. It is so bad that the system becomes completely unusable for
more than 15 minutes. I haven't investigated it yet because I don't
really need that guest OS.
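
Whether it is really swapping should be visible with something like this
while switching between host and guest:

# vmstat 1                      # watch the si/so columns for swap traffic
# grep -i swap /proc/meminfo    # SwapTotal vs. SwapFree
# cat /proc/sys/vm/swappiness   # lower values make the kernel swap less eagerly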

Regards,
Florian Philipp





Re: [gentoo-user] vmware-server performance

2010-04-30 Thread Stefan G. Weichinger
On 30.04.2010 16:41, Florian Philipp wrote:

> I just want to tell you that I experience similar problems with
> vmware-player.

Good to hear that ... in a way.

> I'm currently on kernel 2.6.32. The guest system is an
> Ubuntu with an Oracle Express database (used for a database lecture
> I'm taking).

I had those problems with 2.6.32 as well.
I should try going back further to check ...

> It feels as if the complete host system gets swapped out when I
> switch to the guest system and vice versa, although there is plenty
> of free memory. It is so bad that the system becomes completely
> unusable for more than 15 minutes. I haven't investigated it yet
> because I don't really need that guest OS.

Good for you ;-)

It's not THAT bad here, but the XP guest takes a while to boot, yes.
Right now I simply don't shut down the guest and suspend the whole
Linux box to RAM.

Thanks, Stefan



Re: [gentoo-user] vmware-server performance

2010-04-29 Thread Stefan G. Weichinger
On 18.03.2010 22:16, Stefan G. Weichinger wrote:
> On 13.03.2010 19:25, Stefan G. Weichinger wrote:

>>> If you are on Linux soft-RAID, you might check your disks for errors
>>> with smartmontools. Other than that, the only thing I can think of is
>>> something like a performance regression in the IDE/SCSI/SATA
>>> controller (on host or virtual) or mdadm on the host. If the host
>>> system is bogged down before starting vmware instances, I would
>>> suspect the former (host controller or mdadm).

>> The disks look good so far ...

> Just to bump this one up again ...

> Hard disks OK; the long SMART self-tests came back completely clean.

> Still that high I/O load from kdmflush.

No change since then.

What do you guys use? RAID1, RAID0? LVM? Specific filesystems?
I could also transfer it to another box using NFSv4 ... but that didn't
make much of a difference back then.

I would like to hear your thoughts. Thanks, Stefan



Re: [gentoo-user] vmware-server performance

2010-03-18 Thread Stefan G. Weichinger
On 13.03.2010 19:25, Stefan G. Weichinger wrote:

>> If you are on Linux soft-RAID, you might check your disks for errors
>> with smartmontools. Other than that, the only thing I can think of is
>> something like a performance regression in the IDE/SCSI/SATA
>> controller (on host or virtual) or mdadm on the host. If the host
>> system is bogged down before starting vmware instances, I would
>> suspect the former (host controller or mdadm).

> The disks look good so far ...

Just to bump this one up again ...

Hard disks OK; the long SMART self-tests came back completely clean.

Still that high I/O load from kdmflush.

Stefan



Re: [gentoo-user] vmware-server performance

2010-03-13 Thread Stefan G. Weichinger
On 12.03.2010 23:37, Kyle Bader wrote:
> If the elevated iowait from iostat is on the host, you might be able to
> find something hogging your I/O bandwidth with iotop. Also look for
> D-state procs with ps auxr. Are you on a software RAID?

Yes, software RAID level 1, two SATA disks.

iotop points to kdmflush, whatever that is ...

equery doesn't know it, so I assume it's some kind of kernel process?

device-mapper-related? dm ...
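
For the record: kdmflush is a device-mapper kernel worker thread, and each
dm device (e.g. each LV) gets its own. Matching threads to devices goes
roughly like this:

# ps ax | grep '[k]dmflush'     # one kernel thread per device-mapper device
# dmsetup ls                    # dm devices (LVs etc.) with major:minor numbers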

> If you are on Linux soft-RAID, you might check your disks for errors
> with smartmontools. Other than that, the only thing I can think of is
> something like a performance regression in the IDE/SCSI/SATA
> controller (on host or virtual) or mdadm on the host. If the host
> system is bogged down before starting vmware instances, I would
> suspect the former (host controller or mdadm).

The disks look good so far ...

thanks, S



Re: [gentoo-user] vmware-server performance

2010-03-12 Thread Stefan G. Weichinger
On 11.03.2010 16:54, Kyle Bader wrote:
> If you use the cfq scheduler (the Linux default), you might try turning
> off low-latency mode (introduced in 2.6.32):
>
> echo 0 > /sys/class/block/<device name>/queue/iosched/low_latency
>
> http://kernelnewbies.org/Linux_2_6_32

That sounded good, but unfortunately it is not really doing the trick.
The VM still takes minutes to boot ... and that is after I copied it back
to the RAID1 array, which should in theory be faster than the non-RAID
partition it was on before.

Thanks anyway, I will test that setting ...

Stefan




Re: [gentoo-user] vmware-server performance

2010-03-12 Thread Kyle Bader
If the elevated iowait from iostat is on the host, you might be able to
find something hogging your I/O bandwidth with iotop. Also look for
D-state procs with ps auxr. Are you on a software RAID?
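
Concretely, those checks could look like this on the host (assuming iotop
and a procps-style ps are installed):

# iotop -o                            # show only processes actually doing I/O
# ps -eo pid,stat,wchan:24,comm | awk '$2 ~ /^D/'    # uninterruptible (D) procs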

If you are on Linux soft-RAID, you might check your disks for errors
with smartmontools. Other than that, the only thing I can think of is
something like a performance regression in the IDE/SCSI/SATA
controller (on host or virtual) or mdadm on the host. If the host
system is bogged down before starting vmware instances, I would
suspect the former (host controller or mdadm).
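
With smartmontools that would be along these lines (/dev/sda is just an
example device):

# smartctl -H /dev/sda              # overall health self-assessment
# smartctl -t long /dev/sda         # start the long self-test
# smartctl -l selftest /dev/sda     # read the result once the test finishes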

On 3/11/10, Stefan G. Weichinger li...@xunil.at wrote:
> On 11.03.2010 16:54, Kyle Bader wrote:
>> If you use the cfq scheduler (the Linux default), you might try
>> turning off low-latency mode (introduced in 2.6.32):
>>
>> echo 0 > /sys/class/block/<device name>/queue/iosched/low_latency
>>
>> http://kernelnewbies.org/Linux_2_6_32
>
> That sounded good, but unfortunately it is not really doing the trick.
> The VM still takes minutes to boot ... and that is after I copied it
> back to the RAID1 array, which should in theory be faster than the
> non-RAID partition it was on before.
>
> Thanks anyway, I will test that setting ...
>
> Stefan




-- 
Sent from my mobile device


Kyle



Re: [gentoo-user] vmware-server performance

2010-03-11 Thread Kyle Bader
If you use the cfq scheduler (the Linux default), you might try turning
off low-latency mode (introduced in 2.6.32):

echo 0 > /sys/class/block/<device name>/queue/iosched/low_latency

http://kernelnewbies.org/Linux_2_6_32
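
For example, for a single disk (sda here; the low_latency file only exists
with cfq on >= 2.6.32):

# cat /sys/class/block/sda/queue/scheduler      # active scheduler is in brackets
# echo 0 > /sys/class/block/sda/queue/iosched/low_latency
# cat /sys/class/block/sda/queue/iosched/low_latency    # should now read 0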

On 3/10/10, Stefan G. Weichinger li...@xunil.at wrote:

> Recently I have been seeing bad performance with my vmware-server.
>
> Lots of hard-disk I/O ... bad even on the RAID1, disks working all the
> time (I can hear them, and iostat confirms it).
>
> Might this have to do with kernel 2.6.33 and non-matching vmware-modules?
>
> I masked some of the modules back then because they didn't work; maybe
> they would work now.
>
> Could someone tell me what combo works with gentoo-sources-2.6.33?
>
> I currently have:
>
> # eix vmware-mod
> [I] app-emulation/vmware-modules
>  Available versions:  1.0.0.15-r1 1.0.0.15-r2 (~)1.0.0.24-r1{tbz2}
> [m]1.0.0.25-r1 [m](~)1.0.0.26 {kernel_linux}
>  Installed versions:  1.0.0.24-r1{tbz2}(20:34:53
> 01.03.2010)(kernel_linux)
>
> # eix vmware-ser
> [I] app-emulation/vmware-server
>  Available versions:  1.0.8.126538!s 1.0.9.156507!s
> (~)1.0.10.203137!s (~)2.0.1.156745-r3!s{tbz2} (~)2.0.2.203138!f!s{tbz2}
>  Installed versions:  2.0.2.203138!f!s{tbz2}(20:19:33 10.03.2010)
>
>
> Thanks in advance, Stefan



-- 
Sent from my mobile device


Kyle



[gentoo-user] vmware-server performance

2010-03-10 Thread Stefan G. Weichinger

Recently I have been seeing bad performance with my vmware-server.

Lots of hard-disk I/O ... bad even on the RAID1, disks working all the
time (I can hear them, and iostat confirms it).
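
For reference, iostat's extended output shows the per-disk load:

# iostat -x 5    # %util near 100 and high await mean the disks are saturated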

Might this have to do with kernel 2.6.33 and non-matching vmware-modules?

I masked some of the modules back then because they didn't work; maybe
they would work now.
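
Unmasking them again for a test would go roughly like this (the exact atoms
are guesses based on the eix output below):

# sed -i '/vmware-modules/d' /etc/portage/package.mask    # drop old mask entries
# echo "=app-emulation/vmware-modules-1.0.0.26" >> /etc/portage/package.keywords
# emerge -1av app-emulation/vmware-modules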

Could someone tell me what combo works with gentoo-sources-2.6.33?

I currently have:

# eix vmware-mod
[I] app-emulation/vmware-modules
 Available versions:  1.0.0.15-r1 1.0.0.15-r2 (~)1.0.0.24-r1{tbz2}
[m]1.0.0.25-r1 [m](~)1.0.0.26 {kernel_linux}
 Installed versions:  1.0.0.24-r1{tbz2}(20:34:53
01.03.2010)(kernel_linux)

# eix vmware-ser
[I] app-emulation/vmware-server
 Available versions:  1.0.8.126538!s 1.0.9.156507!s
(~)1.0.10.203137!s (~)2.0.1.156745-r3!s{tbz2} (~)2.0.2.203138!f!s{tbz2}
 Installed versions:  2.0.2.203138!f!s{tbz2}(20:19:33 10.03.2010)


Thanks in advance, Stefan