On 30.04.2010 18:55, Stefan G. Weichinger wrote:
It's not THAT bad here, but the XP-guest takes a while to boot, yes.
Right now I simply don't shut down the guest and hibernate-to-RAM the
whole Linux box.
I moved the VM from an LV formatted with XFS to another LV formatted
with ext4 (both ...
On 29.04.2010 20:22, Stefan G. Weichinger wrote:
On 18.03.2010 22:16, Stefan G. Weichinger wrote:
On 13.03.2010 19:25, Stefan G. Weichinger wrote:
If you are on Linux soft RAID you might check your disks for errors
with smartmontools. Other than that, the only thing I can think of is
something like a performance regression in the IDE/SCSI/SATA
controller (on host or virtual) or mdadm on ...
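For a two-disk soft-RAID setup, the smartmontools check suggested above might look like this (a sketch; /dev/sda and /dev/sdb are assumed member names, adjust to your layout, and smartctl needs root):

```shell
# Query overall SMART health plus the attributes that most often
# betray a failing disk (reallocated/pending/uncorrectable sectors).
for dev in /dev/sda /dev/sdb; do
    smartctl -H "$dev"
    smartctl -A "$dev" | grep -Ei 'reallocated|pending|uncorrect'
done

# Optionally start a short self-test (runs in the background):
# smartctl -t short /dev/sda
# ... and read its result a few minutes later:
# smartctl -l selftest /dev/sda
```

If either disk reports a growing Current_Pending_Sector or Reallocated_Sector_Ct, the RAID1 rebuild/retry traffic alone can explain constant disk noise and iowait.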
On 30.04.2010 16:41, Florian Philipp wrote:
I just want to tell you that I experience similar problems with
vmware-player.
Good to hear that ... in a way.
I'm currently on kernel 2.6.32. The guest system is an Ubuntu with an
Oracle Express database (used for a database lecture I'm ...
On 12.03.2010 23:37, Kyle Bader wrote:
If the elevated iowait from iostat is on the host, you might be able to
find something hogging your I/O bandwidth with iotop. Also look for
D-state procs with ps auxr. Are you on a software RAID?
Yes, sw-RAID level 1, two SATA disks.
iotop points to ...
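Kyle's hunt for D-state processes (uninterruptible sleep, almost always a process stuck waiting on disk I/O) can also be done with a plain ps one-liner, a sketch:

```shell
# Print the PID and command of every process currently in state D
# (uninterruptible sleep). A process that shows up here repeatedly
# is the one stalling on the disks.
ps -eo state=,pid=,comm= | awk '$1 ~ /^D/ {print $2, $3}'
```

Running this a few times during the slow phase and noting which command keeps appearing narrows it down the same way iotop does, without needing an extra tool on the host.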
On 11.03.2010 16:54, Kyle Bader wrote:
If you use the CFQ scheduler (the Linux default) you might try turning
off low-latency mode (introduced in 2.6.32):
echo 0 > /sys/class/block/<device name>/queue/iosched/low_latency
http://kernelnewbies.org/Linux_2_6_32
That sounded good, but ...
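Spelled out for a single disk (sda is assumed here; writing the knob needs root, and it only exists while CFQ is the active scheduler for that device):

```shell
# Which scheduler is active? The entry in brackets is the current one,
# e.g. "noop deadline [cfq]".
cat /sys/class/block/sda/queue/scheduler

# Disable CFQ's low-latency mode, trading desktop latency for
# throughput; write 1 to turn it back on.
echo 0 > /sys/class/block/sda/queue/iosched/low_latency
```

The setting is per-device and not persistent across reboots, so for a RAID1 you would apply it to both member disks (sda and sdb in the thread's setup).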
On 3/10/10, Stefan G. Weichinger li...@xunil.at wrote:
Recently I see bad performance with my vmware-server.
Loads of hard-disk I/O ... even bad on the RAID1, disks working all the
time (I hear them, and iostat tells me).
Might it have to do with kernel 2.6.33 and non-fitting vmware-modules?
I masked some modules back then because they didn't work, ...
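For the symptoms described here, two quick host-side checks help separate a RAID problem from a scheduler or vmware-modules regression (a sketch; device names assumed):

```shell
# [UU] means both mirror halves are up; [U_] or a "resync" progress
# line would explain constant disk activity all by itself.
cat /proc/mdstat

# Extended per-device statistics from sysstat: 3 samples, 5 s apart.
# %util near 100 on sda/sdb while await climbs pinpoints the
# saturated disk.
iostat -x 5 3
```

If mdstat shows a resync in progress, waiting for it to finish (or throttling it via /proc/sys/dev/raid/speed_limit_max) is the first thing to try before blaming the kernel or VMware.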