Re: [CentOS] Really bad KVM disk performance

2012-02-23 Thread Marcelo Roccasalva
On Mon, Feb 20, 2012 at 02:26, Bob Puff b...@nleaudio.com wrote:

 Hi Gang,
[...]
 On my machine's Centos 5.7 x32 guest install:
 # hdparm -tT /dev/hda

 /dev/hda:
  Timing cached reads:   1864 MB in  2.16 seconds = 863.87 MB/sec
  Timing buffered disk reads:  358 MB in  3.08 seconds = 116.17 MB/sec

Cached reads measure Linux buffer-cache bandwidth, not disk performance!
That figure should be several thousand MB/sec... On a real machine, a
number that low could point to a motherboard problem.
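To check this with the cache out of the way, you can read straight through the block device with O_DIRECT; the rate you get then reflects real I/O, not memory bandwidth. (The device path here is the guest's /dev/hda from the thread; adjust for your system.)

```shell
# Read 256 MB from the (virtual) disk, bypassing the page cache with
# O_DIRECT, so the reported rate reflects actual disk throughput rather
# than buffer-cache bandwidth. Needs root and a real block device.
dd if=/dev/hda of=/dev/null bs=1M count=256 iflag=direct
```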

--
Marcelo

Could it be that this modern life is turning out more "modern" than
"life"? (Mafalda)
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Really bad KVM disk performance

2012-02-20 Thread Gordon Messmer
On 02/19/2012 09:26 PM, Bob Puff wrote:

 Immediately when I was installing stuff, I could tell this new system I just
 built was not nearly as fast as the first one.  I ran some CPU and disk
 benchmarking programs, and saw that while the CPU results were similar, the
 disk throughput was much different... downright poor in one of the guests!

If you use Red Hat's virt-install to set up a guest, you can specify 
the --os-type and --os-variant options.  If you specify a variant that 
supports virtualized IO (such as type linux with variant rhel5.4 or 
rhel6), the guest's disk and network IO will have much better 
throughput.  Look for those options in whatever front-end you're 
using, or use virt-install directly.
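For illustration, an invocation along these lines (the guest name, RAM size, LVM path and ISO path are placeholders; the variant values are the ones mentioned above):

```shell
# Sketch of a virt-install call that declares an os-variant with virtio
# support and puts both disk and network on the virtio bus. All names
# and paths below are placeholders -- substitute your own.
virt-install \
    --name guest1 \
    --ram 1024 \
    --os-type linux \
    --os-variant rhel5.4 \
    --disk path=/dev/vg_guests/guest1,bus=virtio \
    --network bridge=br0,model=virtio \
    --cdrom /path/to/CentOS-5.7-i386.iso
```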


[CentOS] Really bad KVM disk performance

2012-02-19 Thread Bob Puff
Hi Gang,

I recently rented a server at a datacenter with Centos 5.7 X64, Q9550
Processor, 8GB Ram, and dual 250GB SATA HDs (with 16mb cache).  They had
loaded it with KVM, and installed a 30-day trial of Virtualizor as the
front-end for KVM. 

I was so impressed with how fast the guests ran that I want to build a few of
these machines for myself.  I just installed one: same Q9550 processor, 4GB
ram, and dual 250GB SATA HDs (with 32mb cache).  I installed Centos 6.2 X64,
and installed Webmin's Cloudmin as the front-end.

Immediately when I was installing stuff, I could tell this new system I just
built was not nearly as fast as the first one.  I ran some CPU and disk
benchmarking programs, and saw that while the CPU results were similar, the
disk throughput was much different... downright poor in one of the guests!

On both systems, /dev/md2 is an LVM volume group reserved exclusively for KVM
guests, so each guest runs in its own logical volume on top of software RAID.

Thinking there might be something wrong with the HDs, I ran Bonnie++ (
http://www.coker.com.au/bonnie++/ ) and compared both host machines.  They
tested fairly similarly (within 10%).  Yet comparing their guests is like
night and day.  Example:

On good machine's Centos 5.7 x32 guest install:
# hdparm -tT /dev/hda

/dev/hda:
 Timing cached reads:   26760 MB in  1.99 seconds = 13417.10 MB/sec
 Timing buffered disk reads:  388 MB in  3.01 seconds = 128.86 MB/sec

On my machine's Centos 5.7 x32 guest install:
# hdparm -tT /dev/hda

/dev/hda:
 Timing cached reads:   1864 MB in  2.16 seconds = 863.87 MB/sec
 Timing buffered disk reads:  358 MB in  3.08 seconds = 116.17 MB/sec

On one of my machine's Mandrake 8.2 x32 guest install:
# hdparm -tT /dev/hda

/dev/hda:
 Timing buffer-cache reads:   27000 MB in  2.00 seconds = 13500.00 MB/sec
 Timing buffered disk reads:   12 MB in  3.66 seconds =   3.28 MB/sec

On that system, hdparm's -i output shows:
# hdparm -i /dev/hda

/dev/hda:

 Model=QEMU HARDDISK, FwRev=0.12.1, SerialNo=QM1
 Config={ Fixed }
 RawCHS=16383/16/63, TrkSize=32256, SectSize=512, ECCbytes=4
 BuffType=DualPortCache, BuffSize=256kB, MaxMultSect=16, MultSect=16
 CurCHS=16383/16/63, CurSects=16514064, LBA=yes, LBAsects=73400320
 IORDY=yes, tPIO={min:120,w/IORDY:120}, tDMA={min:120,rec:120}
 PIO modes:  pio0 pio1 pio2
 DMA modes:  sdma0 sdma1 sdma2 mdma0 mdma1 mdma2
 UDMA modes: udma0 udma1 udma2 udma3 udma4 *udma5
 AdvancedPM=no
 Drive conforms to: ATA/ATAPI-5 published, ANSI NCITS 340-2000:

 * signifies the current active mode

The bonnie numbers show for sequential output:
Good Machine Host: 76,857K/Sec
My Machine Host: 72,561K/Sec

Good Machine Centos 5.7 Guest: 66,266K/sec
My Machine Centos 5.7 Guest: 20,623K/sec
My machine Mandrake Guest: 1,365K/sec

Where should I look?  I realize I have two different front-ends to KVM, and
perhaps they are passing different parameters to it.  I am also running KVM
on CentOS 6.2 on my machine, whereas the other server runs 5.7, but I would
have thought that newer is better.  Also note that my hard drives have a
larger cache.


On a side note, I'm not thrilled with Virtualizor's tech support, but the
product seems easy to use, once it actually works.  Cloudmin seems to be
buggy, and doesn't let you do things like change CD images on the fly or
access the console before the machine fully boots (!)... Any suggestions on
other, preferably open-source, options?  I'm a definite newbie to this
virtualization stuff.

Bob




Re: [CentOS] Really bad KVM disk performance

2012-02-19 Thread Wuxi Ixuw
How much did you pay for this?

On 20/02/2012 07:26 AM, Bob Puff wrote:
 [...]


Re: [CentOS] Really bad KVM disk performance

2012-02-19 Thread nux
Bob Puff writes:

 /dev/hda:
 

It should be /dev/vda... /dev/hda means you're not taking advantage of KVM's 
paravirtualisation capabilities, which probably explains the poor 
performance.  Run your disk and network with virtio and they should be much 
faster.
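A quick way to tell which bus the guest's disk is on, run inside the guest:

```shell
# If no /dev/vd* nodes exist, the guest is on emulated IDE (/dev/hda)
# rather than paravirtualised virtio (/dev/vda).
ls /dev/vd* 2>/dev/null || echo "no virtio disks - running on emulated IDE"
```

In the libvirt domain XML this corresponds to a disk target of `<target dev='vda' bus='virtio'/>`. Note that the guest kernel needs virtio drivers; a guest as old as the Mandrake 8.2 one in this thread may have to stay on emulated IDE.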

In terms of management tools, check 
http://www.linux-kvm.org/page/Management_Tools

hth

--
Nux!
www.nux.ro
