Re: [Qemu-devel] Great difference of disk I/O performance for guest on Qemu-2.3.0 on CentOS.

2015-06-24 Thread cauchy-love
Hi Paolo,
  I am not sure if the following information is clear:
  I have run a guest on QEMU 2.3.0 under different host kernels (linux-2.6.32 and 
higher versions) on the same PC and tested the guest's FTP performance 
(the server is a Windows machine on a remote PC). I found that the FTP download 
throughput degraded significantly starting with 2.6.39, while the upload 
performance varied only slightly. The QEMU command line is as follows:
   kvm -hda XX.img -net ... -smp 4 -m 2G -enable-kvm
I have also tried enabling Linux AIO support in QEMU, but it brought no 
improvement, and changing the smp/hda/memory configuration did not help either. 
The problem might lie in the Linux kernel itself. Could you please point out some 
suspicious QEMU functions to debug this problem?


 Thanks,
Yi.








At 2015-06-24 20:32:30, "Paolo Bonzini"  wrote:
>
>
>On 24/06/2015 14:31, cauchy-love wrote:
>> 
>>Sorry, but I don't know what "bisect" means exactly. Could you
>> please explain it? I feel some kernel configuration item might
>> introduce this problem. I am also trying other kernel versions from
>> 2.6.34 to 2.6.38 (2.6.33.3 has good performance, like 2.6.32) and will
>> show you the results.
>
>I meant exactly what you are already doing.  Thanks!
>
>Paolo
>


Re: [Qemu-devel] Great difference of disk I/O performance for guest on Qemu-2.3.0 on CentOS.

2015-06-24 Thread cauchy-love
Hi, Paolo,


   Sorry, but I don't know what "bisect" means exactly. Could you please 
explain it? I feel some kernel configuration item might introduce this 
problem. I am also trying other kernel versions from 2.6.34 to 2.6.38 (2.6.33.3 
has good performance, like 2.6.32) and will show you the results.


Thanks,


Yi








At 2015-06-24 20:20:02, "Paolo Bonzini"  wrote:
>
>
>On 24/06/2015 14:14, cauchy-love wrote:
>> I have updated the kernel of CentOS 6.5 from 2.6.32 to higher
>> versions (2.6.39+) and found the disk I/O write performance of my guest
>> OS degraded significantly (to around 1/10 of 2.6.32) while read performance
>> was not affected. That means everything is the same (i.e., hardware
>> configuration, BIOS, filesystem (ext4)) except the kernel. I would
>> greatly appreciate it if you could provide some help with debugging this
>> problem. I have tried to trace QEMU but didn't make any progress.
>
>Can you bisect it further between 2.6.32 and 2.6.39?
>
>Stefan, does it ring any bell?
>
>Paolo


Re: [Qemu-devel] Great difference of disk I/O performance for guest on Qemu-2.3.0 on CentOS.

2015-06-24 Thread Paolo Bonzini


On 24/06/2015 14:31, cauchy-love wrote:
> 
>Sorry, but I don't know what "bisect" means exactly. Could you
> please explain it? I feel some kernel configuration item might
> introduce this problem. I am also trying other kernel versions from
> 2.6.34 to 2.6.38 (2.6.33.3 has good performance, like 2.6.32) and will
> show you the results.

I meant exactly what you are already doing.  Thanks!

Paolo



Re: [Qemu-devel] Great difference of disk I/O performance for guest on Qemu-2.3.0 on CentOS.

2015-06-22 Thread cauchy-love

Thanks for your help. I don't think it is the filesystem that causes this 
problem, since when I updated the kernel of CentOS 6.5 to the CentOS 7.0 
version, the disk I/O performance was nearly the same as on CentOS 7.0.



--
Sent from my NetEase Mail mobile client


At 2015-06-19 16:37:23, "Paolo Bonzini" wrote:
>
>
>On 19/06/2015 03:03, cauchy-love wrote:
>> I have tried your QEMU command line but had no luck (the embedded OS
>> has no virtio support, but I don't think that is the problem). The
>> problem probably lies in the host kernel version, as it is the only
>> difference between my tests. I traced the guest kernel and found the ATA
>> driver always used non-DMA mode (CentOS 7 and CentOS 6.5 are the
>> same on this point).
>
>You can use perf to see what the difference is.
>
>Perhaps CentOS 6.5 used ext4 and 7 uses XFS?  Though in general XFS is
>the faster one...
>
>Paolo
>

Re: [Qemu-devel] Great difference of disk I/O performance for guest on Qemu-2.3.0 on CentOS.

2015-06-19 Thread Stefan Hajnoczi
On Fri, Jun 19, 2015 at 08:40:11AM +0800, cauchy-love wrote:
> 
> the iozone command line is:
> ./iozone -i 0 -i 1 -f ./iotmp -Rab ./iotmp.xls -g 8G -n 4G -c
> 
> The question is why the performance difference is so big for different Linux 
> kernels. The guest's I/O performance was tested with both FTP tools and raw 
> writes, both of which show much lower performance on CentOS 7 (1/10 of that on 
> CentOS 6.5). And how can I debug this problem?

You are missing the -I command-line option for O_DIRECT.

When you run without -I you are not benchmarking just disk I/O but also
page cache performance, which varies depending on the amount of RAM
available.  This means you cannot compare guest results with host
results.

Stefan
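Stefan's fix applied to the command line quoted above would look like the following sketch (same file names and sizes as the original invocation; -I is the iozone flag that opens the test file with O_DIRECT):

```shell
# Same workload as before, plus -I so iozone bypasses the page cache
# via O_DIRECT; with it, host and guest numbers become comparable.
# -i 0 = write/rewrite, -i 1 = read/reread, -c includes close() in timing.
./iozone -i 0 -i 1 -I -f ./iotmp -Rab ./iotmp.xls -g 8G -n 4G -c
```

Without -I, a host with more free RAM will absorb most of a 4G-8G working set in the page cache, which alone can explain order-of-magnitude differences between kernels.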




Re: [Qemu-devel] Great difference of disk I/O performance for guest on Qemu-2.3.0 on CentOS.

2015-06-19 Thread Paolo Bonzini


On 19/06/2015 03:03, cauchy-love wrote:
> I have tried your qemu cammand line but got no luck (the embedded os
> have no virtio support but this is not the problem i think). The
> problem probably lies in the host kernel version as it is the only
> difference for my tests. I traced the guest kernel and found the ATA
> drivers always used non-DMA mode (CentOS 7 and CentOS 6.5 are the
> same at this point).

You can use perf to see what the difference is.

Perhaps CentOS 6.5 used ext4 and 7 uses XFS?  Though in general XFS is
the faster one...

Paolo
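One hedged way to act on this perf suggestion (the exact subcommands depend on the perf version shipped with each host kernel; all file names here are illustrative):

```shell
# System-wide profile with call graphs on the host while the guest runs
# its write test; repeat on each host kernel and compare the hottest paths.
perf record -a -g -o perf-old-kernel.data -- sleep 30
perf report -i perf-old-kernel.data --stdio | head -n 40
```

Diffing the top entries of the two reports (e.g. time spent in the block layer, writeback, or KVM exit handling) usually points at the subsystem that regressed.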



Re: [Qemu-devel] Great difference of disk I/O performance for guest on Qemu-2.3.0 on CentOS.

2015-06-18 Thread cauchy-love
I have tried your QEMU command line but had no luck (the embedded OS has no 
virtio support, but I don't think that is the problem). The problem probably lies 
in the host kernel version, as it is the only difference between my tests. I traced 
the guest kernel and found the ATA driver always used non-DMA mode (CentOS 7 
and CentOS 6.5 are the same on this point). 
Yi




--
Sent from my NetEase Mail mobile client


At 2015-06-16 21:05:39, "Stefan Hajnoczi" wrote:
>On Mon, Jun 15, 2015 at 09:14:52PM +0800, cauchy-love wrote:
>>I am running an embedded OS on QEMU 2.3.0. However, I found that the I/O 
>> performance was quite different for different CentOS releases (CentOS 6.5 and 
>> CentOS 7.0). CentOS 6.5 uses linux-2.6.32 while CentOS 7.0 uses 
>> linux-3.10.0. The I/O (write) throughput of the former (CentOS 6.5) is about 
>> ten times that of the latter (CentOS 7.0). All configurations are the 
>> same except the kernel version. The command line is as follows:
>>   # qemu-kvm -m 2G -smp 4 -enable-kvm -hda guest.img ...
>>  The I/O throughput of the host is almost the same for these two different 
>> kernels, as tested with the iozone tool. Actually, I found that the I/O throughput 
>> degraded significantly starting with kernel 3.4. I also tried 
>> different QEMU versions, such as qemu-1.5.3 and qemu-2.1.3, but saw no 
>> improvement.
>
>Please post the iozone command-line you are using and the output.
>
>Your QEMU command-line uses the IDE storage controller, which is not
>optimized for performance.  Usually virtio-blk is used when good
>performance is required:
>
>  -drive if=virtio,cache=none,format=raw,aio=native,file=guest.img
>
>The guest OS needs virtio-blk device drivers in order for this to work.
>
>Stefan




Re: [Qemu-devel] Great difference of disk I/O performance for guest on Qemu-2.3.0 on CentOS.

2015-06-18 Thread cauchy-love

the iozone command line is:
./iozone -i 0 -i 1 -f ./iotmp -Rab ./iotmp.xls -g 8G -n 4G -c

The question is why the performance difference is so big for different Linux 
kernels. The guest's I/O performance was tested with both FTP tools and raw 
writes, both of which show much lower performance on CentOS 7 (1/10 of that on 
CentOS 6.5). And how can I debug this problem?
 
Thanks again for your help.

Yi



--
Sent from my NetEase Mail mobile client


At 2015-06-16 21:05:39, "Stefan Hajnoczi" wrote:
>On Mon, Jun 15, 2015 at 09:14:52PM +0800, cauchy-love wrote:
>>I am running an embedded OS on QEMU 2.3.0. However, I found that the I/O 
>> performance was quite different for different CentOS releases (CentOS 6.5 and 
>> CentOS 7.0). CentOS 6.5 uses linux-2.6.32 while CentOS 7.0 uses 
>> linux-3.10.0. The I/O (write) throughput of the former (CentOS 6.5) is about 
>> ten times that of the latter (CentOS 7.0). All configurations are the 
>> same except the kernel version. The command line is as follows:
>>   # qemu-kvm -m 2G -smp 4 -enable-kvm -hda guest.img ...
>>  The I/O throughput of the host is almost the same for these two different 
>> kernels, as tested with the iozone tool. Actually, I found that the I/O throughput 
>> degraded significantly starting with kernel 3.4. I also tried 
>> different QEMU versions, such as qemu-1.5.3 and qemu-2.1.3, but saw no 
>> improvement.
>
>Please post the iozone command-line you are using and the output.
>
>Your QEMU command-line uses the IDE storage controller, which is not
>optimized for performance.  Usually virtio-blk is used when good
>performance is required:
>
>  -drive if=virtio,cache=none,format=raw,aio=native,file=guest.img
>
>The guest OS needs virtio-blk device drivers in order for this to work.
>
>Stefan
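The "raw write" test mentioned above could, for instance, be a dd run with O_DIRECT, which bypasses the guest page cache so the two host kernels are compared on actual disk writes (a sketch with illustrative sizes and paths, not the poster's actual command):

```shell
# Write 256 MiB with O_DIRECT, flushing at the end; dd reports throughput.
# oflag=direct requires a filesystem that supports O_DIRECT (e.g. ext4).
dd if=/dev/zero of=./ddtest bs=1M count=256 oflag=direct conv=fsync
rm -f ./ddtest
```

Running this inside the guest on both host kernels, alongside Stefan's O_DIRECT iozone advice, would isolate the regression from both FTP networking effects and page-cache effects.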

Re: [Qemu-devel] Great difference of disk I/O performance for guest on Qemu-2.3.0 on CentOS.

2015-06-16 Thread Stefan Hajnoczi
On Mon, Jun 15, 2015 at 09:14:52PM +0800, cauchy-love wrote:
>    I am running an embedded OS on QEMU 2.3.0. However, I found that the I/O 
> performance was quite different for different CentOS releases (CentOS 6.5 and 
> CentOS 7.0). CentOS 6.5 uses linux-2.6.32 while CentOS 7.0 uses 
> linux-3.10.0. The I/O (write) throughput of the former (CentOS 6.5) is about 
> ten times that of the latter (CentOS 7.0). All configurations are the same 
> except the kernel version. The command line is as follows:
>   # qemu-kvm -m 2G -smp 4 -enable-kvm -hda guest.img ...
>  The I/O throughput of the host is almost the same for these two different 
> kernels, as tested with the iozone tool. Actually, I found that the I/O throughput 
> degraded significantly starting with kernel 3.4. I also tried 
> different QEMU versions, such as qemu-1.5.3 and qemu-2.1.3, but saw no 
> improvement.

Please post the iozone command-line you are using and the output.

Your QEMU command-line uses the IDE storage controller, which is not
optimized for performance.  Usually virtio-blk is used when good
performance is required:

  -drive if=virtio,cache=none,format=raw,aio=native,file=guest.img

The guest OS needs virtio-blk device drivers in order for this to work.

Stefan
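Combining Stefan's drive option with the -smp/-m settings from the original command gives something like the following sketch (guest.img is a placeholder path; the guest needs virtio-blk drivers, which the embedded OS in this thread lacks):

```shell
# IDE (-hda) replaced by a virtio-blk drive: cache=none opens the image
# with O_DIRECT on the host, and aio=native uses Linux AIO for submission.
qemu-kvm -m 2G -smp 4 -enable-kvm \
  -drive if=virtio,cache=none,format=raw,aio=native,file=guest.img
```

For a guest without virtio drivers, cache=none and aio=native can still be applied to an IDE drive (-drive if=ide,...), which at least removes host page-cache variation even though the IDE emulation itself stays slow.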

