Re: [ovirt-users] Ovirt with bad IO performance

2016-09-07 Thread Yaniv Kaul
On Tue, Sep 6, 2016 at 5:26 PM, Gabriel Ozaki wrote:

> Hi Yaniv
>
> These results are average for sysbench; my machine, for example, gets
> 1.3905Mb/sec. I don't know how this test really works, so I will read up
> on it.
>
> So I tried a *bonnie++ test* (reference:
> http://support.commgate.net/index.php?/Knowledgebase/Article/View/212 ):
>
> Xenserver speeds:
> Write speed: 91076 KB/sec
> ReWrite speed: 57885 KB/sec
> Read speed: 215457 KB/sec (Strange, too high)
> Num of Blocks: 632.4
>
> Ovirt Speeds:
> Write speed: 111597 KB/sec (22% more than XenServer)
> ReWrite speed: 73402 KB/sec (26% more than XenServer)
> Read speed:  121537 KB/sec (44% less than XenServer)
> Num of Blocks: 537.2 (15% less than XenServer)
>
>
> result: a draw?
>

Perhaps - depends on what you wish to measure.
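For reference, a typical bonnie++ invocation behind numbers like these (a
sketch - the target directory is a placeholder, and -s, the file size in
MB, is set to about twice RAM so the page cache cannot hold the working
set) would be:

  bonnie++ -d /dados/teste -s 16384 -u root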


>
>
> And a *dd test* (reference: https://romanrm.net/dd-benchmark ):
> [root@xenserver teste]# echo 2 > /proc/sys/vm/drop_caches  && sync
> [root@xenserver teste]# dd bs=1M count=256 if=/dev/zero of=test
> conv=fdatasync
>

fdatasync is the wrong choice - the writes still go through the page cache
and are only flushed at the end (but again, I'm not sure what you are
trying to measure). You should use direct IO (oflag=direct) if you are
interested in pure IO data path performance.
Note that most applications do not:
1. Write sequentially (especially not VMs).
2. Write 1MB blocks.
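For illustration, a cache-bypassing variant of the same write test (a
sketch, reusing the block size and count from above) would be:

  dd if=/dev/zero of=test bs=1M count=256 oflag=direct

As an aside, echo 2 > /proc/sys/vm/drop_caches only frees dentries and
inodes; the usual cache-cold preparation is to sync first and then echo 3,
which drops the page cache as well.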

> 256+0 records in
> 256+0 records out
> 268435456 bytes (268 MB) copied, 1.40111 s, 192 MB/s (again, too high)
>

Perhaps the disk is caching?
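One way to check (a sketch - /dev/sdX stands for the actual backing
device) is the drive's volatile write cache:

  hdparm -W /dev/sdX     # show whether the drive write cache is enabled
  hdparm -W0 /dev/sdX    # disable it temporarily, then re-run the dd test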


>
> [root@ovirt teste]# echo 2 > /proc/sys/vm/drop_caches  && sync
> [root@ovirt teste]# dd bs=1M count=256 if=/dev/zero of=test conv=fdatasync
> 256+0 records in
> 256+0 records out
> 268435456 bytes (268 MB) copied, 2.31288 s, 116 MB/s (really fair - the
> host result is 124 MB/s)
>
>
> *hdparm* (FAIL on XenServer)
> [root@xenserver teste]# hdparm -Tt /dev/xvda1
>
> /dev/xvda1:
>  Timing cached reads:   25724 MB in  2.00 seconds = 12882.77 MB/sec
 Timing buffered disk reads: 2984 MB in  3.00 seconds = 994.43 MB/sec (8
> times the expected value; something is very wrong)
>
> [root@ovirt teste]# hdparm -Tt /dev/vda1
>
> /dev/vda1:
>  Timing cached reads:   25042 MB in  2.00 seconds = 12540.21 MB/sec
 Timing buffered disk reads: 306 MB in  3.01 seconds = 101.66 MB/sec (OK
> result)
>
>
> There is something strange in XenServer affecting the results; probably
> the best choice is to close the thread and start studying benchmarks
>

Agreed. It's not easy, it's sometimes more art than science, but first of
all you need to define exactly what you wish to benchmark.
I warmly suggest you look more into real-life applications rather than
synthetic benchmarks, but if you insist, I warmly recommend fio (
https://github.com/axboe/fio).
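For example, a random-I/O run that bypasses the page cache (a sketch - the
file name, size and queue depth are arbitrary starting points) could look
like:

  fio --name=randrw --filename=test.fio --size=1G --bs=4k --rw=randrw \
      --rwmixread=70 --ioengine=libaio --direct=1 --iodepth=16 \
      --runtime=60 --time_based --group_reporting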

HTH,
Y.


Re: [ovirt-users] Ovirt with bad IO performance

2016-09-06 Thread Gabriel Ozaki
Hi Yaniv

These results are average for sysbench; my machine, for example, gets
1.3905Mb/sec. I don't know how this test really works, so I will read up
on it.

So I tried a *bonnie++ test* (reference:
http://support.commgate.net/index.php?/Knowledgebase/Article/View/212 ):

Xenserver speeds:
Write speed: 91076 KB/sec
ReWrite speed: 57885 KB/sec
Read speed: 215457 KB/sec (Strange, too high)
Num of Blocks: 632.4

Ovirt speeds:
Write speed: 111597 KB/sec (22% more than XenServer)
ReWrite speed: 73402 KB/sec (26% more than XenServer)
Read speed:  121537 KB/sec (44% less than XenServer)
Num of Blocks: 537.2 (15% less than XenServer)


result: a draw?


And a *dd test* (reference: https://romanrm.net/dd-benchmark ):
[root@xenserver teste]# echo 2 > /proc/sys/vm/drop_caches  && sync
[root@xenserver teste]# dd bs=1M count=256 if=/dev/zero of=test
conv=fdatasync
256+0 records in
256+0 records out
268435456 bytes (268 MB) copied, 1.40111 s, 192 MB/s (again, too high)

[root@ovirt teste]# echo 2 > /proc/sys/vm/drop_caches  && sync
[root@ovirt teste]# dd bs=1M count=256 if=/dev/zero of=test conv=fdatasync
256+0 records in
256+0 records out
268435456 bytes (268 MB) copied, 2.31288 s, 116 MB/s (really fair - the
host result is 124 MB/s)


*hdparm* (FAIL on XenServer)
[root@xenserver teste]# hdparm -Tt /dev/xvda1

/dev/xvda1:
 Timing cached reads:   25724 MB in  2.00 seconds = 12882.77 MB/sec
 Timing buffered disk reads: 2984 MB in  3.00 seconds = 994.43 MB/sec (8
times the expected value; something is very wrong)

[root@ovirt teste]# hdparm -Tt /dev/vda1

/dev/vda1:
 Timing cached reads:   25042 MB in  2.00 seconds = 12540.21 MB/sec
 Timing buffered disk reads: 306 MB in  3.01 seconds = 101.66 MB/sec (OK
result)
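For interpretation (a note on what hdparm measures, per its man page): -T
times reads straight from the Linux page cache, so the ~12.5 GB/sec figures
are memory bandwidth and expected to be similar on both platforms; -t is
meant to be a cache-flushed sequential read from the device, and a single
7200 rpm 500 GB SATA drive of this class sustains roughly 150 MB/sec at
best - so 994.43 MB/sec has to be coming from a caching layer below the
guest.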


There is something strange in XenServer affecting the results; probably
the best choice is to close the thread and start studying benchmarks

Thanks

Re: [ovirt-users] Ovirt with bad IO performance

2016-09-05 Thread Yaniv Kaul
On Mon, Sep 5, 2016 at 1:45 PM, Gabriel Ozaki wrote:

> Hi Yaniv and Sandro
>
> The disk is in the same machine as ovirt-engine
>

I'm looking back at your results, and something is terribly wrong there:
For example, sysbench:

Host result:      2.9843Mb/sec
Ovirt result:     1.1561Mb/sec
Xenserver result: 2.9006Mb/sec

This is slower than USB 1.1 disk-on-key performance (USB 1.1 full speed is
12 Mbit/s, about 1.5 MB/s). I don't know what to make of it, but it's
completely bogus. Even plain QEMU can get better results than this.
And the 2nd benchmark:


*The novabench test:*
Ovirt result:     79Mb/s
Xenserver result: 101Mb/s

This is better, but if those are megabits it is still very slow: translated
to MB/s it's ~10-12 MB/s. If, however, this is already MB/sec, then it
makes sense - and is probably as much as you can get from a single
spindle.
The difference between XenServer and oVirt more likely has to do with
caching than anything else. I don't know what the caching settings of
XenServer are - can you ensure no caching ('direct IO') is used?
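One quick apples-to-apples check on both guests (a sketch - 'test' stands
for any large file on the data disk, e.g. the one created by dd above) is
an O_DIRECT read:

  dd if=test of=/dev/null bs=1M count=256 iflag=direct

With the page cache bypassed on the read side, both guests should converge
toward the backing disk's real throughput.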



> Thanks

Re: [ovirt-users] Ovirt with bad IO performance

2016-09-05 Thread Gabriel Ozaki
Hi Yaniv and Sandro

The disk is in the same machine as ovirt-engine

Thanks

Re: [ovirt-users] Ovirt with bad IO performance

2016-09-02 Thread Yaniv Kaul
On Fri, Sep 2, 2016 at 6:11 PM, Gabriel Ozaki wrote:

> Hi Yaniv
>
> Sorry guys, I didn't explain it well in my first mail: I noticed bad IO
> performance in *disk* benchmarks; the network is working really fine
>

But where is the disk? If it's across the network, then the network is
involved and is certainly a bottleneck.
Y.



Re: [ovirt-users] Ovirt with bad IO performance

2016-09-02 Thread Gabriel Ozaki
Hi Yaniv

Sorry guys, I didn't explain it well in my first mail: I noticed bad IO
performance in *disk* benchmarks; the network is working really fine


Re: [ovirt-users] Ovirt with bad IO performance

2016-09-02 Thread Yaniv Kaul
On Fri, Sep 2, 2016 at 5:33 PM, Gabriel Ozaki wrote:

> Hi Nir, thanks for the answer
>
>
> *The nfs server is in the host?*
> Yes, I chose NFS as the storage for the oVirt host
>
> *- Is this 2.9GiB/s or 2.9 MiB/s?*
> It is MiB/s; I put the full tests on Pastebin
> centos guest on ovirt:
> http://pastebin.com/d48qfvuf
>
> centos guest on xenserver:
> http://pastebin.com/gqN3du29
>
> how the test works:
> https://www.howtoforge.com/how-to-benchmark-your-system-cpu-file-io-mysql-with-sysbench
>
> *- Are you testing using NFS in all versions?*
> I am using NFS v3
>
> *- What is the disk format?*
> partition   size           format
> /           20 GB          xfs
> swap        2 GB           xfs
> /dados      rest of disk   xfs   (note: this is the partition where I save
> the ISOs, exports and VM disks)
>
>
> *- How do you test io on the host?*
> I did a clean install of CentOS and ran the test before installing oVirt;
> the test:
>
> *- What kind of nic is used? (1G, 10G?)*
> It is only 100Mbps :(
>

100Mbps will not get you more than several MB/s (100 Mbit/s is 12.5 MB/s
of raw bandwidth, before TCP and NFS overhead) - 11MB/s on a very bright
day...

>
> *We need many more details to understand what you are testing here.*
> I had problems uploading the oVirt benchmark result to the novabench
> site, so here is the screenshot (I made a mistake in the last email and
> got the wrong value); it is 86 Mb/s:
>

Which is not possible on the wire. Unless it's VM to VM? And the storage is
local, which means it's the bandwidth of the physical disk itself?
Y.
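For scale (a rough sanity check): if that figure is really 86 MB/s, it is
about seven times what a 100 Mbit link can carry (~12.5 MB/s raw), yet
well within the ~100-160 MB/s sequential range of a single 7200 rpm SATA
disk - consistent with local disk bandwidth rather than the network.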



>
> ​
> And the novabench on xenserver:
> https://novabench.com/compare.php?id=ba8dd628e4042dfc1f3d39670b164a
> b11061671
>
> *- For Xenserver - detailed description of the vm and the storage
> configuration?*
> The host is the same(i install xenserver, do the tests before i install
> centos), the VM i use the same configuration of ovirt, 2 cores, 4 Gb of ram
> and 60 Gb disk(in the default xenserver SR)
>
> *- For ovirt, can you share the vm command line, available in
> /var/log/libvirt/qemu/vmname.**log?*
> 2016-09-01 12:50:28.268+: starting up libvirt version: 1.2.17,
> package: 13.el7_2.5 (CentOS BuildSystem ,
> 2016-06-23-14:23:27, worker1.bsys.centos.org), qemu version: 2.3.0
> (qemu-kvm-ev-2.3.0-31.el7.16.1)
> LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
> QEMU_AUDIO_DRV=none /usr/libexec/qemu-kvm -name vmcentos -S -machine
> pc-i440fx-rhel7.2.0,accel=kvm,usb=off -cpu Haswell-noTSX -m
> size=4194304k,slots=16,maxmem=4294967296k -realtime mlock=off -smp
> 2,maxcpus=16,sockets=16,cores=1,threads=1 -numa
> node,nodeid=0,cpus=0-1,mem=4096 -uuid 21872e4b-7699-4502-b1ef-2c058eff1c3c
> -smbios type=1,manufacturer=oVirt,product=oVirt Node,version=7-2.1511.el7.
> centos.2.10,serial=03AA02FC-0414-05F8-D906-710700080009,
> uuid=21872e4b-7699-4502-b1ef-2c058eff1c3c -no-user-config -nodefaults
> -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-
> vmcentos/monitor.sock,server,nowait -mon 
> chardev=charmonitor,id=monitor,mode=control
> -rtc base=2016-09-01T09:50:28,driftfix=slew -global
> kvm-pit.lost_tick_policy=discard -no-hpet -no-shutdown -boot strict=on
> -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
> virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x3 -device
> virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x4
> -drive file=/rhev/data-center/mnt/ovirt.kemi.intranet:_dados_
> iso/52ee9f87-9d38-48ec-8003-193262f81994/images/-
> ---/CentOS-7-x86_64-NetInstall-
> 1511.iso,if=none,id=drive-ide0-1-0,readonly=on,format=raw -device
> ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0,bootindex=2
> -drive file=/rhev/data-center/0001-0001-0001-0001-
> 02bb/4ccdd1f3-ee79-4425-b6ed-5774643003fa/images/
> 2ecfcf18-ae84-4e73-922f-28b9cda9e6e1/800f05bf-23f7-
> 4c9d-8c1d-b2503592875f,if=none,id=drive-virtio-disk0,
> format=raw,serial=2ecfcf18-ae84-4e73-922f-28b9cda9e6e1,
> cache=none,werror=stop,rerror=stop,aio=threads -device
> virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-
> virtio-disk0,id=virtio-disk0,bootindex=1 -chardev
> socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/
> 21872e4b-7699-4502-b1ef-2c058eff1c3c.com.redhat.rhevm.vdsm,server,nowait
> -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=
> charchannel0,id=channel0,name=com.redhat.rhevm.vdsm -chardev
> socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/
> 21872e4b-7699-4502-b1ef-2c058eff1c3c.org.qemu.guest_agent.0,server,nowait
> -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=
> charchannel1,id=channel1,name=org.qemu.guest_agent.0 -device
> usb-tablet,id=input0 -vnc 192.168.0.189:0,password -k pt-br -device
> VGA,id=video0,vgamem_mb=16,bus=pci.0,addr=0x2 -device
> virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5 -msg timestamp=on
> 2016-09-01T12:50:28.307173Z qemu-kvm: warning: CPU(s) not present in any
> NUMA nodes: 2 3 4 5 6 7 8 9 10 11 12 13 14 15
> 2016-09-01T12:50:28.307371Z qemu-kvm: warning: All 

Re: [ovirt-users] Ovirt with bad IO performance

2016-09-02 Thread Gabriel Ozaki
Hi Nir, thanks for the answer


*The nfs server is in the host?*
Yes, I chose NFS as the storage for the oVirt host

*- Is this 2.9GiB/s or 2.9 MiB/s?*
It is MiB/s; I put the full tests on Pastebin
centos guest on ovirt:
http://pastebin.com/d48qfvuf

centos guest on xenserver:
http://pastebin.com/gqN3du29

how the test works:
https://www.howtoforge.com/how-to-benchmark-your-system-cpu-file-io-mysql-with-sysbench

*- Are you testing using NFS in all versions?*
I am using NFS v3
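For completeness, one way to confirm the negotiated version on the client
(a sketch using the standard nfs-utils tool) is:

  nfsstat -m    # lists each NFS mount with its options, including vers=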

*- What is the disk format?*
partition   size           format
/           20 GB          xfs
swap        2 GB           xfs
/dados      rest of disk   xfs   (note: this is the partition where I save
the ISOs, exports and VM disks)


*- How do you test io on the host?*
I did a clean install of CentOS and ran the test before installing oVirt;
the test:
http://pastebin.com/7RKU7778

*- What kind of nic is used? (1G, 10G?)*
It is only 100Mbps :(

*We need many more details to understand what you are testing here.*
I had problems uploading the oVirt benchmark result to the novabench site,
so here is the screenshot (I made a mistake in the last email and got the
wrong value); it is 86 Mb/s:

[screenshot]
And the novabench on xenserver:
https://novabench.com/compare.php?id=ba8dd628e4042dfc1f3d39670b164ab11061671

*- For Xenserver - detailed description of the vm and the storage
configuration?*
The host is the same (I installed XenServer and ran those tests before
installing CentOS); the VM uses the same configuration as on oVirt: 2
cores, 4 GB of RAM and a 60 GB disk (in the default XenServer SR)

*- For ovirt, can you share the vm command line, available in
/var/log/libvirt/qemu/vmname.log?*
2016-09-01 12:50:28.268+: starting up libvirt version: 1.2.17, package:
13.el7_2.5 (CentOS BuildSystem ,
2016-06-23-14:23:27, worker1.bsys.centos.org), qemu version: 2.3.0
(qemu-kvm-ev-2.3.0-31.el7.16.1)
LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
QEMU_AUDIO_DRV=none /usr/libexec/qemu-kvm -name vmcentos -S -machine
pc-i440fx-rhel7.2.0,accel=kvm,usb=off -cpu Haswell-noTSX -m
size=4194304k,slots=16,maxmem=4294967296k -realtime mlock=off -smp
2,maxcpus=16,sockets=16,cores=1,threads=1 -numa
node,nodeid=0,cpus=0-1,mem=4096 -uuid 21872e4b-7699-4502-b1ef-2c058eff1c3c
-smbios type=1,manufacturer=oVirt,product=oVirt
Node,version=7-2.1511.el7.centos.2.10,serial=03AA02FC-0414-05F8-D906-710700080009,uuid=21872e4b-7699-4502-b1ef-2c058eff1c3c
-no-user-config -nodefaults -chardev
socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-vmcentos/monitor.sock,server,nowait
-mon chardev=charmonitor,id=monitor,mode=control -rtc
base=2016-09-01T09:50:28,driftfix=slew -global
kvm-pit.lost_tick_policy=discard -no-hpet -no-shutdown -boot strict=on
-device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x3 -device
virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x4 -drive
file=/rhev/data-center/mnt/ovirt.kemi.intranet:_dados_iso/52ee9f87-9d38-48ec-8003-193262f81994/images/----/CentOS-7-x86_64-NetInstall-1511.iso,if=none,id=drive-ide0-1-0,readonly=on,format=raw
-device
ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0,bootindex=2 -drive
file=/rhev/data-center/0001-0001-0001-0001-02bb/4ccdd1f3-ee79-4425-b6ed-5774643003fa/images/2ecfcf18-ae84-4e73-922f-28b9cda9e6e1/800f05bf-23f7-4c9d-8c1d-b2503592875f,if=none,id=drive-virtio-disk0,format=raw,serial=2ecfcf18-ae84-4e73-922f-28b9cda9e6e1,cache=none,werror=stop,rerror=stop,aio=threads
-device
virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
-chardev
socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/21872e4b-7699-4502-b1ef-2c058eff1c3c.com.redhat.rhevm.vdsm,server,nowait
-device
virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
-chardev
socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/21872e4b-7699-4502-b1ef-2c058eff1c3c.org.qemu.guest_agent.0,server,nowait
-device
virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0
-device usb-tablet,id=input0 -vnc 192.168.0.189:0,password -k pt-br -device
VGA,id=video0,vgamem_mb=16,bus=pci.0,addr=0x2 -device
virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5 -msg timestamp=on
2016-09-01T12:50:28.307173Z qemu-kvm: warning: CPU(s) not present in any
NUMA nodes: 2 3 4 5 6 7 8 9 10 11 12 13 14 15
2016-09-01T12:50:28.307371Z qemu-kvm: warning: All CPU(s) up to maxcpus
should be described in NUMA config
qemu: terminating on signal 15 from pid 1
2016-09-01 19:13:47.899+: shutting down


Thanks








Re: [ovirt-users] Ovirt with bad IO performance

2016-09-02 Thread Nir Soffer
On Fri, Sep 2, 2016 at 4:44 PM, Gabriel Ozaki wrote:

> Hi
> I am trying oVirt 4.0 and I am getting some strange results when comparing
> with XenServer
>
> *The host machine*
> Intel Core i5-4440 3.10GHz running at 3093 MHz
> 8 GB of RAM (1x8)
> 500 GB of disk (Seagate ST500DM002, 7200rpm)
> CentOS 7 (netinstall for the most updated and stable packages)
>
>
>
> *How I am testing:*
> I chose two benchmark tools, sysbench (EPEL repo on CentOS) and novabench
> (for the Windows guest, https://novabench.com ); then I made a clean
> install of XenServer and created two guests (CentOS and Windows 7 SP1)
>
>
> *The guest specs*
> 2 cores
> 4 GB of RAM
> 60 GB of disk (using virtio on NFS storage)
>

Is the nfs server on the host?


> Important note: only the guest being tested is up during the benchmark,
> and I have installed the drivers in the guest
>
>
> *The sysbench disk test (creates 10GB of data and runs the bench):*
> # sysbench --test=fileio --file-total-size=10G prepare
> # sysbench --test=fileio --file-total-size=10G --file-test-mode=rndrw
> --init-rng=on --max-time=300 --max-requests=0 run
>
> Host result:      2.9843Mb/sec
> Ovirt result:     1.1561Mb/sec
> Xenserver result: 2.9006Mb/sec
>

- Is this 2.9GiB/s or 2.9 MiB/s?
- Are you testing using NFS in all versions?
- What is the disk format?
- How do you test io on the host?
- What kind of nic is used? (1G, 10G?)


>
> *The novabench test:*
> Ovirt result:  79Mb/s
> Xenserver result:  101Mb/s
>

We need many more details to understand what you are testing here.

- For ovirt, can you share the vm command line, available in
/var/log/libvirt/qemu/vmname.log?
- For Xenserver - detailed description of the vm and the storage
configuration?

Nir
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users