[ovirt-users] Re: very very bad iscsi performance

2020-07-24 Thread Strahil Nikolov via Users
I think that the best test is to:
0. Make only one change to the infrastructure at a time
1. Automatically create your VM
2. Install the necessary application on the VM from point 1
3. Restore from backup the state of the App
4. Run a typical workload on the app - for example a bunch of queries that are 
pushed against a typical DB
5. Measure performance during point 4 (for example time of execution, as sketched below)
6. Start over
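
For step 5, a minimal sketch of timing such a run from the shell (the database name and query file are placeholders for whatever your app uses):

  time mysql benchdb < typical_queries.sql    # wall-clock time of the query batch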

Anything else  is a waste of time.

Best Regards,
Strahil Nikolov


On 24 July 2020, 13:26:18 GMT+03:00, Stefan Hajnoczi  
wrote:
>On Thu, Jul 23, 2020 at 07:25:14AM -0700, Philip Brown wrote:
>> Usually in that kind of situation, if you don't turn on sync-to-disk
>> on every write, you get benchmarks that are artificially HIGH.
>> Forcing O_DIRECT slows throughput down.
>> Don't you think the results are bad enough already? :-}
>
>The results that were posted do not show iSCSI performance in isolation
>so it's hard to diagnose the problem.
>
>The page cache is used when the O_DIRECT flag is absent. I/O is not
>sent
>to the disk at all when it can be fulfilled from the page cache in
>memory. Therefore the benchmark is not an accurate indicator of disk
>I/O
>performance.
>
>In addition to this, page cache behavior depends on various factors
>such
>as available free memory, operating system implementation and version,
>etc. This makes it hard to compare results across VMs, different
>machines, etc.
>
>Stefan
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/WH4KFRSFGJCKRVM2VUJGKY26DEX5R2DH/


[ovirt-users] Re: very very bad iscsi performance

2020-07-24 Thread Stefan Hajnoczi
On Thu, Jul 23, 2020 at 07:25:14AM -0700, Philip Brown wrote:
> Usually in that kind of situation, if you dont turn on sync-to-disk on every 
> write, you get benchmarks that are artificially HIGH.
> Forcing O_DIRECT slows throughput down.
> Dont you think the results are bad enough already? :-}

The results that were posted do not show iSCSI performance in isolation
so it's hard to diagnose the problem.

The page cache is used when the O_DIRECT flag is absent. I/O is not sent
to the disk at all when it can be fulfilled from the page cache in
memory. Therefore the benchmark is not an accurate indicator of disk I/O
performance.

In addition to this, page cache behavior depends on various factors such
as available free memory, operating system implementation and version,
etc. This makes it hard to compare results across VMs, different
machines, etc.
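
For illustration, the effect is easy to see with dd (the target file path is a placeholder, and the file will be overwritten):

  dd if=/dev/zero of=/mnt/test/ddfile bs=4k count=100000                # buffered: much of it may land in the page cache
  dd if=/dev/zero of=/mnt/test/ddfile bs=4k count=100000 oflag=direct   # O_DIRECT: every write goes to the disk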

Stefan


signature.asc
Description: PGP signature
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QZCYB2SRTSMF6RL3S7HH5U4J6MXZDHV3/


[ovirt-users] Re: very very bad iscsi performance

2020-07-23 Thread Paolo Bonzini
Getting meaningful results is more important than getting good results. If
the benchmark is not meaningful, it is not useful towards fixing the issue.

Did you try virtio-blk with direct LUN?
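
For reference, a rough sketch of what a direct LUN attached as virtio-blk looks like at the qemu level (the multipath device path and ids are placeholders; oVirt/libvirt generate the real flags):

  -drive file=/dev/mapper/3600a098038304437,format=raw,if=none,id=drive-lun0,cache=none,aio=native \
  -device virtio-blk-pci,drive=drive-lun0,id=lun0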

Paolo

On Thu 23 Jul 2020, 16:35 Philip Brown  wrote:

> I'm in the middle of a priority issue right now, so I can't take time out to
> rerun the bench, but...
> Usually in that kind of situation, if you don't turn on sync-to-disk on
> every write, you get benchmarks that are artificially HIGH.
> Forcing O_DIRECT slows throughput down.
> Don't you think the results are bad enough already? :-}
>
> - Original Message -
> From: "Stefan Hajnoczi" 
> To: "Philip Brown" 
> Cc: "Nir Soffer" , "users" ,
> "qemu-block" , "Paolo Bonzini" ,
> "Sergio Lopez Pascual" , "Mordechai Lehrer" <
> mleh...@redhat.com>, "Kevin Wolf" 
> Sent: Thursday, July 23, 2020 6:09:39 AM
> Subject: Re: [BULK]  Re: [ovirt-users] very very bad iscsi performance
>
>
> Hi,
> At first glance it appears that the filebench OLTP workload does not use
> O_DIRECT, so this isn't a measurement of pure disk I/O performance:
> https://github.com/filebench/filebench/blob/master/workloads/oltp.f
>
> If you suspect that disk performance is the issue please run a benchmark
> that bypasses the page cache using O_DIRECT.
>
> The fio setting is direct=1.
>
> Here is an example fio job for 70% read/30% write 4KB random I/O:
>
>   [global]
>   filename=/path/to/device
>   runtime=120
>   ioengine=libaio
>   direct=1
>   ramp_time=10    # start measuring after warm-up time
>
>   [read]
>   readwrite=randrw
>   rwmixread=70
>   rwmixwrite=30
>   iodepth=64
>   blocksize=4k
>
> (Based on
> https://blog.vmsplice.net/2017/11/common-disk-benchmarking-mistakes.html)
>
> Stefan
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/SENWWFESXYPGJY3IOV276RQDRMX6GDTH/


[ovirt-users] Re: very very bad iscsi performance

2020-07-23 Thread Philip Brown
I'm in the middle of a priority issue right now, so I can't take time out to rerun
the bench, but...
Usually in that kind of situation, if you don't turn on sync-to-disk on every
write, you get benchmarks that are artificially HIGH.
Forcing O_DIRECT slows throughput down.
Don't you think the results are bad enough already? :-}

- Original Message -
From: "Stefan Hajnoczi" 
To: "Philip Brown" 
Cc: "Nir Soffer" , "users" , "qemu-block" 
, "Paolo Bonzini" , "Sergio Lopez 
Pascual" , "Mordechai Lehrer" , "Kevin 
Wolf" 
Sent: Thursday, July 23, 2020 6:09:39 AM
Subject: Re: [BULK]  Re: [ovirt-users] very very bad iscsi performance


Hi,
At first glance it appears that the filebench OLTP workload does not use
O_DIRECT, so this isn't a measurement of pure disk I/O performance:
https://github.com/filebench/filebench/blob/master/workloads/oltp.f

If you suspect that disk performance is the issue please run a benchmark
that bypasses the page cache using O_DIRECT.

The fio setting is direct=1.

Here is an example fio job for 70% read/30% write 4KB random I/O:

  [global]
  filename=/path/to/device
  runtime=120
  ioengine=libaio
  direct=1
  ramp_time=10    # start measuring after warm-up time

  [read]
  readwrite=randrw
  rwmixread=70
  rwmixwrite=30
  iodepth=64
  blocksize=4k

(Based on 
https://blog.vmsplice.net/2017/11/common-disk-benchmarking-mistakes.html)
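
As a usage sketch (the job file name is arbitrary): save the job above as oltp-mix.fio and run it against the device; the read/write IOPS and latency figures in fio's summary are the numbers to compare between bare host and VM:

  fio oltp-mix.fio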

Stefan
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5LCSEPHJP4GW6PAUN4ZUAJWEQUEJ6J7F/


[ovirt-users] Re: very very bad iscsi performance

2020-07-21 Thread Nir Soffer
On Tue, Jul 21, 2020 at 2:20 AM Philip Brown  wrote:
>
> Yes, I am testing small writes. "OLTP workload" means a simulation of OLTP
> database access.
>
> You asked me to test the speed of iscsi from another host, which is very 
> reasonable. So here are the results,
> run from another node in the ovirt cluster.
> Setup is using:
>
>  - exact same vg device, exported via iscsi
>  - mounted directly into another physical host running centos 7, rather than 
> a VM running on it
> -   literally the same filesystem, again, mounted noatime
>
> I ran the same OLTP workload. This setup gives the following results over 2
> runs.
>
>  grep Summary oltp.iscsimount.?
> oltp.iscsimount.1:35906: 63.433: IO Summary: 648762 ops, 10811.365 ops/s, 
> (5375/5381 r/w),  21.4mb/s,475us cpu/op,   1.3ms latency
> oltp.iscsimount.2:36830: 61.072: IO Summary: 824557 ops, 13741.050 ops/s, 
> (6844/6826 r/w),  27.2mb/s,429us cpu/op,   1.1ms latency
>
>
> As requested, I attach virsh output, and qemu log

What we see in your logs:

You are using:
- a thin-provisioned disk (qcow2 image on a logical volume)
- virtio-scsi

[libvirt <disk> XML snippet; the markup was stripped in the archive. What remains shows the qcow2 volume with serial 47af0207-8b51-4a59-a93e-fddf9ed56d44, attached through the virtio-scsi controller.]

-object iothread,id=iothread1 \
-device 
virtio-scsi-pci,iothread=iothread1,id=ua-a50f193d-fa74-419d-bf03-f5a2677acd2a,bus=pci.0,addr=0x5
\
-drive 
file=/rhev/data-center/mnt/blockSD/87cecd83-d6c8-4313-9fad-12ea32768703/images/47af0207-8b51-4a59-a93e-fddf9ed56d44/743550ef-7670-4556-8d7f-4d6fcfd5eb70,format=qcow2,if=none,id=drive-ua-47af0207-8b51-4a59-a93e-fddf9ed56d44,serial=47af0207-8b51-4a59-a93e-fddf9ed56d44,werror=stop,rerror=stop,cache=none,aio=native
\

This is the most flexible option oVirt has, but not the default.

A known issue with such a disk is that the VM may be paused when the disk
becomes full, if oVirt cannot extend the underlying logical volume fast
enough. This can be mitigated by using larger extension chunks in vdsm.

We recommend these settings if you are going to run VMs with heavy I/O on
thin disks:

# cat /etc/vdsm/vdsm.conf.d/99-local.conf
[irs]

# Together with volume_utilization_chunk_mb, set the minimal free
# space before a thin provisioned block volume is extended. Use lower
# values to extend earlier.
# default value:
# volume_utilization_percent = 50
volume_utilization_percent = 25

# Size of extension chunk in megabytes, and together with
# volume_utilization_percent, set the free space limit. Use higher
# values to extend in bigger chunks.
# default value:
# volume_utilization_chunk_mb = 1024
volume_utilization_chunk_mb = 4096
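
A minimal sketch of applying this on a host (assuming the standard vdsmd systemd service; vdsm only reads the configuration at startup, so restart it during a quiet window or with the host in maintenance):

  printf '[irs]\nvolume_utilization_percent = 25\nvolume_utilization_chunk_mb = 4096\n' > /etc/vdsm/vdsm.conf.d/99-local.conf
  systemctl restart vdsmd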


With this configuration, when free space on the disk is 1 GiB, oVirt will extend
the disk by 4 GiB. So your disk may be up to 5 GiB larger than the used space,
but if the VM is writing data very fast, the chance of pausing is reduced.

If you want to reduce the chance of pausing your database at the busiest times
to zero, a preallocated disk is the way to go.

In oVirt 4.4 you can check this option when creating a disk:

[x] Enable Incremental Backup

With:

Allocation Policy: [Preallocated]

You will get a preallocated disk of the specified size, using qcow2 format.
This gives you the option to use incremental backup, faster disk operations
in oVirt (since qemu-img does not need to read the entire disk), and it avoids
the pausing issue. It may also defeat thin provisioning, but if your backend
storage supports thin provisioning anyway, that does not matter.

For a database use case, a preallocated volume should give the best
performance.

Please try to benchmark:
- raw preallocated disk
- using virtio instead of virtio-scsi

If your database can use multiple disks, you may get better performance by
adding multiple disks and using one iothread per disk, as sketched below.
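
At the qemu level this corresponds to something like the following (device paths and ids are placeholders; oVirt and libvirt generate the real command line):

  -object iothread,id=iothread1 \
  -object iothread,id=iothread2 \
  -drive file=/dev/vg_ssd/db_data,format=raw,if=none,id=drive-db-data,cache=none,aio=native \
  -drive file=/dev/vg_ssd/db_log,format=raw,if=none,id=drive-db-log,cache=none,aio=native \
  -device virtio-blk-pci,drive=drive-db-data,iothread=iothread1 \
  -device virtio-blk-pci,drive=drive-db-log,iothread=iothread2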

See also interesting talk about storage performance from 2017:
https://events19.lfasiallc.com/wp-content/uploads/2017/11/Storage-Performance-Tuning-for-FAST-Virtual-Machines_Fam-Zheng.pdf
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5F7CMTKPCVLV4WICTBBN63YXSW6ADRYO/


[ovirt-users] Re: very very bad iscsi performance

2020-07-21 Thread Strahil Nikolov via Users
Do you have NICs that support iSCSI hardware offloading? I guess you could use it.

What is your MTU size?
Latency is usually the killer of any performance; what is your round-trip time?
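
For illustration (the NIC name and portal IP are placeholders), MTU and round-trip time to the iSCSI portal can be checked with:

  ip -d link show eth0 | grep -o 'mtu [0-9]*'   # current MTU on the storage NIC
  ping -c 20 10.0.0.10                          # average rtt to the iSCSI portal
  ping -c 5 -M do -s 8972 10.0.0.10             # only succeeds end to end if jumbo frames (MTU 9000) are in place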


Best Regards,
Strahil Nikolov


On 21 July 2020, 2:37:10 GMT+03:00, Philip Brown  wrote:
>AH! my apologies. It seemed very odd, so I reviewed, and discovered
>that I messed up my testing of direct lun.
>
>updated results are improved from my previous email, but not any better
>than going through normal storage domain.
>
>18156: 61.714: IO Summary: 110396 ops, 1836.964 ops/s, (921/907 r/w),  
>3.6mb/s,949us cpu/op,  27.3ms latency
>
>17095: 61.794: IO Summary: 123458 ops, 2052.922 ops/s, (1046/996 r/w), 
> 4.0mb/s,858us cpu/op,  60.4ms latency
>
>
>
>- Original Message -
>From: "Philip Brown" 
>To: "Paolo Bonzini" 
>Cc: "Nir Soffer" , "users" ,
>"qemu-block" , "Stefan Hajnoczi"
>, "Sergio Lopez Pascual" ,
>"Mordechai Lehrer" 
>Sent: Monday, July 20, 2020 4:30:32 PM
>Subject: Re: [ovirt-users] very very bad iscsi performance
>
>FYI, I just tried it with direct lun.
>
>it is as bad or worse.
>I don't know about that SG_IO vs qemu initiator question, but here are the
>results.
>
>
>15223: 62.824: IO Summary: 83751 ops, 1387.166 ops/s, (699/681 r/w),  
>2.7mb/s,619us cpu/op, 281.4ms latency
>15761: 62.268: IO Summary: 77610 ops, 1287.908 ops/s, (649/632 r/w),  
>2.5mb/s,686us cpu/op, 283.0ms latency
>16397: 61.812: IO Summary: 94065 ops, 1563.781 ops/s, (806/750 r/w),  
>3.0mb/s,894us cpu/op, 217.3ms latency
>
>___
>Users mailing list -- users@ovirt.org
>To unsubscribe send an email to users-le...@ovirt.org
>Privacy Statement: https://www.ovirt.org/privacy-policy.html
>oVirt Code of Conduct:
>https://www.ovirt.org/community/about/community-guidelines/
>List Archives:
>https://lists.ovirt.org/archives/list/users@ovirt.org/message/MAVOLDANXJCE6VEVZVYY7S4TUFF2BXEN/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ATR3WHSD6RWLRWUHOQZXVSHSGXV6PTD6/


[ovirt-users] Re: very very bad iscsi performance

2020-07-20 Thread Philip Brown
AH! my apologies. It seemed very odd, so I reviewed, and discovered that I 
messed up my testing of direct lun.

updated results are improved from my previous email, but not any better than 
going through normal storage domain.

18156: 61.714: IO Summary: 110396 ops, 1836.964 ops/s, (921/907 r/w),   
3.6mb/s,949us cpu/op,  27.3ms latency

17095: 61.794: IO Summary: 123458 ops, 2052.922 ops/s, (1046/996 r/w),   
4.0mb/s,858us cpu/op,  60.4ms latency



- Original Message -
From: "Philip Brown" 
To: "Paolo Bonzini" 
Cc: "Nir Soffer" , "users" , "qemu-block" 
, "Stefan Hajnoczi" , "Sergio Lopez 
Pascual" , "Mordechai Lehrer" 
Sent: Monday, July 20, 2020 4:30:32 PM
Subject: Re: [ovirt-users] very very bad iscsi performance

FYI, I just tried it with direct lun.

it is as bad or worse.
I don't know about that SG_IO vs qemu initiator question, but here are the results.


15223: 62.824: IO Summary: 83751 ops, 1387.166 ops/s, (699/681 r/w),   2.7mb/s, 
   619us cpu/op, 281.4ms latency
15761: 62.268: IO Summary: 77610 ops, 1287.908 ops/s, (649/632 r/w),   2.5mb/s, 
   686us cpu/op, 283.0ms latency
16397: 61.812: IO Summary: 94065 ops, 1563.781 ops/s, (806/750 r/w),   3.0mb/s, 
   894us cpu/op, 217.3ms latency

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MAVOLDANXJCE6VEVZVYY7S4TUFF2BXEN/


[ovirt-users] Re: very very bad iscsi performance

2020-07-20 Thread Philip Brown
FYI, I just tried it with direct lun.

it is as bad or worse.
I don't know about that SG_IO vs qemu initiator question, but here are the results.


15223: 62.824: IO Summary: 83751 ops, 1387.166 ops/s, (699/681 r/w),   2.7mb/s, 
   619us cpu/op, 281.4ms latency
15761: 62.268: IO Summary: 77610 ops, 1287.908 ops/s, (649/632 r/w),   2.5mb/s, 
   686us cpu/op, 283.0ms latency
16397: 61.812: IO Summary: 94065 ops, 1563.781 ops/s, (806/750 r/w),   3.0mb/s, 
   894us cpu/op, 217.3ms latency




- Original Message -
From: "Paolo Bonzini" 
To: "Nir Soffer" 
Cc: "Philip Brown" , "users" , "qemu-block" 
, "Stefan Hajnoczi" , "Sergio Lopez 
Pascual" , "Mordechai Lehrer" 
Sent: Monday, July 20, 2020 3:46:39 PM
Subject: Re: [ovirt-users] very very bad iscsi performance

On Mon 20 Jul 2020, 23:42 Nir Soffer  wrote:

> I think you will get the best performance using direct LUN.


Is direct LUN using the QEMU iSCSI initiator, or SG_IO, and if so is it
using /dev/sg or has that been fixed? SG_IO is definitely not going to be
the fastest, especially with /dev/sg.

Storage
> domain is best if you want
> to use features provided by storage domain. If your important feature
> is performance, you want
> to connect the storage in the most direct way to your VM.
>

Agreed but you want a virtio-blk device, not SG_IO; direct LUN with SG_IO
is only recommended if you want to do clustering and other stuff that
requires SCSI-level access.

Paolo
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GL62RUMHMFCUHCJOG2X6IZC6BWZAB6X5/


[ovirt-users] Re: very very bad iscsi performance

2020-07-20 Thread Philip Brown
Yes, I am testing small writes. "OLTP workload" means a simulation of OLTP
database access.

You asked me to test the speed of iscsi from another host, which is very 
reasonable. So here are the results,
run from another node in the ovirt cluster.
Setup is using:

 - exact same vg device, exported via iscsi
 - mounted directly into another physical host running centos 7, rather than a 
VM running on it
 -   literally the same filesystem, again, mounted noatime

I ran the same OLTP workload. This setup gives the following results over 2
runs.

 grep Summary oltp.iscsimount.?
oltp.iscsimount.1:35906: 63.433: IO Summary: 648762 ops, 10811.365 ops/s, 
(5375/5381 r/w),  21.4mb/s,475us cpu/op,   1.3ms latency
oltp.iscsimount.2:36830: 61.072: IO Summary: 824557 ops, 13741.050 ops/s, 
(6844/6826 r/w),  27.2mb/s,429us cpu/op,   1.1ms latency


As requested, I attach virsh output, and qemu log

2020-07-19 19:11:42.310+: starting up libvirt version: 4.5.0, package: 33.el7_8.1 (CentOS BuildSystem , 2020-05-12-16:25:35, x86-01.bsys.centos.org), qemu version: 2.12.0qemu-kvm-ev-2.12.0-44.1.el7_8.1, kernel: 3.10.0-1127.8.2.el7.x86_64, hostname: xx
LC_ALL=C \
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin \
QEMU_AUDIO_DRV=spice \
/usr/libexec/qemu-kvm \
-name guest=dbtest01,debug-threads=on \
-S \
-object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-22-dbtest01/master-key.aes \
-machine pc-i440fx-rhel7.6.0,accel=kvm,usb=off,dump-guest-core=off \
-cpu Westmere \
-m size=2097152k,slots=16,maxmem=8388608k \
-realtime mlock=off \
-smp 1,maxcpus=16,sockets=16,cores=1,threads=1 \
-object iothread,id=iothread1 \
-numa node,nodeid=0,cpus=0,mem=2048 \
-uuid b5210980-8343-41c0-978e-08b81a8242a8 \
-smbios 'type=1,manufacturer=oVirt,product=oVirt Node,version=7-8.2003.0.el7.centos,serial=30313436-3631-5355-4534-313658343653,uuid=b5210980-8343-41c0-978e-08b81a8242a8' \
-no-user-config \
-nodefaults \
-chardev socket,id=charmonitor,fd=33,server,nowait \
-mon chardev=charmonitor,id=monitor,mode=control \
-rtc base=2020-07-19T19:11:41,driftfix=slew \
-global kvm-pit.lost_tick_policy=delay \
-no-hpet \
-no-shutdown \
-global PIIX4_PM.disable_s3=1 \
-global PIIX4_PM.disable_s4=1 \
-boot menu=on,splash-time=3,strict=on \
-device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 \
-device virtio-scsi-pci,iothread=iothread1,id=ua-a50f193d-fa74-419d-bf03-f5a2677acd2a,bus=pci.0,addr=0x5 \
-device virtio-serial-pci,id=ua-9b8cfdd3-c5a1-4237-82e2-08c6e15c3f74,max_ports=16,bus=pci.0,addr=0x4 \
-drive if=none,id=drive-ua-fd6685fe-1b28-4827-97cf-79a186b9144a,werror=report,rerror=report,readonly=on \
-device ide-cd,bus=ide.1,unit=0,drive=drive-ua-fd6685fe-1b28-4827-97cf-79a186b9144a,id=ua-fd6685fe-1b28-4827-97cf-79a186b9144a \
-drive file=/rhev/data-center/mnt/blockSD/87cecd83-d6c8-4313-9fad-12ea32768703/images/47af0207-8b51-4a59-a93e-fddf9ed56d44/743550ef-7670-4556-8d7f-4d6fcfd5eb70,format=qcow2,if=none,id=drive-ua-47af0207-8b51-4a59-a93e-fddf9ed56d44,serial=47af0207-8b51-4a59-a93e-fddf9ed56d44,werror=stop,rerror=stop,cache=none,aio=native \
-device scsi-hd,bus=ua-a50f193d-fa74-419d-bf03-f5a2677acd2a.0,channel=0,scsi-id=0,lun=0,drive=drive-ua-47af0207-8b51-4a59-a93e-fddf9ed56d44,id=ua-47af0207-8b51-4a59-a93e-fddf9ed56d44,bootindex=1,write-cache=on \
-netdev tap,fd=35,id=xx \
-device virtio-net-pci,host_mtu=1500,netdev=hostua-82b631ec-5c54-4246-8594-16be324c8de2,id=ua-82b631ec-5c54-4246-8594-16be324c8de2,mac=00:1a:4a:16:01:5c,bus=pci.0,addr=0x3,bootindex=2 \
-chardev socket,id=charchannel0,fd=37,server,nowait \
-device virtserialport,bus=ua-9b8cfdd3-c5a1-4237-82e2-08c6e15c3f74.0,nr=1,chardev=charchannel0,id=channel0,name=ovirt-guest-agent.0 \
-chardev socket,id=charchannel1,fd=38,server,nowait \
-device virtserialport,bus=ua-9b8cfdd3-c5a1-4237-82e2-08c6e15c3f74.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0 \
-chardev spicevmc,id=charchannel2,name=vdagent \
-device virtserialport,bus=ua-9b8cfdd3-c5a1-4237-82e2-08c6e15c3f74.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0 \
-device usb-tablet,id=input0,bus=usb.0,port=1 \
-vnc  \
-k en-us \
-spice port=5903,tls-port=5904,addr=xxx,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on \
-device qxl-vga,id=ua-4c9d4b68-dead-4e24-9b6a-6b4a2f1c2e28,ram_size=67108864,vram_size=33554432,vram64_size_mb=0,vgamem_mb=16,max_outputs=1,bus=pci.0,addr=0x2 \
-device virtio-balloon-pci,id=ua-9834a02d-0002-4414-96a2-6671de6532f4,bus=pci.0,addr=0x6 \
-object rng-random,id=objua-c94709ab-0d34-4e91-a158-32602e6b2294,filename=/dev/urandom \
-device virtio-rng-pci,rng=objua-c94709ab-0d34-4e91-a158-32602e6b2294,id=ua-c94709ab-0d34-4e91-a158-32602e6b2294,bus=pci.0,addr=0x7 \
-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=de

[ovirt-users] Re: very very bad iscsi performance

2020-07-20 Thread Paolo Bonzini
On Mon 20 Jul 2020, 23:42 Nir Soffer  wrote:

> I think you will get the best performance using direct LUN.


Is direct LUN using the QEMU iSCSI initiator, or SG_IO, and if so is it
using /dev/sg or has that been fixed? SG_IO is definitely not going to be
the fastest, especially with /dev/sg.

Storage
> domain is best if you want
> to use features provided by storage domain. If your important feature
> is performance, you want
> to connect the storage in the most direct way to your VM.
>

Agreed but you want a virtio-blk device, not SG_IO; direct LUN with SG_IO
is only recommended if you want to do clustering and other stuff that
requires SCSI-level access.

Paolo


> Mordechai, did we do any similar performance tests in our lab?
> Do you have example results?
>
> Nir
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MXDNCWDFL4NCCYIHCKHOAHIU2HXTBXZY/


[ovirt-users] Re: very very bad iscsi performance

2020-07-20 Thread Nir Soffer
On Mon, Jul 20, 2020 at 8:51 PM Philip Brown  wrote:
>
> I'm trying to get optimal iscsi performance. We're a heavy iscsi shop, with 
> 10g net.
>
> I'm experimenting with SSDs, and the performance in oVirt is way, way less 
> than I would have hoped.
> More than an order of magnitude slower.
>
> here's a datapoint.
> I'm running filebench with the OLTP workload.

Did you try fio?
https://fio.readthedocs.io/en/latest/fio_doc.html

I think this is the most common and advanced tool for such tests.
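
A command-line sketch of that kind of run (the device path is a placeholder; direct=1 bypasses the page cache, and writing to a raw device destroys its contents):

  fio --name=oltp-mix --filename=/dev/sdX --direct=1 --ioengine=libaio \
      --rw=randrw --rwmixread=70 --bs=4k --iodepth=64 --runtime=120 --time_based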

> First, I run it on one of the hosts that has an SSD directly attached.
> I create an xfs filesystem (created on a vg "device" on top of the SSD), mount
> it with noatime, and run the benchmark.
>
>
> 37166: 74.084: IO Summary: 3746362 ops, 62421.629 ops/s, (31053/31049 r/w), 
> 123.6mb/s,161us cpu/op,   1.1ms latency

What do you get if you log in to the target on the host and access the
LUN directly on the host?

If you create a file system on the LUN and mount it on the host?
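
A sketch of that bare-host test (portal IP, target IQN, and device name are placeholders; mkfs destroys whatever is on the LUN):

  iscsiadm -m discovery -t sendtargets -p 10.0.0.10
  iscsiadm -m node -T iqn.2020-07.example:ssd-lun -p 10.0.0.10 --login
  mkfs.xfs /dev/sdX
  mount -o noatime /dev/sdX /mnt/test
  # then run the same filebench/fio workload against /mnt/test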

> I then unmount it, and make the exact same device an iscsi target, and create 
> a storage domain with it.
> I then create a disk for a VM running *on the same host*, and run the 
> benchmark.

What kind of disk? thin? preallocated?

> The same thing: filebench, oltp workload, xfs filesystem, noatime.
>
>
> 13329: 91.728: IO Summary: 153548 ops, 2520.561 ops/s, (1265/1243 r/w),   
> 4.9mb/s,289us cpu/op,  88.4ms latency

4.9mb/s looks very low. Are you testing very small random writes?

> 62,000 ops/s vs 2500 ops/s.
>
> what
>
>
> Someone might be tempted to say, "try making the device directly available, 
> AS a device, to the VM".
> Unfortunately, this is not an option.
> My goal is specifically to put together a new, high performing storage 
> domain, that I can use as database devices in VMs.

This is something to discuss with qemu folks. oVirt is just an easy
way to manage VMs.

Please attach the VM XML using:
virsh -r dumpxml vm-name-or-id

And the qemu command line from:
/var/log/libvirt/qemu/vm-name.log

I think you will get the best performance using direct LUN. A storage domain
is best if you want to use the features provided by storage domains. If your
most important feature is performance, you want to connect the storage in the
most direct way to your VM.

Mordechai, did we do any similar performance tests in our lab?
Do you have example results?

Nir
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/SGQ7K6K4URDSS54B2IC7GHD5NZYPG66N/