I'm in the middle of a priority issue right now, so I can't take time out to
rerun the benchmark, but...
Usually in that kind of situation, if you don't turn on sync-to-disk on every
write, you get benchmarks that are artificially HIGH.
Forcing O_DIRECT slows throughput down.
Don't you think the results are bad enough already? :-}
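To illustrate the point, here is a rough sketch comparing the three modes
with dd (the target path and size are placeholders, not from the original
setup):

  # buffered writes: the page cache absorbs them, so numbers look inflated
  dd if=/dev/zero of=/tmp/ddtest bs=4k count=262144

  # sync-to-disk on every write (O_DSYNC): each write hits stable storage
  dd if=/dev/zero of=/tmp/ddtest bs=4k count=262144 oflag=dsync

  # O_DIRECT: bypass the page cache entirely
  dd if=/dev/zero of=/tmp/ddtest bs=4k count=262144 oflag=direct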

----- Original Message -----
From: "Stefan Hajnoczi" <stefa...@redhat.com>
To: "Philip Brown" <pbr...@medata.com>
Cc: "Nir Soffer" <nsof...@redhat.com>, "users" <users@ovirt.org>, "qemu-block" 
<qemu-bl...@nongnu.org>, "Paolo Bonzini" <pbonz...@redhat.com>, "Sergio Lopez 
Pascual" <s...@redhat.com>, "Mordechai Lehrer" <mleh...@redhat.com>, "Kevin 
Wolf" <kw...@redhat.com>
Sent: Thursday, July 23, 2020 6:09:39 AM
Subject: Re: [BULK]  Re: [ovirt-users] very very bad iscsi performance


Hi,
At first glance it appears that the filebench OLTP workload does not use
O_DIRECT, so this isn't a measurement of pure disk I/O performance:
https://github.com/filebench/filebench/blob/master/workloads/oltp.f

If you suspect that disk performance is the issue, please run a benchmark
that bypasses the page cache using O_DIRECT.

The fio setting is direct=1.

Here is an example fio job for 70% read/30% write 4KB random I/O:

  [global]
  filename=/path/to/device
  runtime=120             # seconds
  ioengine=libaio
  direct=1                # O_DIRECT, bypass the page cache
  ramp_time=10            # start measuring after warm-up time

  [read]
  readwrite=randrw
  rwmixread=70            # 70% of the mix is reads
  rwmixwrite=30           # 30% is writes
  iodepth=64
  blocksize=4k

(Based on 
https://blog.vmsplice.net/2017/11/common-disk-benchmarking-mistakes.html)
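
If it helps, a minimal way to run it, assuming the job above is saved as
randrw.fio and filename= points at the device under test (note that the
write portion will overwrite data on that device):

  $ fio randrw.fio

The read/write IOPS and completion latency (clat) lines in the output are
the figures to compare.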

Stefan
