I think that the best test is to:
0. Make only one change to the infrastructure at a time
1. Automatically create your VM
2. Install the necessary application on the VM from point 1
3. Restore from backup the state of the App
4. Run a typical workload on the app - for example a bunch of queries that are
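The workload in step 4 can be approximated with a synthetic fio job instead of a full database, which makes the run easy to repeat. A sketch, assuming an OLTP-like 70/30 random read/write mix of small blocks; the device path, block size, and mix are my assumptions, not from this thread:

```ini
; oltp-like.fio -- hypothetical job file; point filename at a scratch device
; WARNING: the write portion destroys data on that device
[global]
ioengine=libaio      ; Linux native async I/O
direct=1             ; O_DIRECT: bypass the page cache
sync=1               ; O_SYNC: every write must reach stable storage
bs=8k                ; small blocks, typical of OLTP database pages
runtime=60
time_based

[oltp-mix]
filename=/dev/vg_test/lv_bench   ; assumed scratch LV
rw=randrw
rwmixread=70         ; 70% reads / 30% writes
iodepth=16
```

Running the same job file with `fio oltp-like.fio` on the bare host and inside the VM would isolate the virtualization overhead from the storage itself.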
On Thu, Jul 23, 2020 at 07:25:14AM -0700, Philip Brown wrote:
> Usually in that kind of situation, if you don't turn on sync-to-disk on every
> write, you get benchmarks that are artificially HIGH.
> Forcing O_DIRECT slows throughput down.
> Don't you think the results are bad enough already? :-}
Getting meaningful results is more important than getting good results. If
the benchmark is not meaningful, it is not useful towards fixing the issue.
Did you try virtio-blk with direct LUN?
On Thu, Jul 23, 2020 at 16:35, Philip Brown wrote:
> I'm in the middle of a priority issue right
I'm in the middle of a priority issue right now, so I can't take time out to rerun
the bench, but...
Usually in that kind of situation, if you don't turn on sync-to-disk on every
write, you get benchmarks that are artificially HIGH.
Forcing O_DIRECT slows throughput down.
Don't you think the results are bad enough already? :-}
On Tue, Jul 21, 2020 at 2:20 AM Philip Brown wrote:
> Yes, I am testing small writes. "oltp workload" means simulation of OLTP
> database access.
> You asked me to test the speed of iSCSI from another host, which is very
> reasonable. So here are the results,
> run from another node in the
Do you have NICs that support iSCSI? If so, I guess you can use hardware offloading.
What MTU size are you using?
Latency is usually the killer of any performance; what is your round-trip time?
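Both are easy to check from the host. Illustrative commands only; the interface name and target address below are placeholders, not from this thread:

```shell
# MTU of the storage NIC (jumbo frames would show 9000)
ip link show eth1 | grep -o 'mtu [0-9]*'

# Verify jumbo frames actually pass end-to-end without fragmentation:
# 8972 = 9000 - 20 (IP header) - 8 (ICMP header); -M do forbids fragmenting
ping -M do -s 8972 -c 4 10.0.0.5

# The ping summary (min/avg/max/mdev) is the round-trip time; on a local
# 10G link you would expect well under a millisecond.
```

If the large ping fails with "message too long", some hop in the path is not configured for jumbo frames and the NIC-level MTU setting is not actually in effect.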
On 21 July 2020 at 2:37:10 GMT+03:00, Philip Brown wrote:
>Ah! My apologies. It
Ah! My apologies. It seemed very odd, so I reviewed and discovered that I had
messed up my testing of direct LUN.
The updated results are improved over my previous email, but no better than
going through the normal storage domain.
18156: 61.714: IO Summary: 110396 ops, 1836.964 ops/s, (921/907
FYI, I just tried it with direct lun.
It is as bad or worse.
I don't know about SG_IO vs. the QEMU initiator, but here are the results.
15223: 62.824: IO Summary: 83751 ops, 1387.166 ops/s, (699/681 r/w), 2.7mb/s,
619us cpu/op, 281.4ms latency
15761: 62.268: IO Summary: 77610 ops, 1287.908
Yes, I am testing small writes. "oltp workload" means simulation of OLTP
database access.
You asked me to test the speed of iSCSI from another host, which is very
reasonable. So here are the results,
run from another node in the oVirt cluster.
Setup is using:
- exact same vg device, exported
On Mon, Jul 20, 2020 at 23:42, Nir Soffer wrote:
> I think you will get the best performance using direct LUN.
Is direct LUN using the QEMU iSCSI initiator, or SG_IO, and if so is it
using /dev/sg or has that been fixed? SG_IO is definitely not going to be
the fastest, especially with /dev/sg.
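For context, the two attachment modes look roughly like this in libvirt domain XML. This is a sketch; the IQN, portal address, and device path are made up for illustration:

```xml
<!-- Direct LUN through the QEMU userspace iSCSI initiator (libiscsi) -->
<disk type='network' device='lun'>
  <driver name='qemu' type='raw'/>
  <source protocol='iscsi' name='iqn.2020-07.com.example:storage/1'>
    <host name='10.0.0.5' port='3260'/>
  </source>
  <target dev='sda' bus='scsi'/>
</disk>

<!-- Host-attached LUN passed through with SG_IO -->
<disk type='block' device='lun' sgio='unfiltered'>
  <driver name='qemu' type='raw'/>
  <source dev='/dev/disk/by-id/scsi-360000000000000000e00000000010001'/>
  <target dev='sdb' bus='scsi'/>
</disk>
```

In the SG_IO case, QEMU issues SCSI commands through the host's SCSI generic layer, an extra pass-through hop that the userspace initiator avoids.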
On Mon, Jul 20, 2020 at 8:51 PM Philip Brown wrote:
> I'm trying to get optimal iSCSI performance. We're a heavy iSCSI shop, with
> a 10G network.
> I'm experimenting with SSDs, and the performance in oVirt is way, way less
> than I would have hoped.
> More than an order of magnitude slower.