On Fri, Sep 25, 2020 at 01:41:39PM +0100, Dr. David Alan Gilbert wrote:
[..]
> So I'm still beating 9p; the thread-pool-size=1 seems to be great for
> read performance here.
>
Hi Dave,
I spent some time making changes to virtiofs-tests so that I can test
a mixed random read and random write workload. That testsuite runs
a workload 3 times and reports the average, so I like to use it to
reduce run-to-run variation.
So I ran the following to mimic Carlos's workload:

$ ./run-fio-test.sh test -direct=1 -c <test-dir> fio-jobs/randrw-psync.job > testresults.txt
$ ./parse-fio-results.sh testresults.txt
I am using an SSD on the host to back these files. The "-c" option always
creates new files for testing.
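
For reference, the job is roughly equivalent to an fio command line along
these lines (just a sketch; the block size, file size and read/write mix
below are my assumptions, not the exact contents of randrw-psync.job):

$ fio --name=randrw-psync --directory=<test-dir> --rw=randrw \
      --ioengine=psync --direct=1 --bs=4k --size=4G \
      --rwmixread=75 --numjobs=1 --runtime=30 --time_based
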
Following are my results in various configurations. I used cache=mmap mode
for 9p and cache=auto (and cache=none) modes for virtiofs. I also tested
9p with the default msize as well as msize=16m, and tested virtiofs with
both an exclusive and a shared thread pool. (Rough command lines for these
setups are sketched below, after the results.)

NAME                  WORKLOAD        Bandwidth(r/w)    IOPS(r/w)
9p-mmap-randrw        randrw-psync    42.8mb/14.3mb     10.7k/3666
9p-mmap-msize16m      randrw-psync    42.8mb/14.3mb     10.7k/3674
vtfs-auto-ex-randrw   randrw-psync    27.8mb/9547kb     7136/2386
vtfs-auto-sh-randrw   randrw-psync    43.3mb/14.4mb     10.8k/3709
vtfs-none-sh-randrw   randrw-psync    54.1mb/18.1mb     13.5k/4649

- Increasing msize to 16m did not help performance for this workload.
- virtiofs with the exclusive thread pool ("ex") is slower than 9p.
- virtiofs with the shared thread pool ("sh") matches the performance of 9p.
- virtiofs cache=none mode is faster than cache=auto mode for this
  workload.
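
For reference, here is roughly how these configurations are set up (a
sketch only; the share path, mount points, tag, socket path and thread
pool size are placeholders, and the exclusive vs. shared thread pool
difference comes from a virtiofsd-side change rather than a mount or
command-line option):

# 9p in the guest, cache=mmap, optionally with a larger msize
$ mount -t 9p -o trans=virtio,version=9p2000.L,cache=mmap,msize=16777216 \
      hostshare /mnt/9p

# virtiofsd on the host; switch -o cache=auto to -o cache=none for the
# cache=none runs
$ /usr/libexec/virtiofsd --socket-path=/tmp/vhostqemu \
      -o source=/path/to/testdir -o cache=auto --thread-pool-size=64 &

# virtiofs in the guest (tag matches the vhost-user-fs device's tag=)
$ mount -t virtiofs myfs /mnt/virtiofs
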
Carlos, I am looking at more ways to optimize this further for virtiofs.
In the meantime, I think switching to the "shared" thread pool should
bring you very close to 9p in your setup.
Thanks
Vivek