Jeff,

I used fio in a quick benchmarking script inspired by
https://smcleod.net/benchmarking-io/:

#!/bin/bash
# Random throughput
echo "Random throughput"
sync
fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test \
    --filename=test --bs=4M --iodepth=256 --size=10G --readwrite=randread \
    --ramp_time=4
# Random IOPS
echo "Random IOPS"
sync
fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test \
    --filename=test --bs=4k --iodepth=256 --size=4G --readwrite=randread \
    --ramp_time=4
# Sequential throughput
echo "Sequential throughput"
sync
fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test \
    --filename=test --bs=4M --iodepth=256 --size=10G --readwrite=read \
    --ramp_time=4
# Sequential IOPS
echo "Sequential IOPS"
sync
fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test \
    --filename=test --bs=4k --iodepth=256 --size=4G --readwrite=read \
    --ramp_time=4
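Since you asked about the job file: I pass everything on the command line, but for reference the first run is equivalent to this fio job file (my transcription of the same parameters, not a separate config I ran):

```ini
[global]
ioengine=libaio
direct=1
gtod_reduce=1
randrepeat=1
iodepth=256
ramp_time=4
filename=test

[randread-throughput]
bs=4M
size=10G
rw=randread
```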

Performing the test you suggested, I get 128.5MiB/s. Monitoring the run, I
see that throughput is constant from start to finish and that iowait holds
steady at about 5%:

charles@hpdl380g6:~$ sudo sh -c 'time cat /mnt/data/postgresql/base/16385/* | wc -c'
[sudo] password for charles:
1.62user 179.94system 29:50.79elapsed 10%CPU (0avgtext+0avgdata
1920maxresident)k
448026264inputs+0outputs (0major+117minor)pagefaults 0swaps
241297594904
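(As a sanity check on my own number: the byte count from wc -c divided by
the elapsed time works out to the figure above, in MiB/s. A one-liner to
reproduce the arithmetic:)

```shell
# 241297594904 bytes read in 29:50.79 elapsed (= 1790.79 s).
# Convert bytes -> MiB (divide by 1048576), then divide by seconds.
awk 'BEGIN { printf "%.1f MiB/s\n", 241297594904 / 1048576 / (29*60 + 50.79) }'
# prints: 128.5 MiB/s
```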


After making the HugePages changes suggested by Rick Otten (above), I
get slightly better results (135.7MiB/s):

charles@hpdl380g6:~$ sudo sh -c 'time cat /mnt/data/postgresql/base/16385/* | wc -c'
[sudo] password for charles:
0.86user 130.84system 28:15.78elapsed 7%CPU (0avgtext+0avgdata
1820maxresident)k
471286792inputs+0outputs (1major+118minor)pagefaults 0swaps
241297594904
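(The improvement from the HugePages change, computed from the two elapsed
times above; again just my arithmetic on the numbers in this mail:)

```shell
# First run: 29:50.79 (1790.79 s); after HugePages: 28:15.78 (1695.78 s).
# Same byte count both times, so the speedup is the ratio of elapsed times.
awk 'BEGIN { printf "%.1f%% faster\n", (1790.79 / 1695.78 - 1) * 100 }'
# prints: 5.6% faster
```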


Could you suggest another way to benchmark random reads?

Thanks for your help!

Charles

On Mon, Jul 10, 2017 at 9:24 PM, Jeff Janes <jeff.ja...@gmail.com> wrote:

> On Mon, Jul 10, 2017 at 7:03 AM, Charles Nadeau <charles.nad...@gmail.com>
> wrote:
>
>>
>> The problem I have is very poor read performance. When I benchmark my array
>> with fio I get random reads of about 200MB/s and 1100 IOPS and sequential
>> reads of about 286MB/s and 21000 IOPS.
>>
>
>
> That doesn't seem right.  Sequential is only 43% faster?  What job file
> are you giving to fio?
>
> What do you get if you do something simpler, like:
>
> time cat ~/$PGDATA/base/16402/*|wc -c
>
> replacing 16402 with whatever your biggest database is.
>
> Cheers,
>
> Jeff
>



-- 
Charles Nadeau Ph.D.
http://charlesnadeau.blogspot.com/
