Problem summary:
IOPS has been very unstable since I changed the number of jobs from 2 to 4.
Even after changing it back, IOPS performance does not recover.
# cat 1.fio
[global]
rw=randread
size=128m
[job1]
[job2]
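For context, this job file relies on fio defaults for everything except rw and size. Spelling out the relevant defaults (a sketch based on fio's documented default values, not anything the poster actually set) gives:

```
[global]
rw=randread      ; random reads
size=128m        ; each job lays out its own 128 MiB data file
bs=4k            ; default block size
ioengine=psync   ; default synchronous I/O engine
direct=0         ; default: buffered I/O through the page cache

[job1]
[job2]
```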
When I run fio 1.fio, IOPS is around 31k. I then add the following two
entries:
[job3]
[job4]
IOPS drops to around 1k. Even after removing these two jobs, IOPS stays
around 1k. Only if I delete all the jobn.n.0 files and re-run with the
two-job setting does IOPS return to 31k.
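The reset described above amounts to deleting fio's laid-out data files (fio names them jobname.jobnum.filenum, e.g. job1.0.0) so the next run recreates them from scratch. A minimal sketch, assuming the job file and data files sit in the current directory:

```shell
# Remove the data files fio laid out on earlier runs; rm -f is quiet
# if some of them are already gone.
rm -f job1.0.0 job2.0.0 job3.0.0 job4.0.0
# Confirm nothing matching the pattern is left (prints 0).
ls job?.0.0 2>/dev/null | wc -l
# Then re-run the two-job config:
# fio 1.fio
```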
# bash blkinfo.sh /dev/sda
Vendor : LSI
Model : MR9260-8i
Nr_request : 128
rotational : 1
It looks like you're testing against an LSI MegaRAID SAS controller,
which presumably has magnetic drives attached. When you add more jobs
to your config, it's going to cause the heads on the drives (you don't
say how many you have) to thrash more as they try to interleave requests
that land on different portions of the disk. So it's not surprising
that you see IOPS drop off.
A lot of how and where IOPS drops off is going to depend on the RAID
configuration of the drives attached to the controller, however.
Generally speaking, 31k IOPS on 128MB I/Os (which will typically be
split into something smaller, like 1MB) is well beyond what you should
expect eight HDDs to do, unless you're getting lots of hits in the DRAM
buffer on the RAID controller. Enterprise HDDs (even 15k RPM ones)
can generally sustain no more than ~250 random read IOPS each, so even
with perfect interleaving on an 8-drive RAID-0, 31k seems suspicious;
1k seems perfectly realistic, however!
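The arithmetic behind that estimate is worth making explicit (the 250 IOPS/drive ceiling and the 8-drive count are the assumptions from the paragraph above, not measured values):

```shell
# Upper-bound aggregate random-read IOPS for the hypothesised array:
drives=8            # assumed 8-drive RAID-0
iops_per_drive=250  # rough ceiling for a 15k RPM enterprise HDD
echo $(( drives * iops_per_drive ))   # prints 2000 -- far below 31k
```

So anything much above ~2k sustained random-read IOPS on such an array points at a cache (controller DRAM or the OS page cache) rather than the spindles.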
--
To unsubscribe from this list: send the line "unsubscribe fio" in
the body of a message to [email protected]
More majordomo info at http://vger.kernel.org/majordomo-info.html