Fluffles wrote:

> If you use dd on the raw device (meaning no UFS/VFS) there is no
> read-ahead. This means that the first dd command below will give a
> lower STR (sequential transfer rate) read than the second:
>
> no read-ahead:
> dd if=/dev/mirror/data of=/dev/null bs=1m count=1000
> read-ahead and multiple I/O queue depth:
> dd if=/mounted/mirror/volume of=/dev/null bs=1m count=1000

I'd agree in theory, but bonnie++ gives WORSE results than the raw device:

Version 1.93c       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
xxxx.xxxx.xx     1G   305  99 59135  15 21350   7   501  99 57480  11 478.5 13
Latency             27325us   63238us     535ms   45347us   68125us    2393ms
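
For reference, output like the above comes from a plain sequential
bonnie++ run; something along these lines should reproduce it (the
mount point and user are illustrative, not the exact ones used here):

# illustrative invocation: -d target directory, -s working-set size,
# -u user to run the tests as (required when started as root)
bonnie++ -d /mounted/mirror/volume -s 1g -u root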

And pumping vfs.read_max to an obscene value doesn't really help:

# sysctl vfs.read_max=256
vfs.read_max: 16 -> 256

Version 1.93c       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
xxxx.xxxx.xx     1G   305  99 57718  15 18758   6   500  99 60900  13 467.8 13
Latency             27325us   89977us   99594us   36706us   71907us   90021us
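
(For completeness: if the setting were worth keeping, the standard way
to persist it across reboots is /etc/sysctl.conf, same value as the
test above:)

# /etc/sysctl.conf -- applied at boot by rc(8)
vfs.read_max=256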

I've also experimented with increasing MAXPHYS (to 256K) before, and
that didn't help either.
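
For anyone wanting to repeat that experiment: on this vintage of
FreeBSD, MAXPHYS is a compile-time kernel option rather than a runtime
tunable, so raising it means rebuilding the kernel. A sketch, assuming
amd64 and a placeholder config name MYKERNEL:

# copy GENERIC, bump MAXPHYS to 256K, then rebuild and install
cd /usr/src/sys/amd64/conf
cp GENERIC MYKERNEL
echo 'options MAXPHYS=(256*1024)' >> MYKERNEL
cd /usr/src
make buildkernel KERNCONF=MYKERNEL
make installkernel KERNCONF=MYKERNEL
# reboot afterwards to run the new kernel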

