Both of them.
Reading two 100M files in an interleaved way with a 16K buffer: 62MB/s
Reading two 700M files in an interleaved way with a 16K buffer: 9MB/s

Reading two 100M files in an interleaved way with a 1M buffer: 55MB/s
(somehow gets worse with the larger buffer)

Reading two 700M files in an interleaved way with a 1M buffer: 34MB/s
(gets better with the larger buffer, but there is still a difference: 55 vs 34)
I cannot find the reason for this. gstat(8) also shows low transfer rates
when reading large files in an interleaved way, but not for small files.
On Sun, Feb 22, 2009 at 5:20 PM, Wojciech Puchar <
>> That's true. Using bigger buffer will help, but it doesn't tell why reading
>> large size file is slower than reading small size file.
> really slower? or just bigger difference with large files?
>> On Sat, Feb 21, 2009 at 5:56 PM, Wojciech Puchar <
>> woj...@wojtek.tensor.gdynia.pl> wrote:
>>>> I'm just guessing inode structure, the physical file location on HDD
>>>> might be related to this. But, if I read only one file, the size
>>>> doesn't matter. Reading file (10M, 100M, 700M) gives constantly about
>>>> 70MB/s, and the weird thing happens when I read 2 files of big size.
>>> if you use O_DIRECT it's read from disk exactly as you specified, without
>>> readahead, so you do a lot of seeks.
>>> simply use bigger buffer like 1MB