On 06/11/14 21:26, Nick Holland wrote:
> On 06/11/14 15:55, Christian Weisgerber wrote:
>> On 2014-06-11, Peter Fraser <[email protected]> wrote:
> ...
>>> Also for dd the block size has always been a puzzle.
>> 
>> For accessing a raw device you want it to be a multiple of the
>> sector size of the device (512 bytes for most disks) and there is
>> usually no point in making it bigger than MAXPHYS (64k on OpenBSD),
>> i.e., the maximal size of a single I/O transfer the kernel handles;
>> larger reads or writes will be broken up into multiple transfers.
> 
> I've heard this a number of times...and yet my testing on hardware I've
> had in front of me (i.e., "your throughput may vary") has shown that
> bs=1M does give substantially better throughput when zeroing disks than
> 32k, and the last time I did extensive testing on this, sizes larger than
> 1MB gave even better throughput, though the return gets very small after
> around 1MB -- so I usually use 1MB so a "pkill -INFO dd" will give me an
> indication of the progress in easy to read terms, which I find more
> useful than a 1% reduction in time.
> 
> I'm just reporting an observation, not explaining it. :)
> 
> Nick.
> 

AAAANNNDDD...  It was pointed out I missed the fact that my example
(reading from /dev/zero) is quite different from reading from another disk.
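For anyone who wants to try the comparison on their own hardware, a
sketch (my own, not from the thread): /dev/null stands in for the output
so only the read/copy path is measured; point of= at a real raw device
(e.g. /dev/rsd0c on OpenBSD) to reproduce the disk numbers.

```shell
# Sketch: push the same 64 MB through dd at several block sizes and
# compare the throughput dd reports on its final status line.
# Writing to /dev/null isolates syscall/copy overhead; substitute a real
# raw device for of= to see the device effects discussed above.
for bs in 512 32768 1048576; do               # one sector, 32k, 1M
    echo "bs=$bs"
    dd if=/dev/zero of=/dev/null bs="$bs" count=$((67108864 / bs)) 2>&1 | tail -n 1
done
```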

So...your results WILL vary...

(I still like 1M block size for purpose of -INFO output ... but that
wasn't the question)
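For the record, the -INFO trick looks like this (a sketch; on OpenBSD dd
answers SIGINFO, while GNU/Linux dd listens for SIGUSR1 instead):

```shell
# Sketch of the progress check mentioned above.  OpenBSD dd prints its
# status on SIGINFO (pkill -INFO dd); GNU/Linux dd uses SIGUSR1.
# With a 1 MB block size every record is 1 MB, so the "records out"
# count in the status line reads directly as megabytes written.
dd if=/dev/zero of=/dev/null bs=1048576 &     # stand-in for a long disk wipe
pid=$!
sleep 2
kill -INFO "$pid" 2>/dev/null || kill -USR1 "$pid"   # dd reports to stderr
sleep 1
kill "$pid"                                   # stop the demo copy
```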

Nick.
