Oh, just to be clear:

2012/4/27 Pádraig Brady <[email protected]>:
> Your suggestion of parallelizing the reads on the local
> host might help, especially when the processing of each
> block takes time, i.e. it would be more beneficial for
> gzip than for cp. You could get much the same parallelization
> for cp by using tar to continuously read and buffer, like:
>
>  tar -c /cifs/ | (cd dest && tar -xp)
>
> This would be better for the local system,
> as you would have a single process writing to
> the dest, rather than multiple writers competing
> for that resource.
>
> Note that the latency/throughput of the above is dictated by
> the pipe buffer in the kernel, which is 1M on my system
> (/proc/sys/fs/pipe-max-size).
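
The same pipeline could presumably be given a larger userspace buffer
than the 1M kernel pipe, e.g. with pv if it happens to be installed
(just a sketch, not something I've measured; the 64 MiB figure is
arbitrary):

  # sketch only: pv, if available, sits between the reading and writing
  # tar processes with a 64 MiB transfer buffer (-q suppresses progress)
  tar -c /cifs/ | pv -q -B 67108864 | (cd dest && tar -xp)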

The source is one .iso file.  Local fs contention is not an issue, since
I am sure that whatever throughput I manage to attain won't approach
the capacity of a local disk drive.  Finally, if the resulting file winds up
with a scattered allocation, I can always clean that up with a local
copy; copying a 4GB file locally takes no time at all by comparison.
My goal is to try to fill the pipe with CIFS data.
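
For concreteness, the sort of thing I have in mind looks roughly like
this (the mount point, file name and sizes below are placeholders for a
4GB image): read fixed byte ranges in parallel over CIFS, then let a
single local writer assemble the result:

  # read four 1 GiB ranges of the (hypothetical) image concurrently;
  # bs=1M with skip=i*1024 starts each reader 1 GiB further into the file
  for i in 0 1 2 3; do
      dd if=/cifs/image.iso of=part.$i bs=1M skip=$((i*1024)) count=1024 &
  done
  wait
  # a single local process writes the destination, as suggested above
  cat part.0 part.1 part.2 part.3 > dest.iso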
