On 9/27/05, m ike <[EMAIL PROTECTED]> wrote:
> On 9/27/05, Carl Lowenstein <[EMAIL PROTECTED]> wrote:
> > On 9/27/05, m ike <[EMAIL PROTECTED]> wrote:
> > > for extracting a portion of a file, the dd command can be hastened
> > > dramatically (by a factor of 10,000) by changing the block size to
> > > bs=1024 (for example), increasing count= so the larger blocks still
> > > cover the whole range, and then piping the result to head -c to
> > > trim it down to the exact byte size.
> > >
> > > 10,000 may be an exaggeration. okay it is an exaggeration. but it
> > > does not seem to be far off.
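> > >
> > > e.g. (untested sketch; somefile and the numbers are made up): to
> > > grab 5000 bytes starting at byte offset 1048576 (conveniently a
> > > multiple of 1024):
> > >
> > >     dd if=somefile bs=1024 skip=1024 count=5 2>/dev/null | head -c 5000
> > >
> > > instead of the byte-at-a-time equivalent:
> > >
> > >     dd if=somefile bs=1 skip=1048576 count=5000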
> >
> > There are two reasons for using large block sizes in dd.  One is to
> > eliminate the overhead of, e.g., issuing a million system calls each
> > to read one byte, vs. one system call to read a million bytes.  The
> > other is to reduce the effect of missing the "next block" in a disk
> > read.  If you have to wait for a whole disk revolution to read a
> > block, your data transfer slows down in proportion to the number of
> > blocks per track.  Nowadays this can range from 600 at the inner
> > radius to 1200 at the outer.  (These are real physical blocks, not
> > the fictional blocks that LBA software uses.)
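> >
> > A quick way to see the system-call cost by itself (timings will vary
> > by machine; /dev/zero and /dev/null keep the disk out of it):
> >
> >     $ time dd if=/dev/zero of=/dev/null bs=1 count=1000000
> >     $ time dd if=/dev/zero of=/dev/null bs=1000000 count=1
> >
> > Both move the same million bytes, but the first makes a million
> > read()/write() round trips and the second makes one.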
>
> fwiw, afaict, when one is grabbing a specific hunk within
> a file, the largest bs= that can be specified is the greatest
> common divisor of the starting byte offset and the byte count,
> since both skip= and count= have to come out as whole blocks.
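>
> e.g. (untested; somefile and the numbers are invented): for a hunk
> starting at byte offset 6144 and running 4096 bytes, gcd(6144, 4096)
> = 2048, so the biggest block size that keeps skip= and count= whole
> is:
>
>     dd if=somefile bs=2048 skip=3 count=2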

Not if you first grab a large chunk and then skip and count within it
for a smaller selection.
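
For example (an untested sketch; somefile and the offsets are invented):
to grab 1000 bytes starting at byte offset 5243003, read one large 1 MB
block that covers the range, then trim inside the pipe (here with
tail/head, though a second dd with a small bs= works the same way;
tail -c +N starts output at byte N):

    dd if=somefile bs=1048576 skip=5 count=1 2>/dev/null | \
        tail -c +124 | head -c 1000

That pulls file bytes 5242880..6291455, drops the first 123, and keeps
the next 1000, landing exactly on 5243003..5244002 with no divisibility
constraint on bs= at all.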

    carl
--
    carl lowenstein         marine physical lab     u.c. san diego
                                                 [EMAIL PROTECTED]

