This is mostly musings about design/refactoring that hasn't happened.

I'm working on ar, and have gotten to the point where it calls for read-write
loops.
Yes, I know they're fairly small, and I've written a few in the past,
and I could easily write another...
But by now, we have at least four:
-two in cpio
-one in tar: void copy_in_out(int in, int out, off_t size)
-one for cp/mv/...: xsendfile(int in, int out)

I'm not counting cat, since it only does one byte at a time.
The one in dd is also not relevant, since it's got a bunch of
special requirements.

These have subtle variations in what they do:
cpio archive creation:
  write garbage and continue on short read 
  Corrupts one file, but allows the copy to continue: among other things,
  this makes cpio -p keep going even if some of the files are on bad blocks.
cpio archive extraction:
  die on short read, die on short write (reasonable)
tar copy_in_out():
  die on short read, try to avoid but ignore short write (calls writeall())
  Aborts when hitting a bad block and creating archive,
  as well as on extracting truncated archive.
  Blindly continues on running out of space.
xsendfile():
  no concept of file length, dies on write less than read.
  Will truncate files on bad blocks and continue.

I'm wondering how best to generalize this.
It seems that die on short read/write is currently the most common behavior.

But ar would seem to want a slightly different approach for some 
functions, which would not be compatible with any of the current 
archivers (ar is the only non-streaming archiver so far):
-create:
  die on short write (after deleting new archive/file?)
  indicate bytes written on short read
  This *roughly* corresponds to xsendfile(), but returning an off_t.
 I suppose I could use xsendfile() and then lseek() rather than refactoring
 xsendfile().
-extract:
  die on short read (corrupt file), die on short write (out of space).

So I guess the sensible course is to write xcopyall() and make all the
archivers use it where relevant.

Thanks,
Isaac Dunham
_______________________________________________
Toybox mailing list
[email protected]
http://lists.landley.net/listinfo.cgi/toybox-landley.net
