On Sat, 05 Dec 2009 15:03:44 EST erik quanstrom <quans...@quanstro.net>  wrote:
> > The OS support I am talking about:
> > a) the fork behavior on an open file should be available
> >    *without* forking.  dup() doesn't cut it (both fds share
> >    the same offset on the underlying file). I'd call the new
> >    syscall fdfork().  That is, if I do
> > 
> >        int newfd = fdfork(oldfd);
> > 
> >    reading N bytes each from newfd and oldfd will return
> >    identical data.
> 
> i can't think of a way to do this correctly.  buffering in the
> kernel would only work if each process issued exactly the
> same set of reads.  there is no requirement that the data
> from 2 reads of 100 bytes each be the same as the data
> return with 1 200 byte read.

To be precise, both fds have their own pointer (or offset),
and reading N bytes from some offset O must return the same
bytes through either fd.  The semantics I'd choose: the first
read gets buffered, and later reads are satisfied first from
buffered data and only then from the underlying object.  Same
with writes; they are "write-through".  If synthetic files do
weird things at different offsets or for different read/write
counts, I'd consider them uncacheable (and you shouldn't use
fdfork with them).  For disk-based files and fifos there
should be no problem.

Note that Haskell's lazy streams are basically cacheable!

> before you bother with "but that's a wierd case", remember
> that the success of unix and plan 9 has been built on the
> fact that there aren't syscalls that fail in "wierd" cases.

I completely agree. But hey, I just came up with the idea and
haven't worked out all the design bugs (and may never)!  It
seemed worth sharing to elicit exactly the kind of feedback
you are giving.
