On Wed, 3 Jul 2002, Christoph Egger wrote:
> The first thing is related to how to handle frames with and without timestamps.
> There are three scenarios to handle.
> 
> 1) Both input and output target support frames
> 1.a) The input target has as many frames as the output one.
> 1.b) The input target has more frames than the output one.
> 1.c) The input target has less frames than the output one.

Well, IMO the first step is for LibGPF to try to slow down or speed
up the framerate in either the source or the sink, if either of them
supports advisory rate setting.  But of course this won't fix the problem
in many situations where the targets cannot tune their rate.

In 1.b you have a range of choices, from just throwing away unneeded
frames, to merging consecutive frames.  The former is the most efficient;
the latter preserves the most data (though it only gains anything when
merging the frames actually provides more resolution somehow).  I would
say the default behavior should be the former.  But how to structure the
API for choosing options like this is something that will require some
thought.  In the case where the output is a set of timeless still-frame
files it is easy: if the user has selected a pattern to create frame
files, move to a new file each time the current one fills.  Otherwise,
the output target should signal for the close of the pipeline when the
file fills, and throw away any further received data.  I suppose one
other option is to just keep filling the same frame, which might be
useful, if wasteful, for interactive frame grabbing.
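
To make the frame-dropping default concrete, here is a minimal sketch.
The frame structure, field names, and function are assumptions purely
for illustration, not existing LibGPF API:

#include <stdint.h>

/* Hypothetical frame carrying a presentation timestamp (assumption). */
struct frame {
    uint64_t timestamp_us;      /* presentation time in microseconds */
    void    *pixels;
};

/*
 * Decide whether an incoming frame should be forwarded to the sink.
 * The sink accepts one frame per out_period_us; any source frame whose
 * timestamp falls before the next output slot is simply thrown away.
 */
static int
should_forward(const struct frame *f, uint64_t *next_slot_us,
               uint64_t out_period_us)
{
    if (f->timestamp_us < *next_slot_us)
        return 0;               /* drop: the output slot is not due yet */
    *next_slot_us += out_period_us;
    return 1;                   /* forward this frame to the output */
}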

1.c will mostly occur in realtime due to monotonic framerate mismatch,
and rarely due to mismatches in timeless or timestamped graphics formats.
Here you can just double up frames, or interpolate.  Interpolation is a
long way off IMO, and not something most people would want done by
default.  If the output target does not have the ability to hold the
current frame indefinitely, then the pipeline is going to have to rewind
and send the frame again.
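
For the frame-doubling side, a similarly minimal sketch (reusing the
hypothetical struct frame from above; again, none of these names are
real API) could be as simple as repeating the last frame whenever the
source has not produced a new one for the current output slot:

/* Hypothetical: pick the frame to emit for the current output slot.
 * If the source delivered nothing new, repeat the previously sent
 * frame; a sink that can hold its current frame indefinitely would
 * not need this step at all. */
static const struct frame *
frame_for_slot(const struct frame *newest, const struct frame *last_sent)
{
    return newest != NULL ? newest : last_sent;
}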

Anyway, I know this does little to actually move towards a decision
on a full API, but I think having the default behavior defined will
help coding move forward a bit further at least, so those are my
suggestions for the default behavior.

> The protocol provides a flush() mechanism intended to flush the cached data.
> Brian has come up with an idea that gives the target more control over its
> behaviour.  His idea is to add an additional parameter that lets the protocol
> throw away all cached data up to a given offset.  The disadvantage is that
> this parameter is unused when the protocol acts as output.  Thus, I suggested
> passing a number of bytes to flush/throw away, where 0 simply means
> everything.  This way, the additional param has a use in both directions:
> when the protocol lib acts as input, the param tells it how much data to
> throw away from the cache; when it acts as output, it tells it how much
> data to flush.

I think this is a good way to do it.  Originally I was leaning towards
using an absolute off_t.  However, now that I think of it, an ssize_t makes
more sense.  Consider the following scenarios:

1) io is an output, and flush is called with a negative ssize_t.
   flush -10 would flush up to but not necessarily including the last
   ten bytes written.

2) io is an output, and flush is called with a positive ssize_t.
   flush 10 would cause the output to autoflush every time data is written
   up to but not necessarily beyond the next 10 bytes.  Autoflushing
   would stop after that amount of data was written.

3) io is an input, and flush is called with a negative ssize_t.
   flush -10 would cause all data except for the ten newest unread bytes
   to be disposed of (so the app now knows it cannot seek back farther 
   than that).

4) io is an input, and flush is called with a positive ssize_t.
   flush 10 will tell the input lib to throw away all data previous
   to the current pos, and the next ten data bytes not yet read are 
   useless and do not need to be cached.  The next call the target 
   makes should be to seek past these bytes; it should not do any reading.
   This allows more efficiency than calling seek before flush would allow
   when skipping large unused blocks of data.

And of course when called with 0 everything before the current pos will 
be flushed or discarded.
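
Summarizing the above as a prototype plus usage notes -- the gpf_flush
name and the gpf_io handle are hypothetical, only the sign/zero
semantics of the argument follow the proposal:

#include <sys/types.h>

struct gpf_io;                          /* opaque target handle (assumed) */

int gpf_flush(struct gpf_io *io, ssize_t n);

/* As an output:
 *   gpf_flush(out, -10);   flush up to, but not necessarily including,
 *                          the last 10 bytes written
 *   gpf_flush(out,  10);   autoflush each write for the next 10 bytes,
 *                          then stop autoflushing
 *   gpf_flush(out,   0);   flush everything written so far
 *
 * As an input:
 *   gpf_flush(in,  -10);   discard all cached data except the 10 newest
 *                          unread bytes (no seeking back beyond that)
 *   gpf_flush(in,   10);   discard everything before the current pos,
 *                          and do not cache the next 10 unread bytes;
 *                          the target should seek past them, not read them
 *   gpf_flush(in,    0);   discard everything before the current pos
 */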

Now, to point something out: currently we have copied the lseek
prototype, which uses an absolute file position and a whence.  
There are arguments both for and against using a relative ssize_t here
as well and dispensing with absolute values.  I lean towards keeping 
the absolute off_t because it makes it easier for the target to seek 
around without having to keep track of where it has been.  I could be 
convinced otherwise, though (it isn't that hard to work with relative
coordinates).  But do note that on a continuous stream left running
for a long period of time, the off_t will eventually roll over to 0.  We 
should take care to explicitly define the behavior of the target wrt 
flushing and seeking with SEEK_SET after/during the fpos rollover.
Just because C leaves all sorts of behaviors undefined :-( doesn't mean 
we should :-).
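
For reference, the two shapes being weighed might look like this; both
prototypes are hypothetical and only illustrate absolute-plus-whence
versus purely relative seeking:

#include <sys/types.h>
#include <unistd.h>                     /* SEEK_SET, SEEK_CUR */

struct gpf_io;

/* Current shape, modelled on lseek(): absolute position plus whence.
 * Easy for the target to use, but SEEK_SET needs a defined meaning once
 * the off_t position of a long-running stream rolls over. */
off_t gpf_seek_abs(struct gpf_io *io, off_t pos, int whence);

/* Alternative shape: a relative displacement from the current position.
 * Sidesteps the rollover question for SEEK_SET, but makes the target
 * keep track of where it has been. */
off_t gpf_seek_rel(struct gpf_io *io, ssize_t delta);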

--
Brian
