Re: [Bf-committers] S-DNA and Changes

2010-11-26 Thread Peter Schlaile
Hi Leo,

 Then you can have one strip with an fcurve to map from VSE output frames
 to scene frames:

 Strip 1: Scene, frames 1-20, with an fcurve that maps from VSE frames
 (iterated over using next()) to scene frames. It covers frames 1-219 in
 the VSE.

 If I may modify your code a little:

 class scene_iterator : public iterator {
 public:
     Frame next() {
         // fcurves go in map_vse_frame_to_cfra
         float cfra = map_vse_frame_to_cfra(nextFrame);
         setup_render(cfra);
         ++nextFrame;
         return render();
     }

     int nextFrame;
 };

and again, that doesn't work with fcurves and gets really nasty 
with stacked speed effects.

map_vse_frame_to_cfra() is *really* a non-trivial function!

 VSE only sees a sequence of discrete frames

uhm, why is that exactly the case? In fact, currently it renders 
internally with floats.

 - which is precisely what its domain model should look like,
 because video editing is about time-discrete sequences, not continuous.

again, why? The point behind making cfra continuous was that the *input* 
strip can make its best effort to do inter-frame interpolation or 
in-between rendering.

How that is done best depends heavily on the *input* strip.

So: no, I *strongly* disagree with your opinion that the VSE sees a 
sequence of discrete frames. In fact, it doesn't!

 The Blender scene is a continuous simulation - having a float cfra 
 makes sense, because time is continuous there. In the VSE the domain 
 objects are discrete in time. Having a float cfra makes no sense.

as stated above, I disagree.

 Which brings me to the point: what was the sense in dropping the random
 access interface again?

 The imbuf reader has also a random access interface, but internally keeps
 track of the last fetched frame and only reseeks on demand.

 It is always possible to wrap a sequential interface in a random-access
 interface, or vice versa. The purpose of dropping the random access
 interface was to be able to write code that didn't have to deal with two
 cases - you'd know that you'll always be expected to produce the next
 frame, and can code for that. Less to write, less to go wrong.

uhm, you will always need a fetch() and a seek() function, so where 
exactly does your idea make things simpler?

 Clients of the code know that they can iterate over *every* frame in the
 sequence just by calling next(). With a random access interface -
 especially one that uses floats for indexing - you'll always worry if
 you missed a frame, or got a duplicate, thanks to rounding errors.

uhm, as stated above, next() isn't really defined in your sense.

You can define a version that does CFRA + 1 if you like. Whether that is 
really helpful is another question.

In fact, the next cfra for a given track will be defined by the topmost 
speed effect fcurve and then calculated down the stack. That won't break 
your initial idea of changing the order in which frame calculation takes 
place; it only reflects the fact that next() isn't that easy to calculate 
if you do retiming in a creative way.

So: yes, next() won't be easy to calculate in advance for a given track, 
and yes: that is a fundamental problem if we allow stacked retiming with 
curves. Even if you do retiming with simple factors, you will run into the 
problem that if the user speeds up a track by, say, a factor of 100, you 
probably don't want to blend *all* input frames into the output frame, but 
limit the sampling to, say, 10 in-between frames. (That's the way Blender 
2.49 does it, and Blender 2.5 will do it soon using the new SeqRenderData 
parameters.)

Cheers,
Peter




Re: [Bf-committers] S-DNA and Changes

2010-11-26 Thread Leo Sutic
Hi Peter,

you raise some very good points, and I'll have to go back to the drawing
board for a moment.

For example: How do you play a movie backwards? Not easy with a
next()-style interface.

It's going to take a little while, though, because the next()-style
interface makes some other things, like image stabilization and object
tracking, much easier.

One question, though:

 map_vse_frame_to_cfra() is *really* a non-trivial function!

How? Isn't it just an fcurve.evaluate(frame)? Or is it the speed control
strips that make it non-trivial?

/LS



Re: [Bf-committers] S-DNA and Changes

2010-11-25 Thread Peter Schlaile
Hi Leo,

  1. Write a VSE 2 and create all-new structures?

this will break compatibility with older versions of Blender. It should 
only be done as a last resort, and only if you *really* know what you are doing.

  2. Create some kind of compatibility loader or data import filter
 that converts the old data to the new, as far as possible? That is, we
 read old and new formats, but only write new format.

that is *always* necessary; otherwise you can't open old files or make 
sure that, on load of an old file, your new structure elements are 
initialized properly. This is done in doversions() in readfile.c.
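
Schematically, such a load-time fixup looks like this (illustrative only; 
the declarations are simplified stand-ins, not Blender's real SDNA 
structs, and new_field and its default are hypothetical):

struct ID { ID *next; };
struct Scene { ID id; int new_field; };
struct ListBase { void *first, *last; };
struct Main { ListBase scene; int versionfile; };

static void do_versions(Main *main)
{
        /* Runs right after loading: patch data written by older
         * versions so fields added since then get sane defaults. */
        if (main->versionfile < 256) {
                for (Scene *sce = (Scene *)main->scene.first; sce;
                     sce = (Scene *)sce->id.next) {
                        if (sce->new_field == 0)
                                sce->new_field = 25;  /* hypothetical default */
                }
        }
}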

  3. Convert the old structs to the new in-memory? That is, we read and
 write the old format, maybe with some back-compatible changes, but use a
 new format internally?

nope. After each editing operation, DNA data has to be in sync, since DNA 
load/save is also used on undo operations(!).

I'd also suggest that you first try to make sure that you *have* to 
change something, and why. Since, you guessed it, you will most likely 
make some people unhappy who want to open their new .blend file with an 
old version and then see things broken all over the place.

So I'm wondering a bit: what do you want to change?

I tried to understand the blog post you linked some days ago.

To quote your blog: (disadvantages of the current system)

 1.1. Disadvantages

 The disadvantages come when we wish to perform any kind of processing 
 that requires access to more than just the current frame. The frames in the 
 sequencer aren't just random jumbles of image data, but sequences of 
 images that have a lot of inter-frame information: Object motion, for 
 example, is something that is impossible to compute given a single frame, 
 but possible with two or more frames.

 Another use case that is very difficult with frame-wise rendering is 
 adjusting the frame rate of a clip. When mixing video from different 
 sources one can't always assume that they were filmed at the same frame 
 rate. If we wish to re-sample a video clip along the time axis, we need 
 access to more than one frame in order to handle the output frames that 
 fall in-between input frames - which usually is most of them.

To be honest: you can't calculate the necessary optical flow data on the 
fly, and most likely people want to have some control over the 
generation process anyway. (Maybe they just want to use externally 
generated OFLOW files from icarus?)

To make a long story short: we should really just add a separate 
background rendering job that adds optical-flow tracks to video tracks, 
just like we did with proxies, only in the background with the new job 
system, and everything should be fine. (For scene tracks or 
OpenEXR sequences with a vector pass, there is even optical flow 
information already available for free(!))

In-between frames should be handled with float cfras (the code is already 
adapted in most places for that) and the new additional mblur parameters.

That has the additional advantage that you can actually *calculate* real 
in-between frames in scene tracks.

For other jobs, like image stabilisation, you should just add similar 
builder jobs. (Which most likely don't have to write out full frames, but 
just generate the necessary animation fcurves.)
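
The shape of such a builder job could be as small as this sketch (all 
names hypothetical; the point is only that it runs in a forked worker, 
strip frame by strip frame, like proxy building does):

class builder_job {
public:
        virtual ~builder_job() {}

        // Compute and cache auxiliary data for one strip frame,
        // e.g. optical flow fields or stabilization fcurve keys.
        virtual void build_frame(int frame) = 0;

        virtual int first_frame() const = 0;
        virtual int last_frame() const = 0;
};

// Driven from a background worker so the UI stays responsive;
// `stop` is flipped by the job system when the user cancels.
void run_builder(builder_job &job, const volatile bool &stop)
{
        for (int f = job.first_frame(); f <= job.last_frame() && !stop; ++f)
                job.build_frame(f);
}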

The implications of your track rendering idea are scary.
Either you end up with a non-realtime system (since you have to calculate 
optical flow information on the fly in some way, which is, to my 
knowledge, not possible with current hardware) or you have to render 
everything to disk - always.

I, as a user, want to have control over my diskspace (which is very 
valuable, since my timelines are 3 hours long, and rendering every 
intermediate result to disk is *impossible*!).

Or, to put it another way: please show me a case that *doesn't* work 
with a simple background builder job system, where you can add arbitrary 
intermediate data to video, meta or effect tracks. Having to access 
multiple frames at once during playback *and* doing heavy 
calculation on them doesn't sound realtime to me by definition, and that 
is what Ton told me the sequencer should be: realtime. For everything 
else, the Compositor should be used.

You could still use RenderMode (CHUNKED, SEQUENTIAL and FULL) to make 
that background render job run in the most efficient way. But it is still 
a background render job, separated from the rest of the pipeline.

As always, feel free to prove me wrong. If I got it right, your 
interface idea looks like a good starting point for a background builder 
system interface. 
So probably, if you convince everyone that this is the best thing to do for 
playback, too, we might end up promoting your builder interface to the 
preview renderer, who knows?

BTW:
I'm currently rewriting the sequencer render pipeline using a generic 
Imbuf-Render-Pipeline system, which will move some things around, 
especially all those little prefiltering structures will find 

Re: [Bf-committers] S-DNA and Changes

2010-11-25 Thread Leo Sutic
Hi Peter,

thank you for your feedback on the SDNA.

You bring up some very good points, let me try to address them quickly:

On 2010-11-25 19:45, Peter Schlaile wrote:
 Or, to put it another way: please show a case to me, that *doesn't*
 work, with a simple background builder job system,

I thought about that yesterday, actually, because it seemed such a
simple solution. But let's consider the background builder job itself.

In principle, there is nothing that can't be factored out as a
background builder job. I mean, I can do anything I want in an external
program and then just place the end result in Blender.

But then I have to build a UI for my stabilization program, and the
whole purpose of doing any work on Blender was to put my stabilization
algorithms there in order to have a nice UI. By nice I mean to be able
to combine video effects in the VSE much like I can combine nodes in the
compositor. I want to be able to apply stabilization, interpolation,
etc. by stacking effect strips. Combinatorial power, I think it is
called. This means that the background builder job ends up being more
or less a render in all but name. It needs access to the same data
as a render - output from previous effect strips.

That's why I need to involve the main rendering pipeline. I wish I
didn't have to.

 The implications of your track rendering idea are scary.
 Either you end up with a non-realtime system (since you have to
 calculate optical flow information on the fly in some way, which is,
 to my knowledge, not possible with current hardware) or you have to
 render everything to disk - always.

 I, as a user, want to have control over my diskspace (which is very
 valuable, since my timelines are 3 hours long, and rendering every
 intermediate result to disk is *impossible*!).

Correct, it is impossible.

You've asked this once before (way back), I replied in:
http://lists.blender.org/pipermail/bf-committers/2010-September/028825.html

and you thought:
that sounds indeed useful
http://lists.blender.org/pipermail/bf-committers/2010-September/028826.html

Short-short summary: the system need not do it the naive way and write
out every frame. We can also optimize away a lot of frames being held
concurrently in memory by using smart algorithms.

The current state of the prototype code is that *nothing* is being
written to disk, and it never renders more frames than absolutely
needed. Eventually I might need to include the option of writing out
some things to disk but I consider that a last resort, and only for
operations that the user *knows* will cost a lot of disk. This, however,
is not a consequence of the strip-wise rendering algorithm itself, but
of the processing we want to do.

/LS


Re: [Bf-committers] S-DNA and Changes

2010-11-25 Thread Peter Schlaile
Hi Leo,

 On 2010-11-25 19:45, Peter Schlaile wrote:
 Or, to put it another way: please show a case to me, that *doesn't*
 work, with a simple background builder job system,

 That's why I need to involve the main rendering pipeline. I wish I
 didn't have to.

hmm, that's maybe a misunderstanding. I was talking about using the job 
system of Blender 2.5 (which just forks another job within Blender - not 
an external system/program!). So: you still have complete access to the UI.

 You've asked this once before (way back), I replied in:
 http://lists.blender.org/pipermail/bf-committers/2010-September/028825.html

 and you thought:
 that sounds indeed usefull
 http://lists.blender.org/pipermail/bf-committers/2010-September/028826.html

again, maybe a misunderstanding here. In that old post, I was thinking 
more of rendering separate tracks in advance for prefetch rendering, 
not of making CPU-heavy stuff run in the previews.
(But I didn't make that point really clear, that's true.)

 Short-short summary: The system need not do it the naive way and write
 out every frame. We can also optimize away a lot of frames being held
 concurrently in memory by using smart algorithms.

 The current state of the prototype code is that *nothing* is being
 written to disk, and it never renders more frames than absolutely
 needed. Eventually I might need to include the option of writing out
 some things to disk but I consider that a last resort, and only for
 operations that the user *knows* will cost a lot of disk. This, however,
 is not a consequence of the strip-wise rendering algorithm itself, but
 of the processing we want to do.

ok, point taken. I'm still a bit unsure about the CPU-heavy stuff, but I 
have to admit that you could always add a proxy if things start to get 
slow.

Regarding your interface prototype: your iterators should take a float 
increment parameter. (There isn't really a well-defined next frame in 
float-precision scene rendering...)
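
Peter's suggestion would amount to something like this (a sketch in the 
thread's own pseudocode; iterator, Frame, setup_render() and render() 
are the same undeclared stand-ins used in the earlier examples):

class scene_iterator : public iterator {
public:
        // Instead of a parameterless next(), the caller says how far
        // to advance in strip time; fractional increments map to
        // subframe scene rendering.
        Frame next(float increment) {
                cfra += increment;
                setup_render(cfra);
                return render();
        }
private:
        float cfra;
};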

Otherwise: this could really work.

Cheers,
Peter




Re: [Bf-committers] S-DNA and Changes

2010-11-25 Thread Leo Sutic
On 2010-11-25 21:54, Peter Schlaile wrote:
 Regarding your interface prototype: your iterators should take a float 
 increment parameter. (There isn't really a well-defined next frame in 
 float-precision scene rendering...)

I decided against that due to the complications it resulted in - mostly
because it became very difficult to get all frames to align in time when
round-off errors may affect the float cfra parameter depending on how it
was calculated. (It was also difficult for movie clip sources, again due
to rounding errors, where you could end up on the wrong frame.) It was
easier to just pretend, in the VSE, that each sequence was continuous,
but sampled at a fixed rate. (So the frame rate should really be a
field rate.) That way, the only time we risk that the frames don't
line up is when we do framerate conversion - and everyone kinda expects
them to not line up then.
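
The rounding problem Leo describes is easy to reproduce. A minimal, 
self-contained demo (not VSE code): recover integer frame indices from 
an accumulated float time and watch them go wrong after only a few 
frames.

#include <cmath>
#include <cstdio>

int main()
{
        const float step = 1.0f / 30.0f;  // frame duration at 30 fps
        float t = 0.0f;                   // accumulated float cfra

        for (int frame = 0; frame < 1000; ++frame) {
                // frame index recovered from the accumulated time
                int recovered = (int)floorf(t / step);
                if (recovered != frame) {
                        printf("frame %d recovered as %d (t = %.9f)\n",
                               frame, recovered, t);
                        return 0;  // a frame got skipped or duplicated
                }
                t += step;
        }
        return 0;
}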

So what I have instead is that each sequence (strip) has an individual
rate. That way I could slow down or speed up a sequence by altering the
frame rate. An effect strip can then do motion interpolation or
timelapse to convert the base sequence to the desired output framerate.
So the renderer might have a float frame number, but the VSE only ever
sees integer frame numbers.

I'm just guessing regarding the float cfra parameter: Is it for motion blur?

Then, for example, with output frame rate at 24fps:

Strip 1: Movie clip, 60fps. Must be converted to output frame rate.
Strip 2: Converts strip 1 to the output frame rate, 24 fps, by combining
frames in strip 1.
Strip 3: Movie clip 24 fps. Can be used as-is.
Strip 4: Gamma cross 13
Strip 5: Blender scene, motion blur, 24fps with simulated shutter at
60fps. As far as the VSE is concerned, the output of this strip is 24fps.
Strip 6: Combine 5 and 4 using alpha in 5.
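
A sketch of how such a per-strip rate can be turned into strip-local 
frame numbers (names hypothetical; the point is that each index is 
computed directly from the output frame, never accumulated, so there is 
no drift):

struct strip_time {
        int strip_fps_num, strip_fps_den;    // strip rate, e.g. 60/1
        int output_fps_num, output_fps_den;  // project rate, e.g. 24/1

        // Which strip-local frame corresponds to output frame n?
        int local_frame(int n) const {
                long long num = (long long)n * strip_fps_num * output_fps_den;
                long long den = (long long)output_fps_num * strip_fps_den;
                return (int)(num / den);
        }
};

For the 60 fps clip above in a 24 fps project, local_frame() yields 
0, 2, 5, 7, 10, ... and an effect strip like strip 2 decides how to 
combine the skipped frames into each output frame.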

/LS


Re: [Bf-committers] S-DNA and Changes

2010-11-25 Thread Peter Schlaile
Hi Leo,

 Regarding your interface prototype: your iterators should take a float
 increment parameter. (There isn't really a well-defined next frame in
 float-precision scene rendering...)

I decided against that due to the complications it resulted in - mostly
because it became very difficult to get all frames to align in time [...]

uhm, ouch. OK, do it differently: tell the next()-iterator the absolute 
next cfra, not the increment, but please make it a float, because...

 I'm just guessing regarding the float cfra parameter: Is it for motion blur?

... it's not about motion blur, it's about retiming.

If CFRA is float, you can retime a scene strip afterwards and have it 
render subframe-precision frames (read: extreme slowdowns), which are 
done the real way, not with fake interpolation.

Cheers,
Peter




Re: [Bf-committers] S-DNA and Changes

2010-11-25 Thread Leo Sutic
On 2010-11-25 23:40, Peter Schlaile wrote:
 Hi Leo,
 
 
 uhm, ouch. OK, do it differently: tell the next()-iterator the absolute 
 next cfra, not the increment, but please make it a float, because...

 I'm just guessing regarding the float cfra parameter: Is it for motion blur?

 ... it's not about motion blur, it's about retiming.

 If CFRA is float, you can retime a scene strip afterwards and have it 
 render subframe-precision frames (read: extreme slowdowns), which are 
 done the real way, not with fake interpolation.

Ah, ok.

I'd still try to stick with integer frames and a parameterless next().
Allowing the client to specify the advance in the next() method makes it
too much of a random-access method - there would be no guarantee that the
advance is to the next frame, and that guarantee is the whole point of
the iterator interface.

I'd do it this way:

Suppose we have a scene where we want normal speed for frames 1-10, an
extreme slowdown for 11-12 and normal speed from 13-20.

Strip 1: Scene, frames 1-10. This strip covers frames 1-10 in the VSE.
Strip 2: Scene, frames 11-12, with a framerate of 1/100th of the output
framerate. This strip covers frames 11-211 in the VSE.
Strip 3: Scene, frames 13-20. This strip covers frames 212-219 in the VSE.

When the sequencer renders this, the next() for strip 1 and 3 will
advance one scene-frame. The next() for strip 2 will advance 0.01 frames.

That way, the VSE code only ever sees integer frames (although it does
see different frame rates), and we can guarantee machine precision in
the slowdown frame number calculation. The renderer, however, sees the
fractional frames - but this is hidden behind the scene strip's code.

(This also enables us to take a scene that was designed for 24p and
render it at 60p with everything lining up properly.)
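
In the thread's pseudocode, Leo's scheme might look like this (a sketch; 
iterator, Frame, setup_render() and render() are the same undeclared 
stand-ins as before, and the advance values come from the strip setup):

class scene_iterator : public iterator {
public:
        scene_iterator(float start_cfra, float advance_per_frame)
                : cfra(start_cfra), advance(advance_per_frame) {}

        // The VSE calls a parameterless next(); only the strip knows
        // that one VSE frame advances the scene by e.g. 0.01 frames.
        Frame next() {
                setup_render(cfra);
                cfra += advance;  // 1.0 for strips 1 and 3, 0.01 for strip 2
                return render();
        }
private:
        float cfra;
        float advance;
};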

Would this take care of the issue?

I realize that we suddenly have a VSE cfra and a Scene cfra, but if
you're going to have retiming through the VSE, you've already got that.

/LS


Re: [Bf-committers] S-DNA and Changes

2010-11-25 Thread Peter Schlaile
Hi Leo,

 Ah, ok.

 I'd still try to stick with integer frames and a parameterless next().
 Allowing the client to specify the advance in the next() method makes it
 too much of a random-access method (there is no guarantee that the
 advance is to the next frame, which is the whole purpose of the
 iterator interface).

 I'd do it this way:

 Suppose we have a scene where we want normal speed for frames 1-10, an
 extreme slowdown for 11-12 and normal speed from 13-20.

 Strip 1: Scene, frames 1-10. This strip covers frames 1-10 in the VSE.
 Strip 2: Scene, frames 11-12, with a framerate of 1/100th of the output
 framerate. This strip covers frames 11-211 in the VSE.
 Strip 3: Scene, frames 13-20. This strip covers frames 212-219 in the VSE.

sorry, that won't work. One can retime using fcurves. Which brings me to 
the point: what was the sense in dropping the random access interface 
again?

The imbuf reader also has a random access interface, but internally keeps 
track of the last fetched frame and only reseeks on demand.

So you get fast output if you fetch the next frame, and a slow reseek if 
you don't. That works nicely in all relevant cases, since your code 
will be optimized to work on consecutive frames as much as it can.

Cheers,
Peter

Just do it internally like this on a movie strip, which has a fixed 
framerate:

class movie_iterator : public iterator {
public:
        Frame fetch(float cfra) {
                if (cfra == last_cfra + 1) {
                        // consecutive access: just decode the next frame
                        last_cfra = cfra;
                        return next();
                }

                // random access: reseek, then decode forward up to the
                // requested frame (preseek covers the codec warm-up)
                seek(cfra - preseek);

                for (int i = 0; i < preseek; i++) {
                        next();
                }

                last_cfra = cfra;
                return next();
        }
private:
        Frame next() {
                // return the next frame, using and updating
                // last_frame_context
        }
        void seek(float cfra) {
                ...
        }

        float last_cfra;
        int preseek;
        context last_frame_context;
};

whereas a scene strip does:

class scene_iterator : public iterator {
public:
        Frame fetch(float cfra) {
                setup_render(cfra);
                return render();
        }
};





Re: [Bf-committers] S-DNA and Changes

2010-11-25 Thread Leo Sutic
On 2010-11-26 00:28, Peter Schlaile wrote:
 Hi Leo,
 
 sorry, that won't work. One can retime using fcurves.

Then you can have one strip with an fcurve to map from VSE output frames
to scene frames:

Strip 1: Scene, frames 1-20, with an fcurve that maps from VSE frames
(iterated over using next()) to scene frames. It covers frames 1-219 in
the VSE.

If I may modify your code a little:

class scene_iterator : public iterator {
public:
        Frame next() {
                // fcurves go in map_vse_frame_to_cfra
                float cfra = map_vse_frame_to_cfra(nextFrame);
                setup_render(cfra);
                ++nextFrame;
                return render();
        }

        int nextFrame;
};

The VSE only sees a sequence of discrete frames - which is precisely what
its domain model should look like, because video editing is about
time-discrete sequences, not continuous ones. The Blender scene is a
continuous simulation - having a float cfra makes sense, because time is
continuous there. In the VSE, the domain objects are discrete in time.
Having a float cfra makes no sense.

 Which brings me to the point: what was the sense in dropping the random 
 access interface again?
 
 The imbuf reader has also a random access interface, but internally keeps 
 track of the last fetched frame and only reseeks on demand.

It is always possible to wrap a sequential interface in a random-access
interface, or vice versa. The purpose of dropping the random access
interface was to be able to write code that didn't have to deal with two
cases - you'd know that you'll always be expected to produce the next
frame, and can code for that. Less to write, less to go wrong.
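
For instance, a random-access view over a purely sequential source is 
only a few lines (a sketch in the thread's pseudocode; reset() is an 
assumed way to restart the underlying source):

class random_access_adapter {
public:
        explicit random_access_adapter(iterator &it) : source(it), pos(0) {}

        // Emulate fetch(n) on top of next(): restart and skip forward
        // if the caller goes backwards, otherwise just skip ahead.
        Frame fetch(int frame) {
                if (frame < pos) {
                        source.reset();
                        pos = 0;
                }
                Frame f;
                while (pos <= frame) {
                        f = source.next();
                        ++pos;
                }
                return f;
        }
private:
        iterator &source;
        int pos;
};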

Clients of the code know that they can iterate over *every* frame in the
sequence just by calling next(). With a random access interface -
especially one that uses floats for indexing - you'll always worry if
you missed a frame, or got a duplicate, thanks to rounding errors.

/LS
