Dear All,

Right now we are working toward a 2.5 release, and I am well aware
that we have to get things out the door. "Real artists ship"
and all that. I just want to share some thoughts on the future of the
VSE, post-2.5. I apologize if these thoughts have been brought up
earlier and been shot down.

I've used Blender for video editing for about a year, and I find it to
be clearly superior to any other solution. In parallel with using
Blender I have developed my own little suite of video editing tools
(timelapse, anti-shake, motion interpolation, etc.), and lately I have
started to think about putting these tools into Blender. I found them
useful, so I'm assuming others will, too.

Looking at the VSE code, it appears solid, but it is not very modular,
nor is it suited to effects that need access to more than the current
frame. Since my tools fall into that category (the anti-shake, for
example, needs to compute the optical flow for each pair of frames), it
is currently near-impossible to port them over in a way that would give
a good user experience or remain modular enough to be maintainable.


Per-Strip Rendering
-------------------

Looking at the code, the underlying assumption seems to be that:
    a movie
    is a sequence of frames
    that can be rendered independently.

In other words, a movie is a sequence of stills. At first sight, this
seems correct – after all, a movie is a sequence of still images. But it
ignores the fact that these frames are highly related to each other. It
isn't just a random list of pictures. I think the VSE's architecture
must reflect exactly that: we are not operating on independent images,
we are operating on sequences.

Right now, the VSE works like this: for each frame, it renders all
strips at that frame, taking inter-strip dependencies into
consideration, and then composites the results into the output frame.
What if we turned that around? Render all frames of all strips to a
temporary space first, and only then composite them. That is:

    for each frame:
        for each strip:
            render
        composite

gets turned into:

    for each strip:
        for each frame:
            render
    composite

This way, we could do frame rate conversion naturally. We could do
speedup/slowdown, interpolation, anti-shake, and other multi-frame
effects just as easily.
Effects that only require access to the current frame would still work
as a kernel inside a strip.
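
To make this concrete, here is a minimal sketch in Python of what a
per-strip pipeline could look like. This is not the real VSE code; the
Strip, MovieStrip, AntiShakeStrip and composite names are made up for
illustration, and the anti-shake body is only a placeholder for the
actual optical-flow work:

    # A minimal sketch, not the real VSE code: every strip renders a
    # whole frame range at once, so multi-frame effects can see their
    # neighbouring frames.

    class Strip:
        def render_range(self, start, end):
            """Return a list of frames covering [start, end)."""
            raise NotImplementedError

    class MovieStrip(Strip):
        def __init__(self, frames):
            self.frames = frames  # stand-in for frames decoded from a file

        def render_range(self, start, end):
            return self.frames[start:end]

    class AntiShakeStrip(Strip):
        """Wraps another strip and needs pairs of frames, which the
        current per-frame model cannot provide cleanly."""
        def __init__(self, source):
            self.source = source

        def render_range(self, start, end):
            # Fetch one frame of context before the range (when available)
            # so every output frame has a predecessor to compare against.
            ctx_start = max(start - 1, 0)
            frames = self.source.render_range(ctx_start, end)
            if ctx_start == start:
                frames = frames[:1] + frames  # no earlier frame: reuse first
            out = []
            for prev, cur in zip(frames, frames[1:]):
                # A real implementation would compute optical flow between
                # prev and cur here and warp cur to cancel camera shake;
                # this sketch just passes cur through unchanged.
                out.append(cur)
            return out

    def composite(rendered_strips):
        # Trivial compositor: later strips simply overwrite earlier ones.
        length = max(len(frames) for frames in rendered_strips)
        result = [None] * length
        for frames in rendered_strips:
            for i, frame in enumerate(frames):
                result[i] = frame
        return result

    # "for each strip: for each frame: render", then composite once.
    strips = [AntiShakeStrip(MovieStrip(list(range(10))))]
    rendered = [strip.render_range(0, 10) for strip in strips]
    final = composite(rendered)

The point is simply that a strip which needs neighbouring frames can ask
its source for them, instead of being handed one frame at a time.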


Render Farm Issues
------------------

An entire strip is too big a unit of work. Consider, for example, a
Scene strip: if we have a render farm, we want to split the rendering of
that strip across the nodes in the farm. Any Strip abstraction would
therefore have to include the ability to render only a part of itself.
If each strip could signal its minimum recommended chunk size, the VSE
could ensure that the strip is split accordingly.

A rendered scene would have a chunk size of 1 – that is, each frame can
be rendered independently. A frame rate conversion could have a variable
chunk size depending on the rate difference.
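
As a rough sketch of how such a hint could be consumed (again Python,
and split_into_chunks is a hypothetical helper, not existing code), the
scheduler would simply cut the strip's frame range into pieces of the
advertised size and hand each piece to a farm node:

    # A minimal sketch of chunked splitting, assuming a hypothetical
    # per-strip min_chunk_size hint.

    def split_into_chunks(start, end, min_chunk_size):
        """Yield (chunk_start, chunk_end) ranges covering [start, end),
        each min_chunk_size frames long except possibly the last."""
        pos = start
        while pos < end:
            yield pos, min(pos + min_chunk_size, end)
            pos += min_chunk_size

    # A Scene strip could advertise a chunk size of 1 (every frame is
    # independent), while a 24->30 fps conversion could advertise 5,
    # since each group of 5 output frames maps back onto 4 source frames.
    scene_jobs = list(split_into_chunks(0, 250, 1))  # 250 one-frame jobs
    conv_jobs = list(split_into_chunks(0, 250, 5))   # 50 five-frame jobs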


Next Steps
----------

For now, I just want to know if I'm completely wrong, retreading old
ground, or if there is enough merit in this to warrant further
experimentation, once 2.5 is out. I'll do all development on my own, so
unless I'm successful, you probably won't hear a thing.

/LS