Am 19.01.2012 14:44, schrieb feli:
> Would it be possible to convert the effect to one color space or the other 
> before applying it? Or what are the advantage of using one color space or an 
> other in the editing process? Maybe the color space could be limited to only 
> RGB(A)?

Hi all,

when discussing colour models, we should consider that any conversion
between them is (slightly) lossy: every exact value has to be interpolated
or rounded when it is mapped into the other colour space. Obviously, when
we use floating-point numbers to represent values, these losses become
small and close to negligible.
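To make that concrete, here is a toy round trip using the widely published (approximate) analog BT.601 coefficients -- real pipelines differ in ranges and offsets, so treat this purely as an illustration of quantisation loss, not as production conversion code:

```python
# Toy illustration: RGB -> YUV -> RGB round trip (approximate BT.601
# analog coefficients). With 8-bit integer storage each conversion
# rounds to whole code values, so data drifts; with floats the round
# trip stays near-exact.

def rgb_to_yuv(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.14713 * r - 0.28886 * g + 0.436 * b
    v = 0.615 * r - 0.51499 * g - 0.10001 * b
    return y, u, v

def yuv_to_rgb(y, u, v):
    r = y + 1.13983 * v
    g = y - 0.39465 * u - 0.58060 * v
    b = y + 2.03211 * u
    return r, g, b

rgb = (200, 30, 77)

# float path: keep full precision between the two conversions
float_err = max(abs(a - b)
                for a, b in zip(rgb, yuv_to_rgb(*rgb_to_yuv(*rgb))))

# integer path: round to whole code values after each conversion,
# as an 8-bit frame buffer would force us to
yuv8 = tuple(round(c) for c in rgb_to_yuv(*rgb))
rgb8 = tuple(round(c) for c in yuv_to_rgb(*yuv8))
int_err = max(abs(a - b) for a, b in zip(rgb, rgb8))

print(float_err, int_err)  # the integer path can lose whole code values
```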

So, theoretically speaking, the best option would always be to perform
all effects in the same colour space, namely the colour space of the
original footage. BUT -- since the two relevant colour models
(RGB and YUV) work somewhat differently, this basically requires coding
each effect twice. And on top of that, it requires a thorough understanding
of the colour models, because the programmer then effectively performs the
conversion "in pure math", i.e. in his head, when adapting the algorithm
to a different colour space. Sometimes this isn't even feasible, or it is
very demanding (requiring solid mathematical knowledge). Thus, in practice,
many plug-ins just have their "natural" home colour model, while the
other operation modes are either sort-of broken or implemented in a
suboptimal way.
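As a minimal sketch of why each effect essentially has to be coded twice (hypothetical code, not taken from any actual plug-in): even something as trivial as a brightness gain looks different per colour model.

```python
# Minimal sketch of one effect, coded once per colour model
# (hypothetical code, not from any actual plug-in).

def brightness_rgb(pixel, gain):
    # in RGB, brightness touches every channel
    r, g, b = pixel
    return (r * gain, g * gain, b * gain)

def brightness_yuv(pixel, gain):
    # in YUV, only the luma plane is scaled; scaling the
    # chroma channels (u, v) too would shift the colours
    y, u, v = pixel
    return (y * gain, u, v)

print(brightness_rgb((100, 50, 25), 2))
print(brightness_yuv((100, -20, 30), 2))
```

For a non-trivial effect (say, a chroma keyer or a colour grade), the two variants diverge far more than this, and that is where the adaptation "in pure math" becomes demanding.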

So, bottom line: it's not desirable to have just one colour model.
- YUV is the natural model of most camcorder-produced video footage
- RGB is the easiest model for programmers to understand
- additionally, float is the most precise and desirable model,
  but lacks support in (consumer) hardware


> How will Lumiera cope with color space selection?

First, let me answer this question for *Cinelerra*:

Regarding colour space selection, Cinelerra configures the frame buffer for
your resulting video to hold data in the chosen colour model. Then, for each
plug-in, it picks the variant of the algorithm coded for that model. If
that variant of the algorithm happens to be buggy, you've lost. We have known
for several years that some models are coded erroneously in some plug-ins,
especially when combined with an alpha channel. Unfortunately, fixing that
requires really intense and systematic work; often it isn't even clear
how the "correct" implementation should behave, so it would additionally
require some research and study of the theory.
We, as a community, simply didn't manage to get that done.
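The scheme just described can be sketched roughly like this (hypothetical names, not Cinelerra's actual code): one registered algorithm variant per colour model, with the engine blindly trusting whichever variant matches the frame buffer.

```python
# Hypothetical sketch (not Cinelerra's actual code) of per-model
# dispatch: each plug-in registers one algorithm variant per colour
# model, and the engine picks the one matching the frame buffer.
# If the registered variant is buggy, there is no fallback.

def invert_rgba(px):
    r, g, b, a = px
    return (255 - r, 255 - g, 255 - b, a)   # alpha passes through

def invert_yuva_buggy(px):
    # a bug of exactly the kind mentioned above: the alpha channel
    # gets inverted along with the image data
    y, u, v, a = px
    return (255 - y, -u, -v, 255 - a)

VARIANTS = {"RGBA": invert_rgba, "YUVA": invert_yuva_buggy}

def apply_effect(model, px):
    return VARIANTS[model](px)   # the engine blindly trusts the variant

print(apply_effect("RGBA", (10, 20, 30, 255)))
print(apply_effect("YUVA", (10, 5, -5, 255)))  # opaque pixel turns transparent
```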

This was one of the core problems which led the Lumiera developers to use a
more elaborate approach right from the start -- which unfortunately has the
downside of making the internals of *Lumiera* somewhat intricate and difficult
to understand: in Lumiera, we completely separate the "Session" (the clips,
tracks, effects and further objects you as a user interact with while editing)
from the "render graph" (what the engine actually processes). We put a
transformation step in between, which translates the objects in the session
into a low-level pipeline.
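A crude sketch of this two-layer split (hypothetical names, not Lumiera's real design): the session holds the objects the user edits, and a builder step translates them into the flat node chain the engine renders.

```python
# Crude sketch of the session / render-graph separation
# (hypothetical names, not Lumiera's real classes).

session = [                       # what the user manipulates
    {"kind": "clip", "name": "interview.mov"},
    {"kind": "effect", "name": "colour-grade"},
]

def build_render_graph(session):
    """Translation step: session objects -> low-level pipeline."""
    graph = []
    for obj in session:
        if obj["kind"] == "clip":
            graph.append(f"source-node({obj['name']})")
        elif obj["kind"] == "effect":
            graph.append(f"processing-node({obj['name']})")
    return graph

print(build_render_graph(session))
# the engine only ever sees this node list, never the session objects
```

The point of the indirection is that decisions like colour model or framerate need not be baked into the session objects at all; they can be made during the translation step.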

Clearly, in Lumiera our goal is *not* to have any fixed colour model.
Similarly, we do *not* have a fixed framerate for the whole session.

Rather, these properties are controlled by the *output configuration* in use,
which in Lumiera becomes part of the timeline; yet you can use your
edited sequences within multiple timelines. Thus, when an edited sequence
is used within a timeline, we get an output connection with a colour model
and a framerate. OTOH, the source material also has a framerate and colour
model. We then try to keep as much of the pipeline as possible running with
the same model, and at some point we'll insert a conversion node.
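In pseudo-code terms (again hypothetical, since the real rules aren't designed yet), the idea is that the same edited sequence gets a conversion step only where the output configuration demands it:

```python
# Hypothetical sketch of output-driven pipeline planning: the same
# sequence, used in two timelines with different output models, gets a
# conversion node only where source and output models differ.

def plan_pipeline(source_model, output_model):
    steps = [f"decode[{source_model}]", "effects"]
    if source_model != output_model:
        steps.append(f"convert[{source_model}->{output_model}]")
    steps.append(f"output[{output_model}]")
    return steps

sequence_model = "YUV"            # model of the source footage
for timeline_output in ("YUV", "RGB-float"):
    print(plan_pipeline(sequence_model, timeline_output))
```

Where exactly that conversion node should sit in a longer chain of effects is precisely the open question mentioned below.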

That is the plan. But, honestly, at the moment we're targeting the goal
of building such pipelines (partially done) and running them in a multi-core
aware engine (also partially done). We haven't yet gotten to the point of
worrying about plug-in metadata, or about the rules that determine where
to insert that conversion.

Clearly, our approach contains some "complexity bombs":
- allowing multiple timelines/outputs at the same time
- allowing unlimited nesting (a sequence can be used as a virtual clip
  in another sequence)
- supporting various kinds of relative "placement" for the clips
- having no limitations on the number of channels or the kind and mix of media
- not "taking sides" for one fixed media handling framework (ffmpeg, gstreamer,
  MLT, or writing our own, like Cinelerra) -- we want just plug-ins and metadata

But frankly, I don't know of any other approach to tackle the problem of
professional editing in the current media landscape without cheating, or
without creating the nasty impediments and technologically unnecessary
limitations found in many of the existing editing solutions.

Cheers,
Hermann Vosseler
(aka "Ichthyo")

_______________________________________________
Cinelerra mailing list
[email protected]
https://init.linpro.no/mailman/skolelinux.no/listinfo/cinelerra
