On 10:49 Thu 24 Jul, Ichthyostega wrote:
> 
> Hi Clay,
> 
> there seems to be some uncertainty and misunderstanding; probably we are
> using some common terms with slightly different meanings. On the whole,
> I get the impression we both want to express similar concepts. We are
> both searching for ways of keeping the handling of the application clear
> and understandable.
> 
> 
> > On 17:22 Sun 20 Jul, Ichthyostega wrote:
> >> Note further that I have assumed you can put clips on leaf tracks
> >> only. If you group together some tracks, this creates a group head
> >> track (parent track).
> ...
> 
> Clay Barnes schrieb:
> > Perhaps I misunderstood your previous paragraph, but I think you're 
> > proposing a 'parent track'/'grouping track' separate from the 
> > multi-track container.  I am resistant to add the visual and 
> > operational complexity distinguishing them in the interface would 
> > entail.  I think that having only four components in the timeline 
> > would still be fully expressive of any desired edits:
> ...
> 
> No, I don't think I am proposing a separate new grouping feature here
> (besides the track-tree, which I thought was something obvious).
> Now, if I recall correctly, we never actually discussed and considered
> whether we want a track-tree, because it seemed obvious. I vaguely
> recall one of the first discussions with Cehteh somewhere in spring
> 2007. One of us said something like "and btw., as we are partially
> redoing things, of course we fix the tracks not being a tree".
> 
> The fact that tracks are /not/ a tree (but rather a flat list) in many
> video editors seems, today, like a braindead design mistake.
> 
> Actually, if you consider history, it isn't so silly. Rather, I'd call
> it "legacy". Remember, 20 years ago, many prominent people held up the
> claim that you can't do "real film" editing with computers, just
> "fastfood crap". You had to convince people to use computer-based
> editing, and of course everyone strove to make it feel as much as
> possible like "real film". Thus we got "media bins", an "A roll" / "B
> roll" video track with transitions in between, and sound tracks
> nicely arranged separately like on good old multitrack magnetic tape
> machines.
> 
> Meanwhile, maybe we still stick to it just out of habit, but there
> is not much technological reason for things being organized this way.
> 
> Today, none of the structures we see in the GUI of a video editor is
> actually involved in the technical process of rendering media data.
> 
> - clips no longer "contain" the media; they only refer to it
> - tracks no longer "play back" the media; they are simply a
>   place you can put things
> - the framerate is no longer built into the machine; rather, it's just
>   a property you choose for rendering
> and so on....
> 
> Thus, the following conclusions seem obvious to me:
> - Tracks can be nested like folders or other containers. They form a
>   tree, can be collapsed and inherit relevant settings down to subtracks
>   (locking, enabled, output port, fader setting etc.)
> - a clip has start, length and source position and refers to some
>   source media
> - media always inherently contains multiple channels, which are tied
>   together and used in sync.
> - most of the "wiring" can be done automatically, simply by considering
>   the stream type and mixing it into the appropriate output port. Only
>   in special and rather rare cases does the user need to define the
>   output/wiring explicitly.
> - there is no difference between a "normal" clip and a "meta clip". They
>   just refer to a different kind of media (an external media file versus
>   the output port of another timeline)
> - there is no technical reason why the program should prevent the user
>   from mixing clips with various different media types, frame rates,
>   channel counts etc. on the same track, because the track doesn't
>   "play back" the media.
> - there is no reason why you should be restricted to "apply" an effect
>   only to a complete clip. Effects can span multiple clips and even be
>   faded and crossfaded separately from the media.

I agree on all these points, and I think the UI we have thus far
discussed definitely meets every one naturally.
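For concreteness, the data model those conclusions imply can be sketched in a few lines. This is only an illustrative sketch, not anybody's actual API: all class and field names (`Track`, `Clip`, `Stream`, `effective`, etc.) are made up here.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Union

@dataclass
class Stream:
    kind: str                      # e.g. "video", "audio"

@dataclass
class MediaFile:
    path: str
    streams: List[Stream]          # media inherently bundles N synced streams

@dataclass
class Timeline:
    name: str
    tracks: List["Track"] = field(default_factory=list)

# A clip's source is either an external file or another timeline's output;
# there is no separate "meta clip" type.
Source = Union[MediaFile, Timeline]

@dataclass
class Clip:
    start: float                   # position on the timeline
    length: float
    source_pos: float              # offset into the source media
    source: Source

@dataclass
class Track:
    clips: List[Clip] = field(default_factory=list)
    children: List["Track"] = field(default_factory=list)  # tracks form a tree
    settings: dict = field(default_factory=dict)           # locked, enabled, ...
    parent: Optional["Track"] = None

    def effective(self, key, default=None):
        """Look up a setting, inheriting it down from parent tracks."""
        if key in self.settings:
            return self.settings[key]
        if self.parent is not None:
            return self.parent.effective(key, default)
        return default
```

Locking or muting a group head track then automatically applies to every subtrack, simply because lookups fall through to the parent.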

> 
> Many of today's existing video editing applications get some or all of
> these points wrong, probably because they stick to some legacy state
> of affairs. Indeed, any of the points listed above, if overlooked,
> can cause a lot of unneeded code complexity and force quite some amount
> of unnecessary and stupid work onto the user.
> 
> In our case, having to edit a movie imposes quite a bit of essential
> complexity. You have to care for lots of details and consider some
> subtle points, otherwise the work isn't professional.
> 
> 
> Clay Barnes schrieb:
> > The limitation of inputs for the (compound)filters to entire media 
> > containers and media files on the same level provides a substantial 
> > reduction in visual/code complexity, resolves some nasty potential 
> > graph cycle bugs, and heads off some hard questions about edits that 
> > involve the contents of both unlocked and locked media containers.  I
> >  think the very small reduction in flexibility is a blessing---the 
> > whole point of the media container concept is to 'encapsulate' their 
> > contents, so interaction with them that deals with the sub-objects or
> >  crosses their boundaries is actually confusing/counterintuitive to 
> > users.
> I fully agree with the latter statement: a meta-clip should be opaque.
> If the user wants to edit its internals, (s)he should "open" the
> meta-clip, which would just switch to another timeline window tab.
> 
> I don't understand your argument at the beginning of the paragraph.
> Leaving aside that it's never a good idea to design the UI so as to
> avoid implementation bugs, we simply aren't in the position to decide
> on "limitations" here. What is needed is dictated by actual necessities.
> Regarding Effects:
> - you have effects in a global bus insert (e.g. global deinterlacer)
> - you have effects on some timespan, irrespective of the content. This
>   kind of effect is very frequently combined with automation.
> - you have effects applied to a whole clip
> - you have effects which need to be applied to a specific source media
> - you have effects applied just to a part of a clip, typically with
>   automation
> - you have the situation where an effect spans a transition.
> 
> All of these cases are everyday situations. You do no one a favour if
> you try to prohibit some of those cases; you just force people to use
> workarounds, often with serious consequences for the whole project.

I am afraid my poor up-all-night authorship has obscured my meaning.
None of the rendering effects is at odds with my idea in the least,
though I need to work out a good representation of effects tied to the
absolute last output rather than any one meta-track (i.e. global
effects).
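Those attachment scopes could be represented roughly as follows. This is a sketch only; the `Scope` names and the `Effect` class are invented here for illustration, under the assumption that time-bound scopes carry an interval while bus- and media-bound ones do not.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

class Scope(Enum):
    GLOBAL_BUS = auto()    # e.g. a global deinterlacer in a bus insert
    TIMESPAN = auto()      # covers an interval, irrespective of content
    CLIP = auto()          # bound to one whole clip
    SOURCE_MEDIA = auto()  # follows a specific piece of source media
    CLIP_PART = auto()     # only part of a clip, typically automated
    TRANSITION = auto()    # spans a transition between two clips

@dataclass
class Effect:
    name: str
    scope: Scope
    start: Optional[float] = None   # only meaningful for time-bound scopes
    end: Optional[float] = None

    def active_at(self, t: float) -> bool:
        """Bus- and media-bound effects are always 'on'; time-bound
        effects are active only within their interval."""
        if self.start is None or self.end is None:
            return True
        return self.start <= t < self.end
```

The point of listing the scopes explicitly is that none of them needs special-case machinery; they differ only in what the effect is anchored to.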

> > Sudden idea:  As a parallel to the idea of cuts in source media, the 
> > corresponding effect (crop off the ends) should be easily performed
> > on the media containers, too.
> 
> You are asking the wrong question. It is not "how can containers be
> made to behave somewhat similarly to normal clips?" That way you are
> creating "accidental complexity".
> 
> The right question is: "is there any difference between the handling
> of real clips and meta-clips? What has to behave differently?"
> And the answer is: there is none.
> They are just using a different kind of source media, that's all.
> (You can "edit" the source media of a meta-clip in another timeline,
> while you can't "edit" the source media from a media file or live
> input.) You can trim, splice, rearrange, and combine them like normal
> clips, and all of this doesn't cost a single line of additional code.
> 
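The claim that trimming and splicing are source-agnostic can be made concrete with a tiny sketch. The names below are illustrative assumptions: a clip record with `start`, `length` and `source_pos`, whose `source` may equally be a media file or another timeline's output.

```python
from dataclasses import dataclass, replace
from typing import Tuple

@dataclass(frozen=True)
class Clip:
    start: float        # position on the timeline
    length: float
    source_pos: float   # offset into the source
    source: object      # a media file OR another timeline's output port

def trim_head(clip: Clip, delta: float) -> Clip:
    """Trim `delta` seconds off the head: the clip starts later and the
    source offset advances. Note the source is never inspected."""
    return replace(clip, start=clip.start + delta,
                   length=clip.length - delta,
                   source_pos=clip.source_pos + delta)

def splice(clip: Clip, at: float) -> Tuple[Clip, Clip]:
    """Cut a clip in two at timeline position `at` (inside the clip)."""
    head_len = at - clip.start
    head = replace(clip, length=head_len)
    tail = replace(clip, start=at, length=clip.length - head_len,
                   source_pos=clip.source_pos + head_len)
    return head, tail
```

Since neither function ever looks inside `source`, the same code path serves "real" clips and meta-clips alike, which is exactly the "no additional code" argument.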

Again, I blame my borderline unconsciousness. I agree that my eventual
conclusion was that, at most, the difference between them should be no
more than having faded-out continuations visible by default for
containers but not streams, and that I didn't like horizontal (in
time, that is) stacking of containers.

However, I realize that the second of those is clearly not a problem:
correlation between each of their constituent streams is entirely
unnecessary.

> > Note that two media containers cannot share the same track (unlike 
> > dropping one video after another on the same video track)---trying to
> > do so ends up with a mess whenever there are different numbers of
> > content tracks for each container.
> 
> First of all, we should be careful with our terminology.
> A media doesn't contain "tracks", it contains N separate *streams*.
> Each stream has a stream-type.
> 
> A track, on the contrary, doesn't contain media data; it doesn't
> "play back" or "mix" media (the latter is the job of the render or
> processing engine). Consequently, it doesn't, in itself, have a
> stream-type. A track is just a place to arrange clips and effects,
> nothing more. There is no reason why you should artificially limit
> the kinds of clips which can be put next to one another.

Terms noted.  Excuse my persistent errors therewith until now.

> There is a GUI design problem though: how to draw the visualisation
> or preview of the clips? Do we really need to see the various possible
> streams within a single clip displayed "inline", in the timeline view??
> While, for example, displaying a mono waveform is certainly helpful,
> showing 9 tiny, almost identical waveforms of a 2nd-order Ambisonics
> track is quite silly. For the purpose of editing, it would be fully
> sufficient to show the waveform of the W channel (sound pressure sum).
> A similar argument applies to a clip containing three video channels
> from a multi-camera setup.
> 
> ...
> > before the beginning crop and after the end crop, all of the content
> > of the media container can still be seen and edited, but it is faded
> > to indicate the 'invisibility' of that content.
> 
> This sounds like a nice idea to me. Of course there is no reason why
> such a feature shouldn't be available for real clips too.

My reluctance stemmed from the overlap in the faded extensions which
would show up in most workflows.  I suppose their low importance allows
us to just have them 'run into' each other at the halfway point.

> In a similar vein, do you know the "region transparency" in Ardour?
> In Ardour you can stack regions on top of each other as you like,
> but the topmost region always covers what's "below". But if you
> switch the region to "transparent", you can hear a mix of both
> and you can see the waveform of the region "below" shine through
> the transparent waveform of the "upper" region.
> (In Ardour, "region" denotes what we call a "clip")

I am not aware of it.  I'll install Ardour and have a look this week.

> >> Let's assume you lock a track containing a media object "A". Let's
> >> further assume there is another object "B" placed on another track.
> >> 
> >> Case 1:....
> 
> > I am either misreading your text or I think you might have made some 
> > small media reference errors, below is my interpretation of your 
> > intended meaning:
> 
> Well, I can't see any difference between the two descriptions (besides
> the small implementation detail of when to re-adjust the offset). So
> I take it that we both judge the situation similarly.
> 
> > 
> > I agree that this is a good handling of relative placement 
> > interactions with locked tracks.  The visual cue for Case 1 should be
> > the same one used by Case 2 (and the default relative motion case).
> > I am imagining something like a red line with text indicating the
> > time of the relative offset bracketed by the related track's defining
> > edges.  An image of an anchor (the nautical tool) on the end of the 
> > track that defines the relative placement would clearly show 
> > directionality of the relationship.
> 
> Yeah, sounds like a good way of visualizing this relationship.
> (ok, I am aware that implementing such behaviour may have some rough
> edges, but from a usage point of view it's easy to grasp)
> 
> Cheers,
> Hermann V.
> 

-- 
Clay Barnes

Website:
http://www.hci-matters.com

GPG Public Key (Fingerprint 0xBEF1 E1B8 3CC8 4F9E 65E):
http://www.hci-matters.com/keys/robertclaytonbarnes_public_key_until20130101.gpg
https://keyserver2.pgp.com/vkd/DownloadKey.event?keyid=0xE1B83CC84F9E65E8
