Jason Wood wrote:
> On Saturday 18 May 2002 6:55 pm, Christian Berger wrote:
> > > I've incorporated the changes that we discussed, and came up with a
> > > couple of other things which we hadn't thought of before.
> > >
> > > Firstly, I have said that each cutting list should start with a version
> > > command. This is important so that we can add new features, and modify
> > > stuff, and know what to expect for any particular version.
> >
> > Well, I don't know if we'll ever need it, but it sure is an interesting
> > piece of information. Maybe we should also define a "creator" command,
> > telling us which program created the cutlist. Maybe also a date and
> > comment field. Those fields are definitely good for recovering broken
> > hard disks :)
>
> Of course, if your hard disk has died, recovering the cutting list is
> probably the least of your worries -
Or think of undeleting files.

> > > Secondly, I have added a couple of extra parameter value types -
> > > String, ID, time and interpolation. We need to decide exactly how time
> > > should be represented within the file format.
> >
> > Well, I'd say we use time in milliseconds. Unless we want to be limited
> > to film and PAL/SECAM, we have to be able to handle weird frame rates
> > which aren't a multiple of 0.01. We cannot really do that with
> > 00:00:00:00 formats.
>
> If we are likely to need, say, ten-thousandths of a second for the correct
> accuracy at times, then rather than milliseconds, measuring in seconds and
> allowing any fraction after the decimal point might be better.
> E.g. 330.176215 seconds.

This might be an idea.

> The question is, should time be represented in
> hours/minutes/seconds.fraction-of-a-second, or should it be represented
> as just seconds.fraction-of-a-second?

Only seconds. That hours/minutes/seconds system is braindead and too
complicated. I mean, think of leap seconds and all that stuff. Besides, I
don't see why we should base our file format on the revolution speed of a
planet :)

> > > Thirdly, I have added the interpolation concept. Whilst at the moment
> > > it is limited to simple interpolation between a start value and an end
> > > value, the idea is that the interpolation within a scene should be
> > > independent of the values it is working on: instead of having
> > > -startvalue -endvalue, we have just -value, which can accept an
> > > interpolation. Whilst slightly more difficult to parse (though not
> > > greatly more so), it allows for a great degree of freedom for later
> > > expansion. E.g. we could at a later date expand it to allow non-linear
> > > interpolations, or to have "freehand" sets where the value changes
> > > randomly.
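[To make the interpolation idea concrete: a toy postfix interpreter for
programs like the "0 1 linear" and "0 1 linear 5 * int 5 /" examples
discussed in this thread. The word set, the stack convention (relative time
and scene length supplied by the caller), and all names are illustrative
guesses, not part of any agreed format.]

```python
# Toy "Forth-style" interpolation interpreter - a sketch of the idea from
# this thread, not an agreed format. It would be run once per frame, with
# the scene's current relative time t and its length supplied.

def run(program, t, length):
    """Evaluate a postfix interpolation program; return the top of stack."""
    stack = []
    for word in program.split():
        if word == "linear":
            # Pop end and start values, interpolate by relative position.
            end = stack.pop()
            start = stack.pop()
            stack.append(start + (end - start) * (t / length))
        elif word == "*":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif word == "/":
            b, a = stack.pop(), stack.pop()
            stack.append(a / b)
        elif word == "int":
            # Truncate to a whole number, keeping float representation.
            stack.append(float(int(stack.pop())))
        else:
            stack.append(float(word))
    return stack[-1]

# Plain linear ramp, halfway through a 10-second scene:
print(run("0 1 linear", t=5.0, length=10.0))              # -> 0.5
# The stepped wipe from the mail, quantised into 5 plateaus:
print(run("0 1 linear 5 * int 5 /", t=5.0, length=10.0))  # -> 0.4
```

[With this convention, "5 * int 5 /" multiplies the ramp by 5, truncates,
and divides again, so the wipe moves in 5 discrete steps.]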
> >
> > Well, I'd do it a different way: I'd do "0 1 linear". Although this
> > doesn't seem to be much different (now it's encoded in a string), it
> > might mean a lot of flexibility. We could write a tiny little Forth
> > interpreter. This interpreter would interpret the "interpolation" at
> > every frame. On the stack there will already be the current relative
> > time of the scene as well as the length of the scene, and the command
> > "linear" calculates the value based on the start and end values. For
> > example, if we want to make a linear wipe going in steps, we could do
> > this: 0 1 linear 5 * int 5 /, which would let it move in 5 steps.
>
> The main extension that I was later thinking of for the interpolation
> would be some form of "freehand" ability - where you have multiple
> points, so instead of just [start, end], you could have [start, middle1,
> middle2, middle3, middle4, end].
>
> Of course, then you need to move on to the question of, "What if we don't
> want them all equally spaced out? How do we handle the syntax?" - which
> is why, for the moment, I think it's best to keep it simple. Since any
> interpolation could be handled using multiple scenes anyway, I think this
> is one area which shows why versions on cutting lists can come in handy!

Well, I think my version is best here: since it's a real program, we can
do "anything" in Forth.

> > > Fourthly, I've re-structured the document, and used Star Office to
> > > write it. Unfortunately, there seems to be a problem with exporting
> > > PDF files at the moment - the file is now on the website, but only
> > > seems to be viewable using gv. I'm still trying to figure out what's
> > > going wrong. I also intend to add an HTML version sometime soon.
> >
> > Well, it looks quite good for a PDF file.
> >
> > On page 1, below the drawing, there's a typo: scehduler should be
> > written scheduler, I think.
> >
> > Scenes can also have a length of less than a frame.
> > This might be important if you want to "fill up" scenes.
>
> ??? I don't understand this point.
> I would have assumed that a scene would be at least a frame, otherwise it
> would never get rendered by the cutter.

Well, but it would create a gap in our file format; besides, it creates
audio.

> > Maybe we should somehow distinguish between file IDs and connection
> > IDs, since I have to have a "list" of all the files being used in a
> > scene.
>
> The trouble is that once we get inside of a scene, we would then need a
> way to distinguish between them whilst parsing them.

Actually, we have to.

> I think a better solution here is using C++ and making use of inheritance
> and polymorphism.

Oh well, I've worked with those techniques in Pascal; they might be
useful, especially for the Forth interpreter. However, C++ is still not
much more than a smart assembler. Actually, our file format can be seen as
a much more sophisticated language than C :) I'd personally prefer writing
it in Pascal, which at least has proper string handling. :)

> > Well, about the smoothing input of the chroma effect (I'd call it
> > chromakey): how would you define it? I mean, it's a color value.
>
> Ok, here I am thinking that this "smoothing" value determines the range
> of values over which the chroma works. For example, you specify a blue
> flatcolor image as the chroma - RGB(0.0, 0.0, 0.8).
>
> We could then specify another flatcolor video as the smoothing. A higher
> value for the color indicates a wider range - effectively, any color with
> a value over 0.5 would mean that the value would always be chromakeyed to
> some degree.
>
> The advantage of using a color is that we can specify separate smoothing
> factors for the different parts of the color value.
> E.g., since we might want a larger range of blue colors to be accepted
> than their red and green components, then we might specify a flatcolor
> video for smoothing of, say, RGB(0.05, 0.05, 0.2) (meaning a small
> variation in red and green would be chromakeyed, and a wider variation in
> blue would be chromakeyed). Smoothing might not be the best word here -
> threshold might be better.
>
> Smoothing would then just be a single value - any chroma effect would get
> "smeared out" a little depending on smoothing, to help remove jaggies
> around the edge of the effect.

I understand it. However, I think we might wait with it for some time. It
might be too time-consuming to do, but it's definitely something we can do
in the future.

> > Maybe we should try something like this: a Forth effect. The same
> > interpreter used in interpolation could calculate the whole effect.
> > It would be called for every pixel, and it gets the pixel values of the
> > incoming images on the stack and leaves the outgoing pixel on top of
> > them. When done efficiently, that wouldn't be too slow.
>
> A simple way to make effects in some semi-interpretive way would be cool.

We can do it that way.

> Cheers,
> Jason

Regards
Casandro

> --
> Jason Wood
> Homepage : www.uchian.pwp.blueyonder.co.uk
>
> _______________________________________________
> Kdenlive-devel mailing list
> Kdenlive-devel at lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/kdenlive-devel
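[As an appendix to the chromakey discussion above: a sketch of the
per-channel threshold idea. The function name, the max-over-channels rule,
and the soft 0..1 alpha are my own illustrative assumptions - nothing here
is an agreed design.]

```python
# Per-channel chromakey threshold sketch. A pixel is fully keyed out
# (alpha 0.0) when every channel matches the key colour exactly, and is
# fully kept (alpha 1.0) once any channel deviates from the key by more
# than that channel's threshold; in between, alpha ramps linearly.

def chroma_alpha(pixel, key, threshold):
    """0.0 = fully keyed out, 1.0 = fully kept (thresholds assumed > 0)."""
    alpha = 0.0
    for p, k, t in zip(pixel, key, threshold):
        # Each channel's deviation from the key, scaled by its threshold;
        # the worst (largest) channel decides the result.
        alpha = max(alpha, min(abs(p - k) / t, 1.0))
    return alpha

key = (0.0, 0.0, 0.8)            # the blue key colour from the mail
threshold = (0.05, 0.05, 0.2)    # wider tolerance on the blue channel

print(chroma_alpha((0.0, 0.0, 0.8), key, threshold))   # exact key -> 0.0
print(chroma_alpha((0.0, 0.0, 0.7), key, threshold))   # near key -> ~0.5
print(chroma_alpha((0.9, 0.1, 0.1), key, threshold))   # foreground -> 1.0
```

[This matches the RGB(0.05, 0.05, 0.2) example: small red/green deviations
are keyed, while a wider band of blues is accepted.]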
