Unfortunately, this is actually one of the few formats I have found to be
hit-or-miss for seeking within MLT. It is a rather passé format now, and it is
very difficult to make format-specific improvements, let alone seek and sync
improvements, since that requires heavy regression testing. On the other hand,
I seem to recall that some of the HDV files I was testing against had data
loss or corruption contributing to my problem. For your app, it sounds like
you just need non-seeking playback for live preview, followed by seeking-based
processing during post. In that case, you can transmux or transcode the HDV in
post if needed.
Yes, that's pretty much it. The reason for originating in HDV is that we
already have some good cameras. We thought about HDMI, but as far as I can
see, HDMI capture is still largely unreliable (software codecs, dodgy
hardware) and/or expensive (hardware codecs). Most of the hardware I've seen
can't do 1080/60p and delivers 1080/60i as 1080/30p.

Frame-accurate sync between sources is not something MLT provides today.
Presumably, that would be based upon a timecode (starting value or track)
embedded in the source. However, if all you need is to play - without seeking
- from two growing files, with sync accurate to within 100 ms or so based
simply on task startup timing, then MLT will be sufficient.
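
For example, you could launch both dvgrab captures back to back from one
script, so that only process startup time separates them. A rough, untested
sketch in Python - the camera GUIDs, file prefixes, and dvgrab options are
placeholders to adapt:

#!/usr/bin/env python
# Rough sketch: start two dvgrab captures as close together as possible so
# that process startup time is the only source of offset between the files.
# The camera GUIDs and file prefixes are made up for illustration.
import subprocess
import time

CAMERAS = [
    {"guid": "0x0800460104abcdef", "prefix": "cam1-"},
    {"guid": "0x0800460104fedcba", "prefix": "cam2-"},
]

procs = []
start = time.time()
for cam in CAMERAS:
    # dvgrab writes a growing MPEG-2 transport stream when capturing HDV;
    # --size 0 disables file splitting.
    cmd = ["dvgrab", "-guid", cam["guid"], "--format", "hdv",
           "--size", "0", cam["prefix"]]
    procs.append(subprocess.Popen(cmd))
print("captures started within %.3f s of each other" % (time.time() - start))

for proc in procs:
    proc.wait()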

In order to play multiple inputs with Python (or another high-level language)
and SDL within the same process, you must use the composite transition to
compose them into one output view. You can use a custom mlt_profile to define
a resolution and aspect ratio that best accommodates that. It will be a little
heavy due to the image processing involved. If you want something with lower
CPU overhead, then you will need to use C/C++ to receive the MLT
consumer-frame-show event with the frame object, get the image data as YUV or
RGB, and paint it in your app's GUI. To do that drawing efficiently, you may
need to use OpenGL, or combine buffers much as the composite transition does
and feed the result into an overlay API (e.g. XVideo). In other words, there
is little middle ground. Probably best to use composite and SDL in the short
term.
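
To make that concrete, here is a rough, untested sketch of the composite +
SDL route using the MLT Python bindings. The profile name, file names, and
geometry values are assumptions you would adjust for your material:

#!/usr/bin/env python
# Rough sketch: preview two inputs side by side in one SDL window by
# compositing each onto half of a black background track. File names and
# the profile choice are placeholders.
import mlt
import time

mlt.Factory.init()

# A smallish progressive profile keeps the compositing load down; pick one
# that matches your material, or define a fully custom mlt_profile.
profile = mlt.Profile("atsc_720p_2997")

bg = mlt.Producer(profile, "colour:black")
cam1 = mlt.Producer(profile, "cam1-001.m2t")
cam2 = mlt.Producer(profile, "cam2-001.m2t")

tractor = mlt.Tractor()
tractor.set_track(bg, 0)
tractor.set_track(cam1, 1)
tractor.set_track(cam2, 2)

# Composite each camera track onto one half of the background track.
field = tractor.field()
transitions = []
for b_track, geometry in ((1, "0%/0%:50%x100%"), (2, "50%/0%:50%x100%")):
    side = mlt.Transition(profile, "composite")
    side.set("geometry", geometry)
    field.plant_transition(side, 0, b_track)
    transitions.append(side)    # keep references alive while playing

consumer = mlt.Consumer(profile, "sdl")
consumer.set("rescale", "none")
consumer.connect(tractor)
consumer.start()

# Keep the script alive until playback stops.
while consumer.is_stopped() == 0:
    time.sleep(1)
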
Our discussion has given me an idea for how we can realize this. I think the
answer is to capture the streams with dvgrab and composite them together to
provide a side-by-side view in a single lower-resolution proxy file saved
onto a separate machine's drive. We can then play or scrub through this file
to identify the points for clipping and switching between cameras. I can
create a simple logging application to do this, and then make another app
which takes the logged data and processes the original captured video to
create our desired outputs, which will be several transcoded versions (DVD,
web streaming, possibly Blu-ray). Working with a lower-resolution proxy
should improve performance as well.
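
To sketch what that second app might look like - every file name, frame
number, and encoder option below is a placeholder, and I still need to test
whether the approach holds up:

#!/usr/bin/env python
# Rough, untested sketch of the post step: turn a logged list of
# (camera file, in frame, out frame) switch points into one programme and
# encode it with the avformat consumer. The log, file names, and codec
# options are all placeholders.
import mlt
import time

mlt.Factory.init()
profile = mlt.Profile("hdv_1080_60i")   # assumed to match the source tapes

# Hypothetical output of the logging application.
cuts = [
    ("cam1-001.m2t", 0, 1499),
    ("cam2-001.m2t", 1500, 2999),
    ("cam1-001.m2t", 3000, 4499),
]

playlist = mlt.Playlist()
for filename, in_frame, out_frame in cuts:
    clip = mlt.Producer(profile, filename)
    playlist.append(clip, in_frame, out_frame)

# One consumer per delivery format; a web encode is shown here.
render = mlt.Consumer(profile, "avformat", "programme-web.mp4")
render.set("vcodec", "libx264")
render.set("acodec", "aac")
render.connect(playlist)
render.start()
while render.is_stopped() == 0:
    time.sleep(1)
render.stop()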

Thanks a lot for your input, Dan. I can now start doing some testing and 
experimentation. I'll avoid pestering you as I know you have a lot of other 
things to do, but I'll try to post updates on my progress here and share 
anything useful that comes up.

Keith