On Fri, 2008-02-15 at 23:05 -0600, Carl Karsten wrote:

Hi, sorry for being late.

[...]
> I was at an hour long presentation with my VGA2USBLR gizmo hooked up to the 
> projector.  I gave up trying to use the v4l interface

That's bad. I can understand your annoyance; unfortunately, transcode's
v4l module is built around TV tuners/video capture cards (to be exact, I
have _two_ SAA713x cards and nothing else...), so it can work strangely,
or not work at all, if the v4l device does not export the "expected"
interface.

I've planned some initial fixes for this, for example introducing
fallbacks for some IOCTL calls, but nothing has been done yet due to a
chronic lack of time :(

> btw, the presentation was:
> Paul Smith - mapping solution from scratch with Mapnik, Django, and 
> OpenLayers.
> so almost topical :)

And quite interesting :)

> I have an audio track.  I want to make a video track.  Not sure what
> format to use.  Size is secondary to quality.  Huge is fine, as long
> as it is needed.  I am expecting a few gig per hour.

Any lossy codec with a sanely-sized bitrate should be fine: I expect
very static images with few differences, so the encoder will be very
happy and will deliver a fine job (hopefully!).

> I have used transcode -i image_list.txt -x imlist - but here is the
> problem: the images were not spread evenly across time.  Looking at
> the times, there are groups of "quick" (1 second per frame) and groups
> of "slow" (3 seconds per frame).  (I am guessing image complexity had
> something to do with it.)  So applying a constant fps means it will
> easily drift by 5 or 10 seconds; thus my quest to get imlist to
> respect file timestamps.
> An easy solution to my current problem would be to write a script that
> copies frames to fill in the gaps so that there is one frame per
> second, but I would rather hack the code and make it somehow use the
> file's timestamp.

That will be EXTREMELY welcome, since it starts to address a historical
weakness of transcode (A/V sync troubles and madness with variable
frame rates).
Of course the idea is to properly handle all those issues in the
transcode core and solve this problem once and for all (...someday in
the future...)

> I have some questions about the code below.
> Does MARK_TIME_RANGE set anything?  (MARK made me think it did.)

Given a stream of A/V frames (it is really a pair of streams, but let's
start simpler), the -c option allows you to select only some frame
ranges to be encoded. Like this (use a fixed font to see this `figure'):

(F = 1 frame)
-> FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF
                 [ - range 1 - ]                  [ - range 2 - ]
   [out_of_range][___in_range__][__out_of_range__][__in_range___][out

Everything after the decoder, i.e. the filters and the encoder itself,
will skip all out_of_range frames to save processing time.

>#define MARK_TIME_RANGE(PTR, VOB) do { \
>      /* Set skip attribute based on -c */ \
>      if (fc_time_contains((VOB)->ttime, (PTR)->id)) \
>          (PTR)->attributes &= ~TC_FRAME_IS_OUT_OF_RANGE; \
>      else \
>          (PTR)->attributes |= TC_FRAME_IS_OUT_OF_RANGE; \
> } while (0)
> 

The fc_time_contains() function checks whether a given frame, identified
by its id, falls in any processing range.
Finally, the TC_FRAME_IS_OUT_OF_RANGE attribute marks the frames that do
NOT belong to any such processing range, i.e. the frames that must be
ignored by every further processing stage.

> Why the while(0) loop?

It is an idiomatic expression, see:
http://www.c-faq.com/cpp/multistmt.html

> Why is there both ptr->video_len and video_size ?

See comments in src/framebuffer.h (pasted here)

/* 
 * Size vs Length
 *
 * Size represents the effective size of audio/video buffer,
 * while length represent the amount of valid data into buffer.
 * Until 1.1.0, there was no such distinction, and 'size'
 * had, approximately, a mixture of both meanings.
 *
 * In the long run[1] (post-1.1.0) transcode will start to
 * intelligently allocate frame buffers based on the highest
 * request of all modules (core included) through the filter
 * mangling pipeline. This will lead to circumstances in
 * which the valid data in a buffer is less than the buffer size:
 * think of the demuxer->decoder transition, or RGB24->YUV420.
 * 
 * There also are more specific cases like a full-YUV420P
 * pipeline with final conversion to RGB24 and raw output,
 * so we can have something like
 *
 * framebuffer size = sizeof(RGB24_frame)
 * after demuxer:
 *     frame length << frame size (compressed data)
 * after decoder:
 *     frame length < frame size (YUV420P smaller than RGB24)
 * in filtering:
 *      frame length < frame size (as above)
 * after encoding (in fact just colorspace transition):
 *     frame length == frame size (data becomes RGB24)
 * into muxer:
 *     frame length == frame size (as above)
 *
 * In all those cases, having a distinct 'length' field helps
 * make things nicer and easier.
 *
 * +++
 *
 * [1] in 1.1.0 that does not happen, due to module interface
 * constraints: we're still bound to the Old Module System.
 */

> How does anything's timing work?  I thought there was something that
> would keep the audio and video streams synced, but I don't see how.

transcode's synchronization code is, to be VERY kind, either optimistic
(for the non-VOB world) or very messy and poor (for the VOB world). And
sometimes it is both.
I'll describe it briefly and from a VERY HIGH point of view (in order to
hide the scary details).
- For VOB/MPEG stuff, the synchronization code is (mostly) in the
import/tcdemux sources. Keep away if you can.
This code is very low-level and MPEG-specific, and it tries to handle
the task by messing around at the stream level. To make a long (and
painful) story short: keep away. We (transcode-dev) should really nuke
all this chaos and rewrite the whole thing from the ground up, because
right now it is pretty unpredictable and unreliable.
- For anything else, but with the implicit assumption that this `else'
is something AVI-like, synchronization is handled with the
TC_FRAME_IS_SKIPPED and TC_FRAME_IS_CLONED flags.
Maybe you can find some interesting sample code in
import/v4l/import_v4l2.c

+++

A short-term, band-aid-like but still VERY useful and welcome task would
be to provide a thin, generic synchronization layer that attempts to
normalize synchronization issues using the above two flags and frame
timestamps. This is of course not the definitive solution, but it is
still a very important first step.

If you want to try this route, you will have all the support I can
provide (this offer is of course valid for anyone else listening).
Otherwise, I'll add this task to the top of my todo list for 1.2.0.

Sorry for the pretty long answer, and sorry again for being late in
answering; for any other/further questions, please do not hesitate to
reply.


Best,

-- 
Francesco Romani // Ikitt
[ Out of memory. ~ We wish to hold the whole sky, ~ But we never will. ]
