I am attempting to stream AVPackets with AVFrame data containing either video 
or audio data. The video format in play is FLV. The audio and video are being 
captured by QTKit -- which provides separate callbacks to deliver video samples 
and audio samples. 

My question revolves around the nature of av_interleaved_write_frame vs. 
av_write_frame. Presently, if I stream video using av_write_frame, it is 
received and plays successfully on the other end. However, if I stream video 
using av_interleaved_write_frame, I just get black video on the other end. 
While the audio data appears to be filling packets and streaming successfully 
(even though the actual audio data is garbage and not what I want -- that's 
another problem I'm trying to figure out and have posted about on this list), 
it appears that only av_write_frame allows the video to come through on the 
other side. 

My question is this: what requirement is there to use one call over the other? 
And assuming for a moment that av_interleaved_write_frame is to be used when 
there is both video and audio (versus just one or the other), how does the 
fact that there's no guarantee of when -- or whether -- audio samples or video 
frames will be delivered affect streaming? QTKit might deliver both audio and 
video samples continually, or video only, or audio only -- there's no absolute 
guarantee of either, and even if both are delivered, there's no guarantee of 
when samples will arrive. 

If someone can enlighten me a little about the nature of these calls -- when 
(if ever) one is required over the other, or whether the choice is merely 
optional -- that would be great. 

Thanks, 

Brad
_______________________________________________
Libav-user mailing list
[email protected]
http://ffmpeg.org/mailman/listinfo/libav-user