Hi there,

I am interested in using ffmpeg in a very non-standard way.

There is an application that currently displays images on an X11 display. I would like to modify the application so that the pictures are not displayed via X,
but fed into ffmpeg and encoded as frames of an H.264 video stream instead.

How do I do this?

(I am aware that one can build a video file from a group of images, like this:

     ffmpeg -f image2 -i foo-%03d.jpeg -r 12 -s WxH foo.avi

... but that's not what I want!
I want to feed the data from my application directly to ffmpeg, without encoding it as JPEG and writing it out to a file.)

(The purpose of this is to get smaller latencies and better throughput.)
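For instance, I imagine something along these lines: piping raw frames from my process straight into an ffmpeg subprocess. This is only a rough sketch of what I have in mind; the frame size, frame rate, pixel format, and output name are just placeholder assumptions, not values from my application:

```python
import subprocess

# Assumed frame geometry -- in reality these would come from the application.
WIDTH, HEIGHT, FPS = 640, 480, 25

def ffmpeg_encode_cmd(width, height, fps, out="out.h264"):
    """Build an ffmpeg command line that reads raw, headerless RGB24
    frames from stdin and encodes them to H.264 -- no intermediate
    JPEG files, no round trip through the filesystem."""
    return ["ffmpeg",
            "-f", "rawvideo",           # input is raw, headerless video
            "-pix_fmt", "rgb24",        # one frame = width * height * 3 bytes
            "-s", f"{width}x{height}",
            "-r", str(fps),
            "-i", "-",                  # read the frames from stdin
            "-c:v", "libx264",          # encode with the x264 H.264 encoder
            out]

# Sketch of the feeding loop (not run here):
# proc = subprocess.Popen(ffmpeg_encode_cmd(WIDTH, HEIGHT, FPS),
#                         stdin=subprocess.PIPE)
# for frame in frames:                 # each frame: WIDTH*HEIGHT*3 bytes
#     proc.stdin.write(frame)
# proc.stdin.close()
# proc.wait()
```

Linking against libavcodec directly would presumably avoid even the pipe, which is why I am asking about the library API below.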

  * * *

Obviously, I will need to write code, but that's no problem; the question is where to start. Where can I find relevant documentation and/or code examples? Who do I need to talk to?

Thank you for your help,

   Kristof
_______________________________________________
Libav-user mailing list
[email protected]
http://ffmpeg.org/mailman/listinfo/libav-user
