On 9/10/2016 12:56 AM, Timur Guseynov wrote:
Hi.

I am new to C++ and FFmpeg and want to write an application that streams video from a camera (at least a webcam) to a server.

I've seen the StreamingGuide <https://trac.ffmpeg.org/wiki/StreamingGuide> and want to know how to implement it.
I think the basic flow is like this (a sketch follows the list); please correct me if I'm wrong:

 1. Get the input device's AVInputFormat from libavdevice
 2. Open that input with avformat_open_input
 3. Find its streams and their codec parameters
 4. Get a decoder with avcodec_find_decoder
 5. Decode it somehow
 6. Encode it for streaming
 7. Write the data with a muxer
 8. Send the muxed data to the server
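
To make the first half concrete, here is roughly what I have in mind
(a minimal sketch assuming a v4l2 webcam on Linux, "dshow" on Windows
or "avfoundation" on macOS, and FFmpeg 3.1+ for the codecpar and
send/receive API; error handling is trimmed to early returns):

    extern "C" {
    #include <libavdevice/avdevice.h>
    #include <libavformat/avformat.h>
    #include <libavcodec/avcodec.h>
    }

    int main() {
        av_register_all();
        avdevice_register_all();  // step 1: make the input devices visible

        AVInputFormat *ifmt = av_find_input_format("v4l2");              // step 1
        AVFormatContext *ic = nullptr;
        if (avformat_open_input(&ic, "/dev/video0", ifmt, nullptr) < 0)  // step 2
            return 1;
        if (avformat_find_stream_info(ic, nullptr) < 0)                  // step 3
            return 1;

        int vs = av_find_best_stream(ic, AVMEDIA_TYPE_VIDEO, -1, -1, nullptr, 0);
        if (vs < 0)
            return 1;
        AVCodec *dec = avcodec_find_decoder(ic->streams[vs]->codecpar->codec_id); // step 4
        AVCodecContext *dctx = avcodec_alloc_context3(dec);
        avcodec_parameters_to_context(dctx, ic->streams[vs]->codecpar);
        if (avcodec_open2(dctx, dec, nullptr) < 0)
            return 1;

        AVPacket pkt;
        AVFrame *frame = av_frame_alloc();
        while (av_read_frame(ic, &pkt) >= 0) {  // step 5: the decode loop
            if (pkt.stream_index == vs && avcodec_send_packet(dctx, &pkt) >= 0) {
                while (avcodec_receive_frame(dctx, frame) >= 0) {
                    // `frame` now holds a raw picture: hand it to the GUI
                    // and/or to the encoder (steps 6-8).
                }
            }
            av_packet_unref(&pkt);
        }

        av_frame_free(&frame);
        avcodec_free_context(&dctx);
        avformat_close_input(&ic);
        return 0;
    }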

So I can picture how to implement the first half of this list, but not the second.

The two questions I have are:

 1. Do I understand the streaming flow correctly? What nuances must I
    consider? What modules/methods should I look into to implement it?
 2. How can I preview the stream in a GUI using, for example, Qt
    Quick? Is the input device locked by one of the processes (either
    FFmpeg or Qt)? If it is, should I somehow copy the frames for the
    GUI to show to the user, or just reference them?

Thanks in advance!

Kind regards,
Timur Guseynov


That is pretty close if you want to display it in a GUI.

To display it, you just draw the decoded bytes to some hardware overlay, or convert them to RGB and paint them in a widget (see the sketch below). And yes, typically only one application can acquire a capture device's output at a time. You can register a pseudo input device that distributes the output of an actual camera; there are products available that do this.
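
Since you mention Qt, here is a minimal sketch of one way to do the
copy, assuming libswscale and a decoded AVFrame called `frame` from
your capture loop. Converting to RGB hands Qt its own copy, so the
same frame can still go to the encoder:

    extern "C" {
    #include <libavutil/frame.h>
    #include <libswscale/swscale.h>
    }
    #include <QImage>

    // Convert a decoded AVFrame (any pixel format) into a QImage for painting.
    QImage frameToImage(const AVFrame *frame) {
        SwsContext *sws = sws_getContext(
            frame->width, frame->height, (AVPixelFormat)frame->format,
            frame->width, frame->height, AV_PIX_FMT_RGB24,
            SWS_BILINEAR, nullptr, nullptr, nullptr);
        QImage img(frame->width, frame->height, QImage::Format_RGB888);
        uint8_t *dst[4]    = { img.bits(), nullptr, nullptr, nullptr };
        int dstLinesize[4] = { int(img.bytesPerLine()), 0, 0, 0 };
        sws_scale(sws, frame->data, frame->linesize, 0, frame->height,
                  dst, dstLinesize);
        sws_freeContext(sws);
        return img;  // deep copy owned by Qt; the AVFrame is untouched
    }

In Qt Quick you could then hand that QImage to a QQuickImageProvider
or paint it from a QQuickPaintedItem; either way the GUI works on its
own copy and never touches the capture device itself.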

For the server, you could set up an output for RTMP or RTSP and then write the encoded AVPackets to it (see the sketch below).
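
A minimal sketch of that, assuming H.264 and an RTMP ingest URL (the
URL and helper names here are placeholders, and error handling is
omitted). RTMP carries an FLV bitstream, hence the "flv" muxer:

    extern "C" {
    #include <libavformat/avformat.h>
    #include <libavcodec/avcodec.h>
    }

    // Open an H.264 encoder (step 6) for frames of the given size and rate.
    AVCodecContext *openEncoder(int width, int height, int fps) {
        AVCodec *enc = avcodec_find_encoder(AV_CODEC_ID_H264);
        AVCodecContext *ectx = avcodec_alloc_context3(enc);
        ectx->width = width;
        ectx->height = height;
        ectx->pix_fmt = AV_PIX_FMT_YUV420P;
        ectx->time_base = AVRational{1, fps};
        ectx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;  // FLV wants global headers
        avcodec_open2(ectx, enc, nullptr);
        return ectx;
    }

    // Steps 7-8: open the RTMP output and write the stream header.
    AVFormatContext *openRtmpOutput(AVCodecContext *ectx, const char *url) {
        AVFormatContext *oc = nullptr;
        avformat_alloc_output_context2(&oc, nullptr, "flv", url);
        AVStream *st = avformat_new_stream(oc, nullptr);
        avcodec_parameters_from_context(st->codecpar, ectx);
        st->time_base = ectx->time_base;
        if (!(oc->oformat->flags & AVFMT_NOFILE))
            avio_open(&oc->pb, url, AVIO_FLAG_WRITE);  // opens the connection
        avformat_write_header(oc, nullptr);
        return oc;
    }

    // For each packet produced by avcodec_receive_packet():
    void sendPacket(AVFormatContext *oc, AVCodecContext *ectx, AVPacket *pkt) {
        av_packet_rescale_ts(pkt, ectx->time_base, oc->streams[0]->time_base);
        pkt->stream_index = 0;
        av_interleaved_write_frame(oc, pkt);  // muxes and sends to the server
    }

When you stop streaming, flush the encoder, then call
av_write_trailer(oc) and avio_closep(&oc->pb).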

Have Fun!

Andy
