Hi everybody,

I'm currently using ffmpeg API to send video stream through RTP. The codec used 
in the RTP protocol is MJPEG. Below is my workflow:

+----------------------+   +--------------+   +----------------------+   +----------------------+
| Camera Frame (JPEG)  +-->+  JPEG to RGB +-->+ RGB to MJPEG encoder +-->+ RTP Format container |
+----------------------+   +--------------+   +----------------------+   +----------------------+

Currently I'm instantiating an MJPEG encoder that encodes each frame into a packet, and
then providing that packet to the "format container" for transmission over the network
(a rough sketch of this path is below).
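
Here is a minimal sketch of that step, just to make the question concrete. It assumes
enc_ctx (an opened MJPEG encoder context), fmt_ctx (an opened RTP output context) and
frame are all set up elsewhere, and that there is a single video stream at index 0:

#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>

/* Sketch of the current path: encode one picture with the MJPEG encoder
 * and hand the resulting packet(s) to the RTP muxer. */
static int encode_and_send(AVCodecContext *enc_ctx, AVFormatContext *fmt_ctx,
                           AVFrame *frame)
{
    AVPacket *pkt = av_packet_alloc();
    if (!pkt)
        return AVERROR(ENOMEM);

    int ret = avcodec_send_frame(enc_ctx, frame);
    while (ret >= 0) {
        ret = avcodec_receive_packet(enc_ctx, pkt);
        if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF || ret < 0)
            break;
        pkt->stream_index = 0;  /* assuming a single video stream */
        ret = av_interleaved_write_frame(fmt_ctx, pkt);
    }
    av_packet_free(&pkt);
    return (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) ? 0 : ret;
}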

The video frames I want to send are already in JPEG format, so I'm wondering whether it
is possible to avoid converting from JPEG to RGB and re-encoding with the FFmpeg MJPEG
encoder.
From my point of view, the ideal would be to create an AVPacket by hand and fill its
buffer directly with the camera's JPEG frame (see the sketch after the diagram below).

+----------------------+   +--------------------+   +----------------------+
| Camera Frame (MJPEG) +-->+ Writing to Packet  +-->+ RTP Format container |
+----------------------+   +--------------------+   +----------------------+
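
Something like the following is what I have in mind. This is only a sketch under some
assumptions: fmt_ctx is an RTP output context with a single video stream (index 0) whose
codec is MJPEG, and the helper name send_jpeg_frame is just made up for illustration:

#include <libavformat/avformat.h>
#include <string.h>

/* Wrap an already-encoded JPEG buffer in an AVPacket and hand it to the
 * RTP muxer without re-encoding. */
static int send_jpeg_frame(AVFormatContext *fmt_ctx,
                           const uint8_t *jpeg_data, int jpeg_size,
                           int64_t pts)
{
    AVPacket *pkt = av_packet_alloc();
    if (!pkt)
        return AVERROR(ENOMEM);

    /* Allocate a refcounted, padded buffer and copy the camera's JPEG into it. */
    int ret = av_new_packet(pkt, jpeg_size);
    if (ret < 0) {
        av_packet_free(&pkt);
        return ret;
    }
    memcpy(pkt->data, jpeg_data, jpeg_size);

    pkt->stream_index = 0;          /* assuming the MJPEG video stream is stream 0 */
    pkt->pts = pkt->dts = pts;      /* must be in the stream's time base */
    pkt->flags |= AV_PKT_FLAG_KEY;  /* every JPEG is an intra frame */

    ret = av_interleaved_write_frame(fmt_ctx, pkt);
    av_packet_free(&pkt);
    return ret;
}

I guess the output stream's codecpar->codec_id would need to be set to AV_CODEC_ID_MJPEG
(plus width/height) when the stream is created, so the RTP muxer uses the JPEG payloader.
Does that sound right, or is there a better way?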

Any clues?

Thanks!


