Hi Khyrul,

I think there are some interesting aspects to this task. Your approach in general sounds good, and you are right that latency will be a major bottleneck for the solution.

Around FFmpeg real-time transcoding you might want to look at: https://trac.ffmpeg.org/wiki/StreamingGuide
So it might be possible to simply re-stream the stream from Red5 to FFmpeg, have FFmpeg transcode it and publish it again, and then re-stream from FFmpeg back to Red5 and on to the conference room.
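Just to make that a bit more concrete, here is a rough sketch of how such a transcoding step could be started from the Java side. The application name, stream names and ffmpeg options below are only assumptions on my part and untested:

    import java.io.IOException;
    import java.util.Arrays;
    import java.util.List;

    // One ffmpeg process per target resolution: pull the published stream
    // from Red5 over RTMP, scale it down and publish it back under a new name.
    public class StreamTranscoder {

        public static Process startTranscode(String host, String app, String inStream,
                String outStream, String resolution) throws IOException {
            List<String> cmd = Arrays.asList(
                "ffmpeg",
                "-i", "rtmp://" + host + "/" + app + "/" + inStream,  // stream published by the client
                "-c:v", "libx264",
                "-preset", "veryfast",
                "-tune", "zerolatency",  // keep the extra latency of this hop as low as possible
                "-s", resolution,        // e.g. "320x240" for the low quality variant
                "-c:a", "copy",          // leave the audio untouched
                "-f", "flv",             // RTMP expects an FLV container
                "rtmp://" + host + "/" + app + "/" + outStream);      // transcoded stream for the other clients
            return new ProcessBuilder(cmd).redirectErrorStream(true).start();
        }

        public static void main(String[] args) throws IOException {
            // hypothetical stream names; low/medium/high would simply be three such processes
            startTranscode("localhost", "openmeetings", "12_345", "12_345_low", "320x240");
        }
    }

Running one such process per quality level would give you the three variants, at the cost of the extra hop you are worried about.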
Thanks,
Sebastian

2016-03-14 21:51 GMT+13:00 khyrul Bashar <[email protected]>:
> Hi,
> Sorry if it's a duplicate; I sent the previous one before subscribing. I'm
> Khyrul Bashar, a 4th year CSE student from Bangladesh. I participated
> successfully in GSoC 2015, where I coded the PDFDebugger app of the PDFBox
> project at the Apache foundation.
> This year I hope to work on OpenMeetings-550. I find it an interesting
> problem, but the documentation available for solving it is almost
> non-existent; as the issue says, "there is hardly any documentation on that
> available in the internet". So far I have built the OpenMeetings source
> code and gone through it briefly.
> I'm trying to understand how the OpenMeetings conference live streaming
> works. This is what I've understood so far: on the client side, Flash
> 'publish'es the media, and on the server side the other clients in the same
> scope are made aware of the video. The ScopeApplicationAdapter class, which
> extends Red5's ApplicationAdapter class, gets notified about a new video
> stream in its 'streamPublishStart' method, and from there OpenMeetings
> messages the other concerned clients via their 'Connection'. On the Flash
> client side, upon receiving that message, the video is played back. All the
> necessary URLs and ports are stored in 'FlexGlobals.topLevelApplication'.
> One thing I don't understand is how the 'FlexGlobals.topLevelApplication'
> data is populated in the first place. Pardon me if it's trivial; I read the
> ActionScript code today for the first time.
> Now, my understanding of the issue is that we have to implement a way so
> that the video published by a client is available in three resolutions
> (low, medium, high) to the other clients in the same room. As Red5 takes
> care of handling the live streaming itself (that is my understanding), to
> make any version of the video other than the original available we need to
> intervene in the middle, e.g. like recording does: we add a stream listener
> and pull the data from there via 'StreamPacket'. From this we build an
> input for an FFmpeg command, which gives us the transcoded videos we need.
> Those outputs are then relayed to the other clients according to their
> demand.
> I'm concerned about this approach, as there is a real chance of latency
> because one more step is added in the middle. Also, since I spent most of
> the last three days just trying to understand Red5 and I'm still not really
> confident (thanks to the Red5 documentation!), I haven't really had a
> chance to look at how FFmpeg works, so I'm not sure how the data received
> from the client is made available to FFmpeg to transcode. I would
> appreciate some guidance here. Please feel free to correct my understanding
> of the problem and of the application architecture.
>
> Thanks,
> Khyrul Bashar
>
> PS: Why is the Red5 documentation so rare? I'm not complaining, just
> curious. Red5 seems to be a decent media server used by many, e.g. Facebook.
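Regarding the listener approach you describe above (the same way the recording code hooks in), the Red5 side of it could look roughly like the sketch below. This is untested, and how the packets are then handed over to FFmpeg is exactly the part that is left open:

    import org.apache.mina.core.buffer.IoBuffer;
    import org.red5.server.api.stream.IBroadcastStream;
    import org.red5.server.api.stream.IStreamListener;
    import org.red5.server.api.stream.IStreamPacket;

    // Sketch: a listener that receives every packet of a published stream,
    // like the recorder does. Feeding the packets to FFmpeg (named pipe,
    // local re-publish, ...) is deliberately left out here.
    public class TranscodingStreamListener implements IStreamListener {

        @Override
        public void packetReceived(IBroadcastStream stream, IStreamPacket packet) {
            IoBuffer data = packet.getData();       // raw FLV tag body of this audio/video packet
            int timestamp = packet.getTimestamp();  // RTMP timestamp in milliseconds
            // TODO: write the packet into whatever input FFmpeg is reading from
        }
    }

    // In ScopeApplicationAdapter (or a class extending it):
    //
    // @Override
    // public void streamPublishStart(IBroadcastStream stream) {
    //     super.streamPublishStart(stream);
    //     stream.addStreamListener(new TranscodingStreamListener());
    // }

Compared to that, the re-streaming variant above might still be the simpler route, since FFmpeg then handles the RTMP input itself.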
--
Sebastian Wagner
https://twitter.com/#!/dead_lock
[email protected]