We are currently working on a chat, file-sharing, and video-conference
application using HTML5
WebSockets<http://stackoverflow.com/questions/4220672/implementing-webbased-real-time-video-chat-using-html5-websockets>.
To make our application more accessible we want to implement Adaptive
Streaming, using the following sequence:

   1. Raw audio/video data is sent from the client to the server
   2. The stream is split into 1-second chunks
   3. Each chunk is encoded at several bitrates
   4. The client receives a manifest file describing the available segments
   5. It downloads one segment over plain HTTP
   6. The bitrate of the next segment is chosen based on the measured
   performance of the previous one
   7. The client may thus switch between a number of alternate streams at a
   variety of data rates
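To illustrate what we mean by step 6, here is a rough Python sketch of the
client-side rate selection we have in mind. The bitrate ladder and the
safety margin are illustrative assumptions, not measured values:

```python
# Hypothetical sketch of step 6: pick the bitrate for the next segment
# based on the throughput observed on the previous 1-second chunk.

LADDER_KBPS = [300, 700, 1500, 3000]  # example encodings from step 3

def choose_next_bitrate(prev_bytes, prev_seconds, safety=0.8):
    """Return the highest ladder bitrate that fits within a safety
    margin of the throughput seen on the previous segment download."""
    throughput_kbps = (prev_bytes * 8 / 1000) / prev_seconds
    budget = throughput_kbps * safety
    # Fall back to the lowest rung if nothing fits the budget.
    candidates = [rate for rate in LADDER_KBPS if rate <= budget]
    return max(candidates) if candidates else LADDER_KBPS[0]

# Example: 250 kB downloaded in 1 s -> 2000 kbps observed throughput,
# 80% safety margin -> 1600 kbps budget -> pick the 1500 kbps stream.
```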

Does PyFFmpeg support Adaptive Streaming? If not, how could we implement this?
_______________________________________________
libav-user mailing list
[email protected]
https://lists.mplayerhq.hu/mailman/listinfo/libav-user
