Hi Christopher,

Thanks for the reply and for giving me an overview of this project.

> The idea with this project is to detect when two different videos
> should be transposed (one is VGA, one is a speaker video), and switch
> back and forth as needed in the output stream.  This lets a mobile
> client see the "best" video for a particular time, yet keeps us to a
> single stream.  So something like openCV might be used for detecting
> when the speaker is looking at the audience, or when graphics have
> changed on the slides and they should be shown again.

To understand how a single output stream might be delivered, I went
through [1] and tried to follow the process. As I have never tried
this before, please let me know whether it is related to what we are
trying to do, or whether there is something more specific I should
look into.

Also, I'm new to OpenCV, so I have been working through tutorials
covering the basic OpenCV data structures, image representations, and
simple image-processing code. I found it interesting to learn and
work with.

[1]: 
http://msdn.microsoft.com/en-us/library/windows/desktop/dd743233(v=vs.85).aspx
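As a first concrete experiment, the slide-change case from your description could be prototyped with simple frame differencing. This is only a sketch of the core idea (OpenCV's cv2.absdiff does the same comparison on real video frames); the frame sizes, the threshold value, and the function name are my own placeholders, not anything from the project:

```python
import numpy as np

def slide_changed(prev_frame, curr_frame, threshold=10.0):
    """Return True when the mean absolute pixel difference between two
    grayscale frames exceeds `threshold` -- a crude signal that the
    graphics on the slides have changed."""
    diff = np.abs(prev_frame.astype(np.int16) - curr_frame.astype(np.int16))
    return float(diff.mean()) > threshold

# Two synthetic 480x640 grayscale "slides": a blank one, and one where
# a block of new graphics has appeared.
blank = np.zeros((480, 640), dtype=np.uint8)
with_text = blank.copy()
with_text[100:200, 50:590] = 255  # simulate new content on the slide

print(slide_changed(blank, blank))      # False: nothing changed
print(slide_changed(blank, with_text))  # True: switch to the VGA feed
```

A real detector would need smoothing over several frames and a tuned threshold to ignore compression noise, but this is roughly the signal the switching logic would key on.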


> The ideal outcome is a single video, and not a mobile application.
> Thus the video would be compatible across multiple platforms and mobile
> applications.


Okay, sorry, I had a different notion of the process. Now I
understand it to some extent, and how it is linked to mobile
delivery.

What should my next step be? Is there anything specific I should try
my hand at?

Please let me know.

Thanks.

-- 
Regards,
Chitesh T.
_______________________________________________
Matterhorn mailing list
[email protected]
http://lists.opencastproject.org/mailman/listinfo/matterhorn


To unsubscribe please email
[email protected]
_______________________________________________