Hi!

On 01/10/10 15:19, Stefan Eilemann wrote:
>> Could Equalizer help play a video wall in sync, split over N
>> computers?
> 
> Definitely.
> 
>> Thanks for any comments or ideas, and sorry if I'm missing the thread
>> ... Maybe Equalizer is more intended to be used in 3D OpenGL graphics
>> systems and not for video player stuff ...
> 
> This is true, but it doesn't mean that you can't make use of it for
> video. 'Collapsing' 3D into 2D is trivial, and OpenGL is often used for
> 2D stuff nowadays (e.g. the OS X desktop compositor).
> 
> The basic design question for an 'eqVideo', for me, is: what is the
> video source - the application process or a file? Put otherwise, is the
> application multicasting new video frames, or are the render clients
> accessing the video directly? The first approach is more flexible when
> it comes to the video source, e.g. a webcam, but it needs some care to
> get proper performance, since the video is sent over the network.
> 
> I don't see anything other video-wall solutions can do that Equalizer
> can't. It might need some more work, since the application does not yet
> exist, but the 3D aspect also gives you more flexibility. One could
> place video 'billboards' in 3D space and easily create a VR experience.

Funny that this comes up now. I'm currently writing a video player with
OpenGL output. It supports stereoscopic (3D) videos; see
<http://www.nongnu.org/bino>. I'm preparing the first release.

The player does not support Equalizer yet because I first wanted to get
the decoding and video/audio synchronization right. My plan is to add
Equalizer support once this works reliably (which I think it does now)
and I find the time.

The idea is to let every node decode the video file, and to synchronize
video frame output using FrameData. I think it would be impossible to
send high-resolution video data across the network fast enough in many
situations, especially for stereoscopic material, where you might have
2x1080p at 30 frames per second. Audio would be played only on the
application node, because the video output usually must be synchronized
to audio time.
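To illustrate the sync scheme I have in mind (names and details below
are mine, not Bino or Equalizer API): the application node would
distribute the current audio clock time, e.g. via a FrameData-like
distributed object, and each render node would map that time to the
frame it should display, repeating or dropping frames as needed:

```cpp
#include <cassert>
#include <cstdint>
#include <cmath>

// Frame that should be on screen at audio time 'audioTime' (seconds),
// for a video running at 'fps' frames per second.
int64_t targetFrame(double audioTime, double fps)
{
    return static_cast<int64_t>(std::floor(audioTime * fps));
}

// What a render node must do, given the frame it currently shows.
enum Action { REPEAT_FRAME, SHOW_NEXT, SKIP_AHEAD };

Action decide(int64_t current, int64_t target)
{
    if (target <= current)
        return REPEAT_FRAME;  // clock hasn't reached the next frame yet
    if (target == current + 1)
        return SHOW_NEXT;     // normal case: advance by one frame
    return SKIP_AHEAD;        // node fell behind: drop decoded frames
}
```

Each node only ever needs the scalar audio time from the network, so
this stays cheap even for high-resolution material.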

I was planning on defining a video plane on which to display the video.
Each Equalizer Channel would render a subset of that plane. For
stereoscopic video, it would render the left or right view depending on
Channel::getEye().
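A minimal sketch of that mapping, assuming side-by-side stereo frames
(Rect, planeToTex and the helper names are hypothetical; in real code
the plane region would come from the wall configuration and the eye
from Channel::getEye()):

```cpp
#include <cassert>

enum Eye { EYE_LEFT, EYE_RIGHT };

struct Rect { double x, y, w, h; };   // region on the [-1,1]^2 plane

// Map a plane coordinate in [-1,1] to a texture coordinate in [0,1].
double planeToTex(double p) { return (p + 1.0) * 0.5; }

// Texture sub-rectangle of the video frame for a channel's plane region.
Rect texRegion(const Rect& plane)
{
    Rect t;
    t.x = planeToTex(plane.x);
    t.y = planeToTex(plane.y);
    t.w = plane.w * 0.5;              // plane spans 2 units, texture 1
    t.h = plane.h * 0.5;
    return t;
}

// For side-by-side stereo content, pick the half of the frame to sample.
Rect eyeRegion(const Rect& tex, Eye eye)
{
    Rect r = tex;
    r.w *= 0.5;                       // each eye occupies half the width
    r.x = r.x * 0.5 + (eye == EYE_RIGHT ? 0.5 : 0.0);
    return r;
}
```

For example, a channel covering the left half of the plane would sample
texture region [0,0.5] x [0,1], and the right-eye view would come from
the right half of that region of the stored frame.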

I'm not sure yet what the best way would be to achieve that. My first
idea would be to simply configure the geometry of several walls as a
subset of e.g. [-1,1] x [-1,1]. But maybe it would be better to use the
canvas/view stuff described here
<http://www.equalizergraphics.com/documents/design/view.html>?

Frankly, I have a hard time understanding how all these concepts
(observer, canvas, segment, view, window, channel, compound, layout)
relate to each other and which subset to use for a given task. Can you
give usage examples?

For example, in our VR environment, it is sufficient for most tasks to
define the geometry of the display walls and then have two channels per
wall: one for the left eye, one for the right eye.
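For that kind of passive-stereo setup, a compound-based config sketch
might look like the following (written from memory of the eqPly example
configs; the node/pipe/window/channel declarations are omitted, so
please check the exact syntax against the Equalizer documentation):

```
server
{
    config
    {
        # ... node/pipe/window/channel declarations omitted ...
        compound
        {
            compound
            {
                channel "wall1-left"
                eye     [ LEFT ]
                wall
                {
                    bottom_left  [ -1 -1 -1 ]
                    bottom_right [  1 -1 -1 ]
                    top_left     [ -1  1 -1 ]
                }
            }
            compound
            {
                channel "wall1-right"
                eye     [ RIGHT ]
                wall
                {
                    bottom_left  [ -1 -1 -1 ]
                    bottom_right [  1 -1 -1 ]
                    top_left     [ -1  1 -1 ]
                }
            }
        }
    }
}
```

Both channels share the same wall geometry; only the eye attribute
differs.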

Martin

_______________________________________________
eq-dev mailing list
[email protected]
http://www.equalizergraphics.com/cgi-bin/mailman/listinfo/eq-dev
http://www.equalizergraphics.com
