Hello Stefan,

Thanks for your explanations! I have a few questions :)

On 01/10/10 16:53, Stefan Eilemann wrote:
>> The idea is to let every node decode the video file, and
>> synchronize video frame output using FrameData. I think it would be
>> impossible to send high resolution video data across the network
>> fast enough in many situations, especially for stereoscopic
>> material where you might have 2x1080p at 30 frames per second.
> 
> It would definitely be challenging, but one could multicast the
> encoded video stream with buffering to the render nodes.
> 
> This is probably overkill if you only want to play movie files on a
> (pre-)distributed file system.

I think I'll start with a version that just plays video files. That can
probably be extended later if needed.

>> Audio would be played only on the application node because the
>> video output usually must be synchronized to audio time.
> 
> Be careful to synchronize the audio with the video wall swap barrier,
> otherwise your audio might be one frame ahead of the video. Although
> I'm not sure if this is noticeable.

It would probably not be noticeable if this were the only error source,
but the synchronization is often already imprecise for other reasons.
Thanks for the hint!

>> I'm not sure yet what the best way would be to achieve that. My
>> first idea would be to simply configure the geometry of several
>> walls as a subset of e.g. [-1,1] x [-1,1]. But maybe it would be
>> better to use the canvas/view stuff described here 
>> <http://www.equalizergraphics.com/documents/design/view.html>?
> 
> Depends what you want to achieve. If you want 3D billboards in a VR
> environment, I would use a sensible real-world size for them, i.e.,
> one which is close or equal to the canvas. If you want the video to
> be in 2D, I would use the normalized viewport (wrt canvas) in
> Channel::frameViewFinish to render the video. The canvas/layout API
> provides you with 2D spatial information for your 3D environment.

I think the 2D approach makes more sense, so I'll start with that one.
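
Concretely, I would try something along these lines in my channel class
(sketch only -- the exact frameViewFinish signature differs between
Equalizer versions, and drawVideoQuad() is a hypothetical helper that
textures the current video frame onto a quad):

```cpp
void VideoChannel::frameViewFinish( const uint32_t frameID )
{
    applyBuffer();
    applyViewport();

    // 2D orthographic projection over the normalized viewport
    glMatrixMode( GL_PROJECTION );
    glLoadIdentity();
    glOrtho( 0., 1., 0., 1., -1., 1. );
    glMatrixMode( GL_MODELVIEW );
    glLoadIdentity();

    drawVideoQuad(); // hypothetical: draw current frame as textured quad

    eq::Channel::frameViewFinish( frameID );
}
```

Does that look about right?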

>> Frankly, I have a hard time understanding how all these concepts 
>> (observer, canvas, segment, view, window, channel, compound,
>> layout) relate to each other and which subset to use for a given
>> task. Can you give usage examples?
> 
> From the Programming Guide Section 3.6-3.9:
> 
> A canvas represents one logical projection surface, e.g., a
> PowerWall, a curved screen or an immersive installation. One
> configuration might drive multiple canvases, for example an
> immersive installation and an operator station.
> 
> A segment represents one output channel of the canvas, e.g., a
> projector or an LCD. A segment has an output channel, which
> references the channel to which the display device is connected.

So instead of defining multiple wall geometries, one would now define
one wall geometry that represents the canvas and then subdivide this
into multiple segments? And the reason for this would be that the output
channel then knows which part of the complete canvas it is rendering to?
Or is there another reason to use canvas/segment definitions?
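
To check my understanding, such a setup would look roughly like this in
the config file (a sketch from memory -- the channel names are made up
and the exact syntax may be off):

```
canvas
{
    layout "simple"
    wall
    {
        bottom_left  [ -1 -.5 -1 ]
        bottom_right [  1 -.5 -1 ]
        top_left     [ -1  .5 -1 ]
    }
    # left half of the wall, driven by the first output channel
    segment { channel "channel-left"  viewport [ 0 0 .5 1 ] }
    # right half, driven by the second output channel
    segment { channel "channel-right" viewport [ .5 0 .5 1 ] }
}
```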

> A layout is the grouping of logical views. It is used by one or more
> canvases. For all given layout/canvas combinations, Equalizer creates
> destination channels when the configuration file is loaded. These
> destination channels can be referenced by compounds to configure
> scalable rendering.
> 
> A view is a logical view of the application data, in the sense used
> by the Model-View-Controller pattern. It can be a scene, viewing
> mode, viewing position, or any other representation of the
> application's data.

OK, I'm lost here. From the examples, I get the vague idea that
different layouts can group the same resources into different
configurations, and different views can apply to different parts of a
layout. Is that right? What would this be used for? My impression is
that in most cases, you only have one fixed setup and thus do not need
any layouts or views.
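
Just to make my reading concrete: I imagine two layouts for the same
canvas would be declared something like this (again a sketch from
memory, names invented), and the application could then switch the
active layout at runtime to rearrange the same canvas:

```
layout
{
    name "single"
    view { }                        # one view covering the whole canvas
}
layout
{
    name "side-by-side"
    view { viewport [  0 0 .5 1 ] } # e.g. the 3D scene on the left
    view { viewport [ .5 0 .5 1 ] } # e.g. the video on the right
}
```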

> A view might have an observer, in which case its frustum is tracked
> by this observer. An observer represents an actor looking at multiple
> views. It has a head matrix, defining its position and orientation
> within the world, and an eye separation.

There is always implicitly a default view with a default observer,
right? In our VR environment, we never define views or observers, but
tracking works anyway. So is defining observers only required for very
special applications?
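
For reference, I would have expected a tracked setup to look roughly
like this (sketch only -- I am not sure whether a view references its
observer by index or by name):

```
observer { }            # head-tracked viewer, default eye separation
layout
{
    name "tracked"
    view { observer 0 } # this view's frustum follows the observer
}
```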

Martin
-- 
Computer Graphics and Multimedia Systems Group
University of Siegen, Germany
http://www.cg.informatik.uni-siegen.de/

