Hi Dirk.

Dirk Reiners wrote:

Under my desk, there's a bloody fast XW9300 dual-Opteron (single core, 2.4 GHz) HP workstation with two Quadro 3450 cards, connected to two 1280x1024 displays and two 1024x768 video projectors. As the OpenGL window size is limited to 4096 pixels (I don't know whether this is NVIDIA- or OpenGL-specific), I have to use the multi-display window on the localhost.
It is OpenGL, but I'm not sure why that's a problem. You could put them
on top of each other to get down to 2560x1790 or 3304x2048 or so. But
you have two cards, each of which has to be run by a separate X server,
and that is the main problem.

Yes and no. With Xinerama and MultiView-SLI, it would be possible to
span one window over all screens. This works, but everything right of pixel 4096 stays black.

(...)

You'll probably run into OS buffering and waiting. The problem with
real-time apps is that a couple of milliseconds lost can mean dropped
frames. With a single process running, even the Linux kernel is good
enough to stay below that. But once multiple processes are running, this
becomes pretty hard. On the old SGI boxes you could isolate and dedicate
processors to specific processes; I'm not sure whether Linux allows that.

Never heard of something like that. Anybody else?

Nevertheless, even if I can get it to work on the loopback device, I fear that if I want to use a similar scene in a real cluster, even Gigabit Ethernet won't be able to handle it.

Is that the target configuration? If it is, we don't have to worry about
the single-machine problem.

Well, the target is to keep it open for as many configurations as
possible. So, again, yes and no. It should work in both cases.

Is there a way to reduce the amount of data sent over the network?

Fewer changes. ;)

Ok, maybe 2 out of 25 frames per second are really enough. ;)

Is it possible to compress the changelist?

The changelist itself is not big, the data sent over is the problem,
especially if you run videos.

Err, ok, sorry for that. Would it be possible to compress the data? (Although it would be a bit dumb to decompress an MPEG stream only to compress the pictures again to send them over the network...)

Or to use local copies of a video?

You could, by changing the servers to decompress the video locally into
the used texture. You would need to identify it, and synchronisation
will be interesting.
This seems to be complicated but easier (for me) than writing a video
chunk. But it raises some new questions:

* How can I access the scenegraph on the server side? For the client, I
set the RootNode for every Viewport with setRoot(NodePtr xy). Is
there something like getRoot on the other side?
* How can I identify a specific texture or texture chunk? There
is an AttachmentsField - can I use getName/setName here?

I had similar problems with Rasmus' Animation Library, where a lot of vertices were transformed per frame and every changed vertex was sent over the network instead of the corresponding fraction value in the time sensor. How can I solve this?

The OpenSG clustering approach is based on the idea that all changes to
the scenegraph should be automatically shared to make it easy for apps.
If you change a lot of data (e.g. morphed vertices or video frames) that
can become a problem. The best way to fix this is to move these
expensive operations inside the scenegraph, e.g. transmit the time
change and do the actual morphing in the rendering traversal (or an
update traversal, once we have one).
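A minimal, self-contained sketch of that idea (the names are illustrative, not OpenSG API): instead of replicating every morphed vertex over the network, only the small animation fraction `t` is transmitted, and each render server recomputes the morph locally from key poses it already holds.

```cpp
#include <cstddef>
#include <vector>

// Illustrative sketch, not OpenSG code: the client sends only 't',
// the server recomputes the expensive per-vertex data locally.
struct Vec3 { float x, y, z; };

// Linear morph between two key poses, evaluated on the server side.
std::vector<Vec3> morph(const std::vector<Vec3>& a,
                        const std::vector<Vec3>& b, float t)
{
    std::vector<Vec3> out(a.size());
    for (std::size_t i = 0; i < a.size(); ++i) {
        out[i].x = a[i].x + t * (b[i].x - a[i].x);
        out[i].y = a[i].y + t * (b[i].y - a[i].y);
        out[i].z = a[i].z + t * (b[i].z - a[i].z);
    }
    return out;
}
```

With this split, the changelist per frame shrinks from one entry per vertex to a single float.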

I hope that someday I will understand how to implement something like
that in OpenSG. At this very moment, I don't even try.

Same thing for the video: the best way I can think of right now would be
to create a VideoTextureChunk or something similar that decodes the
video to the desired frame/time when it is activated. That way only the
time change needs to be transmitted.
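A hypothetical sketch of what such a chunk could look like (plain C++, not an existing OpenSG class; the decode/upload calls are stubbed out as comments): the only replicated state is the current video time, and each server maps that to a frame index locally.

```cpp
// Hypothetical VideoTextureChunk sketch -- only the time value travels
// over the network; decoding happens locally on each render server.
struct VideoTextureChunk {
    float fps;          // frames per second of the video
    int   frameCount;   // total number of frames
    int   currentFrame; // last frame decoded locally (-1 = none yet)

    // Called on activation with the replicated time value (seconds).
    void activate(float time) {
        int frame = static_cast<int>(time * fps) % frameCount;
        if (frame != currentFrame) {
            currentFrame = frame;
            // decodeFrame(frame);    -- local, not networked
            // uploadToTexture();     -- local, not networked
        }
    }
};
```

The `currentFrame` check also avoids redundant decodes when the time advances by less than one frame.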

Ok, let me see if I get this right: you mean that I should decode the
whole video when I enable the chunk and just select the frame afterwards?
This seems to be impractical for all but very short videos (or machines
with lots of RAM), but for a start this would be an option (my videos
are quite short and there's plenty of memory left).
Assuming I decode the whole video at startup into an OSG::Image with
multiple frames and select only the needed frame (I don't know whether
this is possible; I only see methods to adjust the frame change time,
but this should be easier to add than a VideoChunk), will the data
be sent only once? In that case I could go with a simple TextureChunk
for now...
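To make the trade-off concrete, here is a small generic sketch (again hypothetical names, not OSG::Image itself) of a preloaded multi-frame image: all pixel data is decoded once at startup, and per frame only the cheap frame index changes.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical sketch: whole video decoded once into one buffer;
// afterwards frame selection is just pointer arithmetic, so only the
// small frame index would need to change per frame.
struct MultiFrameImage {
    int width, height, frames;
    std::vector<unsigned char> pixels; // frames * width * height * 4 (RGBA)

    // First byte of frame 'f' -- cheap frame selection, no copying.
    const unsigned char* frameData(int f) const {
        return pixels.data() +
               static_cast<std::size_t>(f) * width * height * 4;
    }

    // Total memory needed -- useful to judge the "lots of RAM" caveat,
    // e.g. a 10 s 720x576 RGBA video at 25 fps is already ~400 MB.
    std::size_t bytes() const {
        return static_cast<std::size_t>(frames) * width * height * 4;
    }
};
```

The memory estimate makes clear why this approach only suits short clips.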

Thanks.

Yours,
Dominik




_______________________________________________
Opensg-users mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/opensg-users
