Hi, OpenSG users. I have built a cluster rendering system with six rendering servers and one client. The client receives data, such as positions, over the network via UDP and maps that data onto an object in the scene. This worked well when I tested with 3 machines, but when I tested with 6 machines (a 3x2 display wall) there was a delay of up to 30 seconds.

The network data is received in glut's idle function, and the idle function triggers the display function. So, in my opinion, there should be data loss rather than delay. Is there some kind of buffering mechanism in the OpenSG cluster? After several tests, I found that 2 of the 6 machines do not have good rendering performance compared with the others: they render the OpenSG scene more slowly than the other 4. But I still cannot understand why the cluster shows delay instead of loss.



_______________________________________________
Opensg-users mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/opensg-users
