On 5/6/11 10:54 AM, Nathan Kidd wrote:
> In terms of the big picture, I'm not familiar with TurboVNC, but for 
> this specific detail I would think it would be simpler and give better 
> results to implement on the server side. You still can measure the 
> socket bandwidth, but you don't have any issue with having to send or 
> wait for the "adjust rate" message coming from the desktop client. 
> Depending on your latency it's easy to get into ping/pong type bandwidth 
> shaping.

Implementing it on the server would give you more information about
what the application is doing, such as whether it is making big
PutImage or CopyArea requests, etc., so you could better guess what a
"frame" actually is.  However, there are big issues there as well.
There would be no way to communicate a target level of performance to
the server without either making it a per-session value (which would
mean that every client was subject to that value -- not what we want)
or extending the RFB protocol.

The other problem with the server is the deferred update timer, which is
the somewhat esoteric way in which VNC implements the equivalent of
frame spoiling without really implementing frame spoiling (because if it
really wanted to spoil frames the "right" way, it would have to set up a
separate transmission thread for every connected viewer.)
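To illustrate the idea (this is just a minimal sketch, not VNC's or
TurboVNC's actual code, and the class and method names are made up):
damage rectangles are coalesced while the deferred update timer runs,
and a single merged update goes out when it expires, which is how you
get the *effect* of frame spoiling without a per-viewer sender thread:

```python
import time

class DeferredUpdater:
    """Hypothetical sketch of a deferred update timer: accumulate
    damage into one bounding box and emit a single merged update
    when the timer fires, spoiling the intermediate frames."""

    def __init__(self, defer_ms=40):
        self.defer_s = defer_ms / 1000.0
        self.pending = None    # bounding box of accumulated damage
        self.deadline = None   # when the coalesced update is due

    def add_damage(self, x1, y1, x2, y2):
        # Merge the new rectangle into the pending bounding box;
        # the timer starts on the first damage after an update.
        if self.pending is None:
            self.pending = (x1, y1, x2, y2)
            self.deadline = time.monotonic() + self.defer_s
        else:
            px1, py1, px2, py2 = self.pending
            self.pending = (min(px1, x1), min(py1, y1),
                            max(px2, x2), max(py2, y2))

    def poll(self, now=None):
        # Return the merged rectangle once the timer has expired,
        # otherwise None; many small updates become one framebuffer
        # update per deferral interval.
        now = time.monotonic() if now is None else now
        if self.pending is not None and now >= self.deadline:
            rect = self.pending
            self.pending = self.deadline = None
            return rect
        return None
```

The point of the single bounding box is that the server never queues
per-frame work for each viewer; whatever damage arrives during the
deferral window collapses into one update, regardless of how fast the
application draws.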

_______________________________________________
VirtualGL-Devel mailing list
VirtualGL-Devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/virtualgl-devel
