On 7/13/11 3:49 AM, Pierre Ossman wrote:
>> (2) Automatic Lossless Refresh (ALR) -- This feature is similar to LR
>
> I'd like to believe this could be done in a way where it could always
> be turned on, or at least on by default. I didn't get the impression
> that any of the problems you listed would prevent that?
I disagree. For starters, the ALR timeout really needs to be
user-configurable, because different applications and network
environments may call for different values. Also, as someone who
frequently uses the "low quality" setting, I would personally find it
annoying to have ALR always on. There are certain times when I want
it, other times when regular LR is preferable, and still others when
neither is necessary.

>> -- This is currently implemented as a separate thread, which means
>> introducing a dependency on libpthread in Xvnc. Not sure how people
>> feel about that.
>
> Not good. As you've mentioned, threading is rather error prone and
> getting all corner cases covered is generally an uphill battle.

We have TurboVNC to use as a reference, but yes, in general you're
right. There were a lot of bugs in the initial implementation that
had to be addressed in the field.

> Why does this require a thread? Don't we already have timers for the
> deferred update thing? Couldn't this be handled in a similar manner?
> If not, wouldn't it be feasible to add a sufficient timer mechanism?

A separate timer might be possible. ALR depends on global state, not
per-client state, so it couldn't really use the deferred update timer
(which is per-client).

>> -- As implemented in TurboVNC, ALR defaults to only monitoring the
>> regions that are drawn using X[Shm]PutImage() (but this behavior can be
>
> The timer might have to be some combination of the oldest event and
> the newest. I don't think this is insurmountable and we can
> experiment with this going forward, provided that the triggering
> logic is well contained.
>
> It might also be worth considering updating parts of the screen, and
> not all of it. After all, we're sending a better quality version of
> what's already on the screen, so there should be no risk of tearing
> caused by partial updates. This would also mitigate issues with the
> lossless update introducing latency because it is much larger than
> the lossy one.

It already does this. Only the tiles that were previously sent with
lossy compression are re-sent during an ALR.

>> We've been discussing in the TurboVNC community the feasibility of
>> making ALR a client-side feature.
>
> I'd rather not go down this path. IMO we should have as much of the
> encoding logic in the server as possible. It generally has the best
> view of things, and often more CPU to spend making the best decision.

If we can figure out how to implement it in the server without
threading, then I would tend to agree.
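To make the timer-based approach concrete, here is a rough sketch of
how a global ALR deadline might hang off the server's existing
select() loop rather than a separate thread. This is purely an
illustration; the names (noteLossyUpdate(), sendLosslessRefresh(),
etc.) are hypothetical, not actual TurboVNC or TigerVNC code:

#include <stdio.h>
#include <sys/select.h>
#include <sys/time.h>

static double alrTimeout = 0.5;       /* idle seconds before an ALR fires */
static double lastLossyUpdate = -1.0; /* time of last lossy send; < 0 = none pending */

static double now(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return tv.tv_sec + tv.tv_usec / 1000000.0;
}

/* The encoder would call this whenever it sends a rectangle with lossy JPEG. */
static void noteLossyUpdate(void)
{
    lastLossyUpdate = now();
}

/* Re-send the accumulated lossy region with lossless compression (stubbed out). */
static void sendLosslessRefresh(void)
{
    printf("ALR: re-sending lossy tiles with lossless compression\n");
    lastLossyUpdate = -1.0;  /* the screen is now clean */
}

int main(void)
{
    noteLossyUpdate();  /* pretend a lossy update just went out */

    for (;;) {
        /* A real server would fold the ALR deadline into the timeout it
           already passes to select() for its client sockets; here we
           just poll at a 100 ms granularity. */
        struct timeval tv = { 0, 100000 };
        select(0, NULL, NULL, NULL, &tv);

        if (lastLossyUpdate >= 0.0 && now() - lastLossyUpdate >= alrTimeout) {
            sendLosslessRefresh();
            break;  /* demo: quit after one refresh */
        }
    }
    return 0;
}

The point being that the ALR deadline is global to the framebuffer, so
it would have to sit alongside, not inside, the per-client deferred
update timers.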
>> CU takes advantage of the RFB Continuous
>> Updates extension (this is a registered extension, not something we
>> made up ourselves.)
>
> Interesting. Do you have some reference documentation for this?
> There are a lot of annoying corner cases with regard to synchronising
> with other RFB messages. I'm interested in seeing how that is dealt
> with. There are also the flow/congestion control issues.

Actually, no, and I searched long and hard for it. The only mention of
it I could find is in the RFB protocol documentation:

http://tigervnc.sourceforge.net/cgi-bin/rfbproto

The only place I've actually seen it implemented, however, is in Paul
Donohue's Java VNC Viewer code. It appears to have originally been
designed such that you could drag-select a region of the window in
which a video was playing, and the server would start sending
continuous updates for that region until you disabled the feature.

Based on the description in the RFB proto document, I had to construct
my own header entries for it:

#define rfbEnableContinuousUpdates 150
#define sig_rfbEnableContinuousUpdates "CUC_ENCU"

/*
 * EnableContinuousUpdates
 */

typedef struct _rfbEnableContinuousUpdatesMsg {
    CARD8 type;                 /* always rfbEnableContinuousUpdates */
    CARD8 enable;
    CARD16 x;
    CARD16 y;
    CARD16 w;
    CARD16 h;
} rfbEnableContinuousUpdatesMsg;

#define sz_rfbEnableContinuousUpdatesMsg 10

>> There are obvious problems with this whenever collaboration is
>> enabled. CU causes all clients to be "lock-stepped" to the same
>> frame rate (see general musings below.)
>
> Is this an inherent problem or something we can work around?

Inherent. I'll send you a more comprehensive report offline. It has to
do with how the deferred update timer interacts with the framebuffer
update requests coming in from the clients. Most VNC implementations
use a rather high value for the deferred update timer (40 ms), and
vncviewer in those implementations waits until after the current FBU
has been drawn before requesting a new one. That does not produce
optimal performance, however, so in both TurboVNC and TigerVNC, we set
the deferred update timer to a very low value by default (1 ms) and
send a new FBU request from the client as soon as the previous update
is received. The report details why this causes the lock-step problem
in some cases.

It's not that the deferred update timer is necessarily a bad idea. I
think it may just need some re-thinking.

>> There is also a lot of confusion among users as
>> to when to use it and when not to.
>
> I think this is another feature that we should be able to make good
> enough to always have on.

If it could be implemented in such a way that every client is served
at its own frame rate, then I wholeheartedly agree. Currently, if CU
is enabled, every client locks into the same frame rate, per above.

>> (4) Multi-threaded compression/decompression
>
> Given the way processors are evolving, this is probably a good idea.
> As before though, threading is a source of a lot of problems. It is
> probably more contained in this case, though, with less risk.
>
> Have you looked at using OpenMP? It looks like it would be a rather
> good fit for something like this, and most major compilers support
> it. I think it would be a great help in avoiding thread issues.

I'm familiar with OpenMP, but I don't see it as a good solution for
this. When you say "most compilers", of course the ones I'm currently
using to build TigerVNC aren't among them, and we really need more
control over the threads than OpenMP provides. Since the
implementation in TurboVNC is quite stable at this point, I can just
borrow a lot of the logic from there.

>> ... a lot of esoteric performance behavior could be eliminated
>> simply by implementing a proper frame spoiling mechanism. This would
>> eliminate the need for the CU feature as well.
>
> How so? Without CU, we would still be limited to one update per round
> trip.

What I meant was that the server would be pushing out images by
default, so having a configurable CU option would be unnecessary
(because it would, effectively, always be on.) However, there are
deeper issues with this. We can't just immediately send every update,
because what if the update was only 1 pixel in size? There still has
to be some sort of coalescing mechanism. The basic issue right now, at
least as I see it, is that the coalescing mechanism we have (deferred
updates) is somewhat broken, because it's based on outdated
assumptions. I'm not really sure how to fix it, though.
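Just so we're talking about the same thing, here is a toy sketch of
the kind of coalescing I mean. The names and thresholds are
hypothetical (this is not our actual deferred update code): tiny
changes are merged and held for a short deferral window, while large
changes are flushed immediately:

#include <stdio.h>

typedef struct { int x1, y1, x2, y2; } Rect;

static Rect pending = { 0, 0, 0, 0 };
static int havePending = 0;
static int deferredMs = 10;       /* deferral window (1 ms in TurboVNC/
                                     TigerVNC, 40 ms elsewhere; 10 here
                                     for the demo) */
static int areaThreshold = 65536; /* flush early past this many dirty pixels */
static int elapsedMs = 0;

static int area(Rect r) { return (r.x2 - r.x1) * (r.y2 - r.y1); }

static void flush(void)
{
    printf("FBU: (%d,%d)-(%d,%d), %d px after %d ms\n",
           pending.x1, pending.y1, pending.x2, pending.y2,
           area(pending), elapsedMs);
    havePending = 0;
    elapsedMs = 0;
}

/* Called for every change to the framebuffer */
static void addDirtyRect(Rect r)
{
    if (!havePending) {
        pending = r;
        havePending = 1;
    } else {  /* merge into a bounding box; a real server would keep a
                 region list */
        if (r.x1 < pending.x1) pending.x1 = r.x1;
        if (r.y1 < pending.y1) pending.y1 = r.y1;
        if (r.x2 > pending.x2) pending.x2 = r.x2;
        if (r.y2 > pending.y2) pending.y2 = r.y2;
    }
    if (area(pending) >= areaThreshold)
        flush();  /* big change: don't wait for the timer */
}

/* Called from the event loop once per millisecond tick */
static void tick(void)
{
    if (havePending && ++elapsedMs >= deferredMs)
        flush();  /* small changes: coalesce for up to deferredMs */
}

int main(void)
{
    Rect cursor = { 100, 100, 101, 101 };  /* a 1-pixel change */
    addDirtyRect(cursor);
    for (int i = 0; i < 20; i++)
        tick();  /* the 1-pixel update is deferred, then flushed at 10 ms */

    Rect video = { 0, 0, 640, 480 };       /* a large change */
    addDirtyRect(video);                   /* flushed immediately */
    return 0;
}

The deferred update timer we have now is more or less the tick() half
of this; the part that needs re-thinking is how the flush policy
should behave when multiple clients consume updates at different
rates.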
>> A proper frame spoiling mechanism would have a separate image queue
>> and dispatch thread for each connected client, so that all of the
>> clients could be driven at their own frame rates without requiring a
>> client-driven protocol or a deferred update timer.
>
> Did you check that the RealVNC 4 code didn't really have this? I seem
> to recall that it creates a separate object for each client, and it
> is within that object that it stores client state (like what needs to
> be updated). It is not threaded (although I seem to recall some
> support for that as well), but it should already serve each client at
> the rate it sees update requests?

I haven't looked at the RealVNC 4 code recently, but wouldn't that
code be in TigerVNC as well, if it existed?

>> This is not only
>> disruptive but possibly even violates the fundamental nature of the
>> RFB protocol,
>
> What makes you say that? The protocol doesn't mention anything about
> update rates, only that each update should represent a full screen
> change. So having one client updating at 30 Hz and one at 10 Hz
> should not be an issue. It's just that the second client will see the
> aggregate of three changes (from the first's point of view).

Isn't the fundamental nature of the RFB protocol client-driven? That
is, isn't the server required not to send an update until the client
requests one? Maybe I'm misunderstanding.

We can't ever be 100% server-driven, because the client still has to
send FBURs whenever a portion of its window is obscured, etc. However,
in order to get decent WAN performance, we are going to have to become
mostly server-driven, at least under certain circumstances.
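To make the frame spoiling idea above concrete, here is a toy sketch
of per-client queues. All of the names are hypothetical, and a real
implementation would track regions (and probably a dispatch thread per
client) rather than whole-frame numbers, but it shows how a slow
client gets spoiled down to the newest frame without holding back a
fast one:

#include <stdio.h>

#define MAX_CLIENTS 2

typedef struct {
    int pendingFrame;   /* frame number waiting to be sent; -1 = none */
    const char *name;
} Client;

static Client clients[MAX_CLIENTS] = {
    { -1, "fast client" },
    { -1, "slow client" },
};

/* Called whenever the framebuffer changes */
static void queueFrame(int frame)
{
    for (int i = 0; i < MAX_CLIENTS; i++) {
        if (clients[i].pendingFrame != -1)
            printf("%s: frame %d spoiled by frame %d\n",
                   clients[i].name, clients[i].pendingFrame, frame);
        clients[i].pendingFrame = frame;  /* keep only the newest frame */
    }
}

/* Called when a client's connection is ready to accept more data */
static void serviceClient(Client *c)
{
    if (c->pendingFrame == -1)
        return;
    printf("%s: sending frame %d\n", c->name, c->pendingFrame);
    c->pendingFrame = -1;
}

int main(void)
{
    for (int frame = 1; frame <= 3; frame++) {
        queueFrame(frame);
        serviceClient(&clients[0]);  /* the fast client keeps up */
        /* the slow client is never ready, so frames 1 and 2 are spoiled */
    }
    serviceClient(&clients[1]);      /* it finally sends only frame 3 */
    return 0;
}

Each client drains its own queue at its own pace, so no deferred
update timer or client-driven round trip is needed to keep the frame
rates independent.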