> >  Yes basically, though a possible workaround would be to use some kind of
> >  checksumming procedure that would try to autodetect the changed regions.
> >
> >  Not exactly an elegant solution, but a possible workaround. (Remote control
> >  software for Windoze uses this technique.)
> >
> >  The idea would be to make checksums over the rows and columns when blitting
> >  and compare the current results with the last results. This will result in
> >  a rectangle that needs to be updated, leaving alone all lines that have not
> >  changed and only updating those columns that have changed.
>
>Ahh.  I can see how that would be good when transfer time is a lot
>bigger than read time (e.g. over a network).  It won't work when
>transfer time is comparable to read time (e.g. tile blitting into
>local framebuffers), right ?
>
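For reference, the row/column checksum idea quoted above could look roughly
like the sketch below. This is only a minimal illustration in C; the names,
the framebuffer layout (32-bit row-major pixels) and the plain additive
checksum are assumptions for illustration, not taken from any existing X
server or remote-control code.

    /*
     * Minimal sketch of the quoted row/column checksum idea.
     * All names and the framebuffer layout are hypothetical.
     */
    #include <stdint.h>
    #include <stdlib.h>

    typedef struct {
        int x, y, w, h;          /* dirty rectangle; w == 0 means "nothing changed" */
    } DirtyRect;

    typedef struct {
        int width, height;
        uint32_t *row_sum;       /* one checksum per row    */
        uint32_t *col_sum;       /* one checksum per column */
    } ChecksumState;

    static ChecksumState *checksum_state_new(int width, int height)
    {
        ChecksumState *s = malloc(sizeof *s);
        s->width = width;
        s->height = height;
        s->row_sum = calloc(height, sizeof *s->row_sum);
        s->col_sum = calloc(width, sizeof *s->col_sum);
        return s;
    }

    /*
     * Recompute the row and column checksums for the new frame, compare them
     * with the stored ones, and return the bounding rectangle of everything
     * that changed.  Rows and columns whose checksum is unchanged are left
     * alone, exactly as described above.
     */
    static DirtyRect find_dirty_rect(ChecksumState *s, const uint32_t *pixels)
    {
        DirtyRect r = { 0, 0, 0, 0 };
        int top = s->height, bottom = -1, left = s->width, right = -1;

        /* A plain sum is enough for a sketch; a real implementation
         * might prefer CRC32 or some other hash. */
        for (int y = 0; y < s->height; y++) {
            uint32_t sum = 0;
            for (int x = 0; x < s->width; x++)
                sum += pixels[y * s->width + x];
            if (sum != s->row_sum[y]) {
                s->row_sum[y] = sum;
                if (y < top)    top = y;
                if (y > bottom) bottom = y;
            }
        }
        for (int x = 0; x < s->width; x++) {
            uint32_t sum = 0;
            for (int y = 0; y < s->height; y++)
                sum += pixels[y * s->width + x];
            if (sum != s->col_sum[x]) {
                s->col_sum[x] = sum;
                if (x < left)  left = x;
                if (x > right) right = x;
            }
        }

        if (bottom >= 0 && right >= 0) {
            r.x = left;
            r.y = top;
            r.w = right - left + 1;
            r.h = bottom - top + 1;
        }
        return r;
    }

Recomputing the checksums means reading every pixel on every blit, which is
why, as noted in the quoted exchange, this only pays off when transfer time
clearly dominates read time.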
Here are some fresh test results that might help with your theories above:
8 physical screens, 24-bit, 1024x768 each.
PIII 500 MHz

No load except for KDE and a few kvts:
--------------------------------------
X takes 84% of the CPU.
No swapping.

1 graphically active remote app on 4 of the 8 screens:
-------------------------------------------------------
X takes 99.4% of the CPU.
Network activity measured with gkrellm is at most 400k per second,
which is about 1/20th of the 100 Mb network capacity.
No swapping.
The digital clock in the application's image updates every 21 seconds, instead of
every 1-2 seconds as it does when the app runs locally.

Kam-pei,

Adam Huuva
