On Sat, Feb 12, 2011 at 03:19:48PM -0600, DRC wrote:
> You have to run glxspheres with vglrun. Otherwise, it will use the
> software OpenGL renderer in TigerVNC, and that will be the bottleneck,
> not TigerVNC's image pipeline.
>
> Also realize that the methodology is not just running GLXspheres and
> reading the output of that benchmark. You have to run
>
> vglrun -sp glxspheres -fs
>
> to ensure that frame spoiling is disabled in VirtualGL, so that VGL is
> drawing only the frames that get rendered. You then have to measure the
> frame rate from the client using TCBench, to ensure that you are reading
> only the frames that get delivered to the client. Using TCBench may not
> make any difference on a fast network, because generally TigerVNC won't
> drop frames in that scenario, but it definitely can drop frames in a
> low-bandwidth environment, so the frame rate reported by GLXspheres
> wouldn't be accurate in that case.
>
> Also, if you want to reproduce the exact scenario under which I am
> running the benchmarks, set your TigerVNC geometry to 1240x900 and use a
> 1280x1024 resolution on the client display.
I have captured the frame rates using tcbench:
* Xvnc arguments: -SecurityTypes TLSNone,None -geometry 1280x900 -depth 24
* glxspheres -fs
* all running on one computer
* vncviewer with None
Samples: 1493 Frames: 250 Time: 30.007244 s Frames/sec: 8.331322
Samples: 1493 Frames: 249 Time: 30.000328 s Frames/sec: 8.299909
* vncviewer with TLSNone
Samples: 1493 Frames: 208 Time: 30.003126 s Frames/sec: 6.932611
Samples: 1493 Frames: 209 Time: 30.014431 s Frames/sec: 6.963317
6.95 / 8.32 ≈ 0.84, i.e. about 16 % lower frame rate with TLS
As the limit is the software renderer, I wanted to try a simpler GL
application: I have modified glxgears to run full screen [unsigned int
winWidth = 1280, winHeight = 900;]; everything else is the same.
* vncviewer with TLSNone
Samples: 1491 Frames: 651 Time: 30.017669 s Frames/sec: 21.687227
Samples: 1492 Frames: 657 Time: 30.013056 s Frames/sec: 21.890473
* vncviewer with None
Samples: 1493 Frames: 707 Time: 30.014369 s Frames/sec: 23.555384
Samples: 1493 Frames: 714 Time: 30.014356 s Frames/sec: 23.788616
21.8 / 23.7 ≈ 0.92, i.e. about 8 % lower frame rate with TLS
vglrun (VirtualGL-2.2.tar.gz, 7ab7a3ff9c6e36879a1e37e2cacc7f18) does not
work on my Debian stable system:
Polygons in scene: 62464
[VGL] Shared memory segment ID for vglconfig: 950273
[VGL] XOpenDisplay (name=NULL [VGL] Opening local display :0
[VGL] XQueryExtension (dpy=0x01efd120(:0.0) name=XKEYBOARD *major_opcode=144
*first_event=97 *first_error=153 ) 0.020981 ms
dpy=0x01efd120(:0.0) ) 0.865936 ms
[VGL] glXChooseVisual (dpy=0x01efd120(:0.0) screen=0 attrib_list=[0x0004
0x0008=0x0008 0x0009=0x0008 0x000a=0x0008 0x000c=0x0001 0x0005 ] [VGL] WARNING:
VirtualGL attempted and failed to obtain a Pbuffer-enabled
[VGL] 24-bit visual on the 3D X server :0. If the application
[VGL] subsequently fails, then make sure that the 3D X server is configured
[VGL] for 24-bit color and has accelerated 3D drivers installed.
ERROR (559): Could not obtain RGB visual with requested properties
The X server has 24-bit support:
screen #0:
dimensions: 1920x1080 pixels (508x285 millimeters)
resolution: 96x96 dots per inch
depths (7): 24, 1, 4, 8, 15, 16, 32
root window id: 0x10c
depth of root window: 24 planes
number of colormaps: minimum 1, maximum 1
default colormap: 0x20
default number of colormap cells: 256
preallocated pixels: black 0, white 16777215
options: backing-store NO, save-unders NO
largest cursor: 64x64
current input event mask: 0xd2001d
KeyPressMask ButtonPressMask ButtonReleaseMask
EnterWindowMask StructureNotifyMask SubstructureRedirectMask
PropertyChangeMask ColormapChangeMask
number of visuals: 64
default visual id: 0x21
3D is also working:
name of display: :0.0
display: :0 screen: 0
direct rendering: Yes
server glx vendor string: SGI
server glx version string: 1.2
[...]
client glx vendor string: Mesa Project and SGI
client glx version string: 1.4
[...]
OpenGL vendor string: Advanced Micro Devices, Inc.
OpenGL renderer string: Mesa DRI R600 (RS780 9614) 20090101 TCL DRI2
OpenGL version string: 1.5 Mesa 7.7.1
And :0 is definitely my Xorg server.
So sorry, I was not able to test with your method.
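In case it helps with debugging: one way to check directly what VirtualGL
is complaining about would be to ask the 3D X server for a Pbuffer-capable
24-bit FBConfig, roughly like the untested sketch below (this is not
VirtualGL's actual code; it assumes the Mesa GL/GLX development headers and
hard-codes :0 as the 3D X server):

  /* pbuffer-probe.c - check whether the 3D X server offers a
   * Pbuffer-capable, double-buffered, 24-bit FBConfig, roughly what
   * VirtualGL asks for.  Diagnostic sketch only.
   * Build (assumption): cc -std=c99 pbuffer-probe.c -o pbuffer-probe -lX11 -lGL
   */
  #include <stdio.h>
  #include <X11/Xlib.h>
  #include <GL/glx.h>

  int main(void)
  {
      Display *dpy = XOpenDisplay(":0");            /* the 3D X server */
      if (!dpy) {
          fprintf(stderr, "Cannot open display :0\n");
          return 1;
      }

      int major = 0, minor = 0;
      glXQueryVersion(dpy, &major, &minor);
      printf("GLX version: %d.%d\n", major, minor); /* Pbuffers need GLX >= 1.3 */

      static const int attribs[] = {
          GLX_DRAWABLE_TYPE, GLX_PBUFFER_BIT,
          GLX_RENDER_TYPE,   GLX_RGBA_BIT,
          GLX_RED_SIZE, 8, GLX_GREEN_SIZE, 8, GLX_BLUE_SIZE, 8,
          GLX_DOUBLEBUFFER,  True,
          None
      };
      int n = 0;
      GLXFBConfig *cfgs = glXChooseFBConfig(dpy, DefaultScreen(dpy), attribs, &n);
      printf("Matching Pbuffer-capable 24-bit FBConfigs: %d\n", n);
      if (cfgs) XFree(cfgs);

      XCloseDisplay(dpy);
      return 0;
  }

If that reports GLX older than 1.3 or zero matching FBConfigs, it would
explain why vglrun cannot get a Pbuffer-enabled visual on this driver.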
The best test for the performance regression would be a simple X
program that switches very quickly between a few very different full
screen images and needs very little CPU (in both the program and
Xvnc) - roughly like the sketch below.
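Something along these lines - an untested sketch, not an existing tool;
the name flipbench.c and everything in it are placeholders. It only
reports how fast it submits frames, so the delivered frame rate would
still have to be measured on the client with TCBench:

  /* flipbench.c - sketch of the test program described above: flip between
   * a few very different full-screen images as fast as possible, with
   * almost no CPU work per frame.  Untested sketch, not an official
   * TigerVNC benchmark.
   * Build (assumption): cc -std=c99 flipbench.c -o flipbench -lX11 -lrt
   */
  #define _POSIX_C_SOURCE 199309L
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>
  #include <time.h>
  #include <X11/Xlib.h>

  #define NIMG 4                      /* number of different images */

  int main(void)
  {
      Display *dpy = XOpenDisplay(NULL);
      if (!dpy) { fprintf(stderr, "Cannot open display\n"); return 1; }

      int scr = DefaultScreen(dpy);
      int w = DisplayWidth(dpy, scr), h = DisplayHeight(dpy, scr);

      Window win = XCreateSimpleWindow(dpy, RootWindow(dpy, scr), 0, 0, w, h,
                                       0, BlackPixel(dpy, scr),
                                       BlackPixel(dpy, scr));
      XSelectInput(dpy, win, StructureNotifyMask);
      XMapWindow(dpy, win);
      XEvent ev;
      do XNextEvent(dpy, &ev); while (ev.type != MapNotify);

      GC gc = XCreateGC(dpy, win, 0, NULL);

      /* Precompute the images (solid gray levels 0, 64, 128, 192), assuming
       * a 24-bit depth stored as 32 bits per pixel (the usual case), so the
       * per-frame CPU cost is only XPutImage plus a server round trip. */
      XImage *img[NIMG];
      for (int i = 0; i < NIMG; i++) {
          char *data = malloc((size_t)w * h * 4);
          memset(data, i * 64, (size_t)w * h * 4);
          img[i] = XCreateImage(dpy, DefaultVisual(dpy, scr),
                                DefaultDepth(dpy, scr), ZPixmap, 0, data,
                                w, h, 32, 0);
      }

      struct timespec t0, t1;
      clock_gettime(CLOCK_MONOTONIC, &t0);
      int frames = 0;
      double dt;
      do {
          XPutImage(dpy, win, gc, img[frames % NIMG], 0, 0, 0, 0, w, h);
          XSync(dpy, False);          /* wait until the server has the frame */
          frames++;
          clock_gettime(CLOCK_MONOTONIC, &t1);
          dt = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
      } while (dt < 30.0);

      printf("Frames: %d  Time: %f s  Frames/sec: %f\n",
             frames, dt, frames / dt);
      XCloseDisplay(dpy);
      return 0;
  }

Solid gray frames are only a placeholder; a real test would probably use
photos or noise so the encoder has realistic work to do.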
=>
So TLS encryption uses more resources and is slower than no
encryption - but for default Linux installations, the slowdown is less
than 25 %. The amount of data transferred over the network also
increases - if the bandwidth is already saturated, it can get worse.
Regards,
Martin Kögler