On Thu, 24 Sep 2009, DRC wrote:

... Creating /etc/modprobe.d/virtualgl to set requested permissions for
   /dev/nvidia* ...
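
For the archives: that file sets the nVidia kernel module's device-file
parameters.  A sketch of what it likely contains (the GID below is a
placeholder for the vglusers group, and the exact values are my
assumption, not verbatim vglserver_config output):

# /etc/modprobe.d/virtualgl (sketch; GID 1001 stands in for vglusers)
options nvidia NVreg_DeviceFileUID=0 NVreg_DeviceFileGID=1001 NVreg_DeviceFileMode=0660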

As I understand it, there is no corresponding file for ATI cards? We
have a /dev/dri/card0.

ATI is a "pure" DRI driver, so the device permissions are set in
xorg.conf.  VirtualGL 2.1.3 and later should properly handle this.
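
For reference, a minimal sketch of the xorg.conf section that controls
the DRI device permissions, assuming access is restricted to the
vglusers group:

Section "DRI"
    Group "vglusers"
    Mode  0660
EndSection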

Does this mean that the permissions of /dev/dri/card0 don't matter?
Because if I don't comment out the <dri> line in
/etc/security/console.perms.d/50-default.perms, the permissions of
/dev/dri/card0 are incorrectly reset as soon as I log in on the
console...
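
For reference, the rule in question looks something like this (quoted
from memory; the exact modes vary between releases), and commenting it
out is what stops the console login from resetting the device:

# In /etc/security/console.perms.d/50-default.perms:
#<console>  0660 <dri>        0660 root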


[server-Standard]
command=/usr/bin/Xorg -br -audit 0 -tst

Yeah, I've been aware of this problem for a while, but unfortunately,
there isn't an easy way for vglserver_config to work around it.  The
problem with adding a specific X server command line is that it's
different for every system, and vglserver_config won't know the correct
default args to add to it for a particular system.  Red Hat really needs
to package a new custom.conf file with a default X server command line
that can be uncommented and modified.

I see the problem. However, on RHEL, the default settings are actually available in the /usr/share/gdm/defaults.conf file. Unfortunately, no such file seems to be available on Fedora.
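
A sketch of how the stock command line could be pulled out of that file
to seed custom.conf (assuming defaults.conf uses the same
[server-Standard]/command= syntax as custom.conf):

grep -A 2 '^\[server-Standard\]' /usr/share/gdm/defaults.conf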

I agree that this is a difficult problem.



[security]
DisallowTCP=false

As far as I know, DisallowTCP=true has always been the default.  Prior
to v2.1.1, vglserver_config would explicitly set DisallowTCP=false to
allow TCP connections to :0.  This was necessary because VGL used "xhost
+localhost" to open up the display permissions if you weren't using the
vglusers group.  I then discovered that I could use "xhost +LOCAL:"
instead, so VGL v2.1.1 and later no longer requires TCP connections to
display :0.  Thus, the newer versions of vglserver_config look for the
DisallowTCP line and comment it out, in case that line was added by an
older version of vglserver_config.

To reiterate, turning off TCP connections (the default) is what you want.
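
For reference, the two access-control commands side by side:

xhost +localhost   # pre-2.1.1: allows local TCP clients, so it needed
                   # DisallowTCP=false
xhost +LOCAL:      # 2.1.1 and later: allows all local (Unix-socket)
                   # clients, so no TCP connections are required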

Ah, now I understand.



With these changes, the configuration looks good.  However, VirtualGL
doesn't actually seem to work: with software-rendered OpenGL in Xvnc,
glxspheres gives about 5 frames/sec.  When running glxspheres through
vglrun, the performance increases only to 11 frames/sec.  The :0 X
server is idle, except for a few system calls when vglrun is launched.
Running:

DISPLAY=:0 glxspheres

...gives a nice ~1300 frames/sec. Any ideas?
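
To summarize the three measurements (the first two commands are run
inside the Xvnc session):

glxspheres               # software OpenGL in Xvnc:       ~5 frames/sec
vglrun glxspheres        # through VirtualGL:             ~11 frames/sec
DISPLAY=:0 glxspheres    # directly on the 3D X server:   ~1300 frames/sec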

If VirtualGL were using indirect rendering, it would complain about it
loudly, so that's apparently not the problem.  I'm wondering if VGL is
engaged at all.  Do you, for instance, get profiling output when you do
'vglrun +pr glxspheres'?

Yes.


If the :0 X server is idle, that means (for whatever reason) the
commands aren't ever making it there.

I've now disabled GLX support in Xvnc, but the problem is the same, and
Xvnc's CPU usage is very high.  I'll continue debugging...
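
One thing I will try, assuming this VGL version supports the verbose
switch, is:

vglrun +v glxspheres

...which should report which 3D X display the faker opens, confirming
whether VGL is engaged at all.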


Best regards,
---
Peter Åstrand           ThinLinc Chief Developer
Cendio AB               http://www.cendio.com
Wallenbergs gata 4
583 30 Linköping        Phone: +46-13-21 46 00