On 9/11/18 6:22 PM, sergio wrote:
> [VGL] NOTICE: Automatically setting VGL_CLIENT environment variable to
> [VGL]    <IP hostB>, the IP address of your SSH client.
> Polygons in scene: 62464 (61 spheres * 1024 polys/spheres)
> Visual ID of window: 0x21
> Context is Direct
> failed to create drawable
> X Error of failed request:  BadAlloc (insufficient resources for operation)
>   Major opcode of failed request:  153 (GLX)
>   Minor opcode of failed request:  27 (X_GLXCreatePbuffer)
>   Serial number of failed request:  29
>   Current serial number in output stream:  31
> zsh: exit 1     vglrun /opt/VirtualGL/bin/glxspheres64
> 
> What is wrong now?

No idea, but if it's an older Radeon with an older driver, then it might
have a broken Pbuffer implementation.  I seem to recall that I ran into
similar problems with certain Radeon GPUs years ago.  Try the following:

1. Try setting VGL_READBACK=sync in the environment on the server prior
to invoking vglrun.  (Example commands for this step and the next
appear after this list.)

2. If that doesn't work, then try setting VGL_DRAWABLE=pixmap in the
environment on the server prior to invoking vglrun.

3. If that doesn't work, then I don't know how to fix it.  VGL is known
to work well with NVIDIA GPUs and drivers, but AMD/ATI GPUs
(particularly their consumer-grade GPUs) have traditionally been
hit-or-miss.  I have not tested their more recent products, though.
The only AMD GPUs I have in my lab are an old Radeon HD 7660G and an
older FirePro V5700.
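
For example, on the server (assuming a Bourne-style shell or zsh, and
reusing the glxspheres64 invocation from your transcript):

  # Step 1: force synchronous readback
  VGL_READBACK=sync vglrun /opt/VirtualGL/bin/glxspheres64

  # Step 2: render to a pixmap instead of a Pbuffer
  VGL_DRAWABLE=pixmap vglrun /opt/VirtualGL/bin/glxspheres64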


> Do you know a way to use local GPU for remote app acceleration?

No matter how you attempt it, using a local GPU for remote app
acceleration is going to require sending OpenGL commands and data over
the network.  The traditional method of doing that is remote GLX, i.e.
indirect OpenGL rendering over the X11 protocol.  That is not ideal,
for reasons described in the VirtualGL background article that I
previously posted, but any other method of using a local GPU for
remote app acceleration would have similar drawbacks.  The only
product I know of that might accomplish that task is NICE (now Amazon)
DCV, but it is neither free nor open source.  Do I know of a way to do
that while running XDMCP on the remote application server?  No.
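
For reference, the remote GLX approach looks roughly like this.  This
sketch assumes OpenSSH with X11 forwarding, a Mesa-based libGL on the
application server, and a local X server that still permits indirect
GLX contexts (recent X servers disable those unless started with
+iglx); "appserver" is a placeholder hostname:

  # On the client machine (the one with the GPU), connect with X11
  # forwarding:
  ssh -X user@appserver

  # On the application server, force indirect rendering so that GLX
  # protocol is sent back over the wire to the client's X server/GPU:
  LIBGL_ALWAYS_INDIRECT=1 glxgears

Bear in mind that the GLX protocol only defines wire encodings through
OpenGL 1.4 plus a handful of extensions, which is one of the drawbacks
alluded to above.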
