I'm curious: which display (:0 or :1) is the active display on the server? Can you try using CTRL-ALT-F7/F8 to switch between them and see whether that is why only one of them is working properly?
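One quick way to answer that question is to check which virtual terminal each X server was started on (a sketch; it assumes the servers were launched with explicit "vtN" arguments, as display managers usually do — here a canned `ps` line is piped in instead of live output):

```shell
# Sketch: report the display and virtual terminal of an X server from its
# command line.  In live use, feed it "ps -eo args | grep '^/usr/bin/X'".
echo "/usr/bin/X :0 -config /etc/X11/xorg.0.conf vt7" |
awk '{ vt = ""; for (i = 1; i <= NF; i++) if ($i ~ /^vt[0-9]+$/) vt = $i;
       print "display " $2 " is on " vt }'
# prints: display :0 is on vt7
```

If only the display on the currently active VT passes glreadtest, that would point at the driver refusing to render on an inactive VT.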
I know people have done this before. I'm hoping maybe one of them will
chime in.

Antony Cleave wrote:
> Indeed something is amiss.
>
> DISPLAY=:0 vgl/linux64/bin/glreadtest
>
> GLreadtest v2.1.80 (Build 20100224)
> vgl/linux64/bin/glreadtest -h for advanced usage.
>
> Rendering to Pbuffer using GLX on display :0.0
> FB Config = 0xc5
> Drawable size = 701 x 701 pixels
> Using 1-byte row alignment
>
> >>>>>>>>>> PIXEL FORMAT: LUM <<<<<<<<<<
> glDrawPixels(): ERROR: invalid operation in frame buffer read
> Buffer was not cleared
> 8.948589 Mpixels/sec
> glReadPixels() [bottom-up]: 517: ERROR: Bogus data read back.
>
> >>>>>>>>>> PIXEL FORMAT: RED <<<<<<<<<<
> glDrawPixels(): ERROR: invalid operation in frame buffer read
> Buffer was not cleared
> 8.949448 Mpixels/sec
> glReadPixels() [bottom-up]: 517: ERROR: Bogus data read back.
>
> >>>>>>>>>> PIXEL FORMAT: BGRA <<<<<<<<<<
> glDrawPixels(): ERROR: invalid operation in frame buffer read
> Buffer was not cleared
> 8.684911 Mpixels/sec
> glReadPixels() [bottom-up]: 517: ERROR: Bogus data read back.
>
> >>>>>>>>>> PIXEL FORMAT: ABGR <<<<<<<<<<
> glDrawPixels(): ERROR: invalid operation in frame buffer read
> Buffer was not cleared
> 8.685765 Mpixels/sec
> glReadPixels() [bottom-up]: 517: ERROR: Bogus data read back.
>
> >>>>>>>>>> PIXEL FORMAT: BGR <<<<<<<<<<
> glDrawPixels(): ERROR: invalid operation in frame buffer read
> Buffer was not cleared
> 8.768956 Mpixels/sec
> glReadPixels() [bottom-up]: 517: ERROR: Bogus data read back.
>
> >>>>>>>>>> PIXEL FORMAT: RGBA <<<<<<<<<<
> glDrawPixels(): ERROR: invalid operation in frame buffer read
> Buffer was not cleared
> 8.688855 Mpixels/sec
> glReadPixels() [bottom-up]: 517: ERROR: Bogus data read back.
>
> >>>>>>>>>> PIXEL FORMAT: RGB <<<<<<<<<<
> glDrawPixels(): ERROR: invalid operation in frame buffer read
> Buffer was not cleared
> 8.769972 Mpixels/sec
> glReadPixels() [bottom-up]: 517: ERROR: Bogus data read back.
>
> I'm sure it should look more like this:
>
> DISPLAY=:1 vgl/linux64/bin/glreadtest
>
> GLreadtest v2.1.80 (Build 20100224)
> vgl/linux64/bin/glreadtest -h for advanced usage.
>
> Rendering to Pbuffer using GLX on display :1.0
> FB Config = 0xc5
> Drawable size = 701 x 701 pixels
> Using 1-byte row alignment
>
> >>>>>>>>>> PIXEL FORMAT: LUM <<<<<<<<<<
> glDrawPixels(): 1021.157178 Mpixels/sec
> glReadPixels() [bottom-up]: 0.611564 Mpixels/sec
> glReadPixels() [top-down]: 0.603276 Mpixels/sec
>
> >>>>>>>>>> PIXEL FORMAT: RED <<<<<<<<<<
> glDrawPixels(): 402.142938 Mpixels/sec
> glReadPixels() [bottom-up]: 442.310444 Mpixels/sec
> glReadPixels() [top-down]: 6.548527 Mpixels/sec
>
> >>>>>>>>>> PIXEL FORMAT: BGRA <<<<<<<<<<
> glDrawPixels(): 400.102140 Mpixels/sec
> glReadPixels() [bottom-up]: 462.156454 Mpixels/sec
> glReadPixels() [top-down]: 6.606006 Mpixels/sec
>
> >>>>>>>>>> PIXEL FORMAT: ABGR <<<<<<<<<<
> glDrawPixels(): 399.779284 Mpixels/sec
> glReadPixels() [bottom-up]: 461.921013 Mpixels/sec
> glReadPixels() [top-down]: 6.639695 Mpixels/sec
>
> >>>>>>>>>> PIXEL FORMAT: BGR <<<<<<<<<<
> glDrawPixels(): 402.073876 Mpixels/sec
> glReadPixels() [bottom-up]: 442.128714 Mpixels/sec
> glReadPixels() [top-down]: 6.525910 Mpixels/sec
>
> >>>>>>>>>> PIXEL FORMAT: RGBA <<<<<<<<<<
> glDrawPixels(): 400.781598 Mpixels/sec
> glReadPixels() [bottom-up]: 392.935682 Mpixels/sec
> glReadPixels() [top-down]: 6.378294 Mpixels/sec
>
> >>>>>>>>>> PIXEL FORMAT: RGB <<<<<<<<<<
> glDrawPixels(): 333.381574 Mpixels/sec
> glReadPixels() [bottom-up]: 346.630269 Mpixels/sec
> glReadPixels() [top-down]: 6.338858 Mpixels/sec
>
> Regarding the other test, the answer is no:
>
> # /opt/VirtualGL/bin/glxinfo -display :0 -c >glxinfo0
> # /opt/VirtualGL/bin/glxinfo -display :1 -c >glxinfo1
> # diff glxinfo0 glxinfo1
> 1,2c1,2
> < name of display: :0.0
> < display: :0  screen: 0
> ---
> > name of display: :1.0
> > display: :1  screen: 0
>
> Finally, for both the glxspheres tests I was rendering to a TurboVNC
> desktop running as root on display :12.
>
> Thanks for your help.
>
> Antony
>
> DRC wrote:
>>> cvs -d:pserver:[email protected]:/cvsroot/virtualgl login
>>> cvs -z3 -d:pserver:[email protected]:/cvsroot/virtualgl co vgl
>>> cd vgl/util
>>> make ../linux64/bin/glreadtest
>>
>> Now run
>>
>>> DISPLAY=:0 ../linux64/bin/glreadtest
>>> DISPLAY=:1 ../linux64/bin/glreadtest
>>
>> Both should ideally give you the same results. If not, something is
>> amiss.
>>
>> What X server are you displaying to whenever you run the GLXspheres
>> tests? That is, what is the value of the DISPLAY environment variable
>> when you run those tests?
>>
>> Is there a significant difference between the output of
>>
>> /opt/VirtualGL/bin/glxinfo -display :0 -c
>>
>> and
>>
>> /opt/VirtualGL/bin/glxinfo -display :1 -c
>>
>> ?
>>
>> Antony Cleave wrote:
>>> Thanks for the pointer, DRC.
>>>
>>> I've been looking around, and I've come up with the following solution
>>> to get gdm to start two X servers at boot on this machine by modifying
>>> /etc/gdm/custom.conf as follows:
>>>
>>> [servers]
>>> 0=/usr/bin/X -config /etc/X11/xorg.0.conf
>>> 1=/usr/bin/X -config /etc/X11/xorg.1.conf
>>>
>>> where I have modified the default nvidia-generated xorg.conf to make
>>> the two new xorg.conf files as below (for clarity I'll only show a
>>> diff for the second file):
>>>
>>> /etc/X11/xorg.0.conf
>>> -----------------------------------------------------------------------
>>> # nvidia-xconfig: X configuration file generated by nvidia-xconfig
>>> # nvidia-xconfig: version 1.0 (buildmeis...@builder58)  Wed Dec 9 16:34:26 PST 2009
>>>
>>> Section "DRI"
>>>     Mode 0666
>>> EndSection
>>>
>>> Section "ServerLayout"
>>>     Identifier     "X.org Configured"
>>>     Screen      0  "Screen0" 0 0
>>>     InputDevice    "Mouse0" "CorePointer"
>>>     InputDevice    "Keyboard0" "CoreKeyboard"
>>> EndSection
>>>
>>> Section "Files"
>>>     RgbPath      "/usr/share/X11/rgb"
>>>     ModulePath   "/usr/lib64/xorg/modules"
>>>     FontPath     "unix/:7100"
>>>     FontPath     "built-ins"
>>> EndSection
>>>
>>> Section "Module"
>>>     Load  "record"
>>>     Load  "dbe"
>>>     Load  "extmod"
>>>     Load  "glx"
>>>     Load  "xtrap"
>>> EndSection
>>>
>>> Section "InputDevice"
>>>     Identifier  "Keyboard0"
>>>     Driver      "kbd"
>>> EndSection
>>>
>>> Section "InputDevice"
>>>     Identifier  "Mouse0"
>>>     Driver      "mouse"
>>>     Option      "Protocol" "auto"
>>>     Option      "Device" "/dev/input/mice"
>>>     Option      "ZAxisMapping" "4 5 6 7"
>>> EndSection
>>>
>>> Section "Monitor"
>>>     Identifier   "Monitor0"
>>>     VendorName   "Monitor Vendor"
>>>     ModelName    "Monitor Model"
>>> EndSection
>>>
>>> Section "Device"
>>>     ### Available Driver options are:-
>>>     ### Values: <i>: integer, <f>: float, <bool>: "True"/"False",
>>>     ### <string>: "String", <freq>: "<f> Hz/kHz/MHz"
>>>     ### [arg]: arg optional
>>>     #Option     "ShadowFB"            # [<bool>]
>>>     #Option     "DefaultRefresh"      # [<bool>]
>>>     #Option     "ModeSetClearScreen"  # [<bool>]
>>>     Identifier  "Card0"
>>>     Driver      "nvidia"
>>>     VendorName  "nVidia Corporation"
>>>     BoardName   "Unknown Board"
>>>     BusID       "PCI:3:0:0"
>>> EndSection
>>>
>>> Section "Screen"
>>>     Identifier "Screen0"
>>>     Device     "Card0"
>>>     Monitor    "Monitor0"
>>>     SubSection "Display"
>>>         Viewport 0 0
>>>     EndSubSection
>>>     SubSection "Display"
>>>         Viewport 0 0
>>>         Depth    4
>>>     EndSubSection
>>>     SubSection "Display"
>>>         Viewport 0 0
>>>         Depth    8
>>>     EndSubSection
>>>     SubSection "Display"
>>>         Viewport 0 0
>>>         Depth    15
>>>     EndSubSection
>>>     SubSection "Display"
>>>         Viewport 0 0
>>>         Depth    16
>>>     EndSubSection
>>>     SubSection "Display"
>>>         Viewport 0 0
>>>         Depth    24
>>>     EndSubSection
>>> EndSection
>>> -----------------------------------------------------------------------
>>>
>>> diff /etc/X11/xorg.*.conf
>>> 58c58
>>> <     Identifier  "Card0"
>>> ---
>>> >     Identifier  "Card1"
>>> 62c62
>>> <     BusID       "PCI:3:0:0"
>>> ---
>>> >     BusID       "PCI:4:0:0"
>>> 67c67
>>> <     Device     "Card0"
>>> ---
>>> >     Device     "Card1"
>>> Both X servers start successfully, and both have sensible output for
>>> xdpyinfo and /opt/VirtualGL/bin/glxinfo.
>>>
>>> I get sensible output for both
>>>
>>> # vglrun +v -d :0 /opt/VirtualGL/bin/glxinfo &> glxinfo0
>>> and
>>> # vglrun +v -d :1 /opt/VirtualGL/bin/glxinfo &> glxinfo1
>>>
>>> # diff glxinfo0 glxinfo1
>>> 1,2c1,2
>>> < [VGL] Shared memory segment ID for vglconfig: 5210123
>>> < [VGL] Opening local display :0
>>> ---
>>> > [VGL] Shared memory segment ID for vglconfig: 5177355
>>> > [VGL] Opening local display :1
>>>
>>> but when I try glxspheres, I get an error on display :0 while it works
>>> perfectly on display :1:
>>>
>>> # vglrun +v -d :0 /opt/VirtualGL/bin/glxspheres &> glxspheres0
>>> # cat glxspheres0
>>> Polygons in scene: 62464
>>> [VGL] Shared memory segment ID for vglconfig: 5242891
>>> [VGL] Opening local display :0
>>> Visual ID of window: 0x22
>>> [VGL] ERROR: OpenGL error 0x0502
>>> [VGL] ERROR: in readpixels--
>>> [VGL]    624: Could not Read Pixels
>>>
>>> # vglrun +v -d :1 /opt/VirtualGL/bin/glxspheres &> glxspheres1
>>> # cat glxspheres1
>>> Polygons in scene: 62464
>>> [VGL] Shared memory segment ID for vglconfig: 5308427
>>> [VGL] Opening local display :1
>>> Visual ID of window: 0x22
>>> 282.851470 frames/sec - 289.357054 Mpixels/sec
>>> 287.179087 frames/sec - 293.784206 Mpixels/sec
>>> 286.921263 frames/sec - 293.520452 Mpixels/sec
>>> 287.996523 frames/sec - 294.620443 Mpixels/sec
>>>
>>> Am I doing something fundamentally wrong here, or is my approach
>>> correct and I'm just missing something?
>>>
>>> Antony
>>>
>>> DRC wrote:
>>>> Yeah, there isn't anything in the docs about it because I haven't
>>>> ever personally done it. :)  But many others have, and I think it's
>>>> fairly straightforward. You just set up the second 3D card on
>>>> display :1 with a second X server. There are many sites that
>>>> describe how to set up multiple X servers on multiple graphics cards.
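(For what it's worth, once both displays pass the tests, sessions can be spread across the two cards with vglrun -d or the VGL_DISPLAY variable. A hypothetical sketch, not from the VirtualGL docs, that maps the numeric user id onto the two displays:)

```shell
# Sketch: pick a GPU per session by mapping the numeric user id onto
# display :0 or :1.  The hashing scheme is illustrative only; any
# load-balancing policy could set VGL_DISPLAY the same way.
NGPUS=2
VGL_DISPLAY=":$(( $(id -u) % NGPUS ))"
export VGL_DISPLAY
vglrun /opt/VirtualGL/bin/glxspheres
```

A site-wide wrapper script like this keeps users from having to know which card they were assigned.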
>>>>
>>>> Antony Cleave wrote:
>>>>> Hi all,
>>>>>
>>>>> I am about to get a machine with two Quadro FX 5800 cards in it
>>>>> that I want to configure to allow multiple users to visualise their
>>>>> data. I have a working test box set up with a single graphics card,
>>>>> configured using TurboVNC and VirtualGL 2.1.4, and it is amazing.
>>>>> How much more complicated would a multiple-graphics-card install
>>>>> be? What extra steps would I have to go through? There does not
>>>>> appear to be much on this in the documentation except a quick
>>>>> mention in the advanced configuration settings at the bottom, where
>>>>> it says I can use vglrun -d <display> or the VGL_DISPLAY variable
>>>>> to select which X display should do the rendering, but there is
>>>>> nothing about how to set up the server. Is this because it "just
>>>>> works", or are there some more things I have to do to make it work?
>>>>>
>>>>> Thanks for your time
>>>>>
>>>>> Antony
>>>>
>>>> ------------------------------------------------------------------------------
>>>> Download Intel® Parallel Studio Eval
>>>> Try the new software tools for yourself. Speed compiling, find bugs
>>>> proactively, and fine-tune applications for parallel performance.
>>>> See why Intel Parallel Studio got high marks during beta.
>>>> http://p.sf.net/sfu/intel-sw-dev
>>>> _______________________________________________
>>>> VirtualGL-Users mailing list
>>>> [email protected]
>>>> https://lists.sourceforge.net/lists/listinfo/virtualgl-users
>>>
>>> --
>>> Antony Cleave
>>>
>>> Systems Architect
>>> ClusterVision
>>> 12 Westgate House
>>> The Island
>>> Gloucester
>>> GL1 2RU
>>> United Kingdom
>>>
>>> Office: +44 1452 260024
>>>
>>> skype: antony.cleave
