> cvs -d:pserver:[email protected]:/cvsroot/virtualgl login
> cvs -z3 -d:pserver:[email protected]:/cvsroot/virtualgl co vgl
> cd vgl/util
> make ../linux64/bin/glreadtest

Now run

> DISPLAY=:0 ../linux64/bin/glreadtest
> DISPLAY=:1 ../linux64/bin/glreadtest

Both should ideally give you the same results.  If not, something is amiss.

What X server are you displaying to whenever you run the GLXspheres tests?
That is, what is the value of the DISPLAY environment variable when you run
those tests?  Is there a significant difference between the output of

/opt/VirtualGL/bin/glxinfo -display :0 -c

and

/opt/VirtualGL/bin/glxinfo -display :1 -c

?

Antony Cleave wrote:
> Thanks for the pointer DRC,
>
> I've been looking around and I've come up with the following solution to
> get gdm to start 2 X servers at boot on this machine by modifying
> /etc/gdm/custom.conf as follows:
>
> [servers]
> 0=/usr/bin/X -config /etc/X11/xorg.0.conf
> 1=/usr/bin/X -config /etc/X11/xorg.1.conf
>
> where I have modified the default nvidia-configured xorg.conf to make the
> two new xorg.conf files as below (for clarity I'll only show a diff for
> the second file):
>
> /etc/X11/xorg.0.conf
> -----------------------------------------------------------------------------
> # nvidia-xconfig: X configuration file generated by nvidia-xconfig
> # nvidia-xconfig: version 1.0 (buildmeis...@builder58)  Wed Dec  9 16:34:26 PST 2009
>
> Section "DRI"
>     Mode 0666
> EndSection
>
> Section "ServerLayout"
>     Identifier     "X.org Configured"
>     Screen      0  "Screen0" 0 0
>     InputDevice    "Mouse0" "CorePointer"
>     InputDevice    "Keyboard0" "CoreKeyboard"
> EndSection
>
> Section "Files"
>     RgbPath      "/usr/share/X11/rgb"
>     ModulePath   "/usr/lib64/xorg/modules"
>     FontPath     "unix/:7100"
>     FontPath     "built-ins"
> EndSection
>
> Section "Module"
>     Load  "record"
>     Load  "dbe"
>     Load  "extmod"
>     Load  "glx"
>     Load  "xtrap"
> EndSection
>
> Section "InputDevice"
>     Identifier  "Keyboard0"
>     Driver      "kbd"
> EndSection
>
> Section "InputDevice"
>     Identifier  "Mouse0"
>     Driver      "mouse"
>     Option      "Protocol" "auto"
>     Option      "Device" "/dev/input/mice"
>     Option      "ZAxisMapping" "4 5 6 7"
> EndSection
>
> Section "Monitor"
>     Identifier   "Monitor0"
>     VendorName   "Monitor Vendor"
>     ModelName    "Monitor Model"
> EndSection
>
> Section "Device"
>     ### Available Driver options are:-
>     ### Values: <i>: integer, <f>: float, <bool>: "True"/"False",
>     ### <string>: "String", <freq>: "<f> Hz/kHz/MHz"
>     ### [arg]: arg optional
>     #Option     "ShadowFB"           # [<bool>]
>     #Option     "DefaultRefresh"     # [<bool>]
>     #Option     "ModeSetClearScreen" # [<bool>]
>     Identifier  "Card0"
>     Driver      "nvidia"
>     VendorName  "nVidia Corporation"
>     BoardName   "Unknown Board"
>     BusID       "PCI:3:0:0"
> EndSection
>
> Section "Screen"
>     Identifier "Screen0"
>     Device     "Card0"
>     Monitor    "Monitor0"
>     SubSection "Display"
>         Viewport 0 0
>     EndSubSection
>     SubSection "Display"
>         Viewport 0 0
>         Depth    4
>     EndSubSection
>     SubSection "Display"
>         Viewport 0 0
>         Depth    8
>     EndSubSection
>     SubSection "Display"
>         Viewport 0 0
>         Depth    15
>     EndSubSection
>     SubSection "Display"
>         Viewport 0 0
>         Depth    16
>     EndSubSection
>     SubSection "Display"
>         Viewport 0 0
>         Depth    24
>     EndSubSection
> EndSection
> -----------------------------------------------------------------------------
>
> diff /etc/X11/xorg.*.conf
> 58c58
> <     Identifier  "Card0"
> ---
> >     Identifier  "Card1"
> 62c62
> <     BusID       "PCI:3:0:0"
> ---
> >     BusID       "PCI:4:0:0"
> 67c67
> <     Device     "Card0"
> ---
> >     Device     "Card1"
>
> Both X servers start successfully and both have sensible output for
> xdpyinfo and /opt/VirtualGL/bin/glxinfo.
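The per-display comparison DRC suggests can be scripted. This is a minimal sketch, not part of VirtualGL; the glxinfo path and display numbers are taken from this thread, and the `/tmp` output paths are arbitrary:

```shell
#!/bin/sh
# Minimal sketch: dump the GLX configs reported by each X server and
# diff them.  A large diff usually means the second server's GLX stack
# is not set up like the first.  Paths and display numbers are taken
# from this thread; adjust for your installation.
GLXINFO=/opt/VirtualGL/bin/glxinfo
[ -x "$GLXINFO" ] || { echo "glxinfo not found at $GLXINFO; skipping"; exit 0; }
for d in 0 1; do
    "$GLXINFO" -display ":$d" -c > "/tmp/glxinfo$d.txt" 2>&1
done
diff /tmp/glxinfo0.txt /tmp/glxinfo1.txt \
    && echo "displays :0 and :1 report identical GLX configs"
```

On a correctly configured dual-card host the diff should be empty or near-empty, matching DRC's "both should ideally give you the same results" criterion.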
> I get sensible output for both
>
> # vglrun +v -d :0 /opt/VirtualGL/bin/glxinfo &> glxinfo0
>
> and
>
> # vglrun +v -d :1 /opt/VirtualGL/bin/glxinfo &> glxinfo1
>
> # diff glxinfo0 glxinfo1
> 1,2c1,2
> < [VGL] Shared memory segment ID for vglconfig: 5210123
> < [VGL] Opening local display :0
> ---
> > [VGL] Shared memory segment ID for vglconfig: 5177355
> > [VGL] Opening local display :1
>
> but when I try glxspheres I get an error on display 0, while it works
> perfectly on display 1:
>
> # vglrun +v -d :0 /opt/VirtualGL/bin/glxspheres &> glxspheres0
> # cat glxspheres0
> Polygons in scene: 62464
> [VGL] Shared memory segment ID for vglconfig: 5242891
> [VGL] Opening local display :0
> Visual ID of window: 0x22
> [VGL] ERROR: OpenGL error 0x0502
> [VGL] ERROR: in readpixels--
> [VGL]    624: Could not Read Pixels
>
> # vglrun +v -d :1 /opt/VirtualGL/bin/glxspheres &> glxspheres1
> # cat glxspheres1
> Polygons in scene: 62464
> [VGL] Shared memory segment ID for vglconfig: 5308427
> [VGL] Opening local display :1
> Visual ID of window: 0x22
> 282.851470 frames/sec - 289.357054 Mpixels/sec
> 287.179087 frames/sec - 293.784206 Mpixels/sec
> 286.921263 frames/sec - 293.520452 Mpixels/sec
> 287.996523 frames/sec - 294.620443 Mpixels/sec
>
> Am I doing something fundamentally wrong here, or is my approach correct
> and I'm just missing something?
>
> Antony
>
> DRC wrote:
>> Yeah, there isn't anything in the docs about it because I haven't ever
>> personally done it. :)  But many others have, and I think it's fairly
>> straightforward.  You just set up the second 3D card on display :1 with
>> a second X server.  There are many sites that describe how to set up
>> multiple X servers on multiple graphics cards.
>>
>> Antony Cleave wrote:
>>
>>> Hi all,
>>>
>>> I am about to get a machine with two Quadro FX5800 cards in it that I
>>> want to configure to allow multiple users to visualise their data.  I
>>> have a working test box set up with a single graphics card configured
>>> using TurboVNC and VirtualGL 2.1.4, and it is amazing.  How much more
>>> complicated would a multiple-graphics-card install be?  What extra
>>> steps would I have to go through?  There does not appear to be much on
>>> this in the documentation except a quick mention in the advanced
>>> configuration settings at the bottom, where it says I can use
>>> vglrun -d <display> or the VGL_DISPLAY variable to select which X
>>> display should do the rendering, but there is nothing about how to set
>>> up the server.  Is this because it "just works", or are there some more
>>> things I have to do to make it work?
>>>
>>> Thanks for your time
>>>
>>> Antony
>>
>> ------------------------------------------------------------------------------
>> Download Intel® Parallel Studio Eval
>> Try the new software tools for yourself. Speed compiling, find bugs
>> proactively, and fine-tune applications for parallel performance.
>> See why Intel Parallel Studio got high marks during beta.
>> http://p.sf.net/sfu/intel-sw-dev
>> _______________________________________________
>> VirtualGL-Users mailing list
>> [email protected]
>> https://lists.sourceforge.net/lists/listinfo/virtualgl-users
>
> --
> Antony Cleave
>
> Systems Architect
> ClusterVision
> 12 Westgate House
> The Island
> Gloucester
> GL1 2RU
> United Kingdom
>
> Office: +44 1452 260024
>
> skype: antony.cleave
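For the multi-user case Antony describes, once both X servers check out, users can be spread across the cards with a small wrapper that sets VGL_DISPLAY before invoking vglrun. This is a hypothetical sketch, not part of VirtualGL; the display count matches the two-card setup in this thread:

```shell
#!/bin/sh
# Hypothetical wrapper (not part of VirtualGL): map each user onto one
# of the 3D X displays by hashing the numeric UID, then let vglrun pick
# the rendering GPU up via VGL_DISPLAY.
NUM_GPUS=2                      # one X display per graphics card
uid=$(id -u)
gpu=$((uid % NUM_GPUS))
export VGL_DISPLAY=":$gpu"
echo "rendering on display $VGL_DISPLAY"
# exec vglrun "$@"              # uncomment on a real VirtualGL host
```

This gives a crude static balance; vglrun -d <display> achieves the same thing per-invocation, as the VirtualGL documentation mentioned in the thread notes.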
