Hey all, I should have an answer for you here. I set this up for our
workgroup a few months ago while we were playing with Equalizer. Keep
in mind that my instructions are tested for Fedora Core 5 and 7. Also
when I wrote these notes I was configuring virtual displays so
Equalizer could render to a disconnected output rather than interfere
with users' sessions.

If the instructions don't work, it's because I forgot to document some
minor detail. Let me know and I'll find what was overlooked.

-Evan Bollig
[EMAIL PROTECTED]


2008/4/21 Stefan Eilemann <[EMAIL PROTECTED]>:
>
>
>
> On Mon, Apr 21, 2008 at 4:20 PM, anca <[EMAIL PROTECTED]> wrote:
> > But Stefan told us that on the cluster
> > used by the Equalizer team they managed somehow to display the eqPly
> window
> > over the login screen. So there must be a way to set/tell X Server to
> accept
> > the connection to :0.0 . My question is HOW ?
>
> It is some odd setting in your Xorg.conf or gdm.conf which switches off
> authentication.
>
> Max - do you remember this?
> Anca - Maybe google helps you as well...
>
>
> Cheers,
>
> Stefan.
>
> PS: Re: PBuffers - it was easy enough...
>
>
>
> --
> This message has been scanned for viruses and
> dangerous content by MailScanner, and is
> believed to be clean.
> _______________________________________________
>  eq-dev mailing list
>  [email protected]
>  https://in-zueri.ch/cgi-bin/mailman/listinfo/eq-dev
>  http://www.equalizergraphics.com
>



-- 
-Evan Bollig
[EMAIL PROTECTED]
[EMAIL PROTECTED]
# Configuring Multi-Display Rendering Clusters
#
# Copyright 2007
# Evan Bollig 
# bollig [at] scs [dot] fsu [dot] edu
# School of Computational Science 
# Florida State University
----------------------------------------------------------------------
To get multiple nodes (each with a single GPU) to share a task, there are
several prerequisites:

1. The files 

/dev/nvidia0
/dev/nvidia1
/dev/nvidiactl

should have permissions 777. We have found that ownership and permissions on
these files sometimes change, preventing the NVIDIA drivers from working. If
the same person always uses the cluster, there is probably no problem; but if
another user logs in without the proper permissions, they will not be able to
run graphics programs.

2. By default, many Linux systems (those not set up as part of a formal GPU
cluster) require authentication when starting Xorg sessions. The consequence
is that one cannot write to display :0.0 (to see this, ssh -X to some machine,
set DISPLAY to :0.0, and run the example program glxgears; it will fail). What
follows are instructions to disable this authentication.
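The probe described above can be wrapped in a small shell helper (a sketch;
xdpyinfo is assumed to be installed, and the function name is my own):

```shell
# Report whether an X display accepts connections from this user.
# A display refuses when X authentication is on and we hold no cookie.
display_open() {
    if DISPLAY="$1" xdpyinfo >/dev/null 2>&1; then
        echo "$1 open"
    else
        echo "$1 refused"
    fi
}

# After 'ssh -X node', try the console display:
display_open :0.0
```

Until authentication is disabled, the console display should report "refused"
for remote users.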

----------------------------------------------------------------------
Make sure that the following four tasks are executed on each node of your 
cluster: 

***************
1) create /etc/profile.d/nvidia.(c)sh to enable global access to nvidia hardware

[EMAIL PROTECTED] eqPly]$ cat /etc/profile.d/nvidia.sh 
# Grant access to nvidia hardware to all users (for CUDA)
chmod 777 /dev/nvidia* > /dev/null 2>&1
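As a quick sanity check, a small helper (a sketch; the function name is my
own, and stat -c is the GNU coreutils form, which is fine on Fedora) can
verify that the device files actually ended up with the expected mode:

```shell
# Print "ok" if FILE has the octal mode MODE, otherwise print the actual mode.
check_mode() {
    actual=$(stat -c '%a' "$1" 2>/dev/null) || { echo "$1: missing"; return 1; }
    if [ "$actual" = "$2" ]; then
        echo "$1: ok"
    else
        echo "$1: mode is $actual, expected $2"
    fi
}

# On a render node:
# for f in /dev/nvidia0 /dev/nvidia1 /dev/nvidiactl; do
#     check_mode "$f" 777
# done
```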

***************
2) modify /etc/gdm/custom.conf so the display manager starts Xorg sessions
without authentication control (WARNING: this allows any user to render to the
terminal display; users do not even need to log in with ssh -X)

######### (Tail of custom.conf)##########
# Servers section should be empty in the original file
# This specifies that we are creating a dedicated rendering server
[servers]
0=Rendering

# Also note, that if you redefine a [server-foo] section, then GDM will
# use the definition in this file, not the defaults.conf file.  It is
# currently not possible to disable a [server-foo] section defined
# in the defaults.conf file.
#
# Xorg documentation is sparse; trial and error is the best way to find the
# right settings: 

[server-Rendering]
name=Rendering server
#-audit int             set audit trail level
#-ac                    disable access control restrictions
#-nolock                disable the locking mechanism (DOES NOTHING)
#-wr                    create root window with white background (DOES NOTHING)
command=/usr/bin/Xorg -ac -audit 0
flexible=true


***************
3) Shut down the existing gdm daemon:   /sbin/init 3

***************
4) Start the custom-configured daemon:   /sbin/init 5

-------------------------------------------------------------------------
To run Xorg servers on GPUs with no monitors connected:

Edit /etc/X11/xorg.conf:

***************
1) In the "ServerLayout" section, provide the layout for both the real screen
(Screen 0 == DISPLAY :0.0) and the imaginary screen (Screen 1 == DISPLAY :0.1).
Make sure Xinerama is off so the desktop does not span multiple screens:
        {...}
        Screen      0  "Screen0" 0 0
        Screen      1  "Screen1" RightOf "Screen0"
        Option          "Xinerama" "off"
        {...}
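Pieced together, the whole section might look like the fragment below (a
sketch: the layout identifier and input-device names are assumptions; keep
whatever identifiers your existing xorg.conf already uses):

```
# Assumed identifiers -- match these to your own xorg.conf
Section "ServerLayout"
    Identifier     "Layout0"
    Screen      0  "Screen0" 0 0
    Screen      1  "Screen1" RightOf "Screen0"
    InputDevice    "Keyboard0" "CoreKeyboard"
    InputDevice    "Mouse0" "CorePointer"
    Option         "Xinerama" "off"
EndSection
```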

***************
2) In the "ServerFlags" section be sure Xinerama is off:
        {...}
        Option     "Xinerama" "0"
        {...}

***************
3) Add a second "Monitor" section:
        Section "Monitor"
       # Be sure to replace these values with values appropriate for your monitor!
           ## Specific modelines can be generated using /usr/X11R6/bin/gtf
           ## Modeline: HP L2335 Flat Panel Native Resolution 1920x1200@60
           Identifier     "Monitor1"
           VendorName     "Unknown"
           ModelName      "CRT-0"
           HorizSync       49.3 - 98.5
           VertRefresh     60.0
           ModeLine       "1920x1200_60" 154.0 1920 1968 2000 2080 1200 1203 
1209 1235 -hsync +vsync
           ModeLine       "1600x1200_75" 206.0 1600 1720 1896 2192 1200 1201 
1204 1253 -hsync +vsync
           ModeLine       "3840x2400" 148.0 3840 3944 4328 4816 2400 2401 2404 
2418
           ModeLine       "2560x1600_60" 268.0 2560 2608 2640 2720 1600 1603 
1609 1646 -hsync +vsync
           Option         "dpms"
        EndSection

***************
4) In the "Device" section, add the specific BusID for the GPU connected to
your real monitor (the BusID can be found in /var/log/Xorg.0.log).

        {...}
        Section "Device"
           Identifier     "Videocard0"
           Driver         "nvidia"
           VendorName     "NVIDIA Corporation"
           BoardName      "GeForce 8800 GT"
           BusID          "PCI:64:0:0"
        EndSection
        {...}

**************
5) Add a second "Device" section with the BusID for the second GPU:

        Section "Device"
           Identifier     "Videocard1"
           Driver         "nvidia"
           VendorName     "NVIDIA Corporation"
           BoardName      "GeForce 7900 GTX"
           BusID          "PCI:96:0:0"
        EndSection
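Note that lspci reports the slot in hexadecimal while xorg.conf wants decimal
(hex 40 = decimal 64, hence "PCI:64:0:0"). A small converter (a sketch; the
function name is my own) does the arithmetic:

```shell
# Turn an lspci slot like "40:00.0" (hex bus:device.function) into the
# decimal "PCI:bus:device:function" form that xorg.conf expects.
slot_to_busid() {
    bus=${1%%:*}
    rest=${1#*:}
    dev=${rest%%.*}
    fn=${rest#*.}
    printf 'PCI:%d:%d:%d\n' "0x$bus" "0x$dev" "0x$fn"
}

slot_to_busid 40:00.0     # prints PCI:64:0:0
slot_to_busid 60:00.0     # prints PCI:96:0:0
```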

**************
6) Add a second "Screen" section referring to the second video card device and
monitor:

        Section "Screen"
           Identifier     "Screen1"
           Device         "Videocard1"
           Monitor        "Monitor1"
           DefaultDepth    24
           Option         "TwinView" "0"
           Option         "TwinViewXineramaInfoOrder" "CRT-0"
           Option         "metamodes" "2560x1600_60 +0+0; 1920x1200_60 +0+0; 1600x1200 +0+0; 1280x1024 +0+0; 1024x768 +0+0; 800x600 +0+0"
           SubSection     "Display"
               Depth       24
           EndSubSection
        EndSection

***************
7) Restart X to start the new server (/sbin/init 3; /sbin/init 5)

***************
8) Launch applications on the invisible server by setting the display
environment variable: DISPLAY=":0.1"
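Step 8 as a minimal sketch (glxgears is a stock GL test program; the eqPly
invocation is illustrative, not an exact command line):

```shell
# Point clients at the second, monitor-less screen of display :0.
export DISPLAY=:0.1
echo "rendering to $DISPLAY"

# Quick smoke test, then the real application:
# glxgears
# ./eqPly
```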
-----------------------------------------------------------------------