graphic dump on cluster
Hi Jianing,

I don't think the orion cluster is set up for the worker nodes to be able to connect back to a remote workstation. I would collect the data locally on the master node and use PETSC_VIEWER_DRAW_SELF.

Please double-check that the display code works on single-processor jobs; that error looks like an X11 setup error, as Matt notes.

~A

On 1/24/07, js2615 at columbia.edu wrote:

> Hi PETSc experts,
>
> I have a question about graphical output when using multiple processors.
> Suppose I ssh into the cluster with X11 forwarding and would like to view
> the graphical output at each iteration using
>
>     ierr = VecView(x,PETSC_VIEWER_DRAW_WORLD);CHKERRQ(ierr);
>
> With multiple processors I get the following error message (there is no
> problem with one processor):
>
>     Unable to open display on localhost:10.0. Make sure your COMPUTE NODES
>     are authorized to connect to this X server and either your DISPLAY
>     variable is set or you use the -display name option!
>
> The error output from ranks 1-3 is interleaved; each rank reports the same
> thing (abridged):
>
>     PETSC ERROR: Petsc Release Version 2.3.2, Patch 3, Fri Sep 29 17:09:34 CDT 2006
>       HG revision: 9215af156a9cbcdc1ec666e2b5c7934688ddc526
>     PETSC ERROR: See docs/changes/index.html for recent updates.
>     PETSC ERROR: See docs/faq.html for hints about trouble shooting.
>     PETSC ERROR: See docs/index.html for manual pages.
>     PETSC ERROR: ./ex19 on a linux-gnu named n0010 by jianing Wed Jan 24 16:36:03 2007
>     PETSC ERROR: Libraries linked from /opt/petsc-2.3.2/lib/linux-gnu-c-debug
>     PETSC ERROR: Configure run at Fri Oct 27 11:40:04 2006
>     PETSC ERROR: Configure options --with-mpich=/opt/mpich2 --with-blas-lapack=lib=/usr/lib/libatlas.so --with-shared=0
>     PETSC ERROR: PetscDrawXGetDisplaySize_Private() line 618 in src/sys/draw/impls/x/xops.c
>     PETSC ERROR: PETSc unable to use X windows proceeding without graphics
>
> Is there any permission issue I need to set up?
>
> Thanks,
> Jianing
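Aron's suggestion — gather the data to the master node and draw it there with PETSC_VIEWER_DRAW_SELF, so the compute nodes never need an X connection — might look roughly like the sketch below. This is a hypothetical helper (the name ViewOnMaster is made up), written against post-3.2 PETSc calling conventions; the 2.3.x series used in this thread had slightly different VecScatter signatures, so adjust accordingly.

```c
#include <petscvec.h>

/* Hypothetical helper: gather a distributed Vec to rank 0 and draw it
   there only, so only the master node needs to reach the X server. */
PetscErrorCode ViewOnMaster(Vec x)
{
  Vec            xseq;
  VecScatter     ctx;
  PetscMPIInt    rank;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = MPI_Comm_rank(PETSC_COMM_WORLD,&rank);CHKERRQ(ierr);
  /* xseq has the full vector on rank 0 and length 0 everywhere else */
  ierr = VecScatterCreateToZero(x,&ctx,&xseq);CHKERRQ(ierr);
  ierr = VecScatterBegin(ctx,x,xseq,INSERT_VALUES,SCATTER_FORWARD);CHKERRQ(ierr);
  ierr = VecScatterEnd(ctx,x,xseq,INSERT_VALUES,SCATTER_FORWARD);CHKERRQ(ierr);
  if (!rank) {
    /* draw on this process only, using its own (local) X display */
    ierr = VecView(xseq,PETSC_VIEWER_DRAW_SELF);CHKERRQ(ierr);
  }
  ierr = VecScatterDestroy(&ctx);CHKERRQ(ierr);
  ierr = VecDestroy(&xseq);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}
```

For large vectors this serializes all the data through rank 0, which is fine for debugging-scale output but not something you would want in a production loop.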
graphic dump on cluster
The X graphics in PETSc are intended for debugging and simple output; that should be done on a laptop or workstation. Except in limited cases I would not use them on a cluster.

   Barry

On Wed, 24 Jan 2007, Satish Balay wrote:

> It's hard to get this working properly on a cluster. One way to do it is:
>
> - ssh to each node [where the job will be run] so that there exists an
>   X11 tunnel from all nodes
> - make sure all of them have the same ssh X11 display, i.e. once you
>   ssh in, check with 'echo $DISPLAY'
> - now run the PETSc binary with this display value.
>
> Satish
>
> On Wed, 24 Jan 2007, Barry Smith wrote:
>
>> Note that the PETSc X model has ALL nodes connect to the X server (not
>> just where rank == 0). Thus all of them need
>> 1) the correct display value (-display mymachine:0.0 is usually the
>>    best way to provide it),
>> 2) permission to access the display (X authorization; read up on it), and
>> 3) a route to the X server (via TCP/IP).
>>
>>   Barry
>>
>> On Wed, 24 Jan 2007, Matthew Knepley wrote:
>>
>>> On 1/24/07, js2615 at columbia.edu wrote:
>>>> [original question quoted above]
>>>
>>> The error message is exactly right. You either need to set the DISPLAY
>>> environment variable or give the -display option to PETSc. Giving -Y
>>> to ssh forwards your X connection automatically (setting DISPLAY).
>>>
>>>   Matt
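Satish's per-node tunnel procedure and Barry's three requirements might look roughly like the following. The node names, the number of ranks, and the display value are placeholders; on a real cluster they come from your job's node list and from whatever `echo $DISPLAY` actually reports.

```shell
# Hypothetical sketch; node01/node02, the launcher, and the display
# number are all placeholders for your own cluster.

# 1) Open an X11 tunnel to every node the job will run on, and keep
#    these sessions alive for the duration of the job. The remote
#    'echo $DISPLAY' (single-quoted, so it expands on the node) must
#    report the SAME display value on every node, e.g. localhost:10.0.
ssh -Y node01 'echo $DISPLAY'
ssh -Y node02 'echo $DISPLAY'

# 2) Run the PETSc binary pointing every rank at that display.
#    Barry's point: ALL ranks connect to the X server, not just rank 0,
#    so every node needs the display value, X authorization, and a
#    TCP/IP route to the X server.
mpirun -np 4 ./ex19 -display localhost:10.0
```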
graphic dump on cluster
On Wed, 24 Jan 2007, Aron Ahmadia wrote:

> Is there a good script lying around somewhere for setting up the X11
> connections from the master/interactive node? This seems like it could be
> a huge pain if you've got a bunch of worker nodes sitting on a private
> network behind the master, classic Beowulf style, and you don't have a
> systems administrator to set it up for you.

Different clusters have different 'scripts' for job submission, so you'll have to figure out how to 'sneak' the ssh X11 connections in during job startup [i.e. this is not easy to automate]. And then there are X11 authentication and ssh authentication issues to worry about [if we are opening socket connections across machines].

However, there is a simple alternative to get this working, if the following are true for your cluster:

- all compute nodes share the home filesystem with the frontend node
  [i.e. everyone can read the same ~/.Xauthority file for X11 permissions]
- a willing sysadmin can change the sshd config on the frontend. The change
  is to add the following to /etc/ssh/sshd_config and restart sshd:

      X11UseLocalhost no

With the above config, one can get X11 working as follows: the compute nodes talk X11 directly to the frontend, and the frontend forwards this X11 communication to the user's desktop via ssh. That is, you would do the following:

- log in to the frontend node from your desktop with ssh X11 forwarding
  [ssh -Y frontend]
- check what the display is [echo $DISPLAY]; it should be frontend:10.0
  or something equivalent
- now run the PETSc executable with the option [-display frontend:10.0]

There might be some firewall issues that need to be taken care of [the X11 connections from compute nodes to the frontend should not be blocked by the firewall].

Satish
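Satish's alternative can be condensed into the following configuration-and-command sketch. The host name 'frontend' and the :10.0 display number are examples only; use whatever host you actually log in to and whatever `echo $DISPLAY` actually prints there.

```shell
# One-time, on the frontend (requires root), then restart sshd:
#   /etc/ssh/sshd_config:
#     X11UseLocalhost no
# This makes sshd bind the forwarded display on the frontend's real
# address, so compute nodes on the private network can reach it.

# From your desktop: log in with trusted X11 forwarding
ssh -Y frontend

# On the frontend: see which display ssh allocated
echo $DISPLAY            # e.g. frontend:10.0

# Launch the job so every rank uses that display; compute nodes connect
# to frontend:10.0 directly (same ~/.Xauthority via the shared home
# filesystem), and sshd relays the X11 traffic back to your desktop.
mpirun -np 4 ./ex19 -display frontend:10.0
```

Note the firewall caveat from the post above: X11 traffic from the compute nodes to the frontend must not be blocked.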