Fernando,

1) I meant that your solution, to edit the script, was incorrect.  I've 
updated the README so it correctly specifies DISKORDER on the kernel command 
line instead.
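
For example, something along these lines on the autoinstall kernel's append 
line is what I mean (the pxelinux file path and label below are illustrative, 
not the README's exact text; the README has the authoritative syntax, and the 
DISKORDER value is machine-specific):

  # Hypothetical /tftpboot/pxelinux.cfg/default entry
  LABEL oscar_autoinstall
    KERNEL kernel
    APPEND initrd=initrd.img DISKORDER=sd,hd,cciss,ida,rd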

Note: please verify whether you do, in fact, have a problem with the client 
boot after installation.  Your step 9 should not be needed.  If it turns out 
you do need it, then we need a valid solution, i.e., I need to find out why 
SC is not doing the right thing; we may need something like Bernard's 
modprobe.conf fix.
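
For reference, the kind of thing I mean by that: pinning the controller probe 
order in the client's /etc/modprobe.conf so the real disks enumerate first 
(the module names below are placeholders only; Bernard's actual fix may 
differ):

  # Hypothetical ordering -- substitute the client's real drivers
  alias scsi_hostadapter mptscsih
  alias scsi_hostadapter1 cciss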

I will be looking at the boot issue this morning; would you please test on 
your side as well?

-- 
David N. Lombard
 
My comments represent my opinions, not those of Intel Corporation.

>-----Original Message-----
>From: [EMAIL PROTECTED] [mailto:oscar-devel-
>[EMAIL PROTECTED] On Behalf Of Fernando Laudares Camargos
>Sent: Tuesday, December 21, 2004 7:46 AM
>To: Bernard Li; OSCAR-DEVEL
>Subject: [Oscar-devel] Re: [Oscar-core] Release
>
>Hello all,
>
>      I apologise for not being present at the teleconference yesterday.
>It was the only day available for interviews to apply for an American
>visa at the consulate in Montreal before January 18th. They kept me there
>from 9 am to 3:30 pm just to tell me that their computer/printer was
>broken, that we could not get our passports back, and that we should
>return in a couple of days.
>
>1) David said that step 8 of README.RHEL was unnecessary; I disagree. I
>have done another installation test without this modification and the
>client's installation failed:
>------------------------------------------------------------------------
>DISKORDER=hd,sd,cciss,ida,rd
>enumerate_disks
>/dev/hda
>/dev/sda
>/dev/sdb
>Partitioning /dev/hda ...
>Old partitioning table for /dev/hda:
>WARNING: Unable to open /dev/ide/host0/bus0/target0/lun0/cd read-write (Read-only file system). /dev/ide/host0/bus0/target0/lun0/cd has been opened read-only.
>ERROR: could not read geometry of /dev/ide/host0/bus0/target0/lun0/cd - Invalid argument.
>--------------------------------------------------------------------------
>      As I see it, the system is recognising our CD-ROM as the first
>disk. What we mention in step 8 of the README is the possible need to
>change the DISKORDER sequence if this happens. In our case, once I do
>that, the client's installation proceeds OK.
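>
>      For example, given the enumeration above, where /dev/hda is the
>CD-ROM and /dev/sda is the real target, moving sd ahead of hd is the kind
>of change step 8 describes (the exact order is machine-specific):
>--------------------------------------------------------------------------
>DISKORDER=sd,hd,cciss,ida,rd
>--------------------------------------------------------------------------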
>
>
>2) test_cluster (RHEL AS 3 update 2, ia64)
>
>First try:
>       PVM and MPICH fail (with 25-28 seconds still left on the countdown):
>--------------------------------------------------------------------------
>PVM:
>[EMAIL PROTECTED] pvm]# cat pvmtest.err
>/var/spool/pbs/mom_priv/jobs/1.whitebox2.SC: line 25:  1796 Segmentation fault      pvmd pvm_nodes
>libpvm [pid1807] /var/spool/pbs/mom_priv/jobs/1.whitebox2.SC: line 27:  1807 Segmentation fault      ./master1
>pvmd3: no process killed
>[EMAIL PROTECTED] pvm]# cat pvmtest.out
>[EMAIL PROTECTED] pvm]#
>
>MPICH:
>[EMAIL PROTECTED] mpich]# cat mpichtest.err
>--------------------------------------------------------------------------
>Synopsis:       mpirun [options] <app>
>                mpirun [options] <where> <program> [<prog args>]
>
>Description:    Start an MPI application in LAM/MPI.
>
>Notes:
>                [options]       Zero or more of the options listed below
>                <app>           LAM/MPI appschema
>                <where>         List of LAM nodes and/or CPUs (examples
>                                below)
>                <program>       Must be a LAM/MPI program that either
>                                invokes MPI_INIT or has exactly one of
>                                its children invoke MPI_INIT
>                <prog args>     Optional list of command line arguments
>                                to <program>
>
>Options:
>                -c <num>        Run <num> copies of <program> (same as -np)
>                -c2c            Use fast library (C2C) mode
>                -client <rank>  <host>:<port>
>                                Run IMPI job; connect to the IMPI server
>                                <host> at port <port> as IMPI client
>                                number <rank>
>                -D              Change current working directory of new
>                                processes to the directory where the
>                                executable resides
>                -f              Do not open stdio descriptors
>                -ger            Turn on GER mode
>                -h              Print this help message
>                -l              Force line-buffered output
>                -lamd           Use LAM daemon (LAMD) mode (opposite of -c2c)
>                -nger           Turn off GER mode
>                -np <num>       Run <num> copies of <program> (same as -c)
>                -nx             Don't export LAM_MPI_* environment variables
>                -O              Universe is homogeneous
>                -pty / -npty    Use/don't use pseudo terminals when stdout
>                                is a tty
>                -s <nodeid>     Load <program> from node <nodeid>
>                -sigs / -nsigs  Catch/don't catch signals in MPI application
>                -ssi <n> <arg>  Set environment variable LAM_MPI_SSI_<n>=<arg>
>                -toff           Enable tracing with generation initially off
>                -ton, -t        Enable tracing with generation initially on
>                -tv             Launch processes under TotalView Debugger
>                -v              Be verbose
>                -w / -nw        Wait/don't wait for application to complete
>                -wd <dir>       Change current working directory of new
>                                processes to <dir>
>                -x <envlist>    Export environment vars in <envlist>
>
>Nodes:          n<list>, e.g., n0-3,5
>CPUS:           c<list>, e.g., c0-3,5
>Extras:         h (local node), o (origin node), N (all nodes), C (all CPUs)
>
>Examples:       mpirun n0-7 prog1
>                Executes "prog1" on nodes 0 through 7.
>
>                mpirun -lamd -x FOO=bar,DISPLAY N prog2
>                Executes "prog2" on all nodes using the LAMD RPI.
>                In the environment of each process, set FOO to the value
>                "bar", and set DISPLAY to the current value.
>
>                mpirun n0 N prog3
>                Run "prog3" on node 0, *and* all nodes.  This executes *2*
>                copies on n0.
>
>                mpirun C prog4 arg1 arg2
>                Run "prog4" on each available CPU with command line
>                arguments of "arg1" and "arg2".  If each node has a
>                CPU count of 1, the "C" is equivalent to "N".  If at
>                least one node has a CPU count greater than 1, LAM
>                will run neighboring ranks of MPI_COMM_WORLD on that
>                node.  For example, if node 0 has a CPU count of 4 and
>                node 1 has a CPU count of 2, "prog4" will have
>                MPI_COMM_WORLD ranks 0 through 3 on n0, and ranks 4
>                and 5 on n1.
>
>                mpirun c0 C prog5
>                Similar to the "prog3" example above, this runs "prog5"
>                on CPU 0 *and* on each available CPU.  This executes
>                *2* copies on the node where CPU 0 is (i.e., n0).
>                This is probably not a useful use of the "C" notation;
>                it is only shown here for an example.
>
>Defaults:       -c2c -w -pty -nger -nsigs
>--------------------------------------------------------------------------
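>
>      Note that the usage text above is LAM/MPI's mpirun help, so the
>test appears to have picked up LAM's mpirun rather than MPICH's. A quick
>check, assuming OSCAR's env-switcher is installed on the cluster (the
>MPICH tag below is only a guess at the local name):
>--------------------------------------------------------------------------
># Show the current default MPI and the tags available
>switcher mpi --show
>switcher mpi --list
># If LAM is the default, select an MPICH tag, then log in again
>switcher mpi = mpich-1.2.5.10
>--------------------------------------------------------------------------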
>
>Second try:
>       PVM still fails (with 29 seconds still left on the countdown):
>--------------------------------------------------------------------------
>PVM:
>[EMAIL PROTECTED] pvm]# cat pvmtest.err
>/var/spool/pbs/mom_priv/jobs/5.whitebox2.SC: line 25:  2190 Segmentation fault      pvmd pvm_nodes
>libpvm [pid2201] /var/spool/pbs/mom_priv/jobs/5.whitebox2.SC: line 27:  2201 Segmentation fault      ./master1
>pvmd3: no process killed
>[EMAIL PROTECTED] pvm]# cat pvmtest.out
>[EMAIL PROTECTED] pvm]#
>--------------------------------------------------------------------------
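>
>      Since pvmd segfaults the same way on both tries, one check worth
>making, assuming the standard PVM console is present on the node (plain
>PVM commands, nothing OSCAR-specific), is to start it by hand outside PBS
>and see whether the daemon itself crashes:
>--------------------------------------------------------------------------
># On the node, as the test user: start the PVM console (spawns pvmd3),
># show the configuration, then shut the daemon down
>pvm
>pvm> conf
>pvm> halt
>--------------------------------------------------------------------------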
>
>3) NFS version:
>       nfs-utils-1.0.6-8.EL
>       redhat-config-nfs-1.0.13-1
>
>
>Regards,
>--
>Fernando Laudares Camargos
>
>       Révolution Linux
>http://www.revolutionlinux.com
>---------------------------------------
>* Any views and opinions presented in this e-mail are solely those of
>the author and do not necessarily represent those of Révolution Linux.
>
>
>


_______________________________________________
Oscar-devel mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/oscar-devel
