Lissa,
This particular issue was resolved by correcting the kickstart file for
rhels6.1.  The install was hanging at this point because of an incorrect
xCAT kickstart template file in the /install/custom/rh/ directory, as
identified by the console output for the rhels6.1 system.  Using the remote
console (rcons), you can see the repeated failures at the point where the
kickstart file fails.  A KVM console session will only display "Freeing
unused kernel memory: 1796k freed".  Once the kickstart file was corrected,
rhels6.1 built correctly : )
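
For anyone hitting the same hang, a minimal sketch of how to regenerate
and sanity-check the generated kickstart before rebooting the node
(assuming pykickstart's ksvalidator is available on the management node;
node8 is the node from this thread):

   nodeset node8 install                  # regenerate /install/autoinst/node8 from the template
   ksvalidator /install/autoinst/node8    # syntax-check the generated kickstart
   rcons node8                            # watch the install; kickstart failures show up here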

I hope this helps someone else!
Mr. Blevens

On Thu, Oct 25, 2012 at 12:45 PM, Jacob Blevens
<[email protected]> wrote:

> Lissa,
> The rinstall is still not working and something is incorrect.  I ran the
> recommended commands and received the same errors as before, except that
> the usb 5-2 lines were not output this time.
>
> What happened:
>
>    - The following commands were run: # nodeset <nodename> install
>    ; rpower <nodename> boot
>
>    - At the console of node8, it gets its ip
>    - mgt(management) and sn(servicenode) servers perform
>    dhcpdiscover/dhcpoffer/dhcprequest/dhcpack for node8
>    - mgt server successfully transfers the rhels6.1 vmlinuz and initrd.img
>    - At the console the "Ready ..." message is displayed and it starts
>    to load
>    - Then it freezes, displaying the following last lines:
>
>    Initializing network drop monitor service
>    Freeing unused kernel memory: 1232k freed
>    Write protecting the kernel read-only data: 10240k
>    Freeing unused kernel memory: 1112k freed
>    Freeing unused kernel memory: 1796k freed
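>
>    In case it is useful to others, the full console history is often
>    easier to review from the conserver log than from a live rcons
>    session (a sketch; the path assumes xCAT's default conserver
>    configuration on the console server):
>
>    less /var/log/consoles/node8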
>
> Thank you!
> Mr. Blevens
>
>
> On Wed, Oct 24, 2012 at 9:18 AM, Lissa Valletta <[email protected]> wrote:
>
>> I assume that if the cluster was built by the CET with xCAT, then rinstall
>> must have been working on these nodes in the past.  Running that command
>> or the equivalent two commands is part of the process of installing the
>> nodes:
>>   nodeset <nodename> install
>>   rpower <nodename> boot
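>>
>> A quick way to confirm the nodeset half took effect before rebooting (a
>> sketch, using node8 from this thread):
>>
>>   nodeset node8 install
>>   nodels node8 chain.currstate    # should now read "install rhels6.1-x86_64-compute"
>>   rpower node8 boot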
>>
>> The only thing I can suggest is check your configuration against our
>> documentation
>>
>> http://sourceforge.net/apps/mediawiki/xcat/index.php?title=XCAT_BladeCenter_Linux_Cluster
>>
>> Maybe someone else on the mailing list will recognize the errors you get.
>>
>>
>> Lissa K. Valletta
>> 2-3/T12
>> Poughkeepsie, NY 12601
>> (tie 293) 433-3102
>>
>>
>>
>>
>> From: Jacob Blevens <[email protected]>
>> To: Lissa Valletta/Poughkeepsie/IBM@IBMUS
>> Cc: [email protected]
>> Date: 10/23/2012 01:17 PM
>>
>> Subject: Re: Fw: [xcat-user] Fwd: On a Previously Working Node and
>> Rinstall Fails
>> ------------------------------
>>
>>
>>
>> The hardware for 'node8' is an x3550 M3 and it is likely that this
>> particular node was installed from the Management Node and not from the
>> Service Node based on the configuration.
>>
>> Secondly, 'node8' was originally built by the CET with xCAT.  To clarify,
>> node1-8 were all built and running rhel6.1 fine until 'rinstall node8'
>> was performed on 'node8'.  If I were to run 'rinstall' on another node in
>> this group 'test', I would expect it to behave the same way as 'node8'.
>> This is not a new node install; it was a working node on which 'rinstall
>> node8' was performed unsuccessfully.
>> Kindly,
>> Mr. Blevens
>>
>> On Tue, Oct 23, 2012 at 12:02 PM, Lissa Valletta <[email protected]>
>> wrote:
>>
>>    I notice that you have servicenode=servicenode_ip,managements_ip and
>>    nfsserver=managements_ip set.  Since you previously had
>>    xcatmaster=managements_ip, is it possible that the nodes were always
>>    installing from the management node and not from your service node?
>>    Another question: was node8 installed originally by xCAT, or are you
>>    trying to set up a node to install with xCAT for the first time?
>>    What hardware are you using?
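>>
>>    A quick way to pull just those attributes for comparison (a sketch;
>>    lsdef's -i flag limits output to the listed attributes):
>>
>>       lsdef node8 -i servicenode,nfsserver,xcatmaster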
>>
>>
>>
>>    Lissa K. Valletta
>>    2-3/T12
>>    Poughkeepsie, NY 12601
>>    (tie 293) 433-3102
>>
>>
>>
>>
>>
>>    From: Jacob Blevens <[email protected]>
>>    To: [email protected]
>>    Cc: Lissa Valletta/Poughkeepsie/IBM@IBMUS
>>    Date: 10/23/2012 10:58 AM
>>    Subject: Re: Fw: [xcat-user] Fwd: On a Previously Working Node and
>>    Rinstall Fails
>>
>>    ------------------------------
>>
>>
>>
>>    Lissa,
>>    Thank you for your response and support on this!
>>
>>    We are running xCAT Version 2.6.11 (svn r11798, built Thu Mar 8
>>    16:09 2012) on both the Management Node and the Service Node.  Management
>>    Node and Service Node are running RHEL 6.1.  The image install for the
>>    'node8' is at RHEL 6.1.  Both Management and Service nodes are syncing
>>    tables correctly.
>>
>>    I investigated the xcatmaster attribute for the node and corrected
>>    the following entry in the noderes xCAT configuration table (the
>>    changed field is the xcatmaster column):
>>
>>    From:
>>    "test","servicenode_ip,managementnode_ip","pxe",,"managementnode_ip",,,"mac","mac",,,"managementnode_ip",,,,,,"0"
>>
>>    To:
>>    "test","servicenode_ip,managementnode_ip","pxe",,"managementnode_ip",,,"mac","mac",,,"servicenode_ip",,,,,,"0"
>>
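>>    As an aside, a sketch of making the same change with the xCAT object
>>    commands instead of editing the table dump directly ('test' is the
>>    node group from the rows above):
>>
>>       chdef -t group -o test xcatmaster=servicenode_ip
>>       lsdef -t group -o test -i xcatmaster    # verify the change
>>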
>>    With the xcatmaster attribute pointing to the Service Node, I ran
>>    another install for node8 ('rinstall node8') and still hit the same
>>    hang while it was trying to boot, as identified below and in the
>>    previous note:
>>
>>    - The following command is run: # rinstall node8
>>    - At the console of node8, it gets its ip
>>    - mgt(management) and sn(servicenode) servers perform
>>    dhcpdiscover/dhcpoffer/dhcprequest/dhcpack for node8
>>    - mgt server successfully transfers the rhels6.1 vmlinuz and
>>    initrd.img
>>    - At the console the "Ready ..." message is displayed and it starts
>>    to load
>>    - Then it freezes, displaying the following last lines:
>>
>>    Initializing network drop monitor service
>>    Freeing unused kernel memory: 1232k freed
>>    Write protecting the kernel read-only data: 10240k
>>    Freeing unused kernel memory: 1112k freed
>>    Freeing unused kernel memory: 1796k freed
>>    usb 5-2: New USB device found, idVendor=04b3, idProduct=4010
>>    usb 5-2: New USB device strings: Mfr=1, Product=2, SerialNumber=0
>>    usb 5-2: Product: RNDISKDC ETHER
>>
>>    Thank you again! Kindly,
>>    Mr. Blevens
>>       From: Lissa Valletta/Poughkeepsie/IBM@IBMUS
>>       To: xCAT Users Mailing list <[email protected]>
>>       Cc: [email protected]
>>       Date: 10/23/2012 07:48 AM
>>       Subject: Re: [xcat-user] Fwd: On a Previously Working Node and
>>       Rinstall Fails
>>       ------------------------------
>>
>>
>>
>>       Could you give us the level of xCAT you are running?   Is it the
>>       same level on the Service node?
>>       nodels -v  will do it.
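>>
>>       For example, a quick parity check (a sketch; 'sn1' is a stand-in
>>       for your service node's hostname and assumes ssh access to it):
>>
>>          nodels -v               # level on the management node
>>          ssh sn1 nodels -v       # level on the service node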
>>
>>       I did notice that on your lsdef of node8 below, your xcatmaster
>>       attribute is the address of the management node.  If you want that
>>       node installed by the service node, then it should be the ip
>>       address of the service node as known by the node.
>>
>>
>>       Lissa K. Valletta
>>       2-3/T12
>>       Poughkeepsie, NY 12601
>>       (tie 293) 433-3102
>>
>>
>>
>>
>>       From: Jacob Blevens <[email protected]>
>>       To: [email protected]
>>       Date: 10/22/2012 03:55 PM
>>       Subject: [xcat-user] Fwd: On a Previously Working Node and
>>       Rinstall Fails
>>       ------------------------------
>>
>>
>>       Background:
>>       A previously built x3550 M3 server with a stateful install of
>>       RHEL 6.1, which was working fine after an IBM CET install, no longer
>>       works after running 'rinstall' on the node.  It is important to note
>>       that the cluster has a management node and a service node.  We tested
>>       'rinstalling' the test node 'node8' with the following results:
>>
>>       The sequence of events before the problem:
>>       - The following command is run: # rinstall node8
>>       - At the console of node8, it gets its ip
>>       - mgt(management) and sn(servicenode) servers perform
>>       dhcpdiscover/dhcpoffer/dhcprequest/dhcpack for node8
>>       - mgt server successfully transfers the rhels6.1 vmlinuz and
>>       initrd.img
>>       - At the console the "Ready ..." message is displayed and it
>>       starts to load
>>       - Then it freezes, displaying the following last lines:
>>
>>       Initializing network drop monitor service
>>       Freeing unused kernel memory: 1232k freed
>>       Write protecting the kernel read-only data: 10240k
>>       Freeing unused kernel memory: 1112k freed
>>       Freeing unused kernel memory: 1796k freed
>>       usb 5-2: New USB device found, idVendor=04b3, idProduct=4010
>>       usb 5-2: New USB device strings: Mfr=1, Product=2, SerialNumber=0
>>       usb 5-2: Product: RNDISKDC ETHER
>>
>>       During problem troubleshooting:
>>       - Can ping node8
>>       - Can log in to the IMM through the browser
>>       - Cannot ssh or telnet into node8
>>       - rcons is empty with nothing to display/no logon
>>
>>       xCAT tables:
>>
>>       #nodels node8 chain
>>       chain.ondiscover: nodediscover
>>       chain.chain: runcmd=bmcsetup,standby
>>       chain.node: node8
>>       chain.currstate: install rhels6.1-x86_64-compute
>>       chain.currchain: boot
>>       chain.comments:
>>       chain.disable:
>>
>>       #lsdef node8
>>       Object Name: node8
>>       arch = x86_64
>>       bmc=node8-bmc
>>       bmcport=0
>>       chain=runcmd=bmcsetup,standby
>>       cons=ipmi
>>       conserver=#.#.#.# (managements_ip)
>>       currchain=boot
>>       currstate=install rhels6.1-x86_64-compute
>>       groups=rack10,test,intel,ipmi,compute
>>       initrd=xcat/rhels6.1/x86_64/initrd.img
>>       installnic=mac
>>       kcmdline=nofb utf8 ks=http://managements_ip/install/autoinst/node8
>>       ksdevice=#:#:#:#:#:# console=tty0 console=tty0,115200n8r noipv6
>>       kernel=xcat/rhels6.1/x86_64/vmlinuz
>>       mac=#:#:#:#:#:#
>>       mgt=ipmi
>>       mtm=serial##
>>       netboot=pxe
>>       nfsserver=managements_ip
>>       nodetype=osi
>>       ondiscover=nodediscover
>>       os=rhels6.1
>>       postbootscripts=otherpkgs,site.post,site.gpfs
>>       postscripts=syslog,remoteshell,syncfiles,site.hardeths,setupntp
>>       power=ipmi
>>       primarynic=mac
>>       profile=compute
>>       provmethod=install
>>       rack=10
>>       serial=K######
>>       serialflow=hard
>>       serialport=0
>>       serialspeed=115200
>>       servicenode=servicenode_ip,managements_ip
>>       status=installing
>>       statustime=10-22-2012 13:00
>>       switch=cisco-enet01
>>       switchport=#/#
>>       unit=40
>>       xcatmaster=managements_ip
>>
>>       #tabdump nodehm
>>       "ipmi","ipmi","ipmi",,,"managements_ip","0","115200","hard",,,
>>
>>       "test","ipmi","ipmi","ipmi",,,"managements_ip","0","115200","hard",,,
>>
>>       #tabdump ipmi
>>       "ipmi","/\z/-bmc/","0",,,,
>>
>>       #tabdump nodetype
>>       "test","rhels6.1","x86_64",,,,
>>
>>       Any input or advice on how to resolve this issue would be greatly
>>       appreciated!  Thank you!
>>       Mr. Blevens
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>

