Re: [xcat-user] noderes,nics,confignics

2019-02-19 Thread Bin XA Xu
Hi Thomas,
 
    I don't think 'noderes.installnic' can meet your expectation: PXE from eth0, but set the IP on eth2 after provisioning (DHCP or static).
 
    I'm still curious about your latest discovery; it looks like the node was discovered via eth0 by some other method, rather than via eth2 with switch-based discovery.
 
    And as discovery is only done once, is it acceptable for you to set the MAC of eth0 to `noip` and then refresh DHCP? That would keep the DHCP server from answering eth0's MAC, so that your node always uses eth2 for PXE.
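Concretely, the suggestion might look something like this (the `!*NOIP*` marker and exact mac-table syntax here are assumptions on my part; check `tabdump -d mac` and the makedhcp man page for your xCAT version):

```shell
# Mark eth0's MAC so DHCP will not answer it, keeping eth2 as the
# discovery/install interface (MACs taken from Thomas's node tars-113):
chdef tars-113 mac='0c:c4:7a:4d:85:a8!*NOIP*|0c:c4:7a:58:c7:6a!tars-113-eth2'

# Regenerate the DHCP entries for the node so the change takes effect:
makedhcp tars-113
```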
 
Bin Xu
HPC Software Development, Software Defined Infrastructure, IBM Systems
Phone: 86-010-82454067
E-mail: bx...@cn.ibm.com
 
 
- Original message -
From: Thomas HUMMEL
To: xcat-user@lists.sourceforge.net
Subject: Re: [xcat-user] noderes,nics,confignics
Date: Tue, Feb 19, 2019 7:29 PM
On 2/19/19 11:20 AM, Yuan Y Bai wrote:
> Hi Thomas
> To set install NIC with static ip, you can follow these steps:

ok thanks

> *nics.nicips* is mainly used for secondary nics, it contains
> comma-separated list of IP addresses per NIC. You can refer to usage
> command: `tabdump -d nics|grep nicips`

Ok, what I previously tried with this attribute was to set the eth2 IP with
the same regexp as in the hosts table, but the object definition would take
the regexp literally. Then I tried to put in the real IP (once the regexp is
expanded): the node object definition would have the correct nicips
attribute, but in the end the node would still get netbooted with eth0
configured: why?

> *noderes.installnic*: The network adapter on the node that will be used
> for OS deployment, the installnic can be set to the network adapter name
> or the mac address or the keyword "mac" which means that the network
> interface specified by the mac address in the mac table will be used.
> You can refer to usage command: `tabdump -d noderes|grep installnic`.

Yes, I know this man section, but I cannot see where installnic fits in my
scenario: again, setting installnic to eth2 has no effect on which NIC gets
configured after discovery and once the node is netbooted. How is this
attribute supposed to be used? Is it only for stateful mode?

Thanks.
--
TH.
 


___
xCAT-user mailing list
xCAT-user@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/xcat-user


Re: [xcat-user] noderes,nics,confignics

2019-02-19 Thread Er Tao Zhao
Hi, Thomas
 
Can tars-113-eth2 be resolved to 192.168.128.115 from your DNS server?
 
Can you show us the entries in /var/lib/dhcpd/dhcpd.leases for the MACs of eth0 and eth2?
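On the management node, those two checks could be run roughly like this (hostname, MACs, and lease path taken from earlier in the thread):

```shell
# 1. Does tars-113-eth2 resolve to 192.168.128.115 on the DNS server?
nslookup tars-113-eth2

# 2. Show the lease blocks recorded for the eth0 and eth2 MACs:
grep -E -B 2 -A 6 '0c:c4:7a:4d:85:a8|0c:c4:7a:58:c7:6a' \
    /var/lib/dhcpd/dhcpd.leases
```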
 
Thx!
 
Best Regards,
---
Zhao Er Tao
IBM China System and Technology Laboratory, Beijing
Tel: (86-10)82450485
Email: erta...@cn.ibm.com
Address: 1/F, 28 Building, ZhongGuanCun Software Park, No.8 DongBeiWang West Road, Haidian District, Beijing, 100193, P.R.China
 
 
- Original message -
From: Thomas HUMMEL
To: xcat-user@lists.sourceforge.net
Subject: Re: [xcat-user] noderes,nics,confignics
Date: Tue, Feb 19, 2019 5:18 PM
On 2/19/19 2:34 AM, Bin XA Xu wrote:
> To set static, you can use `hardeths` or `confignetwork -s`

Ok. But does it have something to do with either noderes.installnic or
nics.nicips (I still cannot figure out how to use those attributes)?

Thanks.
--
TH
 




Re: [xcat-user] xCAT and Vagrant

2019-02-19 Thread Song BJ Yang
Hi Daniel,
 
Great work!
 
I will look into the repo and forward this mail to the team to see whether we can build automation of discovery and hardware control like this; currently all such tests are triggered manually on bare-metal servers.
 
One comment on your repo https://github.com/dhilst/qemu-ipmi: it would be better if you added some description and steps to the README.md, maybe including some of the information in this mail. :)
 
--
YANG Song (杨嵩)
IBM China System Technology Laboratory
Tel: 86-10-82452903
Email: yang...@cn.ibm.com
Address: Building 28, ZhongGuanCun Software Park, No.8, Dong Bei Wang West Road, Haidian District, Beijing 100193, PRC
北京市海淀区东北旺西路8号中关村软件园28号楼 邮编: 100193
 
 
- Original message -
From: Daniel Hilst
To: xCAT Users Mailing list
Subject: Re: [xcat-user] xCAT and Vagrant
Date: Wed, Feb 20, 2019 9:45 AM
Hi everyone
 
I'm using OpenIPMI and QEMU to get xCAT working in a virtualized environment. I could test sequential discovery with such a setup, and also use the rpower and rcons commands transparently. I have everything on GitHub. It's very handcrafted for my environment, but it has been working; I was just waiting for the right time to improve it :)
First of all, you need OpenIPMI and OpenIPMI-lanserv installed; this is easy on most modern distributions. I'm using Fedora, but any distro would work. The network topology is the simplest one: there is a br0 on my host that receives the gateway IP and all BMC IPs, and there are three virtual machines, HN, CN1 and CN2, representing the headnode and two compute nodes. Each machine has a single NIC for the sake of simplicity. I'm using iptables to masquerade the network output to my wifi card.
For each VM there is an ipmisim process that acts as its BMC. For each process an address is attached to br0; this was the way I found to get host processes and VMs communicating. Every MAC and IP was hard coded to get it working, so there is a lot of room for improvement. Each ipmisim has its own configuration file, which has a start command. When you issue a power-on command over IPMI, it runs this start command, which points to a script that powers up the virtual machine. SoL is working too, at least for kernel messages, as long as you redirect them to the serial console.
 
 
HN IP: 192.168.123.2
CN 1 IP 192.168.123.3 BMC 192.168.123.4
CN 2 IP 192.168.123.5 BMC 192.168.123.6 (I rarely use this one; my host machine lacks memory)
 
 
Here it is: https://github.com/dhilst/qemu-ipmi
 
PS: I spent about a month trying to get snmpsim [1] integrated with this environment to emulate an SNMPv3 switch, for the sake of testing switch-based discovery without touching the xCAT bits, but failed miserably; there may be leftover bits of this work in the scripts.

[1] https://github.com/etingof/snmpsim
Regards! 

 



Re: [xcat-user] xCAT and Vagrant

2019-02-19 Thread Daniel Hilst
I forgot to say how I use it. At this time I just fire up start-all.sh and
it does everything I need.

Before starting, you need to install the HN machine's OS and xCAT, and
define the network and nodes. The compute nodes can be started afterwards
with the rpower command.

Regards



Re: [xcat-user] xCAT and Vagrant

2019-02-19 Thread Daniel Hilst
Hi everyone

I'm using OpenIPMI and QEMU to get xCAT working in a virtualized
environment. I could test sequential discovery with such a setup, and also
use the rpower and rcons commands transparently. I have everything on
GitHub. It's very handcrafted for my environment, but it has been working;
I was just waiting for the right time to improve it :)

First of all, you need OpenIPMI and OpenIPMI-lanserv installed; this is
easy on most modern distributions. I'm using Fedora, but any distro would
work. The network topology is the simplest one: there is a br0 on my host
that receives the gateway IP and all BMC IPs, and there are three virtual
machines, HN, CN1 and CN2, representing the headnode and two compute
nodes. Each machine has a single NIC for the sake of simplicity. I'm using
iptables to masquerade the network output to my wifi card.

For each VM there is an ipmisim process that acts as its BMC. For each
process an address is attached to br0; this was the way I found to get
host processes and VMs communicating. Every MAC and IP was hard coded to
get it working, so there is a lot of room for improvement. Each ipmisim
has its own configuration file, which has a start command. When you issue
a power-on command over IPMI, it runs this start command, which points to
a script that powers up the virtual machine. SoL is working too, at least
for kernel messages, as long as you redirect them to the serial console.

HN IP: 192.168.123.2
CN 1 IP 192.168.123.3 BMC 192.168.123.4
CN 2 IP 192.168.123.5 BMC 192.168.123.6 (I rarely use this one; my host
machine lacks memory)

Here it is: https://github.com/dhilst/qemu-ipmi

PS: I spent about a month trying to get snmpsim [1] integrated with this
environment to emulate an SNMPv3 switch, for the sake of testing
switch-based discovery without touching the xCAT bits, but failed
miserably; there may be leftover bits of this work in the scripts.

[1] https://github.com/etingof/snmpsim

Regards!



Re: [xcat-user] xCAT and Vagrant

2019-02-19 Thread Kevin Keane
I've tried virtualizing xCAT for testing purposes. To some extent, it
works, but the really interesting parts are very hard to virtualize. What
tripped me up was UEFI booting and BMC setup/IPMI. Without getting these
pieces, all you can test in xCAT is whether the tables are set up
correctly. Even when you do get it to work, the virtualized version was
different enough from actual hardware to be of limited use.

Also, even when you do get it to work, these things are very
hypervisor-specific. I eventually got UEFI-booting to work in libvirt, but
then had to switch to VirtualBox due to another project. And I never got to
the point where I could have put it into Vagrant.

___
Kevin Keane | Systems Architect | University of San Diego ITS |
kke...@sandiego.edu
Maher Hall, 192 |5998 Alcalá Park | San Diego, CA 92110-2492 | 619.260.6859

*REMEMBER! **No one from IT at USD will ever ask to confirm or supply your
password*.
These messages are an attempt to steal your username and password. Please
do not reply to, click the links within, or open the attachments of these
messages. Delete them!




On Tue, Feb 19, 2019 at 7:38 AM Christopher Walker 
wrote:

> Is there a Vagrant model of an xCAT cluster?
>
> If there were, it should be possible to build a test case for:
> https://github.com/xcat2/xcat-core/issues/2633
>
> While clearly it doesn't model the hardware, it would allow some sort of
> testing of changes.
>
> Chris
> --
> Dr Christopher J. Walker
> ITS Research
> Queen Mary University of London, E1 4NS
>
>


[xcat-user] xCAT and Vagrant

2019-02-19 Thread Christopher Walker
Is there a Vagrant model of an xCAT cluster?

If there were, it should be possible to build a test case for:
https://github.com/xcat2/xcat-core/issues/2633

While clearly it doesn't model the hardware, it would allow some sort of 
testing of changes.

Chris
-- 
Dr Christopher J. Walker
ITS Research
Queen Mary University of London, E1 4NS




Re: [xcat-user] noderes,nics,confignics

2019-02-19 Thread Thomas HUMMEL

On 2/19/19 11:20 AM, Yuan Y Bai wrote:

> Hi Thomas
> To set install NIC with static ip, you can follow these steps:


ok thanks


> *nics.nicips* is mainly used for secondary nics, it contains
> comma-separated list of IP addresses per NIC. You can refer to usage
> command: `tabdump -d nics|grep nicips`


Ok, what I previously tried with this attribute was to set the eth2 IP with
the same regexp as in the hosts table, but the object definition would
take the regexp literally. Then I tried to put in the real IP (once the
regexp is expanded): the node object definition would have the correct
nicips attribute, but in the end the node would still get netbooted with
eth0 configured: why?


> *noderes.installnic*: The network adapter on the node that will be used
> for OS deployment, the installnic can be set to the network adapter name
> or the mac address or the keyword "mac" which means that the network
> interface specified by the mac address in the mac table will be used.
> You can refer to usage command: `tabdump -d noderes|grep installnic`.


Yes, I know this man section, but I cannot see where installnic fits in
my scenario: again, setting installnic to eth2 has no effect on which
NIC gets configured after discovery and once the node is netbooted.

How is this attribute supposed to be used? Is it only for stateful mode?

Thanks.

--
TH.






Re: [xcat-user] noderes,nics,confignics

2019-02-19 Thread Yuan Y Bai
Hi Thomas
 
To set install NIC with static ip, you can follow these steps:
1. chdef <node> ip=<ip>
 
2.  Use `confignetwork -s`, details refer to: https://xcat-docs.readthedocs.io/en/latest/guides/admin-guides/manage_clusters/common/deployment/network/cfg_network_ethernet_nic.html#configure-adapters-with-static-ips
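Filled in with the node from this thread, the two steps might look like this (the node name and IP are taken from Thomas's setup; adjust to your own):

```shell
# Step 1: set the static IP on the node object
chdef tars-113 ip=192.168.128.115

# Step 2: run the confignetwork postscript on the node to apply it
updatenode tars-113 -P "confignetwork -s"
```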
 
 
BTW:
 
nics.nicips is mainly used for secondary nics, it contains comma-separated list of IP addresses per NIC. You can refer to usage command: `tabdump -d nics|grep nicips`
 
noderes.installnic : The network adapter on the node that will be used for OS deployment, the installnic can be set to the network adapter name or the mac address or the keyword "mac" which means that the network interface specified by the mac address in the mac table will be used. You can refer to usage command: `tabdump -d noderes|grep installnic`.
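As a hedged example of the nicips form (node and NIC names are from this thread; the `nicips.<nic>=<ip>` attribute style is the one described in the nics man page):

```shell
# Give the secondary NIC eth2 a static IP via the nics table:
chdef tars-113 nicips.eth2=192.168.128.115 nictypes.eth2=ethernet

# Verify the resulting nics table row:
tabdump nics | grep tars-113
```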
 
 
 
Best Regards
--
Yuan Bai (白媛)
CSTL HPC System Management Development
Tel: 86-10-82451401
E-mail: by...@cn.ibm.com
Address: IBM ZGC Campus. Ring Building 28, ZhongGuanCun Software Park, No.8 Dong Bei Wang West Road, Haidian District, Beijing P.R.China 100193
IBM环宇大厦 北京市海淀区东北旺西路8号,中关村软件园28号楼 邮编:100193
 
 
- Original message -
From: Thomas HUMMEL
To: xcat-user@lists.sourceforge.net
Subject: Re: [xcat-user] noderes,nics,confignics
Date: Tue, Feb 19, 2019 5:18 PM
On 2/19/19 2:34 AM, Bin XA Xu wrote:
> To set static, you can use `hardeths` or `confignetwork -s`

Ok. But does it have something to do with either noderes.installnic or
nics.nicips (I still cannot figure out how to use those attributes)?

Thanks.
--
TH
 




Re: [xcat-user] noderes,nics,confignics

2019-02-19 Thread Thomas HUMMEL

On 2/19/19 2:34 AM, Bin XA Xu wrote:

> To set static, you can use `hardeths` or `confignetwork -s`


Ok. But does it have something to do with either noderes.installnic or
nics.nicips (I still cannot figure out how to use those attributes)?


Thanks.

--
TH




Re: [xcat-user] noderes,nics,confignics

2019-02-19 Thread Thomas HUMMEL

On 2/19/19 2:34 AM, Bin XA Xu wrote:

> But it should be handled when discovering, xCAT will assign the same
> IP to eth0 and eth2 during the auto-discovery.
>
> Ertao, could you help to give more information about that?
> And Thomas, could you give a `lsdef` output on your node, before
> discovering and after discovering?


Thanks for your answer. Sorry for the following long post, but it will give
you all the details needed, just to make sure I am complete about my setup:


- eth0 is connected to 1Gb/s switchA/portA which allows untagged 
incoming packets and tags them in the vlan matching the cluster private 
subnet
- eth2 is connected  to 10Gb/s switchB/portB which allows untagged 
incoming packets and tags them in the vlan matching the cluster private 
subnet (same vlan as above)


That's what I meant when I said "are on the same subnet" but I expect 
only one of those 2 nics to get the node desired ip address (as stated 
with a regexp in the hosts table)


[In addition, the BMC is configured as a chain task and uses the same
physical port as eth0 but a different vlan; the BMC card is configured to
tag packets]


Here is the info you asked for, corresponding to a scenario where I start
from scratch (the node doesn't exist) and the BIOS on the node PXE boots
in this order:


1. eth0
2. eth1 [not connected]
3. eth2

and ends up with the node being correctly provisioned (and with ONLY one
IP), but through eth0 and with eth0 carrying the final desired IP. This is
what I'd like to avoid (i.e., guard against such a BIOS misconfiguration,
since eth2 should come first).


1) my subnets (note the dynamic address range)

"tars-ipmi","10.6.96.0","255.255.252.0",,"10.6.96.1",,
"tars","192.168.128.0","255.255.248.0","eth1",,"192.168.132.2","192.168.132.2""192.168.134.2-192.168.135.254",,"tars.cluster.pasteur.fr",,

2) I rmdef'ed the node and did some cleaning to emulate a first-time
creation


# ls -l /tftpboot/xcat/xnba/nodes/tars-113*
ls: cannot access /tftpboot/xcat/xnba/nodes/tars-113*: No such file or 
directory


# grep -E '(0c:c4:7a:4d:85:a8|0c:c4:7a:4d:85:a9|0c:c4:7a:58:c7:6a)' 
/var/lib/dhcpd/dhcpd.leases

#

3) the node before genesis :

# lsdef tars-113
Object name: tars-113
addkcmdline=ipv6.disable=1 biosdevname=0 net.ifnames=0 
rd.driver.blacklist=nouveau nouveau.modeset=0

arch=x86_64
bmc=10.6.96.115
bmcpassword=
bmcport=0
bmcusername=

chain=runcmd=bmcsetup,runimage=http://xcat-tars/install/sum_activate/sum_activate.tgz,osimage=centos6.10-x86_64-netboot-compute-prod
groups=tars-compute,tars-ipmi,tars,standard,b10
ip=192.168.128.115
mgt=ipmi
os=centos6.10
postbootscripts=otherpkgs
profile=compute
provmethod=centos6.10-x86_64-netboot-compute-prod
supportedarchs=x86,x86_64
switch=b10b4.dc1.pasteur.fr
switchport=8

4) at the console I saw the following happen

eth0 : 192.168.134.252
no dhcp answer for eth2

then :

eth0 gets  192.168.128.115 which is the correct node regexp assigned ip
eth2 gets 192.168.134.250 which is from the dynamic range

-> I'm not sure what happened here and who did what

5) the node once netbooted (after genesis)

$ ip addr
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
2: eth0:  mtu 1500 qdisc mq state UP 
qlen 1000

link/ether 0c:c4:7a:4d:85:a8 brd ff:ff:ff:ff:ff:ff
inet 192.168.128.115/21 brd 192.168.135.255 scope global eth0
3: eth1:  mtu 1500 qdisc noop state DOWN qlen 1000
link/ether 0c:c4:7a:4d:85:a9 brd ff:ff:ff:ff:ff:ff
4: eth2:  mtu 1500 qdisc noop state DOWN qlen 1000
link/ether 0c:c4:7a:58:c7:6a brd ff:ff:ff:ff:ff:ff

-> it has been installed via and on eth0. I would have liked to be able
to force the eth2 configuration with this IP even in the case where PXE
was initially done through eth0.


6) the node definition once discovered :

# lsdef -t node tars-113
Object name: tars-113
addkcmdline=ipv6.disable=1 biosdevname=0 net.ifnames=0 
rd.driver.blacklist=nouveau nouveau.modeset=0

arch=x86_64
bmc=10.6.96.115
bmcpassword=
bmcport=0
bmcusername=

chain=runcmd=bmcsetup,runimage=http://xcat-tars/install/sum_activate/sum_activate.tgz,osimage=centos6.10-x86_64-netboot-compute-prod
cpucount=12
cputype=Intel(R) Xeon(R) CPU E5-2620 v3 @ 2.40GHz
currchain=osimage=centos6.10-x86_64-netboot-compute-prod
currstate=netboot centos6.10-x86_64-compute
disksize=sda:256GB
groups=tars-compute,tars-ipmi,tars,standard,b10

initrd=xcat/osimage/centos6.10-x86_64-netboot-compute-prod/initrd-stateless.gz
ip=192.168.128.115

kcmdline=imgurl=http://!myipfn!:80//install/netboot/centos6.10/x86_64/compute/prod/rootimg.gz 
XCAT=!myipfn!:3001 NODE=tars-113 FC=0

kernel=xcat/osimage/centos6.10-x86_64-netboot-compute-prod/kernel
mac=0c:c4:7a:4d:85:a8|0c:c4:7a:58:c7:6a!tars-113-eth2
memory=258373MB
mgt=ipmi
netboot=xnba
os=centos6.10