Hi,

I've got a cluster running Lustre 2.11 with two routers and 68 compute nodes.
It's the first time I've used a post-multi-rail version of Lustre.

The problem I'm trying to troubleshoot is that my sample compute node (ulna66)
seems to think the router I configured (ulna4) is down, so an attempt to ping
outside the cluster fails with "no route to XXX" on the console. I can lctl
ping the router from the compute node and vice versa. Forwarding is enabled on
the router node via a modprobe argument, sketched below.
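
For reference, that's just the lnet module's "forwarding" option; on ulna4 it
amounts to something like this (the file name is only where we happen to keep
it):

# /etc/modprobe.d/lnet.conf on the router
options lnet forwarding="enabled"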

lnetctl route show reports that the route is down.  Where I'm stuck is figuring 
out what in userspace (e.g. lnetctl or lctl) can tell me why.
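
The only related knobs I've found so far are the router-checker module
parameters, which (assuming the stock names in 2.11) can be read back from
sysfs:

cat /sys/module/lnet/parameters/live_router_check_interval
cat /sys/module/lnet/parameters/dead_router_check_interval
cat /sys/module/lnet/parameters/router_ping_timeout
cat /sys/module/lnet/parameters/avoid_asym_router_failure
cat /sys/module/lnet/parameters/check_routers_before_use

Those are thresholds rather than per-route state, though, so they don't say why
this particular route is down.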

The compute node's lnet configuration is:

[root@ulna66:lustre-211]# cat /etc/lnet.conf
ip2nets:
  - net-spec: o2ib33
    interfaces:
         0: hsi0
    ip-range:
         0: 192.168.128.*
route:
    - net: o2ib100
      gateway: 192.168.128.4@o2ib33
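
For completeness, the config is loaded by the lnet systemd unit; assuming the
stock service file, the effect on boot is roughly:

modprobe lnet
lnetctl lnet configure
lnetctl import < /etc/lnet.conf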

After I start lnet, systemctl reports success and the state is as follows:

[root@ulna66:lustre-211]# lnetctl net show
net:
    - net type: lo
      local NI(s):
        - nid: 0@lo
          status: up
    - net type: o2ib33
      local NI(s):
        - nid: 192.168.128.66@o2ib33
          status: up
          interfaces:
              0: hsi0

[root@ulna66:lustre-211]# lnetctl peer show --verbose
peer:
    - primary nid: 192.168.128.4@o2ib33
      Multi-Rail: False
      peer ni:
        - nid: 192.168.128.4@o2ib33
          state: up
          max_ni_tx_credits: 8
          available_tx_credits: 8
          min_tx_credits: 7
          tx_q_num_of_buf: 0
          available_rtr_credits: 8
          min_rtr_credits: 8
          refcount: 4
          statistics:
              send_count: 2
              recv_count: 2
              drop_count: 0

[root@ulna66:lustre-211]# lnetctl route show --verbose
route:
    - net: o2ib100
      gateway: 192.168.128.4@o2ib33
      hop: -1
      priority: 0
      state: down
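
If I'm reading the docs right, with avoid_asym_router_failure set a route can
be marked down because the router's own NI on the remote net (o2ib100 here) is
down, so the router's view is worth checking too, e.g. on ulna4:

[root@ulna4]# lnetctl net show --verbose
[root@ulna4]# lnetctl routing show

Both the o2ib33 and o2ib100 NIs ought to show status: up there, and routing
should show as enabled.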

I can instrument the code, but I figure there must be someplace a normal user
can look that I'm unaware of.
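
Failing that, I suppose the next stop is the kernel debug log with the net flag
enabled, along the lines of:

lctl set_param debug=+net
# reproduce the failed ping here
lctl dk > /tmp/lnet-dk.log

but I was hoping for something more direct.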

thanks,

Olaf P. Faaland
Livermore Computing