Ok, I just tried 2.12.4, and the problem still persists. The only difference I see now is that the error messages are appearing in syslog instead of needing to pull them from the debug log:

[  230.413761] LNetError: 1423:0:(o2iblnd.c:941:kiblnd_create_conn()) Can't create QP: -12, send_wr: 32634, recv_wr: 254, send_sge: 2, recv_sge: 1
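(For what it's worth, -12 is ENOMEM, so the HCA appears to be rejecting a QP that large.) In case it helps anyone compare the two ends, here's the quick check I've been running on clients and servers; it's just a sketch that reads the live module parameters out of sysfs, assuming ko2iblnd exposes them there (it did on my nodes):

    # print the ko2iblnd parameters actually in effect on this node;
    # run on a client and a server and diff the output
    for p in peer_credits peer_credits_hiw credits concurrent_sends \
             ntx map_on_demand fmr_pool_size fmr_flush_trigger fmr_cache; do
        printf '%-20s %s\n' "$p" "$(cat /sys/module/ko2iblnd/parameters/$p 2>/dev/null)"
    done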
Thanks,
Kevin

On Wed, Feb 12, 2020 at 3:50 PM Andreas Dilger <[email protected]> wrote:

> Can you please try 2.12.4? It was just released yesterday and has a number
> of fixes.
>
> On Feb 12, 2020, at 13:36, Kevin M. Hildebrand <[email protected]> wrote:
>
> I just updated some of my clients to RHEL 7.7, Lustre 2.12.3, MOFED 4.7.
> Server version is 2.10.8.
>
> I'm now getting errors mounting the filesystem on the client. In fact, I
> can't even do an 'lctl ping' to any of the servers without getting an I/O
> error.
>
> Debug logs show this message when I attempt an lctl ping:
> 00000800:00020000:0.0:1581538955.090767:0:20471:0:(o2iblnd.c:941:kiblnd_create_conn())
> Can't create QP: -12, send_wr: 32634, recv_wr: 254, send_sge: 2, recv_sge: 1
>
> # lctl list_nids
> 10.11.80.65@o2ib3
> # lctl ping 10.11.80.50@o2ib3
> failed to ping 10.11.80.50@o2ib3: Input/output error
>
> Interestingly, if I do an 'lctl ping' to the client _from_ the server, the
> ping succeeds, and from that point on pings from client _to_ server work
> fine until the client is rebooted or lnet is reloaded.
>
> ko2iblnd parameters match on clients and servers, namely:
> options ko2iblnd peer_credits=128 peer_credits_hiw=64 credits=1024
> concurrent_sends=256 ntx=2048 map_on_demand=32 fmr_pool_size=2048
> fmr_flush_trigger=512 fmr_cache=1
>
> Anyone have any thoughts?
>
> Thanks,
> Kevin
>
> --
> Kevin Hildebrand
> University of Maryland
> Division of IT
>
> Cheers, Andreas
> --
> Andreas Dilger
> Principal Lustre Architect
> Whamcloud
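P.S. Since the modprobe options and what LNet actually negotiates per connection can differ, it may also be worth dumping the in-effect tunables on both ends and comparing; a minimal check, assuming the lnetctl that ships with 2.12:

    lnetctl net show -v     # NI tunables actually in effect (peer_credits, etc.)
    lnetctl peer show -v    # per-peer credit/state view after a connect attempt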
