I'm curious about this too. We see these messages sometimes when things have gone horribly wrong but also sometimes during recovery events. Here's a recent one:

loremds20 (manager/nsd node):
Mon Jan 16 14:19:02.048 2017: [E] VERBS RDMA rdma read error IBV_WC_REM_ACCESS_ERR to 10.101.11.6 (lorej006) on mlx5_0 port 1 fabnum 3 vendor_err 136
Mon Jan 16 14:19:02.049 2017: [E] VERBS RDMA closed connection to 10.101.11.6 (lorej006) on mlx5_0 port 1 fabnum 3 due to RDMA read error IBV_WC_REM_ACCESS_ERR index 11

lorej006 (client):
Mon Jan 16 14:19:01.990 2017: [N] VERBS RDMA closed connection to 10.101.53.18 (loremds18) on mlx5_0 port 1 fabnum 3 index 2
Mon Jan 16 14:19:01.995 2017: [N] VERBS RDMA closed connection to 10.101.53.19 (loremds19) on mlx5_0 port 1 fabnum 3 index 0
Mon Jan 16 14:19:01.997 2017: [I] Recovering nodes: 10.101.53.18 10.101.53.19
Mon Jan 16 14:19:02.047 2017: [W] VERBS RDMA async event IBV_EVENT_QP_ACCESS_ERR on mlx5_0 qp 0x7fffe550f1c8.
Mon Jan 16 14:19:02.051 2017: [E] VERBS RDMA closed connection to 10.101.53.20 (loremds20) on mlx5_0 port 1 fabnum 3 error 733 index 1
Mon Jan 16 14:19:02.071 2017: [I] Recovered 2 nodes for file system tnb32.
Mon Jan 16 14:19:02.140 2017: [I] VERBS RDMA connecting to 10.101.53.20 (loremds20) on mlx5_0 port 1 fabnum 3 index 0
Mon Jan 16 14:19:02.160 2017: [I] VERBS RDMA connected to 10.101.53.20 (loremds20) on mlx5_0 port 1 fabnum 3 sl 0 index 0

I had just shut down loremds18 and loremds19, so recovery was certainly taking place, and that's when the error seems to have occurred.

I looked up the meaning of IBV_WC_REM_ACCESS_ERR here (http://www.rdmamojo.com/2013/02/15/ibv_poll_cq/) and see this:

IBV_WC_REM_ACCESS_ERR (10) - Remote Access Error: a protection error occurred on a remote data buffer to be read by an RDMA Read, written by an RDMA Write or accessed by an atomic operation. This error is reported only on RDMA operations or atomic operations. Relevant for RC QPs.
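
For anyone who wants to see where that status actually surfaces: below is a minimal sketch (my own, not GPFS code) of draining a completion queue with libibverbs, assuming an already-created struct ibv_cq *cq with outstanding work requests:

    #include <stdio.h>
    #include <infiniband/verbs.h>

    /* Poll completions off a CQ and report remote access errors. */
    static void drain_cq(struct ibv_cq *cq)
    {
        struct ibv_wc wc;
        int n;

        while ((n = ibv_poll_cq(cq, 1, &wc)) > 0) {
            if (wc.status == IBV_WC_REM_ACCESS_ERR) {
                /* The remote side rejected our RDMA read/write:
                 * bad rkey, missing access flags on the remote MR,
                 * or the remote QP/MR went away mid-operation. */
                fprintf(stderr, "wr_id %llu: %s (vendor_err 0x%x)\n",
                        (unsigned long long)wc.wr_id,
                        ibv_wc_status_str(wc.status),
                        (unsigned)wc.vendor_err);
            }
        }
        if (n < 0)
            fprintf(stderr, "ibv_poll_cq failed\n");
    }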

My take: during recovery it looks like one end of the connection is more or less hanging up on the other, the RDMA-level equivalent of "Connection reset by peer" (ECONNRESET).
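
The IBV_EVENT_QP_ACCESS_ERR async event in the lorej006 log above is delivered through the verbs async event queue. Here's a hedged sketch of how a consumer sees it, assuming an opened struct ibv_context *ctx; presumably mmfsd runs something similar and then tears down and reconnects the QP (the "closed connection ... error 733" / "connecting to" pair in the log):

    #include <stdio.h>
    #include <infiniband/verbs.h>

    /* Block on the device's async event queue. When a remote
     * operation violates a QP's access rights, the QP transitions
     * to the error state and IBV_EVENT_QP_ACCESS_ERR is raised. */
    static void async_event_loop(struct ibv_context *ctx)
    {
        struct ibv_async_event ev;

        while (ibv_get_async_event(ctx, &ev) == 0) {
            if (ev.event_type == IBV_EVENT_QP_ACCESS_ERR)
                fprintf(stderr, "access error on qp %p, reconnect needed\n",
                        (void *)ev.element.qp);
            ibv_ack_async_event(&ev);  /* every event must be acked */
        }
    }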

But as I said at the start, we also see these errors when something has gone awfully wrong.

-Aaron

On 1/18/17 3:59 AM, Simon Thompson (Research Computing - IT Services) wrote:
I'd be inclined to look at something like:

ibqueryerrors -s PortXmitWait,LinkDownedCounter,PortXmitDiscards,PortRcvRemotePhysicalErrors -c

and see if you have a high number of symbol errors; it might be that a cable
needs replugging or replacing.
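
If those counters do show errors, a follow-up worth trying (assuming the standard infiniband-diags version of the tool) is to record the values and clear them with ibqueryerrors -k, then re-run later to see whether errors are still accumulating on the same port; steadily climbing symbol errors on one link usually point at that particular cable or HCA.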

Simon

From: <gpfsug-discuss-boun...@spectrumscale.org> on behalf of "J. Eric Wonderley" <eric.wonder...@vt.edu>
Reply-To: "gpfsug-discuss@spectrumscale.org" <gpfsug-discuss@spectrumscale.org>
Date: Tuesday, 17 January 2017 at 21:16
To: "gpfsug-discuss@spectrumscale.org" <gpfsug-discuss@spectrumscale.org>
Subject: [gpfsug-discuss] rmda errors scatter thru gpfs logs

I have messages like these appearing frequently in my logs:
Tue Jan 17 11:25:49.731 2017: [E] VERBS RDMA rdma write error IBV_WC_REM_ACCESS_ERR to 10.51.10.5 (cl005) on mlx5_0 port 1 fabnum 0 vendor_err 136
Tue Jan 17 11:25:49.732 2017: [E] VERBS RDMA closed connection to 10.51.10.5 (cl005) on mlx5_0 port 1 fabnum 0 due to RDMA write error IBV_WC_REM_ACCESS_ERR index 23

Any ideas on the cause?


--
Aaron Knister
NASA Center for Climate Simulation (Code 606.2)
Goddard Space Flight Center
(301) 286-2776
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
