This is rather disconcerting. We just finished upgrading our NSD servers from 3.5.0.31 to 4.1.1.10 (all clients had previously been migrated from 3.5.0.31 to 4.1.1.10). Since finishing that upgrade I'm now seeing these errors with some frequency (a couple every few minutes). Anyone have insight?

On 1/18/17 11:58 AM, Brian Marshall wrote:
As background, we recently upgraded GPFS from 4.2.0 to 4.2.1 and updated
the Mellanox OFED on our compute cluster to allow it to move from CentOS
7.1 to 7.2.

We do see some transient warnings from the Mellanox switch gear about
various port counters, which we are tracking down with them.

Jobs and the filesystem seem stable, but the logs are concerning.

On Wed, Jan 18, 2017 at 10:22 AM, Aaron Knister
<[email protected]> wrote:

    I'm curious about this too. We see these messages sometimes when
    things have gone horribly wrong but also sometimes during recovery
    events. Here's a recent one:

    loremds20 (manager/nsd node):
    Mon Jan 16 14:19:02.048 2017: [E] VERBS RDMA rdma read error
    IBV_WC_REM_ACCESS_ERR to 10.101.11.6 (lorej006) on mlx5_0 port 1
    fabnum 3 vendor_err 136
    Mon Jan 16 14:19:02.049 2017: [E] VERBS RDMA closed connection to
    10.101.11.6 (lorej006) on mlx5_0 port 1 fabnum 3 due to RDMA read
    error IBV_WC_REM_ACCESS_ERR index 11

    lorej006 (client):
    Mon Jan 16 14:19:01.990 2017: [N] VERBS RDMA closed connection to
    10.101.53.18 (loremds18) on mlx5_0 port 1 fabnum 3 index 2
    Mon Jan 16 14:19:01.995 2017: [N] VERBS RDMA closed connection to
    10.101.53.19 (loremds19) on mlx5_0 port 1 fabnum 3 index 0
    Mon Jan 16 14:19:01.997 2017: [I] Recovering nodes: 10.101.53.18
    10.101.53.19
    Mon Jan 16 14:19:02.047 2017: [W] VERBS RDMA async event
    IBV_EVENT_QP_ACCESS_ERR on mlx5_0 qp 0x7fffe550f1c8.
    Mon Jan 16 14:19:02.051 2017: [E] VERBS RDMA closed connection to
    10.101.53.20 (loremds20) on mlx5_0 port 1 fabnum 3 error 733 index 1
    Mon Jan 16 14:19:02.071 2017: [I] Recovered 2 nodes for file system
    tnb32.
    Mon Jan 16 14:19:02.140 2017: [I] VERBS RDMA connecting to
    10.101.53.20 (loremds20) on mlx5_0 port 1 fabnum 3 index 0
    Mon Jan 16 14:19:02.160 2017: [I] VERBS RDMA connected to
    10.101.53.20 (loremds20) on mlx5_0 port 1 fabnum 3 sl 0 index 0

    I had just shut down loremds18 and loremds19, so there was certainly
    recovery taking place, and that recovery window is when the error
    seems to have occurred.

    I looked up the meaning of IBV_WC_REM_ACCESS_ERR here
    (http://www.rdmamojo.com/2013/02/15/ibv_poll_cq/) and saw this:

    IBV_WC_REM_ACCESS_ERR (10) - Remote Access Error: a protection error
    occurred on a remote data buffer to be read by an RDMA Read, written
    by an RDMA Write or accessed by an atomic operation. This error is
    reported only on RDMA operations or atomic operations. Relevant for
    RC QPs.

    My take on it: during recovery it seems like one end of the
    connection is more or less hanging up on the other end (e.g.
    connection reset by peer / ECONNRESET).
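
    To make that concrete, here's a rough sketch of where that status
    surfaces if you poll the completion queue yourself with the plain
    verbs API. This is just an illustration, not GPFS's actual code; the
    cq handle and the drain_cq helper are made up for the example:

    #include <stdio.h>
    #include <infiniband/verbs.h>

    /* Drain whatever completions are available and report failed work
     * requests, the way any verbs consumer has to. */
    static void drain_cq(struct ibv_cq *cq)
    {
        struct ibv_wc wc;
        int n;

        while ((n = ibv_poll_cq(cq, 1, &wc)) > 0) {
            if (wc.status == IBV_WC_SUCCESS)
                continue;

            /* A protection problem on the remote buffer of an RDMA
             * read/write comes back as IBV_WC_REM_ACCESS_ERR (10);
             * GPFS logs it as "rdma read/write error ... vendor_err". */
            fprintf(stderr, "wr_id %llu failed: %s (%d), vendor_err %u\n",
                    (unsigned long long)wc.wr_id,
                    ibv_wc_status_str(wc.status), wc.status,
                    wc.vendor_err);

            /* Once a completion fails on an RC QP, the QP goes to the
             * error state, so the only option is to tear the connection
             * down and reconnect -- which matches the "closed connection
             * ... due to RDMA read error" sequence in the logs above. */
        }
        if (n < 0)
            fprintf(stderr, "ibv_poll_cq failed\n");
    }

    If the remote side destroys its QP or deregisters the memory region
    while a read is still in flight (plausible during node recovery),
    that's the sort of thing that produces this status, which fits the
    "hung up on us" reading.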

    But like I said at the start, we also see this when something has
    gone awfully wrong.

    -Aaron

    On 1/18/17 3:59 AM, Simon Thompson (Research Computing - IT
    Services) wrote:

        I'd be inclined to look at something like:

        ibqueryerrors -s PortXmitWait,LinkDownedCounter,PortXmitDiscards,PortRcvRemotePhysicalErrors -c

        and see if you have a high number of symbol errors; it might be
        that a cable needs replugging or replacing.

        Simon

        From: [email protected] on behalf of "J. Eric Wonderley" <[email protected]>
        Reply-To: [email protected]
        Date: Tuesday, 17 January 2017 at 21:16
        To: [email protected]
        Subject: [gpfsug-discuss] rmda errors scatter thru gpfs logs

        I have messages like these appearing frequently in my logs:
        Tue Jan 17 11:25:49.731 2017: [E] VERBS RDMA rdma write error
        IBV_WC_REM_ACCESS_ERR to 10.51.10.5 (cl005) on mlx5_0 port 1
        fabnum 0
        vendor_err 136
        Tue Jan 17 11:25:49.732 2017: [E] VERBS RDMA closed connection to
        10.51.10.5 (cl005) on mlx5_0 port 1 fabnum 0 due to RDMA write error
        IBV_WC_REM_ACCESS_ERR index 23

        Any ideas on the cause?





    --
    Aaron Knister
    NASA Center for Climate Simulation (Code 606.2)
    Goddard Space Flight Center
    (301) 286-2776

--
Aaron Knister
NASA Center for Climate Simulation (Code 606.2)
Goddard Space Flight Center
(301) 286-2776
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
