Hi John,

Thank you so much for your detailed response; I really appreciate it, and it
was very helpful. We had recently updated the IB card firmware on the compute
nodes, and it appears that downgrading the firmware resolves the issue.

Thank you again!
Best regards,
Shan-Ho

----------------------------------------------------
Shan-Ho Tsai
University of Georgia, Athens GA



________________________________
From: John Hearns <hear...@gmail.com>
Sent: Thursday, February 17, 2022 3:10 AM
To: Open MPI Users <users@lists.open-mpi.org>
Cc: Shan-ho Tsai <sht...@uga.edu>
Subject: Re: [OMPI users] Verbose logging options to track IB communication 
issues

I would start at a lower level. Clear your error counters, then run some
traffic over the fabric, maybe using an IMB or OSU benchmark.
Then look to see if any ports are very noisy - that usually indicates a cable
needing a reseat or replacement.
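For instance, something along these lines, assuming the infiniband-diags
tools (ibqueryerrors, perfquery) are installed on a node that can query the
fabric - just a sketch, adjust for your site:

    # Report ports with non-zero error counters, then clear them
    ibqueryerrors --clear-errors

    # ... drive some benchmark traffic across the fabric ...

    # Re-check: ports whose symbol error / link error recovery counters
    # climb quickly are the usual cable suspects
    ibqueryerrors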

Now, still at a low level, run IMB or OSU bandwidth or latency tests between
pairs of nodes. Are any nodes particularly slow?
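For example, between the two hosts named in your error output (the benchmark
paths are placeholders for wherever OSU or IMB is installed on your system):

    # Point-to-point latency and bandwidth between one pair of nodes
    mpirun -np 2 --host a3-6,a3-14 ./osu_latency
    mpirun -np 2 --host a3-6,a3-14 ./osu_bw

    # Or the Intel MPI Benchmarks equivalent
    mpirun -np 2 --host a3-6,a3-14 ./IMB-MPI1 PingPong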

Now run tests between groups of nodes which share a leaf switch.
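If you don't already have that mapping, iblinkinfo or ibnetdiscover (also
from infiniband-diags) should show which hosts sit behind each leaf switch:

    # Per-port link and connection info, grouped by switch
    iblinkinfo

    # Or dump the whole fabric topology
    ibnetdiscover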

Finally, if this really is a problem triggered by the application, try
bisecting your network: run the application on half the nodes, then on the
other half. My hunch is that you will find faulty cables.
Of course I could be very wrong and it may be something that only this
application triggers.
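As a rough illustration of the bisection (the application name and hostfile
are placeholders):

    # Split the node list in half and run the job on each half in turn
    split -n l/2 all_hosts.txt half_    # produces half_aa and half_ab
    mpirun --hostfile half_aa ./myapp
    mpirun --hostfile half_ab ./myapp
    # Keep halving whichever half reproduces the failure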

On Wed, 16 Feb 2022 at 19:28, Shan-ho Tsai via users
<users@lists.open-mpi.org> wrote:

Greetings,

We are troubleshooting an IB network fabric issue that is causing some of our
MPI applications to fail with errors like this:


--------------------------------------------------------------------------
The InfiniBand retry count between two MPI processes has been
exceeded.  "Retry count" is defined in the InfiniBand spec 1.2
(section 12.7.38):

    The total number of times that the sender wishes the receiver to
    retry timeout, packet sequence, etc. errors before posting a
    completion error.

This error typically means that there is something awry within the
InfiniBand fabric itself.  You should note the hosts on which this
error has occurred; it has been observed that rebooting or removing a
particular host from the job can sometimes resolve this issue.

Two MCA parameters can be used to control Open MPI's behavior with
respect to the retry count:

* btl_openib_ib_retry_count - The number of times the sender will
  attempt to retry (defaulted to 7, the maximum value).
* btl_openib_ib_timeout - The local ACK timeout parameter (defaulted
  to 20).  The actual timeout value used is calculated as:

     4.096 microseconds * (2^btl_openib_ib_timeout)

  See the InfiniBand spec 1.2 (section 12.7.34) for more details.

Below is some information about the host that raised the error and the
peer to which it was connected:

  Local host:   a3-6
  Local device: mlx5_0
  Peer host:    a3-14

You may need to consult with your system administrator to get this
problem fixed.
--------------------------------------------------------------------------
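
(For context, with the defaults quoted above the local ACK timeout works out
to roughly 4.096 microseconds * 2^20, i.e. about 4.3 seconds per retry. I
realize we could simply raise these values, e.g. with something like the
sketch below, but I suspect that would only mask the underlying fabric
problem; the mpirun line is just an illustration and "./myapp" is a
placeholder for our application.)

    mpirun --mca btl_openib_ib_timeout 22 \
           --mca btl_openib_ib_retry_count 7 \
           ./myapp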

I would like to enable verbose logging for the MPI application to see if that 
could help us pinpoint the IB communication issue (or the nodes with the issue).

I see many verbose logging options reported by "ompi_info -a | grep verbose",
but I am not sure which one(s) would be helpful here. Would any of them be
useful, or is there another way to enable verbose logging that could help
track down the issue?
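For example, would something along these lines be a reasonable starting
point? (btl_base_verbose is just my guess at the relevant parameter, and
"./myapp" is a placeholder for the application.)

    mpirun --mca btl_base_verbose 100 ./myapp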

Thank you so much in advance.

Best regards,

----------------------------------------------------
Shan-Ho Tsai
University of Georgia, Athens GA

