I, personally, haven't been burned by mixing UD and RC IPoIB clients on the same fabric, but that doesn't mean it can't happen. What I *have* been bitten by a couple of times is not having enough entries in the ARP cache after bringing a bunch of new nodes online (that made for a long Christmas Eve one year...). You can tune that via the gc_thresh sysctls. These settings work for ~3700 nodes (and could technically go much higher):

net.ipv4.neigh.default.gc_thresh3 = 10240
net.ipv4.neigh.default.gc_thresh2 = 9216
net.ipv4.neigh.default.gc_thresh1 = 8192

It's the kind of thing that will bite you when you expand the cluster, and it would make sense for it to be exacerbated by metadata operations, because those may require initiating connections to many nodes in the cluster, which could blow out your ARP cache.
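
If it helps, here's roughly how those can be applied and persisted, plus a quick way to see how full the neighbor table is (a minimal sketch; the sysctl.d file name is just an example, adjust for your distro):

# apply at runtime
sysctl -w net.ipv4.neigh.default.gc_thresh1=8192
sysctl -w net.ipv4.neigh.default.gc_thresh2=9216
sysctl -w net.ipv4.neigh.default.gc_thresh3=10240

# persist across reboots, e.g. in /etc/sysctl.d/90-neigh.conf, then reload
sysctl -p /etc/sysctl.d/90-neigh.conf

# see how close you are to the thresholds
ip -4 neigh show | wc -l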

-Aaron

On 3/10/18 11:31 AM, Wei Guo wrote:
Hi, Saula,

This sounds like a jumbo frame problem.

Pings and metadata queries use small packets, so you can always ping or ls a file.

However, data transfers use large packets, up to the MTU size. Your 65520-MTU nodes send out large packets, but they get dropped on the way to the 2044-MTU nodes because they don't fit within the 2044-byte limit. The reverse direction is OK.

I think the GPFS client nodes always communicate with each other to sync the SDR repo files, plus any user-job MPI traffic if there is any. I think all the nodes should agree on a single MTU. I guess IPoIB supports up to 4096.
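
For example, to check what each node is actually using (a rough sketch; "ib0" and the fan-out tool are placeholders for your setup):

# on one node
cat /sys/class/net/ib0/mode        # "datagram" or "connected"
ip link show ib0 | grep mtu

# across the cluster, e.g. with pdsh or GPFS's mmdsh
mmdsh -N all 'cat /sys/class/net/ib0/mode; ip link show ib0 | grep mtu'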

I might have missed whether jumbo frames are enabled on your Ethernet switches, if you are using any.

Wei Guo

On Sat, Mar 10, 2018 at 8:29 AM -0600, "Saula, Oluwasijibomi" <oluwasijibomi.sa...@ndsu.edu> wrote:

    Wei - So the expelled node could ping the rest of the cluster just
    fine. In fact, after adding this new node to the cluster I could
    traverse the filesystem for simple lookups; however, heavy data
    moves in or out of the filesystem seemed to trigger the expel
    messages to the new node.


    This experience prompted my tuning exercise on the node and has
    since resolved the expel messages to the node, even during times of
    high I/O activity.


    Nevertheless, I still have this nagging feeling that the IPoIB
    tuning for GPFS may not be optimal.


    To answer your questions, Ed - IB carries both administrative and
    daemon communications, and we have verbsRdma configured.


    Currently, we have both 2044 and 65520 MTU nodes on our IB network
    and I've been told this should not be the case. I'm hoping to settle
    on 4096 MTU nodes for the entire cluster but I fear there may be
    some caveats - any thoughts on this?
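
    For context, a rough sketch of what the move to 4096 seems to involve -
    "ib0" and the opensm details here are illustrative assumptions, not our
    actual config:

        # datagram mode is what caps IPoIB near 4k; connected mode is what gives 65520
        echo datagram > /sys/class/net/ib0/mode   # may need the interface down first
        ip link set ib0 mtu 4092   # 4096 minus the 4-byte IPoIB header, same reason we see 2044 today

        # the subnet manager also has to allow 4K MTU on the IPoIB partition,
        # e.g. in /etc/opensm/partitions.conf:
        #   Default=0x7fff, ipoib, mtu=5 : ALL=full;
        # where mtu=5 means 4096, then restart opensm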


    (Oh, Ed - Hideaki was my mentor for a short while when I began my
    HPC career with NDSU but he left us shortly after. Maybe like you I
    can tune up my Japanese as well once my GPFS issues are put to rest!
    😊 )


    Thanks,
    Siji Saula
    HPC System Administrator
    Center for Computationally Assisted Science & Technology
    *NORTH DAKOTA STATE UNIVERSITY*

    Research 2 Building <https://www.ndsu.edu/alphaindex/buildings/Building::395> – Room 220B
    Dept 4100, PO Box 6050 / Fargo, ND 58108-6050
    p: 701.231.7749
    www.ccast.ndsu.edu | www.ndsu.edu

    ------------------------------------------------------------------------
    *From:* Edward Wahl <ew...@osc.edu>
    *Sent:* Friday, March 9, 2018 8:19:10 AM
    *To:* gpfsug-discuss@spectrumscale.org
    *Cc:* Saula, Oluwasijibomi
    *Subject:* Re: [gpfsug-discuss] Thoughts on GPFS on IB & MTU sizes

    Welcome to the list.

    If Hideaki Kikuchi is still around CCAST, say "Oh hisashiburi, des
    ne?" for me.
    Though I recall he may have left.


    A couple of questions as I, unfortunately, have a good deal of expel
    experience.

    -Are you set up to use verbs or only IPoIB? "mmlsconfig verbsRdma"

    -Are you using the IB as the administrative IP network?

    -As Wei asked, can the nodes sending the expel requests ping the victim over
    whatever interface is being used administratively? Other interfaces do NOT
    matter for expels. Nodes that cannot even mount the file systems can still
    request expels. Many, many things can cause issues here, from routing and
    firewalls to bad switch software that will not update ARP tables, and you
    get nodes trying to expel each other.

    -Are your NSDs logging the expels in /tmp/mmfs? You can mmchconfig
    expelDataCollectionDailyLimit if you need more captures to narrow down what
    is happening outside of mmfs.log.latest. Just be wary of the disk space if
    you have "expel storms".
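
    Something along these lines - the value 10 is just an illustrative choice:

        mmchconfig expelDataCollectionDailyLimit=10
        mmlsconfig expelDataCollectionDailyLimit   # verify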

    -That tuning page is very out of date and appears to be mostly focused on
    GPFS 3.5.x tuning. While there is also a Spectrum Scale wiki, its Linux
    tuning page does not appear to be kernel- and network-focused and is dated
    even older.


    Ed



    On Thu, 8 Mar 2018 15:06:03 +0000
    "Saula, Oluwasijibomi" <oluwasijibomi.sa...@ndsu.edu> wrote:

    > Hi Folks,
    >
    > As this is my first post to the group, let me start by saying I applaud the
    > commentary from the user group as it has been a resource to those of us
    > watching from the sidelines.
    >
    > That said, we have GPFS layered on IPoIB, and recently, we started having
    > some issues on our IB FDR fabric which manifested when GPFS began sending
    > persistent expel messages to particular nodes.
    >
    > Shortly after, we embarked on a tuning exercise using IBM tuning
    > recommendations
    > <https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/Welcome%20to%20High%20Performance%20Computing%20%28HPC%29%20Central/page/Linux%20System%20Tuning%20Recommendations>,
    > but this page is quite old and we've run into some snags, specifically with
    > setting 4k MTUs using mlx4_core/mlx4_en module options.
    >
    > While setting 4k MTUs as the guide recommends is our general inclination, I'd
    > like to solicit some advice as to whether 4k MTUs are a good idea and any
    > hitch-free steps to accomplishing this. I'm getting some conflicting remarks
    > from Mellanox support asking why we'd want to use 4k MTUs with Unreliable
    > Datagram mode.
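
    For reference, on the older mlx4 driver the 4k MTU knob I remember is a
    module option, something like this in /etc/modprobe.d/mlx4.conf (file name
    and exact spelling from memory, so verify against your driver docs):

        options mlx4_core set_4k_mtu=1

    followed by reloading the driver (or rebooting), and the subnet manager has
    to allow 4K MTU on the IPoIB partition as well.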
    >
    > Also, any pointers to best practices or resources for network configurations
    > for heavy I/O clusters would be much appreciated.
    >
    > Thanks,
    >
    > Siji Saula
    > HPC System Administrator
    > Center for Computationally Assisted Science & Technology
    > NORTH DAKOTA STATE UNIVERSITY
    >
    > Research 2 Building <https://www.ndsu.edu/alphaindex/buildings/Building::395> – Room 220B
    > Dept 4100, PO Box 6050 / Fargo, ND 58108-6050
    > p: 701.231.7749
    > www.ccast.ndsu.edu | www.ndsu.edu


--
    Ed Wahl
    Ohio Supercomputer Center
    614-292-9302





--
Aaron Knister
NASA Center for Climate Simulation (Code 606.2)
Goddard Space Flight Center
(301) 286-2776
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
