> This is a draft patch to address the following bug:
 > https://bugs.openfabrics.org/show_bug.cgi?id=728

Might be nice to include a description with the patch, so everyone
doesn't have to go look up the bug report (the issue, as I understand
it, is that ehca doesn't support enough SG entries to handle 16 4K
pages on the IPoIB CM receive queue).
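
For reference, the arithmetic behind that limit (a quick sketch, not
the actual ipoib constants; CM_PACKET_SIZE here stands in for a 64 KB
connected-mode receive buffer):

	#include <stdio.h>

	#define CM_PACKET_SIZE	(64 * 1024)	/* assumed CM buffer size */

	int main(void)
	{
		unsigned long page_size = 4096;

		/* SG entries needed to cover the buffer with
		 * single-page fragments: 65536 / 4096 = 16. */
		unsigned long sge_needed =
			(CM_PACKET_SIZE + page_size - 1) / page_size;

		printf("SG entries needed: %lu\n", sge_needed);
		return 0;
	}

ehca apparently can't provide that many SG entries per receive, hence
the bug.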

 > While working on this I observed that for mthca max_srq_sge
 > returned by ib_query_device() is not equal to max_sge returned
 > by ib_query_srq(). Why is that?

Not sure.  I'll take a look.  What are the two values that you get?
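
If a quick way to dump both values helps, something like this
userspace sketch (libibverbs rather than the in-kernel
ib_query_device()/ib_query_srq(); error handling omitted) should show
the discrepancy:

	#include <stdio.h>
	#include <infiniband/verbs.h>

	int main(void)
	{
		struct ibv_device **list = ibv_get_device_list(NULL);
		struct ibv_context *ctx = ibv_open_device(list[0]);
		struct ibv_pd *pd = ibv_alloc_pd(ctx);
		struct ibv_srq_init_attr init = {
			.attr = { .max_wr = 16, .max_sge = 1 }
		};
		struct ibv_srq *srq = ibv_create_srq(pd, &init);
		struct ibv_device_attr dev_attr;
		struct ibv_srq_attr srq_attr;

		ibv_query_device(ctx, &dev_attr);
		ibv_query_srq(srq, &srq_attr);

		/* device-wide limit vs. what this SRQ reports */
		printf("max_srq_sge = %d, srq max_sge = %u\n",
		       dev_attr.max_srq_sge, srq_attr.max_sge);

		ibv_destroy_srq(srq);
		ibv_dealloc_pd(pd);
		ibv_close_device(ctx);
		ibv_free_device_list(list);
		return 0;
	}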

 >  struct ipoib_cm_rx_buf {
 >      struct sk_buff *skb;
 > -    u64 mapping[IPOIB_CM_RX_SG];
 > +    u64 *mapping;
 >  };

I think it would be much simpler just to leave the array here.  You
waste a few bytes in the worst case, but the memory used for each
ipoib_cm_rx_buf structure is much less than the actual receive
buffers it points to anyway, so I think the overhead is negligible.
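
Rough numbers behind the "few bytes" claim, assuming IPOIB_CM_RX_SG
works out to 16 with 4K pages (per the bug report above):

	/* Fixed array: 16 * sizeof(u64) = 128 bytes per rx_buf,
	 * mapping ~64 KB of receive data, i.e. about 0.2% overhead
	 * even when most of the entries go unused. */
	struct ipoib_cm_rx_buf {
		struct sk_buff *skb;
		u64 mapping[IPOIB_CM_RX_SG];
	};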

 > +    if (IPOIB_CM_RX_SG >= max_sge_supported) {
 > +            fragment_size   = CM_PACKET_SIZE/max_sge_supported;
 > +            num_frags       = CM_PACKET_SIZE/fragment_size;
 > +    } else {
 > +            fragment_size   = CM_PACKET_SIZE/IPOIB_CM_RX_SG;
 > +            num_frags       = IPOIB_CM_RX_SG;
 > +    }
 > +    order = get_order(fragment_size);

I think that if the device can't provide enough SG entries to cover
the full CM_PACKET_SIZE with PAGE_SIZE fragments, we just have to
reduce the size of the receive buffers.  Trying to allocate multi-page
receive fragments (especially with GFP_ATOMIC on the receive path) is
almost certainly going to fail once memory gets fragmented.  Plenty of
other ethernet drivers have had to avoid multi-page allocations for
jumbo frames after hitting serious problems in practice, so we should
avoid making the same mistake.
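
Concretely, something along these lines is what I have in mind (an
untested sketch; max_sge_supported and CM_PACKET_SIZE are the names
from your patch, buf_size is made up, and the clamping itself is my
suggestion rather than code from this thread):

	/* Keep each fragment a single page; shrink the buffer rather
	 * than growing the fragments when SG entries are scarce. */
	num_frags     = min_t(int, IPOIB_CM_RX_SG, max_sge_supported);
	fragment_size = PAGE_SIZE;
	buf_size      = num_frags * PAGE_SIZE;	/* may be < CM_PACKET_SIZE */
	order         = 0;			/* get_order(PAGE_SIZE) == 0 */

The connected-mode MTU would then have to be capped to match buf_size
on such devices, but that seems much safer than relying on order-N
atomic allocations under memory pressure.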

 - R.