Eric Barton wrote:

> BTW, Or Gerlitz reckons there is a performance penalty for using multiple
> CQs.  The reason I'm interested in separate CQs is to avoid CQ overflow as I
> add connections.  On other stacks (Voltaire, Cisco, Silverstorm) I size a
> single CQ large enough for 'n' connections (i.e. cluster size - 1), but that
> means I have to refuse connections when 'n' have been established.

Talking about CQs with respect to adding connections, here's my take: the
maximum CQ size (reported by struct ib_device_attr->max_cqe from
ib_query_device) is 128K entries (this is on a memfull HCA; you would need
to check the memfree HCA). So when the number of RX credits per connection
is low, a single CQ can serve many thousands of connections (e.g. with
eight credits per connection, a 128K-entry CQ carries 16K connections, so
even the ~48K LID limit on LMC=0 IB cluster size needs only a few CQs). If
you need more connections (QPs) than a single CQ can carry, create another
one and attach it to the new QPs. The CQ completion callback gets the CQ
pointer as its first argument, so you need not change your polling/arming
logic.
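
Here is a minimal sketch of that scheme, assuming userspace libibverbs
(the analogue of the kernel ib_query_device/ib_create_cq calls above);
RX_CREDITS and cq_for_new_qp() are illustrative names, not from any
existing NAL:

#include <infiniband/verbs.h>

#define RX_CREDITS 8            /* assumed per-connection RX credits */

static struct ibv_cq *current_cq;
static int cq_slots_left;

/* Return a CQ with room for one more connection's completions,
 * opening a fresh maximum-size CQ once the current one is full. */
struct ibv_cq *cq_for_new_qp(struct ibv_context *ctx)
{
        if (cq_slots_left < RX_CREDITS) {
                struct ibv_device_attr attr;

                if (ibv_query_device(ctx, &attr))
                        return NULL;

                /* Ask for the largest CQ the HCA supports; the
                 * driver may round the actual size up. */
                current_cq = ibv_create_cq(ctx, attr.max_cqe,
                                           NULL, NULL, 0);
                if (!current_cq)
                        return NULL;
                cq_slots_left = current_cq->cqe;
        }
        cq_slots_left -= RX_CREDITS;
        return current_cq;
}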

Also note that a CQ with 128K entries consumes about 4MB (Roland, can you
confirm?) of HCA-attached memory (or host memory for memfree HCAs), so to
my taste, coding applications around CQ resize is overkill.
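(For the arithmetic: assuming the 32-byte CQE format of the Tavor-family
HCAs, 128K entries * 32 bytes per entry = 4MB, which matches the figure
above.)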

> In one stack it also stressed vmalloc() and prevented me from using a
> single whole-memory mapping.

Is there a chance that you are confusing CQs with QPs? Before implementing
the FMR scheme for the Voltaire NAL, you were creating giant QPs, for
which the gen1 driver allocated the host-side memory using vmalloc, so it
could not allocate more than ~300 QPs.

With the mthca driver you should be able to allocate a CQ of the maximum
allowed size (and if not, it will be fixed...).

Or.



