Quoting Pete Wyckoff <[EMAIL PROTECTED]>:
> Subject: Re: max_send_sge < max_sge
>
> [EMAIL PROTECTED] wrote on Wed, 28 Jun 2006 01:38 +0300:
> > If this works for you, great. I was just trying to point out that query
> > device cannot guarantee that QP allocation will always succeed, even if
> > you stay within the limits it reports.
> >
> > For example, are you using a large number of WRs per QP as well? If so,
> > after allocating a couple of QPs you might run out of the locked memory
> > allowed per user, depending on your system setup. QP allocation will
> > then fail, even if you use the hcacap - 1 heuristic.
>
> Thanks for all the comments. I'm not specifically trying to be a
> pain here. The bit I was failing to notice was that when
> considering many QP allocations, the resource demands add up faster
> when using more SGEs each. I still find it odd that the very first
> QP created cannot achieve the maximum reported values, but I
> understand your general argument.
Yeah, that's because the API can only report one maximum value. But when
this was considered, the consensus was that it's not worth extending the
API, because of the other issues you mention.

> Regarding the API, some interfaces I've seen will do the equivalent
> of putting the "max currently available" values in ibv_qp_init_attr
> so userspace can reconsider and try again. I never liked that very
> much, and it doesn't help much in this multi-dimensional space where
> WRs and SGEs apparently share the same overall constraints. Plus
> the returned values aren't guaranteed to be valid the next time an
> attempt is made anyway, so don't do that. :)

Yep. We could have an option to have the stack scale the requested values
down to some legal set instead of failing the allocation. But we couldn't
come up with a clean way to tell the stack what it should round down: the
SGE value or the WR value. Do you think selecting something arbitrarily
might still be a good idea?

So in the end we are back to either using low numbers that just work
empirically, or starting with some value and going down until it succeeds.

-- 
MST

_______________________________________________
openib-general mailing list
[email protected]
http://openib.org/mailman/listinfo/openib-general
To unsubscribe, please visit http://openib.org/mailman/listinfo/openib-general
