On Thu, 2008-05-01 at 08:07 -0700, Roland Dreier wrote:
> OK, that makes sense -- although did you see any performance difference?
Yes. With four streams on a 4-core machine, the senders sum up to:
898 * 10^6 bits/sec @ 256 tx queue length
756 * 10^6 bits/sec @ 64 tx queue length
>
> > Also
> I agree, but I want to have a larger buffer to absorb larger peaks. For
> example, after applying this patch I tested how many times the net queue
> is stopped and woken up when running four streams of netperf, udp, small
> packets. When using the default 64 tx queue size it happened 500 times
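(For reference, a minimal sketch of how such stop/wake events can be counted in a driver's xmit and completion paths; the structure, fields and function names below are illustrative only, not the actual IPoIB code.)

#include <linux/netdevice.h>

/* Hypothetical instrumentation sketch: count how often the net TX queue is
 * stopped (send ring full) and woken again (completions freed room).
 * All names here are made up for illustration, not the real IPoIB code. */
struct my_priv {
	unsigned int  tx_head;
	unsigned int  tx_tail;
	unsigned int  sendq_size;
	unsigned long queue_stops;	/* calls to netif_stop_queue() */
	unsigned long queue_wakes;	/* calls to netif_wake_queue() */
};

/* Called at the end of the xmit path, after posting a send request. */
static void my_xmit_done(struct net_device *dev, struct my_priv *priv)
{
	if (++priv->tx_head - priv->tx_tail == priv->sendq_size) {
		netif_stop_queue(dev);
		priv->queue_stops++;
	}
}

/* Called for each send completion reaped from the CQ. */
static void my_tx_complete(struct net_device *dev, struct my_priv *priv)
{
	++priv->tx_tail;
	if (netif_queue_stopped(dev) &&
	    priv->tx_head - priv->tx_tail <= priv->sendq_size / 2) {
		netif_wake_queue(dev);
		priv->queue_wakes++;
	}
}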
On Wed, 2008-04-30 at 20:05 -0700, Roland Dreier wrote:
> > we have seen a few other cases where a large tx queue is needed. I
> > think we should choose a larger default value than the current 64.
>
> maybe yes, maybe no... what are the cases where it is needed?
>
> The send queue is basically acting as a "shock absorber" for bursty
> traffic. If the queue is
> we have seen a few other cases where a large tx queue is needed. I
> think we should choose a larger default value than the current 64.
maybe yes, maybe no... what are the cases where it is needed?
The send queue is basically acting as a "shock absorber" for bursty
traffic. If the queue is
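(As an illustration of the tunable being debated, here is a minimal sketch of how a send-queue depth could be exposed as a module parameter so the 64-entry default can be raised at load time; the parameter name, default value and setup function are assumptions for this sketch, not the actual IPoIB code.)

#include <linux/module.h>
#include <linux/netdevice.h>

/* Hypothetical tunable: let the administrator raise the send queue depth at
 * module load time instead of rebuilding with a larger compiled-in default. */
static int sendq_size = 64;
module_param(sendq_size, int, 0444);
MODULE_PARM_DESC(sendq_size, "Number of descriptors in the send queue");

static void my_netdev_setup(struct net_device *dev)
{
	/* Give the stack's TX queue room to absorb bursts before the qdisc
	 * starts dropping; the factor of 2 is arbitrary for this sketch. */
	dev->tx_queue_len = 2 * sendq_size;
}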
thanks, looks like a good solution, applied, just adding an ipoib_
prefix since
> +void send_comp_handler(struct ib_cq *cq, void *dev_ptr)
is too generic a name for a global symbol.
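(For illustration only, one way such a driver-prefixed send CQ event handler could look; the body below is a guess at the general shape, not the applied patch. A real handler would also unmap and free the skb for each completed send, and re-poll after re-arming so no completion event is missed.)

#include <linux/netdevice.h>
#include <rdma/ib_verbs.h>

void ipoib_send_comp_handler(struct ib_cq *cq, void *dev_ptr)
{
	struct net_device *dev = dev_ptr;
	struct ib_wc wc;

	/* Drain the send completions that are currently available and wake
	 * the TX queue if it had been stopped because the ring was full. */
	while (ib_poll_cq(cq, 1, &wc) > 0)
		if (netif_queue_stopped(dev))
			netif_wake_queue(dev);

	/* Ask for an interrupt when the next send completion arrives. */
	ib_req_notify_cq(cq, IB_CQ_NEXT_COMP);
}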
By the way I figured out the crash on unload -- it was an mlx4 bug that
I introduced, which is fixed by:
IB/mlx