On 29/05/15 15:36, Ola Liljedahl wrote:
On 29 May 2015 at 16:26, Zoltan Kiss <[email protected]> wrote:
On 29/05/15 13:33, Savolainen, Petri (Nokia - FI/Espoo) wrote:
-----Original Message-----
From: lng-odp [mailto:[email protected]] On Behalf Of ext Zoltan Kiss
Sent: Friday, May 29, 2015 2:56 PM
To: Ola Liljedahl
Cc: LNG ODP Mailman List
Subject: Re: [lng-odp] [API-NEXT PATCH] api-next: pktio: add odp_pktio_send_complete() definition
On 28/05/15 17:40, Ola Liljedahl wrote:
On 28 May 2015 at 17:23, Zoltan Kiss <[email protected]> wrote:
On 28/05/15 16:00, Ola Liljedahl wrote:
I disapprove of this solution. TX completion processing (cleaning TX descriptor rings after transmission completes) is an implementation (hardware) aspect and should be hidden from the application.
Unfortunately you can't, if you want your pktio application to work with poll mode drivers. In that case the TX completion interrupt is (or can be) disabled, and the application has to take care of completion as well. In the case of DPDK you just call the send function (with 0 packets, if you don't have anything to send at the time).
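
For illustration, that DPDK idiom would look roughly like the sketch below. The port/queue ids are example values, and whether a zero-length burst actually triggers descriptor cleanup is PMD-dependent:

#include <rte_ethdev.h>

/* An empty TX burst gives the poll mode driver a chance to clean
 * completed TX descriptors and release their mbufs back to the
 * pool, even when the application has nothing to transmit. */
static void flush_tx_completions(uint16_t port_id, uint16_t queue_id)
{
        rte_eth_tx_burst(port_id, queue_id, NULL, 0);
}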
Why do you have to retire transmitted packets if you are not transmitting new packets (and don't need those descriptors in the TX ring)?
Because otherwise they are a memory leak. Those buffers might be needed somewhere else. If they are only released the next time you send or receive packets, you are in trouble, because that might never happen. Especially when that event is blocked because your TX ring is full of unreleased packets.
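
This is the gap the proposed odp_pktio_send_complete() is meant to close. A minimal sketch of how I'd expect an application to use it (the signature is assumed from the patch subject; BURST and process_and_send() are hypothetical placeholders):

#include <odp.h>

#define BURST 32

/* Hypothetical application forwarding logic. */
void process_and_send(odp_packet_t pkts[], int num);

static void poll_loop(odp_pktio_t pktio)
{
        odp_packet_t pkts[BURST];

        for (;;) {
                int num = odp_pktio_recv(pktio, pkts, BURST);

                if (num > 0)
                        process_and_send(pkts, num);
                else
                        /* Nothing received, nothing to send: still poke
                         * TX completion so already-transmitted buffers
                         * drain back to the pool and the deadlock above
                         * cannot form. */
                        odp_pktio_send_complete(pktio);
        }
}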
Does the application have too few packets in the pool so that reception will suffer?
Let me approach the problem from a different angle: the current workaround is that you have to allocate a pool with _loooads_ of buffers, so you have a good chance you never run out of free buffers. Probably. Because it still doesn't guarantee that there will be a next send/receive event on that interface to release the packets.
I guess CPUs can always burst packets so fast that the TX ring gets full. So, you should design the pool/ring configuration/init so that a "full ring" is part of normal operation. What is the benefit of configuring a ring so large that it can never be filled to the max? The pool size needs to be RX ring size + TX ring size + number of in-flight packets.
In the case of l2fwd that calculation is: src RX ring size * 2 (so you can always refill) + dst RX ring size (because the RX queue holds the buffers even when not used) + dst TX ring size. That's for unidirectional traffic; for both directions it looks like: 2 * (if1 RX ring size + if2 RX ring size + max(if1, if2) TX ring size).
You only need to know the ring sizes in this case (which we don't expose now), but there could be more complicated scenarios.
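
For illustration with made-up numbers: if every ring has 512 entries, the unidirectional case needs 2*512 + 512 + 512 = 2048 buffers, and the bidirectional case 2 * (512 + 512 + 512) = 3072 buffers. The point is that these figures come straight from ring sizes the application currently cannot query.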
In the case of OVS you need 2 * RX ring size + TX ring size for each port. You need to create a separate pool for each port; currently we have one big pool for each port, created at startup.
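
Same illustration for the OVS case: with the made-up 512-entry rings, that is 2*512 + 512 = 1536 buffers per port.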
But I guess there could be more applications than simple store-and-forward scenarios, and they would need to make very careful assumptions about the theoretical highest pool usage on the actual platform they use, and reserve memory accordingly. I think:
- we have to expose RX/TX ring sizes through pktio
- it's very easy to make a mistake in those assumptions
- you have to scale your application for extreme buffer usage in order to make sure you never fail
If you are not wire-speed for the worst packet rate (minimum packet size), there is no guarantee that you will "never fail".
When I say "fail" here, I mean you are deadlocked, not just overloaded.
Increasing buffer sizes (e.g. RX/TX ring sizes and pool size) doesn't help and is actually a bad solution anyway. Overload situations should be expected, and the design should handle them gracefully (maintain peak packet rate and drop excess packets according to some chosen QoS policy).
Does it matter if unused packets are located in a TX ring or in the pool proper? If odp_packet_alloc() encounters a pool-exhausted situation, it could attempt to reclaim transmitted packets from TX rings.
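
A rough sketch of what that could look like inside an implementation (reclaim_tx_completions() is a placeholder for the platform-internal cleanup discussed in this thread, not an existing ODP call):

#include <odp.h>

/* Hypothetical platform-internal TX descriptor cleanup hook. */
void reclaim_tx_completions(odp_pktio_t pktio);

static odp_packet_t alloc_with_reclaim(odp_pool_t pool, uint32_t len,
                                       odp_pktio_t pktio)
{
        odp_packet_t pkt = odp_packet_alloc(pool, len);

        if (pkt == ODP_PACKET_INVALID) {
                /* Pool exhausted: free packets that have already been
                 * transmitted but still sit in the TX ring, then retry. */
                reclaim_tx_completions(pktio);
                pkt = odp_packet_alloc(pool, len);
        }
        return pkt;
}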
If your ODP platform builds on a vendor SDK (like most of them do nowadays), your driver won't call odp_packet_alloc() when filling up the RX ring.
I will check how it could work when receive returns 0 received packets.
Zoli