bogdanPricope replied on github web page:

include/odp/api/spec/packet_io.h
line 25
@@ -884,14 +887,19 @@ int odp_pktin_recv(odp_pktin_queue_t queue, odp_packet_t packets[], int num);
  * @param      num        Maximum number of packets to receive
  * @param      wait       Wait time specified as as follows:
  *                        * ODP_PKTIN_NO_WAIT: Do not wait
- *                        * ODP_PKTIN_WAIT:    Wait infinitely
+ *                        * ODP_PKTIN_WAIT:    Wait indefinitely. Returns


Comment:
All three possibilities for setting the 'wait' parameter are useful in 
different application designs: ODP_PKTIN_NO_WAIT, ODP_PKTIN_WAIT, and a finite 
wait. Each option has its pros and cons, and depending on your application 
profile and expected traffic you need to select one of them for best 
performance.


ODP_PKTIN_NO_WAIT
Pros: best performance under high core load with evenly spread traffic
Cons: wild looping when there is no traffic; considering direct RX mode with 
RSS, we can imagine a scenario where all traffic goes to a single core while 
the rest of the cores only loop

ODP_PKTIN_WAIT
Pros: no wild loop
Cons: at some point you need to restart your application and shut down 
resources gracefully (otherwise, in some cases initialization will fail and 
you will need to reboot the host, which means longer downtime). You have no 
way to gracefully interrupt this function.


Finite wait:
Pros: no wild loop, no graceful shutdown issue
Cons: you need to arm/disarm a timer (or similar) on each loop iteration. This 
is especially painful under high load.


Having another way to stop an infinite/finite wait, on request 
(odp_pktio_stop()) or on error (someone tripped over the cable in the lab, NIC 
failure, etc.), is very useful.

The select() example is not a very good 'con', as select() triggers on socket 
end-of-file (and errors); it also triggers on signals (I wonder why).

See man select:
“in particular, a file descriptor is also ready on end-of-file”
“EINTR  A signal was caught; see signal(7).”


The main question is whether NXP and Cavium can implement this functionality.

Otherwise, I am OK with this.

Reviewed-by: Bogdan Pricope <bogdan.pric...@linaro.org>


> Bill Fischofer(Bill-Fischofer-Linaro) wrote:
> @bala-manoharan @bogdanPricope Can you comment on how `ODP_PKTIN_WAIT` is 
> currently being used, to your knowledge?


>> Bill Fischofer(Bill-Fischofer-Linaro) wrote:
>> The application can always compensate for an implementation's limitations, 
>> but why should it have to? The ODP philosophy is for applications to state 
>> their functional needs and let the implementation provide those functional 
>> needs in a platform-optimal manner. Having an indefinite wait is simpler 
>> from an application standpoint and avoids needless overhead. If I have 
>> dozens of threads that specify `ODP_PKTIN_WAIT` the implementation is free 
>> to consolidate any internal timer management to amortize costs across all of 
>> those threads. If each is managing its own timers, that's not possible. 
>> 
>> I have no problem with having an explicit timeout as a wait option, but I'd 
>> argue that if we had to deprecate one it would be the variable timeouts 
>> since a well-structured application shouldn't have threads that are trying 
>> to do six different things that it's constantly flipping between. The 
>> provision of timeouts in general is to accommodate older apps moving to ODP 
>> that aren't as well structured for a world where threads are cheap and can be 
>> used "wastefully".
>> 
>> In any event, the `ODP_PKTIN_WAIT` feature has been around for some time and 
>> deprecating it would conceivably impact existing applications in significant 
>> ways, so I'd be reluctant to make such changes without careful 
>> consideration. But there is an ambiguity surrounding the intersection of 
>> this feature with `odp_pktio_stop()` behavior that this PR looks to clarify.


>>> Petri Savolainen(psavol) wrote:
>>> The problem is caused by the infinite wait option. I'd just deprecate the 
>>> option. The implementation gets problematic when this call would need to 
>>> monitor three different things: packet arrival, timeout and the user 
>>> calling stop. E.g. the socket based implementation of this uses select(), 
>>> which monitors packets and the timeout, but not user input. If this change 
>>> were made, select() could not sleep for long but would have to keep polling 
>>> for a potential user call to stop (which normally would almost never 
>>> happen).
>>> 
>>> So, it's better to leave stop synchronization to application control. The 
>>> application sets the timeout such that it can react fast enough to 
>>> potential interface shutdown calls, but long enough to save energy. E.g. 
>>> the application can decide to use a 1 sec interval for shutdown polling, 
>>> but the implementation would need to poll more often, e.g. every 10-100 ms.
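
As an illustration of the application-controlled pattern Petri describes, here 
is a minimal sketch; the stop flag, burst size and the 100 ms interval are 
hypothetical choices, not part of the proposal:

```c
#include <odp_api.h>

#define BURST 32

/* Set to 1 by a control thread before it calls odp_pktio_stop().
 * Initialized with odp_atomic_init_u32(&stop_req, 0) at startup (not shown). */
static odp_atomic_u32_t stop_req;

static int rx_poll_thread(odp_pktin_queue_t queue)
{
	/* Finite wait: long enough to save energy, short enough to notice a
	 * shutdown request quickly (100 ms in this sketch). */
	const uint64_t wait = odp_pktin_wait_time(100 * ODP_TIME_MSEC_IN_NS);
	odp_packet_t pkts[BURST];

	while (odp_atomic_load_u32(&stop_req) == 0) {
		int num = odp_pktin_recv_tmo(queue, pkts, BURST, wait);

		if (num <= 0)
			continue; /* timeout (or error): re-check the stop flag */

		/* ... process 'num' packets here ... */
		odp_packet_free_multi(pkts, num);
	}

	return 0;
}
```

The cost Bogdan mentions is visible here: every timeout wakes the thread just 
to re-check the flag, and the implementation still has to arm and cancel an 
internal timeout around each call, which hurts most under high load.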


https://github.com/Linaro/odp/pull/387#discussion_r161696919
updated_at 2018-01-16 09:18:06
