Bill Fischofer(Bill-Fischofer-Linaro) replied on github web page:

include/odp/api/spec/packet_io.h
line 25
@@ -884,14 +887,19 @@ int odp_pktin_recv(odp_pktin_queue_t queue, odp_packet_t 
packets[], int num);
  * @param      num        Maximum number of packets to receive
  * @param      wait       Wait time specified as as follows:
  *                        * ODP_PKTIN_NO_WAIT: Do not wait
- *                        * ODP_PKTIN_WAIT:    Wait infinitely
+ *                        * ODP_PKTIN_WAIT:    Wait indefinitely. Returns


Comment:
The application can always compensate for an implementation's limitations, but 
why should it have to? The ODP philosophy is for applications to state their 
functional needs and let the implementation satisfy those needs in a 
platform-optimal manner. An indefinite wait is simpler from the application 
standpoint and avoids needless overhead. If I have dozens of threads that 
specify `ODP_PKTIN_WAIT`, the implementation is free to consolidate its 
internal timer management and amortize costs across all of those threads. If 
each thread is managing its own timers, that's not possible.

I have no problem with offering an explicit timeout as a wait option, but I'd 
argue that if we had to deprecate one, it would be the variable timeouts, since 
a well-structured application shouldn't have threads that are constantly 
flipping between six different tasks. Timeouts exist largely to accommodate 
older applications moving to ODP that aren't structured for a world where 
threads are cheap and can be used "wastefully".

In any event, the `ODP_PKTIN_WAIT` feature has been around for some time and 
deprecating it would conceivably impact existing applications in significant 
ways, so I'd be reluctant to make such changes without careful consideration. 
But there is an ambiguity surrounding the intersection of this feature with 
`odp_pktio_stop()` behavior that this PR looks to clarify.

> Petri Savolainen(psavol) wrote:
> The problem is caused by the infinite wait option. I'd just deprecate the 
> option. Implementation gets problematic when this call needs to monitor three 
> different things: packet arrival, timeout, and the user calling stop. E.g. 
> the socket-based implementation of this uses select(), which monitors packet 
> arrival and timeout, but not user input. If this change were made, select() 
> could not sleep for long but would have to keep polling for a potential user 
> call to stop (which would almost never happen).
> 
> So, it's better to leave stop synchronization to application control. The 
> application sets the timeout so that it can react quickly enough to a 
> potential interface shutdown call, but sleeps long enough to save energy. 
> E.g. the application can decide to poll for interface shutdown every 1 sec, 
> while the implementation would need to poll more often, e.g. every 10-100 ms.


https://github.com/Linaro/odp/pull/387#discussion_r161276333
updated_at 2018-01-12 17:09:35
