muvarov replied on github web page:

example/generator/odp_generator.c
line 246
@@ -838,39 +892,55 @@ static int gen_recv_thread(void *arg)
                if (thr_args->stop)
                        break;
 
-               /* Use schedule to get buf from any input queue */
-               ev_cnt = odp_schedule_multi(NULL, ODP_SCHED_NO_WAIT,
-                                           events, burst_size);
-               if (ev_cnt == 0)
-                       continue;
-               for (i = 0, pkt_cnt = 0; i < ev_cnt; i++) {
-                       pkt = odp_packet_from_event(events[i]);
-                       itf = &itfs[odp_pktio_index(odp_packet_input(pkt))];
-
-                       if (odp_packet_has_ipv4(pkt)) {
-                               if (itf->config.pktin.bit.ipv4_chksum) {
-                                       if (odp_packet_has_l3_error(pkt))
-                                               printf("HW detected L3 error\n");
-                               }
-                       }
+               pkt_cnt = odp_pktin_recv_tmo(pktin, pkts, burst_size,
+                                            ODP_PKTIN_NO_WAIT);


Comment:
It's not blocking; it's just a polling function.

> muvarov wrote
> how about ODP_SCHED_WAIT and a timeout event to break the loop and exit the
> application?
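A minimal sketch of that approach, assuming a timer armed elsewhere delivers an ODP_EVENT_TIMEOUT to a scheduled queue when it is time to stop (helper name is hypothetical):

```
#include <odp_api.h>

/* Block in the scheduler instead of spinning; a timeout event acts as
 * the stop signal. */
static void recv_loop_sched_wait(void)
{
        for (;;) {
                odp_event_t ev = odp_schedule(NULL, ODP_SCHED_WAIT);

                if (odp_event_type(ev) == ODP_EVENT_TIMEOUT) {
                        /* Stop signal: free the timeout and exit */
                        odp_timeout_free(odp_timeout_from_event(ev));
                        break;
                }

                /* Otherwise it is packet input; process it here */
                odp_packet_free(odp_packet_from_event(ev));
        }
}
```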


>> bogdanPricope wrote
>> In the general case that's true, but in this particular case it's not: we
>> are not waiting for other events / configuring other scheduled queues for
>> ODP_EVENT_BUFFER, ODP_EVENT_TIMEOUT, ODP_EVENT_CRYPTO_COMPL or
>> ODP_EVENT_IPSEC_RESULT.
>> I hope ODP is not throwing garbage events into the default scheduler
>> group... or?


>>> bogdanPricope wrote
>>> As mentioned before, we need to be able to stop the receive side in 'ping'
>>> mode (not wait infinitely). If spinning affects performance, we can put in
>>> a 1 s wait... (I don't know whether spinning or an extra timer is worse).
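A minimal sketch of the 1 s bounded wait being discussed (helper name is hypothetical):

```
#include <odp_api.h>

/* Wait up to 1 s per call instead of pure polling, so the thread still
 * re-checks its stop flag about once per second. */
static int recv_burst_1s(odp_pktin_queue_t pktin, odp_packet_t pkts[],
                         int burst_size)
{
        /* odp_pktin_wait_time() converts nanoseconds into the wait
         * parameter of odp_pktin_recv_tmo() */
        uint64_t wait = odp_pktin_wait_time(ODP_TIME_SEC_IN_NS);

        return odp_pktin_recv_tmo(pktin, pkts, burst_size, wait);
}
```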


>>>> bogdanPricope wrote
>>>> Yep. My impression was that odp_pktin_recv() is blocking, but it seems it
>>>> is not: in 'ping' mode we need to be able to stop the receive thread
>>>> after a number of pings.


>>>>> bogdanPricope wrote
>>>>> This csum check is done with a newer API in api-next
>>>>> (odp_packet_l3_chksum_status()). There is no sense in optimizing this
>>>>> part for the older implementation.
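For reference, a minimal sketch of the api-next check (hypothetical helper, not this PR's code):

```
#include <odp_api.h>
#include <stdio.h>

/* api-next reports checksum validation through a status enum instead
 * of the per-layer error flags used on master. */
static void report_l3_csum(odp_packet_t pkt)
{
        odp_packet_chksum_status_t st = odp_packet_l3_chksum_status(pkt);

        if (st == ODP_PACKET_CHKSUM_BAD)
                printf("HW detected L3 error\n");
        /* ODP_PACKET_CHKSUM_UNKNOWN means the check was not performed */
}
```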


>>>>>> bogdanPricope wrote
>>>>>> Yes, this can be part of another PR.


>>>>>>> bogdanPricope wrote
>>>>>>>  * @return Number of events outputted (0 ... num)


>>>>>>>> Bill Fischofer(Bill-Fischofer-Linaro) wrote:
>>>>>>>> The new `odp_event_filter_packet()` API would be useful here.
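A minimal sketch of that filtering, assuming the api-next odp_event_filter_packet() signature (wrapper name is hypothetical):

```
#include <odp_api.h>

/* Split a scheduled event batch into packets and everything else in a
 * single call; returns the number of packets written to pkts[]. */
static int take_packets(const odp_event_t events[], int ev_cnt,
                        odp_packet_t pkts[], odp_event_t remain[])
{
        return odp_event_filter_packet(events, pkts, remain, ev_cnt);
}
```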


>>>>>>>>> Bill Fischofer(Bill-Fischofer-Linaro) wrote:
>>>>>>>>> Why `ODP_SCHED_NO_WAIT` vs. `ODP_SCHED_WAIT` here? You're just 
>>>>>>>>> spinning if no packets are available so why not let the scheduler do 
>>>>>>>>> the waiting?


>>>>>>>>>> Bill Fischofer(Bill-Fischofer-Linaro) wrote:
>>>>>>>>>> Agree with @muvarov, this could use some comments to explain why 
>>>>>>>>>> these calls are being used. You'd expect a dedicated RX thread to 
>>>>>>>>>> simply wait for packet input.


>>>>>>>>>>> Bill Fischofer(Bill-Fischofer-Linaro) wrote:
>>>>>>>>>>> Checksum errors will result in `odp_packet_has_error()` being set 
>>>>>>>>>>> as well, so these checks can be done only if the summary packet 
>>>>>>>>>>> error predicate is set, avoiding unnecessary checks for known good 
>>>>>>>>>>> packets.
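A minimal sketch of that ordering (helper name is hypothetical):

```
#include <odp_api.h>
#include <stdio.h>

/* Known-good packets take only the cheap summary check; the per-layer
 * predicates run only when some error flag is already set. */
static void report_errors(odp_packet_t pkt)
{
        if (!odp_packet_has_error(pkt))
                return;

        if (odp_packet_has_l3_error(pkt))
                printf("HW detected L3 error\n");
        if (odp_packet_has_l4_error(pkt))
                printf("HW detected L4 error\n");
}
```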


>>>>>>>>>>>> Bill Fischofer(Bill-Fischofer-Linaro) wrote:
>>>>>>>>>>>> Might be good to have options for controlling the queue sync type 
>>>>>>>>>>>> here as `ODP_SCHED_SYNC_PARALLEL` should result in highest 
>>>>>>>>>>>> throughput, and `ODP_SCHED_SYNC_ORDERED` would be useful in 
>>>>>>>>>>>> testing performance of scheduler implementations (in theory should 
>>>>>>>>>>>> be better than `ODP_SCHED_SYNC_ATOMIC`).
>>>>>>>>>>>> 
>>>>>>>>>>>> Something to explore in another PR
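A minimal sketch of such an option, assuming the pktio was opened with in_mode = ODP_PKTIN_MODE_SCHED (helper name is hypothetical; sync would come from a command line switch):

```
#include <odp_api.h>

/* Configure scheduled pktin queues with a selectable sync type:
 * ODP_SCHED_SYNC_PARALLEL, ODP_SCHED_SYNC_ORDERED or
 * ODP_SCHED_SYNC_ATOMIC. */
static int config_pktin_sched(odp_pktio_t pktio, odp_schedule_sync_t sync)
{
        odp_pktin_queue_param_t param;

        odp_pktin_queue_param_init(&param);
        param.queue_param.sched.sync = sync;

        return odp_pktin_queue_config(pktio, &param);
}
```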


>>>>>>>>>>>>> muvarov wrote
>>>>>>>>>>>>> ok


>>>>>>>>>>>>>> muvarov wrote
>>>>>>>>>>>>>> and why odp_pktin_recv_tmo() and not odp_pktin_recv() ?


>>>>>>>>>>>>>>> muvarov wrote
>>>>>>>>>>>>>>> why not ODP_PKTIN_WAIT?


>>>>>>>>>>>>>>>> muvarov wrote
>>>>>>>>>>>>>>>> not all events are packets.


>>>>>>>>>>>>>>>>> muvarov wrote
>>>>>>>>>>>>>>>>> ```
>>>>>>>>>>>>>>>>>  * @return Next highest priority event
>>>>>>>>>>>>>>>>>  * @retval ODP_EVENT_INVALID on timeout and no events 
>>>>>>>>>>>>>>>>> available
>>>>>>>>>>>>>>>>> ```


>>>>>>>>>>>>>>>>>> muvarov wrote
>>>>>>>>>>>>>>>>>> Just a separate rx function for the scheduler, and on
>>>>>>>>>>>>>>>>>> thread start you simply select scheduler or direct mode.
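A minimal sketch of that split (all names hypothetical):

```
/* Separate rx loops, chosen once at thread start. */
typedef int (*rx_fn_t)(void *arg);

static int gen_recv_sched_thread(void *arg)
{
        (void)arg; /* scheduler-based rx loop goes here */
        return 0;
}

static int gen_recv_direct_thread(void *arg)
{
        (void)arg; /* direct pktin rx loop goes here */
        return 0;
}

static rx_fn_t select_rx_fn(int use_scheduler)
{
        return use_scheduler ? gen_recv_sched_thread
                             : gen_recv_direct_thread;
}
```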


>>>>>>>>>>>>>>>>>>> bogdanPricope wrote
>>>>>>>>>>>>>>>>>>> This will complicate the already over-complicated code: we
>>>>>>>>>>>>>>>>>>> may need to decide between ultimate performance and
>>>>>>>>>>>>>>>>>>> feature richness.


>>>>>>>>>>>>>>>>>>>> bogdanPricope wrote
>>>>>>>>>>>>>>>>>>>> No - we need to print csum errors first.
>>>>>>>>>>>>>>>>>>>> This part was significantly changed in api-next (the csum
>>>>>>>>>>>>>>>>>>>> checks use a different/new API) and it makes no sense to
>>>>>>>>>>>>>>>>>>>> optimize it for the old (master) code. After integration
>>>>>>>>>>>>>>>>>>>> into api-next, this part will be reworked to use fewer
>>>>>>>>>>>>>>>>>>>> parser flags (reduce the parsing level).
>>>>>>>>>>>>>>>>>>>> For example, removing L4 parsing and the interface lookup
>>>>>>>>>>>>>>>>>>>> brings an extra 1 Mpps.
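A minimal sketch of that rework, assuming the api-next parser config fields (helper name is hypothetical):

```
#include <odp_api.h>

/* Stop parsing at L3 when L4 results are not needed. */
static int limit_parse_level(odp_pktio_t pktio)
{
        odp_pktio_config_t config;

        odp_pktio_config_init(&config);
        config.parser.layer = ODP_PKTIO_PARSER_LAYER_L3;

        return odp_pktio_config(pktio, &config);
}
```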


>>>>>>>>>>>>>>>>>>>>> bogdanPricope wrote
>>>>>>>>>>>>>>>>>>>>> '-r' may work.


>>>>>>>>>>>>>>>>>>>>>> Bill Fischofer(Bill-Fischofer-Linaro) wrote:
>>>>>>>>>>>>>>>>>>>>>> Having an option to use direct mode seems reasonable, 
>>>>>>>>>>>>>>>>>>>>>> but shouldn't we retain schedule mode (perhaps as a 
>>>>>>>>>>>>>>>>>>>>>> command line switch)? This would provide an easy means 
>>>>>>>>>>>>>>>>>>>>>> of testing scheduler efficiency as it is tuned. At least 
>>>>>>>>>>>>>>>>>>>>>> in some environments we'd like schedule mode to show 
>>>>>>>>>>>>>>>>>>>>>> better performance than direct.


>>>>>>>>>>>>>>>>>>>>>>> muvarov wrote
>>>>>>>>>>>>>>>>>>>>>>> that has to be the first check.


>>>>>>>>>>>>>>>>>>>>>>>> muvarov wrote
>>>>>>>>>>>>>>>>>>>>>>>> -r ?


https://github.com/Linaro/odp/pull/343#discussion_r158240818
updated_at 2017-12-21 10:17:20
