Petri Savolainen(psavol) replied on github web page:

platform/linux-generic/odp_packet_io.c
line 50
@@ -638,8 +638,20 @@ static odp_buffer_hdr_t *pktin_dequeue(queue_t q_int)
        if (pkts <= 0)
                return NULL;
 
-       if (pkts > 1)
-               queue_fn->enq_multi(q_int, &hdr_tbl[1], pkts - 1);
+       if (pkts > 1) {
+               int num_enq;
+               int num = pkts - 1;
+
+               num_enq = queue_fn->enq_multi(q_int, &hdr_tbl[1], num);
+
+               if (odp_unlikely(num_enq < num)) {
+                       if (odp_unlikely(num_enq < 0))
+                               num_enq = 0;
+
+                       buffer_free_multi(&hdr_tbl[num_enq + 1], num - num_enq);


Comment:
ODP_DBG() added in v2

> Petri Savolainen(psavol) wrote:
> I wanted to be conservative and not change synchronization of parallel queues 
> yet. I'll do another patch on top, so that it's easy to undo parallel 
> optimization later if necessary.


>> Bill Fischofer(Bill-Fischofer-Linaro) wrote:
>> Worth an `ODP_DBG()` here? At minimum I'd think we'd want to capture some 
>> sort of statistic for these drops.
>> 
>> Same comment for the rest of the similar drops in this commit.


>>> Bill Fischofer(Bill-Fischofer-Linaro) wrote:
>>> If we relax this to cover atomic and parallel queues then this would simply 
>>> be:
>>> ```
>>> int use_stash = !queue_is_ordered(qi);
>>> ```


>>>> Bill Fischofer(Bill-Fischofer-Linaro) wrote:
>>>> Is it really necessary to restrict this optimization to atomic queues? 
>>>> Ordered obviously cannot be stashed, but parallel queues make no ordering 
>>>> guarantees so accelerating them like this would also seem reasonable. In 
>>>> that case the `atomic` parameter of this function would be better named 
>>>> something like `use_stash`.


>>>>> Bill Fischofer(Bill-Fischofer-Linaro) wrote:
>>>>> Might be nice to say which interface wasn't started here for debug 
>>>>> purposes since many could be in play.


https://github.com/Linaro/odp/pull/504#discussion_r171823373
updated_at 2018-03-02 11:21:39
