Re: [lng-odp] IPsec API finalization

2017-09-10 Thread Peltonen, Janne (Nokia - FI/Espoo)

Bill Fischofer [mailto:bill.fischo...@linaro.org] wrote:
> On Fri, Sep 8, 2017 at 6:10 AM, Janne Peltonen  
> wrote:
> >
> >
> > On Fri, 8 Sep 2017, Nikhil Agarwal wrote:
> >> On 7 September 2017 at 14:09, Peltonen, Janne (Nokia - FI/Espoo)
> >>  wrote:>
> >> > Bill Fischofer wrote:
> >> > >  if (odp_ipsec_result(&ipsec_res, pkt) < 0) { /* Stale
> >> > > event, discard */
> >>
> >> > There is a race condition here if the idea is that calling
> >> > odp_ipsec_sa_disable() (or destroy()) marks the SA stale to prevent
> >> > further processing for that SA in the result event handling.
> >>
> >> > Maybe the odp_ipsec_result() succeeds but the SA becomes stale
> >> > right after that, before rest of the processing is done.
> >>
> >> The same race condition exists in the proposed API when one thread receives
> >> the last packet of an SA and is still processing it while another thread,
> >> on receiving the disable event, destroys the SA. The application will
> >> always need to synchronize its own threads.
> >
> > There is no race in the API, only in incorrect use of it. As I explained,
> > synchronization is needed even with the disable event and it can be
> > easily done using an ordered lock that ensures that other event handling
> > threads are finished processing the SA (they have released the queue
> > context through another call to schedule or explicitly). So the event
> > handler for the disable event can destroy the SA safely without any race
> > if it uses an ordered lock (or if the SA queue is atomic).
> >
> > I wrote a piece of pseudocode about this in my reply, maybe you missed it.
> >
> >>
> >> Let me reiterate the solution we discussed yesterday. After the SA is 
> >> disabled
> >> successfully, implementations will stop enqueuing any more events from 
> >> that SA
> >> and any call to odp_ipsec_result will indicate that the SA is disabled (the SA 
> >> entry is
> >> still valid though). Application may choose to synchronize its database and
> >> process that packet or may drop it. Then it will call odp_ipsec_sa_destroy 
> >> which
> >> will delete the SA and any further call to odp_ipsec_result pertaining to 
> >> that
> >> SA will result in invalid event. Hope this resolves the issue.
> >
> > That clarifies the API you are proposing but not how you intend it to
> > be used. It is easy to say that an application should just do whatever
> > synchronization is necessary.
> >
> > That said, I think your proposal could be made to work from application
> > perspective, but with some inconvenience:
> >
> > After the packet event handler checks (for every IPsec packet event)
> > that the event is not stale, it has to prevent odp_ipsec_sa_destroy()
> > from being called in any other thread until it no longer uses the SA.
> > While this could be done e.g. using an RCU based or epoch based
> > synchronization scheme to delay the destroy() just long enough, it
> > may not be easy for all application authors to do so since
> > ODP does not really provide anything other than the basic atomics for that.
> 
> Wouldn't this happen automatically if the event queue were atomic? Or
> if an ordered lock were used to protect odp_ipsec_result() for the
> duration of the SA context reference? For example:

That approach would work if the event handler destroyed the SA in
the same queue context as it handles the packet events of the SA.

The problem is that SA deletion starts elsewhere in some control thread
and there would have to be a way to reliably delegate the actual destroy
to the correct queue context by having an event enqueued through the
SA queue. And after the proposed API change the ODP implementation would
not necessarily send such an event (e.g. a disable complete event) and
would not allow the application to send any event either.

The approach would work if there are unhandled IPsec packet events
for the SA in the SA queues, but that may not be the case. In the async
case one could send a dummy packet after disable() to get such an event,
but in the inline case that would not work (well, if one shared the same
event queue between inline and async processing, then that trick could
work).

The atomic queue or ordered lock approach works with the current API
just because there is the disable completion event that can be used
to transfer control to the correct packet processing thread (even if
the actual destroy would be further deferred to some control thread).
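To make that concrete, a rough sketch of the pattern (pseudocode only;
event and status type names follow the current ODP IPsec API, most
handling omitted):

```
/* Pseudocode: safe SA destroy with the current API. The control
 * thread only calls odp_ipsec_sa_disable(sa); the destroy happens
 * in the scheduled context of the SA queue. */
ev = odp_schedule(&queue, ODP_SCHED_WAIT);

switch (odp_event_type(ev)) {
case ODP_EVENT_PACKET:
        /* IPsec packet event: the SA context is safe to use, since
         * the disable completion has not been delivered yet */
        handle_ipsec_packet(ev);
        break;
case ODP_EVENT_IPSEC_STATUS:
        /* The disable completion arrives through the SA queue after
         * the last packet event of the SA. With an atomic SA queue
         * (or after acquiring the ordered lock) no other thread can
         * still hold the SA context, so this is race free. */
        odp_ipsec_sa_destroy(sa);
        break;
}
```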

A variation of this, with probably significant overhead, would be
for the IPsec packet event handler to enqueue the events to an
application-created normal ODP queue; the application would generate
a suitable "disable complete" event to the same queue, and the actual
event processing in the context of the application-created queue
could then do the needed ordering using an ordered lock.

So yes, things can be made to work with the proposed API change,
but in a less straightforward manner than with the current API.

Janne


Re: [lng-odp] Compiler Barrier API

2017-09-10 Thread Brian Brooks
On Sun, Sep 10, 2017 at 10:33 PM, Bill Fischofer
 wrote:
> Before we consider adding new synchronization APIs, we need a clearly
> defined use case for why a portable application would need this. ODP
> implementations may make use of such things based on their knowledge
> of the platform architecture, but ODP applications (should) have no
> such platform-specific dependencies. So we need a platform-independent
> use case.

There's a disconnect here. odp_compiler_internal.h implies
platform/xxx/include/odp_compiler_internal.h. That would be the
location to add a compiler barrier macro for implementation use only.

I think there's enough consensus in this thread to say that there's no
need to add an ODP API for a compiler barrier. This same rationale
could be extended to all shared memory synchronization algorithms
(deprecate them from ODP API) since they are not Control / Data Plane
objects and are implemented using atomic primitives (which are an
abstraction layer provided by the compiler). Perhaps they should be
moved to an odp-helper-sync library that depends on an
odp-helper-atomic library.

> On Sun, Sep 10, 2017 at 10:28 PM, Brian Brooks  
> wrote:
>> Hi Andriy,
>>
>> On Wed, Sep 6, 2017 at 12:24 PM, Andriy Berestovskyy
>>  wrote:
>>> Hey Brian,
>>> You are right, there are no compiler barriers on master, I just was on
>>> api-next branch:
>>>
>>> https://git.linaro.org/lng/odp.git/tree/platform/linux-generic/arch/arm/odp_atomic.h?h=api-next#n56
>>
>> How about a #define for compiler barrier in odp_compiler_internal.h?
>>
>>> Andriy
>>>
>>>
>>> On 05.09.2017 17:06, Brian Brooks wrote:

 I don't see a compiler barrier in the odp.git repo. Perhaps 'nop', but
 this acts as more than a pure compiler barrier?


 On Tue, Sep 5, 2017 at 8:23 AM, Andriy Berestovskyy
  wrote:
>
> Hey Petri,
>
> On 05.09.2017 14:17, Savolainen, Petri (Nokia - FI/Espoo) wrote:
>>
>>
>> I think compiler barrier is too weak for writing portable code, since it
>> does not guarantee that the CPU would not re-order the instructions.
>
>
>
>
> No, compiler barrier does not guarantee CPU order. Though, sometimes we
> do
> not need such a guarantee.
>
> Compiler barriers are in the same league with volatiles and provide the
> same
> weak guarantees: eventually the effect will be observed and code will not
> be
> optimized out by the compiler.
>
>
>
>> ODP implementation has ":::memory" in context of inline assembly
>> instructions, but that's when we are writing code against specific ISA
>> and
>> thus know which amount of synchronization is needed.
>
>
>
>
> The barrier is quite rare, so it might remain in form of asm volatile...
>
>
> Andriy






Re: [lng-odp] Supporting ODP_PKTIO_OP_MT_SAFE

2017-09-10 Thread Brian Brooks
Honnappa,

Could your proposal be simplified to: MT-safe pktio should be
deprecated because it is not a common use case. Applications will
either use MT-unsafe pktio or the MT-safe scheduler.

> 1) Polling method - in which one pkt I/O will be created for each receive 
> worker thread. In this case, support for ODP_PKTIO_OP_MT_SAFE is not required.

Absence of MT-safe does not require 1:1 mapping of thread to pktio. It
just means that it is the application's responsibility to ensure
exclusive access to a pktio.
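For example, with the existing API an application that dedicates one RX
thread per pktio can already request the lock-free mode explicitly (a
sketch; field and enum names as in the ODP pktio API, error handling
omitted):

```
/* Sketch: input queues of this pktio are polled by a single thread
 * only, so the implementation may omit internal locking. */
odp_pktin_queue_param_t pktin_param;

odp_pktin_queue_param_init(&pktin_param);
pktin_param.op_mode    = ODP_PKTIO_OP_MT_UNSAFE; /* caller serializes */
pktin_param.num_queues = 1;

odp_pktin_queue_config(pktio, &pktin_param);
```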

> for high throughput packet I/Os [..] we do not need to support 
> ODP_PKTIO_OP_MT_SAFE
> We could keep the support for ODP_PKTIO_OP_MT_SAFE for other pkt I/Os.

This would introduce an undesirable leaky abstraction.

BB

On Sun, Sep 10, 2017 at 12:40 PM, Bill Fischofer
 wrote:
> We can discuss this during tomorrow's ARCH call, and probably further
> at Connect. MT Safe is the default behavior and its opposite ("MT
> Unsafe") was added as a potential optimization when applications
> assure implementations that only a single thread will be polling a
> PktIn queue or adding to a PktOut queue.
>
> Ideally, we'd like to retire all application I/O polling and use the
> scheduler exclusively, but that's a longer-term goal. For now
> we have both.
>
> On Sun, Sep 10, 2017 at 8:11 AM, Honnappa Nagarahalli
>  wrote:
>> Hi,
>> I think there are 2 ways in which pkt I/O will be used:
>>
>> 1) Polling method - in which one pkt I/O will be created for each
>> receive worker thread. In this case, support for ODP_PKTIO_OP_MT_SAFE
>> is not required.
>> 2) Event method - the scheduler is used to receive packets. In this
>> case the scheduler will provide the exclusive access to a pkt I/O.
>> Again in this case support for ODP_PKTIO_OP_MT_SAFE is not required.
>>
>> I am thinking, for high throughput packet I/Os such as dpdk or ODP
>> native drivers (in the future), we do not need to support
>> ODP_PKTIO_OP_MT_SAFE. The odp_pktio_open API call can return an error
>> if ODP_PKTIO_OP_MT_SAFE is asked for.
>>
>> We could keep the support for ODP_PKTIO_OP_MT_SAFE for other pkt I/Os.
>>
>> This will save space in cache for the locks as well as instruction cycles.
>>
>> Thank you,
>> Honnappa


Re: [lng-odp] [PATCH v1] Packet references re-implementation

2017-09-10 Thread Github ODP bot
Bill Fischofer(Bill-Fischofer-Linaro) replied on github web page:

platform/linux-generic/odp_packet.c
line 683
@@ -414,97 +490,217 @@ static inline odp_packet_hdr_t 
*add_segments(odp_packet_hdr_t *pkt_hdr,
new_hdr->buf_hdr.seg[0].len   = seg_len;
 
packet_seg_copy_md(new_hdr, pkt_hdr);
-   new_hdr->frame_len= pkt_hdr->frame_len + len;
-   new_hdr->unshared_len = pkt_hdr->unshared_len + len;
-   new_hdr->headroom = pool->headroom + offset;
-   new_hdr->tailroom = pkt_hdr->tailroom;
+   new_hdr->frame_len = pkt_hdr->frame_len + len;
+   new_hdr->headroom  = pool->headroom + offset;
+   new_hdr->tailroom  = pkt_hdr->tailroom;
+   new_hdr->shared_len = pkt_hdr->shared_len;
 
pkt_hdr = new_hdr;
} else {
-   int last;
+   seg_entry_t *last_seg;
 
/* add into the tail */
add_all_segs(pkt_hdr, new_hdr);
 
/* adjust last segment length */
-   last = packet_last_seg(pkt_hdr);
-   pkt_hdr->buf_hdr.seg[last].len = seg_len;
+   last_seg  = seg_entry_last(pkt_hdr);
+   last_seg->len = seg_len;
 
-   pkt_hdr->frame_len+= len;
-   pkt_hdr->unshared_len += len;
-   pkt_hdr->tailroom  = pool->tailroom + offset;
+   pkt_hdr->frame_len += len;
+   pkt_hdr->tailroom   = pool->tailroom + offset;
}
 
return pkt_hdr;
 }
 
-static inline void free_bufs(odp_packet_hdr_t *pkt_hdr, int first, int num)
+static inline int seg_is_link(void *hdr)
 {
-   int i, nfree;
-   odp_buffer_hdr_t *buf_hdr[num];
+   odp_packet_hdr_t *pkt_hdr = hdr;
+
+   return pkt_hdr != pkt_hdr->buf_hdr.seg[0].hdr;
+}
+
+static inline void packet_free_multi(odp_buffer_hdr_t *hdr[], int num)
+{
+   int i;
+   uint32_t ref_cnt;
+   int num_ref = 0;
+
+   for (i = 0; i < num; i++) {
+   /* Zero when reference API has not been used */
+   ref_cnt = odp_atomic_load_u32(&hdr[i]->ref_cnt);
+
+   if (odp_unlikely(ref_cnt)) {
+   ref_cnt = odp_atomic_fetch_dec_u32(&hdr[i]->ref_cnt);
+
+   if (ref_cnt > 1) {
+   num_ref++;
+   continue;
+   }
+   }
+
+   /* Reset link header back to normal header */
+   if (odp_unlikely(seg_is_link(hdr[i])))
+   hdr[i]->seg[0].hdr = hdr[i];
+
+   /* Skip references and pack to be freed headers to array head */
+   if (odp_unlikely(num_ref))
+   hdr[i - num_ref] = hdr[i];
+
+   }
+
+   num -= num_ref;
+
+   if (odp_likely(num))
+   buffer_free_multi(hdr, num);
+}
 
-   for (i = 0, nfree = 0; i < num; i++) {
-   odp_packet_hdr_t *hdr = pkt_hdr->buf_hdr.seg[first + i].hdr;
+static inline void free_all_segments(odp_packet_hdr_t *pkt_hdr, int num)
+{
+   int i;
+   odp_buffer_hdr_t *buf_hdr[num + 1];
 
-   if (packet_ref_count(hdr) == 1 || packet_ref_dec(hdr) == 1) {
-   ODP_ASSERT((packet_ref_count_set(hdr, 0), 1));
-   buf_hdr[nfree++] = &hdr->buf_hdr;
+   if (odp_likely(pkt_hdr->buf_hdr.num_seg == num)) {
+   for (i = 0; i < num; i++)
+   buf_hdr[i] = pkt_hdr->buf_hdr.seg[i].hdr;
+
+   if (odp_unlikely(seg_is_link(pkt_hdr))) {
+   buf_hdr[num] = &pkt_hdr->buf_hdr;
+   num++;
}
+   } else {
+   seg_entry_t *seg;
+   odp_buffer_hdr_t *link_hdr[num];
+   uint8_t idx = 0;
+   int links = 0;
+
+   for (i = 0; i < num; i++) {
+   /* Free also link headers */
+   if (odp_unlikely(idx == 0 && seg_is_link(pkt_hdr))) {
+   link_hdr[links] = &pkt_hdr->buf_hdr;
+   links++;
+   }
+
+   seg = seg_entry_next(&pkt_hdr, &idx);
+   buf_hdr[i] = seg->hdr;
+   }
+
+   if (odp_unlikely(links))
+   packet_free_multi(link_hdr, links);
}
 
-   if (nfree > 0)
-   buffer_free_multi(buf_hdr, nfree);
+   packet_free_multi(buf_hdr, num);
 }
 
 static inline odp_packet_hdr_t *free_segments(odp_packet_hdr_t *pkt_hdr,
  int num, uint32_t free_len,
  uint32_t pull_len, int head)
 {
+   seg_entry_t *seg;
+   int i;
int num_remain = pkt_hdr->buf_hdr.segcount - num;
+   odp_packet_hdr_t *hdr = pkt_hdr;
+   odp_packet_hdr_t *last_hdr 

Re: [lng-odp] [PATCH v1] Packet references re-implementation

2017-09-10 Thread Github ODP bot
Bill Fischofer(Bill-Fischofer-Linaro) replied on github web page:

platform/linux-generic/odp_packet.c
line 608

Re: [lng-odp] [PATCH v1] Packet references re-implementation

2017-09-10 Thread Github ODP bot
Bill Fischofer(Bill-Fischofer-Linaro) replied on github web page:

platform/linux-generic/odp_packet.c
line 562

Comment:
Given what's necessary to enter this arm, shouldn't this be `odp_likely()`?
Isn't the only way `segcount` != `num_seg` that we have a link segment?

https://github.com/Linaro/odp/pull/170#discussion_r137961970
updated_at 2017-09-10 22:52:50


Re: [lng-odp] [PATCH v1] Packet references re-implementation

2017-09-10 Thread Github ODP bot
Bill Fischofer(Bill-Fischofer-Linaro) replied on github web page:

platform/linux-generic/odp_packet.c
line 543


Comment:
```
if (odp_unlikely(seg_is_link(pkt_hdr)))
buf_hdr[num++] = &pkt_hdr->buf_hdr;
```

https://github.com/Linaro/odp/pull/170#discussion_r137961855
updated_at 2017-09-10 22:52:50


Re: [lng-odp] [PATCH v1] Packet references re-implementation

2017-09-10 Thread Github ODP bot
Bill Fischofer(Bill-Fischofer-Linaro) replied on github web page:

platform/linux-generic/odp_packet.c
line 555


Comment:
```
if (odp_unlikely(idx == 0 && seg_is_link(pkt_hdr)))
link_hdr[links++] = &pkt_hdr->buf_hdr;
```

https://github.com/Linaro/odp/pull/170#discussion_r137961940
updated_at 2017-09-10 22:52:50


Re: [lng-odp] [PATCH v1] Packet references re-implementation

2017-09-10 Thread Github ODP bot
Bill Fischofer(Bill-Fischofer-Linaro) replied on github web page:

platform/linux-generic/odp_packet.c
line 499


Comment:
The previous code wrapped the `ref_cnt` manipulation routines in `static 
inline` functions to give more flexibility in the implementation. Not a major 
point, but I think it would make sense to continue that in this version. For 
example, when references are not being used, it's not really necessary 
to use an `odp_atomic_load_u32()` to inspect the value of `ref_cnt`: if you 
read it as zero, you know there's no reference. It's only when `ref_cnt` is 
non-zero that you need the atomics.

https://github.com/Linaro/odp/pull/170#discussion_r137961721
updated_at 2017-09-10 22:52:50
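The wrapper style suggested above might look like the following sketch. It is
modeled with C11 `stdatomic` in place of ODP's `odp_atomic_*` calls so it
compiles outside the ODP tree; the `buf_hdr_t`, `buf_ref_count()`, and
`buf_ref_dec()` names are illustrative, not part of the ODP API:

```c
#include <stdatomic.h>
#include <stdint.h>

/* Hypothetical, stripped-down buffer header; the real odp_buffer_hdr_t
 * carries much more state. */
typedef struct {
	atomic_uint ref_cnt; /* zero until the reference API is first used */
} buf_hdr_t;

/* A plain relaxed load is enough to detect the common "no references"
 * case before paying for an atomic read-modify-write. */
static inline uint32_t buf_ref_count(buf_hdr_t *hdr)
{
	return atomic_load_explicit(&hdr->ref_cnt, memory_order_relaxed);
}

/* Atomic decrement; returns the value before the decrement, matching the
 * fetch-and-dec semantics the free path relies on. */
static inline uint32_t buf_ref_dec(buf_hdr_t *hdr)
{
	return atomic_fetch_sub_explicit(&hdr->ref_cnt, 1,
					 memory_order_acq_rel);
}
```

With helpers like these, switching the underlying representation (or skipping
atomics entirely on single-threaded builds) would touch only the wrappers, not
every call site.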


Re: [lng-odp] [PATCH v1] Packet references re-implementation

2017-09-10 Thread Github ODP bot
Bill Fischofer(Bill-Fischofer-Linaro) replied on github web page:

platform/linux-generic/odp_packet.c
line 502
@@ -414,97 +490,217 @@ static inline odp_packet_hdr_t 
*add_segments(odp_packet_hdr_t *pkt_hdr,
new_hdr->buf_hdr.seg[0].len   = seg_len;
 
packet_seg_copy_md(new_hdr, pkt_hdr);
-   new_hdr->frame_len= pkt_hdr->frame_len + len;
-   new_hdr->unshared_len = pkt_hdr->unshared_len + len;
-   new_hdr->headroom = pool->headroom + offset;
-   new_hdr->tailroom = pkt_hdr->tailroom;
+   new_hdr->frame_len = pkt_hdr->frame_len + len;
+   new_hdr->headroom  = pool->headroom + offset;
+   new_hdr->tailroom  = pkt_hdr->tailroom;
+   new_hdr->shared_len = pkt_hdr->shared_len;
 
pkt_hdr = new_hdr;
} else {
-   int last;
+   seg_entry_t *last_seg;
 
/* add into the tail */
add_all_segs(pkt_hdr, new_hdr);
 
/* adjust last segment length */
-   last = packet_last_seg(pkt_hdr);
-   pkt_hdr->buf_hdr.seg[last].len = seg_len;
+   last_seg  = seg_entry_last(pkt_hdr);
+   last_seg->len = seg_len;
 
-   pkt_hdr->frame_len+= len;
-   pkt_hdr->unshared_len += len;
-   pkt_hdr->tailroom  = pool->tailroom + offset;
+   pkt_hdr->frame_len += len;
+   pkt_hdr->tailroom   = pool->tailroom + offset;
}
 
return pkt_hdr;
 }
 
-static inline void free_bufs(odp_packet_hdr_t *pkt_hdr, int first, int num)
+static inline int seg_is_link(void *hdr)
 {
-   int i, nfree;
-   odp_buffer_hdr_t *buf_hdr[num];
+   odp_packet_hdr_t *pkt_hdr = hdr;
+
+   return pkt_hdr != pkt_hdr->buf_hdr.seg[0].hdr;
+}
+
+static inline void packet_free_multi(odp_buffer_hdr_t *hdr[], int num)
+{
+   int i;
+   uint32_t ref_cnt;
+   int num_ref = 0;
+
+   for (i = 0; i < num; i++) {
+   /* Zero when reference API has not been used */
+   ref_cnt = odp_atomic_load_u32(&hdr[i]->ref_cnt);
+
+   if (odp_unlikely(ref_cnt)) {
+   ref_cnt = odp_atomic_fetch_dec_u32(&hdr[i]->ref_cnt);


Comment:
As above, I'd prefer to see the `odp_atomic_fetch_dec_u32()` in a wrapper 
function. The mainline code shouldn't need to be aware of how `ref_cnt` is 
maintained, only to make the necessary increment/decrement calls.

https://github.com/Linaro/odp/pull/170#discussion_r137961757
updated_at 2017-09-10 22:52:50
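A sketch of the call-site view this comment is asking for: the free path's
decision ("may I free this buffer?") hidden behind one helper, so the mainline
never touches `ref_cnt` directly. C11 `stdatomic` again stands in for
`odp_atomic_*`, and `buf_ref_release()` is a hypothetical helper, not ODP API:

```c
#include <stdatomic.h>
#include <stdint.h>

typedef struct {
	atomic_uint ref_cnt; /* zero until the reference API is first used */
} buf_hdr_t;

static inline uint32_t buf_ref_count(buf_hdr_t *hdr)
{
	return atomic_load_explicit(&hdr->ref_cnt, memory_order_relaxed);
}

static inline uint32_t buf_ref_dec(buf_hdr_t *hdr)
{
	return atomic_fetch_sub_explicit(&hdr->ref_cnt, 1,
					 memory_order_acq_rel);
}

/* Returns 1 when the caller holds the last reference and may free the
 * buffer, 0 when other references remain. */
static inline int buf_ref_release(buf_hdr_t *hdr)
{
	/* ref_cnt == 0 means the reference API was never used: free it. */
	if (buf_ref_count(hdr) == 0)
		return 1;

	/* A previous value of 1 means this was the last reference. */
	return buf_ref_dec(hdr) == 1;
}
```

The loop in `packet_free_multi()` would then pack headers for freeing whenever
`buf_ref_release()` returns 1, without spelling out the load/fetch-dec dance
inline.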


Re: [lng-odp] [PATCH v1] Packet references re-implementation

2017-09-10 Thread Github ODP bot
Bill Fischofer(Bill-Fischofer-Linaro) replied on github web page:

platform/linux-generic/odp_packet.c
line 1993
@@ -2458,6 +2089,184 @@ uint64_t odp_packet_seg_to_u64(odp_packet_seg_t hdl)
return _odp_pri(hdl);
 }
 
+static inline void buffer_ref_inc(odp_buffer_hdr_t *buf_hdr)
+{
+   /* First count increment after alloc */
+   if (odp_atomic_load_u32(&buf_hdr->ref_cnt) == 0)
+   odp_atomic_store_u32(&buf_hdr->ref_cnt, 2);
+   else
+   odp_atomic_inc_u32(&buf_hdr->ref_cnt);
+}
+
+static inline void packet_ref_inc(odp_packet_hdr_t *pkt_hdr)
+{
+   seg_entry_t *seg;
+   int i;
+   int seg_count = pkt_hdr->buf_hdr.segcount;
+   odp_packet_hdr_t *hdr = pkt_hdr;
+   uint8_t idx = 0;
+
+   for (i = 0; i < seg_count; i++) {
+   seg = seg_entry_next(&hdr, &idx);
+   buffer_ref_inc(seg->hdr);
+   }
+}
+
+odp_packet_t odp_packet_ref_static(odp_packet_t pkt)
+{
+   odp_packet_hdr_t *pkt_hdr = packet_hdr(pkt);
+
+   packet_ref_inc(pkt_hdr);
+   pkt_hdr->shared_len = packet_len(pkt_hdr);
+
+   return pkt;
+}
+
+odp_packet_t odp_packet_ref(odp_packet_t pkt, uint32_t offset)
+{
+   odp_packet_t ref;
+   odp_packet_hdr_t *link_hdr;
+   odp_packet_hdr_t *next_hdr;
+   odp_packet_hdr_t *pkt_hdr = packet_hdr(pkt);
+   odp_packet_hdr_t *hdr = pkt_hdr;
+   seg_entry_t *seg;
+   uint32_t seg_idx = 0;
+   uint8_t idx = 0;
+   uint32_t seg_offset = 0;
+   int i, num_copy, segcount;
+   uint32_t len;
+
+   if (offset >= packet_len(pkt_hdr)) {
+   ODP_ERR("offset too large\n");
+   return ODP_PACKET_INVALID;
+   }
+
+   /* Allocate link segment */
+   if (packet_alloc(pkt_hdr->buf_hdr.pool_ptr, 0, 1, 1, &ref) != 1) {
+   ODP_ERR("segment alloc failed\n");
+   return ODP_PACKET_INVALID;
+   }
+
+   link_hdr = packet_hdr(ref);
+
+   seg_entry_find_offset(&hdr, &idx, &seg_offset, &seg_idx, offset);
+   num_copy = hdr->buf_hdr.num_seg - idx;
+   segcount = pkt_hdr->buf_hdr.segcount;
+
+   if (seg_is_link(hdr))
+   buffer_ref_inc((odp_buffer_hdr_t *)hdr);
+
+   seg = seg_entry_next(&hdr, &idx);
+   link_hdr->buf_hdr.num_seg = 1;
+   link_hdr->buf_hdr.seg[0].hdr  = seg->hdr;
+   link_hdr->buf_hdr.seg[0].data = seg->data + seg_offset;
+   link_hdr->buf_hdr.seg[0].len  = seg->len  - seg_offset;
+   buffer_ref_inc(seg->hdr);
+
+   for (i = 1; i < num_copy; i++) {
+   if (idx == 0 && seg_is_link(hdr))
+   buffer_ref_inc((odp_buffer_hdr_t *)hdr);
+
+   seg = seg_entry_next(&hdr, &idx);
+
+   link_hdr->buf_hdr.num_seg++;
+   link_hdr->buf_hdr.seg[i].hdr  = seg->hdr;
+   link_hdr->buf_hdr.seg[i].data = seg->data;
+   link_hdr->buf_hdr.seg[i].len  = seg->len;
+   buffer_ref_inc(seg->hdr);
+   }
+
+   next_hdr = hdr;
+
+   /* Increment ref count for remaining segments */
+   for (i = seg_idx + num_copy; i < segcount; i++) {
+   if (idx == 0 && seg_is_link(hdr))
+   buffer_ref_inc((odp_buffer_hdr_t *)hdr);
+
+   seg = seg_entry_next(&hdr, &idx);
+   buffer_ref_inc(seg->hdr);
+   }
+
+   len = pkt_hdr->frame_len - offset;
+   link_hdr->buf_hdr.next_seg  = next_hdr;
+   link_hdr->buf_hdr.last_seg  = pkt_hdr->buf_hdr.last_seg;
+   link_hdr->buf_hdr.segcount  = segcount - seg_idx;
+   link_hdr->frame_len = len;
+   link_hdr->tailroom  = pkt_hdr->tailroom;
+   link_hdr->shared_len= len;
+
+   /* Link header does not have headroom */
+   link_hdr->headroom  = 0;
+
+   if (pkt_hdr->shared_len < len)
+   pkt_hdr->shared_len = len;
+
+   return ref;
+
+}
+
+odp_packet_t odp_packet_ref_pkt(odp_packet_t pkt, uint32_t offset,
+   odp_packet_t hdr)
+{
+   odp_packet_t ref;
+   int ret;
+   odp_packet_hdr_t *new_hdr;
+   odp_packet_hdr_t *pkt_hdr = packet_hdr(pkt);
+   uint32_t len = packet_len(pkt_hdr);
+
+   ref = odp_packet_ref(pkt, offset);
+
+   if (ref == ODP_PACKET_INVALID) {
+   ODP_ERR("reference create failed\n");
+   return ODP_PACKET_INVALID;
+   }
+
+   ret = odp_packet_concat(&hdr, ref);
+
+   if (ret < 0) {
+   ODP_ERR("concat failed\n");
+   odp_packet_free(ref);
+   return ODP_PACKET_INVALID;
+   }
+
+   new_hdr = packet_hdr(hdr);
+   new_hdr->shared_len = len - offset;
+
+   return hdr;
+
+}
+
+int odp_packet_has_ref(odp_packet_t pkt)
+{
+   odp_buffer_hdr_t *buf_hdr;
+   seg_entry_t *seg;
+   int i;
+   odp_packet_hdr_t *pkt_hdr = packet_hdr(pkt);
+   int seg_count = pkt_hdr->buf_hdr.segcount;
+   odp_packet_hdr_t *hdr = pkt_hdr;
+ 

Re: [lng-odp] [PATCH v1] Packet references re-implementation

2017-09-10 Thread Github ODP bot
Bill Fischofer(Bill-Fischofer-Linaro) replied on github web page:

platform/linux-generic/odp_packet.c
line 747
@@ -611,80 +807,59 @@ int odp_packet_alloc_multi(odp_pool_t pool_hdl, uint32_t 
len,
return num;
 }
 
-static inline void packet_free(odp_packet_hdr_t *pkt_hdr)
+void odp_packet_free(odp_packet_t pkt)
 {
-   odp_packet_hdr_t *ref_hdr;
-   odp_buffer_hdr_t *buf_hdr;
-   uint32_t ref_count;
-   int num_seg;
-
-   do {
-   buf_hdr = &pkt_hdr->buf_hdr;
-   ref_count = packet_ref_count(pkt_hdr);
-   num_seg = pkt_hdr->buf_hdr.segcount;
-   ref_hdr = pkt_hdr->ref_hdr;
-   ODP_ASSERT(ref_count >= 1);
-
-   if (odp_likely((CONFIG_PACKET_MAX_SEGS == 1 || num_seg == 1) &&
-  ref_count == 1)) {
-   ODP_ASSERT((packet_ref_count_set(pkt_hdr, 0), 1));
-   buffer_free_multi(&buf_hdr, 1);
-   } else {
-   free_bufs(pkt_hdr, 0, num_seg);
-   }
+   odp_packet_hdr_t *pkt_hdr = packet_hdr(pkt);
+   int num_seg = pkt_hdr->buf_hdr.segcount;
 
-   pkt_hdr = ref_hdr;
-   } while (pkt_hdr);
-}
+   if (odp_likely(CONFIG_PACKET_MAX_SEGS == 1 || num_seg == 1)) {
+   odp_buffer_hdr_t *buf_hdr[2];
+   int num = 1;
 
-void odp_packet_free(odp_packet_t pkt)
-{
-   packet_free(packet_hdr(pkt));
+   buf_hdr[0] = &pkt_hdr->buf_hdr;
+
+   if (odp_unlikely(seg_is_link(pkt_hdr))) {
+   buf_hdr[1] = pkt_hdr->buf_hdr.seg[0].hdr;
+   num++;
+   }


Comment:
This could simply be:
```
if (odp_unlikely(seg_is_link(pkt_hdr)))
buf_hdr[num++] = pkt_hdr->buf_hdr.seg[0].hdr;
```
Which looks a bit tidier.

https://github.com/Linaro/odp/pull/170#discussion_r137961037
updated_at 2017-09-10 22:52:50


Re: [lng-odp] [PATCH v1] Packet references re-implementation

2017-09-10 Thread Github ODP bot
Bill Fischofer(Bill-Fischofer-Linaro) replied on github web page:

platform/linux-generic/odp_packet.c
line 806
@@ -611,80 +807,59 @@ int odp_packet_alloc_multi(odp_pool_t pool_hdl, uint32_t 
len,
return num;
 }
 
-static inline void packet_free(odp_packet_hdr_t *pkt_hdr)
+void odp_packet_free(odp_packet_t pkt)
 {
-   odp_packet_hdr_t *ref_hdr;
-   odp_buffer_hdr_t *buf_hdr;
-   uint32_t ref_count;
-   int num_seg;
-
-   do {
-   buf_hdr = &pkt_hdr->buf_hdr;
-   ref_count = packet_ref_count(pkt_hdr);
-   num_seg = pkt_hdr->buf_hdr.segcount;
-   ref_hdr = pkt_hdr->ref_hdr;
-   ODP_ASSERT(ref_count >= 1);
-
-   if (odp_likely((CONFIG_PACKET_MAX_SEGS == 1 || num_seg == 1) &&
-  ref_count == 1)) {
-   ODP_ASSERT((packet_ref_count_set(pkt_hdr, 0), 1));
-   buffer_free_multi(&buf_hdr, 1);
-   } else {
-   free_bufs(pkt_hdr, 0, num_seg);
-   }
+   odp_packet_hdr_t *pkt_hdr = packet_hdr(pkt);
+   int num_seg = pkt_hdr->buf_hdr.segcount;
 
-   pkt_hdr = ref_hdr;
-   } while (pkt_hdr);
-}
+   if (odp_likely(CONFIG_PACKET_MAX_SEGS == 1 || num_seg == 1)) {
+   odp_buffer_hdr_t *buf_hdr[2];
+   int num = 1;
 
-void odp_packet_free(odp_packet_t pkt)
-{
-   packet_free(packet_hdr(pkt));
+   buf_hdr[0] = &pkt_hdr->buf_hdr;
+
+   if (odp_unlikely(seg_is_link(pkt_hdr))) {
+   buf_hdr[1] = pkt_hdr->buf_hdr.seg[0].hdr;
+   num++;
+   }
+
+   packet_free_multi(buf_hdr, num);
+   } else {
+   free_all_segments(pkt_hdr, num_seg);
+   }
 }
 
 void odp_packet_free_multi(const odp_packet_t pkt[], int num)
 {
-   odp_packet_hdr_t *pkt_hdr, *ref_hdr, *hdr;
-   int nbufs = num * CONFIG_PACKET_MAX_SEGS * 2;
-   odp_buffer_hdr_t *buf_hdr[nbufs];
-   int num_seg;
-   int i, j;
-   uint32_t ref_count;
-   int nfree = 0;
+   odp_buffer_hdr_t *buf_hdr[num];
+   odp_buffer_hdr_t *buf_hdr2[num];
+   int i;
+   int links = 0;
+   int num_freed = 0;
 
for (i = 0; i < num; i++) {
-   pkt_hdr = packet_hdr(pkt[i]);
-
-   do {
-   num_seg = pkt_hdr->buf_hdr.segcount;
-   ref_hdr = pkt_hdr->ref_hdr;
+   odp_packet_hdr_t *pkt_hdr = packet_hdr(pkt[i]);
+   int num_seg = pkt_hdr->buf_hdr.segcount;
 
-   /* Make sure we have enough space for this pkt's segs */
-   if (nfree + num_seg > nbufs) {
-   buffer_free_multi(buf_hdr, nfree);
-   nfree = 0;
-   }
+   if (odp_unlikely(num_seg > 1)) {
+   free_all_segments(pkt_hdr, num_seg);
+   num_freed++;
+   continue;
+   }
 
-   for (j = 0; j < num_seg; j++) {
-   hdr = pkt_hdr->buf_hdr.seg[j].hdr;
-   ref_count = packet_ref_count(hdr);
-   ODP_ASSERT(ref_count >= 1);
-
-   if (ref_count == 1 ||
-   packet_ref_dec(hdr) == 1) {
-   ODP_ASSERT
-   ((packet_ref_count_set(hdr, 0),
- 1));
-   buf_hdr[nfree++] = &hdr->buf_hdr;
-   }
-   }
+   if (odp_unlikely(seg_is_link(pkt_hdr))) {
+   buf_hdr2[links] = pkt_hdr->buf_hdr.seg[0].hdr;
+   links++;
+   }


Comment:
```
if (odp_unlikely(seg_is_link(pkt_hdr)))
buf_hdr2[links++] = pkt_hdr->buf_hdr.seg[0].hdr;
```

https://github.com/Linaro/odp/pull/170#discussion_r137961586
updated_at 2017-09-10 22:52:50


Re: [lng-odp] [PATCH v1] Packet references re-implementation

2017-09-10 Thread Github ODP bot
Bill Fischofer(Bill-Fischofer-Linaro) replied on github web page:

platform/linux-generic/odp_packet.c
line 1927
@@ -2458,6 +2089,184 @@ uint64_t odp_packet_seg_to_u64(odp_packet_seg_t hdl)
return _odp_pri(hdl);
 }
 
+static inline void buffer_ref_inc(odp_buffer_hdr_t *buf_hdr)
+{
+   /* First count increment after alloc */
+   if (odp_atomic_load_u32(&buf_hdr->ref_cnt) == 0)
+   odp_atomic_store_u32(&buf_hdr->ref_cnt, 2);
+   else
+   odp_atomic_inc_u32(&buf_hdr->ref_cnt);
+}
+
+static inline void packet_ref_inc(odp_packet_hdr_t *pkt_hdr)
+{
+   seg_entry_t *seg;
+   int i;
+   int seg_count = pkt_hdr->buf_hdr.segcount;
+   odp_packet_hdr_t *hdr = pkt_hdr;
+   uint8_t idx = 0;
+
+   for (i = 0; i < seg_count; i++) {
+   seg = seg_entry_next(&hdr, &idx);
+   buffer_ref_inc(seg->hdr);
+   }
+}
+
+odp_packet_t odp_packet_ref_static(odp_packet_t pkt)
+{
+   odp_packet_hdr_t *pkt_hdr = packet_hdr(pkt);
+
+   packet_ref_inc(pkt_hdr);
+   pkt_hdr->shared_len = packet_len(pkt_hdr);
+
+   return pkt;
+}
+
+odp_packet_t odp_packet_ref(odp_packet_t pkt, uint32_t offset)
+{
+   odp_packet_t ref;
+   odp_packet_hdr_t *link_hdr;
+   odp_packet_hdr_t *next_hdr;
+   odp_packet_hdr_t *pkt_hdr = packet_hdr(pkt);
+   odp_packet_hdr_t *hdr = pkt_hdr;
+   seg_entry_t *seg;
+   uint32_t seg_idx = 0;
+   uint8_t idx = 0;
+   uint32_t seg_offset = 0;
+   int i, num_copy, segcount;
+   uint32_t len;
+
+   if (offset >= packet_len(pkt_hdr)) {
+   ODP_ERR("offset too large\n");
+   return ODP_PACKET_INVALID;
+   }
+
+   /* Allocate link segment */
+   if (packet_alloc(pkt_hdr->buf_hdr.pool_ptr, 0, 1, 1, &ref) != 1) {
+   ODP_ERR("segment alloc failed\n");
+   return ODP_PACKET_INVALID;
+   }
+
+   link_hdr = packet_hdr(ref);
+
+   seg_entry_find_offset(&hdr, &idx, &seg_offset, &seg_idx, offset);
+   num_copy = hdr->buf_hdr.num_seg - idx;
+   segcount = pkt_hdr->buf_hdr.segcount;
+
+   if (seg_is_link(hdr))
+   buffer_ref_inc((odp_buffer_hdr_t *)hdr);
+
+   seg = seg_entry_next(&hdr, &idx);
+   link_hdr->buf_hdr.num_seg = 1;
+   link_hdr->buf_hdr.seg[0].hdr  = seg->hdr;
+   link_hdr->buf_hdr.seg[0].data = seg->data + seg_offset;
+   link_hdr->buf_hdr.seg[0].len  = seg->len  - seg_offset;
+   buffer_ref_inc(seg->hdr);
+
+   for (i = 1; i < num_copy; i++) {
+   if (idx == 0 && seg_is_link(hdr))
+   buffer_ref_inc((odp_buffer_hdr_t *)hdr);
+
+   seg = seg_entry_next(&hdr, &idx);
+
+   link_hdr->buf_hdr.num_seg++;
+   link_hdr->buf_hdr.seg[i].hdr  = seg->hdr;
+   link_hdr->buf_hdr.seg[i].data = seg->data;
+   link_hdr->buf_hdr.seg[i].len  = seg->len;
+   buffer_ref_inc(seg->hdr);
+   }
+
+   next_hdr = hdr;
+
+   /* Increment ref count for remaining segments */
+   for (i = seg_idx + num_copy; i < segcount; i++) {
+   if (idx == 0 && seg_is_link(hdr))
+   buffer_ref_inc((odp_buffer_hdr_t *)hdr);
+
+   seg = seg_entry_next(&hdr, &idx);
+   buffer_ref_inc(seg->hdr);
+   }
+
+   len = pkt_hdr->frame_len - offset;
+   link_hdr->buf_hdr.next_seg  = next_hdr;
+   link_hdr->buf_hdr.last_seg  = pkt_hdr->buf_hdr.last_seg;
+   link_hdr->buf_hdr.segcount  = segcount - seg_idx;
+   link_hdr->frame_len = len;
+   link_hdr->tailroom  = pkt_hdr->tailroom;
+   link_hdr->shared_len= len;
+
+   /* Link header does not have headroom */
+   link_hdr->headroom  = 0;


Comment:
I think I see why, but some explanation would again be helpful here.

https://github.com/Linaro/odp/pull/170#discussion_r137960722
updated_at 2017-09-10 22:52:49


Re: [lng-odp] [PATCH v1] Packet references re-implementation

2017-09-10 Thread Github ODP bot
Bill Fischofer(Bill-Fischofer-Linaro) replied on github web page:

platform/linux-generic/odp_packet.c
line 1904
@@ -2458,6 +2089,184 @@ uint64_t odp_packet_seg_to_u64(odp_packet_seg_t hdl)
return _odp_pri(hdl);
 }
 
+static inline void buffer_ref_inc(odp_buffer_hdr_t *buf_hdr)
+{
+   /* First count increment after alloc */
+   if (odp_atomic_load_u32(&buf_hdr->ref_cnt) == 0)
+   odp_atomic_store_u32(&buf_hdr->ref_cnt, 2);
+   else
+   odp_atomic_inc_u32(&buf_hdr->ref_cnt);
+}
+
+static inline void packet_ref_inc(odp_packet_hdr_t *pkt_hdr)
+{
+   seg_entry_t *seg;
+   int i;
+   int seg_count = pkt_hdr->buf_hdr.segcount;
+   odp_packet_hdr_t *hdr = pkt_hdr;
+   uint8_t idx = 0;
+
+   for (i = 0; i < seg_count; i++) {
+   seg = seg_entry_next(&hdr, &idx);
+   buffer_ref_inc(seg->hdr);
+   }
+}
+
+odp_packet_t odp_packet_ref_static(odp_packet_t pkt)
+{
+   odp_packet_hdr_t *pkt_hdr = packet_hdr(pkt);
+
+   packet_ref_inc(pkt_hdr);
+   pkt_hdr->shared_len = packet_len(pkt_hdr);
+
+   return pkt;
+}
+
+odp_packet_t odp_packet_ref(odp_packet_t pkt, uint32_t offset)
+{
+   odp_packet_t ref;
+   odp_packet_hdr_t *link_hdr;
+   odp_packet_hdr_t *next_hdr;
+   odp_packet_hdr_t *pkt_hdr = packet_hdr(pkt);
+   odp_packet_hdr_t *hdr = pkt_hdr;
+   seg_entry_t *seg;
+   uint32_t seg_idx = 0;
+   uint8_t idx = 0;
+   uint32_t seg_offset = 0;
+   int i, num_copy, segcount;
+   uint32_t len;
+
+   if (offset >= packet_len(pkt_hdr)) {
+   ODP_ERR("offset too large\n");
+   return ODP_PACKET_INVALID;
+   }
+
+   /* Allocate link segment */
+   if (packet_alloc(pkt_hdr->buf_hdr.pool_ptr, 0, 1, 1, &ref) != 1) {
+   ODP_ERR("segment alloc failed\n");
+   return ODP_PACKET_INVALID;
+   }
+
+   link_hdr = packet_hdr(ref);
+
+   seg_entry_find_offset(&hdr, &idx, &seg_offset, &seg_idx, offset);
+   num_copy = hdr->buf_hdr.num_seg - idx;
+   segcount = pkt_hdr->buf_hdr.segcount;
+
+   if (seg_is_link(hdr))
+   buffer_ref_inc((odp_buffer_hdr_t *)hdr);
+
+   seg = seg_entry_next(&hdr, &idx);
+   link_hdr->buf_hdr.num_seg = 1;
+   link_hdr->buf_hdr.seg[0].hdr  = seg->hdr;
+   link_hdr->buf_hdr.seg[0].data = seg->data + seg_offset;
+   link_hdr->buf_hdr.seg[0].len  = seg->len  - seg_offset;
+   buffer_ref_inc(seg->hdr);
+
+   for (i = 1; i < num_copy; i++) {
+   if (idx == 0 && seg_is_link(hdr))
+   buffer_ref_inc((odp_buffer_hdr_t *)hdr);
+
+   seg = seg_entry_next(&hdr, &idx);
+
+   link_hdr->buf_hdr.num_seg++;
+   link_hdr->buf_hdr.seg[i].hdr  = seg->hdr;
+   link_hdr->buf_hdr.seg[i].data = seg->data;
+   link_hdr->buf_hdr.seg[i].len  = seg->len;
+   buffer_ref_inc(seg->hdr);


Comment:
Again this seems to imply that we're double-incrementing the `ref_cnt` for the 
first segment when dealing with links. Some explanation would be helpful here.

https://github.com/Linaro/odp/pull/170#discussion_r137960662
updated_at 2017-09-10 22:52:49


Re: [lng-odp] [PATCH v1] Packet references re-implementation

2017-09-10 Thread Github ODP bot
Bill Fischofer(Bill-Fischofer-Linaro) replied on github web page:

platform/linux-generic/odp_packet.c
line 1824
@@ -2458,6 +2089,184 @@ uint64_t odp_packet_seg_to_u64(odp_packet_seg_t hdl)
return _odp_pri(hdl);
 }
 
+static inline void buffer_ref_inc(odp_buffer_hdr_t *buf_hdr)
+{
+   /* First count increment after alloc */
+   if (odp_atomic_load_u32(&buf_hdr->ref_cnt) == 0)
+   odp_atomic_store_u32(&buf_hdr->ref_cnt, 2);


Comment:
Very clever performance optimization trick. Nice! Might an `odp_likely()` be 
called for here since single references are going to be the most common?

https://github.com/Linaro/odp/pull/170#discussion_r137960532
updated_at 2017-09-10 22:52:49
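The hint suggested above could be added as in this sketch. In linux-generic,
`odp_likely()` boils down to GCC's `__builtin_expect`, modeled here with a
local `likely()` macro so the snippet compiles standalone; the plain
`atomic_uint *` parameter is a stand-in for the real `odp_atomic_u32_t`:

```c
#include <stdatomic.h>
#include <stdint.h>

/* Stand-in for odp_likely(): hint that the condition is usually true. */
#define likely(x) __builtin_expect(!!(x), 1)

static inline void buffer_ref_inc(atomic_uint *ref_cnt)
{
	/* First increment after alloc goes straight from 0 to 2: one count
	 * for the original owner plus one for the new reference. Hinting
	 * this branch as likely favors the common single-reference case. */
	if (likely(atomic_load(ref_cnt) == 0))
		atomic_store(ref_cnt, 2);
	else
		atomic_fetch_add(ref_cnt, 1);
}
```

The hint only reorders generated code; the 0-to-2 trick itself is unchanged
from the patch under review.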


Re: [lng-odp] [PATCH v1] Packet references re-implementation

2017-09-10 Thread Github ODP bot
Bill Fischofer(Bill-Fischofer-Linaro) replied on github web page:

platform/linux-generic/odp_packet.c
line 1915
@@ -2458,6 +2089,184 @@ uint64_t odp_packet_seg_to_u64(odp_packet_seg_t hdl)
return _odp_pri(hdl);
 }
 
+static inline void buffer_ref_inc(odp_buffer_hdr_t *buf_hdr)
+{
+   /* First count increment after alloc */
+   if (odp_atomic_load_u32(&buf_hdr->ref_cnt) == 0)
+   odp_atomic_store_u32(&buf_hdr->ref_cnt, 2);
+   else
+   odp_atomic_inc_u32(&buf_hdr->ref_cnt);
+}
+
+static inline void packet_ref_inc(odp_packet_hdr_t *pkt_hdr)
+{
+   seg_entry_t *seg;
+   int i;
+   int seg_count = pkt_hdr->buf_hdr.segcount;
+   odp_packet_hdr_t *hdr = pkt_hdr;
+   uint8_t idx = 0;
+
+   for (i = 0; i < seg_count; i++) {
+   seg = seg_entry_next(&hdr, &idx);
+   buffer_ref_inc(seg->hdr);
+   }
+}
+
+odp_packet_t odp_packet_ref_static(odp_packet_t pkt)
+{
+   odp_packet_hdr_t *pkt_hdr = packet_hdr(pkt);
+
+   packet_ref_inc(pkt_hdr);
+   pkt_hdr->shared_len = packet_len(pkt_hdr);
+
+   return pkt;
+}
+
+odp_packet_t odp_packet_ref(odp_packet_t pkt, uint32_t offset)
+{
+   odp_packet_t ref;
+   odp_packet_hdr_t *link_hdr;
+   odp_packet_hdr_t *next_hdr;
+   odp_packet_hdr_t *pkt_hdr = packet_hdr(pkt);
+   odp_packet_hdr_t *hdr = pkt_hdr;
+   seg_entry_t *seg;
+   uint32_t seg_idx = 0;
+   uint8_t idx = 0;
+   uint32_t seg_offset = 0;
+   int i, num_copy, segcount;
+   uint32_t len;
+
+   if (offset >= packet_len(pkt_hdr)) {
+   ODP_ERR("offset too large\n");
+   return ODP_PACKET_INVALID;
+   }
+
+   /* Allocate link segment */
+   if (packet_alloc(pkt_hdr->buf_hdr.pool_ptr, 0, 1, 1, &ref) != 1) {
+   ODP_ERR("segment alloc failed\n");
+   return ODP_PACKET_INVALID;
+   }
+
+   link_hdr = packet_hdr(ref);
+
+   seg_entry_find_offset(&hdr, &idx, &seg_offset, &seg_idx, offset);
+   num_copy = hdr->buf_hdr.num_seg - idx;
+   segcount = pkt_hdr->buf_hdr.segcount;
+
+   if (seg_is_link(hdr))
+   buffer_ref_inc((odp_buffer_hdr_t *)hdr);
+
+   seg = seg_entry_next(&hdr, &idx);
+   link_hdr->buf_hdr.num_seg = 1;
+   link_hdr->buf_hdr.seg[0].hdr  = seg->hdr;
+   link_hdr->buf_hdr.seg[0].data = seg->data + seg_offset;
+   link_hdr->buf_hdr.seg[0].len  = seg->len  - seg_offset;
+   buffer_ref_inc(seg->hdr);
+
+   for (i = 1; i < num_copy; i++) {
+   if (idx == 0 && seg_is_link(hdr))
+   buffer_ref_inc((odp_buffer_hdr_t *)hdr);
+
+   seg = seg_entry_next(&hdr, &idx);
+
+   link_hdr->buf_hdr.num_seg++;
+   link_hdr->buf_hdr.seg[i].hdr  = seg->hdr;
+   link_hdr->buf_hdr.seg[i].data = seg->data;
+   link_hdr->buf_hdr.seg[i].len  = seg->len;
+   buffer_ref_inc(seg->hdr);
+   }
+
+   next_hdr = hdr;
+
+   /* Increment ref count for remaining segments */
+   for (i = seg_idx + num_copy; i < segcount; i++) {
+   if (idx == 0 && seg_is_link(hdr))
+   buffer_ref_inc((odp_buffer_hdr_t *)hdr);
+
+   seg = seg_entry_next(&hdr, &idx);
+   buffer_ref_inc(seg->hdr);


Comment:
Same double-increment comment here.

https://github.com/Linaro/odp/pull/170#discussion_r137960689
updated_at 2017-09-10 22:52:49


Re: [lng-odp] [PATCH v1] Packet references re-implementation

2017-09-10 Thread Github ODP bot
Bill Fischofer(Bill-Fischofer-Linaro) replied on github web page:

platform/linux-generic/odp_packet.c
line 102
@@ -88,19 +74,112 @@ static inline odp_buffer_t packet_to_buffer(odp_packet_t 
pkt)
return (odp_buffer_t)pkt;
 }
 
+static inline seg_entry_t *seg_entry(odp_packet_hdr_t *hdr,
+uint32_t seg_idx)
+{
+   uint32_t idx = 0;
+   uint8_t num_seg = hdr->buf_hdr.num_seg;
+
+   while (odp_unlikely(idx + num_seg - 1 < seg_idx)) {
+   idx += num_seg;
+   hdr = hdr->buf_hdr.next_seg;
+   num_seg = hdr->buf_hdr.num_seg;
+   }
+
+   idx = seg_idx - idx;
+
+   return &hdr->buf_hdr.seg[idx];
+}
+
+static inline void seg_entry_find_idx(odp_packet_hdr_t **p_hdr,
+ uint8_t *p_idx,
+ uint32_t find_idx)
+{
+   odp_packet_hdr_t *hdr = *p_hdr;
+   uint32_t idx = 0;
+   uint8_t num_seg = hdr->buf_hdr.num_seg;
+
+   while (odp_unlikely(idx + num_seg - 1 < find_idx)) {
+   idx += num_seg;
+   hdr = hdr->buf_hdr.next_seg;
+   num_seg = hdr->buf_hdr.num_seg;
+   }
+
+   idx = find_idx - idx;
+   *p_hdr = hdr;
+   *p_idx = idx;
+}
+
+static inline seg_entry_t *seg_entry_next(odp_packet_hdr_t **cur_hdr,
+ uint8_t *cur_idx)
+{
+   odp_packet_hdr_t *hdr = *cur_hdr;
+   uint8_t idx = *cur_idx;
+   uint8_t num_seg = hdr->buf_hdr.num_seg;
+
+   if (idx == num_seg - 1) {
+   *cur_hdr = hdr->buf_hdr.next_seg;
+   *cur_idx = 0;
+   } else {
+   *cur_idx = idx + 1;
+   }
+
+   return &hdr->buf_hdr.seg[idx];
+}
+
+static inline void seg_entry_find_offset(odp_packet_hdr_t **p_hdr,
+uint8_t *p_idx,
+uint32_t *seg_offset,
+uint32_t *seg_idx,
+uint32_t offset)
+{
+   int i;
+   odp_packet_hdr_t *hdr, *cur_hdr;
+   uint8_t idx, cur_idx;
+   seg_entry_t *seg = NULL;
+   uint32_t seg_start = 0, seg_end = 0;
+   int seg_count;
+
+   hdr = *p_hdr;
+   cur_hdr = hdr;
+   idx = 0;
+   cur_idx = 0;
+   seg_count = hdr->buf_hdr.segcount;


Comment:
Since `seg_count` >= 1, the init of `cur_hdr` and `cur_idx` seems redundant 
here since they are overwritten in the `for` loop below. Is there a corner case 
where `seg_count` can be 0 to make this necessary?

https://github.com/Linaro/odp/pull/170#discussion_r137960356
updated_at 2017-09-10 22:52:49


Re: [lng-odp] [PATCH v1] Packet references re-implementation

2017-09-10 Thread Github ODP bot
Bill Fischofer(Bill-Fischofer-Linaro) replied on github web page:

platform/linux-generic/odp_packet.c
line 1892
@@ -2458,6 +2089,184 @@ uint64_t odp_packet_seg_to_u64(odp_packet_seg_t hdl)
return _odp_pri(hdl);
 }
 
+static inline void buffer_ref_inc(odp_buffer_hdr_t *buf_hdr)
+{
+   /* First count increment after alloc */
+   if (odp_atomic_load_u32(&buf_hdr->ref_cnt) == 0)
+   odp_atomic_store_u32(&buf_hdr->ref_cnt, 2);
+   else
+   odp_atomic_inc_u32(&buf_hdr->ref_cnt);
+}
+
+static inline void packet_ref_inc(odp_packet_hdr_t *pkt_hdr)
+{
+   seg_entry_t *seg;
+   int i;
+   int seg_count = pkt_hdr->buf_hdr.segcount;
+   odp_packet_hdr_t *hdr = pkt_hdr;
+   uint8_t idx = 0;
+
+   for (i = 0; i < seg_count; i++) {
+   seg = seg_entry_next(&hdr, &idx);
+   buffer_ref_inc(seg->hdr);
+   }
+}
+
+odp_packet_t odp_packet_ref_static(odp_packet_t pkt)
+{
+   odp_packet_hdr_t *pkt_hdr = packet_hdr(pkt);
+
+   packet_ref_inc(pkt_hdr);
+   pkt_hdr->shared_len = packet_len(pkt_hdr);
+
+   return pkt;
+}
+
+odp_packet_t odp_packet_ref(odp_packet_t pkt, uint32_t offset)
+{
+   odp_packet_t ref;
+   odp_packet_hdr_t *link_hdr;
+   odp_packet_hdr_t *next_hdr;
+   odp_packet_hdr_t *pkt_hdr = packet_hdr(pkt);
+   odp_packet_hdr_t *hdr = pkt_hdr;
+   seg_entry_t *seg;
+   uint32_t seg_idx = 0;
+   uint8_t idx = 0;
+   uint32_t seg_offset = 0;
+   int i, num_copy, segcount;
+   uint32_t len;
+
+   if (offset >= packet_len(pkt_hdr)) {
+   ODP_ERR("offset too large\n");
+   return ODP_PACKET_INVALID;
+   }
+
+   /* Allocate link segment */
+   if (packet_alloc(pkt_hdr->buf_hdr.pool_ptr, 0, 1, 1, &ref) != 1) {
+   ODP_ERR("segment alloc failed\n");
+   return ODP_PACKET_INVALID;
+   }
+
+   link_hdr = packet_hdr(ref);
+
+   seg_entry_find_offset(&hdr, &idx, &seg_offset, &seg_idx, offset);
+   num_copy = hdr->buf_hdr.num_seg - idx;
+   segcount = pkt_hdr->buf_hdr.segcount;
+
+   if (seg_is_link(hdr))
+   buffer_ref_inc((odp_buffer_hdr_t *)hdr);
+
+   seg = seg_entry_next(&hdr, &idx);
+   link_hdr->buf_hdr.num_seg = 1;
+   link_hdr->buf_hdr.seg[0].hdr  = seg->hdr;
+   link_hdr->buf_hdr.seg[0].data = seg->data + seg_offset;
+   link_hdr->buf_hdr.seg[0].len  = seg->len  - seg_offset;
+   buffer_ref_inc(seg->hdr);


Comment:
Doesn't this cause `hdr` to have its `ref_cnt` incremented by 2 when 
`seg_is_link(hdr)`? This is obviously some very subtle code here that could 
benefit from a few comments explaining the link structure and logic.

https://github.com/Linaro/odp/pull/170#discussion_r137959942
updated_at 2017-09-10 22:52:49


Re: [lng-odp] [PATCH v1] Packet references re-implementation

2017-09-10 Thread Github ODP bot
Bill Fischofer(Bill-Fischofer-Linaro) replied on github web page:

platform/linux-generic/odp_packet.c
line 1955
@@ -2458,6 +2089,184 @@ uint64_t odp_packet_seg_to_u64(odp_packet_seg_t hdl)
return _odp_pri(hdl);
 }
 
+static inline void buffer_ref_inc(odp_buffer_hdr_t *buf_hdr)
+{
+   /* First count increment after alloc */
+   if (odp_atomic_load_u32(&buf_hdr->ref_cnt) == 0)
+   odp_atomic_store_u32(&buf_hdr->ref_cnt, 2);
+   else
+   odp_atomic_inc_u32(&buf_hdr->ref_cnt);
+}
+
+static inline void packet_ref_inc(odp_packet_hdr_t *pkt_hdr)
+{
+   seg_entry_t *seg;
+   int i;
+   int seg_count = pkt_hdr->buf_hdr.segcount;
+   odp_packet_hdr_t *hdr = pkt_hdr;
+   uint8_t idx = 0;
+
+   for (i = 0; i < seg_count; i++) {
+   seg = seg_entry_next(&hdr, &idx);
+   buffer_ref_inc(seg->hdr);
+   }
+}
+
+odp_packet_t odp_packet_ref_static(odp_packet_t pkt)
+{
+   odp_packet_hdr_t *pkt_hdr = packet_hdr(pkt);
+
+   packet_ref_inc(pkt_hdr);
+   pkt_hdr->shared_len = packet_len(pkt_hdr);
+
+   return pkt;
+}
+
+odp_packet_t odp_packet_ref(odp_packet_t pkt, uint32_t offset)
+{
+   odp_packet_t ref;
+   odp_packet_hdr_t *link_hdr;
+   odp_packet_hdr_t *next_hdr;
+   odp_packet_hdr_t *pkt_hdr = packet_hdr(pkt);
+   odp_packet_hdr_t *hdr = pkt_hdr;
+   seg_entry_t *seg;
+   uint32_t seg_idx = 0;
+   uint8_t idx = 0;
+   uint32_t seg_offset = 0;
+   int i, num_copy, segcount;
+   uint32_t len;
+
+   if (offset >= packet_len(pkt_hdr)) {
+   ODP_ERR("offset too large\n");
+   return ODP_PACKET_INVALID;
+   }
+
+   /* Allocate link segment */
+   if (packet_alloc(pkt_hdr->buf_hdr.pool_ptr, 0, 1, 1, &ref) != 1) {
+   ODP_ERR("segment alloc failed\n");
+   return ODP_PACKET_INVALID;
+   }
+
+   link_hdr = packet_hdr(ref);
+
+   seg_entry_find_offset(&hdr, &idx, &seg_offset, &seg_idx, offset);
+   num_copy = hdr->buf_hdr.num_seg - idx;
+   segcount = pkt_hdr->buf_hdr.segcount;
+
+   if (seg_is_link(hdr))
+   buffer_ref_inc((odp_buffer_hdr_t *)hdr);
+
+   seg = seg_entry_next(&hdr, &idx);
+   link_hdr->buf_hdr.num_seg = 1;
+   link_hdr->buf_hdr.seg[0].hdr  = seg->hdr;
+   link_hdr->buf_hdr.seg[0].data = seg->data + seg_offset;
+   link_hdr->buf_hdr.seg[0].len  = seg->len  - seg_offset;
+   buffer_ref_inc(seg->hdr);
+
+   for (i = 1; i < num_copy; i++) {
+   if (idx == 0 && seg_is_link(hdr))
+   buffer_ref_inc((odp_buffer_hdr_t *)hdr);
+
+   seg = seg_entry_next(&hdr, &idx);
+
+   link_hdr->buf_hdr.num_seg++;
+   link_hdr->buf_hdr.seg[i].hdr  = seg->hdr;
+   link_hdr->buf_hdr.seg[i].data = seg->data;
+   link_hdr->buf_hdr.seg[i].len  = seg->len;
+   buffer_ref_inc(seg->hdr);
+   }
+
+   next_hdr = hdr;
+
+   /* Increment ref count for remaining segments */
+   for (i = seg_idx + num_copy; i < segcount; i++) {
+   if (idx == 0 && seg_is_link(hdr))
+   buffer_ref_inc((odp_buffer_hdr_t *)hdr);
+
+   seg = seg_entry_next(&hdr, &idx);
+   buffer_ref_inc(seg->hdr);
+   }
+
+   len = pkt_hdr->frame_len - offset;
+   link_hdr->buf_hdr.next_seg  = next_hdr;
+   link_hdr->buf_hdr.last_seg  = pkt_hdr->buf_hdr.last_seg;
+   link_hdr->buf_hdr.segcount  = segcount - seg_idx;
+   link_hdr->frame_len = len;
+   link_hdr->tailroom  = pkt_hdr->tailroom;
+   link_hdr->shared_len = len;
+
+   /* Link header does not have headroom */
+   link_hdr->headroom  = 0;
+
+   if (pkt_hdr->shared_len < len)
+   pkt_hdr->shared_len = len;
+
+   return ref;
+
+}
+
+odp_packet_t odp_packet_ref_pkt(odp_packet_t pkt, uint32_t offset,
+   odp_packet_t hdr)
+{
+   odp_packet_t ref;
+   int ret;
+   odp_packet_hdr_t *new_hdr;
+   odp_packet_hdr_t *pkt_hdr = packet_hdr(pkt);
+   uint32_t len = packet_len(pkt_hdr);
+
+   ref = odp_packet_ref(pkt, offset);
+
+   if (ref == ODP_PACKET_INVALID) {
+   ODP_ERR("reference create failed\n");
+   return ODP_PACKET_INVALID;
+   }
+
+   ret = odp_packet_concat(&hdr, ref);
+
+   if (ret < 0) {
+   ODP_ERR("concat failed\n");


Comment:
`ODP_DBG()`

https://github.com/Linaro/odp/pull/170#discussion_r137939629
updated_at 2017-09-10 22:52:49


Re: [lng-odp] [PATCH v1] Packet references re-implementation

2017-09-10 Thread Github ODP bot
Bill Fischofer(Bill-Fischofer-Linaro) replied on github web page:

platform/linux-generic/odp_packet.c
line 1874
@@ -2458,6 +2089,184 @@ uint64_t odp_packet_seg_to_u64(odp_packet_seg_t hdl)
return _odp_pri(hdl);
 }
 
+static inline void buffer_ref_inc(odp_buffer_hdr_t *buf_hdr)
+{
+   /* First count increment after alloc */
+   if (odp_atomic_load_u32(&buf_hdr->ref_cnt) == 0)
+   odp_atomic_store_u32(&buf_hdr->ref_cnt, 2);
+   else
+   odp_atomic_inc_u32(&buf_hdr->ref_cnt);
+}
+
+static inline void packet_ref_inc(odp_packet_hdr_t *pkt_hdr)
+{
+   seg_entry_t *seg;
+   int i;
+   int seg_count = pkt_hdr->buf_hdr.segcount;
+   odp_packet_hdr_t *hdr = pkt_hdr;
+   uint8_t idx = 0;
+
+   for (i = 0; i < seg_count; i++) {
+   seg = seg_entry_next(&hdr, &idx);
+   buffer_ref_inc(seg->hdr);
+   }
+}
+
+odp_packet_t odp_packet_ref_static(odp_packet_t pkt)
+{
+   odp_packet_hdr_t *pkt_hdr = packet_hdr(pkt);
+
+   packet_ref_inc(pkt_hdr);
+   pkt_hdr->shared_len = packet_len(pkt_hdr);
+
+   return pkt;
+}
+
+odp_packet_t odp_packet_ref(odp_packet_t pkt, uint32_t offset)
+{
+   odp_packet_t ref;
+   odp_packet_hdr_t *link_hdr;
+   odp_packet_hdr_t *next_hdr;
+   odp_packet_hdr_t *pkt_hdr = packet_hdr(pkt);
+   odp_packet_hdr_t *hdr = pkt_hdr;
+   seg_entry_t *seg;
+   uint32_t seg_idx = 0;
+   uint8_t idx = 0;
+   uint32_t seg_offset = 0;
+   int i, num_copy, segcount;
+   uint32_t len;
+
+   if (offset >= packet_len(pkt_hdr)) {
+   ODP_ERR("offset too large\n");
+   return ODP_PACKET_INVALID;
+   }
+
+   /* Allocate link segment */
+   if (packet_alloc(pkt_hdr->buf_hdr.pool_ptr, 0, 1, 1, &ref) != 1) {
+   ODP_ERR("segment alloc failed\n");


Comment:
`ODP_DBG()`

https://github.com/Linaro/odp/pull/170#discussion_r137959649
updated_at 2017-09-10 22:52:49


Re: [lng-odp] [PATCH v1] Packet references re-implementation

2017-09-10 Thread Github ODP bot
Bill Fischofer(Bill-Fischofer-Linaro) replied on github web page:

platform/linux-generic/odp_packet.c
line 1868
@@ -2458,6 +2089,184 @@ uint64_t odp_packet_seg_to_u64(odp_packet_seg_t hdl)
return _odp_pri(hdl);
 }
 
+static inline void buffer_ref_inc(odp_buffer_hdr_t *buf_hdr)
+{
+   /* First count increment after alloc */
+   if (odp_atomic_load_u32(&buf_hdr->ref_cnt) == 0)
+   odp_atomic_store_u32(&buf_hdr->ref_cnt, 2);
+   else
+   odp_atomic_inc_u32(&buf_hdr->ref_cnt);
+}
+
+static inline void packet_ref_inc(odp_packet_hdr_t *pkt_hdr)
+{
+   seg_entry_t *seg;
+   int i;
+   int seg_count = pkt_hdr->buf_hdr.segcount;
+   odp_packet_hdr_t *hdr = pkt_hdr;
+   uint8_t idx = 0;
+
+   for (i = 0; i < seg_count; i++) {
+   seg = seg_entry_next(&hdr, &idx);
+   buffer_ref_inc(seg->hdr);
+   }
+}
+
+odp_packet_t odp_packet_ref_static(odp_packet_t pkt)
+{
+   odp_packet_hdr_t *pkt_hdr = packet_hdr(pkt);
+
+   packet_ref_inc(pkt_hdr);
+   pkt_hdr->shared_len = packet_len(pkt_hdr);
+
+   return pkt;
+}
+
+odp_packet_t odp_packet_ref(odp_packet_t pkt, uint32_t offset)
+{
+   odp_packet_t ref;
+   odp_packet_hdr_t *link_hdr;
+   odp_packet_hdr_t *next_hdr;
+   odp_packet_hdr_t *pkt_hdr = packet_hdr(pkt);
+   odp_packet_hdr_t *hdr = pkt_hdr;
+   seg_entry_t *seg;
+   uint32_t seg_idx = 0;
+   uint8_t idx = 0;
+   uint32_t seg_offset = 0;
+   int i, num_copy, segcount;
+   uint32_t len;
+
+   if (offset >= packet_len(pkt_hdr)) {
+   ODP_ERR("offset too large\n");


Comment:
This should be `ODP_DBG()` rather than `ODP_ERR()`. `ODP_ERR()` should be 
reserved for fatal errors where ODP itself is going to abort.

https://github.com/Linaro/odp/pull/170#discussion_r137939439
updated_at 2017-09-10 22:52:49


Re: [lng-odp] [PATCH v1] Packet references re-implementation

2017-09-10 Thread Github ODP bot
Bill Fischofer(Bill-Fischofer-Linaro) replied on github web page:

platform/linux-generic/odp_packet.c
line 1948
@@ -2458,6 +2089,184 @@ uint64_t odp_packet_seg_to_u64(odp_packet_seg_t hdl)
return _odp_pri(hdl);
 }
 
+static inline void buffer_ref_inc(odp_buffer_hdr_t *buf_hdr)
+{
+   /* First count increment after alloc */
+   if (odp_atomic_load_u32(&buf_hdr->ref_cnt) == 0)
+   odp_atomic_store_u32(&buf_hdr->ref_cnt, 2);
+   else
+   odp_atomic_inc_u32(&buf_hdr->ref_cnt);
+}
+
+static inline void packet_ref_inc(odp_packet_hdr_t *pkt_hdr)
+{
+   seg_entry_t *seg;
+   int i;
+   int seg_count = pkt_hdr->buf_hdr.segcount;
+   odp_packet_hdr_t *hdr = pkt_hdr;
+   uint8_t idx = 0;
+
+   for (i = 0; i < seg_count; i++) {
+   seg = seg_entry_next(&hdr, &idx);
+   buffer_ref_inc(seg->hdr);
+   }
+}
+
+odp_packet_t odp_packet_ref_static(odp_packet_t pkt)
+{
+   odp_packet_hdr_t *pkt_hdr = packet_hdr(pkt);
+
+   packet_ref_inc(pkt_hdr);
+   pkt_hdr->shared_len = packet_len(pkt_hdr);
+
+   return pkt;
+}
+
+odp_packet_t odp_packet_ref(odp_packet_t pkt, uint32_t offset)
+{
+   odp_packet_t ref;
+   odp_packet_hdr_t *link_hdr;
+   odp_packet_hdr_t *next_hdr;
+   odp_packet_hdr_t *pkt_hdr = packet_hdr(pkt);
+   odp_packet_hdr_t *hdr = pkt_hdr;
+   seg_entry_t *seg;
+   uint32_t seg_idx = 0;
+   uint8_t idx = 0;
+   uint32_t seg_offset = 0;
+   int i, num_copy, segcount;
+   uint32_t len;
+
+   if (offset >= packet_len(pkt_hdr)) {
+   ODP_ERR("offset too large\n");
+   return ODP_PACKET_INVALID;
+   }
+
+   /* Allocate link segment */
+   if (packet_alloc(pkt_hdr->buf_hdr.pool_ptr, 0, 1, 1, &ref) != 1) {
+   ODP_ERR("segment alloc failed\n");
+   return ODP_PACKET_INVALID;
+   }
+
+   link_hdr = packet_hdr(ref);
+
+   seg_entry_find_offset(&hdr, &idx, &seg_offset, &seg_idx, offset);
+   num_copy = hdr->buf_hdr.num_seg - idx;
+   segcount = pkt_hdr->buf_hdr.segcount;
+
+   if (seg_is_link(hdr))
+   buffer_ref_inc((odp_buffer_hdr_t *)hdr);
+
+   seg = seg_entry_next(&hdr, &idx);
+   link_hdr->buf_hdr.num_seg = 1;
+   link_hdr->buf_hdr.seg[0].hdr  = seg->hdr;
+   link_hdr->buf_hdr.seg[0].data = seg->data + seg_offset;
+   link_hdr->buf_hdr.seg[0].len  = seg->len  - seg_offset;
+   buffer_ref_inc(seg->hdr);
+
+   for (i = 1; i < num_copy; i++) {
+   if (idx == 0 && seg_is_link(hdr))
+   buffer_ref_inc((odp_buffer_hdr_t *)hdr);
+
+   seg = seg_entry_next(&hdr, &idx);
+
+   link_hdr->buf_hdr.num_seg++;
+   link_hdr->buf_hdr.seg[i].hdr  = seg->hdr;
+   link_hdr->buf_hdr.seg[i].data = seg->data;
+   link_hdr->buf_hdr.seg[i].len  = seg->len;
+   buffer_ref_inc(seg->hdr);
+   }
+
+   next_hdr = hdr;
+
+   /* Increment ref count for remaining segments */
+   for (i = seg_idx + num_copy; i < segcount; i++) {
+   if (idx == 0 && seg_is_link(hdr))
+   buffer_ref_inc((odp_buffer_hdr_t *)hdr);
+
+   seg = seg_entry_next(&hdr, &idx);
+   buffer_ref_inc(seg->hdr);
+   }
+
+   len = pkt_hdr->frame_len - offset;
+   link_hdr->buf_hdr.next_seg  = next_hdr;
+   link_hdr->buf_hdr.last_seg  = pkt_hdr->buf_hdr.last_seg;
+   link_hdr->buf_hdr.segcount  = segcount - seg_idx;
+   link_hdr->frame_len = len;
+   link_hdr->tailroom  = pkt_hdr->tailroom;
+   link_hdr->shared_len = len;
+
+   /* Link header does not have headroom */
+   link_hdr->headroom  = 0;
+
+   if (pkt_hdr->shared_len < len)
+   pkt_hdr->shared_len = len;
+
+   return ref;
+
+}
+
+odp_packet_t odp_packet_ref_pkt(odp_packet_t pkt, uint32_t offset,
+   odp_packet_t hdr)
+{
+   odp_packet_t ref;
+   int ret;
+   odp_packet_hdr_t *new_hdr;
+   odp_packet_hdr_t *pkt_hdr = packet_hdr(pkt);
+   uint32_t len = packet_len(pkt_hdr);
+
+   ref = odp_packet_ref(pkt, offset);
+
+   if (ref == ODP_PACKET_INVALID) {
+   ODP_ERR("reference create failed\n");


Comment:
`ODP_DBG()`

https://github.com/Linaro/odp/pull/170#discussion_r137939614
updated_at 2017-09-10 22:52:49


Re: [lng-odp] [PATCH v1] Packet references re-implementation

2017-09-10 Thread Github ODP bot
Bill Fischofer(Bill-Fischofer-Linaro) replied on github web page:

platform/linux-generic/odp_packet.c
line 379
@@ -271,52 +336,83 @@ void packet_parse_reset(odp_packet_hdr_t *pkt_hdr)
pkt_hdr->p.l2_offset= 0;
pkt_hdr->p.l3_offset= ODP_PACKET_OFFSET_INVALID;
pkt_hdr->p.l4_offset= ODP_PACKET_OFFSET_INVALID;
+}
+
+static inline void link_segments(odp_packet_hdr_t *pkt_hdr[], int num)
+{
+   int cur, i;
+   odp_packet_hdr_t *hdr;
+   odp_packet_hdr_t *head = pkt_hdr[0];
+
+   cur = 0;
+
+   while (1) {
+   hdr = pkt_hdr[cur];
+
+   for (i = 0; i < CONFIG_PACKET_MAX_SEGS; i++) {
+   odp_buffer_hdr_t *buf_hdr;
+
+   buf_hdr = &pkt_hdr[cur]->buf_hdr;
+   hdr->buf_hdr.seg[i].hdr  = buf_hdr;
+   hdr->buf_hdr.seg[i].data = buf_hdr->base_data;
+   hdr->buf_hdr.seg[i].len  = BASE_LEN;
+   cur++;
 
-   /* Ensure dummy pkt hdrs used in I/O recv classification are valid */
-   pkt_hdr->ref_hdr = NULL;
+   if (cur == num) {
+   /* Last segment */
+   hdr->buf_hdr.num_seg   = i + 1;
+   hdr->buf_hdr.next_seg  = NULL;
+   head->buf_hdr.last_seg = &hdr->buf_hdr;
+   return;
+   }
+   }
+
+   hdr->buf_hdr.num_seg  = CONFIG_PACKET_MAX_SEGS;
+   hdr->buf_hdr.next_seg = pkt_hdr[cur];
+   }
 }
 
 static inline void init_segments(odp_packet_hdr_t *pkt_hdr[], int num)
 {
odp_packet_hdr_t *hdr;
-   int i;
 
/* First segment is the packet descriptor */
hdr = pkt_hdr[0];
 
hdr->buf_hdr.seg[0].data = hdr->buf_hdr.base_data;
hdr->buf_hdr.seg[0].len  = BASE_LEN;
-   packet_ref_count_set(hdr, 1);
 
/* Link segments */
if (CONFIG_PACKET_MAX_SEGS != 1) {
hdr->buf_hdr.segcount = num;
 
+   /* Defaults for single segment packet */
+   hdr->buf_hdr.num_seg  = 1;
+   hdr->buf_hdr.next_seg = NULL;
+   hdr->buf_hdr.last_seg = &hdr->buf_hdr;
+
if (odp_unlikely(num > 1)) {
-   for (i = 1; i < num; i++) {
-   odp_buffer_hdr_t *buf_hdr;
-
-   packet_ref_count_set(pkt_hdr[i], 1);
-   buf_hdr = &pkt_hdr[i]->buf_hdr;
-   hdr->buf_hdr.seg[i].hdr  = buf_hdr;
-   hdr->buf_hdr.seg[i].data = buf_hdr->base_data;
-   hdr->buf_hdr.seg[i].len  = BASE_LEN;
-   }
+   link_segments(pkt_hdr, num);
+


Comment:
Since you have to do the `num > 1` test anyway, wouldn't it be better to use an 
`else` here to avoid the duplicate init of segment 0 for multi-segment inits?

```
if (CONFIG_PACKET_MAX_SEGS != 1) {
hdr->buf_hdr.segcount = num;

if (odp_unlikely(num > 1)) {
link_segments(pkt_hdr, num);
} else {
hdr->buf_hdr.num_seg = 1;
hdr->buf_hdr.next_seg = NULL;
hdr->buf_hdr.last_seg = &hdr->buf_hdr;
}
}
```

https://github.com/Linaro/odp/pull/170#discussion_r137936596
updated_at 2017-09-10 22:52:49


Re: [lng-odp] [PATCH v1] Packet references re-implementation

2017-09-10 Thread Github ODP bot
Bill Fischofer(Bill-Fischofer-Linaro) replied on github web page:

platform/linux-generic/odp_packet.c
line 142
@@ -88,19 +74,112 @@ static inline odp_buffer_t packet_to_buffer(odp_packet_t 
pkt)
return (odp_buffer_t)pkt;
 }
 
+static inline seg_entry_t *seg_entry(odp_packet_hdr_t *hdr,
+uint32_t seg_idx)
+{
+   uint32_t idx = 0;
+   uint8_t num_seg = hdr->buf_hdr.num_seg;
+
+   while (odp_unlikely(idx + num_seg - 1 < seg_idx)) {
+   idx += num_seg;
+   hdr = hdr->buf_hdr.next_seg;
+   num_seg = hdr->buf_hdr.num_seg;
+   }
+
+   idx = seg_idx - idx;
+
+   return &hdr->buf_hdr.seg[idx];
+}
+
+static inline void seg_entry_find_idx(odp_packet_hdr_t **p_hdr,
+ uint8_t *p_idx,
+ uint32_t find_idx)
+{
+   odp_packet_hdr_t *hdr = *p_hdr;
+   uint32_t idx = 0;
+   uint8_t num_seg = hdr->buf_hdr.num_seg;
+
+   while (odp_unlikely(idx + num_seg - 1 < find_idx)) {
+   idx += num_seg;
+   hdr = hdr->buf_hdr.next_seg;
+   num_seg = hdr->buf_hdr.num_seg;
+   }
+
+   idx = find_idx - idx;
+   *p_hdr = hdr;
+   *p_idx = idx;
+}
+
+static inline seg_entry_t *seg_entry_next(odp_packet_hdr_t **cur_hdr,
+ uint8_t *cur_idx)
+{
+   odp_packet_hdr_t *hdr = *cur_hdr;
+   uint8_t idx = *cur_idx;
+   uint8_t num_seg = hdr->buf_hdr.num_seg;
+
+   if (idx == num_seg - 1) {
+   *cur_hdr = hdr->buf_hdr.next_seg;
+   *cur_idx = 0;
+   } else {
+   *cur_idx = idx + 1;
+   }
+
+   return &hdr->buf_hdr.seg[idx];
+}
+
+static inline void seg_entry_find_offset(odp_packet_hdr_t **p_hdr,
+uint8_t *p_idx,
+uint32_t *seg_offset,
+uint32_t *seg_idx,
+uint32_t offset)
+{
+   int i;
+   odp_packet_hdr_t *hdr, *cur_hdr;
+   uint8_t idx, cur_idx;
+   seg_entry_t *seg = NULL;
+   uint32_t seg_start = 0, seg_end = 0;
+   int seg_count;
+
+   hdr = *p_hdr;
+   cur_hdr = hdr;
+   idx = 0;
+   cur_idx = 0;
+   seg_count = hdr->buf_hdr.segcount;
+
+   for (i = 0; i < seg_count; i++) {
+   cur_hdr = hdr;
+   cur_idx = idx;
+   seg = seg_entry_next(&hdr, &idx);
+   seg_end += seg->len;
+
+   if (odp_likely(offset < seg_end))
+   break;
+
+   seg_start = seg_end;
+   }
+
+   *p_hdr = cur_hdr;
+   *p_idx = cur_idx;
+   *seg_offset = offset - seg_start;
+   *seg_idx = i;
+}
+
 static inline uint32_t packet_seg_len(odp_packet_hdr_t *pkt_hdr,
  uint32_t seg_idx)
 {
-   return pkt_hdr->buf_hdr.seg[seg_idx].len;
+   seg_entry_t *seg = seg_entry(pkt_hdr, seg_idx);
+
+   return seg->len;
 }
 
-static inline uint8_t *packet_seg_data(odp_packet_hdr_t *pkt_hdr,
-  uint32_t seg_idx)
+static inline void *packet_seg_data(odp_packet_hdr_t *pkt_hdr, uint32_t 
seg_idx)
 {
-   return pkt_hdr->buf_hdr.seg[seg_idx].data;
+   seg_entry_t *seg = seg_entry(pkt_hdr, seg_idx);
+
+   return seg->data;
 }
 
-static inline uint32_t packet_last_seg(odp_packet_hdr_t *pkt_hdr)
+static inline int packet_last_seg(odp_packet_hdr_t *pkt_hdr)


Comment:
Since `buf_hdr.segcount` is of type `uint16_t`, why is the return type changed
from `uint32_t` to `int`?

https://github.com/Linaro/odp/pull/170#discussion_r137936214
updated_at 2017-09-10 22:52:49


Re: [lng-odp] [PATCH v1] Packet references re-implementation

2017-09-10 Thread Github ODP bot
Bill Fischofer(Bill-Fischofer-Linaro) replied on github web page:

platform/linux-generic/odp_packet.c
line 184
@@ -127,26 +199,27 @@ static inline void *packet_data(odp_packet_hdr_t *pkt_hdr)
 
 static inline void *packet_tail(odp_packet_hdr_t *pkt_hdr)
 {
-   int last = packet_last_seg(pkt_hdr);
-   uint32_t seg_len = pkt_hdr->buf_hdr.seg[last].len;
+   seg_entry_t *last_seg = seg_entry_last(pkt_hdr);
 
-   return pkt_hdr->buf_hdr.seg[last].data + seg_len;
+   return last_seg->data + last_seg->len;
 }
 
-static inline uint32_t seg_headroom(odp_packet_hdr_t *pkt_hdr, int seg)
+static inline uint32_t seg_headroom(odp_packet_hdr_t *pkt_hdr, int seg_idx)
 {
-   odp_buffer_hdr_t *hdr = pkt_hdr->buf_hdr.seg[seg].hdr;
+   seg_entry_t *seg = seg_entry(pkt_hdr, seg_idx);
+   odp_buffer_hdr_t *hdr = seg->hdr;
uint8_t *base = hdr->base_data;
-   uint8_t *head = pkt_hdr->buf_hdr.seg[seg].data;
+   uint8_t *head = seg->data;
 
return CONFIG_PACKET_HEADROOM + (head - base);


Comment:
Since PR #152 is making per-pool headroom configurable, presumably this needs 
to be modified to account for this.  I'd recommend we merge this PR first and 
incorporate those changes into that PR, however.

https://github.com/Linaro/odp/pull/170#discussion_r137936297
updated_at 2017-09-10 22:52:49


Re: [lng-odp] [PATCH v1] Packet references re-implementation

2017-09-10 Thread Github ODP bot
Bill Fischofer(Bill-Fischofer-Linaro) replied on github web page:

platform/linux-generic/odp_packet.c
line 83
@@ -88,19 +74,112 @@ static inline odp_buffer_t packet_to_buffer(odp_packet_t 
pkt)
return (odp_buffer_t)pkt;
 }
 
+static inline seg_entry_t *seg_entry(odp_packet_hdr_t *hdr,
+uint32_t seg_idx)
+{
+   uint32_t idx = 0;
+   uint8_t num_seg = hdr->buf_hdr.num_seg;
+
+   while (odp_unlikely(idx + num_seg - 1 < seg_idx)) {
+   idx += num_seg;
+   hdr = hdr->buf_hdr.next_seg;
+   num_seg = hdr->buf_hdr.num_seg;
+   }
+
+   idx = seg_idx - idx;
+
+   return &hdr->buf_hdr.seg[idx];
+}
+
+static inline void seg_entry_find_idx(odp_packet_hdr_t **p_hdr,
+ uint8_t *p_idx,
+ uint32_t find_idx)
+{
+   odp_packet_hdr_t *hdr = *p_hdr;
+   uint32_t idx = 0;
+   uint8_t num_seg = hdr->buf_hdr.num_seg;
+
+   while (odp_unlikely(idx + num_seg - 1 < find_idx)) {
+   idx += num_seg;
+   hdr = hdr->buf_hdr.next_seg;
+   num_seg = hdr->buf_hdr.num_seg;
+   }
+
+   idx = find_idx - idx;
+   *p_hdr = hdr;
+   *p_idx = idx;
+}
+
+static inline seg_entry_t *seg_entry_next(odp_packet_hdr_t **cur_hdr,
+ uint8_t *cur_idx)
+{
+   odp_packet_hdr_t *hdr = *cur_hdr;
+   uint8_t idx = *cur_idx;
+   uint8_t num_seg = hdr->buf_hdr.num_seg;
+
+   if (idx == num_seg - 1) {
+   *cur_hdr = hdr->buf_hdr.next_seg;
+   *cur_idx = 0;
+   } else {
+   *cur_idx = idx + 1;
+   }
+
+   return &hdr->buf_hdr.seg[idx];
+}


Comment:
It wasn't immediately obvious that the intent of this routine is to return the 
current `seg_entry_t` and step to the next one. A comment or two might help 
here.

https://github.com/Linaro/odp/pull/170#discussion_r137916598
updated_at 2017-09-10 22:52:49


Re: [lng-odp] [PATCH v1] Packet references re-implementation

2017-09-10 Thread Github ODP bot
Bill Fischofer(Bill-Fischofer-Linaro) replied on github web page:

test/common_plat/validation/api/packet/packet.c
line 122
@@ -2226,6 +2319,14 @@ void packet_test_ref(void)
if (odp_packet_has_ref(base_pkt) == 1) {
/* CU_ASSERT needs braces */


Comment:
This comment is now superfluous since braces are being used anyway.

https://github.com/Linaro/odp/pull/170#discussion_r137915628
updated_at 2017-09-10 22:52:49


[lng-odp] [PATCH CLOUD-DEV v3 2/3] linux-gen: schedule: move schedulers into modules

2017-09-10 Thread Github ODP bot
From: Brian Brooks 

Signed-off-by: Brian Brooks 
Reviewed-by: Honnappa Nagarahalli 
Reviewed-by: Kevin Wang 
---
/** Email created from pull request 166 (brbrooks:cloud-dev)
 ** https://github.com/Linaro/odp/pull/166
 ** Patch: https://github.com/Linaro/odp/pull/166.patch
 ** Base sha: 344fe21a06ef4273f34c4441edd0caef876517d9
 ** Merge commit sha: 8ff1431b3b5fb48cac483a36ca85d0132470276f
 **/
 platform/linux-dpdk/Makefile.am|  18 +-
 platform/linux-dpdk/odp_init.c |   8 +-
 platform/linux-generic/Makefile.am |  21 +-
 platform/linux-generic/include/odp_internal.h  |   5 +
 platform/linux-generic/include/odp_schedule_if.h   |  39 +---
 platform/linux-generic/odp_init.c  |   8 +-
 platform/linux-generic/odp_schedule_if.c   | 121 +-
 .../{odp_schedule.c => schedule/generic.c} |  28 ++-
 .../{odp_schedule_iquery.c => schedule/iquery.c}   |  22 +-
 .../scalable.c}|  22 +-
 .../scalable_ordered.c}|   0
 .../{odp_schedule_sp.c => schedule/sp.c}   |  22 +-
 platform/linux-generic/schedule/subsystem.c| 253 +
 13 files changed, 374 insertions(+), 193 deletions(-)
 rename platform/linux-generic/{odp_schedule.c => schedule/generic.c} (98%)
 rename platform/linux-generic/{odp_schedule_iquery.c => schedule/iquery.c} 
(98%)
 rename platform/linux-generic/{odp_schedule_scalable.c => schedule/scalable.c} 
(99%)
 rename platform/linux-generic/{odp_schedule_scalable_ordered.c => 
schedule/scalable_ordered.c} (100%)
 rename platform/linux-generic/{odp_schedule_sp.c => schedule/sp.c} (97%)

diff --git a/platform/linux-dpdk/Makefile.am b/platform/linux-dpdk/Makefile.am
index 8b94b7d2e..eb5c7c38a 100644
--- a/platform/linux-dpdk/Makefile.am
+++ b/platform/linux-dpdk/Makefile.am
@@ -252,9 +252,12 @@ __LIB__libodp_dpdk_la_SOURCES = \
   ../linux-generic/odp_rwlock.c \
   ../linux-generic/odp_rwlock_recursive.c \
   ../linux-generic/pool/subsystem.c \
-  ../linux-generic/odp_schedule.c \
   ../linux-generic/odp_schedule_if.c \
-  ../linux-generic/odp_schedule_iquery.c \
+  ../linux-generic/schedule/generic.c \
+  ../linux-generic/schedule/iquery.c \
+  ../linux-generic/schedule/scalable.c \
+  ../linux-generic/schedule/scalable_ordered.c \
+  ../linux-generic/schedule/sp.c \
   ../linux-generic/schedule/subsystem.c \
   ../linux-generic/odp_shared_memory.c \
   ../linux-generic/odp_sorted_list.c \
@@ -297,6 +300,17 @@ __LIB__libodp_dpdk_la_SOURCES += arch/x86/cpu_flags.c \
 endif
 
 pool/dpdk.lo: AM_CFLAGS += -DIM_ACTIVE_MODULE
+if ODP_SCHEDULE_SCALABLE
+schedule/scalable.lo: AM_CFLAGS += -DIM_ACTIVE_MODULE
+else
+schedule/generic.lo: AM_CFLAGS += -DIM_ACTIVE_MODULE
+endif
+if ODP_SCHEDULE_SP
+schedule/sp.lo: AM_CFLAGS += -DIM_ACTIVE_MODULE
+endif
+if ODP_SCHEDULE_IQUERY
+schedule/iquery.lo: AM_CFLAGS += -DIM_ACTIVE_MODULE
+endif
 
 # Build modular framework into odp-linux library
 modularframeworkdir = $(top_srcdir)/frameworks/modular
diff --git a/platform/linux-dpdk/odp_init.c b/platform/linux-dpdk/odp_init.c
index 96c535ba2..cd7abce65 100644
--- a/platform/linux-dpdk/odp_init.c
+++ b/platform/linux-dpdk/odp_init.c
@@ -358,7 +358,7 @@ int odp_init_global(odp_instance_t *instance,
}
stage = QUEUE_INIT;
 
-   if (sched_fn->init_global()) {
+   if (odp_schedule_init_global()) {
ODP_ERR("ODP schedule init failed.\n");
goto init_failed;
}
@@ -488,7 +488,7 @@ int _odp_term_global(enum init_stage stage)
/* Fall through */
 
case SCHED_INIT:
-   if (sched_fn->term_global()) {
+   if (odp_schedule_term_global()) {
ODP_ERR("ODP schedule term failed.\n");
rc = -1;
}
@@ -590,7 +590,7 @@ int odp_init_local(odp_instance_t instance, 
odp_thread_type_t thr_type)
}
stage = POOL_INIT;
 
-   if (sched_fn->init_local()) {
+   if (odp_schedule_init_local()) {
ODP_ERR("ODP schedule local init failed.\n");
goto init_fail;
}
@@ -617,7 +617,7 @@ int _odp_term_local(enum init_stage stage)
case ALL_INIT:
 
case SCHED_INIT:
-   if (sched_fn->term_local()) {
+   if (odp_schedule_term_local()) {
ODP_ERR("ODP schedule local term failed.\n");
rc = -1;
}
diff --git a/platform/linux-generic/Makefile.am 
b/platform/linux-generic/Makefile.am
index 78e957609..cbbefa4e6 100644
--- a/platform/linux-gener

[lng-odp] [PATCH CLOUD-DEV v3 3/3] linux-gen: cleanup internal schedule interface

2017-09-10 Thread Github ODP bot
From: Brian Brooks 

Signed-off-by: Brian Brooks 
Reviewed-by: Ola Liljedahl 
---
/** Email created from pull request 166 (brbrooks:cloud-dev)
 ** https://github.com/Linaro/odp/pull/166
 ** Patch: https://github.com/Linaro/odp/pull/166.patch
 ** Base sha: 344fe21a06ef4273f34c4441edd0caef876517d9
 ** Merge commit sha: 8ff1431b3b5fb48cac483a36ca85d0132470276f
 **/
 .../linux-dpdk/include/odp_packet_io_internal.h|  3 +
 .../linux-generic/include/odp_packet_io_internal.h |  3 +
 .../linux-generic/include/odp_queue_internal.h |  5 ++
 platform/linux-generic/include/odp_schedule_if.h   | 65 ++
 platform/linux-generic/odp_packet_io.c |  4 +-
 platform/linux-generic/odp_queue.c |  8 +--
 platform/linux-generic/schedule/generic.c  | 22 +++-
 platform/linux-generic/schedule/iquery.c   | 20 +++
 platform/linux-generic/schedule/scalable.c | 10 ++--
 platform/linux-generic/schedule/sp.c   | 18 +++---
 10 files changed, 67 insertions(+), 91 deletions(-)

diff --git a/platform/linux-dpdk/include/odp_packet_io_internal.h 
b/platform/linux-dpdk/include/odp_packet_io_internal.h
index 8e158dc99..14b6a8d3b 100644
--- a/platform/linux-dpdk/include/odp_packet_io_internal.h
+++ b/platform/linux-dpdk/include/odp_packet_io_internal.h
@@ -207,6 +207,9 @@ extern const pktio_if_ops_t loopback_pktio_ops;
 extern const pktio_if_ops_t dpdk_pktio_ops;
 extern const pktio_if_ops_t * const pktio_if_ops[];
 
+int pktin_poll(int pktio_index, int num_queue, int index[]);
+void pktio_stop_finalize(int pktio_index);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/platform/linux-generic/include/odp_packet_io_internal.h 
b/platform/linux-generic/include/odp_packet_io_internal.h
index 1a4e345f5..dc049f467 100644
--- a/platform/linux-generic/include/odp_packet_io_internal.h
+++ b/platform/linux-generic/include/odp_packet_io_internal.h
@@ -271,6 +271,9 @@ int sock_stats_fd(pktio_entry_t *pktio_entry,
  int fd);
 int sock_stats_reset_fd(pktio_entry_t *pktio_entry, int fd);
 
+int pktin_poll(int pktio_index, int num_queue, int index[]);
+void pktio_stop_finalize(int pktio_index);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/platform/linux-generic/include/odp_queue_internal.h 
b/platform/linux-generic/include/odp_queue_internal.h
index dd846d592..a1b2a73da 100644
--- a/platform/linux-generic/include/odp_queue_internal.h
+++ b/platform/linux-generic/include/odp_queue_internal.h
@@ -76,6 +76,11 @@ static inline queue_t qentry_to_int(queue_entry_t *qentry)
return (queue_t)(qentry);
 }
 
+odp_queue_t queue_handle(uint32_t queue_index);
+void queue_destroy_finalize(uint32_t queue_index);
+int queue_idx_deq_multi(uint32_t queue_index, odp_event_t ev[], int num);
+int queue_empty(uint32_t queue_index);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/platform/linux-generic/include/odp_schedule_if.h 
b/platform/linux-generic/include/odp_schedule_if.h
index 2bcf23f26..8f39eec13 100644
--- a/platform/linux-generic/include/odp_schedule_if.h
+++ b/platform/linux-generic/include/odp_schedule_if.h
@@ -7,64 +7,31 @@
 #ifndef ODP_SCHEDULE_IF_H_
 #define ODP_SCHEDULE_IF_H_
 
-#ifdef __cplusplus
-extern "C" {
-#endif
-
 #include 
 #include 
 #include 
 
-typedef void (*schedule_pktio_start_fn_t)(int pktio_index, int num_in_queue,
- int in_queue_idx[]);
-typedef int (*schedule_thr_add_fn_t)(odp_schedule_group_t group, int thr);
-typedef int (*schedule_thr_rem_fn_t)(odp_schedule_group_t group, int thr);
-typedef int (*schedule_num_grps_fn_t)(void);
-typedef int (*schedule_init_queue_fn_t)(
-   uint32_t queue_index, const odp_schedule_param_t *sched_param);
-typedef void (*schedule_destroy_queue_fn_t)(uint32_t queue_index);
-typedef int (*schedule_sched_queue_fn_t)(uint32_t queue_index);
-typedef int (*schedule_unsched_queue_fn_t)(uint32_t queue_index);
-typedef int (*schedule_ord_enq_multi_fn_t)(queue_t q_int,
-  void *buf_hdr[], int num, int *ret);
-typedef void (*schedule_order_lock_fn_t)(void);
-typedef void (*schedule_order_unlock_fn_t)(void);
-typedef unsigned (*schedule_max_ordered_locks_fn_t)(void);
-typedef void (*schedule_save_context_fn_t)(uint32_t queue_index);
-
 typedef struct schedule_fn_t {
-   int status_sync;
-   schedule_pktio_start_fn_t   pktio_start;
-   schedule_thr_add_fn_t   thr_add;
-   schedule_thr_rem_fn_t   thr_rem;
-   schedule_num_grps_fn_t  num_grps;
-   schedule_init_queue_fn_tinit_queue;
-   schedule_destroy_queue_fn_t destroy_queue;
-   schedule_sched_queue_fn_t   sched_queue;
-   schedule_ord_enq_multi_fn_t ord_enq_multi;
-   schedule_order_lock_fn_torder_lock;
-   schedule_order_unlock_fn_t  order_unlock;
-   schedule_max_ordered_locks_fn_t max_ordered_locks;
+   int status_sync;
+   void (*pktio_start)(int pktio_index, int num_in_queue,
+

[lng-odp] [PATCH CLOUD-DEV v3 0/3] Modular Framework: schedulers

2017-09-10 Thread Github ODP bot


github
/** Email created from pull request 166 (brbrooks:cloud-dev)
 ** https://github.com/Linaro/odp/pull/166
 ** Patch: https://github.com/Linaro/odp/pull/166.patch
 ** Base sha: 344fe21a06ef4273f34c4441edd0caef876517d9
 ** Merge commit sha: 8ff1431b3b5fb48cac483a36ca85d0132470276f
 **/
/github

checkpatch.pl
total: 0 errors, 0 warnings, 0 checks, 124 lines checked


to_send-p-000.patch has no obvious style problems and is ready for submission.
WARNING: externs should be avoided in .c files
#297: FILE: platform/linux-generic/odp_schedule_if.c:11:
+extern const schedule_fn_t schedule_scalable_fn;

WARNING: externs should be avoided in .c files
#298: FILE: platform/linux-generic/odp_schedule_if.c:12:
+extern const schedule_fn_t schedule_default_fn;

total: 0 errors, 2 warnings, 0 checks, 685 lines checked


to_send-p-001.patch has style problems, please review.

If any of these errors are false positives, please report
them to the maintainer, see CHECKPATCH in MAINTAINERS.
total: 0 errors, 0 warnings, 0 checks, 371 lines checked


to_send-p-002.patch has no obvious style problems and is ready for submission.
/checkpatch.pl


[lng-odp] [PATCH CLOUD-DEV v3 1/3] linux-gen: schedule: add subsystem

2017-09-10 Thread Github ODP bot
From: Brian Brooks 

Signed-off-by: Brian Brooks 
Reviewed-by: Honnappa Nagarahalli 
Reviewed-by: Yi He 
---
/** Email created from pull request 166 (brbrooks:cloud-dev)
 ** https://github.com/Linaro/odp/pull/166
 ** Patch: https://github.com/Linaro/odp/pull/166.patch
 ** Base sha: 344fe21a06ef4273f34c4441edd0caef876517d9
 ** Merge commit sha: 8ff1431b3b5fb48cac483a36ca85d0132470276f
 **/
 platform/linux-dpdk/Makefile.am|  2 +
 platform/linux-generic/Makefile.am |  2 +
 .../linux-generic/include/odp_schedule_subsystem.h | 77 ++
 platform/linux-generic/schedule/subsystem.c| 19 ++
 4 files changed, 100 insertions(+)
 create mode 100644 platform/linux-generic/include/odp_schedule_subsystem.h
 create mode 100644 platform/linux-generic/schedule/subsystem.c

diff --git a/platform/linux-dpdk/Makefile.am b/platform/linux-dpdk/Makefile.am
index 6e7f0b1dd..8b94b7d2e 100644
--- a/platform/linux-dpdk/Makefile.am
+++ b/platform/linux-dpdk/Makefile.am
@@ -203,6 +203,7 @@ noinst_HEADERS = \
  ${top_srcdir}/platform/linux-generic/include/odp_queue_if.h \
  
${top_srcdir}/platform/linux-generic/include/odp_ring_internal.h \
  
${top_srcdir}/platform/linux-generic/include/odp_schedule_if.h \
+ 
${top_srcdir}/platform/linux-generic/include/odp_schedule_subsystem.h \
  
${top_srcdir}/platform/linux-generic/include/odp_sorted_list_internal.h \
  
${top_srcdir}/platform/linux-generic/include/odp_shm_internal.h \
  
${top_srcdir}/platform/linux-generic/include/odp_time_internal.h \
@@ -254,6 +255,7 @@ __LIB__libodp_dpdk_la_SOURCES = \
   ../linux-generic/odp_schedule.c \
   ../linux-generic/odp_schedule_if.c \
   ../linux-generic/odp_schedule_iquery.c \
+  ../linux-generic/schedule/subsystem.c \
   ../linux-generic/odp_shared_memory.c \
   ../linux-generic/odp_sorted_list.c \
   ../linux-generic/odp_spinlock.c \
diff --git a/platform/linux-generic/Makefile.am 
b/platform/linux-generic/Makefile.am
index 3e3e0c221..78e957609 100644
--- a/platform/linux-generic/Makefile.am
+++ b/platform/linux-generic/Makefile.am
@@ -203,6 +203,7 @@ noinst_HEADERS = \
  ${srcdir}/include/odp_schedule_scalable.h \
  ${srcdir}/include/odp_schedule_scalable_config.h \
  ${srcdir}/include/odp_schedule_scalable_ordered.h \
+ ${srcdir}/include/odp_schedule_subsystem.h \
  ${srcdir}/include/odp_sorted_list_internal.h \
  ${srcdir}/include/odp_shm_internal.h \
  ${srcdir}/include/odp_time_internal.h \
@@ -286,6 +287,7 @@ __LIB__libodp_linux_la_SOURCES = \
   odp_schedule_iquery.c \
   odp_schedule_scalable.c \
   odp_schedule_scalable_ordered.c \
+  schedule/subsystem.c \
   odp_shared_memory.c \
   odp_sorted_list.c \
   odp_spinlock.c \
diff --git a/platform/linux-generic/include/odp_schedule_subsystem.h 
b/platform/linux-generic/include/odp_schedule_subsystem.h
new file mode 100644
index 0..c3edef631
--- /dev/null
+++ b/platform/linux-generic/include/odp_schedule_subsystem.h
@@ -0,0 +1,77 @@
+/* Copyright (c) 2017, ARM Limited. All rights reserved.
+ *
+ * Copyright (c) 2017, Linaro Limited
+ * All rights reserved.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause
+ */
+
+#ifndef ODP_SCHEDULE_SUBSYSTEM_H_
+#define ODP_SCHEDULE_SUBSYSTEM_H_
+
+/* API header files */
+#include 
+#include 
+
+/* Internal header files */
+#include 
+
+#define SCHEDULE_SUBSYSTEM_VERSION 0x0001UL
+
+ODP_SUBSYSTEM_DECLARE(schedule);
+
+ODP_SUBSYSTEM_API(schedule, uint64_t, wait_time, uint64_t ns);
+ODP_SUBSYSTEM_API(schedule, odp_event_t, schedule, odp_queue_t *from,
+ uint64_t wait);
+ODP_SUBSYSTEM_API(schedule, int, schedule_multi, odp_queue_t *from,
+ uint64_t wait, odp_event_t events[], int num);
+ODP_SUBSYSTEM_API(schedule, void, schedule_pause, void);
+ODP_SUBSYSTEM_API(schedule, void, schedule_resume, void);
+ODP_SUBSYSTEM_API(schedule, void, schedule_release_atomic, void);
+ODP_SUBSYSTEM_API(schedule, void, schedule_release_ordered, void);
+ODP_SUBSYSTEM_API(schedule, void, schedule_prefetch, int num);
+ODP_SUBSYSTEM_API(schedule, int, schedule_num_prio, void);
+ODP_SUBSYSTEM_API(schedule, odp_schedule_group_t, schedule_group_create,
+ const char *name, const odp_thrmask_t *mask);
+ODP_SUBSYSTEM_API(schedule, int, schedule_group_destroy,
+ odp_schedule_group_t group);
+ODP_SUBSYSTEM_API(schedule, odp_schedule_group_t, schedule_group_lookup,
+ const char *name);
+ODP_

Re: [lng-odp] Supporting ODP_PKTIO_OP_MT_SAFE

2017-09-10 Thread Bill Fischofer
We can discuss this during tomorrow's ARCH call, and probably further
at Connect. MT Safe is the default behavior, and its opposite ("MT
Unsafe") was added as a potential optimization for applications that
assure implementations that only a single thread will be polling a
PktIn queue or adding to a PktOut queue.

Ideally, we'd like to retire all application I/O polling and use the
scheduler exclusively, but that's a longer-term goal. For now we have
both.

On Sun, Sep 10, 2017 at 8:11 AM, Honnappa Nagarahalli
 wrote:
> Hi,
> I think there are 2 ways in which pkt I/O will be used:
>
> 1) Polling method - in which one pkt I/O will be created for each
> receive worker thread. In this case, support for ODP_PKTIO_OP_MT_SAFE
> is not required.
> 2) Event method - the scheduler is used to receive packets. In this
> case the scheduler provides exclusive access to a pkt I/O. Again, in
> this case support for ODP_PKTIO_OP_MT_SAFE is not required.
>
> I am thinking that, for high-throughput packet I/Os such as dpdk or
> ODP native drivers (in the future), we do not need to support
> ODP_PKTIO_OP_MT_SAFE. The odp_pktio_open API call can return an error
> if ODP_PKTIO_OP_MT_SAFE is requested.
>
> We could keep the support for ODP_PKTIO_OP_MT_SAFE for other pkt I/Os.
>
> This will save cache space for the locks as well as instruction cycles.
>
> Thank you,
> Honnappa
