Ping

2014-11-11 23:58, Thomas Monjalon:
> Is there anyone interested in KNI who could review this patch, please?
> 
> 
> 2014-07-23 12:15, Hemant Agrawal:
> > The current implementation of rte_kni_rx_burst polls the fifo for buffers.
> > Irrespective of whether any buffers were received, it then allocates new
> > mbufs and tries to put them into alloc_q; any mbufs that cannot be added
> > to alloc_q are freed again. This wastes a lot of CPU cycles on allocating
> > and freeing buffers whenever alloc_q is full.
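
For context, kni_allocate_mbufs() roughly does the following. This is a
simplified sketch based on the description above; the names
kni->pktmbuf_pool, kni->alloc_q, kni_fifo_put() and MAX_MBUF_BURST_NUM are
taken from lib/librte_kni/rte_kni.c, and the details may differ from the
exact code:

	static void
	kni_allocate_mbufs(struct rte_kni *kni)
	{
		unsigned i, put;
		struct rte_mbuf *pkts[MAX_MBUF_BURST_NUM];

		/* Unconditionally allocate a full burst of mbufs ... */
		for (i = 0; i < MAX_MBUF_BURST_NUM; i++) {
			pkts[i] = rte_pktmbuf_alloc(kni->pktmbuf_pool);
			if (pkts[i] == NULL)
				break;
		}

		/* ... try to enqueue them into alloc_q ... */
		put = kni_fifo_put(kni->alloc_q, (void **)pkts, i);

		/* ... and free whatever did not fit; when alloc_q is already
		 * full, every mbuf allocated above is freed again right away. */
		for (; put < i; put++)
			rte_pktmbuf_free(pkts[put]);
	}

With the patch below, this helper runs once up front in rte_kni_alloc() and
then only when rte_kni_rx_burst() has actually pulled buffers off tx_q, so
the allocate-then-free cycle above no longer runs on every poll of an idle
interface.
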
> > 
> > The logic has been changed to:
> > 1. Initially allocate and add a burst of buffers (burst size) to alloc_q
> >    when the KNI interface is created.
> > 2. Afterwards, add buffers to alloc_q only when buffers have actually
> >    been pulled out.
> > 
> > Signed-off-by: Hemant Agrawal <Hemant at freescale.com>
> > ---
> >  lib/librte_kni/rte_kni.c |    8 ++++++--
> >  1 file changed, 6 insertions(+), 2 deletions(-)
> > 
> > diff --git a/lib/librte_kni/rte_kni.c b/lib/librte_kni/rte_kni.c
> > index 76feef4..01e85f8 100644
> > --- a/lib/librte_kni/rte_kni.c
> > +++ b/lib/librte_kni/rte_kni.c
> > @@ -263,6 +263,9 @@ rte_kni_alloc(struct rte_mempool *pktmbuf_pool,
> >  
> >     ctx->in_use = 1;
> >  
> > +   /* Allocate mbufs and then put them into alloc_q */
> > +   kni_allocate_mbufs(ctx);
> > +
> >     return ctx;
> >  
> >  fail:
> > @@ -369,8 +372,9 @@ rte_kni_rx_burst(struct rte_kni *kni, struct rte_mbuf **mbufs, unsigned num)
> >  {
> >     unsigned ret = kni_fifo_get(kni->tx_q, (void **)mbufs, num);
> >  
> > -   /* Allocate mbufs and then put them into alloc_q */
> > -   kni_allocate_mbufs(kni);
> > +   /* If buffers removed, allocate mbufs and then put them into alloc_q */
> > +   if(ret)
> > +           kni_allocate_mbufs(kni);
> >  
> >     return ret;
> >  }
