[dpdk-dev] [PATCH v5 1/3] mempool: support external handler

2016-06-01 Thread Jerin Jacob
On Wed, Jun 01, 2016 at 11:46:20AM +0100, Hunt, David wrote:
> 
> 
> On 5/31/2016 10:11 PM, Jerin Jacob wrote:
> > On Tue, May 31, 2016 at 10:41:00PM +0200, Olivier MATZ wrote:
> > > Hi,
> > > 
> > > On 05/31/2016 06:03 PM, Jerin Jacob wrote:
> > > > On Tue, May 31, 2016 at 04:37:02PM +0100, Hunt, David wrote:
> > > > > 
> > > > > On 5/31/2016 9:53 AM, Jerin Jacob wrote:
> > > > > > On Mon, May 30, 2016 at 12:27:26PM +0100, Hunt, David wrote:
> > > > > > > New mempool handlers will use rte_mempool_create_empty(),
> > > > > > > rte_mempool_set_handler(),
> > > > > > > then rte_mempool_populate_*(). These three functions are new to 
> > > > > > > this
> > > > > > > release, so no problem
> > > > > > Having separate APIs for external pool-manager creation is
> > > > > > worrisome from an application perspective. Is it possible to have
> > > > > > rte_mempool_[xmem]_create for both the external and the existing
> > > > > > SW pool managers, and make rte_mempool_create_empty and
> > > > > > rte_mempool_populate_* internal functions?
> > > > > > 
> > > > > > IMO, we can do that by selecting a specific rte_mempool_set_handler()
> > > > > > based on a _flags_ encoding, something like below:
> > > > > > 
> > > > > > bit 0 - 15   // generic bits used across all the pool managers
> > > > > > bit 16 - 23  // pool-handler-specific flag bits
> > > > > > bit 24 - 31  // to select the specific pool manager (up to 256
> > > > > > different flavors of pool managers). For backward compatibility,
> > > > > > '0' (in bits 24-31) selects the existing SW pool manager.
> > > > > > 
> > > > > > Applications can then choose a handler by selecting the flag in
> > > > > > rte_mempool_[xmem]_create. That way it will be easy for testpmd or
> > > > > > any other application to choose the pool handler from the command
> > > > > > line, etc., in the future.
> > > > > There might be issues with the 8-bit handler number, as we'd have
> > > > > to add an API call to first get the index of a given handler by
> > > > > name, then OR it into the flags. That would also mean extra API
> > > > > calls for the non-default external handlers. I do agree with the
> > > > > handler-specific bits though.
> > > > That would be an internal API (mapping the upper 8 bits to a handler
> > > > name), right? Seems OK to me.
> > > > 
> > > > > Having the _empty and _set_handler  APIs seems to me to be OK for the
> > > > > moment. Maybe Olivier could comment?
> > > > > 
> > > > But we'd need 3 APIs, right? _empty, _set_handler and _populate? I
> > > > believe it is better to reduce the public API wherever possible.
> > > > 
> > > > Maybe Olivier could comment ?
> > > Well, I think having 3 different functions is not a problem if the API
> > > is clearer.
> > > 
> > > In my opinion, the following:
> > >   rte_mempool_create_empty()
> > >   rte_mempool_set_handler()
> > >   rte_mempool_populate()
> > > 
> > > is clearer than:
> > >   rte_mempool_create(15 args)
> > But the proposed scheme is not adding any new arguments to
> > rte_mempool_create. It is just extending the existing flags.
> > 
> > rte_mempool_create(15 args) is still there as the API for internal pool
> > creation.
> > 
> > > Splitting the flags into 3 groups, with one not being flags but a
> > > pool handler number, looks overcomplicated from a user perspective.
> > I am concerned with seamless integration with existing applications.
> > IMO, it's not worth having separate functions for external vs internal
> > pool creation from the application's point of view (now every application
> > has to add this logic everywhere for no good reason), just my 2 cents.
> 
> I think that there is always going to be some extra code in the
> applications that want to use an external mempool. The _set_handler
> approach does create, set_handler, populate. The flags method queries the
> handler list to get the index, sets the flag bits, then calls create.
> Both methods will work.

I was suggesting flags like the TXQ flags in ethdev, where the application
just selects the mode. Not sure why the application has to get the index first.

something like:
#define ETH_TXQ_FLAGS_NOMULTSEGS 0x0001 /**< nb_segs=1 for all mbufs */
#define ETH_TXQ_FLAGS_NOREFCOUNT 0x0002 /**< refcnt can be ignored */
#define ETH_TXQ_FLAGS_NOMULTMEMP 0x0004 /**< all bufs come from same mempool */ 
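
Applied to mempool creation, a rough sketch could look like below (the
MEMPOOL_F_HANDLER_* names are hypothetical, only to illustrate the idea):

#define MEMPOOL_F_HANDLER_RING (0 << 24) /**< default SW ring handler */
#define MEMPOOL_F_HANDLER_MYHW (1 << 24) /**< example HW-assisted handler */

mp = rte_mempool_create("test_pool", n, elt_size, cache_size, priv_size,
	NULL, NULL, NULL, NULL, socket_id,
	flags | MEMPOOL_F_HANDLER_MYHW);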

Anyway, it looks like no one else is much bothered about the external pool
manager creation API being different. So I give up. No objections from my
side :-)

> 
> But I think the _set_handler approach is more user friendly, therefore it
> is the method I would lean towards.
> 
> > > > > > and we can remove "mbuf: get default mempool handler from 
> > > > > > configuration"
> > > > > > change-set OR just add if RTE_MBUF_DEFAULT_MEMPOOL_HANDLER is 
> > > > > > defined then set
> > > > > > the same with rte_mempool_set_handler in rte_mempool_[xmem]_create.
> > > > > > 
> > > > > > What do you think?
> > > > 

[dpdk-dev] [PATCH v5 1/3] mempool: support external handler

2016-06-01 Thread Hunt, David


On 5/31/2016 10:11 PM, Jerin Jacob wrote:
> On Tue, May 31, 2016 at 10:41:00PM +0200, Olivier MATZ wrote:
>> Hi,
>>
>> On 05/31/2016 06:03 PM, Jerin Jacob wrote:
>>> On Tue, May 31, 2016 at 04:37:02PM +0100, Hunt, David wrote:

 On 5/31/2016 9:53 AM, Jerin Jacob wrote:
> On Mon, May 30, 2016 at 12:27:26PM +0100, Hunt, David wrote:
>> New mempool handlers will use rte_mempool_create_empty(),
>> rte_mempool_set_handler(),
>> then rte_mempool_populate_*(). These three functions are new to this
>> release, so no problem
> Having separate APIs for external pool-manager creation is worrisome from
> an application perspective. Is it possible to have rte_mempool_[xmem]_create
> for both the external and the existing SW pool managers, and make
> rte_mempool_create_empty and rte_mempool_populate_* internal functions?
>
> IMO, we can do that by selecting a specific rte_mempool_set_handler()
> based on a _flags_ encoding, something like below:
>
> bit 0 - 15   // generic bits used across all the pool managers
> bit 16 - 23  // pool-handler-specific flag bits
> bit 24 - 31  // to select the specific pool manager (up to 256 different
> flavors of pool managers). For backward compatibility, '0' (in bits 24-31)
> selects the existing SW pool manager.
>
> Applications can then choose a handler by selecting the flag in
> rte_mempool_[xmem]_create. That way it will be easy for testpmd or any
> other application to choose the pool handler from the command line, etc.,
> in the future.
 There might be issues with the 8-bit handler number, as we'd have to add an
 API call to first get the index of a given handler by name, then OR it into
 the flags. That would also mean extra API calls for the non-default
 external handlers. I do agree with the handler-specific bits though.
>>> That would be an internal API (mapping the upper 8 bits to a handler
>>> name), right? Seems OK to me.
>>>
 Having the _empty and _set_handler  APIs seems to me to be OK for the
 moment. Maybe Olivier could comment?

>>> But we'd need 3 APIs, right? _empty, _set_handler and _populate? I
>>> believe it is better to reduce the public API wherever possible.
>>>
>>> Maybe Olivier could comment ?
>> Well, I think having 3 different functions is not a problem if the API
>> is clearer.
>>
>> In my opinion, the following:
>>  rte_mempool_create_empty()
>>  rte_mempool_set_handler()
>>  rte_mempool_populate()
>>
>> is clearer than:
>>  rte_mempool_create(15 args)
> But the proposed scheme is not adding any new arguments to
> rte_mempool_create. It is just extending the existing flags.
>
> rte_mempool_create(15 args) is still there as the API for internal pool
> creation.
>
>> Splitting the flags into 3 groups, with one not being flags but a
>> pool handler number, looks overcomplicated from a user perspective.
> I am concerned with seamless integration with existing applications.
> IMO, it's not worth having separate functions for external vs internal
> pool creation from the application's point of view (now every application
> has to add this logic everywhere for no good reason), just my 2 cents.

I think that there is always going to be some extra code in the
applications that want to use an external mempool. The _set_handler
approach does create, set_handler, populate. The flags method queries the
handler list to get the index, sets the flag bits, then calls create. Both
methods will work.

But I think the _set_handler approach is more user friendly, therefore it
is the method I would lean towards.

> and we can remove "mbuf: get default mempool handler from configuration"
> change-set OR just add if RTE_MBUF_DEFAULT_MEMPOOL_HANDLER is defined 
> then set
> the same with rte_mempool_set_handler in rte_mempool_[xmem]_create.
>
> What do you think?
 The "configuration" patch is to allow users to quickly change the mempool
 handler
 by changing RTE_MBUF_DEFAULT_MEMPOOL_HANDLER to another string of a known
 handler. It could just as easily be left out and use the 
 rte_mempool_create.

>>> Yes, I understand, but I am trying to avoid a build-time constant. IMO,
>>> it would be better if, by default, RTE_MBUF_DEFAULT_MEMPOOL_HANDLER is
>>> not defined in the config, and for a quick change developers can
>>> introduce a build with RTE_MBUF_DEFAULT_MEMPOOL_HANDLER="specific handler"
>> My understanding of the compile-time configuration option was
>> to allow a specific architecture to define a specific hw-assisted
>> handler by default.
>>
>> Indeed, if there is no such need for now, we may remove it. But
>> we need a way to select another handler, at least in test-pmd
>> (in command line arguments?).
> Like txflags in testpmd, IMO, mempool flags will help to select the
> handlers seamlessly, as suggested above.
>
> If we are _not_ taking the flags-based selection scheme then it makes
> sense to keep

[dpdk-dev] [PATCH v5 1/3] mempool: support external handler

2016-06-01 Thread Jerin Jacob
On Tue, May 31, 2016 at 10:41:00PM +0200, Olivier MATZ wrote:
> Hi,
> 
> On 05/31/2016 06:03 PM, Jerin Jacob wrote:
> > On Tue, May 31, 2016 at 04:37:02PM +0100, Hunt, David wrote:
> >>
> >>
> >> On 5/31/2016 9:53 AM, Jerin Jacob wrote:
> >>> On Mon, May 30, 2016 at 12:27:26PM +0100, Hunt, David wrote:
>  New mempool handlers will use rte_mempool_create_empty(),
>  rte_mempool_set_handler(),
>  then rte_mempool_populate_*(). These three functions are new to this
>  release, so no problem
> >>> Having separate APIs for external pool-manager creation is worrisome
> >>> from an application perspective. Is it possible to have
> >>> rte_mempool_[xmem]_create for both the external and the existing SW pool
> >>> managers, and make rte_mempool_create_empty and rte_mempool_populate_*
> >>> internal functions?
> >>>
> >>> IMO, we can do that by selecting a specific rte_mempool_set_handler()
> >>> based on a _flags_ encoding, something like below:
> >>>
> >>> bit 0 - 15   // generic bits used across all the pool managers
> >>> bit 16 - 23  // pool-handler-specific flag bits
> >>> bit 24 - 31  // to select the specific pool manager (up to 256 different
> >>> flavors of pool managers). For backward compatibility, '0' (in bits
> >>> 24-31) selects the existing SW pool manager.
> >>>
> >>> Applications can then choose a handler by selecting the flag in
> >>> rte_mempool_[xmem]_create. That way it will be easy for testpmd or any
> >>> other application to choose the pool handler from the command line,
> >>> etc., in the future.
> >>
> >> There might be issues with the 8-bit handler number, as we'd have to add
> >> an API call to first get the index of a given handler by name, then OR it
> >> into the flags. That would also mean extra API calls for the non-default
> >> external handlers. I do agree with the handler-specific bits though.
> > 
> > That would be an internal API (mapping the upper 8 bits to a handler
> > name), right? Seems OK to me.
> > 
> >>
> >> Having the _empty and _set_handler  APIs seems to me to be OK for the
> >> moment. Maybe Olivier could comment?
> >>
> > 
> > But we'd need 3 APIs, right? _empty, _set_handler and _populate? I
> > believe it is better to reduce the public API wherever possible.
> > 
> > Maybe Olivier could comment ?
> 
> Well, I think having 3 different functions is not a problem if the API
> is clearer.
> 
> In my opinion, the following:
>   rte_mempool_create_empty()
>   rte_mempool_set_handler()
>   rte_mempool_populate()
> 
> is clearer than:
>   rte_mempool_create(15 args)

But the proposed scheme is not adding any new arguments to
rte_mempool_create. It is just extending the existing flags.

rte_mempool_create(15 args) is still there as the API for internal pool
creation.

> 
> Splitting the flags into 3 groups, with one not being flags but a
> pool handler number, looks overcomplicated from a user perspective.

I am concerned with seamless integration with existing applications.
IMO, it's not worth having separate functions for external vs internal
pool creation from the application's point of view (now every application
has to add this logic everywhere for no good reason), just my 2 cents.

> 
> >>> and we can remove "mbuf: get default mempool handler from configuration"
> >>> change-set OR just add if RTE_MBUF_DEFAULT_MEMPOOL_HANDLER is defined 
> >>> then set
> >>> the same with rte_mempool_set_handler in rte_mempool_[xmem]_create.
> >>>
> >>> What do you think?
> >>
> >> The "configuration" patch is to allow users to quickly change the mempool
> >> handler
> >> by changing RTE_MBUF_DEFAULT_MEMPOOL_HANDLER to another string of a known
> >> handler. It could just as easily be left out and use the 
> >> rte_mempool_create.
> >>
> > 
> > Yes, I understand, but I am trying to avoid a build-time constant. IMO,
> > it would be better if, by default, RTE_MBUF_DEFAULT_MEMPOOL_HANDLER is
> > not defined in the config, and for a quick change developers can
> > introduce a build with RTE_MBUF_DEFAULT_MEMPOOL_HANDLER="specific handler"
> 
> My understanding of the compile-time configuration option was
> to allow a specific architecture to define a specific hw-assisted
> handler by default.
> 
> Indeed, if there is no such need for now, we may remove it. But
> we need a way to select another handler, at least in test-pmd
> (in command line arguments?).

Like txflags in testpmd, IMO, mempool flags will help to select the handlers
seamlessly, as suggested above.

If we are _not_ taking the flags-based selection scheme then it makes sense
to keep RTE_MBUF_DEFAULT_MEMPOOL_HANDLER

> 
> 
>  to add a parameter to one of them for the config data. Also, since we're
>  adding some new elements to the mempool structure, how about we add a void
>  pointer to a config data structure, as defined by the handler.
> 
>  So, new element in rte_mempool struct alongside the *pool
>   void *pool;
>   void *pool_config;
> 
>  Then add a param to the 

[dpdk-dev] [PATCH v5 1/3] mempool: support external handler

2016-05-31 Thread Olivier MATZ
Hi,

On 05/31/2016 06:03 PM, Jerin Jacob wrote:
> On Tue, May 31, 2016 at 04:37:02PM +0100, Hunt, David wrote:
>>
>>
>> On 5/31/2016 9:53 AM, Jerin Jacob wrote:
>>> On Mon, May 30, 2016 at 12:27:26PM +0100, Hunt, David wrote:
 New mempool handlers will use rte_mempool_create_empty(),
 rte_mempool_set_handler(),
 then rte_mempool_populate_*(). These three functions are new to this
 release, so no problem
>>> Having separate APIs for external pool-manager creation is worrisome from
>>> an application perspective. Is it possible to have rte_mempool_[xmem]_create
>>> for both the external and the existing SW pool managers, and make
>>> rte_mempool_create_empty and rte_mempool_populate_* internal functions?
>>>
>>> IMO, we can do that by selecting a specific rte_mempool_set_handler()
>>> based on a _flags_ encoding, something like below:
>>>
>>> bit 0 - 15   // generic bits used across all the pool managers
>>> bit 16 - 23  // pool-handler-specific flag bits
>>> bit 24 - 31  // to select the specific pool manager (up to 256 different
>>> flavors of pool managers). For backward compatibility, '0' (in bits 24-31)
>>> selects the existing SW pool manager.
>>>
>>> Applications can then choose a handler by selecting the flag in
>>> rte_mempool_[xmem]_create. That way it will be easy for testpmd or any other
>>> application to choose the pool handler from the command line, etc., in the future.
>>
>> There might be issues with the 8-bit handler number, as we'd have to add an
>> API call to first get the index of a given handler by name, then OR it into
>> the flags. That would also mean extra API calls for the non-default
>> external handlers. I do agree with the handler-specific bits though.
> 
> That would be an internal API (mapping the upper 8 bits to a handler
> name), right? Seems OK to me.
> 
>>
>> Having the _empty and _set_handler  APIs seems to me to be OK for the
>> moment. Maybe Olivier could comment?
>>
> 
> But we'd need 3 APIs, right? _empty, _set_handler and _populate? I believe
> it is better to reduce the public API wherever possible.
> 
> Maybe Olivier could comment ?

Well, I think having 3 different functions is not a problem if the API
is clearer.

In my opinion, the following:
rte_mempool_create_empty()
rte_mempool_set_handler()
rte_mempool_populate()

is clearer than:
rte_mempool_create(15 args)
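
For example, an application using an external handler would then do
something like this (a sketch; "my_hw_handler" is just an example name):

	mp = rte_mempool_create_empty("my_pool", n, elt_size,
		cache_size, priv_size, socket_id, 0);
	if (mp == NULL)
		return -ENOMEM;
	rte_mempool_set_handler(mp, "my_hw_handler");
	rte_mempool_populate_default(mp);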

Splitting the flags into 3 groups, with one not being flags but a
pool handler number, looks overcomplicated from a user perspective.

>>> and we can remove "mbuf: get default mempool handler from configuration"
>>> change-set OR just add if RTE_MBUF_DEFAULT_MEMPOOL_HANDLER is defined then 
>>> set
>>> the same with rte_mempool_set_handler in rte_mempool_[xmem]_create.
>>>
>>> What do you think?
>>
>> The "configuration" patch is to allow users to quickly change the mempool
>> handler
>> by changing RTE_MBUF_DEFAULT_MEMPOOL_HANDLER to another string of a known
>> handler. It could just as easily be left out and use the rte_mempool_create.
>>
> 
> Yes, I understand, but I am trying to avoid a build-time constant. IMO, it
> would be better if, by default, RTE_MBUF_DEFAULT_MEMPOOL_HANDLER is not
> defined in the config, and for a quick change developers can introduce a
> build with RTE_MBUF_DEFAULT_MEMPOOL_HANDLER="specific handler"

My understanding of the compile-time configuration option was
to allow a specific architecture to define a specific hw-assisted
handler by default.

Indeed, if there is no such need for now, we may remove it. But
we need a way to select another handler, at least in test-pmd
(in command line arguments?).


 to add a parameter to one of them for the config data. Also, since we're
 adding some new elements to the mempool structure, how about we add a void
 pointer to a config data structure, as defined by the handler.

 So, new element in rte_mempool struct alongside the *pool
  void *pool;
  void *pool_config;

 Then add a param to the rte_mempool_set_handler function:
 int
 rte_mempool_set_handler(struct rte_mempool *mp, const char *name, void
 *pool_config)
>>> IMO, maybe we need to have _set_ and _get_. So I think we can have
>>> two separate callbacks in the external pool manager for that, if required.
>>> IMO, for now, we can live with the pool-manager-specific 8 bits (bits 16-23)
>>> for the configuration as mentioned above and add the new callbacks for
>>> set and get when required?
>>
>> OK, We'll keep the config to the 8 bits of the flags for now. That will also
>> mean I won't
>> add the pool_config void pointer either (for the moment)
> 
> OK to me.

I'm not sure I'm getting it. Does it mean having something like
this?

rte_mempool_set_handler(struct rte_mempool *mp, const char *name,
unsigned int flags)

Or does it mean some of the flags passed to rte_mempool_create*()
will be specific to some handlers?


Before adding handler-specific flags or config, can we 

[dpdk-dev] [PATCH v5 1/3] mempool: support external handler

2016-05-31 Thread Jerin Jacob
On Tue, May 31, 2016 at 04:37:02PM +0100, Hunt, David wrote:
> 
> 
> On 5/31/2016 9:53 AM, Jerin Jacob wrote:
> > On Mon, May 30, 2016 at 12:27:26PM +0100, Hunt, David wrote:
> > > New mempool handlers will use rte_mempool_create_empty(),
> > > rte_mempool_set_handler(),
> > > then rte_mempool_populate_*(). These three functions are new to this
> > > release, so no problem
> > Having separate APIs for external pool-manager creation is worrisome from
> > an application perspective. Is it possible to have rte_mempool_[xmem]_create
> > for both the external and the existing SW pool managers, and make
> > rte_mempool_create_empty and rte_mempool_populate_* internal functions?
> > 
> > IMO, we can do that by selecting a specific rte_mempool_set_handler()
> > based on a _flags_ encoding, something like below:
> > 
> > bit 0 - 15   // generic bits used across all the pool managers
> > bit 16 - 23  // pool-handler-specific flag bits
> > bit 24 - 31  // to select the specific pool manager (up to 256 different
> > flavors of pool managers). For backward compatibility, '0' (in bits 24-31)
> > selects the existing SW pool manager.
> > 
> > Applications can then choose a handler by selecting the flag in
> > rte_mempool_[xmem]_create. That way it will be easy for testpmd or any other
> > application to choose the pool handler from the command line, etc., in the future.
> 
> There might be issues with the 8-bit handler number, as we'd have to add an
> API call to first get the index of a given handler by name, then OR it into
> the flags. That would also mean extra API calls for the non-default
> external handlers. I do agree with the handler-specific bits though.

That would be an internal API (mapping the upper 8 bits to a handler name),
right? Seems OK to me.

> 
> Having the _empty and _set_handler  APIs seems to me to be OK for the
> moment. Maybe Olivier could comment?
> 

But we'd need 3 APIs, right? _empty, _set_handler and _populate? I believe
it is better to reduce the public API wherever possible.

Maybe Olivier could comment ?


> > and we can remove "mbuf: get default mempool handler from configuration"
> > change-set OR just add if RTE_MBUF_DEFAULT_MEMPOOL_HANDLER is defined then 
> > set
> > the same with rte_mempool_set_handler in rte_mempool_[xmem]_create.
> > 
> > What do you think?
> 
> The "configuration" patch is to allow users to quickly change the mempool
> handler
> by changing RTE_MBUF_DEFAULT_MEMPOOL_HANDLER to another string of a known
> handler. It could just as easily be left out and use the rte_mempool_create.
>

Yes, I understand, but I am trying to avoid a build-time constant. IMO, it
would be better if, by default, RTE_MBUF_DEFAULT_MEMPOOL_HANDLER is not
defined in the config, and for a quick change developers can introduce a
build with RTE_MBUF_DEFAULT_MEMPOOL_HANDLER="specific handler"
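
i.e. something like this inside rte_mempool_[xmem]_create (sketch only):

#ifdef RTE_MBUF_DEFAULT_MEMPOOL_HANDLER
	/* build-time override of the default handler, off by default */
	rte_mempool_set_handler(mp, RTE_MBUF_DEFAULT_MEMPOOL_HANDLER);
#endif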


> > > to add a parameter to one of them for the config data. Also, since we're
> > > adding some new elements to the mempool structure, how about we add a
> > > void pointer to a config data structure, as defined by the handler.
> > > 
> > > So, new element in rte_mempool struct alongside the *pool
> > >  void *pool;
> > >  void *pool_config;
> > > 
> > > Then add a param to the rte_mempool_set_handler function:
> > > int
> > > rte_mempool_set_handler(struct rte_mempool *mp, const char *name, void
> > > *pool_config)
> > IMO, maybe we need to have _set_ and _get_. So I think we can have
> > two separate callbacks in the external pool manager for that, if required.
> > IMO, for now, we can live with the pool-manager-specific 8 bits (bits 16-23)
> > for the configuration as mentioned above and add the new callbacks for
> > set and get when required?
> 
> OK, We'll keep the config to the 8 bits of the flags for now. That will also
> mean I won't
> add the pool_config void pointer either (for the moment)

OK to me.

> 
> > > > 2) IMO, It is better to change void *pool in struct rte_mempool to
> > > > anonymous union type, something like below, so that mempool
> > > > implementation can choose the best type.
> > > > union {
> > > > void *pool;
> > > > uint64_t val;
> > > > }
> > > Could we do this by using the union for the *pool_config suggested above,
> > > would that give
> > > you what you need?
> > It would be an extra overhead for the external pool manager to _alloc_
> > memory and store the allocated pointer in the mempool struct (as *pool)
> > and use pool for pointing to other data structures, as some implementations
> > need only a limited number of bytes to store the external pool manager
> > specific context.
> > 
> > In order to fix this problem, we may classify fast-path and slow-path
> > elements in struct rte_mempool, move all fast-path elements into the first
> > cache line, and create an empty opaque space in the remaining bytes of the
> > cache line, so that if the external pool manager needs only limited space
> > then it is not required to allocate separate memory to

[dpdk-dev] [PATCH v5 1/3] mempool: support external handler

2016-05-31 Thread Hunt, David


On 5/31/2016 9:53 AM, Jerin Jacob wrote:
> On Mon, May 30, 2016 at 12:27:26PM +0100, Hunt, David wrote:
>> New mempool handlers will use rte_mempool_create_empty(),
>> rte_mempool_set_handler(),
>> then rte_mempool_populate_*(). These three functions are new to this
>> release, so no problem
> Having separate APIs for external pool-manager creation is worrisome from
> an application perspective. Is it possible to have rte_mempool_[xmem]_create
> for both the external and the existing SW pool managers, and make
> rte_mempool_create_empty and rte_mempool_populate_* internal functions?
>
> IMO, we can do that by selecting a specific rte_mempool_set_handler()
> based on a _flags_ encoding, something like below:
>
> bit 0 - 15   // generic bits used across all the pool managers
> bit 16 - 23  // pool-handler-specific flag bits
> bit 24 - 31  // to select the specific pool manager (up to 256 different
> flavors of pool managers). For backward compatibility, '0' (in bits 24-31)
> selects the existing SW pool manager.
>
> Applications can then choose a handler by selecting the flag in
> rte_mempool_[xmem]_create. That way it will be easy for testpmd or any other
> application to choose the pool handler from the command line, etc., in the future.

There might be issues with the 8-bit handler number, as we'd have to add
an API call to first get the index of a given handler by name, then OR it
into the flags. That would also mean extra API calls for the non-default
external handlers. I do agree with the handler-specific bits though.

Having the _empty and _set_handler  APIs seems to me to be OK for the
moment. Maybe Olivier could comment?

> and we can remove "mbuf: get default mempool handler from configuration"
> change-set OR just add if RTE_MBUF_DEFAULT_MEMPOOL_HANDLER is defined then set
> the same with rte_mempool_set_handler in rte_mempool_[xmem]_create.
>
> What do you think?

The "configuration" patch is to allow users to quickly change the 
mempool handler
by changing RTE_MBUF_DEFAULT_MEMPOOL_HANDLER to another string of a known
handler. It could just as easily be left out and use the 
rte_mempool_create.

>> to add a parameter to one of them for the config data. Also, since we're
>> adding some new elements to the mempool structure, how about we add a void
>> pointer to a config data structure, as defined by the handler.
>>
>> So, new element in rte_mempool struct alongside the *pool
>>  void *pool;
>>  void *pool_config;
>>
>> Then add a param to the rte_mempool_set_handler function:
>> int
>> rte_mempool_set_handler(struct rte_mempool *mp, const char *name, void
>> *pool_config)
> IMO, maybe we need to have _set_ and _get_. So I think we can have
> two separate callbacks in the external pool manager for that, if required.
> IMO, for now, we can live with the pool-manager-specific 8 bits (bits 16-23)
> for the configuration as mentioned above and add the new callbacks for
> set and get when required?

OK, we'll keep the config to the 8 bits of the flags for now. That will
also mean I won't add the pool_config void pointer either (for the
moment).

>>> 2) IMO, It is better to change void *pool in struct rte_mempool to
>>> anonymous union type, something like below, so that mempool
>>> implementation can choose the best type.
>>> union {
>>> void *pool;
>>> uint64_t val;
>>> }
>> Could we do this by using the union for the *pool_config suggested above,
>> would that give
>> you what you need?
> It would be an extra overhead for the external pool manager to _alloc_
> memory and store the allocated pointer in the mempool struct (as *pool)
> and use pool for pointing to other data structures, as some implementations
> need only a limited number of bytes to store the external pool manager
> specific context.
>
> In order to fix this problem, we may classify fast-path and slow-path
> elements in struct rte_mempool, move all fast-path elements into the first
> cache line, and create an empty opaque space in the remaining bytes of the
> cache line, so that if the external pool manager needs only limited space
> then it is not required to allocate separate memory to save the
> per-core cache in the fast path.
>
> something like below,
> union {
>   void *pool;
>   uint64_t val;
>   uint8_t extra_mem[16]; /* available free bytes in fast-path cache line */
> };

Something for the future, perhaps? Will the 8 bits in the flags suffice
for now?


> Other points,
>
> 1) Is it possible to remove the unused is_mp in the __mempool_put_bulk
> function, as it is just an internal function?

Fixed

> 2) Considering "get" and "put" are the fast-path callbacks for
> pool-manger, Is it possible to avoid the extra overhead of the following
> _load_ and additional cache line on each call,
> rte_mempool_handler_table.handler[handler_idx]
>
> I understand it is for multiprocess support but I am thing can we
> introduce something like ethernet API support for multiprocess and
> resolve "put" and "get" functions pointer on init and store 

[dpdk-dev] [PATCH v5 1/3] mempool: support external handler

2016-05-31 Thread Jerin Jacob
On Mon, May 30, 2016 at 12:27:26PM +0100, Hunt, David wrote:
> 
> New mempool handlers will use rte_mempool_create_empty(),
> rte_mempool_set_handler(),
> then rte_mempool_populate_*(). These three functions are new to this
> release, so no problem

Having separate APIs for external pool-manager creation is worrisome from
an application perspective. Is it possible to have rte_mempool_[xmem]_create
for both the external and the existing SW pool managers, and make
rte_mempool_create_empty and rte_mempool_populate_* internal functions?

IMO, we can do that by selecting a specific rte_mempool_set_handler()
based on a _flags_ encoding, something like below:

bit 0 - 15   // generic bits used across all the pool managers
bit 16 - 23  // pool-handler-specific flag bits
bit 24 - 31  // to select the specific pool manager (up to 256 different
flavors of pool managers). For backward compatibility, '0' (in bits 24-31)
selects the existing SW pool manager.

Applications can then choose a handler by selecting the flag in
rte_mempool_[xmem]_create. That way it will be easy for testpmd or any other
application to choose the pool handler from the command line, etc., in the
future.
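
A minimal sketch of that encoding (macro names are made up, only to
illustrate the proposal):

#define MEMPOOL_F_GENERIC_MASK 0x0000ffff /* bits 0-15: common flags */
#define MEMPOOL_F_HANDLER_MASK 0x00ff0000 /* bits 16-23: handler-specific */
#define MEMPOOL_F_MANAGER_MASK 0xff000000 /* bits 24-31: pool manager id */

/* extract the pool manager id; 0 selects the existing SW pool manager */
#define MEMPOOL_MANAGER_ID(flags) \
	(((flags) & MEMPOOL_F_MANAGER_MASK) >> 24)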

And we can remove the "mbuf: get default mempool handler from configuration"
change-set, OR just add: if RTE_MBUF_DEFAULT_MEMPOOL_HANDLER is defined, then
set the same with rte_mempool_set_handler in rte_mempool_[xmem]_create.

What do you think?

> to add a parameter to one of them for the config data. Also, since we're
> adding some new elements to the mempool structure, how about we add a void
> pointer to a config data structure, as defined by the handler.
> 
> So, new element in rte_mempool struct alongside the *pool
> void *pool;
> void *pool_config;
> 
> Then add a param to the rte_mempool_set_handler function:
> int
> rte_mempool_set_handler(struct rte_mempool *mp, const char *name, void
> *pool_config)

IMO, maybe we need to have _set_ and _get_. So I think we can have
two separate callbacks in the external pool manager for that, if required.
IMO, for now, we can live with the pool-manager-specific 8 bits (bits 16-23)
for the configuration as mentioned above and add the new callbacks for
set and get when required?

> > 2) IMO, It is better to change void *pool in struct rte_mempool to
> > anonymous union type, something like below, so that mempool
> > implementation can choose the best type.
> > union {
> > void *pool;
> > uint64_t val;
> > }
> 
> Could we do this by using the union for the *pool_config suggested above,
> would that give
> you what you need?

It would be an extra overhead for the external pool manager to _alloc_
memory and store the allocated pointer in the mempool struct (as *pool)
and use pool for pointing to other data structures, as some implementations
need only a limited number of bytes to store the external pool manager
specific context.

In order to fix this problem, we may classify fast-path and slow-path
elements in struct rte_mempool, move all fast-path elements into the first
cache line, and create an empty opaque space in the remaining bytes of the
cache line, so that if the external pool manager needs only limited space
then it is not required to allocate separate memory to save the
per-core cache in the fast path.

something like below:
union {
	void *pool;
	uint64_t val;
	uint8_t extra_mem[16]; /* available free bytes in fast-path cache line */
};

Other points,

1) Is it possible to remove the unused is_mp in the __mempool_put_bulk
function, as it is just an internal function?

2) Considering "get" and "put" are the fast-path callbacks for
pool-manger, Is it possible to avoid the extra overhead of the following
_load_ and additional cache line on each call,
rte_mempool_handler_table.handler[handler_idx]

I understand it is for multiprocess support, but I am thinking: can we
introduce something like the ethernet API support for multiprocess and
resolve the "put" and "get" function pointers on init and store them in
struct rte_mempool? Something like:

file: drivers/net/ixgbe/ixgbe_ethdev.c
search for if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
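
i.e. resolve the ops once per process and cache them (a sketch with
hypothetical fields; the catch is that struct rte_mempool lives in shared
memory, so a secondary process would have to redo this on attach, the way
ixgbe re-resolves its rx/tx burst functions):

	mp->put = rte_mempool_handler_table.handler[mp->handler_idx].put;
	mp->get = rte_mempool_handler_table.handler[mp->handler_idx].get;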

Jerin



[dpdk-dev] [PATCH v5 1/3] mempool: support external handler

2016-05-30 Thread Jerin Jacob
On Fri, May 27, 2016 at 03:44:31PM +0100, Hunt, David wrote:
> 
> 
Hi David,
[snip]
>  That chunk of code above would be better moved all right. I'd suggest
> moving it to the
> rte_mempool_create function, as that's the one that needs the backward
> compatibility.

OK

> 
> On the flags issue, each mempool handler can re-interpret the flags as
> needed. Maybe we
> could use the upper half of the bits for different handlers, changing the
> meaning of the
> bits depending on which handler is being set up. We can then keep the lower
> half for bits that are common across all handlers? That way the user can

A common lower half of bits in the flags looks good.

> just set the bits they
> are interested in for that handler. Also, the alloc function has access to
> the flags, so maybe the
> handler specific setup could be handled in the alloc function rather than
> adding a new function pointer?

Yes. I agree.

> 
> Of course, that won't help if we need to pass in more data, in which case
> we'd probably need an
> opaque data pointer somewhere. It would probably be most useful to pass it
> in with the
> alloc, which may need the data. Any suggestions?

But the top-level rte_mempool_create() function needs to pass the data,
right? That would be an ABI change. IMO, we need to start thinking about
passing a struct of config data to rte_mempool_create to preserve backward
compatibility when new arguments are added to rte_mempool_create().
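
For instance, something like the following (a hypothetical shape for such a
struct, sketch only):

struct rte_mempool_params {
	const char *name;
	unsigned int n;         /* number of elements */
	unsigned int elt_size;
	unsigned int cache_size;
	unsigned int flags;
	int socket_id;
	void *pool_config;      /* handler-specific config, may be NULL */
	/* new members get appended here in later releases */
};

struct rte_mempool *
rte_mempool_create_with_params(const struct rte_mempool_params *params);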

Other points from a HW-assisted pool manager perspective:

1) Maybe RING can be replaced with some other higher-abstraction name
for the internal MEMPOOL_F_RING_CREATED flag.
2) IMO, it is better to change void *pool in struct rte_mempool to an
anonymous union type, something like below, so that the mempool
implementation can choose the best type:
union {
	void *pool;
	uint64_t val;
};

3) int32_t handler_idx creates a 4-byte hole in struct rte_mempool on
64-bit systems. IMO it is better to rearrange (as const struct rte_memzone
*mz comes next).

4) IMO, it is better to change ext_alloc/ext_free to ext_create/ext_destroy,
as there is no allocation in the HW-assisted pool manager case; it will
mostly be doing some HW initialization.
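
A sketch of that rename (types simplified, names only to illustrate):

typedef void *(*rte_mempool_create_t)(struct rte_mempool *mp);
typedef void (*rte_mempool_destroy_t)(void *p);

struct rte_mempool_handler {
	char name[RTE_MEMPOOL_NAMESIZE];
	rte_mempool_create_t create;   /* was: ext_alloc */
	rte_mempool_destroy_t destroy; /* was: ext_free */
	/* put/get/get_count callbacks unchanged */
};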

> 
> Regards,
> Dave.
> 


[dpdk-dev] [PATCH v5 1/3] mempool: support external handler

2016-05-27 Thread Jerin Jacob
On Fri, May 27, 2016 at 10:52:42AM +0100, Hunt, David wrote:
> 
> 
> On 5/24/2016 4:35 PM, Jerin Jacob wrote:
> > On Thu, May 19, 2016 at 02:44:59PM +0100, David Hunt wrote:
> > > +	/*
> > > +	 * Since we have 4 combinations of the SP/SC/MP/MC examine the flags to
> > > +	 * set the correct index into the handler table.
> > > +	 */
> > > +	if (flags & (MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET))
> > > +		rte_mempool_set_handler(mp, "ring_sp_sc");
> > > +	else if (flags & MEMPOOL_F_SP_PUT)
> > > +		rte_mempool_set_handler(mp, "ring_sp_mc");
> > > +	else if (flags & MEMPOOL_F_SC_GET)
> > > +		rte_mempool_set_handler(mp, "ring_mp_sc");
> > > +	else
> > > +		rte_mempool_set_handler(mp, "ring_mp_mc");
> > IMO, we should decouple the implementation-specific flags of _a_ given
> > external pool manager implementation from the generic
> > rte_mempool_create_empty function because, going further, when we
> > introduce new flags for custom HW-accelerated external pool managers,
> > this common code will be bloated.
> 
> These flags are only there to maintain backward compatibility for the
> default handlers. I would not
> envisage adding more flags to this, I would suggest just adding a new
> handler using the new API calls.
> So I would not see this code growing much in the future.

IMHO, for _each_ HW-accelerated external pool manager we may need to
introduce a specific flag to tune for specific use cases, i.e. like the
MEMPOOL_F_* flags for the existing pool manager implemented in SW. For
instance, when we add a new HW external pool manager we may need to add
MEMPOOL_MYHW_DONT_FREE_ON_SEND (just a random name) to achieve certain
functionality.

So I propose letting the "unsigned flags" argument in mempool create be an
opaque type, where each external pool manager can define what makes sense
for that specific pool manager, as there is NO other means to configure the
pool manager.

For instance, on a HW-accelerated pool manager, the flag MEMPOOL_F_SP_PUT
may not make much sense, as it can work with MP without any additional
settings in HW.

So instead of adding these checks in common code, IMO, let's move this
to a pool-manager-specific "function pointer" function and invoke
the function pointer from the generic mempool create function.
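
Something like the following (a sketch; the callback name is made up):

/* optional per-handler hook in struct rte_mempool_handler */
int (*parse_flags)(struct rte_mempool *mp, unsigned int flags);

/* in the generic create path, replacing the common SP/SC checks: */
handler = &rte_mempool_handler_table.handler[mp->handler_idx];
if (handler->parse_flags != NULL)
	handler->parse_flags(mp, flags);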

What do you think?

Jerin



[dpdk-dev] [PATCH v5 1/3] mempool: support external handler

2016-05-27 Thread Hunt, David


On 5/27/2016 11:33 AM, Jerin Jacob wrote:
> On Fri, May 27, 2016 at 10:52:42AM +0100, Hunt, David wrote:
>>
>> On 5/24/2016 4:35 PM, Jerin Jacob wrote:
>>> On Thu, May 19, 2016 at 02:44:59PM +0100, David Hunt wrote:
 +	/*
 +	 * Since we have 4 combinations of the SP/SC/MP/MC examine the flags to
 +	 * set the correct index into the handler table.
 +	 */
 +	if (flags & (MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET))
 +		rte_mempool_set_handler(mp, "ring_sp_sc");
 +	else if (flags & MEMPOOL_F_SP_PUT)
 +		rte_mempool_set_handler(mp, "ring_sp_mc");
 +	else if (flags & MEMPOOL_F_SC_GET)
 +		rte_mempool_set_handler(mp, "ring_mp_sc");
 +	else
 +		rte_mempool_set_handler(mp, "ring_mp_mc");
>>> IMO, we should decouple the implementation-specific flags of _a_ given
>>> external pool manager implementation from the generic
>>> rte_mempool_create_empty function because, going further, when we
>>> introduce new flags for custom HW-accelerated external pool managers,
>>> this common code will be bloated.
>> These flags are only there to maintain backward compatibility for the
>> default handlers. I would not
>> envisage adding more flags to this, I would suggest just adding a new
>> handler using the new API calls.
>> So I would not see this code growing much in the future.
> IMHO, for _each_ HW-accelerated external pool manager we may need to
> introduce a specific flag to tune for specific use cases, i.e. like the
> MEMPOOL_F_* flags for the existing pool manager implemented in SW. For
> instance, when we add a new HW external pool manager we may need to add
> MEMPOOL_MYHW_DONT_FREE_ON_SEND (just a random name) to achieve certain
> functionality.
>
> So I propose letting the "unsigned flags" argument in mempool create be an
> opaque type, where each external pool manager can define what makes sense
> for that specific pool manager, as there is NO other means to configure
> the pool manager.
>
> For instance, on HW accelerated pool manager, the flag MEMPOOL_F_SP_PUT may
> not make much sense as it can work with MP without any additional
> settings in HW.
>
> So instead of adding these checks in common code, IMO, lets move this
> to a pool manager specific "function pointer" function and invoke
> the function pointer from generic mempool create function.
>
> What do you think?
>
> Jerin

Jerin,
  That chunk of code above would be better moved all right. I'd 
suggest moving it to the
rte_mempool_create function, as that's the one that needs the backward 
compatibility.

On the flags issue, each mempool handler can re-interpret the flags as 
needed. Maybe we
could use the upper half of the bits for different handlers, changing 
the meaning of the
bits depending on which handler is being set up. We can then keep the lower
half for bits that are common across all handlers? That way the user can 
just set the bits they
are interested in for that handler. Also, the alloc function has access 
to the flags, so maybe the
handler specific setup could be handled in the alloc function rather 
than adding a new function pointer?

Of course, that won't help if we need to pass in more data, in which 
case we'd probably need an
opaque data pointer somewhere. It would probably be most useful to pass 
it in with the
alloc, which may need the data. Any suggestions?

Regards,
Dave.


[dpdk-dev] [PATCH v5 1/3] mempool: support external handler

2016-05-27 Thread Hunt, David


On 5/24/2016 4:35 PM, Jerin Jacob wrote:
> On Thu, May 19, 2016 at 02:44:59PM +0100, David Hunt wrote:
>> +	/*
>> +	 * Since we have 4 combinations of the SP/SC/MP/MC examine the flags to
>> +	 * set the correct index into the handler table.
>> +	 */
>> +	if (flags & (MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET))
>> +		rte_mempool_set_handler(mp, "ring_sp_sc");
>> +	else if (flags & MEMPOOL_F_SP_PUT)
>> +		rte_mempool_set_handler(mp, "ring_sp_mc");
>> +	else if (flags & MEMPOOL_F_SC_GET)
>> +		rte_mempool_set_handler(mp, "ring_mp_sc");
>> +	else
>> +		rte_mempool_set_handler(mp, "ring_mp_mc");
> IMO, We should decouple the implementation specific flags of _a_
> external pool manager implementation from the generic rte_mempool_create_empty
> function as going further when we introduce new flags for custom HW 
> accelerated
> external pool manager then this common code will be bloated.

These flags are only there to maintain backward compatibility for the
default handlers. I would not envisage adding more flags to this; I would
suggest just adding a new handler using the new API calls.
So I would not see this code growing much in the future.


>> +/** Structure storing the table of registered handlers. */
>> +struct rte_mempool_handler_table {
>> +rte_spinlock_t sl; /**< Spinlock for add/delete. */
>> +uint32_t num_handlers; /**< Number of handlers in the table. */
>> +/** Storage for all possible handlers. */
>> +struct rte_mempool_handler handler[RTE_MEMPOOL_MAX_HANDLER_IDX];
>> +};
> add __rte_cache_aligned to this structure to avoid "handler" memory
> cacheline being shared with other variables

Will do.
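
i.e. (sketch of the suggested change):

struct rte_mempool_handler_table {
	rte_spinlock_t sl; /**< Spinlock for add/delete. */
	uint32_t num_handlers; /**< Number of handlers in the table. */
	/** Storage for all possible handlers. */
	struct rte_mempool_handler handler[RTE_MEMPOOL_MAX_HANDLER_IDX];
} __rte_cache_aligned;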

>> +
>> +/** Array of registered handlers */
>> +extern struct rte_mempool_handler_table rte_mempool_handler_table;
>> +
>> +/**
>> + * @internal Get the mempool handler from its index.
>> + *
>> + * @param handler_idx
>> + *   The index of the handler in the handler table. It must be a valid
>> + *   index: (0 <= idx < num_handlers).
>> + * @return
>> + *   The pointer to the handler in the table.
>> + */
>> +static struct rte_mempool_handler *
> inline?

Will do.

>> 	/* How many do we require i.e. number to fill the cache + the request */
>> -	ret = rte_ring_mc_dequeue_bulk(mp->ring, &cache->objs[cache->len], req);
>> +	ret = rte_mempool_ext_get_bulk(mp,
> This turns an inline function into a function pointer. Nothing wrong with
> that. However, do you see any performance drop with the "local cache"-only
> use case?
>
> http://dpdk.org/dev/patchwork/patch/12993/

With the latest mempool manager patch (without 12993), I see no
performance degradation on my Haswell machine.
However, when I apply patch 12993, I'm seeing a 200-300 kpps drop.

With 12993, the mempool_perf_autotest shows 24% more
enqueues/dequeues, but testpmd forwarding traffic between 2 40 Gig
interfaces from a hardware traffic generator, with one core doing the
forwarding, shows a drop of 200-300 kpps.

Regards,
Dave.



---snip---




[dpdk-dev] [PATCH v5 1/3] mempool: support external handler

2016-05-24 Thread Jerin Jacob
On Thu, May 19, 2016 at 02:44:59PM +0100, David Hunt wrote:
> Until now, the objects stored in a mempool were internally stored in a
> ring. This patch introduces the possibility to register external handlers
> replacing the ring.
> 
> The default behavior remains unchanged, but calling the new function
> rte_mempool_set_handler() right after rte_mempool_create_empty() allows
> changing the handler that will be used when populating the mempool.
> 
> v5 changes: rebased on top of the 35-patch mempool rework series.
> 
> Signed-off-by: David Hunt 
> Signed-off-by: Olivier Matz 
> ---
>  app/test/test_mempool_perf.c   |   1 -
>  lib/librte_mempool/Makefile|   2 +
>  lib/librte_mempool/rte_mempool.c   |  73 --
>  lib/librte_mempool/rte_mempool.h   | 212 +
>  lib/librte_mempool/rte_mempool_default.c   | 147 
>  lib/librte_mempool/rte_mempool_handler.c   | 139 +++
>  lib/librte_mempool/rte_mempool_version.map |   4 +
>  7 files changed, 506 insertions(+), 72 deletions(-)
>  create mode 100644 lib/librte_mempool/rte_mempool_default.c
>  create mode 100644 lib/librte_mempool/rte_mempool_handler.c
> 
> diff --git a/app/test/test_mempool_perf.c b/app/test/test_mempool_perf.c
> index cdc02a0..091c1df 100644
> --- a/app/test/test_mempool_perf.c
> +++ b/app/test/test_mempool_perf.c
> @@ -161,7 +161,6 @@ per_lcore_mempool_test(__attribute__((unused)) void *arg)
>  n_get_bulk);
>   if (unlikely(ret < 0)) {
>   rte_mempool_dump(stdout, mp);
> - rte_ring_dump(stdout, mp->ring);
>   /* in this case, objects are lost... */
>   return -1;
>   }
> diff --git a/lib/librte_mempool/Makefile b/lib/librte_mempool/Makefile
> index 43423e0..f19366e 100644
> --- a/lib/librte_mempool/Makefile
> +++ b/lib/librte_mempool/Makefile
> @@ -42,6 +42,8 @@ LIBABIVER := 2
>  
>  # all source are stored in SRCS-y
>  SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool.c
> +SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool_handler.c
> +SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool_default.c
>  # install includes
>  SYMLINK-$(CONFIG_RTE_LIBRTE_MEMPOOL)-include := rte_mempool.h
>  
> diff --git a/lib/librte_mempool/rte_mempool.c 
> b/lib/librte_mempool/rte_mempool.c
> index 1ab6701..6ec2b3f 100644
> --- a/lib/librte_mempool/rte_mempool.c
> +++ b/lib/librte_mempool/rte_mempool.c
> @@ -148,7 +148,7 @@ mempool_add_elem(struct rte_mempool *mp, void *obj, 
> phys_addr_t physaddr)
>  #endif
>  
>   /* enqueue in ring */
> - rte_ring_sp_enqueue(mp->ring, obj);
> + rte_mempool_ext_put_bulk(mp, &obj, 1);
>  }
>  
>  /* call obj_cb() for each mempool element */
> @@ -300,40 +300,6 @@ rte_mempool_xmem_usage(__rte_unused void *vaddr, 
> uint32_t elt_num,
>   return (size_t)paddr_idx << pg_shift;
>  }
>  
> -/* create the internal ring */
> -static int
> -rte_mempool_ring_create(struct rte_mempool *mp)
> -{
> - int rg_flags = 0, ret;
> - char rg_name[RTE_RING_NAMESIZE];
> - struct rte_ring *r;
> -
> - ret = snprintf(rg_name, sizeof(rg_name),
> - RTE_MEMPOOL_MZ_FORMAT, mp->name);
> - if (ret < 0 || ret >= (int)sizeof(rg_name))
> - return -ENAMETOOLONG;
> -
> - /* ring flags */
> - if (mp->flags & MEMPOOL_F_SP_PUT)
> - rg_flags |= RING_F_SP_ENQ;
> - if (mp->flags & MEMPOOL_F_SC_GET)
> - rg_flags |= RING_F_SC_DEQ;
> -
> - /* Allocate the ring that will be used to store objects.
> -  * Ring functions will return appropriate errors if we are
> -  * running as a secondary process etc., so no checks made
> -  * in this function for that condition.
> -  */
> - r = rte_ring_create(rg_name, rte_align32pow2(mp->size + 1),
> - mp->socket_id, rg_flags);
> - if (r == NULL)
> - return -rte_errno;
> -
> - mp->ring = r;
> - mp->flags |= MEMPOOL_F_RING_CREATED;
> - return 0;
> -}
> -
>  /* free a memchunk allocated with rte_memzone_reserve() */
>  static void
>  rte_mempool_memchunk_mz_free(__rte_unused struct rte_mempool_memhdr *memhdr,
> @@ -351,7 +317,7 @@ rte_mempool_free_memchunks(struct rte_mempool *mp)
>   void *elt;
>  
>   while (!STAILQ_EMPTY(&mp->elt_list)) {
> - rte_ring_sc_dequeue(mp->ring, &elt);
> + rte_mempool_ext_get_bulk(mp, &elt, 1);
>   (void)elt;
>   STAILQ_REMOVE_HEAD(&mp->elt_list, next);
>   mp->populated_size--;
> @@ -380,15 +346,18 @@ rte_mempool_populate_phys(struct rte_mempool *mp, char 
> *vaddr,
>   unsigned i = 0;
>   size_t off;
>   struct rte_mempool_memhdr *memhdr;
> - int ret;
>  
>   /* create the internal ring if not already done */
>   if ((mp->flags 

[dpdk-dev] [PATCH v5 1/3] mempool: support external handler

2016-05-19 Thread David Hunt
Until now, the objects stored in a mempool were internally stored in a
ring. This patch introduces the possibility to register external handlers
replacing the ring.

The default behavior remains unchanged, but calling the new function
rte_mempool_set_handler() right after rte_mempool_create_empty() allows
changing the handler that will be used when populating the mempool.

v5 changes: rebased on top of the 35-patch mempool rework series.

Signed-off-by: David Hunt 
Signed-off-by: Olivier Matz 
---
 app/test/test_mempool_perf.c   |   1 -
 lib/librte_mempool/Makefile|   2 +
 lib/librte_mempool/rte_mempool.c   |  73 --
 lib/librte_mempool/rte_mempool.h   | 212 +
 lib/librte_mempool/rte_mempool_default.c   | 147 
 lib/librte_mempool/rte_mempool_handler.c   | 139 +++
 lib/librte_mempool/rte_mempool_version.map |   4 +
 7 files changed, 506 insertions(+), 72 deletions(-)
 create mode 100644 lib/librte_mempool/rte_mempool_default.c
 create mode 100644 lib/librte_mempool/rte_mempool_handler.c

diff --git a/app/test/test_mempool_perf.c b/app/test/test_mempool_perf.c
index cdc02a0..091c1df 100644
--- a/app/test/test_mempool_perf.c
+++ b/app/test/test_mempool_perf.c
@@ -161,7 +161,6 @@ per_lcore_mempool_test(__attribute__((unused)) void *arg)
   n_get_bulk);
if (unlikely(ret < 0)) {
rte_mempool_dump(stdout, mp);
-   rte_ring_dump(stdout, mp->ring);
/* in this case, objects are lost... */
return -1;
}
diff --git a/lib/librte_mempool/Makefile b/lib/librte_mempool/Makefile
index 43423e0..f19366e 100644
--- a/lib/librte_mempool/Makefile
+++ b/lib/librte_mempool/Makefile
@@ -42,6 +42,8 @@ LIBABIVER := 2

 # all source are stored in SRCS-y
 SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool.c
+SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool_handler.c
+SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool_default.c
 # install includes
 SYMLINK-$(CONFIG_RTE_LIBRTE_MEMPOOL)-include := rte_mempool.h

diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index 1ab6701..6ec2b3f 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -148,7 +148,7 @@ mempool_add_elem(struct rte_mempool *mp, void *obj, 
phys_addr_t physaddr)
 #endif

/* enqueue in ring */
-   rte_ring_sp_enqueue(mp->ring, obj);
+   rte_mempool_ext_put_bulk(mp, &obj, 1);
 }

 /* call obj_cb() for each mempool element */
@@ -300,40 +300,6 @@ rte_mempool_xmem_usage(__rte_unused void *vaddr, uint32_t 
elt_num,
return (size_t)paddr_idx << pg_shift;
 }

-/* create the internal ring */
-static int
-rte_mempool_ring_create(struct rte_mempool *mp)
-{
-   int rg_flags = 0, ret;
-   char rg_name[RTE_RING_NAMESIZE];
-   struct rte_ring *r;
-
-   ret = snprintf(rg_name, sizeof(rg_name),
-   RTE_MEMPOOL_MZ_FORMAT, mp->name);
-   if (ret < 0 || ret >= (int)sizeof(rg_name))
-   return -ENAMETOOLONG;
-
-   /* ring flags */
-   if (mp->flags & MEMPOOL_F_SP_PUT)
-   rg_flags |= RING_F_SP_ENQ;
-   if (mp->flags & MEMPOOL_F_SC_GET)
-   rg_flags |= RING_F_SC_DEQ;
-
-   /* Allocate the ring that will be used to store objects.
-* Ring functions will return appropriate errors if we are
-* running as a secondary process etc., so no checks made
-* in this function for that condition.
-*/
-   r = rte_ring_create(rg_name, rte_align32pow2(mp->size + 1),
-   mp->socket_id, rg_flags);
-   if (r == NULL)
-   return -rte_errno;
-
-   mp->ring = r;
-   mp->flags |= MEMPOOL_F_RING_CREATED;
-   return 0;
-}
-
 /* free a memchunk allocated with rte_memzone_reserve() */
 static void
 rte_mempool_memchunk_mz_free(__rte_unused struct rte_mempool_memhdr *memhdr,
@@ -351,7 +317,7 @@ rte_mempool_free_memchunks(struct rte_mempool *mp)
void *elt;

while (!STAILQ_EMPTY(&mp->elt_list)) {
-   rte_ring_sc_dequeue(mp->ring, &elt);
+   rte_mempool_ext_get_bulk(mp, &elt, 1);
(void)elt;
STAILQ_REMOVE_HEAD(&mp->elt_list, next);
mp->populated_size--;
@@ -380,15 +346,18 @@ rte_mempool_populate_phys(struct rte_mempool *mp, char 
*vaddr,
unsigned i = 0;
size_t off;
struct rte_mempool_memhdr *memhdr;
-   int ret;

/* create the internal ring if not already done */
if ((mp->flags & MEMPOOL_F_RING_CREATED) == 0) {
-   ret = rte_mempool_ring_create(mp);
-   if (ret < 0)
-   return ret;
+   rte_errno = 0;
+   mp->pool =