[dpdk-dev] [PATCH v8 1/3] mempool: support external mempool operations

2016-06-10 Thread Jerin Jacob
On Fri, Jun 10, 2016 at 09:29:44AM +0200, Olivier Matz wrote:
> Hi,
> 
> On 06/09/2016 03:09 PM, Jan Viktorin wrote:
> >>> My suggestion is to have an additional flag,
> >>> 'MEMPOOL_F_PKT_ALLOC', which, if specified, would:
> >>> 
> >>> ... #define MEMPOOL_F_SC_GET 0x0008 #define
> >>> MEMPOOL_F_PKT_ALLOC 0x0010 ...
> >>> 
> >>> in rte_mempool_create_empty: ... after checking the other
> >>> MEMPOOL_F_* flags...
> >>> 
> >>> if (flags & MEMPOOL_F_PKT_ALLOC) rte_mempool_set_ops_byname(mp,
> >>> RTE_MBUF_DEFAULT_MEMPOOL_OPS)
> >>> 
> >>> And removing the redundant call to rte_mempool_set_ops_byname()
> >>> in rte_pktmbuf_pool_create().
> >>> 
> >>> Thereafter, rte_pktmbuf_pool_create can be changed to:
> >>> 
> >>> ... mp = rte_mempool_create_empty(name, n, elt_size, cache_size,
> >>> -        sizeof(struct rte_pktmbuf_pool_private), socket_id, 0);
> >>> +        sizeof(struct rte_pktmbuf_pool_private), socket_id,
> >>> +        MEMPOOL_F_PKT_ALLOC);
> >>> if (mp == NULL) return NULL;
> >> 
> >> Yes, this would simplify somewhat the creation of a pktmbuf pool,
> >> in that it replaces the rte_mempool_set_ops_byname with a flag bit.
> >> However, I'm not sure we want to introduce a third method of
> >> creating a mempool to the developers. If we introduced this, we
> >> would then have:
> >> 1. rte_pktmbuf_pool_create()
> >> 2. rte_mempool_create_empty() with MEMPOOL_F_PKT_ALLOC set (which
> >>    would use the configured custom handler)
> >> 3. rte_mempool_create_empty() with MEMPOOL_F_PKT_ALLOC __not__ set,
> >>    followed by a call to rte_mempool_set_ops_byname() (would allow
> >>    several different custom handlers to be used in one application)
> >> 
> >> Does anyone else have an opinion on this? Olivier, Jerin, Jan?
> > 
> > I am quite careful about this topic as I don't feel very involved
> > in all the use cases. My opinion is that the _new API_ should be
> > able to cover all cases and the _old API_ should be backwards
> > compatible, however, built on top of the _new API_.
> > 
> > I.e. I think the flags MEMPOOL_F_SP_PUT and MEMPOOL_F_SC_GET (relics
> > of the old API) should be accepted by the old API ONLY. The
> > rte_mempool_create_empty should not process them.
> 
> The rte_mempool_create_empty() function already processes these flags
> (SC_GET, SP_PUT) as of today.
> 
> > Similarly, for a potential MEMPOOL_F_PKT_ALLOC, I would not pollute
> > rte_mempool_create_empty with this anymore.
> 
> +1
> 
> I think we should stop adding flags. Flags are preferred for independent
> features. Here, what would be the behavior with MEMPOOL_F_PKT_ALLOC +
> MEMPOOL_F_SP_PUT?

+1

MEMPOOL_F_PKT_ALLOC was introduced only so that legacy applications can
work with the external pool manager. If we are not taking that path (and
expect applications to change), then IMO we can have a proper mempool
create API to accommodate the external pool, and deprecate
rte_mempool_create/rte_mempool_xmem_create,

like:
1) Remove the 10-to-15-argument pool create and make it a structure (more
forward looking)
2) Remove flags
3) Have the same API for external and internal mempool create and
differentiate the handler through its "name"; NULL can be the default
mempool handler (MPMC) (see the sketch below)
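
For illustration, a minimal sketch of what such a struct-based create API
could look like (all names below are hypothetical, not part of this
patchset):

    #include <rte_mempool.h>

    /* Hypothetical parameter block replacing the long argument list. */
    struct rte_mempool_params {
            const char *name;
            unsigned int n;          /* number of elements */
            unsigned int elt_size;   /* size of one element */
            unsigned int cache_size; /* per-lcore cache size */
            unsigned int priv_size;  /* private data size */
            int socket_id;
            const char *ops_name;    /* NULL selects the default MP/MC handler */
    };

    /* Hypothetical single entry point for internal and external pools. */
    struct rte_mempool *
    rte_mempool_create_params(const struct rte_mempool_params *p);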


> 
> Another reason to not add this flag is the rte_mempool library
> should not be aware of mbufs. The mbuf pools rely on mempools, but
> not the contrary.
> 


[dpdk-dev] [PATCH v8 1/3] mempool: support external mempool operations

2016-06-10 Thread Shreyansh Jain
Hi Olivier,

> -Original Message-
> From: Olivier Matz [mailto:olivier.matz at 6wind.com]
> Sent: Friday, June 10, 2016 1:00 PM
> To: Jan Viktorin ; Hunt, David
> 
> Cc: Shreyansh Jain ; dev at dpdk.org;
> jerin.jacob at caviumnetworks.com
> Subject: Re: [dpdk-dev] [PATCH v8 1/3] mempool: support external mempool
> operations
> 
> Hi,
> 
> On 06/09/2016 03:09 PM, Jan Viktorin wrote:
> >>> My suggestion is to have an additional flag,
> >>> 'MEMPOOL_F_PKT_ALLOC', which, if specified, would:
> >>>
> >>> ... #define MEMPOOL_F_SC_GET 0x0008 #define
> >>> MEMPOOL_F_PKT_ALLOC 0x0010 ...
> >>>
> >>> in rte_mempool_create_empty: ... after checking the other
> >>> MEMPOOL_F_* flags...
> >>>
> >>> if (flags & MEMPOOL_F_PKT_ALLOC) rte_mempool_set_ops_byname(mp,
> >>> RTE_MBUF_DEFAULT_MEMPOOL_OPS)
> >>>
> >>> And removing the redundant call to rte_mempool_set_ops_byname()
> >>> in rte_pktmbuf_pool_create().
> >>>
> >>> Thereafter, rte_pktmbuf_pool_create can be changed to:
> >>>
> >>> ... mp = rte_mempool_create_empty(name, n, elt_size, cache_size,
> >>> -        sizeof(struct rte_pktmbuf_pool_private), socket_id, 0);
> >>> +        sizeof(struct rte_pktmbuf_pool_private), socket_id,
> >>> +        MEMPOOL_F_PKT_ALLOC);
> >>> if (mp == NULL) return NULL;
> >>
> >> Yes, this would simplify somewhat the creation of a pktmbuf pool,
> >> in that it replaces the rte_mempool_set_ops_byname with a flag bit.
> >> However, I'm not sure we want to introduce a third method of
> >> creating a mempool to the developers. If we introduced this, we
> >> would then have:
> >> 1. rte_pktmbuf_pool_create()
> >> 2. rte_mempool_create_empty() with MEMPOOL_F_PKT_ALLOC set (which
> >>    would use the configured custom handler)
> >> 3. rte_mempool_create_empty() with MEMPOOL_F_PKT_ALLOC __not__ set,
> >>    followed by a call to rte_mempool_set_ops_byname() (would allow
> >>    several different custom handlers to be used in one application)
> >>
> >> Does anyone else have an opinion on this? Olivier, Jerin, Jan?
> >
> > I am quite careful about this topic as I don't feel very involved
> > in all the use cases. My opinion is that the _new API_ should be
> > able to cover all cases and the _old API_ should be backwards
> > compatible, however, built on top of the _new API_.
> >
> > I.e. I think the flags MEMPOOL_F_SP_PUT and MEMPOOL_F_SC_GET (relics
> > of the old API) should be accepted by the old API ONLY. The
> > rte_mempool_create_empty should not process them.
> 
> The rte_mempool_create_empty() function already processes these flags
> (SC_GET, SP_PUT) as of today.
> 
> > Similarly, for a potential MEMPOOL_F_PKT_ALLOC, I would not pollute
> > rte_mempool_create_empty with this anymore.
> 
> +1
> 
> I think we should stop adding flags. Flags are preferred for independent
> features. Here, what would be the behavior with MEMPOOL_F_PKT_ALLOC +
> MEMPOOL_F_SP_PUT?
> 
> Another reason to not add this flag is the rte_mempool library
> should not be aware of mbufs. The mbuf pools rely on mempools, but
> not the contrary.

Agree - mempool should be agnostic of the mbufs using it.
But, mempool should be aware of the allocator it is using, in my opinion.

And, agree with your argument of "MEMPOOL_F_PKT_ALLOC + MEMPOOL_F_SP_PUT" -
it is bad semantics.

> 
> 
> > In overall we would get exactly 2 approaches (and not more):
> >
> > * using rte_mempool_create with flags calling the
> > rte_mempool_create_empty and rte_mempool_set_ops_byname internally
> > (so this layer can be marked as deprecated and removed in the
> > future)
> 
> Agree. This was one of the objectives of my mempool rework patchset:
> provide a more flexible API, and avoid functions with 10 to 15
> arguments.
> 
> > * using rte_mempool_create_empty + rte_mempool_set_ops_byname -
> > allowing any customizations but with the necessity to change the
> > applications (new preferred API)
> 
> Yes.
> And if required, maybe a third API is possible in case of mbuf pools.
> Indeed, the applications are encouraged to use rte_pktmbuf_pool_create()
> to create a pool of mbuf instead of mempool API. If an application
> wants to select specific ops for it, we could add:
> 
>   rte_pktmbuf_pool_create_(..., name)
> 
> instead of using the mempool API.
> I think this is what Shreyansh suggests when he says:
> 
>   It sounds fine if calls to rte_mempool_* can select an external
>   handler *optionally* - but, if we pass it as command line, it would
>   be binding (at least, semantically) for rte_pktmbuf_* calls as well.
>   Isn't it?

[dpdk-dev] [PATCH v8 1/3] mempool: support external mempool operations

2016-06-10 Thread Shreyansh Jain
Hi David,

> -Original Message-
> From: Hunt, David [mailto:david.hunt at intel.com]
> Sent: Friday, June 10, 2016 3:05 PM
> To: Olivier Matz ; Jan Viktorin
> 
> Cc: Shreyansh Jain ; dev at dpdk.org;
> jerin.jacob at caviumnetworks.com
> Subject: Re: [dpdk-dev] [PATCH v8 1/3] mempool: support external mempool
> operations
> 
> Hi all,
> 
> On 10/6/2016 8:29 AM, Olivier Matz wrote:
> > Hi,
> >

[...snip...]
> >
> >
> >> So, the old applications can stay as they are (OK, with a possible
> >> new flag MEMPOOL_F_PKT_ALLOC) and the new one can do the same but you
> >> have to set the ops explicitly.
> >>
> >> The more different ways of using those APIs we have, the greater hell
> >> we have to maintain.
> > I'm really not in favor of a MEMPOOL_F_PKT_ALLOC flag in mempool api.
> 
> I would tend to agree, even though yesterday I proposed making that
> change. However, thinking about it some more, I'm not totally happy with
> the MEMPOOL_F_PKT_ALLOC addition. It adds yet another method of creating
> a mempool, and I think that may introduce some confusion for some
> developers.
> 
> I also like the suggestion of rte_pktmbuf_pool_create_(..., name)
> suggested above; I was thinking the same myself last night, and I would
> prefer that rather than adding the MEMPOOL_F_PKT_ALLOC flag. Developers
> can add that function into their apps as a wrapper to
> rte_mempool_create_empty->rte_mempool_set_ops_byname should they need
> to have more than one pktmbuf allocator. Otherwise they can use the one
> that makes use of the RTE_MBUF_DEFAULT_MEMPOOL_OPS config setting.

+1

> 
> 
> > I think David's patch is already a good step forward. Let's do it
> > step by step. Next step is maybe to update some applications (at least
> > testpmd) to select a new pool handler dynamically.
> >
> > Regards,
> > Olivier

Thanks.

-
Shreyansh
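
For concreteness, a minimal sketch of the application-level wrapper
discussed above, assuming the two-argument rte_mempool_set_ops_byname()
from this patchset and the populate/free helpers from the mempool rework
(the function name, sizes and error handling are illustrative only;
alignment checks are omitted):

    #include <stdint.h>
    #include <string.h>
    #include <rte_mbuf.h>
    #include <rte_mempool.h>

    /* Hypothetical wrapper: like rte_pktmbuf_pool_create(), but taking
     * the name of the mempool ops to use (e.g. a hardware handler). */
    static struct rte_mempool *
    app_pktmbuf_pool_create_ops(const char *name, unsigned int n,
            unsigned int cache_size, uint16_t priv_size,
            uint16_t data_room_size, int socket_id, const char *ops_name)
    {
            struct rte_pktmbuf_pool_private priv;
            struct rte_mempool *mp;

            mp = rte_mempool_create_empty(name, n,
                    sizeof(struct rte_mbuf) + priv_size + data_room_size,
                    cache_size, sizeof(struct rte_pktmbuf_pool_private),
                    socket_id, 0);
            if (mp == NULL)
                    return NULL;

            /* Select the handler by name instead of the built-in default. */
            if (rte_mempool_set_ops_byname(mp, ops_name) < 0) {
                    rte_mempool_free(mp);
                    return NULL;
            }

            memset(&priv, 0, sizeof(priv));
            priv.mbuf_data_room_size = data_room_size;
            priv.mbuf_priv_size = priv_size;
            rte_pktmbuf_pool_init(mp, &priv);

            if (rte_mempool_populate_default(mp) < 0) {
                    rte_mempool_free(mp);
                    return NULL;
            }
            rte_mempool_obj_iter(mp, rte_pktmbuf_init, NULL);
            return mp;
    }

An application needing several pktmbuf allocators would simply call this
once per pool with a different ops_name.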



[dpdk-dev] [PATCH v8 1/3] mempool: support external mempool operations

2016-06-10 Thread Jan Viktorin
On Fri, 10 Jun 2016 09:29:44 +0200
Olivier Matz  wrote:

> Hi,
> 
> On 06/09/2016 03:09 PM, Jan Viktorin wrote:
> >>> My suggestion is to have an additional flag,
> >>> 'MEMPOOL_F_PKT_ALLOC', which, if specified, would:
> >>> 
> >>> ... #define MEMPOOL_F_SC_GET 0x0008 #define
> >>> MEMPOOL_F_PKT_ALLOC 0x0010 ...
> >>> 
> >>> in rte_mempool_create_empty: ... after checking the other
> >>> MEMPOOL_F_* flags...
> >>> 
> >>> if (flags & MEMPOOL_F_PKT_ALLOC) rte_mempool_set_ops_byname(mp,
> >>> RTE_MBUF_DEFAULT_MEMPOOL_OPS)
> >>> 
> >>> And removing the redundant call to rte_mempool_set_ops_byname()
> >>> in rte_pktmbuf_pool_create().
> >>> 
> >>> Thereafter, rte_pktmbuf_pool_create can be changed to:
> >>> 
> >>> ... mp = rte_mempool_create_empty(name, n, elt_size, cache_size,
> >>> -        sizeof(struct rte_pktmbuf_pool_private), socket_id, 0);
> >>> +        sizeof(struct rte_pktmbuf_pool_private), socket_id,
> >>> +        MEMPOOL_F_PKT_ALLOC);
> >>> if (mp == NULL) return NULL;
> >> 
> >> Yes, this would simplify somewhat the creation of a pktmbuf pool,
> >> in that it replaces the rte_mempool_set_ops_byname with a flag bit.
> >> However, I'm not sure we want to introduce a third method of
> >> creating a mempool to the developers. If we introduced this, we
> >> would then have:
> >> 1. rte_pktmbuf_pool_create()
> >> 2. rte_mempool_create_empty() with MEMPOOL_F_PKT_ALLOC set (which
> >>    would use the configured custom handler)
> >> 3. rte_mempool_create_empty() with MEMPOOL_F_PKT_ALLOC __not__ set,
> >>    followed by a call to rte_mempool_set_ops_byname() (would allow
> >>    several different custom handlers to be used in one application)
> >> 
> >> Does anyone else have an opinion on this? Olivier, Jerin, Jan?
> > 
> > I am quite careful about this topic as I don't feel very involved
> > in all the use cases. My opinion is that the _new API_ should be
> > able to cover all cases and the _old API_ should be backwards
> > compatible, however, built on top of the _new API_.
> > 
> > I.e. I think the flags MEMPOOL_F_SP_PUT and MEMPOOL_F_SC_GET (relics
> > of the old API) should be accepted by the old API ONLY. The
> > rte_mempool_create_empty should not process them.
> 
> The rte_mempool_create_empty() function already processes these flags
> (SC_GET, SP_PUT) as of today.

Yes, I consider it quite strange. When thinking more about the mempool API,
I'd move the flags processing to the rte_mempool_create. Semantically, it
makes more sense as the "empty" clearly describes that it is empty. But with
the flags, it is not... What is the reason to have those flags there?

> 
> > Similarly, for a potential MEMPOOL_F_PKT_ALLOC, I would not pollute
> > rte_mempool_create_empty with this anymore.
> 
> +1
> 
> I think we should stop adding flags. Flags are preferred for independent
> features. Here, what would be the behavior with MEMPOOL_F_PKT_ALLOC +
> MEMPOOL_F_SP_PUT?

+1 :)

> 
> Another reason to not add this flag is the rte_mempool library
> should not be aware of mbufs. The mbuf pools rely on mempools, but
> not the contrary.
> 
> 
> > In overall we would get exactly 2 approaches (and not more):

Well, now I can see that I've just written the same thing but with my own
words :).

> > 
> > * using rte_mempool_create with flags calling the
> > rte_mempool_create_empty and rte_mempool_set_ops_byname internally
> > (so this layer can be marked as deprecated and removed in the
> > future)
> 
> Agree. This was one of the objectives of my mempool rework patchset:
> provide a more flexible API, and avoid functions with 10 to 15
> arguments.
> 
> > * using rte_mempool_create_empty + rte_mempool_set_ops_byname -
> > allowing any customizations but with the necessity to change the
> > applications (new preferred API)  
> 
> Yes.
> And if required, maybe a third API is possible in case of mbuf pools.

I don't count this. It's an upper layer, but yes.

> Indeed, the applications are encouraged to use rte_pktmbuf_pool_create()
> to create a pool of mbuf instead of mempool API. If an application
> wants to select specific ops for it, we could add:
> 
>   rte_pktmbuf_pool_create_(..., name)

Seems like a good idea.

> 
> instead of using the mempool API.
> I think this is what Shreyansh suggests when he says:
> 
>   It sounds fine if calls to rte_mempool_* can select an external
>   handler *optionally* - but, if we pass it as command line, it would
>   be binding (at least, semantically) for rte_pktmbuf_* calls as well.
>   Isn't it?

I think the question here is whether the processing of such an optional
command line specification is up to the application or up to the DPDK
core. If we leave it in applications, it's just a matter of API.

> 
> > So, the old applications can stay as they are (OK, with a possible
> > new flag MEMPOOL_F_PKT_ALLOC) and the new one can do the same but you
> > have to set the ops explicitly.
> > 
> > The more different ways of using those APIs we have, the greater hell
> > we have to maintain.  
> 
> I'm really not in favor of a MEMPOOL_F_PKT_ALLOC flag in mempool api.

[dpdk-dev] [PATCH v8 1/3] mempool: support external mempool operations

2016-06-10 Thread Hunt, David
Hi all,

On 10/6/2016 8:29 AM, Olivier Matz wrote:
> Hi,
>
> On 06/09/2016 03:09 PM, Jan Viktorin wrote:
 My suggestion is to have an additional flag,
 'MEMPOOL_F_PKT_ALLOC', which, if specified, would:

 ... #define MEMPOOL_F_SC_GET 0x0008 #define
 MEMPOOL_F_PKT_ALLOC 0x0010 ...

 in rte_mempool_create_empty: ... after checking the other
 MEMPOOL_F_* flags...

 if (flags & MEMPOOL_F_PKT_ALLOC) rte_mempool_set_ops_byname(mp,
 RTE_MBUF_DEFAULT_MEMPOOL_OPS)

 And removing the redundant call to rte_mempool_set_ops_byname()
 in rte_pktmbuf_pool_create().

 Thereafter, rte_pktmbuf_pool_create can be changed to:

 ... mp = rte_mempool_create_empty(name, n, elt_size, cache_size,
 -        sizeof(struct rte_pktmbuf_pool_private), socket_id, 0);
 +        sizeof(struct rte_pktmbuf_pool_private), socket_id,
 +        MEMPOOL_F_PKT_ALLOC);
 if (mp == NULL) return NULL;
>>> Yes, this would simplify somewhat the creation of a pktmbuf pool,
>>> in that it replaces the rte_mempool_set_ops_byname with a flag bit.
>>> However, I'm not sure we want to introduce a third method of
>>> creating a mempool to the developers. If we introduced this, we
>>> would then have:
>>> 1. rte_pktmbuf_pool_create()
>>> 2. rte_mempool_create_empty() with MEMPOOL_F_PKT_ALLOC set (which
>>>    would use the configured custom handler)
>>> 3. rte_mempool_create_empty() with MEMPOOL_F_PKT_ALLOC __not__ set,
>>>    followed by a call to rte_mempool_set_ops_byname() (would allow
>>>    several different custom handlers to be used in one application)
>>>
>>> Does anyone else have an opinion on this? Olivier, Jerin, Jan?
>> I am quite careful about this topic as I don't feel very involved
>> in all the use cases. My opinion is that the _new API_ should be
>> able to cover all cases and the _old API_ should be backwards
>> compatible, however, built on top of the _new API_.
>>
>> I.e. I think the flags MEMPOOL_F_SP_PUT and MEMPOOL_F_SC_GET (relics
>> of the old API) should be accepted by the old API ONLY. The
>> rte_mempool_create_empty should not process them.
> The rte_mempool_create_empty() function already processes these flags
> (SC_GET, SP_PUT) as of today.
>
>> Similarly, for a potential MEMPOOL_F_PKT_ALLOC, I would not pollute
>> rte_mempool_create_empty with this anymore.
> +1
>
> I think we should stop adding flags. Flags are preferred for independent
> features. Here, what would be the behavior with MEMPOOL_F_PKT_ALLOC +
> MEMPOOL_F_SP_PUT?
>
> Another reason to not add this flag is the rte_mempool library
> should not be aware of mbufs. The mbuf pools rely on mempools, but
> not the contrary.
>
>
>> In overall we would get exactly 2 approaches (and not more):
>>
>> * using rte_mempool_create with flags calling the
>> rte_mempool_create_empty and rte_mempool_set_ops_byname internally
>> (so this layer can be marked as deprecated and removed in the
>> future)
> Agree. This was one of the objectives of my mempool rework patchset:
> provide a more flexible API, and avoid functions with 10 to 15
> arguments.
>
>> * using rte_mempool_create_empty + rte_mempool_set_ops_byname -
>> allowing any customizations but with the necessity to change the
>> applications (new preferred API)
> Yes.
> And if required, maybe a third API is possible in case of mbuf pools.
> Indeed, the applications are encouraged to use rte_pktmbuf_pool_create()
> to create a pool of mbuf instead of mempool API. If an application
> wants to select specific ops for it, we could add:
>
>rte_pktmbuf_pool_create_(..., name)
>
> instead of using the mempool API.
> I think this is what Shreyansh suggests when he says:
>
>It sounds fine if calls to rte_mempool_* can select an external
>handler *optionally* - but, if we pass it as command line, it would
>be binding (at least, semantically) for rte_pktmbuf_* calls as well.
>Isn't it?
>
>
>> So, the old applications can stay as they are (OK, with a possible
>> new flag MEMPOOL_F_PKT_ALLOC) and the new one can do the same but you
>> have to set the ops explicitly.
>>
>> The more different ways of using those APIs we have, the greater hell
>> we have to maintain.
> I'm really not in favor of a MEMPOOL_F_PKT_ALLOC flag in mempool api.

I would tend to agree, even though yesterday I proposed making that
change. However, thinking about it some more, I'm not totally happy with
the MEMPOOL_F_PKT_ALLOC addition. It adds yet another method of creating
a mempool, and I think that may introduce some confusion for some
developers.

I also like the suggestion of rte_pktmbuf_pool_create_(..., name)
suggested above; I was thinking the same myself last night, and I would
prefer that rather than adding the MEMPOOL_F_PKT_ALLOC flag. Developers
can add that function into their apps as a wrapper to
rte_mempool_create_empty->rte_mempool_set_ops_byname should they need
to have more than one pktmbuf allocator. Otherwise they can use the one
that makes use of the RTE_MBUF_DEFAULT_MEMPOOL_OPS config setting.

[dpdk-dev] [PATCH v8 1/3] mempool: support external mempool operations

2016-06-10 Thread Olivier Matz
Hi,

On 06/09/2016 03:09 PM, Jan Viktorin wrote:
>>> My suggestion is to have an additional flag,
>>> 'MEMPOOL_F_PKT_ALLOC', which, if specified, would:
>>> 
>>> ... #define MEMPOOL_F_SC_GET 0x0008 #define
>>> MEMPOOL_F_PKT_ALLOC 0x0010 ...
>>> 
>>> in rte_mempool_create_empty: ... after checking the other
>>> MEMPOOL_F_* flags...
>>> 
>>> if (flags & MEMPOOL_F_PKT_ALLOC) rte_mempool_set_ops_byname(mp,
>>> RTE_MBUF_DEFAULT_MEMPOOL_OPS)
>>> 
>>> And removing the redundant call to rte_mempool_set_ops_byname()
>>> in rte_pktmbuf_pool_create().
>>> 
>>> Thereafter, rte_pktmbuf_pool_create can be changed to:
>>> 
>>> ... mp = rte_mempool_create_empty(name, n, elt_size, cache_size,
>>> -        sizeof(struct rte_pktmbuf_pool_private), socket_id, 0);
>>> +        sizeof(struct rte_pktmbuf_pool_private), socket_id,
>>> +        MEMPOOL_F_PKT_ALLOC);
>>> if (mp == NULL) return NULL;
>> 
>> Yes, this would simplify somewhat the creation of a pktmbuf pool,
>> in that it replaces the rte_mempool_set_ops_byname with a flag bit.
>> However, I'm not sure we want to introduce a third method of
>> creating a mempool to the developers. If we introduced this, we
>> would then have:
>> 1. rte_pktmbuf_pool_create()
>> 2. rte_mempool_create_empty() with MEMPOOL_F_PKT_ALLOC set (which
>>    would use the configured custom handler)
>> 3. rte_mempool_create_empty() with MEMPOOL_F_PKT_ALLOC __not__ set,
>>    followed by a call to rte_mempool_set_ops_byname() (would allow
>>    several different custom handlers to be used in one application)
>> 
>> Does anyone else have an opinion on this? Olivier, Jerin, Jan?
> 
> I am quite careful about this topic as I don't feel very involved
> in all the use cases. My opinion is that the _new API_ should be
> able to cover all cases and the _old API_ should be backwards
> compatible, however, built on top of the _new API_.
> 
> I.e. I think the flags MEMPOOL_F_SP_PUT and MEMPOOL_F_SC_GET (relics
> of the old API) should be accepted by the old API ONLY. The
> rte_mempool_create_empty should not process them.

The rte_mempool_create_empty() function already processes these flags
(SC_GET, SP_PUT) as of today.

> Similarly, for a potential MEMPOOL_F_PKT_ALLOC, I would not pollute
> rte_mempool_create_empty with this anymore.

+1

I think we should stop adding flags. Flags are preferred for independent
features. Here, what would be the behavior with MEMPOOL_F_PKT_ALLOC +
MEMPOOL_F_SP_PUT?

Another reason to not add this flag is the rte_mempool library
should not be aware of mbufs. The mbuf pools rely on mempools, but
not the contrary.


> In overall we would get exactly 2 approaches (and not more):
> 
> * using rte_mempool_create with flags calling the
> rte_mempool_create_empty and rte_mempool_set_ops_byname internally
> (so this layer can be marked as deprecated and removed in the
> future)

Agree. This was one of the objectives of my mempool rework patchset:
provide a more flexible API, and avoid functions with 10 to 15
arguments.

> * using rte_mempool_create_empty + rte_mempool_set_ops_byname -
> allowing any customizations but with the necessity to change the
> applications (new preferred API)

Yes.
And if required, maybe a third API is possible in case of mbuf pools.
Indeed, the applications are encouraged to use rte_pktmbuf_pool_create()
to create a pool of mbuf instead of mempool API. If an application
wants to select specific ops for it, we could add:

  rte_pktmbuf_pool_create_(..., name)

instead of using the mempool API.
I think this is what Shreyansh suggests when he says:

  It sounds fine if calls to rte_mempool_* can select an external
  handler *optionally* - but, if we pass it as command line, it would
  be binding (at least, semantically) for rte_pktmbuf_* calls as well.
  Isn't it?


> So, the old applications can stay as they are (OK, with a possible
> new flag MEMPOOL_F_PKT_ALLOC) and the new one can do the same but you
> have to set the ops explicitly.
> 
> The more different ways of using those APIs we have, the greater hell
> we have to maintain.

I'm really not in favor of a MEMPOOL_F_PKT_ALLOC flag in mempool api.

I think David's patch is already a good step forward. Let's do it
step by step. Next step is maybe to update some applications (at least
testpmd) to select a new pool handler dynamically.

Regards,
Olivier
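
As a sketch of the next step Olivier mentions (selecting a pool handler
dynamically, e.g. in testpmd), the selection itself can be very small.
This assumes the two-argument rte_mempool_set_ops_byname() from this
patchset; the idea of a --mempool-ops command-line option is only an
example, not something testpmd has today:

    #include <rte_mbuf.h>
    #include <rte_mempool.h>

    /* ops_name would come from a hypothetical --mempool-ops option;
     * NULL means "keep the compile-time default handler". */
    static int
    select_pool_handler(struct rte_mempool *mp, const char *ops_name)
    {
            if (ops_name == NULL)
                    ops_name = RTE_MBUF_DEFAULT_MEMPOOL_OPS;
            return rte_mempool_set_ops_byname(mp, ops_name);
    }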


[dpdk-dev] [PATCH v8 1/3] mempool: support external mempool operations

2016-06-09 Thread Jerin Jacob
On Thu, Jun 09, 2016 at 02:18:57PM +0100, Hunt, David wrote:
> 
> 
> > > > As I mentioned earlier, my take is not to create separate APIs for
> > > > external mempool handlers. In my view, it's the same, just a separate
> > > > mempool handler through function pointers.
> > > > 
> > > > To keep backward compatibility, I think we can extend the flags
> > > > in rte_mempool_create and have a single API for external/internal pool
> > > > creation (this makes it easy for existing applications too: just add a
> > > > mempool flag command line argument to existing applications to choose
> > > > the mempool handler)
> > > Maybe I am interpreting it wrong, but are you suggesting a single 
> > > mempool handler for all buffer/packet needs of an application (passed as 
> > > command line argument)?
> > > That would be inefficient especially for cases where the pool is backed 
> > > by hardware. The application wouldn't want its generic buffers to consume 
> > > hardware resources which would be better used for packets.
> > It may vary from platform to platform or with the particular use case. For 
> > instance, the HW external pool manager for generic buffers may scale better 
> > than a SW multi-producer/multi-consumer implementation when the number of 
> > cores > N (as no locking is involved in enqueue/dequeue; again, this 
> > depends on the specific HW implementation)
> > 
> > I thought there's no harm in selecting the external pool handlers
> > at the root level itself (rte_mempool_create), as by default it is
> > SW MP/MC and it's just an option to override if the application wants it.
> > 
> > Jerin
> > 
> 
> 
> So, how about we go with the following, based on Shreyansh's suggestion:
> 
> 1. Add in #define MEMPOOL_F_EMM_ALLOC 0x0010  (EMM for External Mempool
> Manager)
> 
> 2. Check this bit in rte_mempool_create() just before the other bits are
> checked (by the way, the flags check has been moved to rte_mempool_create(),
> as per an earlier patchset, but was inadvertently reverted)
> 
> /*
>  * First check to see if we use the config'd mempool handler.
>  * Then examine the combinations of SP/SC/MP/MC flags to
>  * set the correct index into the table of ops structs.
>  */
> if (flags & MEMPOOL_F_EMM_ALLOC)
>         rte_mempool_set_ops_byname(mp, RTE_MBUF_DEFAULT_MEMPOOL_OPS);
> else if ((flags & MEMPOOL_F_SP_PUT) && (flags & MEMPOOL_F_SC_GET))
>         rte_mempool_set_ops_byname(mp, "ring_sp_sc");
> else if (flags & MEMPOOL_F_SP_PUT)
>         rte_mempool_set_ops_byname(mp, "ring_sp_mc");
> else if (flags & MEMPOOL_F_SC_GET)
>         rte_mempool_set_ops_byname(mp, "ring_mp_sc");
> else
>         rte_mempool_set_ops_byname(mp, "ring_mp_mc");
> 
> 3. Modify rte_pktmbuf_pool_create to pass the bit to rte_mempool_create
> 
> -sizeof(struct rte_pktmbuf_pool_private), socket_id, 0);
> +sizeof(struct rte_pktmbuf_pool_private), socket_id,
> +MEMPOOL_F_EMM_ALLOC);
> 
> 
> This will allow legacy apps to use one external handler (as defined by
> RTE_MBUF_DEFAULT_MEMPOOL_OPS) by adding the MEMPOOL_F_EMM_ALLOC bit to their
> flags in the call to rte_mempool_create().
> 
> Of course, if an app wants to use more than one external handler, they can
> call create_empty and set_ops_byname() for each mempool they create.

+1

Since rte_pktmbuf_pool_create does not take flags, I think this is the only
option left for legacy apps.
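
To make that concrete, the legacy opt-in discussed above would be a
one-line change in an existing application, for example (sizes are
illustrative, and MEMPOOL_F_EMM_ALLOC is only proposed above, not merged):

    #include <rte_lcore.h>
    #include <rte_mbuf.h>
    #include <rte_mempool.h>

    #define NB_MBUF   8192  /* illustrative pool size */
    #define MBUF_SIZE (2048 + sizeof(struct rte_mbuf) + RTE_PKTMBUF_HEADROOM)

    static struct rte_mempool *
    legacy_pool_create(void)
    {
            /* Unchanged legacy call, except for OR-ing in the proposed
             * flag so the configured external handler is selected. */
            return rte_mempool_create("MBUF_POOL", NB_MBUF, MBUF_SIZE, 256,
                    sizeof(struct rte_pktmbuf_pool_private),
                    rte_pktmbuf_pool_init, NULL, rte_pktmbuf_init, NULL,
                    rte_socket_id(), MEMPOOL_F_EMM_ALLOC);
    }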

> 
> Regards,
> Dave.


[dpdk-dev] [PATCH v8 1/3] mempool: support external mempool operations

2016-06-09 Thread Jerin Jacob
On Thu, Jun 09, 2016 at 11:49:44AM +, Shreyansh Jain wrote:
> Hi Jerin,

Hi Shreyansh,

> 
> > > Yes, this would simplify somewhat the creation of a pktmbuf pool, in
> > > that it replaces the rte_mempool_set_ops_byname with a flag bit.
> > > However, I'm not sure we want to introduce a third method of creating
> > > a mempool to the developers. If we introduced this, we would then have:
> > > 1. rte_pktmbuf_pool_create()
> > > 2. rte_mempool_create_empty() with MEMPOOL_F_PKT_ALLOC set (which would
> > >    use the configured custom handler)
> > > 3. rte_mempool_create_empty() with MEMPOOL_F_PKT_ALLOC __not__ set,
> > >    followed by a call to rte_mempool_set_ops_byname() (would allow
> > >    several different custom handlers to be used in one application)
> > >
> > > Does anyone else have an opinion on this? Olivier, Jerin, Jan?
> > 
> > As I mentioned earlier, my take is not to create separate APIs for
> > external mempool handlers. In my view, it's the same, just a separate
> > mempool handler through function pointers.
> > 
> > To keep backward compatibility, I think we can extend the flags
> > in rte_mempool_create and have a single API for external/internal pool
> > creation (this makes it easy for existing applications too: just add a
> > mempool flag command line argument to existing applications to choose
> > the mempool handler)
> 
> Maybe I am interpreting it wrong, but are you suggesting a single mempool 
> handler for all buffer/packet needs of an application (passed as command line 
> argument)?
> That would be inefficient especially for cases where the pool is backed by 
> hardware. The application wouldn't want its generic buffers to consume 
> hardware resources which would be better used for packets.

It may vary from platform to platform or with the particular use case. For
instance, the HW external pool manager for generic buffers may scale better
than a SW multi-producer/multi-consumer implementation when the number of
cores > N (as no locking is involved in enqueue/dequeue; again, this depends
on the specific HW implementation)

I thought there's no harm in selecting the external pool handlers
at the root level itself (rte_mempool_create), as by default it is
SW MP/MC and it's just an option to override if the application wants it.

Jerin


> 
> I was hoping that the external mempool handler would help segregate such 
> use-cases and allow applications to tap into accelerated packet processors.
> 
> Do correct me if my understanding is wrong.
> 
> > 
> > Jerin
> > 
> > >
> > > Regards,
> > > Dave.
> > >
> > >
> 
> -
> Shreyansh


[dpdk-dev] [PATCH v8 1/3] mempool: support external mempool operations

2016-06-09 Thread Jerin Jacob
On Thu, Jun 09, 2016 at 10:39:46AM +0100, Hunt, David wrote:
> Hi Shreyansh,
> 
> On 8/6/2016 2:48 PM, Shreyansh Jain wrote:
> > Hi David,
> > 
> > Thanks for explanation. I have some comments inline...
> > 
> > > -Original Message-
> > > From: Hunt, David [mailto:david.hunt at intel.com]
> > > Sent: Tuesday, June 07, 2016 2:56 PM
> > > To: Shreyansh Jain ; dev at dpdk.org
> > > Cc: olivier.matz at 6wind.com; viktorin at rehivetech.com;
> > > jerin.jacob at caviumnetworks.com
> > > Subject: Re: [dpdk-dev] [PATCH v8 1/3] mempool: support external mempool
> > > operations
> > > 
> > > Hi Shreyansh,
> > > 
> > > On 6/6/2016 3:38 PM, Shreyansh Jain wrote:
> > > > Hi,
> > > > 
> > > > (Apologies for overly-eager email sent on this thread earlier. Will
> > > > be more careful in future).
> > > > This is more of a question/clarification than a comment. (And I have
> > > > taken only some snippets from original mail to keep it cleaner)
> > > > 
> > > > > +MEMPOOL_REGISTER_OPS(ops_mp_mc);
> > > > > +MEMPOOL_REGISTER_OPS(ops_sp_sc);
> > > > > +MEMPOOL_REGISTER_OPS(ops_mp_sc);
> > > > > +MEMPOOL_REGISTER_OPS(ops_sp_mc);
> > > > 
> > > > 
> > > > From the above what I understand is that multiple packet pool
> > > > handlers can be created.
> > > > I have a use-case where application has multiple pools but only the
> > > > packet pool is hardware backed. Using the hardware for general buffer
> > > > requirements would prove costly.
> > > > From what I understand from the patch, selection of the pool is based
> > > > on the flags below.
> > > 
> > > The flags are only used to select one of the default handlers for
> > > backward compatibility through the rte_mempool_create call. If you
> > > wish to use a mempool handler that is not one of the defaults, (i.e.
> > > a new hardware handler), you would use rte_mempool_create_empty
> > > followed by the rte_mempool_set_ops_byname call.
> > > So, for the external handlers, you create an empty mempool, then set
> > > the operations (ops) for that particular mempool.
> > I am concerned about the existing applications (for example, l3fwd).
> > Explicit calls to the 'rte_mempool_create_empty->rte_mempool_set_ops_byname' 
> > model would require modifications to these applications.
> > Ideally, without any modifications, these applications should be able to 
> > use packet pools (backed by hardware) and buffer pools (backed by 
> > ring/others) - transparently.
> > 
> > If I go by your suggestions, what I understand is, doing the above without 
> > modification to applications would be equivalent to:
> > 
> >struct rte_mempool_ops custom_hw_allocator = {...}
> > 
> > thereafter, in config/common_base:
> > 
> >CONFIG_RTE_DEFAULT_MEMPOOL_OPS="custom_hw_allocator"
> > 
> > calls to rte_pktmbuf_pool_create would use the new allocator.
> 
> Yes, correct. But only for calls to rte_pktmbuf_pool_create(). Calls to
> rte_mempool_create will continue to use the default handlers (ring based).
> > But, another problem arises here.
> > 
> > There are two distinct paths for allocations of a memory pool:
> > 1. A 'pkt' pool:
> > rte_pktmbuf_pool_create
> >   \- rte_mempool_create_empty
> >   |   \- rte_mempool_set_ops_byname(..ring_mp_mc..)
> >   |
> >   `- rte_mempool_set_ops_byname
> > (...RTE_MBUF_DEFAULT_MEMPOOL_OPS..)
> > /* Override default 'ring_mp_mc' of
> >  * rte_mempool_create */
> > 
> > 2. Through generic mempool create API
> > rte_mempool_create
> >   \- rte_mempool_create_empty
> > (passing pktmbuf and pool constructors)
> > I found various instances in example applications where 
> > rte_mempool_create() is being called directly for packet pools - bypassing 
> > the more semantically correct call to rte_pktmbuf_* for packet pools.
> > 
> > In (2) control path, RTE_MBUF_DEFAULT_MEMPOOL_OPS wouldn't be able to 
> > replace custom handler operations for packet buffer allocations.
> > 
> > From a performance point-of-view, applications should be able to select
> > between packet pools and non-packet pools.
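
For reference, a sketch of what defining and registering the
custom_hw_allocator ops mentioned above could look like. The hw_*
callbacks are hypothetical placeholders for real driver code; the
signatures follow this patchset (v8), where alloc still returns void *:

    #include <errno.h>
    #include <rte_mempool.h>

    static void *hw_pool_alloc(struct rte_mempool *mp)
    { (void)mp; return NULL; }          /* would attach the HW pool */

    static void hw_pool_free(struct rte_mempool *mp)
    { (void)mp; }                       /* would release the HW pool */

    static int hw_pool_enqueue(struct rte_mempool *mp,
            void * const *obj_table, unsigned n)
    { (void)mp; (void)obj_table; (void)n; return 0; }   /* buffers to HW */

    static int hw_pool_dequeue(struct rte_mempool *mp,
            void **obj_table, unsigned n)
    { (void)mp; (void)obj_table; (void)n; return -ENOBUFS; } /* from HW */

    static unsigned hw_pool_get_count(const struct rte_mempool *mp)
    { (void)mp; return 0; }             /* would query the HW fill level */

    static const struct rte_mempool_ops custom_hw_allocator = {
            .name = "custom_hw_allocator",
            .alloc = hw_pool_alloc,
            .free = hw_pool_free,
            .enqueue = hw_pool_enqueue,
            .dequeue = hw_pool_dequeue,
            .get_count = hw_pool_get_count,
    };

    MEMPOOL_REGISTER_OPS(custom_hw_allocator);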

[dpdk-dev] [PATCH v8 1/3] mempool: support external mempool operations

2016-06-09 Thread Jan Viktorin
On Thu, 9 Jun 2016 10:39:46 +0100
"Hunt, David"  wrote:

> Hi Shreyansh,
> 
> On 8/6/2016 2:48 PM, Shreyansh Jain wrote:
> > Hi David,
> >
> > Thanks for explanation. I have some comments inline...
> >  
> >> -Original Message-
> >> From: Hunt, David [mailto:david.hunt at intel.com]
> >> Sent: Tuesday, June 07, 2016 2:56 PM
> >> To: Shreyansh Jain ; dev at dpdk.org
> >> Cc: olivier.matz at 6wind.com; viktorin at rehivetech.com;
> >> jerin.jacob at caviumnetworks.com
> >> Subject: Re: [dpdk-dev] [PATCH v8 1/3] mempool: support external mempool
> >> operations
> >>
> >> Hi Shreyansh,
> >>
> >> On 6/6/2016 3:38 PM, Shreyansh Jain wrote:  
> >>> Hi,
> >>>
> >>> (Apologies for overly-eager email sent on this thread earlier. Will
> >>> be more careful in future).
> >>> This is more of a question/clarification than a comment. (And I have
> >>> taken only some snippets from original mail to keep it cleaner)
> >>>   
> >>>> +MEMPOOL_REGISTER_OPS(ops_mp_mc);
> >>>> +MEMPOOL_REGISTER_OPS(ops_sp_sc);
> >>>> +MEMPOOL_REGISTER_OPS(ops_mp_sc);
> >>>> +MEMPOOL_REGISTER_OPS(ops_sp_mc);  
> >>> 
> >>>
> >>> From the above what I understand is that multiple packet pool
> >>> handlers can be created.
> >>> I have a use-case where application has multiple pools but only the
> >>> packet pool is hardware backed. Using the hardware for general buffer
> >>> requirements would prove costly.
> >>> From what I understand from the patch, selection of the pool is based
> >>> on the flags below.
> >>
> >> The flags are only used to select one of the default handlers for
> >> backward compatibility through the rte_mempool_create call. If you
> >> wish to use a mempool handler that is not one of the defaults, (i.e.
> >> a new hardware handler), you would use rte_mempool_create_empty
> >> followed by the rte_mempool_set_ops_byname call.
> >> So, for the external handlers, you create an empty mempool, then set
> >> the operations (ops) for that particular mempool.
> > I am concerned about the existing applications (for example, l3fwd).
> > Explicit calls to the 'rte_mempool_create_empty->rte_mempool_set_ops_byname' 
> > model would require modifications to these applications.
> > Ideally, without any modifications, these applications should be able to 
> > use packet pools (backed by hardware) and buffer pools (backed by 
> > ring/others) - transparently.
> >
> > If I go by your suggestions, what I understand is, doing the above without 
> > modification to applications would be equivalent to:
> >
> >struct rte_mempool_ops custom_hw_allocator = {...}
> >
> > thereafter, in config/common_base:
> >
> >CONFIG_RTE_DEFAULT_MEMPOOL_OPS="custom_hw_allocator"
> >
> > calls to rte_pktmbuf_pool_create would use the new allocator.  
> 
> Yes, correct. But only for calls to rte_pktmbuf_pool_create(). Calls to 
> rte_mempool_create will continue to use the default handlers (ring based).
> > But, another problem arises here.
> >
> > There are two distinct paths for allocations of a memory pool:
> > 1. A 'pkt' pool:
> > rte_pktmbuf_pool_create
> >   \- rte_mempool_create_empty
> >   |   \- rte_mempool_set_ops_byname(..ring_mp_mc..)
> >   |
> >   `- rte_mempool_set_ops_byname
> > (...RTE_MBUF_DEFAULT_MEMPOOL_OPS..)
> > /* Override default 'ring_mp_mc' of
> >  * rte_mempool_create */
> >
> > 2. Through generic mempool create API
> > rte_mempool_create
> >   \- rte_mempool_create_empty
> > (passing pktmbuf and pool constructors)
> >
> > I found various instances in example applications where 
> > rte_mempool_create() is being called directly for packet pools - bypassing 
> > the more semantically correct call to rte_pktmbuf_* for packet pools.
> >
> > In (2) control path, RTE_MBUF_DEFAULT_MEMPOOL_OPS wouldn't be able to 
> > replace custom handler operations for packet buffer allocations.
> >
> > From a performance point-of-view, applications should be able to select
> > between packet pools and non-packet pools.
> 
> This is intended for backw

[dpdk-dev] [PATCH v8 1/3] mempool: support external mempool operations

2016-06-09 Thread Hunt, David


On 9/6/2016 1:30 PM, Jerin Jacob wrote:
> On Thu, Jun 09, 2016 at 11:49:44AM +, Shreyansh Jain wrote:
>> Hi Jerin,
> Hi Shreyansh,
>
 Yes, this would simplify somewhat the creation of a pktmbuf pool, in that
 it replaces the rte_mempool_set_ops_byname with a flag bit. However, I'm
 not sure we want to introduce a third method of creating a mempool to the
 developers. If we introduced this, we would then have:
 1. rte_pktmbuf_pool_create()
 2. rte_mempool_create_empty() with MEMPOOL_F_PKT_ALLOC set (which would
    use the configured custom handler)
 3. rte_mempool_create_empty() with MEMPOOL_F_PKT_ALLOC __not__ set,
    followed by a call to rte_mempool_set_ops_byname() (would allow
    several different custom handlers to be used in one application)

 Does anyone else have an opinion on this? Olivier, Jerin, Jan?
>>> As I mentioned earlier, my take is not to create separate APIs for
>>> external mempool handlers. In my view, it's the same, just a separate
>>> mempool handler through function pointers.
>>>
>>> To keep backward compatibility, I think we can extend the flags
>>> in rte_mempool_create and have a single API for external/internal pool
>>> creation (this makes it easy for existing applications too: just add a
>>> mempool flag command line argument to existing applications to choose
>>> the mempool handler)
>> Maybe I am interpreting it wrong, but are you suggesting a single mempool 
>> handler for all buffer/packet needs of an application (passed as command 
>> line argument)?
>> That would be inefficient especially for cases where the pool is backed by 
>> hardware. The application wouldn't want its generic buffers to consume 
>> hardware resources which would be better used for packets.
> It may vary from platform to platform or with the particular use case. For
> instance, the HW external pool manager for generic buffers may scale better
> than a SW multi-producer/multi-consumer implementation when the number of
> cores > N (as no locking is involved in enqueue/dequeue; again, this depends
> on the specific HW implementation)
>
> I thought there's no harm in selecting the external pool handlers
> at the root level itself (rte_mempool_create), as by default it is
> SW MP/MC and it's just an option to override if the application wants it.
>
> Jerin
>


So, how about we go with the following, based on Shreyansh's suggestion:

1. Add in #define MEMPOOL_F_EMM_ALLOC 0x0010  (EMM for External Mempool 
Manager)

2. Check this bit in rte_mempool_create() just before the other bits are 
checked (by the way, the flags check has been moved to 
rte_mempool_create(), as per an earlier patchset, but was inadvertently 
reverted)

 /*
  * First check to see if we use the config'd mempool handler.
  * Then examine the combinations of SP/SC/MP/MC flags to
  * set the correct index into the table of ops structs.
  */
 if (flags & MEMPOOL_F_EMM_ALLOC)
         rte_mempool_set_ops_byname(mp, RTE_MBUF_DEFAULT_MEMPOOL_OPS);
 else if ((flags & MEMPOOL_F_SP_PUT) && (flags & MEMPOOL_F_SC_GET))
         rte_mempool_set_ops_byname(mp, "ring_sp_sc");
 else if (flags & MEMPOOL_F_SP_PUT)
         rte_mempool_set_ops_byname(mp, "ring_sp_mc");
 else if (flags & MEMPOOL_F_SC_GET)
         rte_mempool_set_ops_byname(mp, "ring_mp_sc");
 else
         rte_mempool_set_ops_byname(mp, "ring_mp_mc");

3. Modify rte_pktmbuf_pool_create to pass the bit to rte_mempool_create

-sizeof(struct rte_pktmbuf_pool_private), socket_id, 0);
+sizeof(struct rte_pktmbuf_pool_private), socket_id,
+MEMPOOL_F_EMM_ALLOC);


This will allow legacy apps to use one external handler (as defined by 
RTE_MBUF_DEFAULT_MEMPOOL_OPS) by adding the MEMPOOL_F_EMM_ALLOC bit to 
their flags in the call to rte_mempool_create().

Of course, if an app wants to use more than one external handler, they 
can call create_empty and set_ops_byname() for each mempool they create.
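
For example, a sketch of using two handlers side by side under that
scheme ("custom_hw" stands in for a registered hardware handler; sizes
are illustrative and population of the pools is left out):

    #include <rte_lcore.h>
    #include <rte_mbuf.h>
    #include <rte_mempool.h>

    static int
    create_two_pools(void)
    {
            struct rte_mempool *pkt_mp, *buf_mp;

            /* Packet pool on a hardware-backed handler. */
            pkt_mp = rte_mempool_create_empty("pkt_pool", 8192,
                    sizeof(struct rte_mbuf) + RTE_MBUF_DEFAULT_BUF_SIZE,
                    256, sizeof(struct rte_pktmbuf_pool_private),
                    rte_socket_id(), 0);
            if (pkt_mp == NULL ||
                rte_mempool_set_ops_byname(pkt_mp, "custom_hw") < 0)
                    return -1;

            /* Generic buffer pool stays on the software ring handler. */
            buf_mp = rte_mempool_create_empty("buf_pool", 4096, 2048,
                    0, 0, rte_socket_id(), 0);
            if (buf_mp == NULL ||
                rte_mempool_set_ops_byname(buf_mp, "ring_mp_mc") < 0)
                    return -1;

            /* Memory would then be added, e.g. with
             * rte_mempool_populate_default(). */
            return 0;
    }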

Regards,
Dave.

[dpdk-dev] [PATCH v8 1/3] mempool: support external mempool operations

2016-06-09 Thread Hunt, David


On 9/6/2016 12:41 PM, Shreyansh Jain wrote:
> Hi David,
>
>> -Original Message-
>> From: Hunt, David [mailto:david.hunt at intel.com]
>> Sent: Thursday, June 09, 2016 3:10 PM
>> To: Shreyansh Jain ; dev at dpdk.org
>> Cc: olivier.matz at 6wind.com; viktorin at rehivetech.com;
>> jerin.jacob at caviumnetworks.com
>> Subject: Re: [dpdk-dev] [PATCH v8 1/3] mempool: support external mempool
>> operations
>>
>> Hi Shreyansh,
>>
>> On 8/6/2016 2:48 PM, Shreyansh Jain wrote:
>>> Hi David,
>>>
>>> Thanks for explanation. I have some comments inline...
>>>
>>>> -Original Message-
>>>> From: Hunt, David [mailto:david.hunt at intel.com]
>>>> Sent: Tuesday, June 07, 2016 2:56 PM
>>>> To: Shreyansh Jain ; dev at dpdk.org
>>>> Cc: olivier.matz at 6wind.com; viktorin at rehivetech.com;
>>>> jerin.jacob at caviumnetworks.com
>>>> Subject: Re: [dpdk-dev] [PATCH v8 1/3] mempool: support external mempool
>>>> operations
>>>>
>>>> Hi Shreyansh,
>>>>
>>>> On 6/6/2016 3:38 PM, Shreyansh Jain wrote:
>>>>> Hi,
>>>>>
>>>>> (Apologies for overly-eager email sent on this thread earlier. Will
>>>>> be more careful in future).
>>>>> This is more of a question/clarification than a comment. (And I have
>>>>> taken only some snippets from original mail to keep it cleaner)
>>>>> 
>>>>>> +MEMPOOL_REGISTER_OPS(ops_mp_mc);
>>>>>> +MEMPOOL_REGISTER_OPS(ops_sp_sc);
>>>>>> +MEMPOOL_REGISTER_OPS(ops_mp_sc);
>>>>>> +MEMPOOL_REGISTER_OPS(ops_sp_mc);
>>>>> 
>>>>>
>>>>> From the above what I understand is that multiple packet pool
>>>>> handlers can be created.
>>>>> I have a use-case where application has multiple pools but only the
>>>>> packet pool is hardware backed. Using the hardware for general buffer
>>>>> requirements would prove costly.
>>>>> From what I understand from the patch, selection of the pool is based
>>>>> on the flags below.
>>>>
>>>> The flags are only used to select one of the default handlers for
>>>> backward compatibility through the rte_mempool_create call. If you
>>>> wish to use a mempool handler that is not one of the defaults, (i.e.
>>>> a new hardware handler), you would use rte_mempool_create_empty
>>>> followed by the rte_mempool_set_ops_byname call.
>>>> So, for the external handlers, you create an empty mempool, then set
>>>> the operations (ops) for that particular mempool.
>>> I am concerned about the existing applications (for example, l3fwd).
>>> Explicit calls to the 'rte_mempool_create_empty->rte_mempool_set_ops_byname'
>>> model would require modifications to these applications.
>>> Ideally, without any modifications, these applications should be able to 
>>> use packet pools (backed by hardware) and buffer pools (backed by 
>>> ring/others) - transparently.
>>> If I go by your suggestions, what I understand is, doing the above without 
>>> modification to applications would be equivalent to:
>>> struct rte_mempool_ops custom_hw_allocator = {...}
>>>
>>> thereafter, in config/common_base:
>>>
>>> CONFIG_RTE_DEFAULT_MEMPOOL_OPS="custom_hw_allocator"
>>>
>>> calls to rte_pktmbuf_pool_create would use the new allocator.
>> Yes, correct. But only for calls to rte_pktmbuf_pool_create(). Calls to
>> rte_mempool_create will continue to use the default handlers (ring based).
> Agree with you.
> But, some applications continue to use rte_mempool_create for allocating 
> packet pools. Thus, even with a custom handler available (which, most 
> probably, would be a hardware packet buffer handler), application would 
> unintentionally end up not using it.
> Probably, such applications should be changed? (e.g. pipeline).

Yes, agreed.  If those applications need to use external mempool 
handlers, then they should be changed. I would see that as outside the 
scope of this patchset.


>>> But, another problem arises here.
>>>
>>> There are two distinct paths for allocations of a memory pool:
>>> 1. A 'pkt' pool:
>>>  rte_pktmbuf_pool_create
>>>\- rte_mempool_create_empty
>>>|   \- rte_mempool_set_ops_byname(..ring_mp_mc..)

[dpdk-dev] [PATCH v8 1/3] mempool: support external mempool operations

2016-06-09 Thread Shreyansh Jain
Hi Jerin,

> -Original Message-
> From: Jerin Jacob [mailto:jerin.jacob at caviumnetworks.com]
> Sent: Thursday, June 09, 2016 6:01 PM
> To: Shreyansh Jain 
> Cc: Hunt, David ; dev at dpdk.org; olivier.matz at 
> 6wind.com;
> viktorin at rehivetech.com
> Subject: Re: [dpdk-dev] [PATCH v8 1/3] mempool: support external mempool
> operations
> 
> On Thu, Jun 09, 2016 at 11:49:44AM +, Shreyansh Jain wrote:
> > Hi Jerin,
> 
> Hi Shreyansh,
> 
> >
> > > > Yes, this would simplify somewhat the creation of a pktmbuf pool,
> > > > in that it replaces the rte_mempool_set_ops_byname with a flag bit.
> > > > However, I'm not sure we want to introduce a third method of
> > > > creating a mempool to the developers. If we introduced this, we
> > > > would then have:
> > > > 1. rte_pktmbuf_pool_create()
> > > > 2. rte_mempool_create_empty() with MEMPOOL_F_PKT_ALLOC set (which
> > > >    would use the configured custom handler)
> > > > 3. rte_mempool_create_empty() with MEMPOOL_F_PKT_ALLOC __not__ set,
> > > >    followed by a call to rte_mempool_set_ops_byname() (would allow
> > > >    several different custom handlers to be used in one application)
> > > >
> > > > Does anyone else have an opinion on this? Olivier, Jerin, Jan?
> > >
> > > As I mentioned earlier, my take is not to create separate APIs for
> > > external mempool handlers. In my view, it's the same, just a separate
> > > mempool handler through function pointers.
> > >
> > > To keep backward compatibility, I think we can extend the flags
> > > in rte_mempool_create and have a single API for external/internal pool
> > > creation (this makes it easy for existing applications too: just add a
> > > mempool flag command line argument to existing applications to choose
> > > the mempool handler)
> >
> > Maybe I am interpreting it wrong, but are you suggesting a single mempool
> > handler for all buffer/packet needs of an application (passed as command
> > line argument)?
> > That would be inefficient especially for cases where the pool is backed by
> > hardware. The application wouldn't want its generic buffers to consume
> > hardware resources which would be better used for packets.
> 
> It may vary from platform to platform or with the particular use case. For
> instance, the HW external pool manager for generic buffers may scale better
> than a SW multi-producer/multi-consumer implementation when the number of
> cores > N (as no locking is involved in enqueue/dequeue; again, this depends
> on the specific HW implementation)

I agree with you that above cases would exist.

But, even in these cases, I think it would be the application's prerogative to 
decide whether it would like its buffers to be managed by a hardware allocator 
or SW [SM]P/[SM]C implementations. Probably, in this case the application would 
call the rte_mempool_*(PKT_POOL) for generic buffers as well (or maybe a 
dedicated buffer pool flag) - just as an example.

> 
> I thought there's no harm in selecting the external pool handlers
> at the root level itself (rte_mempool_create), as by default it is
> SW MP/MC and it's just an option to override if the application wants it.

It sounds fine if calls to rte_mempool_* can select an external handler 
*optionally* - but, if we pass it as command line, it would be binding (at 
least, semantically) for rte_pktmbuf_* calls as well. Isn't it?

[Probably, I am still unclear how it would remain 'optional' in command line 
case you suggested.]

> 
> Jerin
> 
> 
[...]

-
Shreyansh


[dpdk-dev] [PATCH v8 1/3] mempool: support external mempool operations

2016-06-09 Thread Hunt, David


On 9/6/2016 11:31 AM, Jerin Jacob wrote:
> On Thu, Jun 09, 2016 at 10:39:46AM +0100, Hunt, David wrote:
>> Hi Shreyansh,
>>
>> On 8/6/2016 2:48 PM, Shreyansh Jain wrote:
>>> Hi David,
>>>
>>> Thanks for explanation. I have some comments inline...
>>>
>>>> -Original Message-
>>>> From: Hunt, David [mailto:david.hunt at intel.com]
>>>> Sent: Tuesday, June 07, 2016 2:56 PM
>>>> To: Shreyansh Jain ; dev at dpdk.org
>>>> Cc: olivier.matz at 6wind.com; viktorin at rehivetech.com;
>>>> jerin.jacob at caviumnetworks.com
>>>> Subject: Re: [dpdk-dev] [PATCH v8 1/3] mempool: support external mempool
>>>> operations
>>>>
>>>> Hi Shreyansh,
>>>>
>>>> On 6/6/2016 3:38 PM, Shreyansh Jain wrote:
>>>>> Hi,
>>>>>
>>>>> (Apologies for overly-eager email sent on this thread earlier. Will
>>>>> be more careful in future).
>>>>> This is more of a question/clarification than a comment. (And I have
>>>>> taken only some snippets from original mail to keep it cleaner)
>>>>> 
>>>>>> +MEMPOOL_REGISTER_OPS(ops_mp_mc);
>>>>>> +MEMPOOL_REGISTER_OPS(ops_sp_sc);
>>>>>> +MEMPOOL_REGISTER_OPS(ops_mp_sc);
>>>>>> +MEMPOOL_REGISTER_OPS(ops_sp_mc);
>>>>> 
>>>>>
>>>>> From the above what I understand is that multiple packet pool
>>>>> handlers can be created.
>>>>> I have a use-case where application has multiple pools but only the
>>>>> packet pool is hardware backed. Using the hardware for general buffer
>>>>> requirements would prove costly.
>>>>> From what I understand from the patch, selection of the pool is based
>>>>> on the flags below.
>>>>
>>>> The flags are only used to select one of the default handlers for
>>>> backward compatibility through the rte_mempool_create call. If you
>>>> wish to use a mempool handler that is not one of the defaults, (i.e.
>>>> a new hardware handler), you would use rte_mempool_create_empty
>>>> followed by the rte_mempool_set_ops_byname call.
>>>> So, for the external handlers, you create an empty mempool, then set
>>>> the operations (ops) for that particular mempool.
>>> I am concerned about the existing applications (for example, l3fwd).
>>> Explicit calls to the 'rte_mempool_create_empty->rte_mempool_set_ops_byname' 
>>> model would require modifications to these applications.
>>> Ideally, without any modifications, these applications should be able to 
>>> use packet pools (backed by hardware) and buffer pools (backed by 
>>> ring/others) - transparently.
>>>
>>> If I go by your suggestions, what I understand is, doing the above without 
>>> modification to applications would be equivalent to:
>>>
>>> struct rte_mempool_ops custom_hw_allocator = {...}
>>>
>>> thereafter, in config/common_base:
>>>
>>> CONFIG_RTE_DEFAULT_MEMPOOL_OPS="custom_hw_allocator"
>>>
>>> calls to rte_pktmbuf_pool_create would use the new allocator.
>> Yes, correct. But only for calls to rte_pktmbuf_pool_create(). Calls to
>> rte_mempool_create will continue to use the default handlers (ring based).
>>> But, another problem arises here.
>>>
>>> There are two distinct paths for allocations of a memory pool:
>>> 1. A 'pkt' pool:
>>>  rte_pktmbuf_pool_create
>>>\- rte_mempool_create_empty
>>>|   \- rte_mempool_set_ops_byname(..ring_mp_mc..)
>>>|
>>>`- rte_mempool_set_ops_byname
>>>  (...RTE_MBUF_DEFAULT_MEMPOOL_OPS..)
>>>  /* Override default 'ring_mp_mc' of
>>>   * rte_mempool_create */
>>>
>>> 2. Through generic mempool create API
>>>  rte_mempool_create
>>>\- rte_mempool_create_empty
>>>  (passing pktmbuf and pool constructors)
>>> I found various instances in example applications where 
>>> rte_mempool_create() is being called directly for packet pools - bypassing 
>>> the more semantically correct call to rte_pktmbuf_* for packet pools.
>>>
>>> In (2) control path, RTE_MBUF_DEFAULT_MEMPOOL_OPS wouldn't be able to 
>>> replace custom handler operations for packet buffer allocations.

[dpdk-dev] [PATCH v8 1/3] mempool: support external mempool operations

2016-06-09 Thread Shreyansh Jain
Hi David,

> -Original Message-
> From: Hunt, David [mailto:david.hunt at intel.com]
> Sent: Thursday, June 09, 2016 3:10 PM
> To: Shreyansh Jain ; dev at dpdk.org
> Cc: olivier.matz at 6wind.com; viktorin at rehivetech.com;
> jerin.jacob at caviumnetworks.com
> Subject: Re: [dpdk-dev] [PATCH v8 1/3] mempool: support external mempool
> operations
> 
> Hi Shreyansh,
> 
> On 8/6/2016 2:48 PM, Shreyansh Jain wrote:
> > Hi David,
> >
> > Thanks for explanation. I have some comments inline...
> >
> >> -Original Message-
> >> From: Hunt, David [mailto:david.hunt at intel.com]
> >> Sent: Tuesday, June 07, 2016 2:56 PM
> >> To: Shreyansh Jain ; dev at dpdk.org
> >> Cc: olivier.matz at 6wind.com; viktorin at rehivetech.com;
> >> jerin.jacob at caviumnetworks.com
> >> Subject: Re: [dpdk-dev] [PATCH v8 1/3] mempool: support external mempool
> >> operations
> >>
> >> Hi Shreyansh,
> >>
> >> On 6/6/2016 3:38 PM, Shreyansh Jain wrote:
> >>> Hi,
> >>>
> >>> (Apologies for overly-eager email sent on this thread earlier. Will
> >>> be more careful in future).
> >>> This is more of a question/clarification than a comment. (And I have
> >>> taken only some snippets from original mail to keep it cleaner)
> >>> 
> >>>> +MEMPOOL_REGISTER_OPS(ops_mp_mc);
> >>>> +MEMPOOL_REGISTER_OPS(ops_sp_sc);
> >>>> +MEMPOOL_REGISTER_OPS(ops_mp_sc);
> >>>> +MEMPOOL_REGISTER_OPS(ops_sp_mc);
> >>> 
> >>>
> >>> From the above what I understand is that multiple packet pool
> >>> handlers can be created.
> >>> I have a use-case where application has multiple pools but only the
> >>> packet pool is hardware backed. Using the hardware for general buffer
> >>> requirements would prove costly.
> >>> From what I understand from the patch, selection of the pool is based
> >>> on the flags below.
> >>
> >> The flags are only used to select one of the default handlers for
> >> backward compatibility through the rte_mempool_create call. If you
> >> wish to use a mempool handler that is not one of the defaults, (i.e.
> >> a new hardware handler), you would use rte_mempool_create_empty
> >> followed by the rte_mempool_set_ops_byname call.
> >> So, for the external handlers, you create an empty mempool, then set
> >> the operations (ops) for that particular mempool.
> > I am concerned about the existing applications (for example, l3fwd).
> > Explicit calls to the 'rte_mempool_create_empty->rte_mempool_set_ops_byname'
> > model would require modifications to these applications.
> > Ideally, without any modifications, these applications should be able to
> > use packet pools (backed by hardware) and buffer pools (backed by
> > ring/others) - transparently.
> >
> > If I go by your suggestions, what I understand is, doing the above without
> modification to applications would be equivalent to:
> >
> >struct rte_mempool_ops custom_hw_allocator = {...}
> >
> > thereafter, in config/common_base:
> >
> >CONFIG_RTE_DEFAULT_MEMPOOL_OPS="custom_hw_allocator"
> >
> > calls to rte_pktmbuf_pool_create would use the new allocator.
> 
> Yes, correct. But only for calls to rte_pktmbuf_pool_create(). Calls to
> rte_mempool_create will continue to use the default handlers (ring based).

Agree with you.
But, some applications continue to use rte_mempool_create for allocating packet 
pools. Thus, even with a custom handler available (which, most probably, would 
be a hardware packet buffer handler), the application would unintentionally end up 
not using it.
Probably, such applications should be changed? (e.g. pipeline). 

> > But, another problem arises here.
> >
> > There are two distinct paths for allocations of a memory pool:
> > 1. A 'pkt' pool:
> > rte_pktmbuf_pool_create
> >   \- rte_mempool_create_empty
> >   |   \- rte_mempool_set_ops_byname(..ring_mp_mc..)
> >   |
> >   `- rte_mempool_set_ops_byname
> > (...RTE_MBUF_DEFAULT_MEMPOOL_OPS..)
> > /* Override default 'ring_mp_mc' of
> >  * rte_mempool_create */
> >
> > 2. Through generic mempool create API
> > rte_mempool_create
> >   \- rte_mempool_create_empty
> > (passing pktmbuf and pool constructors)
> >
> > I found various instances in example applications where
> > rte_mempool_create() is being called directly for packet pools - bypassing
> > the more semantically correct call to rte_pktmbuf_* for packet pools.

[dpdk-dev] [PATCH v8 1/3] mempool: support external mempool operations

2016-06-09 Thread Hunt, David
Hi Olivier,

On 8/6/2016 1:13 PM, Olivier Matz wrote:
> Hi David,
>
> Please find some comments below.
>
> On 06/03/2016 04:58 PM, David Hunt wrote:
>
>> --- a/lib/librte_mempool/rte_mempool.h
>> +++ b/lib/librte_mempool/rte_mempool.h
>> +/**
>> + * Prototype for implementation specific data provisioning function.
>> + *
>> + * The function should provide the implementation specific memory for
>> + * for use by the other mempool ops functions in a given mempool ops struct.
>> + * E.g. the default ops provides an instance of the rte_ring for this 
>> purpose.
>> + * it will mostlikely point to a different type of data structure, and
>> + * will be transparent to the application programmer.
>> + */
>> +typedef void *(*rte_mempool_alloc_t)(struct rte_mempool *mp);
> In http://dpdk.org/ml/archives/dev/2016-June/040233.html, I suggested
> to change the prototype to return an int (-errno) and directly set
> the mp->pool_data (void *) or mp->pool_if (uint64_t). No cast
> would be required in this latter case.

Done.
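
A minimal sketch of that change (assuming the mp->pool_data field named
above; the exact v9 prototype may still differ):

	/* before: returns the allocated pool structure, caller stores it */
	typedef void *(*rte_mempool_alloc_t)(struct rte_mempool *mp);

	/* after: returns 0 or -errno and sets mp->pool_data internally */
	typedef int (*rte_mempool_alloc_t)(struct rte_mempool *mp);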

> By the way, there is a typo in the comment:
> "mostlikely" -> "most likely"

Fixed.

>> --- /dev/null
>> +++ b/lib/librte_mempool/rte_mempool_default.c
>> +static void
>> +common_ring_free(struct rte_mempool *mp)
>> +{
>> +rte_ring_free((struct rte_ring *)mp->pool_data);
>> +}
> I don't think the cast is needed here.
> (same in other functions)

Removed.

>
>> --- /dev/null
>> +++ b/lib/librte_mempool/rte_mempool_ops.c
>> +/* add a new ops struct in rte_mempool_ops_table, return its index */
>> +int
>> +rte_mempool_ops_register(const struct rte_mempool_ops *h)
>> +{
>> +struct rte_mempool_ops *ops;
>> +int16_t ops_index;
>> +
>> +rte_spinlock_lock(&rte_mempool_ops_table.sl);
>> +
>> +if (rte_mempool_ops_table.num_ops >=
>> +RTE_MEMPOOL_MAX_OPS_IDX) {
>> +rte_spinlock_unlock(&rte_mempool_ops_table.sl);
>> +RTE_LOG(ERR, MEMPOOL,
>> +"Maximum number of mempool ops structs exceeded\n");
>> +return -ENOSPC;
>> +}
>> +
>> +if (h->put == NULL || h->get == NULL || h->get_count == NULL) {
>> +rte_spinlock_unlock(&rte_mempool_ops_table.sl);
>> +RTE_LOG(ERR, MEMPOOL,
>> +"Missing callback while registering mempool ops\n");
>> +return -EINVAL;
>> +}
>> +
>> +if (strlen(h->name) >= sizeof(ops->name) - 1) {
>> +RTE_LOG(DEBUG, EAL, "%s(): mempool_ops <%s>: name too long\n",
>> +__func__, h->name);
>> +rte_errno = EEXIST;
>> +return NULL;
>> +}
> This has already been noticed by Shreyansh, but in case of:
>
> rte_mempool_ops.c:75:10: error: return makes integer from pointer
> without a cast [-Werror=int-conversion]
> return NULL;
>^

Changed to return -EEXIST

>
>> +/* sets mempool ops previously registered by rte_mempool_ops_register */
>> +int
>> +rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name)
>
> When I compile with shared libraries enabled, I get the following error:
>
> librte_reorder.so: undefined reference to `rte_mempool_ops_table'
> librte_mbuf.so: undefined reference to `rte_mempool_set_ops_byname'
> ...
>
> The new functions and global variables must be in
> rte_mempool_version.map. This was in v5
> ( http://dpdk.org/ml/archives/dev/2016-May/039365.html ) but
> was dropped in v6.

OK, Added.

>
> Regards,
> Olivier

Thanks,
David.



[dpdk-dev] [PATCH v8 1/3] mempool: support external mempool operations

2016-06-09 Thread Hunt, David
Hi Shreyansh,

On 8/6/2016 2:48 PM, Shreyansh Jain wrote:
> Hi David,
>
> Thanks for explanation. I have some comments inline...
>
>> -Original Message-
>> From: Hunt, David [mailto:david.hunt at intel.com]
>> Sent: Tuesday, June 07, 2016 2:56 PM
>> To: Shreyansh Jain ; dev at dpdk.org
>> Cc: olivier.matz at 6wind.com; viktorin at rehivetech.com;
>> jerin.jacob at caviumnetworks.com
>> Subject: Re: [dpdk-dev] [PATCH v8 1/3] mempool: support external mempool
>> operations
>>
>> Hi Shreyansh,
>>
>> On 6/6/2016 3:38 PM, Shreyansh Jain wrote:
>>> Hi,
>>>
>>> (Apologies for overly-eager email sent on this thread earlier. Will be more
>> careful in future).
>>> This is more of a question/clarification than a comment. (And I have taken
>> only some snippets from original mail to keep it cleaner)
>>> 
>>>> +MEMPOOL_REGISTER_OPS(ops_mp_mc);
>>>> +MEMPOOL_REGISTER_OPS(ops_sp_sc);
>>>> +MEMPOOL_REGISTER_OPS(ops_mp_sc);
>>>> +MEMPOOL_REGISTER_OPS(ops_sp_mc);
>>> 
>>>
>>>   From the above what I understand is that multiple packet pool handlers can
>> be created.
>>> I have a use-case where application has multiple pools but only the packet
>> pool is hardware backed. Using the hardware for general buffer requirements
>> would prove costly.
>>>   From what I understand from the patch, selection of the pool is based on
>> the flags below.
>>
>> The flags are only used to select one of the default handlers for
>> backward compatibility through
>> the rte_mempool_create call. If you wish to use a mempool handler that
>> is not one of the
>> defaults, (i.e. a new hardware handler), you would use the
>> rte_mempool_create_empty
>> followed by the rte_mempool_set_ops_byname call.
>> So, for the external handlers, you create an empty mempool, then set
>> the operations (ops)
>> for that particular mempool.
> I am concerned about the existing applications (for example, l3fwd).
> Explicit calls to 'rte_mempool_create_empty->rte_mempool_set_ops_byname' 
> model would require modifications to these applications.
> Ideally, without any modifications, these applications should be able to use 
> packet pools (backed by hardware) and buffer pools (backed by ring/others) - 
> transparently.
>
> If I go by your suggestions, what I understand is, doing the above without 
> modification to applications would be equivalent to:
>
>struct rte_mempool_ops custom_hw_allocator = {...}
>
> thereafter, in config/common_base:
>
>CONFIG_RTE_DEFAULT_MEMPOOL_OPS="custom_hw_allocator"
>
> calls to rte_pktmbuf_pool_create would use the new allocator.

Yes, correct. But only for calls to rte_pktmbuf_pool_create(). Calls to 
rte_mempool_create will continue to use the default handlers (ring based).
> But, another problem arises here.
>
> There are two distinct paths for allocations of a memory pool:
> 1. A 'pkt' pool:
> rte_pktmbuf_pool_create
>   \- rte_mempool_create_empty
>   |   \- rte_mempool_set_ops_byname(..ring_mp_mc..)
>   |
>   `- rte_mempool_set_ops_byname
> (...RTE_MBUF_DEFAULT_MEMPOOL_OPS..)
> /* Override default 'ring_mp_mc' of
>  * rte_mempool_create */
>
> 2. Through generic mempool create API
> rte_mempool_create
>   \- rte_mempool_create_empty
> (passing pktmbuf and pool constructors)
>
> I found various instances in example applications where rte_mempool_create() 
> is being called directly for packet pools - bypassing the more semantically 
> correct call to rte_pktmbuf_* for packet pools.
>
> In (2) control path, RTE_MBUF_DEFAULT_MEMPOOL_OPS wouldn't be able to 
> replace custom handler operations for packet buffer allocations.
>
>  From a performance point-of-view, applications should be able to select 
> between packet pools and non-packet pools.

This is intended for backward compatibility, and API consistency. Any 
applications that use
rte_mempool_create directly will continue to use the default mempool 
handlers. If they need
to use a custom handler, they will need to be modified to call the newer 
API,
rte_mempool_create_empty and rte_mempool_set_ops_byname.
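
In outline, an external handler registers itself the same way the built-in
ring handlers do. A sketch only - the callback names here are hypothetical,
and the alloc prototype may become int-returning as discussed elsewhere in
this thread:

	#include <rte_mempool.h>

	/* hypothetical driver-provided callbacks */
	static void *hw_alloc(struct rte_mempool *mp);
	static void hw_free(struct rte_mempool *mp);
	static int hw_put(struct rte_mempool *mp, void * const *obj_table,
			unsigned n);
	static int hw_get(struct rte_mempool *mp, void **obj_table,
			unsigned n);
	static unsigned hw_get_count(const struct rte_mempool *mp);

	static const struct rte_mempool_ops custom_hw_allocator = {
		.name = "custom_hw_allocator",
		.alloc = hw_alloc,
		.free = hw_free,
		.put = hw_put,
		.get = hw_get,
		.get_count = hw_get_count,
	};

	MEMPOOL_REGISTER_OPS(custom_hw_allocator);

Any mempool whose ops are then set to "custom_hw_allocator" by name routes
its puts and gets through these callbacks.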


>>> 
>>>> +  /*
>>>> +   * Since we have 4 combinations of the SP/SC/MP/MC examine the flags to
>>>> +   * set the correct index into the table of ops structs.
>>>> +   */
>>>> +  if (flags & (MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET))
>>>> +  rte_mempool_set_ops_byname(mp, "ring_sp_sc");
>>>> +  else

[dpdk-dev] [PATCH v8 1/3] mempool: support external mempool operations

2016-06-08 Thread Shreyansh Jain
Hi David,

Sorry for multiple mails on a patch. I forgot a trivial comment in previous 
mail.

> -Original Message-
> From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of David Hunt
> Sent: Friday, June 03, 2016 8:28 PM
> To: dev at dpdk.org
> Cc: olivier.matz at 6wind.com; viktorin at rehivetech.com;
> jerin.jacob at caviumnetworks.com; David Hunt 
> Subject: [dpdk-dev] [PATCH v8 1/3] mempool: support external mempool
> operations
> 
[...]
> +int
> +rte_mempool_ops_register(const struct rte_mempool_ops *h)
> +{
> + struct rte_mempool_ops *ops;
> + int16_t ops_index;
> +
> + rte_spinlock_lock(&rte_mempool_ops_table.sl);
> +
> + if (rte_mempool_ops_table.num_ops >=
> + RTE_MEMPOOL_MAX_OPS_IDX) {
> + rte_spinlock_unlock(&rte_mempool_ops_table.sl);
> + RTE_LOG(ERR, MEMPOOL,
> + "Maximum number of mempool ops structs exceeded\n");
> + return -ENOSPC;
> + }
> +
> + if (h->put == NULL || h->get == NULL || h->get_count == NULL) {

I think 'h->alloc' should also be checked here.
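Something like (a sketch):

	if (h->alloc == NULL || h->put == NULL ||
			h->get == NULL || h->get_count == NULL) {
		rte_spinlock_unlock(&rte_mempool_ops_table.sl);
		RTE_LOG(ERR, MEMPOOL,
			"Missing callback while registering mempool ops\n");
		return -EINVAL;
	}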

> + rte_spinlock_unlock(&rte_mempool_ops_table.sl);
> + RTE_LOG(ERR, MEMPOOL,
> + "Missing callback while registering mempool ops\n");
> + return -EINVAL;
> + }
> +
> + if (strlen(h->name) >= sizeof(ops->name) - 1) {
> + RTE_LOG(DEBUG, EAL, "%s(): mempool_ops <%s>: name too long\n",
> + __func__, h->name);
> + rte_errno = EEXIST;
> + return NULL;
> + }
> +
> + ops_index = rte_mempool_ops_table.num_ops++;
> + ops = &rte_mempool_ops_table.ops[ops_index];
> + snprintf(ops->name, sizeof(ops->name), "%s", h->name);
> + ops->alloc = h->alloc;
> + ops->put = h->put;
> + ops->get = h->get;
> + ops->get_count = h->get_count;
> +
> + rte_spinlock_unlock(&rte_mempool_ops_table.sl);
> +
> + return ops_index;
> +}
> +
[...]

-
Shreyansh



[dpdk-dev] [PATCH v8 1/3] mempool: support external mempool operations

2016-06-08 Thread Olivier Matz
Hi David,

Please find some comments below.

On 06/03/2016 04:58 PM, David Hunt wrote:

> --- a/lib/librte_mempool/rte_mempool.h
> +++ b/lib/librte_mempool/rte_mempool.h

> +/**
> + * Prototype for implementation specific data provisioning function.
> + *
> + * The function should provide the implementation specific memory for
> + * for use by the other mempool ops functions in a given mempool ops struct.
> + * E.g. the default ops provides an instance of the rte_ring for this 
> purpose.
> + * it will mostlikely point to a different type of data structure, and
> + * will be transparent to the application programmer.
> + */
> +typedef void *(*rte_mempool_alloc_t)(struct rte_mempool *mp);

In http://dpdk.org/ml/archives/dev/2016-June/040233.html, I suggested
to change the prototype to return an int (-errno) and directly set
the mp->pool_data (void *) or mp->pool_if (uint64_t). No cast
would be required in this latter case.

By the way, there is a typo in the comment:
"mostlikely" -> "most likely"

> --- /dev/null
> +++ b/lib/librte_mempool/rte_mempool_default.c

> +static void
> +common_ring_free(struct rte_mempool *mp)
> +{
> + rte_ring_free((struct rte_ring *)mp->pool_data);
> +}

I don't think the cast is needed here.
(same in other functions)


> --- /dev/null
> +++ b/lib/librte_mempool/rte_mempool_ops.c

> +/* add a new ops struct in rte_mempool_ops_table, return its index */
> +int
> +rte_mempool_ops_register(const struct rte_mempool_ops *h)
> +{
> + struct rte_mempool_ops *ops;
> + int16_t ops_index;
> +
> + rte_spinlock_lock(&rte_mempool_ops_table.sl);
> +
> + if (rte_mempool_ops_table.num_ops >=
> + RTE_MEMPOOL_MAX_OPS_IDX) {
> + rte_spinlock_unlock(&rte_mempool_ops_table.sl);
> + RTE_LOG(ERR, MEMPOOL,
> + "Maximum number of mempool ops structs exceeded\n");
> + return -ENOSPC;
> + }
> +
> + if (h->put == NULL || h->get == NULL || h->get_count == NULL) {
> + rte_spinlock_unlock(&rte_mempool_ops_table.sl);
> + RTE_LOG(ERR, MEMPOOL,
> + "Missing callback while registering mempool ops\n");
> + return -EINVAL;
> + }
> +
> + if (strlen(h->name) >= sizeof(ops->name) - 1) {
> + RTE_LOG(DEBUG, EAL, "%s(): mempool_ops <%s>: name too long\n",
> + __func__, h->name);
> + rte_errno = EEXIST;
> + return NULL;
> + }

This has already been noticed by Shreyansh, but in case of:

rte_mempool_ops.c:75:10: error: return makes integer from pointer
without a cast [-Werror=int-conversion]
   return NULL;
  ^


> +/* sets mempool ops previously registered by rte_mempool_ops_register */
> +int
> +rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name)


When I compile with shared libraries enabled, I get the following error:

librte_reorder.so: undefined reference to `rte_mempool_ops_table'
librte_mbuf.so: undefined reference to `rte_mempool_set_ops_byname'
...

The new functions and global variables must be in
rte_mempool_version.map. This was in v5
( http://dpdk.org/ml/archives/dev/2016-May/039365.html ) but
was dropped in v6.




Regards,
Olivier


[dpdk-dev] [PATCH v8 1/3] mempool: support external mempool operations

2016-06-08 Thread Shreyansh Jain
Hi David,

Thanks for explanation. I have some comments inline...

> -Original Message-
> From: Hunt, David [mailto:david.hunt at intel.com]
> Sent: Tuesday, June 07, 2016 2:56 PM
> To: Shreyansh Jain ; dev at dpdk.org
> Cc: olivier.matz at 6wind.com; viktorin at rehivetech.com;
> jerin.jacob at caviumnetworks.com
> Subject: Re: [dpdk-dev] [PATCH v8 1/3] mempool: support external mempool
> operations
> 
> Hi Shreyansh,
> 
> On 6/6/2016 3:38 PM, Shreyansh Jain wrote:
> > Hi,
> >
> > (Apologies for overly-eager email sent on this thread earlier. Will be more
> careful in future).
> >
> > This is more of a question/clarification than a comment. (And I have taken
> only some snippets from original mail to keep it cleaner)
> >
> > 
> >> +MEMPOOL_REGISTER_OPS(ops_mp_mc);
> >> +MEMPOOL_REGISTER_OPS(ops_sp_sc);
> >> +MEMPOOL_REGISTER_OPS(ops_mp_sc);
> >> +MEMPOOL_REGISTER_OPS(ops_sp_mc);
> > 
> >
> >  From the above what I understand is that multiple packet pool handlers can
> be created.
> >
> > I have a use-case where application has multiple pools but only the packet
> pool is hardware backed. Using the hardware for general buffer requirements
> would prove costly.
> >  From what I understand from the patch, selection of the pool is based on
> the flags below.
> 
> The flags are only used to select one of the default handlers for
> backward compatibility through
> the rte_mempool_create call. If you wish to use a mempool handler that
> is not one of the
> defaults, (i.e. a new hardware handler), you would use the
> rte_mempool_create_empty
> followed by the rte_mempool_set_ops_byname call.
> So, for the external handlers, you create an empty mempool, then set
> the operations (ops)
> for that particular mempool.

I am concerned about the existing applications (for example, l3fwd).
Explicit calls to 'rte_mempool_create_empty->rte_mempool_set_ops_byname' model 
would require modifications to these applications.
Ideally, without any modifications, these applications should be able to use 
packet pools (backed by hardware) and buffer pools (backed by ring/others) - 
transparently.

If I go by your suggestions, what I understand is, doing the above without 
modification to applications would be equivalent to:

  struct rte_mempool_ops custom_hw_allocator = {...}

thereafter, in config/common_base:

  CONFIG_RTE_DEFAULT_MEMPOOL_OPS="custom_hw_allocator"

calls to rte_pktmbuf_pool_create would use the new allocator.

But, another problem arises here.

There are two distinct paths for allocations of a memory pool:
1. A 'pkt' pool:
   rte_pktmbuf_pool_create   
 \- rte_mempool_create_empty
 |   \- rte_mempool_set_ops_byname(..ring_mp_mc..)
 |
 `- rte_mempool_set_ops_byname
   (...RTE_MBUF_DEFAULT_MEMPOOL_OPS..)
   /* Override default 'ring_mp_mc' of
* rte_mempool_create */

2. Through generic mempool create API
   rte_mempool_create
 \- rte_mempool_create_empty
   (passing pktmbuf and pool constructors)

I found various instances in example applications where rte_mempool_create() is 
being called directly for packet pools - bypassing the more semantically 
correct call to rte_pktmbuf_* for packet pools.

In (2) control path, RTE_MBUF_DEFAULT_MEMPOOL_OPS wouldn't be able to replace 
custom handler operations for packet buffer allocations.

From a performance point-of-view, applications should be able to select 
between packet pools and non-packet pools.

> 
> > 
> >> +  /*
> >> +   * Since we have 4 combinations of the SP/SC/MP/MC examine the flags to
> >> +   * set the correct index into the table of ops structs.
> >> +   */
> >> +  if (flags & (MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET))
> >> +  rte_mempool_set_ops_byname(mp, "ring_sp_sc");
> >> +  else if (flags & MEMPOOL_F_SP_PUT)
> >> +  rte_mempool_set_ops_byname(mp, "ring_sp_mc");
> >> +  else if (flags & MEMPOOL_F_SC_GET)
> >> +  rte_mempool_set_ops_byname(mp, "ring_mp_sc");
> >> +  else
> >> +  rte_mempool_set_ops_byname(mp, "ring_mp_mc");
> >> +

My suggestion is to have an additional flag, 'MEMPOOL_F_PKT_ALLOC', which, if 
specified, would:

...
#define MEMPOOL_F_SC_GET 0x0008
#define MEMPOOL_F_PKT_ALLOC 0x0010
...

in rte_mempool_create_empty:
   ... after checking the other MEMPOOL_F_* flags...

if (flags & MEMPOOL_F_PKT_ALLOC)
rte_mempool_set_ops_byname(mp, RTE_MBUF_DEFAULT_MEMPOOL_OPS)

And removing the redundant call to rte_mempool_set_ops_byname() in 
rte_pktmbuf_create_pool().

Thereafter, rte_pktmbuf_pool_create can be changed to:

  ...
   

[dpdk-dev] [PATCH v8 1/3] mempool: support external mempool operations

2016-06-07 Thread Hunt, David
Hi Shreyansh,

On 6/6/2016 3:38 PM, Shreyansh Jain wrote:
> Hi,
>
> (Apologies for overly-eager email sent on this thread earlier. Will be more 
> careful in future).
>
> This is more of a question/clarification than a comment. (And I have taken 
> only some snippets from original mail to keep it cleaner)
>
> 
>> +MEMPOOL_REGISTER_OPS(ops_mp_mc);
>> +MEMPOOL_REGISTER_OPS(ops_sp_sc);
>> +MEMPOOL_REGISTER_OPS(ops_mp_sc);
>> +MEMPOOL_REGISTER_OPS(ops_sp_mc);
> 
>
>  From the above what I understand is that multiple packet pool handlers can 
> be created.
>
> I have a use-case where application has multiple pools but only the packet 
> pool is hardware backed. Using the hardware for general buffer requirements 
> would prove costly.
>  From what I understand from the patch, selection of the pool is based on the 
> flags below.

The flags are only used to select one of the default handlers for 
backward compatibility through
the rte_mempool_create call. If you wish to use a mempool handler that 
is not one of the
defaults, (i.e. a new hardware handler), you would use the 
rte_mempool_create_empty
followed by the rte_mempool_set_ops_byname call.
So, for the external handlers, you create an empty mempool, then set 
the operations (ops)
for that particular mempool.
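
For example, a sketch using the API names from this patchset (the handler
name "custom_hw_allocator" is hypothetical and error handling is trimmed):

	struct rte_mempool *mp;

	/* name, n, elt_size, cache_size, priv_size, socket_id, flags */
	mp = rte_mempool_create_empty("hw_pkt_pool", 8192, 2048,
			256, 0, SOCKET_ID_ANY, 0);
	if (mp == NULL)
		return -rte_errno;

	/* select the externally registered handler before populating */
	if (rte_mempool_set_ops_byname(mp, "custom_hw_allocator") < 0) {
		rte_mempool_free(mp);
		return -1;
	}

	if (rte_mempool_populate_default(mp) < 0) {
		rte_mempool_free(mp);
		return -1;
	}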

> 
>> +/*
>> + * Since we have 4 combinations of the SP/SC/MP/MC examine the flags to
>> + * set the correct index into the table of ops structs.
>> + */
>> +if (flags & (MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET))
>> +rte_mempool_set_ops_byname(mp, "ring_sp_sc");
>> +else if (flags & MEMPOOL_F_SP_PUT)
>> +rte_mempool_set_ops_byname(mp, "ring_sp_mc");
>> +else if (flags & MEMPOOL_F_SC_GET)
>> +rte_mempool_set_ops_byname(mp, "ring_mp_sc");
>> +else
>> +rte_mempool_set_ops_byname(mp, "ring_mp_mc");
>> +
> Is there any way I can achieve the above use case of multiple pools which can 
> be selected by an application - something like a run-time toggle/flag?
>
> -
> Shreyansh

Yes, you can create multiple pools, some using the default handlers, and 
some using external handlers.
There is an example of this in the autotests (app/test/test_mempool.c). 
This test creates multiple
mempools, of which one is a custom malloc based mempool handler. The 
test puts and gets mbufs
to/from each mempool all in the same application.
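
In outline (a sketch, not the actual autotest code; "custom_handler" stands
for whatever name the external ops were registered under):

	struct rte_mempool *a, *b;
	void *obj;

	a = rte_mempool_create_empty("pool_a", 4096, 2048, 256, 0,
			SOCKET_ID_ANY, 0);
	b = rte_mempool_create_empty("pool_b", 4096, 2048, 256, 0,
			SOCKET_ID_ANY, 0);

	rte_mempool_set_ops_byname(a, "ring_mp_mc");	/* default ring ops */
	rte_mempool_set_ops_byname(b, "custom_handler");	/* external ops */
	rte_mempool_populate_default(a);
	rte_mempool_populate_default(b);

	/* gets and puts look identical for both pools */
	if (rte_mempool_get(b, &obj) == 0)
		rte_mempool_put(b, obj);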

Regards,
Dave.






[dpdk-dev] [PATCH v8 1/3] mempool: support external mempool operations

2016-06-07 Thread Shreyansh Jain
> -Original Message-
> From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of David Hunt
> Sent: Friday, June 03, 2016 8:28 PM
> To: dev at dpdk.org
> Cc: olivier.matz at 6wind.com; viktorin at rehivetech.com;
> jerin.jacob at caviumnetworks.com; David Hunt 
> Subject: [dpdk-dev] [PATCH v8 1/3] mempool: support external mempool
> operations
> 

[...]

> +int
> +rte_mempool_ops_register(const struct rte_mempool_ops *h)
> +{
> + struct rte_mempool_ops *ops;
> + int16_t ops_index;
> +
> + rte_spinlock_lock(&rte_mempool_ops_table.sl);
> +
> + if (rte_mempool_ops_table.num_ops >=
> + RTE_MEMPOOL_MAX_OPS_IDX) {
> + rte_spinlock_unlock(&rte_mempool_ops_table.sl);
> + RTE_LOG(ERR, MEMPOOL,
> + "Maximum number of mempool ops structs exceeded\n");
> + return -ENOSPC;
> + }
> +
> + if (h->put == NULL || h->get == NULL || h->get_count == NULL) {
> + rte_spinlock_unlock(&rte_mempool_ops_table.sl);
> + RTE_LOG(ERR, MEMPOOL,
> + "Missing callback while registering mempool ops\n");
> + return -EINVAL;
> + }
> +
> + if (strlen(h->name) >= sizeof(ops->name) - 1) {
> + RTE_LOG(DEBUG, EAL, "%s(): mempool_ops <%s>: name too long\n",
> + __func__, h->name);
> + rte_errno = EEXIST;
> + return NULL;

rte_mempool_ops_register has return type 'int'. Above should be 'return 
rte_errno;', or probably 'return -EEXIST;' itself.

> + }
> +
> + ops_index = rte_mempool_ops_table.num_ops++;
> + ops = &rte_mempool_ops_table.ops[ops_index];
> + snprintf(ops->name, sizeof(ops->name), "%s", h->name);
> + ops->alloc = h->alloc;
> + ops->put = h->put;
> + ops->get = h->get;
> + ops->get_count = h->get_count;
> +
> + rte_spinlock_unlock(&rte_mempool_ops_table.sl);
> +
> + return ops_index;
> +}
> +
[...]

-
Shreyansh


[dpdk-dev] [PATCH v8 1/3] mempool: support external mempool operations

2016-06-06 Thread Shreyansh Jain
Hi,

(Apologies for overly-eager email sent on this thread earlier. Will be more 
careful in future).

This is more of a question/clarification than a comment. (And I have taken only 
some snippets from original mail to keep it cleaner)


> +MEMPOOL_REGISTER_OPS(ops_mp_mc);
> +MEMPOOL_REGISTER_OPS(ops_sp_sc);
> +MEMPOOL_REGISTER_OPS(ops_mp_sc);
> +MEMPOOL_REGISTER_OPS(ops_sp_mc);


From the above what I understand is that multiple packet pool handlers can be 
created.

I have a use-case where application has multiple pools but only the packet pool 
is hardware backed. Using the hardware for general buffer requirements would 
prove costly.
From what I understand from the patch, selection of the pool is based on the 
flags below.


> + /*
> +  * Since we have 4 combinations of the SP/SC/MP/MC examine the flags to
> +  * set the correct index into the table of ops structs.
> +  */
> + if (flags & (MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET))
> + rte_mempool_set_ops_byname(mp, "ring_sp_sc");
> + else if (flags & MEMPOOL_F_SP_PUT)
> + rte_mempool_set_ops_byname(mp, "ring_sp_mc");
> + else if (flags & MEMPOOL_F_SC_GET)
> + rte_mempool_set_ops_byname(mp, "ring_mp_sc");
> + else
> + rte_mempool_set_ops_byname(mp, "ring_mp_mc");
> +

Is there any way I can achieve the above use case of multiple pools which can 
be selected by an application - something like a run-time toggle/flag?

-
Shreyansh


[dpdk-dev] [PATCH v8 1/3] mempool: support external mempool operations

2016-06-06 Thread Shreyansh Jain
Hi,

This is more of a question/clarification than a comment. (And I have taken only 
some snippets from original mail to keep it cleaner)


> +MEMPOOL_REGISTER_OPS(ops_mp_mc);
> +MEMPOOL_REGISTER_OPS(ops_sp_sc);
> +MEMPOOL_REGISTER_OPS(ops_mp_sc);
> +MEMPOOL_REGISTER_OPS(ops_sp_mc);




> + /*
> +  * Since we have 4 combinations of the SP/SC/MP/MC examine the flags to
> +  * set the correct index into the table of ops structs.
> +  */
> + if (flags & (MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET))
> + rte_mempool_set_ops_byname(mp, "ring_sp_sc");
> + else if (flags & MEMPOOL_F_SP_PUT)
> + rte_mempool_set_ops_byname(mp, "ring_sp_mc");
> + else if (flags & MEMPOOL_F_SC_GET)
> + rte_mempool_set_ops_byname(mp, "ring_mp_sc");
> + else
> + rte_mempool_set_ops_byname(mp, "ring_mp_mc");
> +



[dpdk-dev] [PATCH v8 1/3] mempool: support external mempool operations

2016-06-03 Thread David Hunt
Until now, the objects stored in a mempool were internally stored in a
ring. This patch introduces the possibility to register external handlers
replacing the ring.

The default behavior remains unchanged, but calling the new function
rte_mempool_set_ops_byname() right after rte_mempool_create_empty() allows
the user to change the handler that will be used when populating
the mempool.

This patch also adds a set of default ops (function callbacks) based
on rte_ring.

Signed-off-by: Olivier Matz 
Signed-off-by: David Hunt 
---
 app/test/test_mempool_perf.c |   1 -
 lib/librte_mempool/Makefile  |   2 +
 lib/librte_mempool/rte_mempool.c |  73 -
 lib/librte_mempool/rte_mempool.h | 247 ---
 lib/librte_mempool/rte_mempool_default.c | 157 
 lib/librte_mempool/rte_mempool_ops.c | 149 +++
 6 files changed, 562 insertions(+), 67 deletions(-)
 create mode 100644 lib/librte_mempool/rte_mempool_default.c
 create mode 100644 lib/librte_mempool/rte_mempool_ops.c

diff --git a/app/test/test_mempool_perf.c b/app/test/test_mempool_perf.c
index cdc02a0..091c1df 100644
--- a/app/test/test_mempool_perf.c
+++ b/app/test/test_mempool_perf.c
@@ -161,7 +161,6 @@ per_lcore_mempool_test(__attribute__((unused)) void *arg)
   n_get_bulk);
if (unlikely(ret < 0)) {
rte_mempool_dump(stdout, mp);
-   rte_ring_dump(stdout, mp->ring);
/* in this case, objects are lost... */
return -1;
}
diff --git a/lib/librte_mempool/Makefile b/lib/librte_mempool/Makefile
index 43423e0..8cac29b 100644
--- a/lib/librte_mempool/Makefile
+++ b/lib/librte_mempool/Makefile
@@ -42,6 +42,8 @@ LIBABIVER := 2

 # all source are stored in SRCS-y
 SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool.c
+SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool_ops.c
+SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool_default.c
 # install includes
 SYMLINK-$(CONFIG_RTE_LIBRTE_MEMPOOL)-include := rte_mempool.h

diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index b54de43..eb74e25 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -148,7 +148,7 @@ mempool_add_elem(struct rte_mempool *mp, void *obj, 
phys_addr_t physaddr)
 #endif

/* enqueue in ring */
-   rte_ring_sp_enqueue(mp->ring, obj);
+   rte_mempool_ops_enqueue_bulk(mp, &obj, 1);
 }

 /* call obj_cb() for each mempool element */
@@ -303,40 +303,6 @@ rte_mempool_xmem_usage(__rte_unused void *vaddr, uint32_t 
elt_num,
return (size_t)paddr_idx << pg_shift;
 }

-/* create the internal ring */
-static int
-rte_mempool_ring_create(struct rte_mempool *mp)
-{
-   int rg_flags = 0, ret;
-   char rg_name[RTE_RING_NAMESIZE];
-   struct rte_ring *r;
-
-   ret = snprintf(rg_name, sizeof(rg_name),
-   RTE_MEMPOOL_MZ_FORMAT, mp->name);
-   if (ret < 0 || ret >= (int)sizeof(rg_name))
-   return -ENAMETOOLONG;
-
-   /* ring flags */
-   if (mp->flags & MEMPOOL_F_SP_PUT)
-   rg_flags |= RING_F_SP_ENQ;
-   if (mp->flags & MEMPOOL_F_SC_GET)
-   rg_flags |= RING_F_SC_DEQ;
-
-   /* Allocate the ring that will be used to store objects.
-* Ring functions will return appropriate errors if we are
-* running as a secondary process etc., so no checks made
-* in this function for that condition.
-*/
-   r = rte_ring_create(rg_name, rte_align32pow2(mp->size + 1),
-   mp->socket_id, rg_flags);
-   if (r == NULL)
-   return -rte_errno;
-
-   mp->ring = r;
-   mp->flags |= MEMPOOL_F_RING_CREATED;
-   return 0;
-}
-
 /* free a memchunk allocated with rte_memzone_reserve() */
 static void
 rte_mempool_memchunk_mz_free(__rte_unused struct rte_mempool_memhdr *memhdr,
@@ -354,7 +320,7 @@ rte_mempool_free_memchunks(struct rte_mempool *mp)
void *elt;

while (!STAILQ_EMPTY(&mp->elt_list)) {
-   rte_ring_sc_dequeue(mp->ring, &elt);
+   rte_mempool_ops_dequeue_bulk(mp, &elt, 1);
(void)elt;
STAILQ_REMOVE_HEAD(&mp->elt_list, next);
mp->populated_size--;
@@ -383,13 +349,16 @@ rte_mempool_populate_phys(struct rte_mempool *mp, char 
*vaddr,
unsigned i = 0;
size_t off;
struct rte_mempool_memhdr *memhdr;
-   int ret;

/* create the internal ring if not already done */
-   if ((mp->flags & MEMPOOL_F_RING_CREATED) == 0) {
-   ret = rte_mempool_ring_create(mp);
-   if (ret < 0)
-   return ret;
+   if ((mp->flags & MEMPOOL_F_POOL_CREATED) == 0) {
+   rte_errno = 0;
+