[dpdk-dev] [RFC 0/7] changing mbuf pool handler

2016-10-05 Thread Hunt, David


On 5/10/2016 12:49 PM, Hemant Agrawal wrote:
> Hi Olivier,
>
>> -Original Message-
>> From: Hunt, David [mailto:david.hunt at intel.com]
>> Hi Olivier,
>>
>>
>> On 3/10/2016 4:49 PM, Olivier Matz wrote:
>>> Hi Hemant,
>>>
>>> Thank you for your feedback.
>>>
>>> On 09/22/2016 01:52 PM, Hemant Agrawal wrote:
>>>> Hi Olivier
>>>>
>>>> On 9/19/2016 7:12 PM, Olivier Matz wrote:
>>>>> Hello,
>>>>>
>>>>> Following discussion from [1] ("usages issue with external mempool").
>>>>>
>>>>> This is an attempt to make the mempool_ops feature introduced by
>>>>> David Hunt [2] more widely used by applications.
>>>>>
>>>>> It applies on top of a minor fix in mbuf lib [3].
>>>>>
>>>>> To summarize the needs (please comment if I did not get it right):
>>>>>
>>>>> - new hw-assisted mempool handlers will soon be introduced
>>>>> - to make use of them, the new mempool API [4] (rte_mempool_create_empty,
>>>>>   rte_mempool_populate, ...) has to be used
>>>>> - the legacy mempool API (rte_mempool_create) does not allow to change
>>>>>   the mempool ops. The default is "ring_<s|m>p_<s|m>c" depending on
>>>>>   flags.
>>>>> - the mbuf helper (rte_pktmbuf_pool_create) does not allow to change
>>>>>   them either, and the default is RTE_MBUF_DEFAULT_MEMPOOL_OPS
>>>>>   ("ring_mp_mc")
>>>>> - today, most (if not all) applications and examples use either
>>>>>   rte_pktmbuf_pool_create or rte_mempool_create to create the mbuf
>>>>>   pool, making it difficult to take advantage of this feature with
>>>>>   existing apps.
>>>>>
>>>>> My initial idea was to deprecate both rte_pktmbuf_pool_create() and
>>>>> rte_mempool_create(), forcing the applications to use the new API,
>>>>> which is more flexible. But after digging a bit, it appeared that
>>>>> rte_mempool_create() is widely used, and not only for mbufs.
>>>>> Deprecating it would have a big impact on applications, and
>>>>> replacing it with the new API would be overkill in many use-cases.
>>>> I agree with the proposal.
>>>>
>>>>> So I finally tried the following approach (inspired by a
>>>>> suggestion from Jerin [5]):
>>>>>
>>>>> - add a new mempool_ops parameter to rte_pktmbuf_pool_create(). This
>>>>>   unfortunately breaks the API, but I implemented an ABI compat layer.
>>>>>   If the patch is accepted, we could discuss how to announce/schedule
>>>>>   the API change.
>>>>> - update the applications and documentation to prefer
>>>>>   rte_pktmbuf_pool_create() as much as possible
>>>>> - update most used examples (testpmd, l2fwd, l3fwd) to add a new
>>>>>   command line argument to select the mempool handler
>>>>>
>>>>> I hope the external applications would then switch to
>>>>> rte_pktmbuf_pool_create(), since it supports most of the use-cases
>>>>> (even priv_size != 0, since we can call rte_mempool_obj_iter() afterwards).
>>>>>
>>>> I would still prefer it if you could add "rte_mempool_obj_cb_t *obj_cb,
>>>> void *obj_cb_arg" to "rte_pktmbuf_pool_create". This single
>>>> consolidated wrapper will make it almost certain that applications
>>>> will not try to use rte_mempool_create for packet buffers.
>>> The patch changes the example applications. I'm not sure I understand
>>> why adding these arguments would force applications not to use
>>> rte_mempool_create() for packet buffers. Do you have an application in
>>> mind?
>>>
>>> For the mempool_ops parameter, we must pass it at init because we need
>>> to know the mempool handler before populating the pool. Object
>>> initialization can be done afterwards, so I thought it was better to
>>> reduce the number of arguments and avoid falling into the
>>> mempool_create() syndrome :)
>> I also agree with the proposal. Looks cleaner.
>>
>> I would lean to the side of keeping the parameters to a minimum, i.e.
>> not adding *obj_cb and *obj_cb_arg to rte_pktmbuf_pool_create.
>> Developers always have the option of going with rte_mempool_create if they
>> need more fine-grained control.
> [Hemant] The implementations with hw-offloaded mempools don't want developers
> using *rte_mempool_create* for packet buffer pools.
> This API does not work for hw-offloaded mempools.
>
> Also, *rte_mempool_create_empty* may not be convenient for many
> applications, as it requires calling 4+ APIs.
>
> Olivier is not in favor of deprecating *rte_mempool_create*. I agree
> with the concerns he raised.
>
> Essentially, I was suggesting upgrading *rte_pktmbuf_pool_create* to be
> like *rte_mempool_create*, but for packet buffers exclusively.
>
> This will provide a clear segregation of API usage w.r.t. the packet buffer
> pool vs. all other types of mempools.

Yes, it does sound like we need those extra parameters on 
rte_pktmbuf_pool_create.

Regards,
Dave.


[dpdk-dev] [RFC 0/7] changing mbuf pool handler

2016-10-05 Thread Hemant Agrawal
Hi Olivier,

> -Original Message-
> From: Hunt, David [mailto:david.hunt at intel.com]
> Hi Olivier,
> 
> 
> On 3/10/2016 4:49 PM, Olivier Matz wrote:
> > Hi Hemant,
> >
> > Thank you for your feedback.
> >
> > On 09/22/2016 01:52 PM, Hemant Agrawal wrote:
> >> Hi Olivier
> >>
> >> On 9/19/2016 7:12 PM, Olivier Matz wrote:
> >>> Hello,
> >>>
> >>> Following discussion from [1] ("usages issue with external mempool").
> >>>
> >>> This is an attempt to make the mempool_ops feature introduced by
> >>> David Hunt [2] more widely used by applications.
> >>>
> >>> It applies on top of a minor fix in mbuf lib [3].
> >>>
> >>> To summarize the needs (please comment if I did not get it right):
> >>>
> >>> - new hw-assisted mempool handlers will soon be introduced
> >>> - to make use of them, the new mempool API [4] (rte_mempool_create_empty,
> >>>   rte_mempool_populate, ...) has to be used
> >>> - the legacy mempool API (rte_mempool_create) does not allow to change
> >>>   the mempool ops. The default is "ring_<s|m>p_<s|m>c" depending on
> >>>   flags.
> >>> - the mbuf helper (rte_pktmbuf_pool_create) does not allow to change
> >>>them either, and the default is RTE_MBUF_DEFAULT_MEMPOOL_OPS
> >>>("ring_mp_mc")
> >>> - today, most (if not all) applications and examples use either
> >>>rte_pktmbuf_pool_create or rte_mempool_create to create the mbuf
> >>>pool, making it difficult to take advantage of this feature with
> >>>existing apps.
> >>>
> >>> My initial idea was to deprecate both rte_pktmbuf_pool_create() and
> >>> rte_mempool_create(), forcing the applications to use the new API,
> >>> which is more flexible. But after digging a bit, it appeared that
> >>> rte_mempool_create() is widely used, and not only for mbufs.
> >>> Deprecating it would have a big impact on applications, and
> >>> replacing it with the new API would be overkill in many use-cases.
> >> I agree with the proposal.
> >>
> >>> So I finally tried the following approach (inspired by a
> >>> suggestion from Jerin [5]):
> >>>
> >>> - add a new mempool_ops parameter to rte_pktmbuf_pool_create(). This
> >>>   unfortunately breaks the API, but I implemented an ABI compat layer.
> >>>   If the patch is accepted, we could discuss how to announce/schedule
> >>>   the API change.
> >>> - update the applications and documentation to prefer
> >>>rte_pktmbuf_pool_create() as much as possible
> >>> - update most used examples (testpmd, l2fwd, l3fwd) to add a new
> >>>   command line argument to select the mempool handler
> >>>
> >>> I hope the external applications would then switch to
> >>> rte_pktmbuf_pool_create(), since it supports most of the use-cases
> >>> (even priv_size != 0, since we can call rte_mempool_obj_iter() afterwards).
> >>>
> >> I would still prefer it if you could add "rte_mempool_obj_cb_t *obj_cb,
> >> void *obj_cb_arg" to "rte_pktmbuf_pool_create". This single
> >> consolidated wrapper will make it almost certain that applications
> >> will not try to use rte_mempool_create for packet buffers.
> > The patch changes the example applications. I'm not sure I understand
> > why adding these arguments would force applications not to use
> > rte_mempool_create() for packet buffers. Do you have an application in
> > mind?
> >
> > For the mempool_ops parameter, we must pass it at init because we need
> > to know the mempool handler before populating the pool. Object
> > initialization can be done afterwards, so I thought it was better to
> > reduce the number of arguments and avoid falling into the
> > mempool_create() syndrome :)
> 
> I also agree with the proposal. Looks cleaner.
> 
> I would lean to the side of keeping the parameters to a minimum, i.e.
> not adding *obj_cb and *obj_cb_arg to rte_pktmbuf_pool_create.
> Developers always have the option of going with rte_mempool_create if they
> need more fine-grained control.

[Hemant] The implementations with hw-offloaded mempools don't want developers
using *rte_mempool_create* for packet buffer pools.
This API does not work for hw-offloaded mempools.

Also, *rte_mempool_create_empty* may not be convenient for many applications,
as it requires calling 4+ APIs.
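
For reference, a minimal sketch of that call sequence with the current API
(the wrapper name, sizes and error handling below are illustrative, not from
the patchset):

#include <string.h>

#include <rte_mbuf.h>
#include <rte_mempool.h>

/* Illustrative helper only: roughly the calls an application has to make
 * with the new API to get an mbuf pool on a specific handler. */
static struct rte_mempool *
mbuf_pool_create_on_ops(const char *name, unsigned int n,
			unsigned int cache_size, uint16_t priv_size,
			uint16_t data_room_size, int socket_id,
			const char *ops_name)
{
	struct rte_pktmbuf_pool_private priv;
	struct rte_mempool *mp;

	/* 1. create an empty pool, sized for mbufs */
	mp = rte_mempool_create_empty(name, n,
		sizeof(struct rte_mbuf) + priv_size + data_room_size,
		cache_size, sizeof(priv), socket_id, 0);
	if (mp == NULL)
		return NULL;

	/* 2. select the mempool handler (e.g. "ring_mp_mc" or a hw one) */
	if (rte_mempool_set_ops_byname(mp, ops_name, NULL) != 0)
		goto fail;

	/* 3. fill in the pktmbuf pool private area (room/priv sizes) */
	memset(&priv, 0, sizeof(priv));
	priv.mbuf_data_room_size = data_room_size;
	priv.mbuf_priv_size = priv_size;
	rte_pktmbuf_pool_init(mp, &priv);

	/* 4. populate the pool with objects */
	if (rte_mempool_populate_default(mp) < 0)
		goto fail;

	/* 5. initialize each object as an mbuf */
	rte_mempool_obj_iter(mp, rte_pktmbuf_init, NULL);

	return mp;

fail:
	rte_mempool_free(mp);
	return NULL;
}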

Olivier is not in favor of deprecating *rte_mempool_create*. I agree with
the concerns he raised.

Essentially, I was suggesting upgrading *rte_pktmbuf_pool_create* to be like
*rte_mempool_create*, but for packet buffers exclusively.

This will provide a clear segregation of API usage w.r.t. the packet buffer
pool vs. all other types of mempools.
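
A hypothetical prototype for such a consolidated wrapper (not an existing
DPDK declaration; parameter names and ordering are only indicative) could
look like:

/* Hypothetical prototype: the proposed mempool ops name plus the
 * obj_cb/obj_cb_arg pair suggested above, on top of today's arguments. */
struct rte_mempool *
rte_pktmbuf_pool_create(const char *name, unsigned int n,
			unsigned int cache_size, uint16_t priv_size,
			uint16_t data_room_size, int socket_id,
			const char *ops_name,
			rte_mempool_obj_cb_t *obj_cb, void *obj_cb_arg);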


Regards,
Hemant

> 
> Regards,
> Dave.
> 
> > Any other opinions?
> >
> > Regards,
> > Olivier



[dpdk-dev] [RFC 0/7] changing mbuf pool handler

2016-10-05 Thread Hunt, David
Hi Olivier,


On 3/10/2016 4:49 PM, Olivier Matz wrote:
> Hi Hemant,
>
> Thank you for your feedback.
>
> On 09/22/2016 01:52 PM, Hemant Agrawal wrote:
>> Hi Olivier
>>
>> On 9/19/2016 7:12 PM, Olivier Matz wrote:
>>> Hello,
>>>
>>> Following discussion from [1] ("usages issue with external mempool").
>>>
>>> This is an attempt to make the mempool_ops feature introduced
>>> by David Hunt [2] more widely used by applications.
>>>
>>> It applies on top of a minor fix in mbuf lib [3].
>>>
>>> To summarize the needs (please comment if I did not get it right):
>>>
>>> - new hw-assisted mempool handlers will soon be introduced
>>> - to make use of them, the new mempool API [4] (rte_mempool_create_empty,
>>>   rte_mempool_populate, ...) has to be used
>>> - the legacy mempool API (rte_mempool_create) does not allow to change
>>>   the mempool ops. The default is "ring_<s|m>p_<s|m>c" depending on
>>>   flags.
>>> - the mbuf helper (rte_pktmbuf_pool_create) does not allow to change
>>>them either, and the default is RTE_MBUF_DEFAULT_MEMPOOL_OPS
>>>("ring_mp_mc")
>>> - today, most (if not all) applications and examples use either
>>>rte_pktmbuf_pool_create or rte_mempool_create to create the mbuf
>>>pool, making it difficult to take advantage of this feature with
>>>existing apps.
>>>
>>> My initial idea was to deprecate both rte_pktmbuf_pool_create() and
>>> rte_mempool_create(), forcing the applications to use the new API, which
>>> is more flexible. But after digging a bit, it appeared that
>>> rte_mempool_create() is widely used, and not only for mbufs. Deprecating
>>> it would have a big impact on applications, and replacing it with the
>>> new API would be overkill in many use-cases.
>> I agree with the proposal.
>>
>>> So I finally tried the following approach (inspired by a suggestion
>>> from Jerin [5]):
>>>
>>> - add a new mempool_ops parameter to rte_pktmbuf_pool_create(). This
>>>   unfortunately breaks the API, but I implemented an ABI compat layer.
>>>If the patch is accepted, we could discuss how to announce/schedule
>>>the API change.
>>> - update the applications and documentation to prefer
>>>rte_pktmbuf_pool_create() as much as possible
>>> - update most used examples (testpmd, l2fwd, l3fwd) to add a new command
>>>line argument to select the mempool handler
>>>
>>> I hope the external applications would then switch to
>>> rte_pktmbuf_pool_create(), since it supports most of the use-cases (even
>>> priv_size != 0, since we can call rte_mempool_obj_iter() afterwards).
>>>
>> I would still prefer it if you could add "rte_mempool_obj_cb_t *obj_cb,
>> void *obj_cb_arg" to "rte_pktmbuf_pool_create". This single
>> consolidated wrapper will make it almost certain that applications will
>> not try to use rte_mempool_create for packet buffers.
> The patch changes the example applications. I'm not sure I understand
> why adding these arguments would force applications not to use
> rte_mempool_create() for packet buffers. Do you have an application in mind?
>
> For the mempool_ops parameter, we must pass it at init because we need
> to know the mempool handler before populating the pool. Object
> initialization can be done afterwards, so I thought it was better to
> reduce the number of arguments and avoid falling into the mempool_create()
> syndrome :)

I also agree with the proposal. Looks cleaner.

I would lean to the side of keeping the parameters to a minimum, i.e.
not adding *obj_cb and *obj_cb_arg to rte_pktmbuf_pool_create.
Developers always have the option of going with rte_mempool_create if
they need more fine-grained control.
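
For instance, a minimal sketch of that legacy path, following the pattern of
the older examples (sizes and cache depth are placeholders):

#include <rte_lcore.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

#define NB_MBUF   8192
#define MBUF_SIZE (RTE_PKTMBUF_HEADROOM + 2048 + sizeof(struct rte_mbuf))

/* Sketch of the legacy, fully parameterized call: pool and per-object init
 * callbacks are passed explicitly. */
static struct rte_mempool *
legacy_mbuf_pool_create(void)
{
	return rte_mempool_create("mbuf_pool", NB_MBUF, MBUF_SIZE,
				  32 /* cache size */,
				  sizeof(struct rte_pktmbuf_pool_private),
				  rte_pktmbuf_pool_init, NULL,
				  rte_pktmbuf_init, NULL,
				  rte_socket_id(), 0);
}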

Regards,
Dave.

> Any other opinions?
>
> Regards,
> Olivier



[dpdk-dev] [RFC 0/7] changing mbuf pool handler

2016-10-03 Thread Olivier Matz
Hi Hemant,

Thank you for your feedback.

On 09/22/2016 01:52 PM, Hemant Agrawal wrote:
> Hi Olivier
> 
> On 9/19/2016 7:12 PM, Olivier Matz wrote:
>> Hello,
>>
>> Following discussion from [1] ("usages issue with external mempool").
>>
>> This is an attempt to make the mempool_ops feature introduced
>> by David Hunt [2] more widely used by applications.
>>
>> It applies on top of a minor fix in mbuf lib [3].
>>
>> To summarize the needs (please comment if I did not get it right):
>>
>> - new hw-assisted mempool handlers will soon be introduced
>> - to make use of them, the new mempool API [4] (rte_mempool_create_empty,
>>   rte_mempool_populate, ...) has to be used
>> - the legacy mempool API (rte_mempool_create) does not allow to change
>>   the mempool ops. The default is "ring_<s|m>p_<s|m>c" depending on
>>   flags.
>> - the mbuf helper (rte_pktmbuf_pool_create) does not allow to change
>>   them either, and the default is RTE_MBUF_DEFAULT_MEMPOOL_OPS
>>   ("ring_mp_mc")
>> - today, most (if not all) applications and examples use either
>>   rte_pktmbuf_pool_create or rte_mempool_create to create the mbuf
>>   pool, making it difficult to take advantage of this feature with
>>   existing apps.
>>
>> My initial idea was to deprecate both rte_pktmbuf_pool_create() and
>> rte_mempool_create(), forcing the applications to use the new API, which
>> is more flexible. But after digging a bit, it appeared that
>> rte_mempool_create() is widely used, and not only for mbufs. Deprecating
>> it would have a big impact on applications, and replacing it with the
>> new API would be overkill in many use-cases.
> 
> I agree with the proposal.
> 
>>
>> So I finally tried the following approach (inspired by a suggestion
>> from Jerin [5]):
>>
>> - add a new mempool_ops parameter to rte_pktmbuf_pool_create(). This
>>   unfortunately breaks the API, but I implemented an ABI compat layer.
>>   If the patch is accepted, we could discuss how to announce/schedule
>>   the API change.
>> - update the applications and documentation to prefer
>>   rte_pktmbuf_pool_create() as much as possible
>> - update most used examples (testpmd, l2fwd, l3fwd) to add a new command
>>   line argument to select the mempool handler
>>
>> I hope the external applications would then switch to
>> rte_pktmbuf_pool_create(), since it supports most of the use-cases (even
>> priv_size != 0, since we can call rte_mempool_obj_iter() afterwards).
>>
> 
> I would still prefer it if you could add "rte_mempool_obj_cb_t *obj_cb,
> void *obj_cb_arg" to "rte_pktmbuf_pool_create". This single
> consolidated wrapper will make it almost certain that applications will
> not try to use rte_mempool_create for packet buffers.

The patch changes the example applications. I'm not sure I understand
why adding these arguments would force applications not to use
rte_mempool_create() for packet buffers. Do you have an application in mind?

For the mempool_ops parameter, we must pass it at init because we need
to know the mempool handler before populating the pool. Object
initialization can be done afterwards, so I thought it was better to
reduce the number of arguments and avoid falling into the mempool_create()
syndrome :)
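
As a small illustration of that ordering (a sketch, not from the patchset;
my_obj_cb and the sizes are placeholders): the handler is fixed by the
creation call, while per-object initialization can still run afterwards via
rte_mempool_obj_iter().

#include <string.h>

#include <rte_lcore.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

/* Placeholder per-object callback: a real application would set up its own
 * metadata here (e.g. in the private area that follows the mbuf). */
static void
my_obj_cb(struct rte_mempool *mp, void *arg, void *obj, unsigned int idx)
{
	struct rte_mbuf *m = obj;

	(void)arg;
	(void)idx;
	memset(m + 1, 0, rte_pktmbuf_priv_size(mp));
}

static struct rte_mempool *
create_pool_then_init_objs(void)
{
	struct rte_mempool *mp;

	/* the mempool handler must be known here, before population */
	mp = rte_pktmbuf_pool_create("mbuf_pool", 8192, 256,
				     64 /* priv_size */,
				     RTE_MBUF_DEFAULT_BUF_SIZE,
				     rte_socket_id());
	if (mp == NULL)
		return NULL;

	/* per-object initialization can still be done afterwards */
	rte_mempool_obj_iter(mp, my_obj_cb, NULL);

	return mp;
}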

Any other opinions?

Regards,
Olivier


[dpdk-dev] [RFC 0/7] changing mbuf pool handler

2016-09-22 Thread Hemant Agrawal
Hi Olivier

On 9/19/2016 7:12 PM, Olivier Matz wrote:
> Hello,
>
> Following discussion from [1] ("usages issue with external mempool").
>
> This is an attempt to make the mempool_ops feature introduced
> by David Hunt [2] more widely used by applications.
>
> It applies on top of a minor fix in mbuf lib [3].
>
> To summarize the needs (please comment if I did not get it right):
>
> - new hw-assisted mempool handlers will soon be introduced
> - to make use of them, the new mempool API [4] (rte_mempool_create_empty,
>   rte_mempool_populate, ...) has to be used
> - the legacy mempool API (rte_mempool_create) does not allow to change
>   the mempool ops. The default is "ring_<s|m>p_<s|m>c" depending on
>   flags.
> - the mbuf helper (rte_pktmbuf_pool_create) does not allow to change
>   them either, and the default is RTE_MBUF_DEFAULT_MEMPOOL_OPS
>   ("ring_mp_mc")
> - today, most (if not all) applications and examples use either
>   rte_pktmbuf_pool_create or rte_mempool_create to create the mbuf
>   pool, making it difficult to take advantage of this feature with
>   existing apps.
>
> My initial idea was to deprecate both rte_pktmbuf_pool_create() and
> rte_mempool_create(), forcing the applications to use the new API, which
> is more flexible. But after digging a bit, it appeared that
> rte_mempool_create() is widely used, and not only for mbufs. Deprecating
> it would have a big impact on applications, and replacing it with the
> new API would be overkill in many use-cases.

I agree with the proposal.

>
> So I finally tried the following approach (inspired by a suggestion
> from Jerin [5]):
>
> - add a new mempool_ops parameter to rte_pktmbuf_pool_create(). This
>   unfortunately breaks the API, but I implemented an ABI compat layer.
>   If the patch is accepted, we could discuss how to announce/schedule
>   the API change.
> - update the applications and documentation to prefer
>   rte_pktmbuf_pool_create() as much as possible
> - update most used examples (testpmd, l2fwd, l3fwd) to add a new command
>   line argument to select the mempool handler
>
> I hope the external applications would then switch to
> rte_pktmbuf_pool_create(), since it supports most of the use-cases (even
> priv_size != 0, since we can call rte_mempool_obj_iter() afterwards).
>

I would still prefer it if you could add "rte_mempool_obj_cb_t *obj_cb,
void *obj_cb_arg" to "rte_pktmbuf_pool_create". This single
consolidated wrapper will make it almost certain that applications will
not try to use rte_mempool_create for packet buffers.



> Comments are of course welcome. Note: the patchset is not really
> tested yet.
>
>
> Thanks,
> Olivier
>
> [1] http://dpdk.org/ml/archives/dev/2016-July/044734.html
> [2] http://dpdk.org/ml/archives/dev/2016-June/042423.html
> [3] http://www.dpdk.org/dev/patchwork/patch/15923/
> [4] http://dpdk.org/ml/archives/dev/2016-May/039229.html
> [5] http://dpdk.org/ml/archives/dev/2016-July/044779.html
>
>
> Olivier Matz (7):
>   mbuf: set the handler at mbuf pool creation
>   mbuf: use helper to create the pool
>   testpmd: new parameter to set mbuf pool ops
>   l3fwd: rework long options parsing
>   l3fwd: new parameter to set mbuf pool ops
>   l2fwd: rework long options parsing
>   l2fwd: new parameter to set mbuf pool ops
>
>  app/pdump/main.c   |   2 +-
>  app/test-pipeline/init.c   |   3 +-
>  app/test-pmd/parameters.c  |   5 +
>  app/test-pmd/testpmd.c |  16 +-
>  app/test-pmd/testpmd.h |   1 +
>  app/test/test_cryptodev.c  |   2 +-
>  app/test/test_cryptodev_perf.c |   2 +-
>  app/test/test_distributor.c|   2 +-
>  app/test/test_distributor_perf.c   |   2 +-
>  app/test/test_kni.c|   2 +-
>  app/test/test_link_bonding.c   |   2 +-
>  app/test/test_link_bonding_mode4.c |   2 +-
>  app/test/test_link_bonding_rssconf.c   |  11 +-
>  app/test/test_mbuf.c   |   6 +-
>  app/test/test_pmd_perf.c   |   3 +-
>  app/test/test_pmd_ring.c   |   2 +-
>  app/test/test_reorder.c|   2 +-
>  app/test/test_sched.c  |   2 +-
>  app/test/test_table.c  |   2 +-
>  doc/guides/prog_guide/mbuf_lib.rst |   2 +-
>  doc/guides/sample_app_ug/ip_reassembly.rst |  13 +-
>  doc/guides/sample_app_ug/ipv4_multicast.rst|  12 +-
>  doc/guides/sample_app_ug/l2_forward_job_stats.rst  |  33 ++--
>  .../sample_app_ug/l2_forward_real_virtual.rst  |  26 ++-
>  doc/guides/sample_app_ug/ptpclient.rst |  12 +-
>  doc/guides/sample_app_ug/quota_watermark.rst   |  26 ++-
>  drivers/net/bonding/rte_eth_bond_8023ad.c  |  13 +-
>  drivers/net/bond

[dpdk-dev] [RFC 0/7] changing mbuf pool handler

2016-09-19 Thread Olivier Matz
Hello,

Following discussion from [1] ("usages issue with external mempool").

This is an attempt to make the mempool_ops feature introduced
by David Hunt [2] more widely used by applications.

It applies on top of a minor fix in mbuf lib [3].

To summarize the needs (please comment if I did not get it right):

- new hw-assisted mempool handlers will soon be introduced
- to make use of them, the new mempool API [4] (rte_mempool_create_empty,
  rte_mempool_populate, ...) has to be used
- the legacy mempool API (rte_mempool_create) does not allow to change
  the mempool ops. The default is "ring_<s|m>p_<s|m>c" depending on
  flags.
- the mbuf helper (rte_pktmbuf_pool_create) does not allow to change
  them either, and the default is RTE_MBUF_DEFAULT_MEMPOOL_OPS
  ("ring_mp_mc")
- today, most (if not all) applications and examples use either
  rte_pktmbuf_pool_create or rte_mempool_create to create the mbuf
  pool, making it difficult to take advantage of this feature with
  existing apps.

My initial idea was to deprecate both rte_pktmbuf_pool_create() and
rte_mempool_create(), forcing the applications to use the new API, which
is more flexible. But after digging a bit, it appeared that
rte_mempool_create() is widely used, and not only for mbufs. Deprecating
it would have a big impact on applications, and replacing it with the
new API would be overkill in many use-cases.

So I finally tried the following approach (inspired by a suggestion
from Jerin [5]):

- add a new mempool_ops parameter to rte_pktmbuf_pool_create(). This
  unfortunately breaks the API, but I implemented an ABI compat layer.
  If the patch is accepted, we could discuss how to announce/schedule
  the API change.
- update the applications and documentation to prefer
  rte_pktmbuf_pool_create() as much as possible
- update most used examples (testpmd, l2fwd, l3fwd) to add a new command
  line argument to select the mempool handler
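
Regarding the last item, a minimal sketch of the kind of long option the
examples could take (the option name "--mbuf-pool-ops" and its default are
assumptions, not the patchset's actual choice):

#include <getopt.h>
#include <string.h>

static const char *mbuf_pool_ops_name = "ring_mp_mc";

static const struct option long_opts[] = {
	{ "mbuf-pool-ops", required_argument, NULL, 0 },
	{ NULL, 0, NULL, 0 },
};

/* parse only the hypothetical --mbuf-pool-ops=<name> long option */
static void
parse_mbuf_pool_ops(int argc, char **argv)
{
	int opt, opt_idx;

	while ((opt = getopt_long(argc, argv, "", long_opts, &opt_idx)) != -1) {
		if (opt == 0 &&
		    strcmp(long_opts[opt_idx].name, "mbuf-pool-ops") == 0)
			mbuf_pool_ops_name = optarg;
	}
}

The parsed name would then be passed down to the pool creation code, e.g. to
rte_mempool_set_ops_byname() or to the extended rte_pktmbuf_pool_create().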

I hope the external applications would then switch to
rte_pktmbuf_pool_create(), since it supports most of the use-cases (even
priv_size != 0, since we can call rte_mempool_obj_iter() afterwards).
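
For illustration, the call an application might make once that parameter
exists; this is only a sketch, the extra ops_name argument and its position
are an assumption about the proposed prototype, not the final API:

#include <rte_lcore.h>
#include <rte_mbuf.h>

/* ops_name would typically come from the new command line option and
 * default to RTE_MBUF_DEFAULT_MEMPOOL_OPS ("ring_mp_mc"). */
static struct rte_mempool *
create_mbuf_pool(const char *ops_name)
{
	return rte_pktmbuf_pool_create("mbuf_pool", 8192 /* n */,
				       256 /* cache */, 0 /* priv_size */,
				       RTE_MBUF_DEFAULT_BUF_SIZE,
				       rte_socket_id(), ops_name);
}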

Comments are of course welcome. Note: the patchset is not really
tested yet.


Thanks,
Olivier

[1] http://dpdk.org/ml/archives/dev/2016-July/044734.html
[2] http://dpdk.org/ml/archives/dev/2016-June/042423.html
[3] http://www.dpdk.org/dev/patchwork/patch/15923/
[4] http://dpdk.org/ml/archives/dev/2016-May/039229.html
[5] http://dpdk.org/ml/archives/dev/2016-July/044779.html


Olivier Matz (7):
  mbuf: set the handler at mbuf pool creation
  mbuf: use helper to create the pool
  testpmd: new parameter to set mbuf pool ops
  l3fwd: rework long options parsing
  l3fwd: new parameter to set mbuf pool ops
  l2fwd: rework long options parsing
  l2fwd: new parameter to set mbuf pool ops

 app/pdump/main.c   |   2 +-
 app/test-pipeline/init.c   |   3 +-
 app/test-pmd/parameters.c  |   5 +
 app/test-pmd/testpmd.c |  16 +-
 app/test-pmd/testpmd.h |   1 +
 app/test/test_cryptodev.c  |   2 +-
 app/test/test_cryptodev_perf.c |   2 +-
 app/test/test_distributor.c|   2 +-
 app/test/test_distributor_perf.c   |   2 +-
 app/test/test_kni.c|   2 +-
 app/test/test_link_bonding.c   |   2 +-
 app/test/test_link_bonding_mode4.c |   2 +-
 app/test/test_link_bonding_rssconf.c   |  11 +-
 app/test/test_mbuf.c   |   6 +-
 app/test/test_pmd_perf.c   |   3 +-
 app/test/test_pmd_ring.c   |   2 +-
 app/test/test_reorder.c|   2 +-
 app/test/test_sched.c  |   2 +-
 app/test/test_table.c  |   2 +-
 doc/guides/prog_guide/mbuf_lib.rst |   2 +-
 doc/guides/sample_app_ug/ip_reassembly.rst |  13 +-
 doc/guides/sample_app_ug/ipv4_multicast.rst|  12 +-
 doc/guides/sample_app_ug/l2_forward_job_stats.rst  |  33 ++--
 .../sample_app_ug/l2_forward_real_virtual.rst  |  26 ++-
 doc/guides/sample_app_ug/ptpclient.rst |  12 +-
 doc/guides/sample_app_ug/quota_watermark.rst   |  26 ++-
 drivers/net/bonding/rte_eth_bond_8023ad.c  |  13 +-
 drivers/net/bonding/rte_eth_bond_alb.c |   2 +-
 examples/bond/main.c   |   2 +-
 examples/distributor/main.c|   2 +-
 examples/dpdk_qat/main.c   |   3 +-
 examples/ethtool/ethtool-app/main.c|   4 +-
 examples/exception_path/main.c |   3 +-
 examples/ip_fragmentation/main.c   |   4 +-
 examples/ip_pipeline/init.c|  19 ++-
 examples/ip_reassembly/main.c  |  16