Re: [vpp-dev] RFC: move the “1”st instance of the community meeting 3 hours earlier to reduce impact for earlier time zone

2022-12-12 Thread Damjan Marion via lists.fd.io

OK, moving the meeting tomorrow (and the following first-of-the-month meetings) 3 hours
earlier; the new time is 5-6am PT.

Wiki updated with:

Bi-weekly on the second Tuesday of each month at 5-6am PT and fourth Tuesday of 
each month at 8-9am PT.


— 
Damjan



> On 07.12.2022., at 18:30, Damjan Marion via lists.fd.io wrote:
> 
> 
> Can we get a bit more votes on this? It is a significant change, so I would like
> to confirm that everybody is OK with it…
> 
> — 
> Damjan
> 
> 
> 
>> On 24.11.2022., at 03:52, Marvin Liu wrote:
>> 
>> +1, looking forward to joining the community meeting and having more discussions.
>>  
>> From: vpp-dev@lists.fd.io On Behalf Of Xu, Ting
>> Sent: Thursday, November 24, 2022 10:22 AM
>> To: vpp-dev@lists.fd.io
>> Subject: Re: [vpp-dev] RFC: move the “1”st instance of the community meeting 
>> 3 hours earlier to reduce impact for earlier time zone
>>  
>> +1, it would be convenient for us to attend the meeting!
>>  
>> From: vpp-dev@lists.fd.io On Behalf Of Pei, Yulong
>> Sent: Thursday, November 24, 2022 10:19 AM
>> To: vpp-dev@lists.fd.io
>> Cc: Li, Jokul <jokul...@intel.com>
>> Subject: Re: [vpp-dev] RFC: move the “1”st instance of the community meeting 
>> 3 hours earlier to reduce impact for earlier time zone
>>  
>> +1, looking forward to joining the community meeting.
>>  
>> From: vpp-dev@lists.fd.io On Behalf Of Ni, Hongjun
>> Sent: Wednesday, November 23, 2022 11:51 AM
>> To: vpp-dev@lists.fd.io
>> Cc: Li, Jokul <jokul...@intel.com>
>> Subject: Re: [vpp-dev] RFC: move the “1”st instance of the community meeting 
>> 3 hours earlier to reduce impact for earlier time zone
>>  
>> +1. Hope more PRC folks will join and contribute to the VPP community!
>>  
>> From: vpp-dev@lists.fd.io On Behalf Of qian xu
>> Sent: Wednesday, November 23, 2022 9:28 AM
>> To: vpp-dev@lists.fd.io
>> Cc: Li, Jokul <jokul...@intel.com>
>> Subject: Re: [vpp-dev] RFC: move the “1”st instance of the community meeting 
>> 3 hours earlier to reduce impact for earlier time zone
>>  
>> +1, thanks for the inclusion of PRC contributors; looking forward to
>> joining the community meetings in the near future!
>>  
>> From: vpp-dev@lists.fd.io On Behalf Of Dave Wallace
>> Sent: Wednesday, November 23, 2022 2:48 AM
>> To: vpp-dev@lists.fd.io
>> Subject: Re: [vpp-dev] RFC: move the “1”st instance of the community meeting 
>> 3 hours earlier to reduce impact for earlier time zone
>>  
>> +1
>> 
>> On 11/22/22 11:28 AM, Andrew Yourtchenko wrote:
>> Hi all,
>>  
>> It came up that the current time of the community meeting is extremely
>> unfriendly for our community members in the China timezone: it falls at
>> midnight their time.
>>  
>> So, as per the discussion on the call today, I would like to propose moving
>> every other meeting (the one on the second Tuesday of the month)
>> three hours earlier.
>>  
>> I would also suggest that this proposal take effect starting with the 
>> upcoming meeting on December 13th.
>>  
>> Thoughts ?
>>  
>> --a
>>  
>> 
>>  
>>  
>>  
>>  
>> 
> 
> 
> 
> 


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#22319): https://lists.fd.io/g/vpp-dev/message/22319
Mute This Topic: https://lists.fd.io/mt/9526/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/leave/1480452/21656/631435203/xyzzy 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] RFC: move the “1”st instance of the community meeting 3 hours earlier to reduce impact for earlier time zone

2022-12-07 Thread Damjan Marion via lists.fd.io

Can we get a bit more votes on this? It is a significant change, so I would like to
confirm that everybody is OK with it…

— 
Damjan



> On 24.11.2022., at 03:52, Marvin Liu wrote:
> 
> +1, looking forward to joining the community meeting and having more discussions.
>  
> From: vpp-dev@lists.fd.io On Behalf Of Xu, Ting
> Sent: Thursday, November 24, 2022 10:22 AM
> To: vpp-dev@lists.fd.io
> Subject: Re: [vpp-dev] RFC: move the “1”st instance of the community meeting 
> 3 hours earlier to reduce impact for earlier time zone
>  
> +1, it would be convenient for us to attend the meeting!
>  
> From: vpp-dev@lists.fd.io On Behalf Of Pei, Yulong
> Sent: Thursday, November 24, 2022 10:19 AM
> To: vpp-dev@lists.fd.io
> Cc: Li, Jokul <jokul...@intel.com>
> Subject: Re: [vpp-dev] RFC: move the “1”st instance of the community meeting 
> 3 hours earlier to reduce impact for earlier time zone
>  
> +1, looking forward to joining the community meeting.
>  
> From: vpp-dev@lists.fd.io On Behalf Of Ni, Hongjun
> Sent: Wednesday, November 23, 2022 11:51 AM
> To: vpp-dev@lists.fd.io
> Cc: Li, Jokul <jokul...@intel.com>
> Subject: Re: [vpp-dev] RFC: move the “1”st instance of the community meeting 
> 3 hours earlier to reduce impact for earlier time zone
>  
> +1. Hope more PRC folks will join and contribute to the VPP community!
>  
> From: vpp-dev@lists.fd.io On Behalf Of qian xu
> Sent: Wednesday, November 23, 2022 9:28 AM
> To: vpp-dev@lists.fd.io
> Cc: Li, Jokul <jokul...@intel.com>
> Subject: Re: [vpp-dev] RFC: move the “1”st instance of the community meeting 
> 3 hours earlier to reduce impact for earlier time zone
>  
> +1, thanks for the inclusion of PRC contributors; looking forward to
> joining the community meetings in the near future!
>  
> From: vpp-dev@lists.fd.io On Behalf Of Dave Wallace
> Sent: Wednesday, November 23, 2022 2:48 AM
> To: vpp-dev@lists.fd.io
> Subject: Re: [vpp-dev] RFC: move the “1”st instance of the community meeting 
> 3 hours earlier to reduce impact for earlier time zone
>  
> +1
> 
> On 11/22/22 11:28 AM, Andrew Yourtchenko wrote:
> Hi all,
>  
> It came up that the current time of the community meeting is extremely
> unfriendly for our community members in the China timezone: it falls at
> midnight their time.
>  
> So, as per the discussion on the call today, I would like to propose moving
> every other meeting (the one on the second Tuesday of the month)
> three hours earlier.
>  
> I would also suggest that this proposal take effect starting with the 
> upcoming meeting on December 13th.
>  
> Thoughts ?
>  
> --a
>  
> 
>  
>  
>  
>  
> 
> 


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#22297): https://lists.fd.io/g/vpp-dev/message/22297
Mute This Topic: https://lists.fd.io/mt/9526/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/leave/1480452/21656/631435203/xyzzy 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



[vpp-dev] clang-15 fixes

2022-10-12 Thread Damjan Marion via lists.fd.io

Guys,


I submitted a patch which fixes issues reported by clang 15.

https://gerrit.fd.io/r/c/vpp/+/37387

Mainly it is about variables which are computed but never used afterwards…
Please take a look if your code is listed in this patch.
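
For example, clang 15 flags patterns like this (an illustrative sketch with a
hypothetical item_t type, not a snippet from the patch):

typedef struct { int value; } item_t;

static int
count_items (item_t *items, int n)
{
  int sum = 0, last = 0;
  for (int i = 0; i < n; i++)
    {
      sum += items[i].value;
      last = items[i].value; /* computed but never read afterwards */
    }
  return sum; /* 'last' is never used, so clang 15 warns about it */
}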

— 
Damjan




-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#22012): https://lists.fd.io/g/vpp-dev/message/22012
Mute This Topic: https://lists.fd.io/mt/94284807/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/leave/1480452/21656/631435203/xyzzy 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] zero memcpy interface (alternative to memif) to pass packets from VPP to another trusted DPDK application

2022-05-24 Thread Damjan Marion via lists.fd.io

> On 24.05.2022., at 14:28, PRANAB DAS  wrote:
> 
> Hi Damjan,
> 
> The snort plugin could be simple. But I am not familiar with VPP
> memory/buffer management as such, and I am finding it not so easy to navigate.
> Looking at the physmem.c/h code, it appears that VPP uses heap memory
> (mmap) and has its own heap/buffer management system.

VPP uses the mmap system call (same as DPDK) to map shared memory, but that is
not heap memory.

VPP buffer memory is shared memory (typically hugepage backed). There is one
block of shared memory per NUMA node, and each block has its own FD.
VPP can pass that FD to a remote app over a Unix domain socket, and the remote
app can simply map that shared memory. Both the memif and snort plugins do exactly that.

The main complexity is creating another shared memory region which holds the
enqueue and dequeue rings. In the snort plugin you can see how that shared memory is
organised by looking at src/plugins/snort/daq_vpp.h.
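
As an illustration only (not the actual memif or snort code; the function names
and region size are assumptions), the remote-app side can look roughly like this:

#include <string.h>
#include <sys/mman.h>
#include <sys/socket.h>
#include <sys/uio.h>

static int
recv_region_fd (int sock)
{
  char byte, ctl[CMSG_SPACE (sizeof (int))];
  struct iovec iov = { .iov_base = &byte, .iov_len = 1 };
  struct msghdr msg = { .msg_iov = &iov, .msg_iovlen = 1,
                        .msg_control = ctl, .msg_controllen = sizeof (ctl) };
  struct cmsghdr *cmsg;
  int fd = -1;

  /* the FD arrives as SCM_RIGHTS ancillary data on the Unix domain socket */
  if (recvmsg (sock, &msg, 0) < 1)
    return -1;
  cmsg = CMSG_FIRSTHDR (&msg);
  if (cmsg && cmsg->cmsg_level == SOL_SOCKET && cmsg->cmsg_type == SCM_RIGHTS)
    memcpy (&fd, CMSG_DATA (cmsg), sizeof (fd));
  return fd;
}

static void *
map_buffer_memory (int sock, size_t region_size)
{
  int fd = recv_region_fd (sock);
  if (fd < 0)
    return 0;
  /* buffers are then addressed as offsets into this mapped region */
  return mmap (0, region_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
}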


> Could you clarify? And does the snort plugin expose/share the mmap memory
> region to the external program (snort)? Is there any documentation that
> provides an overview of VPP buffer management and how VPP memory/buffers can
> be shared with external trusted applications?

No, unfortunately there is no documentation, but there is running code (the snort
plugin) which does exactly that.
If you have any questions, ask here and I will try to help.

> 
> Thank you
> 
> - PK Das
>  
> 
> 
> 
> On Tue, May 24, 2022 at 7:51 AM Damjan Marion wrote:
> 
> VPP is not a DPDK application and VPP features don't maintain DPDK metadata; in
> many cases we use native drivers, so DPDK is not even loaded. You cannot use
> rte_ring unless you want to write a complete translation layer, which will
> likely be significantly slower than the native way the snort plugin uses.
> 
> Why do you think the zero-copy interface in the snort plugin is not simple?
> 
> — 
> Damjan
> 
>> On 24.05.2022., at 13:34, PRANAB DAS wrote:
>> 
>> 
>> Thank you very much for your response Benoit!
>> I am wondering if there is another option, e.g. DPDK's rte_ring, or if that
>> option is not feasible in VPP?
>> Could you comment? We are looking for a service-chaining application similar 
>> to snort but would like to have a much simpler zero-copy interface.
>> 
>> Thank you,
>> 
>> - P K DAS
>> 
>> On Tue, May 24, 2022 at 4:22 AM Benoit Ganne (bganne) wrote:
>> It all depends on what you want to do. Usually, we try to avoid sharing
>> buffers read/write between multiple processes; it makes debugging much harder
>> - especially buffer leaks or use-after-free...
>> Here is an example to share VPP buffers read-only with snort so that snort 
>> can inspect traffic and gives a verdict back without any copy: 
>> https://git.fd.io/vpp/tree/src/plugins/snort 
>> 
>> 
>> Best
>> ben
>> 
>> > -Original Message-
>> > From: vpp-dev@lists.fd.io On Behalf Of PRANAB DAS
>> > Sent: Monday, May 23, 2022 20:11
>> > To: vpp-dev@lists.fd.io 
>> > Subject: Re: [vpp-dev] zero memcpy interface (alternative to memif) to
>> > pass packets from VPP to another trusted DPDK application
>> > 
>> > Hi
>> > 
>> > Could someone comment on a zero-copy alternative to the memif interface? Any
>> > ideas and comments are welcome.
>> > 
>> > Thank you
>> > 
>> > - Pranab K Das
>> 
>> 
>> 


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#21442): https://lists.fd.io/g/vpp-dev/message/21442
Mute This Topic: https://lists.fd.io/mt/91252298/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] zero memcpy interface (alternative to memif) to pass packets from VPP to another trusted DPDK application

2022-05-24 Thread Damjan Marion via lists.fd.io

VPP is not a DPDK application and VPP features don't maintain DPDK metadata; in
many cases we use native drivers, so DPDK is not even loaded. You cannot use
rte_ring unless you want to write a complete translation layer, which will likely
be significantly slower than the native way the snort plugin uses.

Why do you think the zero-copy interface in the snort plugin is not simple?

— 
Damjan

> On 24.05.2022., at 13:34, PRANAB DAS  wrote:
> 
> 
> Thank you very much for your response Benoit!
> I am wondering if there is another option, e.g. DPDK's rte_ring, or if that
> option is not feasible in VPP?
> Could you comment? We are looking for a service-chaining application similar 
> to snort but would like to have a much simpler zero-copy interface.
> 
> Thank you,
> 
> - P K DAS
> 
>> On Tue, May 24, 2022 at 4:22 AM Benoit Ganne (bganne) wrote:
>> It all depends on what you want to do. Usually, we try to avoid sharing
>> buffers read/write between multiple processes; it makes debugging much harder
>> - especially buffer leaks or use-after-free...
>> Here is an example to share VPP buffers read-only with snort so that snort 
>> can inspect traffic and gives a verdict back without any copy: 
>> https://git.fd.io/vpp/tree/src/plugins/snort
>> 
>> Best
>> ben
>> 
>> > -Original Message-
>> > From: vpp-dev@lists.fd.io  On Behalf Of PRANAB DAS
>> > Sent: Monday, May 23, 2022 20:11
>> > To: vpp-dev@lists.fd.io
>> > Subject: Re: [vpp-dev] zero memcpy interface (alternative to memif) to
>> > pass packets from VPP to another trusted DPDK application
>> > 
>> > Hi
>> > 
>> > Could someone comment on a zero-copy alternative to the memif interface? Any
>> > ideas and comments are welcome.
>> > 
>> > Thank you
>> > 
>> > - Pranab K Das
> 
> 
> 

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#21440): https://lists.fd.io/g/vpp-dev/message/21440
Mute This Topic: https://lists.fd.io/mt/91252298/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] Change how "unix exec" executes cli scripts

2022-05-12 Thread Damjan Marion via lists.fd.io

— 
Damjan



> On 12.05.2022., at 15:12, Andrew Yourtchenko wrote:
> 
> Inline 
> 
>> On 12 May 2022, at 14:21, Damjan Marion  wrote:
>> 
>> Inline..
>> 
>> — 
>> Damjan
>> 
>> 
>> 
>>> On 12.05.2022., at 09:06, Andrew Yourtchenko wrote:
>>> Damjan,
>>> it is true we do not “guarantee” the behavior of the CLIs, but it is also
>>> true that a lot of people use them; that is why this warrants a bit more
>>> discussion and a heads-up.
>> 
>> A lot of people use the CLI, but not so many use the packet-generator or
>> multiline comments from startup.conf "unix exec". Those who do will sooner or
>> later need to spend a few minutes to fix them, and I’m pretty sure the earth
>> will not stop spinning for them.
> 
> Every time a change is forced on someone, they might have to interrupt what
> they were doing in order to deal with that change. The earth doesn’t stop
> spinning, as you say, but it’s much nicer to have a little heads-up on these
> things, on the order of a couple of weeks or so.
> Maybe there is no one who is affected in this case. But I do not
> know. And when I do not know, I try to err on the side of caution.

The exact purpose of this thread was to give a heads-up, and an opportunity for
somebody who is really impacted to say so.

> 
>> 
>>> This issue was there since day 1
>> 
>> Yes, but it looks like it has started annoying some people, and patches started
>> showing up in gerrit. Unfortunately those patches are just workarounds
>> dealing with a single CLI, and I would prefer to have a proper fix merged ASAP
>> instead of merging those workarounds.
>> 
>>> . What is driving the urgency of it ?
>> 
>> I didn’t say it is urgent, I said that it is better to get it fixed in 22.06
>> than not having it fixed in 22.06, and I elaborated why.
> 
> As you rightly said, we do not guarantee the stability of the CLIs, so I removed
> the -2. Please accept my apologies, it probably should’ve been a -1 in the
> first place.

Thanks!

> 
> —a
> 
>> 
>>> --a
>>>>> On 11 May 2022, at 14:29, Damjan Marion via lists.fd.io wrote:
>>>> 
>>>> I didn’t hear back from anybody, so let me elaborate on this change a bit.
>>>> It is clear that the current behaviour is broken, as some CLI handlers which
>>>> work well in interactive mode simply eat the rest of the content of the exec file.
>>>> As a result I can see people submitting patches to fix only the CLI handlers
>>>> they are interested in, which I believe is wrong. This needs to be
>>>> fixed inside the infra.
>>>> The patch I submitted doesn’t change the behaviour of CLIs which are written
>>>> on a single line, but it requires a change in multiline CLIs which use data
>>>> wrapped with {}. I am aware of 2 commands which do that:
>>>> - ‘comment’ with multiline comments
>>>> - ‘packet-generator new’
>>>> The funny thing about ‘packet-generator new’ is that it typically needs to be
>>>> followed by “packet-generator enable”, which also eats the rest of the input,
>>>> making usage of the packet generator from an exec script limited and
>>>> constrained; i.e. when I use the packet generator I typically want to run some
>>>> show commands after, and that is not possible today.
>>>> I know very few people who use the packet-generator from an exec script; they
>>>> are developers and I’m quite sure they will be able to address this issue quickly.
>>>> I fully disagree that we wait for the next release with this change, for the
>>>> following reasons:
>>>> - we don’t guarantee CLI consistency like we do for APIs
>>>> - if this is the right thing to do (and I didn’t hear anybody disagreeing), it
>>>> needs to happen sooner or later, so it is better to have this issue fixed
>>>> sooner rather than later
>>>> - the reason for not merging patches late in the development cycle is a high
>>>> risk of bugs, and that is not the case here. This patch is low risk from a bugs
>>>> perspective; it just changes behaviour for a very limited number of CLIs,
>>>> and there is more than enough time to document that before the release is out.
>>>> —
>>>> Damjan
>>>>> On 09.05.2022., at 13:56, Damjan Marion via lists.fd.io wrote:
>>>>> Hmm, not sure I understand the concern about the blast radius. I also replied to
>>>>> your comment in gerrit.
>>>>> —
>>>>> Damjan
>>

Re: [vpp-dev] Change how "unix exec" executes cli scripts

2022-05-12 Thread Damjan Marion via lists.fd.io
Inline..

— 
Damjan



> On 12.05.2022., at 09:06, Andrew Yourtchenko wrote:
> 
> Damjan,
> 
> it is true we do not “guarantee” the behavior of the CLIs, but it is also
> true that a lot of people use them; that is why this warrants a bit more
> discussion and a heads-up.

A lot of people use the CLI, but not so many use the packet-generator or multiline
comments from startup.conf "unix exec". Those who do will sooner or later need
to spend a few minutes to fix them, and I’m pretty sure the earth will not stop
spinning for them.

> 
> This issue was there since day 1

Yes, but it looks like it has started annoying some people, and patches started showing
up in gerrit. Unfortunately those patches are just workarounds dealing with a
single CLI, and I would prefer to have a proper fix merged ASAP instead of merging
those workarounds.

> . What is driving the urgency of it ?

I didn’t say it is urgent, I said that it is better to get it fixed in 22.06
than not having it fixed in 22.06, and I elaborated why.

> 
> --a
> 
>> On 11 May 2022, at 14:29, Damjan Marion via lists.fd.io wrote:
>> 
>> 
>> I didn’t hear back from anybody, so let me elaborate on this change a bit.
>> 
>> It is clear that the current behaviour is broken, as some CLI handlers which
>> work well in interactive mode simply eat the rest of the content of the exec file.
>> As a result I can see people submitting patches to fix only the CLI handlers
>> they are interested in, which I believe is wrong. This needs to be
>> fixed inside the infra.
>> 
>> The patch I submitted doesn’t change the behaviour of CLIs which are written
>> on a single line, but it requires a change in multiline CLIs which use data
>> wrapped with {}. I am aware of 2 commands which do that:
>> - ‘comment’ with multiline comments
>> - ‘packet-generator new’
>> 
>> The funny thing about ‘packet-generator new’ is that it typically needs to be
>> followed by “packet-generator enable”, which also eats the rest of the input,
>> making usage of the packet generator from an exec script limited and
>> constrained; i.e. when I use the packet generator I typically want to run some
>> show commands after, and that is not possible today.
>> 
>> I know very few people who use the packet-generator from an exec script; they
>> are developers and I’m quite sure they will be able to address this issue
>> quickly.
>> 
>> I fully disagree that we wait for the next release with this change, for the
>> following reasons:
>> - we don’t guarantee CLI consistency like we do for APIs
>> - if this is the right thing to do (and I didn’t hear anybody disagreeing), it
>> needs to happen sooner or later, so it is better to have this issue fixed
>> sooner rather than later
>> - the reason for not merging patches late in the development cycle is a high
>> risk of bugs, and that is not the case here. This patch is low risk from a bugs
>> perspective; it just changes behaviour for a very limited number of CLIs, and
>> there is more than enough time to document that before the release is out.
>> 
>> — 
>> Damjan
>> 
>> 
>> 
>>> On 09.05.2022., at 13:56, Damjan Marion via lists.fd.io wrote:
>>> 
>>> 
>>> Hmm, not sure I understand the concern about the blast radius. I also replied to
>>> your comment in gerrit.
>>> 
>>> — 
>>> Damjan
>>> 
>>> 
>>> 
>>>>> On 09.05.2022., at 07:31, Andrew Yourtchenko wrote:
>>>> 
>>>> Damjan,
>>>> 
>>>> I have left the comment on the change itself - in short, given its blast 
>>>> radius, it needs to wait at least until 22.06 RC1 is done.
>>>> 
>>>> --a
>>>> 
>>>>> On 8 May 2022, at 19:39, Damjan Marion via lists.fd.io wrote:
>>>>> 
>>>>> Guys,
>>>>> 
>>>>> I just submitted the following patch which fixes a long-standing issue in how
>>>>> CLI scripts are executed.
>>>>> 
>>>>> https://gerrit.fd.io/r/c/vpp/+/36101
>>>>> 
>>>>> The problem was that there was no way to execute CLIs which have optional
>>>>> arguments, e.g. “show version” and “show version verbose”.
>>>>> The CLI parser was passing the whole contents up to EOF to each CLI handler,
>>>>> and because it eats all whitespace there was no way to know if the current
>>>>> unformat input points to the rest of the line or to the beginning of a
>>>>> new line.
>>>>> 
>>>>> In this patch I changed that behaviour so the CLI gets only one

Re: [vpp-dev] Change how "unix exec" executes cli scripts

2022-05-11 Thread Damjan Marion via lists.fd.io

I didn’t hear back from anybody, so let me elaborate on this change a bit.

It is clear that the current behaviour is broken, as some CLI handlers which work
well in interactive mode simply eat the rest of the content of the exec file.
As a result I can see people submitting patches to fix only the CLI handlers they
are interested in, which I believe is wrong. This needs to be fixed inside the
infra.

The patch I submitted doesn’t change the behaviour of CLIs which are written on a
single line, but it requires a change in multiline CLIs which use data wrapped
with {}. I am aware of 2 commands which do that:
 - ‘comment’ with multiline comments
 - ‘packet-generator new’

The funny thing about ‘packet-generator new’ is that it typically needs to be followed
by “packet-generator enable”, which also eats the rest of the input, making
usage of the packet generator from an exec script limited and constrained; i.e. when I
use the packet generator I typically want to run some show commands after, and that
is not possible today.

I know very few people who use the packet-generator from an exec script; they are
developers and I’m quite sure they will be able to address this issue quickly.

I fully disagree that we wait for the next release with this change, for the following
reasons:
 - we don’t guarantee CLI consistency like we do for APIs
 - if this is the right thing to do (and I didn’t hear anybody disagreeing), it
needs to happen sooner or later, so it is better to have this issue fixed
sooner rather than later
 - the reason for not merging patches late in the development cycle is a high risk
of bugs, and that is not the case here. This patch is low risk from a bugs perspective; it
just changes behaviour for a very limited number of CLIs, and there is more than
enough time to document that before the release is out.

— 
Damjan



> On 09.05.2022., at 13:56, Damjan Marion via lists.fd.io wrote:
> 
> 
> Hmm, not sure I understand the concern about the blast radius. I also replied to your
> comment in gerrit.
> 
> — 
> Damjan
> 
> 
> 
>> On 09.05.2022., at 07:31, Andrew Yourtchenko wrote:
>> 
>> Damjan,
>> 
>> I have left the comment on the change itself - in short, given its blast 
>> radius, it needs to wait at least until 22.06 RC1 is done.
>> 
>> --a
>> 
>>> On 8 May 2022, at 19:39, Damjan Marion via lists.fd.io wrote:
>>> 
>>> Guys,
>>> 
>>> I just submitted the following patch which fixes a long-standing issue in how
>>> CLI scripts are executed.
>>> 
>>> https://gerrit.fd.io/r/c/vpp/+/36101
>>> 
>>> The problem was that there was no way to execute CLIs which have optional
>>> arguments, e.g. “show version” and “show version verbose”.
>>> The CLI parser was passing the whole contents up to EOF to each CLI handler,
>>> and because it eats all whitespace there was no way to know if the current
>>> unformat input points to the rest of the line or to the beginning of a
>>> new line.
>>> 
>>> In this patch I changed that behaviour so the CLI gets only one line of input.
>>> 
>>> Also I changed the unformat_input function so it recognises a backslash before
>>> a newline as a way to pass multiline data to the CLI handler.
>>> 
>>> As a result, there is no need for calling unformat_line in each CLI
>>> handler, and there is still a way to specify multiline CLIs.
>>> 
>>> e.g.
>>> 
>>> show version \
>>>  verbose
>>> 
>>> or:
>>> 
>>> packet-generator new { \
>>>  name x \
>>>  limit 5 \
>>>  size 128-128 \
>>>  interface local0 \
>>>  node null-node \
>>>  data { \
>>>  incrementing 30 \
>>>  } \
>>> }
>>> 
>>> Hope nobody has issues with this change, but let me know if I’m wrong…
>>> 
>>> — 
>>> Damjan
>>> 
>>> 
> 
> 
> 


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#21397): https://lists.fd.io/g/vpp-dev/message/21397
Mute This Topic: https://lists.fd.io/mt/90974441/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] Segmentation fault when dpdk number-rx-queues > 1 in startup.conf

2022-05-11 Thread Damjan Marion via lists.fd.io

> On 06.05.2022., at 11:33, Xu, Ting  wrote:
> 
> Hi, Damjan
> 
> I looked into the code. The bad commit is
> ce4083ce48958d9d3956e8317445a5552780af1a (“dpdk: offloads cleanup”), and the
> previous commit is correct, so I compared these two. Since they use the same
> DPDK version, I checked the input of the rte API.
> 
> I found that the direct cause is configuring default RSS in DPDK. It is called
> by dpdk_device_setup() in the dpdk plugin; the API function is
> rte_eth_dev_configure(). However, the bad commit and the good commit have
> almost the same input to rte_eth_dev_configure(); the only difference is a Tx
> offload flag (TX_IPV4_CSUM), but I think it is not the root cause because it
> does not help after I fix it. Since they have the same input to the dpdk API, I
> think it is not DPDK's issue.
> 
> I see there are a lot of flag and offload configuration changes in the commit
> (“dpdk: offloads cleanup”). I guess it is possible that some flags are not
> correct? I looked at the code in dpdk_lib_init(), but have not found the cause yet.
> 
> Do you have any suggestions for me? Thanks!

No. DPDK should not crash even if we are doing something wrong. It should
return an error value.

— 
Damjan


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#21396): https://lists.fd.io/g/vpp-dev/message/21396
Mute This Topic: https://lists.fd.io/mt/89520993/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] Change how "unix exec" executes cli scripts

2022-05-09 Thread Damjan Marion via lists.fd.io

Hmm, not sure I understand the concern about the blast radius. I also replied to your
comment in gerrit.

— 
Damjan



> On 09.05.2022., at 07:31, Andrew Yourtchenko wrote:
> 
> Damjan,
> 
> I have left the comment on the change itself - in short, given its blast 
> radius, it needs to wait at least until 22.06 RC1 is done.
> 
> --a
> 
>> On 8 May 2022, at 19:39, Damjan Marion via lists.fd.io wrote:
>> 
>> Guys,
>> 
>> I just submitted the following patch which fixes a long-standing issue in how
>> CLI scripts are executed.
>> 
>> https://gerrit.fd.io/r/c/vpp/+/36101
>> 
>> The problem was that there was no way to execute CLIs which have optional
>> arguments, e.g. “show version” and “show version verbose”.
>> The CLI parser was passing the whole contents up to EOF to each CLI handler, and
>> because it eats all whitespace there was no way to know if the current unformat
>> input points to the rest of the line or to the beginning of a new line.
>> 
>> In this patch I changed that behaviour so the CLI gets only one line of input.
>> 
>> Also I changed the unformat_input function so it recognises a backslash before
>> a newline as a way to pass multiline data to the CLI handler.
>> 
>> As a result, there is no need for calling unformat_line in each CLI handler,
>> and there is still a way to specify multiline CLIs.
>> 
>> e.g.
>> 
>> show version \
>>  verbose
>> 
>> or:
>> 
>> packet-generator new { \
>>  name x \
>>  limit 5 \
>>  size 128-128 \
>>  interface local0 \
>>  node null-node \
>>  data { \
>>  incrementing 30 \
>>  } \
>> }
>> 
>> Hope nobody has issues with this change, but let me know if I’m wrong…
>> 
>> — 
>> Damjan
>> 
>> 


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#21390): https://lists.fd.io/g/vpp-dev/message/21390
Mute This Topic: https://lists.fd.io/mt/90974441/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] Change how "unix exec" executes cli scripts

2022-05-09 Thread Damjan Marion via lists.fd.io
Why do you think so?

— 
Damjan



> On 09.05.2022., at 03:43, jiangxiaom...@outlook.com wrote:
> 
> Hi Damjan,
> With your patch, most of the current CLIs have to be modified; it may be a lot of
> work. I think the suitable way is changing the meaning of the VLIB_CLI_COMMAND
> function's second param from input to args.
> That is:
>  
> /* CLI command callback function. */
> typedef clib_error_t *(vlib_cli_command_function_t)
>   (struct vlib_main_t * vm,
>    unformat_input_t * input /* input changed to args */,
>    struct vlib_cli_command_t * cmd);
> 
> vlib_cli_dispatch_sub_commands (...)
> {
>   ...
>   c_error = c->function (vm, si /* si changed to args */, c);
>   ...
> }
> 
> Every command just parses its own args.
> 
> 
> 
> 
> 
> 


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#21389): https://lists.fd.io/g/vpp-dev/message/21389
Mute This Topic: https://lists.fd.io/mt/90974441/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



[vpp-dev] Change how "unix exec" executes cli scripts

2022-05-08 Thread Damjan Marion via lists.fd.io
Guys,

I just submitted the following patch which fixes a long-standing issue in how CLI
scripts are executed.

https://gerrit.fd.io/r/c/vpp/+/36101

The problem was that there was no way to execute CLIs which have optional
arguments, e.g. “show version” and “show version verbose”.
The CLI parser was passing the whole contents up to EOF to each CLI handler, and
because it eats all whitespace there was no way to know if the current unformat
input points to the rest of the line or to the beginning of a new line.

In this patch I changed that behaviour so the CLI gets only one line of input.

Also I changed the unformat_input function so it recognises a backslash before a
newline as a way to pass multiline data to the CLI handler.

As a result, there is no need for calling unformat_line in each CLI handler,
and there is still a way to specify multiline CLIs.

e.g.

show version \
   verbose

or:

packet-generator new { \
   name x \
   limit 5 \
   size 128-128 \
   interface local0 \
   node null-node \
   data { \
   incrementing 30 \
   } \
}

Hope nobody has issues with this change, but let me know if I’m wrong…

— 
Damjan
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#21382): https://lists.fd.io/g/vpp-dev/message/21382
Mute This Topic: https://lists.fd.io/mt/90974441/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: 35640 has a crashing regression, was Re: [vpp-dev] vpp-papi stats is broken

2022-04-07 Thread Damjan Marion via lists.fd.io

Yeah, looks like ip4_neighbor_probe is sending a packet to a deleted interface:

(gdb) p n->name
$4 = (u8 *) 0x7fff82b47578 "interface-3-output-deleted"

So it is right that this assert kicks in.

Likely what happens is that the batch of commands first triggers generation of a
neighbor probe packet; immediately after that the interface is deleted, but the
packet is still in flight and the drop node tries to bump counters for the
deleted interface.

— 
Damjan



> On 06.04.2022., at 16:21, Pim van Pelt  wrote:
> 
> Hoi,
> 
> Following reproduces the drop.c:77 assertion:
> 
> create loopback interface instance 0
> set interface ip address loop0 10.0.0.1/32
> set interface state GigabitEthernet3/0/1 up
> set interface state loop0 up
> set interface state loop0 down
> set interface ip address del loop0 10.0.0.1/32
> delete loopback interface intfc loop0
> set interface state GigabitEthernet3/0/1 down
> set interface state GigabitEthernet3/0/1 up
> comment { the following crashes VPP }
> set interface state GigabitEthernet3/0/1 down
> 
> I found that adding IPv6 addresses does not provoke the crash, while adding 
> IPv4 addresses to loop0 does provoke it.
> 
> groet,
> Pim
> 
> On Wed, Apr 6, 2022 at 3:56 PM Pim van Pelt via lists.fd.io wrote:
> Hoi,
> 
> The crash I observed is now gone, thanks!
> 
> VPP occasionally hits an ASSERT related to error counters at drop.c:77 -- 
> I'll try to see if I can get a reproduction, but it may take a while, and it 
> may be transient.
> 
> 11: /home/pim/src/vpp/src/vlib/drop.c:77 (counter_index) assertion `ci < 
> n->n_errors' fails
> 
> Thread 14 "vpp_wk_11" received signal SIGABRT, Aborted.
> [Switching to Thread 0x7fff4bbfd700 (LWP 182685)]
> __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
> 50  ../sysdeps/unix/sysv/linux/raise.c: No such file or directory.
> (gdb) bt
> #0  __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
> #1  0x76a5f859 in __GI_abort () at abort.c:79
> #2  0x004072e3 in os_panic () at 
> /home/pim/src/vpp/src/vpp/vnet/main.c:413
> #3  0x76daea29 in debugger () at 
> /home/pim/src/vpp/src/vppinfra/error.c:84
> #4  0x76dae7fa in _clib_error (how_to_die=2, function_name=0x0, 
> line_number=0, fmt=0x76f9d19c "%s:%d (%s) assertion `%s' fails")
> at /home/pim/src/vpp/src/vppinfra/error.c:143
> #5  0x76f782d9 in counter_index (vm=0x7fffa09fb2c0, e=3416) at 
> /home/pim/src/vpp/src/vlib/drop.c:77
> #6  0x76f77c57 in process_drop_punt (vm=0x7fffa09fb2c0, 
> node=0x7fffa0c79b00, frame=0x7fff97168140, disposition=ERROR_DISPOSITION_DROP)
> at /home/pim/src/vpp/src/vlib/drop.c:224
> #7  0x76f77957 in error_drop_node_fn_hsw (vm=0x7fffa09fb2c0, 
> node=0x7fffa0c79b00, frame=0x7fff97168140)
> at /home/pim/src/vpp/src/vlib/drop.c:248
> #8  0x76f0b10d in dispatch_node (vm=0x7fffa09fb2c0, 
> node=0x7fffa0c79b00, type=VLIB_NODE_TYPE_INTERNAL, 
> dispatch_state=VLIB_NODE_STATE_POLLING, frame=0x7fff97168140, 
> last_time_stamp=5318787653101516) at /home/pim/src/vpp/src/vlib/main.c:961
> #9  0x76f0bb60 in dispatch_pending_node (vm=0x7fffa09fb2c0, 
> pending_frame_index=5, last_time_stamp=5318787653101516)
> at /home/pim/src/vpp/src/vlib/main.c:1120
> #10 0x76f06e0f in vlib_main_or_worker_loop (vm=0x7fffa09fb2c0, 
> is_main=0) at /home/pim/src/vpp/src/vlib/main.c:1587
> #11 0x76f06537 in vlib_worker_loop (vm=0x7fffa09fb2c0) at 
> /home/pim/src/vpp/src/vlib/main.c:1721
> #12 0x76f44ef4 in vlib_worker_thread_fn (arg=0x7fff98eabec0) at 
> /home/pim/src/vpp/src/vlib/threads.c:1587
> #13 0x76f3ffe5 in vlib_worker_thread_bootstrap_fn 
> (arg=0x7fff98eabec0) at /home/pim/src/vpp/src/vlib/threads.c:426
> #14 0x76e61609 in start_thread (arg=) at 
> pthread_create.c:477
> #15 0x76b5c163 in clone () at 
> ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
> (gdb) up 4
> #4  0x76dae7fa in _clib_error (how_to_die=2, function_name=0x0, 
> line_number=0, fmt=0x76f9d19c "%s:%d (%s) assertion `%s' fails")
> at /home/pim/src/vpp/src/vppinfra/error.c:143
> 143 debugger ();
> (gdb) up
> #5  0x76f782d9 in counter_index (vm=0x7fffa09fb2c0, e=3416) at 
> /home/pim/src/vpp/src/vlib/drop.c:77
> 77        ASSERT (ci < n->n_errors);
> (gdb) list
> 72
> 73        ni = vlib_error_get_node (&vm->node_main, e);
> 74        n = vlib_get_node (vm, ni);
> 75
> 76        ci = vlib_error_get_code (&vm->node_main, e);
> 77        ASSERT (ci < n->n_errors);
> 78
> 79        ci += n->error_heap_index;
> 80
> 81        return ci;
> 
> On Wed, Apr 6, 2022 at 1:53 PM Damjan Marion (damarion) wrote:
> 
> This seems to be a day-one issue, and my patch just exposed it.
> The current interface deletion code is not removing node stats entries.
> 
> So if you delete an interface and then create one with the same name, the
> stats entry is already there, and creation of the new entry fails.
> 
> Hope this helps:
> 
> 

Re: 35640 has a crashing regression, was Re: [vpp-dev] vpp-papi stats is broken

2022-04-06 Thread Damjan Marion via lists.fd.io

This seems to be a day-one issue, and my patch just exposed it.
The current interface deletion code is not removing node stats entries.

So if you delete an interface and then create one with the same name, the
stats entry is already there, and creation of the new entry fails.

Hope this helps:

https://gerrit.fd.io/r/c/vpp/+/35900

— 
Damjan



> On 05.04.2022., at 22:13, Pim van Pelt  wrote:
> 
> Hoi,
> 
> Here's a minimal repro that reliably crashes VPP at head for me; it does not
> crash before gerrit 35640:
> 
> create loopback interface instance 0
> create bond id 0 mode lacp load-balance l34
> create bond id 1 mode lacp load-balance l34
> delete loopback interface intfc loop0
> delete bond BondEthernet0
> delete bond BondEthernet1
> create bond id 0 mode lacp load-balance l34
> delete bond BondEthernet0
> comment { the next command crashes VPP }
> create loopback interface instance 0
> 
> 
> 
> On Tue, Apr 5, 2022 at 9:48 PM Pim van Pelt  wrote:
> Hoi,
> 
> There is a crashing regression in VPP after 
> https://gerrit.fd.io/r/c/vpp/+/35640
> 
> With that change merged, VPP crashes upon creation and deletion of 
> interfaces. Winding back the repo until before 35640 does not crash. The 
> crash happens in 
> 0: /home/pim/src/vpp/src/vlib/stats/stats.h:115 (vlib_stats_get_entry) 
> assertion `entry_index < vec_len (sm->directory_vector)' fails
> 
> (gdb) bt
> #0  __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
> #1  0x76a5e859 in __GI_abort () at abort.c:79
> #2  0x004072e3 in os_panic () at 
> /home/pim/src/vpp/src/vpp/vnet/main.c:413
> #3  0x76dada29 in debugger () at 
> /home/pim/src/vpp/src/vppinfra/error.c:84
> #4  0x76dad7fa in _clib_error (how_to_die=2, function_name=0x0, 
> line_number=0, fmt=0x76f9c19c "%s:%d (%s) assertion `%s' fails")
>at /home/pim/src/vpp/src/vppinfra/error.c:143
> #5  0x76f39605 in vlib_stats_get_entry (sm=0x76fce5e8 
> , entry_index=4294967295)
>at /home/pim/src/vpp/src/vlib/stats/stats.h:115
> #6  0x76f39273 in vlib_stats_remove_entry (entry_index=4294967295) at 
> /home/pim/src/vpp/src/vlib/stats/stats.c:135
> #7  0x76ee36d9 in vlib_register_errors (vm=0x7fff96800740, 
> node_index=718, n_errors=0, error_strings=0x0, counters=0x0)
>at /home/pim/src/vpp/src/vlib/error.c:149
> #8  0x770b8e0c in setup_tx_node (vm=0x7fff96800740, node_index=718, 
> dev_class=0x7fff973f9fb0) at /home/pim/src/vpp/src/vnet/interface.c:816
> #9  0x770b7f26 in vnet_register_interface (vnm=0x77f579a0 
> , dev_class_index=31, dev_instance=0, hw_class_index=29, 
>hw_instance=7) at /home/pim/src/vpp/src/vnet/interface.c:1085
> #10 0x77129efd in vnet_eth_register_interface (vnm=0x77f579a0 
> , r=0x7fff4b288f18)
>at /home/pim/src/vpp/src/vnet/ethernet/interface.c:376
> #11 0x7712bd05 in vnet_create_loopback_interface 
> (sw_if_indexp=0x7fff4b288fb8, mac_address=0x7fff4b288fb2 "", is_specified=1 
> '\001', 
>user_instance=0) at /home/pim/src/vpp/src/vnet/ethernet/interface.c:883
> #12 0x7712fecf in create_simulated_ethernet_interfaces 
> (vm=0x7fff96800740, input=0x7fff4b2899d0, cmd=0x7fff973c7e38)
>at /home/pim/src/vpp/src/vnet/ethernet/interface.c:930
> #13 0x76ed65e8 in vlib_cli_dispatch_sub_commands (vm=0x7fff96800740, 
> cm=0x42c2f0 , input=0x7fff4b2899d0, 
>parent_command_index=1161) at /home/pim/src/vpp/src/vlib/cli.c:592
> #14 0x76ed6358 in vlib_cli_dispatch_sub_commands (vm=0x7fff96800740, 
> cm=0x42c2f0 , input=0x7fff4b2899d0, 
>parent_command_index=33) at /home/pim/src/vpp/src/vlib/cli.c:549
> #15 0x76ed6358 in vlib_cli_dispatch_sub_commands (vm=0x7fff96800740, 
> cm=0x42c2f0 , input=0x7fff4b2899d0, 
>parent_command_index=0) at /home/pim/src/vpp/src/vlib/cli.c:549
> #16 0x76ed5528 in vlib_cli_input (vm=0x7fff96800740, 
> input=0x7fff4b2899d0, function=0x0, function_arg=0)
>at /home/pim/src/vpp/src/vlib/cli.c:695
> #17 0x76f61f21 in unix_cli_exec (vm=0x7fff96800740, 
> input=0x7fff4b289e78, cmd=0x7fff973c99d8) at 
> /home/pim/src/vpp/src/vlib/unix/cli.c:3454
> #18 0x76ed65e8 in vlib_cli_dispatch_sub_commands (vm=0x7fff96800740, 
> cm=0x42c2f0 , input=0x7fff4b289e78, 
>parent_command_index=0) at /home/pim/src/vpp/src/vlib/cli.c:592
> #19 0x76ed5528 in vlib_cli_input (vm=0x7fff96800740, 
> input=0x7fff4b289e78, function=0x76f55960 , 
> function_arg=1)
>at /home/pim/src/vpp/src/vlib/cli.c:695
> 
> This is caught by a local regression test 
> (https://github.com/pimvanpelt/vppcfg/tree/main/intest) that executes a bunch 
> of CLI statements, and I have a set of transitions there which I can probably 
> narrow down to an exact repro case.
> 
> On Fri, Apr 1, 2022 at 3:08 PM Pim van Pelt via lists.fd.io wrote:
> Hoi,
> 
> As a followup - I tried to remember why I copied class VPPStats() and friends 
> into my own repository, but that may be because it's not exported in 

[vpp-dev] vector allocator rework

2022-03-25 Thread Damjan Marion via lists.fd.io
Guys,

As discussed on the last call, I did a rework of the VPP vector (and pool) 
allocator.

https://gerrit.fd.io/r/c/vpp/+/35718

Here is the list of changes:

- supports in-place growth of vectors (if there is available space next to
  the existing alloc)
- drops the need for alloc_aligned_at_offset in the memory allocator,
  which allows an easier swap to a different memory allocator and reduces
  malloc overhead
- reworks the pool and vec macros into inline functions to improve debuggability
- fixes alignment: in many cases the macros were not using the native alignment
  of the particular datatype. Explicitly setting alignment with the XXX_aligned()
  versions of the macros is not needed anymore in > 99% of cases
- fixes ASAN usage
- avoids the use of vectors of voids; this was the root cause of several bugs
  found in vec_* and pool_* functions where sizeof() was used on voids
  instead of the real vector data type
- introduces a minimal alignment, currently 8 bytes; vectors will always be
  aligned at least to that value (the underlying allocator actually always
  provides 16-byte aligned allocs)
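
As an illustration only (my_elt_t is a hypothetical type, not code from the
patch), usage stays the same while the explicit alignment variants become
unnecessary:

typedef struct { u64 a; u32 b; } my_elt_t;

my_elt_t *v = 0, *e;

vec_add2 (v, e, 1); /* can now grow in place when space next to the alloc allows */
e->a = 1;
/* previously something like vec_add2_aligned (v, e, 1, ...) was common when a
   specific alignment was needed; vectors are now naturally aligned to the
   datatype, with an 8-byte minimum */
vec_free (v);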

Please let me know if you have any feedback.

Thanks!

— 
Damjan




-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#21114): https://lists.fd.io/g/vpp-dev/message/21114
Mute This Topic: https://lists.fd.io/mt/90024154/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] Segmentation fault when dpdk number-rx-queues > 1 in startup.conf

2022-03-07 Thread Damjan Marion via lists.fd.io

OK, so the crash is clearly happening in DPDK code, so you will need somebody
familiar with that driver to take a look.

— 
Damjan



> On 07.03.2022., at 07:01, Xu, Ting  wrote:
> 
> Hi, Damjan
> 
> Thanks for your help; the backtrace from gdb is below (a file with the same
> content is attached for better formatting). I used the commit
> ce4083ce48958d9d3956e8317445a5552780af1a (“dpdk: offloads cleanup”) to get
> this info. The previous commit, 3b7ef512f190a506f62af53536b586b4800f66c1
> ("misc: fix the uninitialization error"), does not cause the error.
> 
> Thread 1 "vpp_main" received signal SIGSEGV, Segmentation fault.
> 0x7fff7211f958 in ice_sq_send_cmd_nolock (hw=0x0, cq=0x0, desc=0x0, 
> buf=0x0, buf_size=0, cd=0x0) at 
> ../src-dpdk/drivers/net/ice/base/ice_controlq.c:889
> 889 {
> (gdb) bt
> #0  0x7fff7211f958 in ice_sq_send_cmd_nolock (hw=0x0, cq=0x0, desc=0x0, 
> buf=0x0, buf_size=0, cd=0x0) at 
> ../src-dpdk/drivers/net/ice/base/ice_controlq.c:889
> #1  0x7fff721434f9 in ice_sq_send_cmd (hw=0x7fd2bf7f9b00, 
> cq=0x7fd2bf7fb5a0, desc=0x7fff6d361f40, buf=0x7fe2c025d000, buf_size=6, 
> cd=0x0) at ../src-dpdk/drivers/net/ice/base/ice_controlq.c:1076
> #2  0x7fff721724bc in ice_sq_send_cmd_retry (hw=0x7fd2bf7f9b00, 
> cq=0x7fd2bf7fb5a0, desc=0x7fff6d361f40, buf=0x7fe2c025d000, buf_size=6, 
> cd=0x0) at ../src-dpdk/drivers/net/ice/base/ice_common.c:1415
> #3  0x7fff72180687 in ice_aq_send_cmd (hw=0x7fd2bf7f9b00, 
> desc=0x7fff6d361f40, buf=0x7fe2c025d000, buf_size=6, cd=0x0) at 
> ../src-dpdk/drivers/net/ice/base/ice_common.c:1474
> #4  0x7fff72181130 in ice_aq_alloc_free_res (hw=0x7fd2bf7f9b00, 
> num_entries=1, buf=0x7fe2c025d000, buf_size=6, opc=ice_aqc_opc_alloc_res, 
> cd=0x0) at ../src-dpdk/drivers/net/ice/base/ice_common.c:1810
> #5  0x7fff72181255 in ice_alloc_hw_res (hw=0x7fd2bf7f9b00, type=96, 
> num=1, btm=false, res=0x7fff6d364452) at 
> ../src-dpdk/drivers/net/ice/base/ice_common.c:1840
> #6  0x7fff72327d2c in ice_alloc_prof_id (hw=0x7fd2bf7f9b00, 
> blk=ICE_BLK_RSS, prof_id=0x7fff6d3644ba "5r") at 
> ../src-dpdk/drivers/net/ice/base/ice_flex_pipe.c:3305
> #7  0x7fff72348519 in ice_add_prof (hw=0x7fd2bf7f9b00, blk=ICE_BLK_RSS, 
> id=17179875328, ptypes=0x7fe2c025ddbc "", attr=0x0, attr_cnt=0, 
> es=0x7fe2c025dc90, masks=0x7fe2c025dd5a) at 
> ../src-dpdk/drivers/net/ice/base/ice_flex_pipe.c:4980
> #8  0x7fff72364b71 in ice_flow_add_prof_sync (hw=0x7fd2bf7f9b00, 
> blk=ICE_BLK_RSS, dir=ICE_FLOW_RX, prof_id=17179875328, segs=0x7fe2c025dec0, 
> segs_cnt=1 '\001', acts=0x0, acts_cnt=0 '\000', prof=0x7fff6d368fb8) at 
> ../src-dpdk/drivers/net/ice/base/ice_flow.c:2054
> #9  0x7fff7236574a in ice_flow_add_prof (hw=0x7fd2bf7f9b00, 
> blk=ICE_BLK_RSS, dir=ICE_FLOW_RX, prof_id=17179875328, segs=0x7fe2c025dec0, 
> segs_cnt=1 '\001', acts=0x0, acts_cnt=0 '\000', prof=0x7fff6d368fb8) at 
> ../src-dpdk/drivers/net/ice/base/ice_flow.c:2371
> #10 0x7fff7238bd74 in ice_add_rss_cfg_sync (hw=0x7fd2bf7f9b00, 
> vsi_handle=0, cfg=0x7fff6d369010) at 
> ../src-dpdk/drivers/net/ice/base/ice_flow.c:3884
> #11 0x7fff7238beef in ice_add_rss_cfg (hw=0x7fd2bf7f9b00, vsi_handle=0, 
> cfg=0x7fff6d3690b0) at ../src-dpdk/drivers/net/ice/base/ice_flow.c:3937
> #12 0x7fff724e6301 in ice_add_rss_cfg_wrap (pf=0x7fd2bf7fc7d0, vsi_id=0, 
> cfg=0x7fff6d3690b0) at ../src-dpdk/drivers/net/ice/ice_ethdev.c:2792
> #13 0x7fff724e6457 in ice_rss_hash_set (pf=0x7fd2bf7fc7d0, rss_hf=12220) 
> at ../src-dpdk/drivers/net/ice/ice_ethdev.c:2834
> #14 0x7fff724fc253 in ice_init_rss (pf=0x7fd2bf7fc7d0) at 
> ../src-dpdk/drivers/net/ice/ice_ethdev.c:3102
> #15 0x7fff724fc369 in ice_dev_configure (dev=0x7fff746a0100 
> ) at ../src-dpdk/drivers/net/ice/ice_ethdev.c:3131
> #16 0x7fff70d9c3e4 in rte_eth_dev_configure (port_id=0, nb_rx_q=8, 
> nb_tx_q=5, dev_conf=0x7fff6d36ecc0) at 
> ../src-dpdk/lib/ethdev/rte_ethdev.c:1578
> #17 0x7fff73e10178 in dpdk_device_setup (xd=0x7fff7c8f4f00) at 
> /root/networking.dataplane.fdio.vpp/src/plugins/dpdk/device/common.c:156
> #18 0x7fff73e47b84 in dpdk_lib_init (dm=0x7fff74691f58 ) at 
> /root/networking.dataplane.fdio.vpp/src/plugins/dpdk/device/init.c:582
> #19 0x7fff73e459f4 in dpdk_process (vm=0x7fff76800680, rt=0x7fff76e191c0, 
> f=0x0) at 
> /root/networking.dataplane.fdio.vpp/src/plugins/dpdk/device/init.c:1499
> #20 0x76e7033d in vlib_process_bootstrap (_a=140735062407352) at 
> /root/networking.dataplane.fdio.vpp/src/vlib/main.c:1235
> #21 0x76d0ebf8 in clib_calljmp () at 
> /root/networking.dataplane.fdio.vpp/src/vppinfra/longjmp.S:123
> #22 0x7fff6f66f8b0 in ?? ()
> #23 0x76e6fd5f in vlib_process_startup (vm=0x7fff76800680, 
> p=0x7fff76e191c0, f=0x0) at 
> /root/networking.dataplane.fdio.vpp/src/vlib/main.c:1260
> #24 0x76e6b4fa in dispatch_process (vm=0x7fff76800680, 
> p=0x7fff76e191c0, f=0x0, last_time_stamp=2826656650676320) at 
> 

Re: [vpp-dev] VxLAN set MTU

2022-02-17 Thread Damjan Marion via lists.fd.io


The vxlan code is missing the max_frame_size callback, which is now mandatory if
you want to be able to do the change.
If the vxlan code doesn’t care about that change, it can be as simple as:

static clib_error_t *
vxlan_set_max_frame_size (vnet_main_t *vnm, vnet_hw_interface_t *hw,
                          u32 frame_size)
{
  return 0;
}

and:

eir.cb.set_max_frame_size = vxlan_set_max_frame_size; 

right before the call to vnet_eth_register_interface (vnm, &eir);

However, the right solution would be to allow/disallow the change based on the
underlying MTU…
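
A sketch of what that check could look like (vxlan_get_underlay_mtu() is a
hypothetical helper used only to illustrate the idea, not an existing function):

static clib_error_t *
vxlan_set_max_frame_size (vnet_main_t *vnm, vnet_hw_interface_t *hw,
                          u32 frame_size)
{
  /* hypothetical lookup of the underlying interface MTU */
  u32 underlay_mtu = vxlan_get_underlay_mtu (vnm, hw);

  if (frame_size > underlay_mtu)
    return clib_error_return (0, "frame size larger than underlying MTU");
  return 0;
}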

— 
Damjan



> On 17.02.2022., at 12:11, Pim van Pelt  wrote:
> 
> Hoi Artyom,
> 
> Not an authoritative answer, although perhaps Damjan can confirm:
> There is a semantic difference between max frame size (the device will reject
> frames larger than that on Rx, and never Tx frames larger than that) and MTU
> (L3 IP/IP6/MPLS packets will not be larger than that), with the implication
> that MTU can never be larger than max_frame_size.
> 
> Regardless, Artyom, here's how I manipulate the MTU of VXLAN tunnels - the 
> trick is to use the 'mtu packet' rather than 'mtu' option (see the CLI call 
> in bold):
> 
> create vxlan tunnel instance 10 src $A dst $B vni 320501 decap-next l2
> set interface state vxlan_tunnel10 up
> set interface mtu packet 1522 vxlan_tunnel10
> set interface l2 xconnect TenGigabitEthernet5/0/0.50 vxlan_tunnel10
> set interface l2 xconnect vxlan_tunnel10 TenGigabitEthernet5/0/0.50
> 
> pim@ddln1:~$ vppctl show ver
> vpp v22.06-rc0~93-g360aee3e0 built by pim on dellr610 at 2022-02-13T11:56:07
> 
> pim@ddln1:~$ vppctl show int vxlan_tunnel10
> Name            Idx  State  MTU (L3/IP4/IP6/MPLS)  Counter     Count
> vxlan_tunnel10  17   up     1522/0/0/0             rx packets  209408
>                                                    rx bytes    81834552
>                                                    tx packets  1616124
>                                                    tx bytes    2471872228
> 
> Does that help you? I actually noticed that it doesn't prevent
> vxlan_tunnel10 from emitting larger frames, so I'm not quite sure what
> benefit it brings.
> 
> On Thu, Feb 17, 2022 at 11:38 AM Artyom Glazychev wrote:
> Hello,
> 
> There is a problem with setting MTU for VxLAN. I see that there was a change
> related to MTU and max_frame_size: https://gerrit.fd.io/r/c/vpp/+/34928
> 
> I don't know what VxLAN configuration is right (I've asked about it here
> https://lists.fd.io/g/vpp-dev/topic/vxlan_l3_mode/89205942?p=,,,20,0,0,0::recentpostdate/sticky,,,20,2,0,89205942,previd=1645090589967474984,nextid=1643993575016104745=1645090589967474984=1643993575016104745)
> 
> But neither this:
> DBGvpp# create vxlan tunnel src 10.0.3.1 dst 10.0.3.3 vni 55 l3
> vxlan_tunnel0
> DBGvpp# set interface mtu 1400 vxlan_tunnel0
> set interface mtu: Unsupported (underlying driver doesn't support changing 
> Max Frame Size)
> 
> Nor this:
> DBGvpp# create vxlan tunnel src 10.0.3.1 dst 10.0.3.3 vni 55
> vxlan_tunnel0
> DBGvpp# set interface mtu 1400 vxlan_tunnel0
> set interface mtu: not supported
> is working.
> 
> So, my final question is how to configure the VxLAN L2-tunnel and set MTU?
> Thank you.
> 
> 
> 
> 
> 
> 
> -- 
> Pim van Pelt  
> PBVP1-RIPE - http://www.ipng.nl/
> 
> 
> 


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#20877): https://lists.fd.io/g/vpp-dev/message/20877
Mute This Topic: https://lists.fd.io/mt/89206535/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



[vpp-dev] Community call today

2022-01-25 Thread Damjan Marion via lists.fd.io

Just a reminder: today we have the community call, as we agreed to move to a
bi-weekly format.

I created an .ics file so people can put it in their calendar.

Please let me know if you have any topics to discuss…


— 
Damjan


BEGIN:VCALENDAR
VERSION:2.0
CALSCALE:GREGORIAN
BEGIN:VTIMEZONE
TZID:America/Los_Angeles
LAST-MODIFIED:20201011T015911Z
TZURL:http://tzurl.org/zoneinfo-outlook/America/Los_Angeles
X-LIC-LOCATION:America/Los_Angeles
BEGIN:DAYLIGHT
TZNAME:PDT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
DTSTART:19700308T020000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=2SU
END:DAYLIGHT
BEGIN:STANDARD
TZNAME:PST
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
DTSTART:19701101T020000
RRULE:FREQ=YEARLY;BYMONTH=11;BYDAY=1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20220125T115146Z
UID:FD3D99B2-7E0A-4980-ACB6-1ED9AC34878E
DTSTART;TZID=America/Los_Angeles:20220111T080000
RRULE:FREQ=MONTHLY;BYDAY=2TU,4TU
DTEND;TZID=America/Los_Angeles:20220111T090000
SUMMARY:fd.io VPP Project Community Call
URL:https://zoom.us/my/fastdata?pwd=Z3Z0UnJyUmRIMlU3eTJLcGF6VEptQT09
LOCATION:Zoom Meeting
END:VEVENT
END:VCALENDAR

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#20785): https://lists.fd.io/g/vpp-dev/message/20785
Mute This Topic: https://lists.fd.io/mt/88669896/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] Bihash is considered thread-safe but probably shouldn't

2022-01-20 Thread Damjan Marion via lists.fd.io

Let me resurrect this discussion, as my patch is still in gerrit.

Nick, any chance you can submit your proposal as discussed below?

Thanks,

— 
Damjan



> On 04.11.2021., at 19:35, Damjan Marion via lists.fd.io wrote:
> 
> 
> Dear Nick,
> 
> It will be great if you can support your proposal with running code so we can
> understand exactly what it means.
> 
> — 
> Damjan
> 
>> On 04.11.2021., at 19:24, Nick Zavaritsky  wrote:
>> 
>>  Hi, thanks for an insightful discussion!
>> 
>> I do understand that high performance is one of the most important goals of 
>> vpp, therefore certain solutions might not fly. From my POV, the version 
>> counter would be an improvement. It definitely decreases the probability of 
>> triggering the bug.
>> 
>> Concerning isolcpus, currently this is presented as an optimisation, not a 
>> prerequisite. Without isolcpus, a thread could get preempted for arbitrarily 
>> long. Meaning that no matter how many bits we allocate for the version 
>> field, occasionally they won’t be enough.
>> 
>> I’d love to have something that’s robust no matter how the threads are 
>> scheduled. Would it be possible to use vpp benchmarking lab to evaluate the 
>> performance impact of the proposed solutions?
>> 
>> Finally, I'd like to rehash the reader lock proposal. The idea was that we
>> don’t introduce any atomic operations in the reader path. A reader
>> *publishes* the bucket number it is about to examine in an int
>> rlock[MAX_THREADS] array. Every thread uses a distinct cell in rlock
>> (determined by the thread id), therefore it could be a regular write
>> followed by a barrier. Eliminate false sharing with padding.
>> 
>> Writer locks a bucket as currently implemented (CAS) and then waits until 
>> the bucket number disappears from rlock[].
>> 
>> Reader publishes the bucket number and then checks if the bucket is locked
>> (regular write, barrier, regular read). Good to go if not locked; otherwise
>> remove the bucket number from rlock, wait for the lock to get released, and
>> restart.
>> 
>> The proposal doesn’t introduce any new atomic operations. There still might 
>> be a slowdown due to cache line ping-pong in the rlock array. In the worst 
>> case, it costs us 1 extra cache miss for the reader. Could be coalesced with 
>> the bucket prefetch, making it essentially free (few if any bihash users 
>> prefetch buckets).
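>> 
>> A minimal sketch of what this could look like (illustrative only; 
>> bucket_is_locked () is a hypothetical stand-in for testing the lock bit 
>> in the bucket word, not actual bihash code):
>> 
>> #define RL_MAX_THREADS 64
>> #define RL_NONE (~0ULL)
>> 
>> /* one cache-line-padded slot per thread to eliminate false sharing */
>> typedef struct { volatile u64 bucket; u8 pad[56]; } rlock_slot_t;
>> static rlock_slot_t rlock[RL_MAX_THREADS];
>> 
>> static inline void
>> reader_enter (u32 tid, u64 bi)
>> {
>>   for (;;)
>>     {
>>       rlock[tid].bucket = bi;                   /* regular write ... */
>>       __atomic_thread_fence (__ATOMIC_SEQ_CST); /* ... then barrier */
>>       if (!bucket_is_locked (bi))
>>         return;                                 /* good to go */
>>       rlock[tid].bucket = RL_NONE;              /* un-publish */
>>       while (bucket_is_locked (bi))             /* wait for release */
>>         ;
>>     }                                           /* then restart */
>> }
>> 
>> /* writer: lock the bucket via CAS as currently implemented, then wait
>>    until the bucket number disappears from rlock[] */
>> static inline void
>> writer_wait_readers (u64 bi)
>> {
>>   for (u32 t = 0; t < RL_MAX_THREADS; t++)
>>     while (rlock[t].bucket == bi)
>>       ;
>> }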
>> 
>> Best,
>> 
>> Nick
>> 
>> 
>>> On 3. Nov 2021, at 21:29, Florin Coras via lists.fd.io 
>>>  wrote:
>>> 
>>> 
>>> Agreed it’s unlikely so maybe just use the 2 bits left for the epoch 
>>> counter as a middle ground? The new approach should be better either way :-)
>>> 
>>> Florin
>>> 
>>> 
>>>> On Nov 3, 2021, at 11:55 AM, Damjan Marion  wrote:
>>>> 
>>>> What about the following, we shift offset by 6, as all buckets are aligned 
>>>> to 64, anyway,  and that gives us 6 more bits so we can have 8 bit epoch 
>>>> counter…. ?
>>>> 
>>>> — 
>>>> Damjan
>>>> 
>>>>> On 03.11.2021., at 19:45, Damjan Marion  wrote:
>>>>> 
>>>>> 
>>>>> 
>>>>> yes, i am aware of that, it is extremely unlikely and the only way i can see 
>>>>> this fixed is introducing an epoch on the bucket level but we dont have 
>>>>> enough space there…. 
>>>>> 
>>>>> — 
>>>>> Damjan
>>>>> 
>>>>>> On 03.11.2021., at 19:16, Florin Coras  wrote:
>>>>>> 
>>>>>> Hi Damjan, 
>>>>>> 
>>>>>> Definitely like the scheme but the change bit might not be enough, 
>>>>>> unless I’m misunderstanding. For instance, two consecutive updates to a 
>>>>>> bucket before reader grabs b1 will hide the change. 
>>>>>> 
>>>>>> Florin
>>>>>> 
>>>>>>> On Nov 3, 2021, at 9:36 AM, Damjan Marion via lists.fd.io 
>>>>>>>  wrote:
>>>>>>> 
>>>>>>> 
>>>>>>> Agree with Dave on atomic ops being bad on the reader side.
>>>>>>> 
>>>>>>> What about following schema:
>>>>>>> 
>>>>>>> As bucket is just u64 value on the reader side we grab bucket before 
>>>>>>> (b0) and after (b1) search operation.

Re: [vpp-dev] Risc-V Compilation Error

2022-01-19 Thread Damjan Marion via lists.fd.io

This may help you:

https://gerrit.fd.io/r/c/vpp/+/34972 

— 
Damjan



> On 19.01.2022., at 11:15, Hrishikesh Karanjikar 
>  wrote:
> 
> Hi,
> 
> I am trying to compile vpp on the SiFive HiFive Unmatched board.
> Following is my build environment
> 
> ==
> 
> 
> ubuntu@ubuntu:~/work/vpp$ uname -a
> Linux ubuntu 5.13.0-1008-generic #8-Ubuntu SMP Fri Jan 7 18:50:29 UTC 2022 
> riscv64 riscv64 riscv64 GNU/Linux
> 
> ubuntu@ubuntu:~/work/vpp$ lsb_release -a
> No LSB modules are available.
> Distributor ID: Ubuntu
> Description: Ubuntu 21.10
> Release: 21.10
> Codename: impish
> 
> ==
> I am using the latest master branch for VPP.
> I am getting the following compilation error. I have changed the 
> default-clang format to 11.
> 
> ==
> ubuntu@ubuntu:~/work/vpp$ make pkg-deb
> make[1]: Entering directory '/home/ubuntu/work/vpp/build-root'
>  Arch for platform 'vpp' is native 
>  Finding source for external 
>  Makefile fragment found in 
> /home/ubuntu/work/vpp/build-data/packages/external.mk  
> 
>  Source found in /home/ubuntu/work/vpp/build 
>  Arch for platform 'vpp' is native 
>  Finding source for vpp 
>  Makefile fragment found in 
> /home/ubuntu/work/vpp/build-data/packages/vpp.mk  
>  Source found in /home/ubuntu/work/vpp/src 
> find: ‘/home/ubuntu/work/vpp/build-root/config.site’: No such file or 
> directory
>  Configuring external: nothing to do 
>  Building external in 
> /home/ubuntu/work/vpp/build-root/build-vpp-native/external 
> 
>  Installing external: nothing to do 
> find: ‘/home/ubuntu/work/vpp/build-root/config.site’: No such file or 
> directory
>  Configuring vpp: nothing to do 
>  Building vpp in /home/ubuntu/work/vpp/build-root/build-vpp-native/vpp 
> 
> [2/1501] Building C object 
> CMakeFiles/vppinfra/CMakeFiles/vppinfra_objs.dir/cpu.c.o
> FAILED: CMakeFiles/vppinfra/CMakeFiles/vppinfra_objs.dir/cpu.c.o 
> ccache /usr/lib/ccache/clang-13 --target=riscv64-linux-gnu -DHAVE_FCNTL64 
> -D_FORTIFY_SOURCE=2 -I/home/ubuntu/work/vpp/src -ICMakeFiles -fPIC   
> -fvisibility=hidden -g -Werror -Wall -Wno-address-of-packed-member -O3 
> -fstack-protector -fno-common -MD -MT 
> CMakeFiles/vppinfra/CMakeFiles/vppinfra_objs.dir/cpu.c.o -MF 
> CMakeFiles/vppinfra/CMakeFiles/vppinfra_objs.dir/cpu.c.o.d -o 
> CMakeFiles/vppinfra/CMakeFiles/vppinfra_objs.dir/cpu.c.o -c 
> /home/ubuntu/work/vpp/src/vppinfra/cpu.c
> /home/ubuntu/work/vpp/src/vppinfra/cpu.c:203:1: error: unused function 
> 'flag_skip_prefix' [-Werror,-Wunused-function]
> flag_skip_prefix (char const *flag, const char *pfx, int len)
> ^
> 1 error generated.
> [5/1501] Building C object 
> CMakeFiles/vppinfra/CMakeFiles/vppinfra_objs.dir/format.c.o
> ninja: build stopped: subcommand failed.
> make[1]: *** [Makefile:693: vpp-build] Error 1
> make[1]: Leaving directory '/home/ubuntu/work/vpp/build-root'
> make: *** [Makefile:593: pkg-deb] Error 2
> 
> ==
> 
> 
> -- 
> 
> Regards,
> Hrishikesh Karanjikar
> 
> 
> 


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#20762): https://lists.fd.io/g/vpp-dev/message/20762
Mute This Topic: https://lists.fd.io/mt/88531162/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev]question on memif configuration

2022-01-12 Thread Damjan Marion via lists.fd.io
I am not able to help with your libmemif-related questions, but I can answer the 
1st one.

What you see is OK. The slave is always the ring producer. In the s2m direction the 
slave enqueues packets to the ring, and in the m2s direction the slave enqueues empty 
buffers. So from your output it is clear that the s2m ring is empty and the m2s ring 
is full of empty buffers. 
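
In other words, occupancy falls out of plain u16 head/tail arithmetic, e.g. (a 
sketch; names are illustrative, not libmemif internals):

static inline u16
ring_used (u16 head, u16 tail)
{
  return (u16) (head - tail); /* u16 arithmetic wraps naturally */
}

/* s2m ring above: head == tail == 9 -> 0 entries queued (empty)
   m2s ring above: 2057 - 9 == 2048 == ring-size -> full of empty buffers */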

— 
Damjan

> On 11.01.2022., at 19:09, vipin allawadhi  wrote:
> 
> 
> Hello Experts,
> 
> I have a question on memif configuration. One of our application connects to 
> VPP via memif. following is memif configuration for this connection:
> 
> vpp# show memif memif0/2
> 
> interface memif0/2
>   remote-name "XYZ"
>   remote-interface "memif_conn"
>   socket-id 0 id 2 mode ip
>   flags admin-up connected
>   listener-fd 50 conn-fd 51
>   num-s2m-rings 1 num-m2s-rings 1 buffer-size 0 num-regions 2
>   region 0 size 65792 fd 56
>   region 1 size 264241152 fd 59
> master-to-slave ring 0:
>   region 0 offset 32896 ring-size 2048 int-fd 66
>   head 2057 tail 9 flags 0x interrupts 9
> slave-to-master ring 0:
>   region 0 offset 0 ring-size 2048 int-fd 62
>   head 9 tail 9 flags 0x0001 interrupts 0
> vpp#
> 
> one question related to above config is, slave-to-master ring's head and tail 
> points to same index even though ring size is 2048. Is this correct?  in case 
> of master-to-slave ring, head and tail index differ by 2048 which is exactly 
> same as ring size. Let us know your opinion on this. 
> 
> Another problem (major one) is, when we send multiple messages of size 64K 
> bytes from slave to master in a tight loop, those messages are received 
> corrupted on master side. by corrupted, I actually mean is, succeeding 
> message content is written over previous message content. same is observed 
> for messages from master to slave also. when we send a single message from 
> slave to master, we do not see any problem but if we increase message sending 
> rate, we hit this problem immediately. 
> 
> that's how we send the message from slave to master and master is expected to 
> respond back for each received message.
> 
> #define MAX_COUNT 100
> for (tmp = 0; tmp < MAX_COUNT; tmp++) {
> memif_send_msg (0, 0, data_len, data);
> } 
> 
> memif_send_msg (int index, int q_id, uint16_t data_len, void *data)
> {
>uint64_t count = 1;
>memif_connection_t *c = &memif_connection[index];
>if (c->conn == NULL)
>{
>   INFO ("No connection at index %d. Returning Failure ...\n", index);
>   return SM_RC_ERROR;
>}
> 
>   uint16_t tx, i;
>   int err = MEMIF_ERR_SUCCESS;
>   uint32_t seq = 0;
>   struct timespec start, end;
> 
>   memif_conn_args_t *args = &(c->args);
>   icmpr_flow_mode_t transport_mode = (icmpr_flow_mode_t) args->mode;
> 
>   memset (&start, 0, sizeof (start));
>   memset (&end, 0, sizeof (end));
> 
>   timespec_get (&start, TIME_UTC);
>   while (count)
>   {
>   i = 0;
>   err = memif_buffer_alloc (c->conn, q_id, c->tx_bufs, MAX_MEMIF_BUFS > 
> count ? count : MAX_MEMIF_BUFS, &tx, 128);
> 
>   if ((err != MEMIF_ERR_SUCCESS) && (err != MEMIF_ERR_NOBUF_RING))
>   {
> INFO ("memif_buffer_alloc: %s Returning Failure...\n", 
> memif_strerror (err));
> return SM_RC_ERROR;
>   }
>   c->tx_buf_num += tx;
>   while (tx)
>   {
>   while (tx > 2)
>   {
>   memif_generate_packet ((void *) c->tx_bufs[i].data, 
> &c->tx_bufs[i].len, c->ip_addr, c->ip_daddr, c->hw_daddr, data, data_len, 
> transport_mode);
>   memif_generate_packet ((void *) c->tx_bufs[i + 1].data, 
> &c->tx_bufs[i + 1].len, c->ip_addr, c->ip_daddr, c->hw_daddr, data, data_len, 
> transport_mode);
>   i += 2;
>   tx -= 2;
>   }
> 
>   /* Generate the last remaining one */
>   if (tx)
>   {
>   memif_generate_packet ((void *) c->tx_bufs[i].data, 
> &c->tx_bufs[i].len, c->ip_addr, c->ip_daddr, c->hw_daddr, data, data_len, 
> transport_mode);
>   i++;
>   tx--;
>   }
>   }
> 
>   err = memif_tx_burst (c->conn, q_id, c->tx_bufs, c->tx_buf_num, &tx);
>   if (err != MEMIF_ERR_SUCCESS)
>   {
>  INFO ("memif_tx_burst: %s Returning Failure...\n", memif_strerror 
> (err));
>   return SM_RC_ERROR;
>   }
> 
>   c->tx_buf_num -= tx;
>   c->tx_counter += tx;
>   count -= tx;
>   }
> }
> 
> We doubt the way we are invoking the "memif_buffer_alloc" function: we are 
> not incrementing the "tx_buf" pointer, so at every invocation of the 
> "memif_send_msg" function we will try to alloc the same "tx buffer" again 
> and may end up overwriting the previously written content. This may garble 
> the previous message content if it has not yet been sent to memif. 
> 
> Can you please go through this analysis and share your valuable inputs.
> 
> Thanks
> Vipin A.
> 
> 
> 

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#20703): 

[vpp-dev] Topics for Tuesday Call

2022-01-07 Thread Damjan Marion via lists.fd.io

Hello and Happy New Year!

I have 3 topics for Tuesday call[1]:

 - DPDK plugin rework
 - default MTU on Ethernet interfaces (change default to 1500 to avoid need for 
no-multi-seg in startup.conf)
 - change in CPU pinning defaults (we don’t use DPDK to launch threads anymore 
so we have more freedom)

Please let me know if there is anything else to put on the list.

Thanks,

— 
Damjan


[1] http://wiki.fd.io/view/VPP/Meeting



-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#20686): https://lists.fd.io/g/vpp-dev/message/20686
Mute This Topic: https://lists.fd.io/mt/88259907/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



[vpp-dev] vlib_frame_t aux data

2021-12-23 Thread Damjan Marion via lists.fd.io

I just submitted a patch which cleans up the vlib_frame_t allocation code and adds 
support for aux data.

https://gerrit.fd.io/r/c/vpp/+/34798

An example use case is a node passing an array of sw_if_index values to the next 
node, so the next node doesn’t need to do expensive parsing of buffer metadata.

It is as simple as:

typedef struct {
  u32 sw_if_index;
  u32 foo;
} my_frame_aux_data_t;

VLIB_REGISTER_NODE (my_node) = {
.vector_size = sizeof (u32),
.aux_size = sizeof (my_frame_aux_data_t),
};

And the inside the node function:

my_frame_aux_data_t *ad = vlib_frame_aux_args (frame);

Sending node can pass parameters with:

  vlib_next_frame_t *nf;
  vlib_frame_t *f;
  nf = vlib_node_runtime_get_next_frame (vm, node, next_index);
  f = vlib_get_frame (vm, nf->frame);
  aux = vlib_frame_aux_args (f);
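
For example, a sending node could fill one aux entry per enqueued buffer, along 
these lines (a sketch; bi0 and sw_if_index0 are illustrative, and it assumes aux 
entries are parallel to the frame’s vector entries):

  u32 *to_next = vlib_frame_vector_args (f);
  my_frame_aux_data_t *ad = vlib_frame_aux_args (f);
  to_next[f->n_vectors] = bi0;
  ad[f->n_vectors] = (my_frame_aux_data_t) { .sw_if_index = sw_if_index0,
                                             .foo = 42 };
  f->n_vectors++;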

Please let me know if you have any feedback...

— 
Damjan




-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#20663): https://lists.fd.io/g/vpp-dev/message/20663
Mute This Topic: https://lists.fd.io/mt/87924883/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: Recommended NICs and drivers (was Re: [vpp-dev] About Risc-V Porting)

2021-11-22 Thread Damjan Marion via lists.fd.io

https://docs.fd.io/csit/master/trending/trending/ip4-2n-clx-xxv710.html#t1c

native AVF driver 22.5 Mpps vs DPDK i40e 17.5 Mpps on the same hardware.

Beside what Benoit listed we have following native drivers:
 - virtio
 - vhost-user
 - tap
 - vmxnet3
 - memif
 - af_packet
 - af_xdp
 
Ben was right about outdated documentation, contributions in this area are more 
than welcome….

And just to make it clear, I do respect DPDK, it brought a lot of traction to 
userspace networking and it is a very good fit in many cases.
The problem we have with DPDK is that it is built as a monolithic framework and not 
as a set of reusable libraries.
If you want to use the NIC drivers you are forced to use the buffer management, memory 
allocator, command line parser, and many more.
That makes it very hard to use in existing projects like VPP, where we need to 
run a lot of DPDK code which is redundant for us just to get some functionality 
working.
It would be really nice if DPDK provided something that rdma-core already 
does: direct access to NIC descriptors, so we can extract only the data we need 
instead of copying all data from the descriptor to rte_mbuf and then from rte_mbuf 
to our own metadata.
Also, in many places we had to write thousands of lines of code whose sole purpose 
is to cheat DPDK; a good example is EAL command line construction or 
fake buffer pools…
As long as DPDK stays as monolithic as it is, our only option is to have native 
drivers….

— 
Damjan



> On 09.11.2021., at 21:05, Mrityunjay Kumar  wrote:
> 
> 
> Thanks ben, 
> Inline please, 
> 
> 
> 
> 
> On Tue, 9 Nov, 2021, 7:24 pm Benoit Ganne (bganne) via lists.fd.io, 
>  wrote:
> Hi Ben,
> 
> DPDK is definitely supported in VPP but it has performance impacts
> 
> If we have performance benchmarking, please share it with us. 
> 
> 
> and also carries a lot of dependencies (see for example the discussion on 
> RISC-V: if DPDK was mandatory, we'd have to have both DPDK and VPP on RISC-V, 
> but as VPP have native drivers too, we can get away without DPDK).
> 
> Nice, please share the wiki link; I am not aware of that, 
> 
> 
> VPP currently have native drivers for the following physical NICs:
>  - avf: Intel with AVF support (Fortville and Columbiaville)
>  - rdma: Mellanox mlx5 (ConnectX-4/5/6)
> 
> If these are supported, please share the wiki link,  
> 
> 
> Thanks for details 
> //MJ
> 
> 
> 
> 
> Best
> ben
> 
> > -Original Message-
> > From: vpp-dev@lists.fd.io  On Behalf Of Ben McKeegan
> > Sent: mardi 9 novembre 2021 11:38
> > To: vpp-dev@lists.fd.io
> > Subject: Recommended NICs and drivers (was Re: [vpp-dev] About Risc-V
> > Porting)
> > 
> > Hi,
> > 
> > On 08/11/2021 19:15, Damjan Marion via lists.fd.io wrote:
> > >
> > > and that VPP works better when DPDK is not involved.
> > >
> > 
> > 
> > Apologies for hijacking the thread but as a new VPP user who has spent
> > many many hours reading through every scrap of documentation I can find,
> > this statement comes as a surprise to me.   Unfortunately a lot of the
> > VPP documentation seems a bit out of date.  Indeed, the documentation
> > for most the recent release still says it 'Leverages best-of-breed open
> > source driver technology: DPDK':
> > 
> > https://s3-docs.fd.io/vpp/21.10/whatisvpp/performance.html
> > 
> > So if DPDK isn't the best option any more, what do the people of this
> > list recommend as a NIC and driver combination, and why?
> > (Unfortunately, I don't have a huge budget at my disposal, so I'm
> > looking for some best value-for-money options.)
> > 
> > Thanks,
> > Ben.
> 
> 
> 
> 
> 
> 


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#20531): https://lists.fd.io/g/vpp-dev/message/20531
Mute This Topic: https://lists.fd.io/mt/86928356/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] About Risc-V Porting

2021-11-12 Thread Damjan Marion via lists.fd.io

Hi,

I tried to repro with gcc-10 on ubuntu 21.10 x86_64 and everything works fine.
The only caveat is that I had to pass -Wno-stringop-overflow in CFLAGS.

— 
Damjan



> On 10.11.2021., at 08:29, Hrishikesh Karanjikar 
>  wrote:
> 
> Hi Damjan,
> 
> I upgraded my Unmatched board to run Ubuntu 21.10 from 21.04.
> I was able to build VPP on Unmatched.
> I am also trying to build it on Qemu with GCC 10.
> I am getting following error,
> 
> 
> 
> ubuntu@ubuntu:~/work/vpp$ make build
> make[1]: Entering directory '/home/ubuntu/work/vpp/build-root'
>  Arch for platform 'vpp' is native 
>  Finding source for external 
>  Makefile fragment found in 
> /home/ubuntu/work/vpp/build-data/packages/external.mk  
> 
>  Source found in /home/ubuntu/work/vpp/build 
>  Arch for platform 'vpp' is native 
>  Finding source for vpp 
>  Makefile fragment found in 
> /home/ubuntu/work/vpp/build-data/packages/vpp.mk  
>  Source found in /home/ubuntu/work/vpp/src 
> find: ‘/home/ubuntu/work/vpp/build-root/config.site’: No such file or 
> directory
>  Configuring external: nothing to do 
>  Building external: nothing to do 
>  Installing external: nothing to do 
> find: ‘/home/ubuntu/work/vpp/build-root/config.site’: No such file or 
> directory
>  Configuring vpp: nothing to do 
>  Building vpp in 
> /home/ubuntu/work/vpp/build-root/build-vpp_debug-native/vpp 
> [2/1463] Linking C executable bin/svmtool
> FAILED: bin/svmtool 
> : && ccache /usr/lib/ccache/gcc-10
> CMakeFiles/svm/CMakeFiles/svmtool.dir/svmtool.c.o  -o bin/svmtool  
> -Wl,-rpath,/home/ubuntu/work/vpp/build-root/build-vpp_debug-native/vpp/lib/riscv64-linux-gnu::
>   lib/riscv64-linux-gnu/libsvm.so.22.02  
> lib/riscv64-linux-gnu/libvppinfra.so.22.02  -lm  -lrt  -lpthread && :
> /usr/bin/ld: lib/riscv64-linux-gnu/libvppinfra.so.22.02: undefined reference 
> to `__atomic_exchange_1'
> collect2: error: ld returned 1 exit status
> [3/1463] Linking C executable bin/svmdbtool
> FAILED: bin/svmdbtool 
> : && ccache /usr/lib/ccache/gcc-10
> CMakeFiles/svm/CMakeFiles/svmdbtool.dir/svmdbtool.c.o  -o bin/svmdbtool  
> -Wl,-rpath,/home/ubuntu/work/vpp/build-root/build-vpp_debug-native/vpp/lib/riscv64-linux-gnu::
>   lib/riscv64-linux-gnu/libsvmdb.so.22.02  
> lib/riscv64-linux-gnu/libsvm.so.22.02  
> lib/riscv64-linux-gnu/libvppinfra.so.22.02  -lm  -lrt  -lpthread && :
> /usr/bin/ld: lib/riscv64-linux-gnu/libvppinfra.so.22.02: undefined reference 
> to `__atomic_exchange_1'
> collect2: error: ld returned 1 exit status
> [5/1463] Building C object CMakeFiles/vlib/CMakeFiles/vlib_objs.dir/drop.c.o
> ninja: build stopped: subcommand failed.
> make[1]: *** [Makefile:693: vpp-build] Error 1
> make[1]: Leaving directory '/home/ubuntu/work/vpp/build-root'
> make: *** [Makefile:356: build] Error 2
> 
> 
> Libatomic is present on my machine.
> 
> Can you give me some pointers to resolve the same?
> 
> Thanks,
> Hrishikesh
> 
> On Mon, Nov 8, 2021 at 8:19 PM Hrishikesh Karanjikar 
> mailto:hrishikesh.karanji...@gmail.com>> 
> wrote:
> Hi,
> 
> This is great.
> Thanks a lot.
> Let me try that.
> 
> Hrishikesh
> 
> On Mon, Nov 8, 2021 at 8:00 PM Damjan Marion  > wrote:
> I compiled directly on the Unmatched board. I also submitted a series of 
> patches which fix all the 
> issues you are referring to.
> 
> you can use both clang and gcc, problem with clang is that some parts of
> VPP  unconditionally turn address sanitiser on and there is no ASAN shared 
> libraries available for risc-v.
> You can bypass this temporarily by commenting out test_pnat, test_vat and 
> test_vat2 targets.
> 
> I also managed to cross-compile vpp on ubuntu system by using debian 
> multiarch libs.
> 
> # dpkg --add-architecture riscv64
> 
> Update sources.list:
> 
> deb [arch=arm64,armhf,riscv64] http://ports.ubuntu.com/ubuntu-ports/ 
>  impish main restricted universe 
> multiverse
> deb [arch=arm64,armhf,riscv64] http://ports.ubuntu.com/ubuntu-ports/ 
>  impish-updates main restricted 
> universe multiverse
> deb [arch=arm64,armhf,riscv64] http://ports.ubuntu.com/ubuntu-ports/ 
>  impish-backports main restricted 
> universe multiverse
> 
> # apt update
> 
> # apt install crossbuild-essential-riscv64 libssl-dev:riscv64 
> uuid-dev:riscv64 libnl-3-dev:riscv64 libnl-route-3-dev:riscv64 
> libbpf-dev:riscv64
> 
> 
> $ cmake \
>   -DCMAKE_SYSTEM_NAME=Linux \
>   -DCMAKE_SYSTEM_PROCESSOR=riscv64 \
>   -DCMAKE_C_COMPILER=riscv64-linux-gnu-gcc \
>   -DCMAKE_CXX_COMPILER=riscv64-linux-gnu-gcc \
>   -DCMAKE_C_COMPILER_TARGET=riscv64-linux-gnu \
>   

[vpp-dev] Dave Wallace to be VPP project representative at TSC

2021-11-10 Thread Damjan Marion via lists.fd.io

Guys,

According to the new fd.io Technical Community Doc:

---
A Core Projects PTL may choose to appoint another committer from that Core 
Project to represent the core project in their stead.
---

As Dave has already been attending TSC meetings for a long time and is very 
active, I decided to appoint him to represent the VPP project.

Thanks,

— 
Damjan




-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#20476): https://lists.fd.io/g/vpp-dev/message/20476
Mute This Topic: https://lists.fd.io/mt/86965867/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] About Risc-V Porting

2021-11-08 Thread Damjan Marion via lists.fd.io


> On 08.11.2021., at 20:21, Mrityunjay Kumar  wrote:
> 
> Thank you for the clarification. 
> 
> Inline please, 
> 
> 
> 
> 
> On Tue, 9 Nov, 2021, 12:45 am Damjan Marion,  wrote:
> > 
> > > 
> > > 
> > > 
> > > On Mon, 8 Nov, 2021, 10:18 pm Damjan Marion via lists.fd.io, 
> > >  wrote:
> > > 
> > > No, I didn’t bother… Not using DPDK for a long time...
> > 
> > We know that,
> 
> Who are “We”?
> 
> Dpdk users as older than VPP as open source. 

DPDK users are not the only people on this list, and even if you had a mandate to 
represent all of them, I don't think you have 
the right to dictate what people are allowed to say on this list.

Also, it is not clear to me why it matters which one is older in open-source.
This sounds to me like “my dad is stronger than yours” kind of argumentation.

> 
> 
> >  not required to say, but many of us following DPDK and VPP.
> 
> I disagree, I think it is important to say, as many people are not aware that 
> VPP is not a DPDK application
> and that VPP works better when DPDK is not involved.
> 
> May be, but for me still it's magic,   
> 
> 
> I am saying let have respect of both users. 

I absolutely have respect for both users and I didn’t do anything disrespectful.
I simply said that i didn’t bother to compile dpdk as it doesn’t bring any 
value to me.

— 
Damjan



-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#20457): https://lists.fd.io/g/vpp-dev/message/20457
Mute This Topic: https://lists.fd.io/mt/86312689/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] About Risc-V Porting

2021-11-08 Thread Damjan Marion via lists.fd.io
> 
> > 
> > 
> > 
> > On Mon, 8 Nov, 2021, 10:18 pm Damjan Marion via lists.fd.io, 
> >  wrote:
> > 
> > No, I didn’t bother… Not using DPDK for a long time...
> 
> We know that,

Who are “We”?

>  not required to say, but many of us following DPDK and VPP.

I disagree, I think it is important to say, as many people are not aware that VPP 
is not a DPDK application
and that VPP works better when DPDK is not involved.

>  Let someone reply answer of exact inline question, 

The exact question was addressed to me, so please explain how “someone” can reply.

— 
Damjan
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#20455): https://lists.fd.io/g/vpp-dev/message/20455
Mute This Topic: https://lists.fd.io/mt/86312689/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] About Risc-V Porting

2021-11-08 Thread Damjan Marion via lists.fd.io

Sorry, I don’t understand your question.

Are you asking me to forward Hrishikesh’s question (whether I compiled VPP with DPDK 
on the RISC-V board) to the DPDK team?
How should they know whether I compiled VPP with DPDK or not?

Even if it is a valid question for the DPDK community, I don't feel like an e-mail 
relay agent.
Everybody is free to go and ask questions on the DPDK mailing list.

— 
Damjan



> On 08.11.2021., at 19:18, Mrityunjay Kumar  wrote:
> 
> Damjan, hi again
> 
> Is it possible to share community queries with the DPDK team? 
> 
> 
> DPDK itself is a very big code base. 
> 
> I am only a user of both open-source projects. Knowledge exchange is appreciated. 
> 
> 
> 
> /MJ
> 
> 
> 
> 
> On Mon, 8 Nov, 2021, 10:18 pm Damjan Marion via lists.fd.io, 
>  wrote:
> 
> No, I didn’t bother… Not using DPDK for a long time...
> 
> — 
> Damjan
> 
>> On 08.11.2021., at 16:51, Hrishikesh Karanjikar 
>>  wrote:
>> 
>> Hi,
>> 
>> One more thing.
>> Did you compile with DPDK?
>> I compiled with DPDK. I have ported DPDK for Risc-V. Not upstreamed yet.
>> I had to compile without rdma-core.
>> 
>> 
>> 
>> Thanks,
>> Hrishikesh
>> 
>> On Mon, Nov 8, 2021 at 8:19 PM Hrishikesh Karanjikar 
>>  wrote:
>> Hi,
>> 
>> This is great.
>> Thanks a lot.
>> Let me try that.
>> 
>> Hrishikesh
>> 
>> On Mon, Nov 8, 2021 at 8:00 PM Damjan Marion  wrote:
>> I compiled directly on the Unmatched board. I also submitted a series of 
>> patches which fix all the 
>> issues you are referring to.
>> 
>> you can use both clang and gcc, problem with clang is that some parts of
>> VPP  unconditionally turn address sanitiser on and there is no ASAN shared 
>> libraries available for risc-v.
>> You can bypass this temporarily by commenting out test_pnat, test_vat and 
>> test_vat2 targets.
>> 
>> I also managed to cross-compile vpp on ubuntu system by using debian 
>> multiarch libs.
>> 
>> # dpkg --add-architecture riscv64
>> 
>> Update sources.list:
>> 
>> deb [arch=arm64,armhf,riscv64] http://ports.ubuntu.com/ubuntu-ports/ impish 
>> main restricted universe multiverse
>> deb [arch=arm64,armhf,riscv64] http://ports.ubuntu.com/ubuntu-ports/ 
>> impish-updates main restricted universe multiverse
>> deb [arch=arm64,armhf,riscv64] http://ports.ubuntu.com/ubuntu-ports/ 
>> impish-backports main restricted universe multiverse
>> 
>> # apt update
>> 
>> # apt install crossbuild-essential-riscv64 libssl-dev:riscv64 
>> uuid-dev:riscv64 libnl-3-dev:riscv64 libnl-route-3-dev:riscv64 
>> libbpf-dev:riscv64
>> 
>> 
>> $ cmake \
>>   -DCMAKE_SYSTEM_NAME=Linux \
>>   -DCMAKE_SYSTEM_PROCESSOR=riscv64 \
>>   -DCMAKE_C_COMPILER=riscv64-linux-gnu-gcc \
>>   -DCMAKE_CXX_COMPILER=riscv64-linux-gnu-gcc \
>>   -DCMAKE_C_COMPILER_TARGET=riscv64-linux-gnu \
>>   -DCMAKE_CXX_COMPILER_TARGET=riscv64-linux-gnu \
>>   -DCMAKE_ASM_COMPILER_TARGET=riscv64-linux-gnu \
>>   -DCMAKE_FIND_ROOT_PATH_MODE_PROGRAM=NEVER \
>>   -DCMAKE_FIND_ROOT_PATH_MODE_LIBRARY=BOTH \
>>   -DCMAKE_FIND_ROOT_PATH_MODE_INCLUDE=BOTH \
>>   -DCMAKE_FIND_ROOT_PATH_MODE_PACKAGE=ONLY \
>>   -DCMAKE_FIND_ROOT_PATH=/usr/riscv64-linux-gnu \
>>   -DCMAKE_INSTALL_PREFIX=/usr/local \
>>   -DCMAKE_EXPORT_COMPILE_COMMANDS:BOOL=ON \
>>   -DCMAKE_BUILD_TYPE:STRING=debug \
>>   -G Ninja \
>>   -S src \
>>   -B .
>> 
>> $ ninja
>> 
>> $ file bin/vpp
>> bin/vpp: ELF 64-bit LSB executable, UCB RISC-V, version 1 (SYSV), 
>> dynamically linked, interpreter /lib/ld-linux-riscv64-lp64d.so.1, 
>> BuildID[sha1]=51ac741e44727379a0fbb5936acea4d7b8bdd624, for GNU/Linux 
>> 4.15.0, with debug_info, not stripped
>> 
>> And run with qemu:
>> 
>> $ qemu-riscv64-static ./bin/vpp unix interactive
>> buffer  [warn  ]: numa[0] falling back to non-hugepage backed buffer 
>> pool (vlib_physmem_shared_map_create: pmalloc_map_pages: failed to mmap 19 
>> pages at 0x404fc0 fd 4 numa 0 flags 0x11: Invalid argument)
>> buffer  [warn  ]: numa[1] falling back to non-hugepage backed buffer 
>> pool (vlib_physmem_shared_map_create: pmalloc_map_pages: failed to set 
>> mempolicy for numa node 1: Function not implemented)
>> vlib_physmem_shared_map_create: pmalloc_map_pages: failed to set mempolicy 
>> for numa node 1: Function not implementedsvm_queue_init:57: mutex_init: No 
>> such file or directory (errno 2)
>> svm_queue_init:57: mutex_init: No such file or directory (errno 2)
>> svm

Re: [vpp-dev] About Risc-V Porting

2021-11-08 Thread Damjan Marion via lists.fd.io

No, I didn’t bother… Not using DPDK for a long time...

— 
Damjan

> On 08.11.2021., at 16:51, Hrishikesh Karanjikar 
>  wrote:
> 
> Hi,
> 
> One more thing.
> Did you compile with DPDK?
> I compiled with DPDK. I have ported DPDK for Risc-V. Not upstreamed yet.
> I had to compile without rdma-core.
> 
> 
> 
> Thanks,
> Hrishikesh
> 
> On Mon, Nov 8, 2021 at 8:19 PM Hrishikesh Karanjikar 
> mailto:hrishikesh.karanji...@gmail.com>> 
> wrote:
> Hi,
> 
> This is great.
> Thanks a lot.
> Let me try that.
> 
> Hrishikesh
> 
> On Mon, Nov 8, 2021 at 8:00 PM Damjan Marion  > wrote:
> I compiled directly on the Unmatched board. I also submitted a series of 
> patches which fix all the 
> issues you are referring to.
> 
> you can use both clang and gcc, problem with clang is that some parts of
> VPP  unconditionally turn address sanitiser on and there is no ASAN shared 
> libraries available for risc-v.
> You can bypass this temporarily by commenting out test_pnat, test_vat and 
> test_vat2 targets.
> 
> I also managed to cross-compile vpp on ubuntu system by using debian 
> multiarch libs.
> 
> # dpkg --add-architecture riscv64
> 
> Update sources.list:
> 
> deb [arch=arm64,armhf,riscv64] http://ports.ubuntu.com/ubuntu-ports/ 
>  impish main restricted universe 
> multiverse
> deb [arch=arm64,armhf,riscv64] http://ports.ubuntu.com/ubuntu-ports/ 
>  impish-updates main restricted 
> universe multiverse
> deb [arch=arm64,armhf,riscv64] http://ports.ubuntu.com/ubuntu-ports/ 
>  impish-backports main restricted 
> universe multiverse
> 
> # apt update
> 
> # apt install crossbuild-essential-riscv64 libssl-dev:riscv64 
> uuid-dev:riscv64 libnl-3-dev:riscv64 libnl-route-3-dev:riscv64 
> libbpf-dev:riscv64
> 
> 
> $ cmake \
>   -DCMAKE_SYSTEM_NAME=Linux \
>   -DCMAKE_SYSTEM_PROCESSOR=riscv64 \
>   -DCMAKE_C_COMPILER=riscv64-linux-gnu-gcc \
>   -DCMAKE_CXX_COMPILER=riscv64-linux-gnu-gcc \
>   -DCMAKE_C_COMPILER_TARGET=riscv64-linux-gnu \
>   -DCMAKE_CXX_COMPILER_TARGET=riscv64-linux-gnu \
>   -DCMAKE_ASM_COMPILER_TARGET=riscv64-linux-gnu \
>   -DCMAKE_FIND_ROOT_PATH_MODE_PROGRAM=NEVER \
>   -DCMAKE_FIND_ROOT_PATH_MODE_LIBRARY=BOTH \
>   -DCMAKE_FIND_ROOT_PATH_MODE_INCLUDE=BOTH \
>   -DCMAKE_FIND_ROOT_PATH_MODE_PACKAGE=ONLY \
>   -DCMAKE_FIND_ROOT_PATH=/usr/riscv64-linux-gnu \
>   -DCMAKE_INSTALL_PREFIX=/usr/local \
>   -DCMAKE_EXPORT_COMPILE_COMMANDS:BOOL=ON \
>   -DCMAKE_BUILD_TYPE:STRING=debug \
>   -G Ninja \
>   -S src \
>   -B .
> 
> $ ninja
> 
> $ file bin/vpp
> bin/vpp: ELF 64-bit LSB executable, UCB RISC-V, version 1 (SYSV), dynamically 
> linked, interpreter /lib/ld-linux-riscv64-lp64d.so.1, 
> BuildID[sha1]=51ac741e44727379a0fbb5936acea4d7b8bdd624, for GNU/Linux 4.15.0, 
> with debug_info, not stripped
> 
> And run with qemu:
> 
> $ qemu-riscv64-static ./bin/vpp unix interactive
> buffer  [warn  ]: numa[0] falling back to non-hugepage backed buffer pool 
> (vlib_physmem_shared_map_create: pmalloc_map_pages: failed to mmap 19 pages 
> at 0x404fc0 fd 4 numa 0 flags 0x11: Invalid argument)
> buffer  [warn  ]: numa[1] falling back to non-hugepage backed buffer pool 
> (vlib_physmem_shared_map_create: pmalloc_map_pages: failed to set mempolicy 
> for numa node 1: Function not implemented)
> vlib_physmem_shared_map_create: pmalloc_map_pages: failed to set mempolicy 
> for numa node 1: Function not implementedsvm_queue_init:57: mutex_init: No 
> such file or directory (errno 2)
> svm_queue_init:57: mutex_init: No such file or directory (errno 2)
> svm_queue_init:57: mutex_init: No such file or directory (errno 2)
> svm_queue_init:57: mutex_init: No such file or directory (errno 2)
> svm_queue_init:57: mutex_init: No such file or directory (errno 2)
> svm_queue_init:57: mutex_init: No such file or directory (errno 2)
> svm_queue_init:57: mutex_init: No such file or directory (errno 2)
> vat-plug/load  [error ]: vat_plugin_register: oddbuf plugin not loaded...
>     _______    _        _   _____  ___
>  __/ __/ _ \  (_)__    | | / / _ \/ _ \
>  _/ _// // / / / _ \   | |/ / ___/ ___/
>  /_/ /____(_)_/\___/   |___/_/  /_/
> 
> DBGvpp#
> 
> 
> — 
> Damjan
> 
> 
> 
> > On 08.11.2021., at 14:59, Hrishikesh Karanjikar 
> > mailto:hrishikesh.karanji...@gmail.com>> 
> > wrote:
> > 
> > Hi,
> > 
> > Thanks for this patch. I will check it out. Which compile did you use? Did 
> > you cross compile or locally compiled it on Qemu or any other platform?
> > I was able to compile VPP using GCC10 locally on Qemu but I had to do other 
> > modifications.
> > At many places I was able to put RiscV specific code but vector support for 
> > RiscV is still not available so I had to use stubs for compilation to work.
> > 
> > Thanks,
> > Hrishikesh
> > 
> > On Mon, Nov 1, 2021 at 1:53 AM Damjan Marion  > > wrote:
> > 
> > Here it is:
> > 
> > 

Re: [vpp-dev] how to enable avx512 for all plugins and drivers?

2021-11-08 Thread Damjan Marion via lists.fd.io


There is no easy way. You can do it node by node with the “set node function 
<node-name> <variant>” CLI.
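
For example (node and variant names are illustrative; the variants actually 
available depend on the build):

vpp# set node function dpdk-input avx512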

The problem is that in older versions of VPP we kept AVX-512 off due to the impact 
of power level transitions.

In newer versions we have AVX-512 enabled for Skylake-X / Cascade Lake, but with 
256-bit register sizes, as in that case the use of AVX-512 instructions doesn’t 
trigger power level transitions.

So in your case turning on AVX-512 will likely make things slower 
(assuming that you use Skylake-X / Cascade Lake).
I suggest using a newer version of VPP.

— 
Damjan



> On 08.11.2021., at 02:58, haiyan...@ilinkall.cn wrote:
> 
> I'm using vpp 20.01, is there any way that I can enable it manually ?
> 
> haiyan...@ilinkall.cn
>  
> From: Damjan Marion via lists.fd.io
> Date: 2021-11-05 22:25
> To: haiyan.li
> CC: vpp-dev
> Subject: Re: [vpp-dev] how to enable avx512 for all plugins and drivers?
> 
> Based on your output it looks like you are using a quite old version of VPP.
> In recent versions of VPP, AVX-512 instructions are used automatically if 
> the CPU supports them…
> 
> — 
> Damjan
> 
>> On 05.11.2021., at 10:52, haiyan...@ilinkall.cn wrote:
>> 
>> 
>> 
>> Dear vpp group:
>> 
>>I changed dpdk-input node to avx512 as this message said 
>> "vpp-dev@lists.fd.io | A question about enable AVX512 instruction in VPP", 
>> but others still uses avx2 as below
>> 
>> so how could i enable it for all the plugins and all the drivers? Thanks for 
>> any reply


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#20445): https://lists.fd.io/g/vpp-dev/message/20445
Mute This Topic: https://lists.fd.io/mt/86836650/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] how to enable avx512 for all plugins and drivers?

2021-11-05 Thread Damjan Marion via lists.fd.io

Based on your output it looks like you are using a quite old version of VPP.
In recent versions of VPP, AVX-512 instructions are used automatically if 
the CPU supports them…
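
Conceptually, at startup the dispatcher does something like this (a sketch, not 
the actual multiarch machinery; the my_node_fn* variants are hypothetical):

  if (clib_cpu_supports_avx512f ())
    node->function = my_node_fn_avx512;
  else if (clib_cpu_supports_avx2 ())
    node->function = my_node_fn_avx2;
  else
    node->function = my_node_fn;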

— 
Damjan

> On 05.11.2021., at 10:52, haiyan...@ilinkall.cn wrote:
> 
> 
> 
> Dear vpp group:
> 
>I changed dpdk-input node to avx512 as this message said 
> "vpp-dev@lists.fd.io | A question about enable AVX512 instruction in VPP", 
> but others still uses avx2 as below
> 
> so how could i enable it for all the plugins and all the drivers? Thanks for 
> any reply
> 
> 
> 
> 
> 

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#20436): https://lists.fd.io/g/vpp-dev/message/20436
Mute This Topic: https://lists.fd.io/mt/86836650/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] a free_bitmap may expand while pool_put

2021-11-04 Thread Damjan Marion via lists.fd.io

Dear Stanislav,

It doesn’t look like a thread-safe solution to me.

i.e. imagine 2 threads calling pool_put_will_expand() at the same time when there 
is just one free slot. Both will get a negative answer, but the 2nd put operation 
will actually expand.
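
The losing interleaving looks roughly like this (treating pool_put_will_expand() 
as a check-then-act helper):

  /* one free slot left in pool p */
  thread A: pool_put_will_expand (p)  -> false
  thread B: pool_put_will_expand (p)  -> false
  thread A: pool_put (p, eA)          /* consumes the last free slot */
  thread B: pool_put (p, eB)          /* expands free_bitmap after all */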

— 
Damjan

> On 04.11.2021., at 18:24, Stanislav Zaikin  wrote:
> 
> 
> Hello folks,
> 
> In a multi-threaded environment (in my case I have 2 workers) I observed a 
> crash, and thanks to Neale, it turned out that free_bitmap may expand while 
> doing pool_put.
> Let's say one thread is doing pool_put, while another thread is calling 
> "pool_elt_at_index". I observed different addresses before and after checking 
> "ASSERT (! pool_is_free (p, _e))" in that macro.
> 
> I prepared a patch [0], but it's kind of ugly. We don't have asserts in 
> release mode, so why should we care about it?
> 
> On the other hand, 2 different threads can do 2 pool_puts simultaneously and 
> we can lose one free element in the pool (and also additionally allocated 
> bitmap).
> 
> For me, it's a place where it would be nice to have an mt-safe vec. What do 
> you think?
> 
> [0] - https://gerrit.fd.io/r/c/vpp/+/34332
> 
> -- 
> Best regards
> Stanislav Zaikin
> 
> 
> 

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#20425): https://lists.fd.io/g/vpp-dev/message/20425
Mute This Topic: https://lists.fd.io/mt/86821639/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] Bihash is considered thread-safe but probably shouldn't

2021-11-04 Thread Damjan Marion via lists.fd.io

Dear Nick,

It will be great if you can support your proposal with running code so we can 
understand exactly what it means.

— 
Damjan

> On 04.11.2021., at 19:24, Nick Zavaritsky  wrote:
> 
>  Hi, thanks for an insightful discussion!
> 
> I do understand that high performance is one of the most important goals of 
> vpp, therefore certain solutions might not fly. From my POV, the version 
> counter would be an improvement. It definitely decreases the probability of 
> triggering the bug.
> 
> Concerning isolcpus, currently this is presented as an optimisation, not a 
> prerequisite. Without isolcpus, a thread could get preempted for arbitrarily 
> long. Meaning that no matter how many bits we allocate for the version field, 
> occasionally they won’t be enough.
> 
> I’d love to have something that’s robust no matter how the threads are 
> scheduled. Would it be possible to use vpp benchmarking lab to evaluate the 
> performance impact of the proposed solutions?
> 
> Finally, I'd like to rehash the reader lock proposal. The idea was that we 
> don’t introduce any atomic operations in the reader path. A reader 
> *publishes* the bucket number it is about to examine in int 
> rlock[MAX_THREADS] array. Every thread uses a distinct cell in rlock 
> (determined by the thread id), therefore it could be a regular write followed 
> by a barrier. Eliminate false sharing with padding.
> 
> Writer locks a bucket as currently implemented (CAS) and then waits until the 
> bucket number disappears from rlock[].
> 
> Reader publishes the bucket number and then checks if the bucket is locked 
> (regular write, barrier, regular read). Good to go if  not locked, otherwise 
> remove the bucket number from rlock, wait for the lock to get released, 
> restart.
> 
> The proposal doesn’t introduce any new atomic operations. There still might 
> be a slowdown due to cache line ping-pong in the rlock array. In the worst 
> case, it costs us 1 extra cache miss for the reader. Could be coalesced with 
> the bucket prefetch, making it essentially free (few if any bihash users 
> prefetch buckets).
> 
> Best,
> 
> Nick
> 
> 
>>> On 3. Nov 2021, at 21:29, Florin Coras via lists.fd.io 
>>>  wrote:
>>> 
>>> 
>>> Agreed it’s unlikely so maybe just use the 2 bits left for the epoch 
>>> counter as a middle ground? The new approach should be better either way :-)
>>> 
>>> Florin
>>> 
>>> 
>>> On Nov 3, 2021, at 11:55 AM, Damjan Marion  wrote:
>>> 
>>> What about the following, we shift offset by 6, as all buckets are aligned 
>>> to 64, anyway,  and that gives us 6 more bits so we can have 8 bit epoch 
>>> counter…. ?
>>> 
>>> — 
>>> Damjan
>>> 
>>>> On 03.11.2021., at 19:45, Damjan Marion  wrote:
>>>> 
>>>> 
>>>> 
>>>> yes, i am aware of that, it is extremely unlikely and the only way i can see 
>>>> this fixed is introducing an epoch on the bucket level but we dont have 
>>>> enough space there…. 
>>>> 
>>>> — 
>>>> Damjan
>>>> 
>>>>> On 03.11.2021., at 19:16, Florin Coras  wrote:
>>>>> 
>>>>> Hi Damjan, 
>>>>> 
>>>>> Definitely like the scheme but the change bit might not be enough, unless 
>>>>> I’m misunderstanding. For instance, two consecutive updates to a bucket 
>>>>> before reader grabs b1 will hide the change. 
>>>>> 
>>>>> Florin
>>>>> 
>>>>>> On Nov 3, 2021, at 9:36 AM, Damjan Marion via lists.fd.io 
>>>>>>  wrote:
>>>>>> 
>>>>>> 
>>>>>> Agree with Dave on atomic ops being bad on the reader side.
>>>>>> 
>>>>>> What about following schema:
>>>>>> 
>>>>>> As bucket is just u64 value on the reader side we grab bucket before 
>>>>>> (b0) and after (b1) search operation.
>>>>>> 
>>>>>> If search finds entry, we simply do 2 checks:
>>>>>> - that b0 is equal to b1
>>>>>> - that lock bit is not set in both of them
>>>>>> If check fails, we simply retry.
>>>>>> 
>>>>>> On the writer side, we have add, remove and replace operations.
>>>>>> First 2 alter refcnt which is part of bucket.
>>>>>> To deal with replace case we introduce another bit (change bit) which is 
>>>>>> flipped every time data is changed

Re: [vpp-dev] Bihash is considered thread-safe but probably shouldn't

2021-11-03 Thread Damjan Marion via lists.fd.io
What about the following: we shift the offset right by 6, as all buckets are 
aligned to 64 anyway, and that gives us 6 more bits, so we can have an 8-bit 
epoch counter…?
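
Something like this (an illustrative layout; field widths are a sketch, not the 
actual clib_bihash bucket fields):

typedef union
{
  struct
  {
    u64 offset : 31;     /* stored as offset >> 6, i.e. in 64 B units */
    u64 epoch : 8;       /* bumped on every change to the bucket */
    u64 log2_pages : 8;
    u64 refcnt : 16;
    u64 lock : 1;
  };
  u64 as_u64;
} bucket_sketch_t;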

— 
Damjan

> On 03.11.2021., at 19:45, Damjan Marion  wrote:
> 
> 
> 
> yes, i am aware of that, it is extremely unlikely and the only way i can see 
> this fixed is introducing an epoch on the bucket level but we dont have enough 
> space there…. 
> 
> — 
> Damjan
> 
>>> On 03.11.2021., at 19:16, Florin Coras  wrote:
>>> 
>> Hi Damjan, 
>> 
>> Definitely like the scheme but the change bit might not be enough, unless 
>> I’m misunderstanding. For instance, two consecutive updates to a bucket 
>> before reader grabs b1 will hide the change. 
>> 
>> Florin
>> 
>>> On Nov 3, 2021, at 9:36 AM, Damjan Marion via lists.fd.io 
>>>  wrote:
>>> 
>>> 
>>> Agree with Dave on atomic ops being bad on the reader side.
>>> 
>>> What about following schema:
>>> 
>>> As bucket is just u64 value on the reader side we grab bucket before (b0) 
>>> and after (b1) search operation.
>>> 
>>> If search finds entry, we simply do 2 checks:
>>> - that b0 is equal to b1
>>> - that lock bit is not set in both of them
>>> If check fails, we simply retry.
>>> 
>>> On the writer side, we have add, remove and replace operations.
>>> First 2 alter refcnt which is part of bucket.
>>> To deal with replace case we introduce another bit (change bit) which is 
>>> flipped every time data is changed in the bucket.
>>> 
>>> Here are possible scenarios:
>>> 
>>> - reader grabs b0 before lock and b1 after unlock
>>>- add, del - refcnt and change bit will be different between b0 and b1 
>>> causing retry
>>>- replace - change bit will be different between b0 and b1 causing retry
>>> 
>>> - reader grabs b0 after lock and/or b1 before unlock
>>>- lock bit will be set causing retry  
>>> 
>>> Of course, this to work properly we need to ensure proper memory ordering 
>>> (i.e. avoid bucket change to be visible to remote thread before kvp change).
>>> 
>>> I crafted WIP patch to present my idea:
>>> 
>>> https://gerrit.fd.io/r/c/vpp/+/34326
>>> 
>>> In this patch I got rid of all store barriers and replaced them with something 
>>> more lightweight:
>>> 
>>> __atomic_store_n (ptr, val, __ATOMIC_RELEASE);
>>> 
>>> On platforms with strong memory ordering (like x86_64) this will result 
>>> in just normal stores (but the compiler will know that it should not reorder 
>>> them).
>>> On platforms with weak memory ordering (like aarch64) this will result in a 
>>> special store instruction, but that one is still cheaper than a full memory 
>>> barrier.
>>> 
>>> Thoughts? Comments?
>>> 
>>> Thanks,
>>> 
>>> — 
>>> Damjan
>>> 
>>> 
>>> 
>>>> On 02.11.2021., at 12:14, Dave Barach  wrote:
>>>> 
>>>> Dear Nick,
>>>> 
>>>> As the code comment suggests, we tiptoe right up to the line to extract 
>>>> performance. Have you tried e.g. ISOLCPUS, thread priority, or some other 
>>>> expedients to make the required assumptions true?
>>>> 
>>>> It’s easy enough to change the code in various ways so this use-case 
>>>> cannot backfire. High on the list: always make a working copy of the 
>>>> bucket, vs. update in place. Won’t help write performance, but it’s likely 
>>>> to make the pain go away.
>>>> 
>>>> Bucket-level reader-locks would involve adding Avogadro’s number of atomic 
>>>> ops to the predominant case. I’m pretty sure that’s a non-starter.
>>>> 
>>>> FWIW... Dave
>>>> 
>>>> 
>>>> From: vpp-dev@lists.fd.io  On Behalf Of Nick 
>>>> Zavaritsky
>>>> Sent: Monday, November 1, 2021 12:12 PM
>>>> To: vpp-dev@lists.fd.io
>>>> Subject: [vpp-dev] Bihash is considered thread-safe but probably shouldn't
>>>> 
>>>> Hello bihash experts!
>>>> 
>>>> There's an old thread claiming that bihash lookup can produce a value=-1 
>>>> under intense add/delete concurrent activity: 
>>>> https://lists.fd.io/g/vpp-dev/message/15606
>>>> 
>>>> We had a seemingly related crash recently when a lookup in 
>>>>

Re: [vpp-dev] Bihash is considered thread-safe but probably shouldn't

2021-11-03 Thread Damjan Marion via lists.fd.io

Yes, I am aware of that; it is extremely unlikely, and the only way I can see this 
fixed is introducing an epoch on the bucket level, but we don't have enough space 
there…. 

— 
Damjan

> On 03.11.2021., at 19:16, Florin Coras  wrote:
> 
> Hi Damjan, 
> 
> Definitely like the scheme but the change bit might not be enough, unless I’m 
> misunderstanding. For instance, two consecutive updates to a bucket before 
> reader grabs b1 will hide the change. 
> 
> Florin
> 
>> On Nov 3, 2021, at 9:36 AM, Damjan Marion via lists.fd.io 
>>  wrote:
>> 
>> 
>> Agree with Dave on atomic ops being bad on the reader side.
>> 
>> What about following schema:
>> 
>> As bucket is just u64 value on the reader side we grab bucket before (b0) 
>> and after (b1) search operation.
>> 
>> If search finds entry, we simply do 2 checks:
>> - that b0 is equal to b1
>> - that lock bit is not set in both of them
>> If check fails, we simply retry.
>> 
>> On the writer side, we have add, remove and replace operations.
>> First 2 alter refcnt which is part of bucket.
>> To deal with replace case we introduce another bit (change bit) which is 
>> flipped every time data is changed in the bucket.
>> 
>> Here are possible scenarios:
>> 
>> - reader grabs b0 before lock and b1 after unlock
>>- add, del - refcnt and change bit will be different between b0 and b1 
>> causing retry
>>- replace - change bit will be different between b0 and b1 causing retry
>> 
>> - reader grabs b0 after lock and/or b1 before unlock
>>- lock bit will be set causing retry  
>> 
>> Of course, this to work properly we need to ensure proper memory ordering 
>> (i.e. avoid bucket change to be visible to remote thread before kvp change).
>> 
>> I crafted WIP patch to present my idea:
>> 
>> https://gerrit.fd.io/r/c/vpp/+/34326
>> 
>> In this patch I got rid of all store barriers and replaced them with something 
>> more lightweight:
>> 
>> __atomic_store_n (ptr, val, __ATOMIC_RELEASE);
>> 
>> On platforms with strong memory ordering (like x86_64) this will result in 
>> just normal stores (but the compiler will know that it should not reorder them).
>> On platforms with weak memory ordering (like aarch64) this will result in a 
>> special store instruction, but that one is still cheaper than a full memory 
>> barrier.
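>> 
>> The reader side would then be roughly (a sketch, not the actual patch; 
>> search_kvp () stands in for the bucket search, BUCKET_LOCK_BIT for the 
>> lock bit mask):
>> 
>>   u64 b0, b1;
>>   do
>>     {
>>       b0 = __atomic_load_n (&b->as_u64, __ATOMIC_ACQUIRE);
>>       rv = search_kvp (b, key, &result);
>>       b1 = __atomic_load_n (&b->as_u64, __ATOMIC_ACQUIRE);
>>     }
>>   while (b0 != b1 || (b0 & BUCKET_LOCK_BIT)); /* changed or locked: retry */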
>> 
>> Thoughts? Comments?
>> 
>> Thanks,
>> 
>> — 
>> Damjan
>> 
>> 
>> 
>>> On 02.11.2021., at 12:14, Dave Barach  wrote:
>>> 
>>> Dear Nick,
>>> 
>>> As the code comment suggests, we tiptoe right up to the line to extract 
>>> performance. Have you tried e.g. ISOLCPUS, thread priority, or some other 
>>> expedients to make the required assumptions true?
>>> 
>>> It’s easy enough to change the code in various ways so this use-case cannot 
>>> backfire. High on the list: always make a working copy of the bucket, vs. 
>>> update in place. Won’t help write performance, but it’s likely to make the 
>>> pain go away.
>>> 
>>> Bucket-level reader-locks would involve adding Avogadro’s number of atomic 
>>> ops to the predominant case. I’m pretty sure that’s a non-starter.
>>> 
>>> FWIW... Dave
>>> 
>>> 
>>> From: vpp-dev@lists.fd.io  On Behalf Of Nick Zavaritsky
>>> Sent: Monday, November 1, 2021 12:12 PM
>>> To: vpp-dev@lists.fd.io
>>> Subject: [vpp-dev] Bihash is considered thread-safe but probably shouldn't
>>> 
>>> Hello bihash experts!
>>> 
>>> There's an old thread claiming that bihash lookup can produce a value=-1 
>>> under intense add/delete concurrent activity: 
>>> https://lists.fd.io/g/vpp-dev/message/15606
>>> 
>>> We had a seemingly related crash recently when a lookup in 
>>> snat_main.flow_hash yielded a value=-1 which was subsequently used as a 
>>> destination thread index to offload to. This crash prompted me to study 
>>> bihash more closely.
>>> 
>>> The rest of the message is structured as follows:
>>>  1. Presenting reasons why I believe that bihash is not thread-safe.
>>>  2. Proposing a fix.
>>> 
>>> 1 Bihash is probably not thread-safe
>>> 
>>> The number of buckets in a hash table never changes. Every bucket has a 
>>> lock bit. Updates happen via clib_bihash_add_del_inline_with_hash. The 
>>> function grabs the bucket lock early on and performs update while holding 
>>>

Re: [vpp-dev] Please include Fixes: tag for regression fix

2021-11-02 Thread Damjan Marion via lists.fd.io

We should probably extend checkstyle to reject a patch if there is “Type: fix” 
and no “Fixes: *”.
We can have a special case, “Fixes: unknown”, to willingly bypass this check….
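
A commit message footer would then look something like this (the SHA here is 
just a placeholder):

Type: fix
Fixes: 1a2b3c4d5e6f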

— 
Damjan



> On 02.11.2021., at 18:17, steven luong via lists.fd.io 
>  wrote:
> 
> Folks,
>  
> In case you don’t already know, there is a tag called Fixes in the commit 
> message which allows one to specify if the current patch fixes a regression. 
> See an example usage in https://gerrit.fd.io/r/c/vpp/+/34212
>  
> When you commit a patch which fixes a known regression, please make use of 
> the Fixes tag to benefit every consumer.
>  
> Steven
>  
> 
> 


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#20407): https://lists.fd.io/g/vpp-dev/message/20407
Mute This Topic: https://lists.fd.io/mt/86771694/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] About Risc-V Porting

2021-10-31 Thread Damjan Marion via lists.fd.io

Here it is:

https://gerrit.fd.io/r/c/vpp/+/34298 

It is early but works for me.

— 
Damjan


> On 25.10.2021., at 18:36, Hrishikesh Karanjikar 
>  wrote:
> 
> Hi,
> 
> Yes. SiFive HiFive boards are available. But they do not support Vector 
> Extension yet.
> Also Qemu is ready for RiscV. Ubuntu images are available for RIscV.
> 
> Thanks,
> Hrishikesh
> 
> 
> On Mon, Oct 25, 2021 at 9:56 PM Damjan Marion  > wrote:
> 
> 
> 
> > On 14.10.2021., at 15:43, Hrishikesh Karanjikar 
> > mailto:hrishikesh.karanji...@gmail.com>> 
> > wrote:
> > 
> > 
> > Hi,
> > 
> > Is VPP ported for the Risc-V processor?
> > Is there any project going for the same?
> > 
> 
> I was looking at that a year ago but I was not able to find any suitable dev 
> board.
> 
> Is there anything new on the market?
> 
> — 
> Damjan
> 
> 
> 
> -- 
> 
> Regards,
> Hrishikesh Karanjikar
> 
> 
> 


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#20400): https://lists.fd.io/g/vpp-dev/message/20400
Mute This Topic: https://lists.fd.io/mt/86312689/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] Linking DPDK libs to plugins

2021-10-25 Thread Damjan Marion via lists.fd.io

If it is multi-producer/multi-consumer, from my experience it is still much more 
costly than using a simple per-thread cache scheme.

VPP buffer pools use that architecture. Each thread maintains its own cache,
and the lock is taken only when a bulk transfer is needed from the cache to the 
global freelist or back.

With this approach there is also better cache locality, as most of the time the 
allocated chunk is the last one freed on the same thread, as the sketch below 
illustrates.
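
Minimal sketch of the scheme (illustrative names; global_pool_t, lock ()/unlock () 
and global_take ()/global_put () are hypothetical stand-ins, not the actual vlib 
buffer pool code):

#define CACHE_SZ 512
#define BULK 256

typedef struct { void *slot[CACHE_SZ]; u32 n; } cache_t;

void *
alloc_one (cache_t *c, global_pool_t *g)
{
  if (c->n == 0)
    {
      lock (&g->lock);                  /* rare: bulk refill */
      c->n = global_take (g, c->slot, BULK);
      unlock (&g->lock);
      if (c->n == 0)
        return 0;
    }
  return c->slot[--c->n];               /* hot path: no atomics, LIFO */
}

void
free_one (cache_t *c, global_pool_t *g, void *e)
{
  if (c->n == CACHE_SZ)
    {
      lock (&g->lock);                  /* rare: bulk flush */
      global_put (g, &c->slot[CACHE_SZ - BULK], BULK);
      unlock (&g->lock);
      c->n -= BULK;
    }
  c->slot[c->n++] = e;                  /* hot path: no atomics */
}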

— 
Damjan



> On 25.10.2021., at 21:23, Honnappa Nagarahalli  
> wrote:
> 
> There are a few additional modes added to the ring library (a year back) in 
> DPDK that improve the performance when there are threads on control plane and 
> data plane doing enqueue/dequeue from the same ring. Are you talking about 
> these or just the ring in general?
> 
> Thanks,
> Honnappa
> 
>> -Original Message-
>> From: vpp-dev@lists.fd.io  On Behalf Of bjeremy32 via
>> lists.fd.io
>> Sent: Monday, October 25, 2021 2:18 PM
>> To: 'Damjan Marion' 
>> Cc: vpp-dev@lists.fd.io
>> Subject: Re: [vpp-dev] Linking DPDK libs to plugins
>> 
>> I believe it was just ring that they cared about.
>> 
>> -Original Message-
>> From: Damjan Marion 
>> Sent: Monday, October 25, 2021 11:08 AM
>> To: bjerem...@gmail.com
>> Cc: vpp-dev@lists.fd.io
>> Subject: Re: [vpp-dev] Linking DPDK libs to plugins
>> 
>> 
>> Ok, I’m afraid that to implement this you will need to introduce a lot of 
>> mess.
>> In the end it will probably be easier to implement that functionality natively.
>> 
>> Which exact implementation of the dpdk mempool you are looking to use
>> (ring, stack, bucket, ...)?
>> 
>> —
>> Damjan
>> 
>> 
>> 
>>> On 25.10.2021., at 17:39, 
>>  wrote:
>>> 
>>> Hi Damjan,
>>> 
>>> Thanks for the reply
>>> 
>>> Here are the  details:
>>> 
>>> 1. We want to use only the rte_mempool infrastructure for lockless global
>> memory pools. We will not be using any mbuf infrastructure from dpdk
>>> 2. We want to use this infra across our multiple plugins
>>> 3. We want to be able to include rte_mempool data structures from our
>> multiple header files (.h files )
>>> 4. We want to be able to make calls to rte_mempool apis from our source
>> code ( .c files )
>>> 
>>> -Original Message-
>>> From: Damjan Marion 
>>> Sent: Monday, October 25, 2021 5:22 AM
>>> To: bjerem...@gmail.com
>>> Cc: vpp-dev@lists.fd.io
>>> Subject: Re: [vpp-dev] Linking DPDK libs to plugins
>>> 
>>> 
>>> 
>>> 
 On 25.10.2021., at 01:13, bjerem...@gmail.com wrote:
 
 Greetings,
 
 Let me preface this by saying that I really do not know much about the
>> CMake utility. But I am trying to see if there is a way to make the DPDK libs
>> accessible to other plugins (aside from the dpdk plugin) that are in their 
>> own
>> project/subdirectory similar. I am working with v20.05 currently (although we
>> are upgrading to 21.06 if that make a difference).
 
 Initially it was suggested to me that I could just add a couple lines
 to my CMakeLists to link the dpdk_plugin.so to my own plugin.. but I
 have not been able to get this to work.. It never seems to recognize
 the path to the .so, even if I give the absolute path
 
 set(DPDK_PLUGIN_LINK_FLAGS "${DPDK_PLUGIN_LINK_FLAGS} -L > vpp
 plugins> -ldpdk_plugin.so")
 
 add_vpp_plugin(my_plugin
 ….
 LINK_FLAGS
 “${ DPDK_PLUGIN_LINK_FLAGS }”
 
 Another approach suggested was to maybe use dlsym to dynamically load
>> symbols… Anyway, I was thinking that someone has to have had done this
>> before, or maybe have more of a clue as to how to do this then I currently 
>> do.
 
>>> 
>>> 
>>> 
>>> Please note that VPP is not a DPDK application; DPDK is just an optional
>>> device driver layer for us.
>>> 
>>> Even if you manage to get your plugin linked against DPDK libs, there is no
>>> guarantee that you will be able to use all dpdk data structures. The most
>>> obvious example: the rte_mbuf structure for a packet buffer may not be
>>> populated for you.
>>> 
>>> Also, use of DPDK comes with a performance cost: we need to copy buffer
>>> metadata back and forth on both the RX and TX side.
>>> 
>>> Which specific DPDK library would you like to use? We may have an
>>> alternative proposal….
>>> 
>>> —
>>> Damjan
>>> 
>>> 
>> 
> 
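For illustration, a minimal sketch of the per-lcore cache behaviour described at
the top of this message, using only public rte_mempool APIs (pool name and sizes
are made up; error handling omitted):

  #include <rte_mempool.h>

  /* a pool with a non-zero per-lcore cache; the cache is LIFO, so the next
   * chunk allocated on a thread is the last one freed on that thread */
  struct rte_mempool *mp =
    rte_mempool_create ("example-pool", 8191, 2048, 256 /* cache */, 0,
                        NULL, NULL, NULL, NULL, SOCKET_ID_ANY, 0);
  void *obj;
  rte_mempool_get (mp, &obj);   /* served from this lcore's cache */
  rte_mempool_put (mp, obj);    /* returned to the head of the cache */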





Re: [vpp-dev] Linking DPDK libs to plugins

2021-10-25 Thread Damjan Marion via lists.fd.io



> On 25.10.2021., at 19:02, Mrityunjay Kumar  wrote:
> 
> Damjan Hi, 
> 
> I have been a user of DPDK since August 2011, and of VPP, I guess, since Jan 2017,

VPP was just open-sourced in 2017. It started back in 2004:

https://patents.google.com/patent/US7961636B1/en


> 
> Not sure about contributors, but we should think about taking dpdk as the 
> mainframe in VPP, 
> 
> For this, we need to deprecate vlib_buffer_t and marry the entire VPP with 
> rte_mbuf, 

Why is rte_mbuf better than vlib_buffer_t ?

> 
> 
> This would avoid the extra overhead of translating between the dpdk and VPP metadata. 

Do we have all fields needed by vpp features in rte_mbuf?

> 
> 
> I am sure we can get a significant performance gain as well. 
> 
> What is your input? 

I doubt that will happen, for many reasons; one of them is that it would require 
all features to be rewritten.

> 
> 
> 
> 
> On Mon, 25 Oct, 2021, 9:37 pm Damjan Marion via lists.fd.io, 
>  wrote:
> 
> Ok, I’m afraid that to implement this you will need to introduce a lot of mess.
> In the end it will probably be easier to implement that functionality natively.
> 
> Which exact implementation of the dpdk mempool you are looking to use (ring, 
> stack, bucket, ...)?
> 
> — 
> Damjan
> 
> 
> 
> > On 25.10.2021., at 17:39,   wrote:
> > 
> > Hi Damjan,
> > 
> > Thanks for the reply
> > 
> > Here are the  details:
> > 
> > 1. We want to use only the rte_mempool infrastructure for lockless global 
> > memory pools. We will not be using any mbuf infrastructure from dpdk
> > 2. We want to use this infra across our multiple plugins
> > 3. We want to be able to include rte_mempool data structures from our 
> > multiple header files (.h files )
> > 4. We want to be able to make calls to rte_mempool apis from our source 
> > code ( .c files )
> > 
> > -Original Message-
> > From: Damjan Marion  
> > Sent: Monday, October 25, 2021 5:22 AM
> > To: bjerem...@gmail.com
> > Cc: vpp-dev@lists.fd.io
> > Subject: Re: [vpp-dev] Linking DPDK libs to plugins
> > 
> > 
> > 
> > 
> >> On 25.10.2021., at 01:13, bjerem...@gmail.com wrote:
> >> 
> >> Greetings,
> >> 
> >> Let me preface this by saying that I really do not know much about the 
> >> CMake utility. But I am trying to see if there is a way to make the DPDK 
> >> libs accessible to other plugins (aside from the dpdk plugin) that are in 
> >> their own project/subdirectory, similar to the dpdk plugin itself. I am 
> >> working with v20.05 currently (although we are upgrading to 21.06 if that 
> >> makes a difference).
> >> 
> >> Initially it was suggested to me that I could just add a couple lines 
> >> to my CMakeLists to link the dpdk_plugin.so to my own plugin.. but I 
> >> have not been able to get this to work.. It never seems to recognize 
> >> the path to the .so, even if I give the absolute path
> >> 
> >> set(DPDK_PLUGIN_LINK_FLAGS "${DPDK_PLUGIN_LINK_FLAGS} -L  >> plugins> -ldpdk_plugin.so")
> >> 
> >> add_vpp_plugin(my_plugin
> >> ….
> >>  LINK_FLAGS
> >>  “${ DPDK_PLUGIN_LINK_FLAGS }”
> >> 
> >> Another approach suggested was to maybe use dlsym to dynamically load 
> >> symbols… Anyway, I was thinking that someone has to have done this 
> >> before, or maybe have more of a clue as to how to do this than I 
> >> currently do.
> >> 
> > 
> > 
> > 
> > Please note that VPP is not a DPDK application; DPDK is just an optional 
> > device driver layer for us.
> > 
> > Even if you manage to get your plugin linked against DPDK libs, there is no 
> > guarantee that you will be able to use all dpdk data structures. The most 
> > obvious example: the rte_mbuf structure for a packet buffer may not be 
> > populated for you.
> > 
> > Also, use of DPDK comes with a performance cost: we need to copy buffer 
> > metadata back and forth on both the RX and TX side.
> > 
> > Which specific DPDK library would you like to use? We may have an 
> > alternative proposal….
> > 
> > —
> > Damjan
> > 
> > 
> 
> 
> 
> 
> 
> 
> 





Re: [vpp-dev] About Risc-V Porting

2021-10-25 Thread Damjan Marion via lists.fd.io



> On 14.10.2021., at 15:43, Hrishikesh Karanjikar 
>  wrote:
> 
> 
> Hi,
> 
> Is VPP ported to the RISC-V processor?
> Is there any ongoing project for the same?
> 

I was looking at that a year ago but I was not able to find any suitable dev 
board.

Is there anything new on the market?

— 
Damjan





Re: [vpp-dev] Linking DPDK libs to plugins

2021-10-25 Thread Damjan Marion via lists.fd.io

Ok, I’m afraid that to implement this you will need to introduce a lot of mess.
In the end it will probably be easier to implement that functionality natively.

Which exact implementation of the dpdk mempool you are looking to use (ring, 
stack, bucket, ...)?

— 
Damjan



> On 25.10.2021., at 17:39,   wrote:
> 
> Hi Damjan,
> 
> Thanks for the reply
> 
> Here are the  details:
> 
> 1. We want to use only the rte_mempool infrastructure for lockless global 
> memory pools. We will not be using any mbuf infrastructure from dpdk
> 2. We want to use this infra across our multiple plugins
> 3. We want to be able to include rte_mempool data structures from our 
> multiple header files (.h files )
> 4. We want to be able to make calls to rte_mempool apis from our source code 
> ( .c files )
> 
> -Original Message-
> From: Damjan Marion  
> Sent: Monday, October 25, 2021 5:22 AM
> To: bjerem...@gmail.com
> Cc: vpp-dev@lists.fd.io
> Subject: Re: [vpp-dev] Linking DPDK libs to plugins
> 
> 
> 
> 
>> On 25.10.2021., at 01:13, bjerem...@gmail.com wrote:
>> 
>> Greetings,
>> 
>> Let me preface this by saying that I really do not know much about the CMake 
>> utility. But I am trying to see if there is a way to make the DPDK libs 
>> accessible to other plugins (aside from the dpdk plugin) that are in their 
>> own project/subdirectory, similar to the dpdk plugin itself. I am working with 
>> v20.05 currently (although we are upgrading to 21.06 if that makes a difference).
>> 
>> Initially it was suggested to me that I could just add a couple lines 
>> to my CMakeLists to link the dpdk_plugin.so to my own plugin.. but I 
>> have not been able to get this to work.. It never seems to recognize 
>> the path to the .so, even if I give the absolute path
>> 
>> set(DPDK_PLUGIN_LINK_FLAGS "${DPDK_PLUGIN_LINK_FLAGS} -L > plugins> -ldpdk_plugin.so")
>> 
>> add_vpp_plugin(my_plugin
>> ….
>>  LINK_FLAGS
>>  “${ DPDK_PLUGIN_LINK_FLAGS }”
>> 
>> Another approach suggested was to maybe use dlsym to dynamically load 
>> symbols… Anyway, I was thinking that someone has to have done this 
>> before, or maybe have more of a clue as to how to do this than I currently 
>> do.
>> 
> 
> 
> 
> Please note that VPP is not a DPDK application; DPDK is just an optional 
> device driver layer for us.
> 
> Even if you manage to get your plugin linked against DPDK libs, there is no 
> guarantee that you will be able to use all dpdk data structures. The most 
> obvious example: the rte_mbuf structure for a packet buffer may not be 
> populated for you.
> 
> Also, use of DPDK comes with a performance cost: we need to copy buffer 
> metadata back and forth on both the RX and TX side.
> 
> Which specific DPDK library would you like to use? We may have an 
> alternative proposal….
> 
> —
> Damjan
> 
> 
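As a concrete illustration of the choice being asked about here, DPDK selects
the mempool implementation by ops name on an empty pool; a sketch (public API,
error handling omitted):

  #include <rte_mempool.h>

  struct rte_mempool *mp =
    rte_mempool_create_empty ("example-pool", 8191, 2048, 256, 0,
                              SOCKET_ID_ANY, 0);
  /* pick the backend: "ring_mp_mc" (default), "stack", "bucket", ... ;
   * must be done before the pool is populated */
  rte_mempool_set_ops_byname (mp, "stack", NULL);
  rte_mempool_populate_default (mp);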





Re: [vpp-dev] Linking DPDK libs to plugins

2021-10-25 Thread Damjan Marion via lists.fd.io



> On 25.10.2021., at 01:13, bjerem...@gmail.com wrote:
> 
> Greetings,
>  
> Let me preface this by saying that I really do not know much about the CMake 
> utility. But I am trying to see if there is a way to make the DPDK libs 
> accessible to other plugins (aside from the dpdk plugin) that are in their 
> own project/subdirectory, similar to the dpdk plugin itself. I am working with 
> v20.05 currently (although we are upgrading to 21.06 if that makes a difference).
>  
> Initially it was suggested to me that I could just add a couple lines to my 
> CMakeLists to link the dpdk_plugin.so to my own plugin.. but I have not been 
> able to get this to work.. It never seems to recognize the path to the .so, 
> even if I give the absolute path
>  
> set(DPDK_PLUGIN_LINK_FLAGS "${DPDK_PLUGIN_LINK_FLAGS} -L  
> -ldpdk_plugin.so")
>  
> add_vpp_plugin(my_plugin
> ….
>   LINK_FLAGS
>   “${ DPDK_PLUGIN_LINK_FLAGS }”
>  
> Another approach suggested was to maybe use dlsym to dynamically load 
> symbols… Anyway, I was thinking that someone has to have done this 
> before, or maybe have more of a clue as to how to do this than I currently do.
>  



Please note that VPP is not a DPDK application; DPDK is just an optional device 
driver layer for us.

Even if you manage to get your plugin linked against DPDK libs, there is no 
guarantee that you will be able to use all dpdk data structures. The most 
obvious example: the rte_mbuf structure for a packet buffer may not be 
populated for you.

Also, use of DPDK comes with a performance cost: we need to copy buffer metadata 
back and forth on both the RX and TX side.

Which specific DPDK library would you like to use? We may have an alternative 
proposal….

— 
Damjan
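For what it is worth, a direction that usually works better than linking against
dpdk_plugin.so is linking the plugin against the installed DPDK libraries
themselves. A CMake sketch, not an officially supported pattern (DPDK_ROOT is an
assumed install prefix, and LINK_LIBRARIES is assumed to be accepted by
add_vpp_plugin in the tree in use):

  # CMakeLists.txt fragment (sketch)
  find_library(RTE_EAL_LIB rte_eal HINTS ${DPDK_ROOT}/lib)
  find_library(RTE_MEMPOOL_LIB rte_mempool HINTS ${DPDK_ROOT}/lib)

  add_vpp_plugin(my_plugin
    SOURCES my_plugin.c
    LINK_LIBRARIES ${RTE_EAL_LIB} ${RTE_MEMPOOL_LIB}
  )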





Re: [vpp-dev] Does memif zero-copy work ?

2021-10-24 Thread Damjan Marion via lists.fd.io


> 
> On 24.10.2021., at 18:08, Mrityunjay Kumar  wrote:
> 
> 
> Well, almost all VPP experts/users are familiar with dpdk. 

I don’t understand why somebody needs to be familiar with DPDK to use memif.
Actually, we see more and more people looking to use VPP without DPDK, so we have 
a significant number of native drivers which allow people to turn the DPDK plugin 
off and reduce the weight it comes with. One of them is memif.

> 
> But the dollar-price question: which one is the stable one, and less effort to 
> marry with VPP over shared memory? 

The VPP memif implementation is the reference implementation of the protocol. The 
DPDK implementation was just a port to enable people who have applications already 
built on top of the dpdk framework to connect to VPP. AFAIK we never did serious 
optimizations to that code. I have not checked recently whether somebody else did 
any improvements there…

Using a DPDK PMD in VPP is also a bad idea due to the high cost of translation 
between VPP and DPDK data structures. It is not only the memif pmd: VPP with any 
other DPDK PMD is significantly slower than VPP with a native driver. Also, use 
of DPDK PMDs in VPP lacks dynamic interface creation… Not to mention how nice it 
is to get rid of the DPDK EAL…

> 
> I suggest VPP users have both options open; let them decide which one is 
> better and more convenient for them. 

I disagree, as use of the memif DPDK PMD in VPP brings zero value and comes with 
a lot of weight, including a significant performance penalty, less flexibility and 
the high cost of mandatory use of the EAL.

— 
Damjan
> 
> 
> 
> 
>> On Sun, 24 Oct, 2021, 4:28 pm Damjan Marion,  wrote:
>> 
>> And what is the benefit of doing that?
>> 
>> — 
>> Damjan
>> 
>> 
>> 
>>> On 24.10.2021., at 11:24, Mrityunjay Kumar  wrote:
>>> 
>>> Well, I can dump my opinion regarding this: we can disable the memif plugin 
>>> and the same feature can be achieved using a dpdk EAL option. For details 
>>> please refer to the link below.
>>> 
>>> https://doc.dpdk.org/guides/nics/memif.html
>>> 
>>> This requires a small patch adding a new startup.conf section under dpdk { 
>>> --vdev=net_memif0, net_memif1, … }; to handle this we need to translate 
>>> this in the dpdk plugin to inject it into rte_eal_init.
>>> 
>>>  
>>> 
>>> 
>>>> On Sun, 24 Oct, 2021, 5:16 am Damjan Marion,  wrote:
>>>> What is your suggestion? Which part of the plugin is wrongly written?
>>>> 
>>>> — 
>>>> Damjan
>>>> 
>>>>>> On 24.10.2021., at 00:16, Mrityunjay Kumar  wrote:
>>>>>> 
>>>>> 
>>>>> Damjan, 
>>>>> 
>>>>> I think you are trying to explain that one copy always occurs in memif 
>>>>> communication.
>>>>> 
>>>>> 
>>>>> Currently the VPP memif plugin is wrongly/misleadingly written, and it is 
>>>>> also misguiding VPP users. 
>>>>> 
>>>>> When I looked at this code long back, it also misled me. 
>>>>> 
>>>>> 
>>>>> 
>>>>>> On Sat, 23 Oct, 2021, 11:52 pm Damjan Marion via lists.fd.io, 
>>>>>>  wrote:
>>>>>> 
>>>>>> Please note that “zero copy memif” doesn’t exist.
>>>>>> 
>>>>>> A long time ago I possibly wrongly/misleadingly added a VPP feature with 
>>>>>> the name “zero-copy slave”.
>>>>>> 
>>>>>> It is a vpp-internal feature which avoids the 2nd memcpy by exposing VPP 
>>>>>> (slave only) buffers directly to the master.
>>>>>> In such a scenario one memcpy still exists….
>>>>>> 
>>>>>> 
>>>>>> — 
>>>>>> Damjan
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> > On 22.10.2021., at 13:19, Satya Murthy  
>>>>>> > wrote:
>>>>>> > 
>>>>>> > Thanks MJ for the quick reply.
>>>>>> > Will try this and check.
>>>>>> > 
>>>>>> > -- 
>>>>>> > Thanks & Regards,
>>>>>> > Murthy 
>>>>>> > 
>>>>>> > 
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> 
>> 
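For reference, creating the native (non-DPDK) memif interface discussed above
takes a couple of CLI lines; a minimal sketch (addressing is an example):

  vpp# create interface memif id 0 master
  vpp# set interface state memif0/0 up
  vpp# set interface ip address memif0/0 192.168.1.1/24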




Re: [vpp-dev] Does memif zero-copy work ?

2021-10-24 Thread Damjan Marion via lists.fd.io

And what is the benefit of doing that?

— 
Damjan



> On 24.10.2021., at 11:24, Mrityunjay Kumar  wrote:
> 
> Well, I can dump my opinion regarding this: we can disable the memif plugin and 
> the same feature can be achieved using a dpdk EAL option. For details please 
> refer to the link below.
> 
> https://doc.dpdk.org/guides/nics/memif.html
> 
> This requires a small patch adding a new startup.conf section under dpdk { 
> --vdev=net_memif0, net_memif1, … }; to handle this we need to translate this 
> in the dpdk plugin to inject it into rte_eal_init.
> 
>  
> 
> 
> On Sun, 24 Oct, 2021, 5:16 am Damjan Marion, <dmar...@me.com> wrote:
> What is your suggestion? Which part of the plugin is wrongly written?
> 
> — 
> Damjan
> 
>> On 24.10.2021., at 00:16, Mrityunjay Kumar <kumarn...@gmail.com> wrote:
>> 
>> 
>> Damjan, 
>> 
>> I think you are trying to explain that one copy always occurs in memif 
>> communication.
>> 
>> 
>> Currently the VPP memif plugin is wrongly/misleadingly written, and it is 
>> also misguiding VPP users. 
>> 
>> When I looked at this code long back, it also misled me. 
>> 
>> 
>> 
>> On Sat, 23 Oct, 2021, 11:52 pm Damjan Marion via lists.fd.io wrote:
>> 
>> Please note that “zero copy memif” doesn’t exist.
>> 
>> A long time ago I possibly wrongly/misleadingly added a VPP feature with the 
>> name “zero-copy slave”.
>> 
>> It is a vpp-internal feature which avoids the 2nd memcpy by exposing VPP (slave 
>> only) buffers directly to the master.
>> In such a scenario one memcpy still exists….
>> 
>> 
>> — 
>> Damjan
>> 
>> 
>> 
>> > On 22.10.2021., at 13:19, Satya Murthy <satyamurthy1...@gmail.com> wrote:
>> > 
>> > Thanks MJ for the quick reply.
>> > Will try this and check.
>> > 
>> > -- 
>> > Thanks & Regards,
>> > Murthy 
>> > 
>> > 
>> 
>> 
>> 
>> 
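For completeness, the DPDK side described in the linked memif guide is driven via
a vdev argument; a sketch (parameter spellings vary by DPDK release; older
releases use role=master/slave instead of role=server/client, and the binary may
be called testpmd):

  dpdk-testpmd -l 0-1 --vdev=net_memif0,role=server,socket=/run/memif.sock -- -i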





Re: [vpp-dev] Does memif zero-copy work ?

2021-10-23 Thread Damjan Marion via lists.fd.io
What is your suggestion? Which part of the plugin is wrongly written?

— 
Damjan

> On 24.10.2021., at 00:16, Mrityunjay Kumar  wrote:
> 
> 
> Damjan, 
> 
> I think you are trying to explain that one copy always occurs in memif 
> communication.
> 
> 
> Currently the VPP memif plugin is wrongly/misleadingly written, and it is 
> also misguiding VPP users. 
> 
> When I looked at this code long back, it also misled me. 
> 
> 
> 
>> On Sat, 23 Oct, 2021, 11:52 pm Damjan Marion via lists.fd.io, 
>>  wrote:
>> 
>> Please note that “zero copy memif” doesn’t exist.
>> 
>> A long time ago I possibly wrongly/misleadingly added a VPP feature with the 
>> name “zero-copy slave”.
>> 
>> It is a vpp-internal feature which avoids the 2nd memcpy by exposing VPP (slave 
>> only) buffers directly to the master.
>> In such a scenario one memcpy still exists….
>> 
>> 
>> — 
>> Damjan
>> 
>> 
>> 
>> > On 22.10.2021., at 13:19, Satya Murthy  wrote:
>> > 
>> > Thanks MJ for the quick reply.
>> > Will try this and check.
>> > 
>> > -- 
>> > Thanks & Regards,
>> > Murthy 
>> > 
>> > 
>> 
>> 
>> 
>> 




Re: [vpp-dev] Does memif zero-copy work ?

2021-10-23 Thread Damjan Marion via lists.fd.io

Please note that “zero copy memif” doesn’t exist.

A long time ago I possibly wrongly/misleadingly added a VPP feature with the name 
“zero-copy slave”.

It is a vpp-internal feature which avoids the 2nd memcpy by exposing VPP (slave 
only) buffers directly to the master.
In such a scenario one memcpy still exists….
 

— 
Damjan



> On 22.10.2021., at 13:19, Satya Murthy  wrote:
> 
> Thanks MJ for the quick reply.
> Will try this and check.
> 
> -- 
> Thanks & Regards,
> Murthy 
> 
> 





Re: [vpp-dev] Are INPUT and PROCESS nodes considered for CPU util calculations?

2021-10-23 Thread Damjan Marion via lists.fd.io

Dear Satya,

If you are in polling mode, the CPU is always 100% utilised.
Not sure what you mean by “CPU utilisation” here...

— 
Damjan



> On 18.10.2021., at 16:16, Satya Murthy  wrote:
> 
> Hi VPP Experts,
> 
> We have an issue at hand, where we are seeing non-uniform CPU utilizations 
> showing up for workers from "show threads".
> We are doing a lot of work as part of a timer node, which periodically does 
> maintenance of flows.
> However, this maintenance activity, which runs as part of this INPUT node 
> (timer node), is not counted in the CPU utilization.
> 
> Basically, this timer node does not have any vector as its input. 
> So, is this load not considered in the worker CPU utilization ?
> 
> -- 
> Thanks & Regards,
> Murthy 
> 
> 
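A practical way to see where worker cycles actually go, independently of the
always-100% polling figure, is the per-node runtime statistics (standard CLI;
exact columns vary by version):

  vpp# clear runtime
  ... let traffic and timers run for a while ...
  vpp# show runtime

Input and process nodes appear there with their calls, vectors and clocks, which
is usually a better load indicator than "show threads".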





Re: [vpp-dev] AVF interface creation fails on VFs with configured VLAN with newer i40e drivers

2021-10-07 Thread Damjan Marion via lists.fd.io

ok, then just don’t do it until the problem is properly addressed in i40e.

—
Damjan



On 07.10.2021., at 14:03, Peter Mikus -X (pmikus - PANTHEON TECH SRO at Cisco) 
<pmi...@cisco.com> wrote:

vpp_device, a.k.a. functional per-patch testing in volume.

Peter Mikus
Engineer – Software
Cisco Systems Limited


From: Damjan Marion (damarion) <damar...@cisco.com>
Sent: Thursday, October 7, 2021 14:02
To: Peter Mikus -X (pmikus - PANTHEON TECH SRO at Cisco)
Cc: Juraj Linkeš; vpp-dev; Lijian Zhang
Subject: Re: [vpp-dev] AVF interface creation fails on VFs with configured VLAN 
with newer i40e drivers


I don't think using vlans in performance testbeds is a good idea.
I would simply avoid using it…

—
Damjan



On 07.10.2021., at 13:58, Peter Mikus -X (pmikus - PANTHEON TECH SRO at Cisco) 
<pmi...@cisco.com> wrote:

This effectively means to me decommissioning of vpp_device testing on Fortville, 
due to absence of support (API, switch, whatever...).
VLAN is a fundamental feature for isolating traffic into VFs in multitenant 
environments. It is clear from [0]:

//quote
If you have applications that require Virtual Functions (VFs) to receive
packets with VLAN tags, you can disable VLAN tag stripping for the VF. The
Physical Function (PF) processes requests issued from the VF to enable or
disable VLAN tag stripping. Note that if the PF has assigned a VLAN to a VF,
then requests from that VF to set VLAN tag stripping will be ignored.

So unless this is a bug, it means that the application should not enforce the 
behavior.

Juraj, does this detection work on DPDK testpmd? 
https://doc.dpdk.org/dts/test_plans/vlan_test_plan.html
If this is indeed unsupported at the new driver and firmware level, then removing 
the 700 series is the only option to me.

To me this is as simple as (pseudocode; "rv" stands in for the elided variable):
 rv = disable_vlan_stripping()
 if rv != 0:
   vlan_cannot_stripped_flag = 1  # ignore the error.

Thoughts?

[0] https://downloadmirror.intel.com/24693/eng/readme_4.2.7.txt

Peter Mikus
Engineer – Software
Cisco Systems Limited


From: Damjan Marion (damarion) <damar...@cisco.com>
Sent: Thursday, October 7, 2021 13:36
To: Juraj Linkeš
Cc: vpp-dev; Lijian Zhang; Peter Mikus -X (pmikus - PANTHEON TECH SRO at Cisco)
Subject: Re: [vpp-dev] AVF interface creation fails on VFs with configured VLAN 
with newer i40e drivers



On 07.10.2021., at 13:22, Juraj Linkeš <juraj.lin...@pantheon.tech> wrote:



-Original Message-
From: vpp-dev@lists.fd.io <vpp-dev@lists.fd.io> On Behalf Of Juraj Linkeš
Sent: Tuesday, September 28, 2021 11:43 AM
To: damar...@cisco.com
Cc: vpp-dev <vpp-dev@lists.fd.io>; Lijian Zhang <lijian.zh...@arm.com>
Subject: Re: [vpp-dev] AVF interface creation fails on VFs with configured VLAN
with newer i40e drivers



-Original Message-
From: vpp-dev@lists.fd.io <vpp-dev@lists.fd.io> On Behalf Of Damjan Marion via lists.fd.io
Sent: Wednesday, September 15, 2021 5:54 PM
To: Juraj Linkeš <juraj.lin...@pantheon.tech>
Cc: vpp-dev <vpp-dev@lists.fd.io>; Lijian Zhang <lijian.zh...@arm.com>
Subject: Re: [vpp-dev] AVF interface creation fails on VFs with
configured VLAN with newer i40e drivers



On 10.09.2021., at 08:53, Juraj Linkeš <juraj.lin...@pantheon.tech> wrote:



From: vpp-dev@lists.fd.io <vpp-dev@lists.fd.io> On Behalf Of Damjan Marion via lists.fd.io
Sent: Thursday, September 9, 2021 12:01 PM
To: Juraj Linkeš <juraj.lin...@pantheon.tech>
Cc: vpp-dev <vpp-dev@lists.fd.io>; Lijian Zhang <lijian.zh...@arm.com>
Subject: Re: [vpp-dev] AVF interface creation fails on VFs with
configured VLAN with newer i40e drivers


On 09.09.2021., at 09:14, Juraj Linkeš <juraj.lin...@pantheon.tech> wrote:

Hi Damjan, vpp devs,

Upgrading to the 2.15.9 i40e driver in CI (from Ubuntu's 2.8.20-k) makes AVF 
interface creation on VFs with configured VLANs fail.

Re: [vpp-dev] AVF interface creation fails on VFs with configured VLAN with newer i40e drivers

2021-10-07 Thread Damjan Marion via lists.fd.io

I don't think using vlans in performance testbeds is a good idea.
I would simply avoid using it…

—
Damjan



On 07.10.2021., at 13:58, Peter Mikus -X (pmikus - PANTHEON TECH SRO at Cisco) 
<pmi...@cisco.com> wrote:

This effectively means to me decommissioning of vpp_device testing on Fortville, 
due to absence of support (API, switch, whatever...).
VLAN is a fundamental feature for isolating traffic into VFs in multitenant 
environments. It is clear from [0]:

//quote
If you have applications that require Virtual Functions (VFs) to receive
packets with VLAN tags, you can disable VLAN tag stripping for the VF. The
Physical Function (PF) processes requests issued from the VF to enable or
disable VLAN tag stripping. Note that if the PF has assigned a VLAN to a VF,
then requests from that VF to set VLAN tag stripping will be ignored.

So unless this is a bug, it means that the application should not enforce the 
behavior.

Juraj, does this detection work on DPDK testpmd? 
https://doc.dpdk.org/dts/test_plans/vlan_test_plan.html
If this is indeed unsupported at the new driver and firmware level, then removing 
the 700 series is the only option to me.

To me this is as simple as (pseudocode; "rv" stands in for the elided variable):
 rv = disable_vlan_stripping()
 if rv != 0:
   vlan_cannot_stripped_flag = 1  # ignore the error.

Thoughts?

[0] https://downloadmirror.intel.com/24693/eng/readme_4.2.7.txt

Peter Mikus
Engineer – Software
Cisco Systems Limited


From: Damjan Marion (damarion) <damar...@cisco.com>
Sent: Thursday, October 7, 2021 13:36
To: Juraj Linkeš
Cc: vpp-dev; Lijian Zhang; Peter Mikus -X (pmikus - PANTHEON TECH SRO at Cisco)
Subject: Re: [vpp-dev] AVF interface creation fails on VFs with configured VLAN 
with newer i40e drivers



On 07.10.2021., at 13:22, Juraj Linkeš <juraj.lin...@pantheon.tech> wrote:



-Original Message-
From: vpp-dev@lists.fd.io <vpp-dev@lists.fd.io> On Behalf Of Juraj Linkeš
Sent: Tuesday, September 28, 2021 11:43 AM
To: damar...@cisco.com
Cc: vpp-dev <vpp-dev@lists.fd.io>; Lijian Zhang <lijian.zh...@arm.com>
Subject: Re: [vpp-dev] AVF interface creation fails on VFs with configured VLAN
with newer i40e drivers



-Original Message-
From: vpp-dev@lists.fd.io <vpp-dev@lists.fd.io> On Behalf Of Damjan Marion via lists.fd.io
Sent: Wednesday, September 15, 2021 5:54 PM
To: Juraj Linkeš <juraj.lin...@pantheon.tech>
Cc: vpp-dev <vpp-dev@lists.fd.io>; Lijian Zhang <lijian.zh...@arm.com>
Subject: Re: [vpp-dev] AVF interface creation fails on VFs with
configured VLAN with newer i40e drivers



On 10.09.2021., at 08:53, Juraj Linkeš <juraj.lin...@pantheon.tech> wrote:



From: vpp-dev@lists.fd.io <vpp-dev@lists.fd.io> On Behalf Of Damjan Marion via lists.fd.io
Sent: Thursday, September 9, 2021 12:01 PM
To: Juraj Linkeš <juraj.lin...@pantheon.tech>
Cc: vpp-dev <vpp-dev@lists.fd.io>; Lijian Zhang <lijian.zh...@arm.com>
Subject: Re: [vpp-dev] AVF interface creation fails on VFs with
configured VLAN with newer i40e drivers


On 09.09.2021., at 09:14, Juraj Linkeš <juraj.lin...@pantheon.tech> wrote:

Hi Damjan, vpp devs,

Upgrading to 2.15.9 i40e driver in CI (from Ubuntu's 2.8.20-k) makes
AVF
interface creation on VFs with configured VLANs fail:
2021/08/30 09:15:27:343 debug avf :91:04.1: request_queues:
num_queue_pairs 1
2021/08/30 09:15:27:434 debug avf :91:04.1: version: major 1
minor
1
2021/08/30 09:15:27:444 debug avf :91:04.1: get_vf_resources:
bitmap 0x180b80a1 (l2 wb-on-itr adv-link-speed vlan-v2 vlan
rx-polling rss-pf offload-adv-rss-pf offload-fdir-pf)
2021/08/30 09:15:27:445 debug avf :91:04.1: get_vf_resources:
num_vsis 1 num_queue_pairs 1 max_vectors 5 max_mtu 0 vf_cap_flags
0xb0081 (l2 adv-link-speed vlan rx-polling rss-pf) rss_key_size 52
rss_lut_size 64
2021/08/30 09:15:27:445 debug avf :91:04.1:
get_vf_resources_vsi[0]: vsi_id 27 num_queue_pairs 1 vsi_type 6
qset_handle 21 default_mac_addr ba:dc:0f:fe:02:11
2021/08/30 09:15:27:445 debug avf :91:04.1:
disable_vlan_stripping
2021/08/30 09:15:27:559 error avf :00:00.0: error: avf_send_to_pf:
error [v_opcode = 28, v_retval -5] from avf_create_if: pci-addr
:91:04.1

Syslog reveals a bit more:
Aug 30 09:15:27 s55-t13-sut1 kernel: [352169.781206] vfio-pci
:91:04.1: enabling device ( -> 0002) Aug 30 09:15:27
s55-t13-sut1 kernel: [352170.140729] i40e :91:00.0: Cannot
disable vlan stripping when port VLAN is set Aug 30 09:15:27
s55-t13-sut1
kernel: [352170.140737] i40e :91:00.0: VF 17 failed opcode 28,
retval: -5

It looks like this feature (vlan stripping on VFs with VLANs) was removed in 
later versions of the driver.

Re: [vpp-dev] AVF interface creation fails on VFs with configured VLAN with newer i40e drivers

2021-10-07 Thread Damjan Marion via lists.fd.io


> On 07.10.2021., at 13:22, Juraj Linkeš  wrote:
> 
> 
> 
>> -Original Message-
>> From: vpp-dev@lists.fd.io  On Behalf Of Juraj Linkeš
>> Sent: Tuesday, September 28, 2021 11:43 AM
>> To: damar...@cisco.com
>> Cc: vpp-dev ; Lijian Zhang 
>> Subject: Re: [vpp-dev] AVF interface creation fails on VFs with configured 
>> VLAN
>> with newer i40e drivers
>> 
>> 
>> 
>>> -----Original Message-
>>> From: vpp-dev@lists.fd.io  On Behalf Of Damjan
>>> Marion via lists.fd.io
>>> Sent: Wednesday, September 15, 2021 5:54 PM
>>> To: Juraj Linkeš 
>>> Cc: vpp-dev ; Lijian Zhang 
>>> Subject: Re: [vpp-dev] AVF interface creation fails on VFs with
>>> configured VLAN with newer i40e drivers
>>> 
>>> 
>>> 
>>>> On 10.09.2021., at 08:53, Juraj Linkeš  wrote:
>>>> 
>>>> 
>>>> 
>>>> From: vpp-dev@lists.fd.io  On Behalf Of Damjan
>>>> Marion via lists.fd.io
>>>> Sent: Thursday, September 9, 2021 12:01 PM
>>>> To: Juraj Linkeš 
>>>> Cc: vpp-dev ; Lijian Zhang
>>>> 
>>>> Subject: Re: [vpp-dev] AVF interface creation fails on VFs with
>>>> configured VLAN with newer i40e drivers
>>>> 
>>>> 
>>>> On 09.09.2021., at 09:14, Juraj Linkeš  wrote:
>>>> 
>>>> Hi Damjan, vpp devs,
>>>> 
>>>> Upgrading to 2.15.9 i40e driver in CI (from Ubuntu's 2.8.20-k) makes
>>>> AVF
>>> interface creation on VFs with configured VLANs fail:
>>>> 2021/08/30 09:15:27:343 debug avf :91:04.1: request_queues:
>>>> num_queue_pairs 1
>>>> 2021/08/30 09:15:27:434 debug avf :91:04.1: version: major 1
>>>> minor
>>>> 1
>>>> 2021/08/30 09:15:27:444 debug avf :91:04.1: get_vf_resources:
>>>> bitmap 0x180b80a1 (l2 wb-on-itr adv-link-speed vlan-v2 vlan
>>>> rx-polling rss-pf offload-adv-rss-pf offload-fdir-pf)
>>>> 2021/08/30 09:15:27:445 debug avf :91:04.1: get_vf_resources:
>>>> num_vsis 1 num_queue_pairs 1 max_vectors 5 max_mtu 0 vf_cap_flags
>>>> 0xb0081 (l2 adv-link-speed vlan rx-polling rss-pf) rss_key_size 52
>>>> rss_lut_size 64
>>>> 2021/08/30 09:15:27:445 debug avf :91:04.1:
>>>> get_vf_resources_vsi[0]: vsi_id 27 num_queue_pairs 1 vsi_type 6
>>>> qset_handle 21 default_mac_addr ba:dc:0f:fe:02:11
>>>> 2021/08/30 09:15:27:445 debug avf :91:04.1:
>>>> disable_vlan_stripping
>>>> 2021/08/30 09:15:27:559 error avf :00:00.0: error: avf_send_to_pf:
>>>> error [v_opcode = 28, v_retval -5] from avf_create_if: pci-addr
>>>> :91:04.1
>>>> 
>>>> Syslog reveals a bit more:
>>>> Aug 30 09:15:27 s55-t13-sut1 kernel: [352169.781206] vfio-pci
>>>> :91:04.1: enabling device ( -> 0002) Aug 30 09:15:27
>>>> s55-t13-sut1 kernel: [352170.140729] i40e :91:00.0: Cannot
>>>> disable vlan stripping when port VLAN is set Aug 30 09:15:27
>>>> s55-t13-sut1
>>>> kernel: [352170.140737] i40e :91:00.0: VF 17 failed opcode 28,
>>>> retval: -5
>>>> 
>>>> It looks like this feature (vlan stripping on VFs with VLANs) was
>>>> removed in
>>> later versions of the driver. I don't know what the proper solution
>>> here is, but adding a configuration option to not disable vlan
>>> stripping when creating an AVF interface sounds good to me.
>>>> 
>>>> I've documented this in https://jira.fd.io/browse/VPP-1995.
>>>> 
>>>> Can you try with 2.16.11 and report back same outputs?
>>>> 
>>>> I've updated https://jira.fd.io/browse/VPP-1995 with 2.16.11 outputs
>>>> and
>>> they're pretty much the same, except the last syslog line is missing.
>>> 
>>> OK, I was hoping the new version of the driver supports the VLAN v2 offload
>>> APIs, which allow us to know whether stripping is supported on the
>>> specific interface. The V2 API is already supported in the ice driver (E810
>>> NICs) and we have code to deal with that.
>>> 
>>> So not sure what we can do here. I don’t see a way to know if
>>> stripping is supported or not.
>> 
>> If there isn't an API for this, then we'll have to get this information from 
>> the
>> user, right?
>> 
>> Or we could try enabling stripping but not fail the interface initialization 
>> if it's not
>> successful.
>> 
>> Thoughts?
>> Juraj
>> 
> 
> Hi Damjan,
> 
> Just pinging to get your thoughts. It really seems like we should introduce 
> some sort of switch in the absence of an API.

Or simply declare it as unsupported until Intel introduces the V2 API in i40e.

— 
Damjan
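For anyone trying to reproduce the failing condition: the port VLAN that triggers
"Cannot disable vlan stripping when port VLAN is set" is assigned from the PF
with standard iproute2, roughly like this (interface name, VF index and VLAN id
are examples):

  ip link set ens785f0 vf 17 vlan 100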






Re: [vpp-dev] No rdma plugin in el/8 rpm package 21.06 repository, it obsolete?

2021-09-23 Thread Damjan Marion via lists.fd.io

— 
Damjan



> On 23.09.2021., at 15:00, Юрий Иванов  wrote:
> 
> Hi,
> 
> I see there is no rdma plugin in the prebuilt packages of the 21.06 version
> [suser@RockyVPP-1 ~]$ dnf repoquery -l vpp* | grep rdma | grep -P ".so$" | 
> grep plugi
> Last metadata expiration check: 0:00:05 ago on Thu 23 Sep 2021 03:56:39 PM 
> EEST.
> [suser@RockyVPP-1 ~]$ 
> 
> But it exists in 21.06 packages version
> [suser@RockyVPP-1 ~]$ dnf repoquery -l vpp* | grep rdma | grep -P ".so$" | 
> grep plugi
> Last metadata expiration check: 0:00:24 ago on Thu 23 Sep 2021 03:52:29 PM 
> EEST.
> /usr/lib/vpp_api_test_plugins/rdma_test_plugin.so
> /usr/lib/vpp_plugins/rdma_plugin.so
> /usr/lib/vpp_api_test_plugins/rdma_test_plugin.so
> /usr/lib/vpp_plugins/rdma_plugin.so
> ...
> 
> Is rdma already deprecated, and should we use something new for Mellanox -4/5?

More likely the problem is the fact that RPM packaging is not actively maintained 
anymore… I suggest using Ubuntu, Debian, ...

— 
Damjan





Re: [vpp-dev] config 1G hugepage

2021-09-22 Thread Damjan Marion via lists.fd.io

With running VPP you can do:

$ grep huge /proc/$(pgrep vpp)/numa_maps
10 default file=/memfd:buffers-numa-0\040(deleted) huge dirty=19 N0=19 
kernelpagesize_kB=2048
100260 default file=/memfd:buffers-numa-1\040(deleted) huge dirty=19 N1=19 
kernelpagesize_kB=2048
1004c0 default file=/anon_hugepage\040(deleted) huge anon=1 dirty=1 N1=1 
kernelpagesize_kB=2048

1st line - 19 2048K memfd backed  hugepages on numa 0
2nd line - 19 2048K memfd backed hugepages on numa 1
3rd line - one 2048K anonymous hugepage on numa 1

first two are buffer pool memory, 3rd one is likely some physmem used by native 
driver


If you add to startup.conf:

memory {
  main-heap-page-size 1G
}


$grep huge /proc/$(pgrep vpp)/numa_maps
10 default file=/memfd:buffers-numa-0\040(deleted) huge dirty=19 N0=19 
kernelpagesize_kB=2048
100260 default file=/memfd:buffers-numa-1\040(deleted) huge dirty=19 N1=19 
kernelpagesize_kB=2048
1004c0 default file=/anon_hugepage\040(deleted) huge anon=1 dirty=1 N1=1 
kernelpagesize_kB=2048
7fbc default file=/anon_hugepage\040(deleted) huge anon=1 dirty=1 N1=1 
kernelpagesize_kB=1048576

last line is main heap allocated as single anonymous 1G hugepage.

VPP is not using filesystem backed hugepages so you will not find anything in 
/var/run/huge….

— 
Damjan



> On 21.09.2021., at 20:11, Mohsen Meamarian  wrote:
> 
> Hi,
> Thanks. Is there a way to make sure how many hugepages are ready for VPP to 
> use? Immediately after starting VPP, I open the "/run/vpp/hugepages" file 
> but it is empty. Does VPP occupy hugepages only when needed, or does it 
> reserve them for itself from the beginning? 
> 
> 
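For checking what the system as a whole has reserved, independently of VPP, the
kernel's standard hugepage accounting works; a couple of known places to look:

  # global counters, all page sizes
  grep -i huge /proc/meminfo
  # per-size pools, e.g. 2M and 1G
  cat /sys/kernel/mm/hugepages/hugepages-2048kB/free_hugepages
  cat /sys/kernel/mm/hugepages/hugepages-1048576kB/free_hugepages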





Re: [vpp-dev] Enable DPDK tx offload flag mbuf-fast-free on VPP vector mode

2021-09-22 Thread Damjan Marion via lists.fd.io

— 
Damjan



> On 22.09.2021., at 11:50, Jieqiang Wang  wrote:
> 
> Hi Ben,
> 
> Thanks for your quick feedback. A few comments inline.
> 
> Best Regards,
> Jieqiang Wang
> 
> -Original Message-
> From: Benoit Ganne (bganne) 
> Sent: Friday, September 17, 2021 3:34 PM
> To: Jieqiang Wang ; vpp-dev 
> Cc: Lijian Zhang ; Honnappa Nagarahalli 
> ; Govindarajan Mohandoss 
> ; Ruifeng Wang ; Tianyu 
> Li ; Feifei Wang ; nd 
> Subject: RE: Enable DPDK tx offload flag mbuf-fast-free on VPP vector mode
> 
> Hi Jieqiang,
> 
> This looks like an interesting optimization but you need to check that the 
> 'mbufs to be freed should be coming from the same mempool' rule holds true. 
> This won't be the case on NUMA systems (VPP creates 1 buffer pool per NUMA).
> This should be easy to check with eg. 'vec_len 
> (vm->buffer_main->buffer_pools) == 1'.
 Jieqiang: That's a really good point here. Like you said, it holds true on 
 SMP systems, and we can check it by whether the number of buffer pools equals 1. 
 But I am wondering: is this check too strict? If the worker CPUs and 
 NICs used reside in the same NUMA node, I think mbufs come from the same 
 mempool and we still meet the requirement here. What do you think?

Please note that VPP is not using DPDK mempools. We are faking them by 
registering our own mempool handlers.
There is a special trick for how refcnt > 1 is handled. All packets which have a 
vpp ref count > 1 are sent to DPDK code as members of another fake mempool which 
has its cache turned off.
In reality that means that DPDK will have 2 fake mempools per numa, and all 
packets going to DPDK code will always have refcnt set to 1.

> 
> For the rest, I think we do not use DPDK mbuf refcounting at all as we 
> maintain our own anyway, but someone more knowledgeable than me should 
> confirm.
 Jieqiang: This echoes with the experiments(IPv4 multicasting and L2 flood) 
 I have done. All the mbufs in the two test cases are copied instead of ref 
 counting. But this also needs double-check from VPP experts like you 
 mentioned.

see above….

> 
> I'd be curious to see if we can measure a real performance difference in CSIT.
 Jieqiang: Let me trigger some performance test cases in CSIT and come back 
 to you with related performance figures.

— 
Damjan
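A sketch of the check suggested earlier in this thread (not merged code; names as
used in the discussion above): gate a fast-free style TX offload on there being a
single VPP buffer pool, so every mbuf handed to the PMD comes from one pool:

  #include <vlib/vlib.h>

  static_always_inline int
  tx_fast_free_usable (vlib_main_t *vm)
  {
    /* VPP creates one buffer pool per NUMA node */
    return vec_len (vm->buffer_main->buffer_pools) == 1;
  }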



Re: [vpp-dev] AVF interface creation fails on VFs with configured VLAN with newer i40e drivers

2021-09-15 Thread Damjan Marion via lists.fd.io


> On 10.09.2021., at 08:53, Juraj Linkeš  wrote:
> 
>  
>  
> From: vpp-dev@lists.fd.io  On Behalf Of Damjan Marion 
> via lists.fd.io
> Sent: Thursday, September 9, 2021 12:01 PM
> To: Juraj Linkeš 
> Cc: vpp-dev ; Lijian Zhang 
> Subject: Re: [vpp-dev] AVF interface creation fails on VFs with configured 
> VLAN with newer i40e drivers
>  
> 
> On 09.09.2021., at 09:14, Juraj Linkeš  wrote:
>  
> Hi Damjan, vpp devs,
>  
> Upgrading to 2.15.9 i40e driver in CI (from Ubuntu's 2.8.20-k) makes AVF 
> interface creation on VFs with configured VLANs fail:
> 2021/08/30 09:15:27:343 debug avf :91:04.1: request_queues: 
> num_queue_pairs 1
> 2021/08/30 09:15:27:434 debug avf :91:04.1: version: major 1 minor 1
> 2021/08/30 09:15:27:444 debug avf :91:04.1: get_vf_resources: bitmap 
> 0x180b80a1 (l2 wb-on-itr adv-link-speed vlan-v2 vlan rx-polling rss-pf 
> offload-adv-rss-pf offload-fdir-pf)
> 2021/08/30 09:15:27:445 debug avf :91:04.1: get_vf_resources: num_vsis 1 
> num_queue_pairs 1 max_vectors 5 max_mtu 0 vf_cap_flags 0xb0081 (l2 
> adv-link-speed vlan rx-polling rss-pf) rss_key_size 52 rss_lut_size 64
> 2021/08/30 09:15:27:445 debug avf :91:04.1: get_vf_resources_vsi[0]: 
> vsi_id 27 num_queue_pairs 1 vsi_type 6 qset_handle 21 default_mac_addr 
> ba:dc:0f:fe:02:11
> 2021/08/30 09:15:27:445 debug avf :91:04.1: disable_vlan_stripping
> 2021/08/30 09:15:27:559 error avf :00:00.0: error: avf_send_to_pf: error 
> [v_opcode = 28, v_retval -5]
> from avf_create_if: pci-addr :91:04.1
>  
> Syslog reveals a bit more:
> Aug 30 09:15:27 s55-t13-sut1 kernel: [352169.781206] vfio-pci :91:04.1: 
> enabling device ( -> 0002)
> Aug 30 09:15:27 s55-t13-sut1 kernel: [352170.140729] i40e :91:00.0: 
> Cannot disable vlan stripping when port VLAN is set
> Aug 30 09:15:27 s55-t13-sut1 kernel: [352170.140737] i40e :91:00.0: VF 17 
> failed opcode 28, retval: -5
>  
> It looks like this feature (vlan stripping on VFs with VLANs) was removed in 
> later versions of the driver. I don't know what the proper solution here is, 
> but adding a configuration option to not disable vlan stripping when creating 
> an AVF interface sounds good to me.
>  
> I've documented this in https://jira.fd.io/browse/VPP-1995.
>  
> Can you try with 2.16.11 and report back same outputs?
>  
> I've updated https://jira.fd.io/browse/VPP-1995 with 2.16.11 outputs and 
> they're pretty much the same, except the last syslog line is missing.

OK, I was hoping the new version of the driver supports the VLAN v2 offload APIs, 
which allow us to know whether stripping is supported on the specific interface. 
The V2 API is already supported in the ice driver (E810 NICs) and we have code to 
deal with that.

So not sure what we can do here. I don’t see a way to know if stripping is 
supported or not.

— 
Damjan

 






[vpp-dev] getting a rid of vpe.api

2021-09-09 Thread Damjan Marion via lists.fd.io

Guys,

Can we get rid of vpp/api/vpe.api and vpp/api/vpe_types.api by moving the content 
to more appropriate places? E.g. some basic types and control_ping may be good 
candidates for vlibapi/.

It is quite weird that we have dozens of plugins depending on header files 
autogenerated from the main executable directory….

If we get rid of hardcoded dependencies like we have with APIs today, we will be 
able to modularize the build. E.g. Florin is asking for a way to just build VCL….


Any volunteers?

Thanks,

— 
Damjan







Re: [vpp-dev] AVF interface creation fails on VFs with configured VLAN with newer i40e drivers

2021-09-09 Thread Damjan Marion via lists.fd.io

On 09.09.2021., at 09:14, Juraj Linkeš <juraj.lin...@pantheon.tech> wrote:

Hi Damjan, vpp devs,

Upgrading to 2.15.9 i40e driver in CI (from Ubuntu's 2.8.20-k) makes AVF 
interface creation on VFs with configured VLANs fail:
2021/08/30 09:15:27:343 debug avf :91:04.1: request_queues: num_queue_pairs 
1
2021/08/30 09:15:27:434 debug avf :91:04.1: version: major 1 minor 1
2021/08/30 09:15:27:444 debug avf :91:04.1: get_vf_resources: bitmap 
0x180b80a1 (l2 wb-on-itr adv-link-speed vlan-v2 vlan rx-polling rss-pf 
offload-adv-rss-pf offload-fdir-pf)
2021/08/30 09:15:27:445 debug avf :91:04.1: get_vf_resources: num_vsis 1 
num_queue_pairs 1 max_vectors 5 max_mtu 0 vf_cap_flags 0xb0081 (l2 
adv-link-speed vlan rx-polling rss-pf) rss_key_size 52 rss_lut_size 64
2021/08/30 09:15:27:445 debug avf :91:04.1: get_vf_resources_vsi[0]: vsi_id 
27 num_queue_pairs 1 vsi_type 6 qset_handle 21 default_mac_addr 
ba:dc:0f:fe:02:11
2021/08/30 09:15:27:445 debug avf :91:04.1: disable_vlan_stripping
2021/08/30 09:15:27:559 error avf :00:00.0: error: avf_send_to_pf: error 
[v_opcode = 28, v_retval -5]
from avf_create_if: pci-addr :91:04.1

Syslog reveals a bit more:
Aug 30 09:15:27 s55-t13-sut1 kernel: [352169.781206] vfio-pci :91:04.1: 
enabling device ( -> 0002)
Aug 30 09:15:27 s55-t13-sut1 kernel: [352170.140729] i40e :91:00.0: Cannot 
disable vlan stripping when port VLAN is set
Aug 30 09:15:27 s55-t13-sut1 kernel: [352170.140737] i40e :91:00.0: VF 17 
failed opcode 28, retval: -5

It looks like this feature (vlan stripping on VFs with VLANs) was removed in 
later versions of the driver. I don't know what the proper solution here is, 
but adding a configuration option to not disable vlan stripping when creating 
an AVF interface sounds good to me.

I've documented this in https://jira.fd.io/browse/VPP-1995.

Can you try with 2.16.11 and report back same outputs?

I just updated https://github.com/dmarion/deb-i40e in case you are using it…

—
Damjan






Re: [vpp-dev] Regarding assert in vlib_buffer_advance

2021-09-08 Thread Damjan Marion via lists.fd.io

It is mainly about the first segment. The majority of vpp code assumes that 
packet headers are in the first segment. This is to prevent crashes due to 
headers being split between two segments.

— 
Damjan

> On 08.09.2021., at 11:31, Prashant Upadhyaya  wrote:
> 
> Hi Damjan,
> 
> Thanks for the feedback.
> Out of curiosity, what is the motivation of this contract about
> minimal length of chained buffer data -- surely, my case being in
> point, the chaining framework should not make any assumptions about
> how the user would use it.
> 
> Regards
> -Prashant
> 
>> On Tue, Sep 7, 2021 at 12:59 AM Damjan Marion  wrote:
>> 
>> 
>> —
>> Damjan
>> 
>> 
>> 
>> On 06.09.2021., at 15:27, Prashant Upadhyaya  wrote:
>> 
>> Hi,
>> 
>> I am using VPP21.06
>> In vlib_buffer_advance there is the following assert --
>> ASSERT ((b->flags & VLIB_BUFFER_NEXT_PRESENT) == 0 ||
>> b->current_length >= VLIB_BUFFER_MIN_CHAIN_SEG_SIZE);
>> 
>> The above is problematic as I have a usecase where I construct a chained 
>> packet.
>> The first packet in the chain is containing just an ip4/udp/gtp header
>> and the second packet in the chain is an IP4 packet of arbitrary
>> length -- you can see that I am trying to wrap the packet into gtp via
>> chaining.
>> As a result this assert hits and brings the house down.
>> My usecase works fine when I use the non-debug build of VPP.
>> 
>> Perhaps this assert should be removed ?
>> 
>> 
>> This assert enforces a contract with the rest of the VPP code about the
>> minimal length of chained buffer data.
>> You can remove it, but be aware of consequences. At some point things may 
>> just blow up….
>> 
>> —
>> Damjan
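A sketch of one way to satisfy the contract instead of removing the assert
(standard vlib buffer helpers; untested, and it does not handle the second
segment becoming empty): pull enough bytes from the second segment into the
first so headers never straddle the boundary:

  /* before calling vlib_buffer_advance() on a freshly built chain */
  if ((b0->flags & VLIB_BUFFER_NEXT_PRESENT) &&
      b0->current_length < VLIB_BUFFER_MIN_CHAIN_SEG_SIZE)
    {
      vlib_buffer_t *b1 = vlib_get_buffer (vm, b0->next_buffer);
      u16 n = clib_min (VLIB_BUFFER_MIN_CHAIN_SEG_SIZE - b0->current_length,
                        b1->current_length);
      clib_memcpy_fast ((u8 *) vlib_buffer_get_current (b0) +
                          b0->current_length,
                        vlib_buffer_get_current (b1), n);
      b0->current_length += n;
      vlib_buffer_advance (b1, n);
    }

This assumes the first segment was allocated with room for at least
VLIB_BUFFER_MIN_CHAIN_SEG_SIZE bytes.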




Re: [vpp-dev] Regarding assert in vlib_buffer_advance

2021-09-06 Thread Damjan Marion via lists.fd.io

— 
Damjan



> On 06.09.2021., at 15:27, Prashant Upadhyaya  wrote:
> 
> Hi,
> 
> I am using VPP21.06
> In vlib_buffer_advance there is the following assert --
> ASSERT ((b->flags & VLIB_BUFFER_NEXT_PRESENT) == 0 ||
>  b->current_length >= VLIB_BUFFER_MIN_CHAIN_SEG_SIZE);
> 
> The above is problematic as I have a usecase where I construct a chained 
> packet.
> The first packet in the chain is containing just an ip4/udp/gtp header
> and the second packet in the chain is an IP4 packet of arbitrary
> length -- you can see that I am trying to wrap the packet into gtp via
> chaining.
> As a result this assert hits and brings the house down.
> My usecase works fine when I use the non-debug build of VPP.
> 
> Perhaps this assert should be removed ?

This assert enforces a contract with the rest of the VPP code about the minimal 
length of chained buffer data.
You can remove it, but be aware of consequences. At some point things may just 
blow up….

— 
Damjan



Re: [vpp-dev] TX Queue Placement

2021-09-05 Thread Damjan Marion via lists.fd.io


> On 03.09.2021., at 20:10, Mrityunjay Kumar  wrote:
> 
> Please find the comment inline below. 
> Regards,
> Mrityunjay Kumar.
> Mobile: +91 - 9731528504
> 
> 
> 
> On Fri, Sep 3, 2021 at 9:52 PM Damjan Marion  > wrote:
> 
> 
>> On 03.09.2021., at 12:05, Mrityunjay Kumar > > wrote:
>> 
>> Damjan Hi,
>>  
>> I’m so sorry for pressing the point, but I’d like to make sure I understood you 
>> correctly. I don’t have a specific case of tx-placement, but please help the 
>> vpp-dev mail readers.  
>> · tx queues are statically mapped by vpp.
>> · main thread always maps to queue 0 of each interface in vpp.
>> · For the dpdk interfaces by default, the number of vlib_mains [main 
>> thread + workers] is equal to the number of tx queues if the dpdk driver 
>> supports such a limit.
>> · The tx queue limit can be controlled by the startup.conf section dpdk { 
>> num-tx-queues #abc }. But it might lead to spinlocks on worker threads; 
>> refer to the code.
>> if (xd->tx_q_used < tm->n_vlib_mains)
>>   clib_spinlock_init (&vec_elt (xd->tx_queues, j).lock);
>>  
>> So I think we can’t generalise the worker-to-tx-queue mapping, because 
>> different vpp interfaces can have different numbers of tx queues. 
> 
> If VPP has more worker threads than TX queues on a specific 
> interface, then VPP will share 1 queue between multiple workers for that 
> interface.
> 
> With the new tx queue infra (not yet enabled for dpdk), we allow dynamic 
> mapping of tx queues and also sharing a single queue between multiple workers. 
> I.e. you can have 8 workers sharing 4 queues (2:1 mapping).
> 
> [MJ]: Why should we implement features to allow a user to configure 
> fewer TX queues than worker threads? In case of a hardware limitation, that's 
> ok. If the number of threads is more than the number of TX queues, it leads 
> to locking on threads. Being a decade-old user of dpdk, we recommend a 
> lock-less mechanism. 
> [MJ]: Why do we think of implementing tx-placement in VPP? If someone shares 
> a use case, it will definitely help me to improve my knowledge. 

One example is AWS Nitro, which has per-queue limits which in some cases mean 
that you need more than one queue to deal with traffic from a single thread.

— 
Damjan
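To make the N:M case concrete, a sketch of the shared-queue idea (illustrative
only, not the actual VPP tx infra code): workers map onto queues round-robin and
take the per-queue lock only when a queue is shared:

  u32 qid = worker_id % n_txq;        /* e.g. 8 workers, 4 queues -> 2:1 */
  int shared = n_workers > n_txq;
  if (shared)
    clib_spinlock_lock (&txq[qid].lock);
  /* ... enqueue the burst on queue qid ... */
  if (shared)
    clib_spinlock_unlock (&txq[qid].lock);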






[vpp-dev] vpp-plugin-devtools

2021-08-27 Thread Damjan Marion via lists.fd.io

Hello,

We have more and more plugins which are intended to be developer tools, and not 
really useful in production installs.

Good examples are: unittest, bufmon, perfmon, dispatch-trace, tracedump.

I think we should move them to a separate .deb package called vpp-plugin-devtools.

Thoughts?

— 
Damjan







Re: [vpp-dev] master branch build failed #vpp-dev

2021-08-26 Thread Damjan Marion via lists.fd.io
But it is good news for pretty much everybody else involved :)
Simply too old, too many missing dependencies, etc…..

— 
Damjan

> On 26.08.2021., at 14:29, jiangxiaom...@outlook.com wrote:
> 
> it's not good news for me 
> 
> 





Re: [vpp-dev] master branch build failed #vpp-dev

2021-08-26 Thread Damjan Marion via lists.fd.io
We don’t support CentOS 7 anymore…

— 
Damjan

> On 26.08.2021., at 14:01, jiangxiaom...@outlook.com wrote:
> 
> I build vpp on centos 7 with devtoolset-9. I find the use of devtoolset-9 
> was removed in this commit; is there any purpose for it? 
> commit a5167edc66c639e139ffb5de4336c54bb3d8a871
> Author: Damjan Marion 
> Date:   Fri Jul 2 16:04:26 2021 +0200
>  
> build: remove unused files and sections
> 
> Type: make
> Change-Id: Ia1d8c53c5fb02f7e5c86efab6e6ccd0fdb16bc96
> Signed-off-by: Damjan Marion 
>  
> diff --git a/build-data/packages/libmemif.mk b/build-data/packages/libmemif.mk
> index acc0d6425..a4676af45 100644
> --- a/build-data/packages/libmemif.mk
> +++ b/build-data/packages/libmemif.mk
> @@ -26,11 +26,6 @@ libmemif_cmake_args += 
> -DCMAKE_C_FLAGS="$($(TAG)_TAG_CFLAGS)"
>  libmemif_cmake_args += -DCMAKE_SHARED_LINKER_FLAGS="$($(TAG)_TAG_LDFLAGS)"
>  libmemif_cmake_args += 
> -DCMAKE_PREFIX_PATH:PATH="$(PACKAGE_INSTALL_DIR)/../vpp"
>  
> -# Use devtoolset on centos 7
> -ifneq ($(wildcard /opt/rh/devtoolset-9/enable),)
> -libmemif_cmake_args += 
> -DCMAKE_PROGRAM_PATH:PATH="/opt/rh/devtoolset-9/root/bin"
> -endif
> -
>  libmemif_configure = \
>cd $(PACKAGE_BUILD_DIR) && \
>$(CMAKE) -G Ninja $(libmemif_cmake_args) $(call 
> find_source_fn,$(PACKAGE_SOURCE))$(PACKAGE_SUBDIR)
> diff --git a/build-data/packages/sample-plugin.mk 
> b/build-data/packages/sample-plugin.mk
> index 34188f9e7..546164c0d 100644
> --- a/build-data/packages/sample-plugin.mk
> +++ b/build-data/packages/sample-plugin.mk
> @@ -30,11 +30,6 @@ sample-plugin_cmake_args += 
> -DCMAKE_C_FLAGS="$($(TAG)_TAG_CFLAGS)"
>  sample-plugin_cmake_args += 
> -DCMAKE_SHARED_LINKER_FLAGS="$($(TAG)_TAG_LDFLAGS)"
>  sample-plugin_cmake_args += 
> -DCMAKE_PREFIX_PATH:PATH="$(PACKAGE_INSTALL_DIR)/../vpp"
>  
> -# Use devtoolset on centos 7
> -ifneq ($(wildcard /opt/rh/devtoolset-9/enable),)
> -sample-plugin_cmake_args += 
> -DCMAKE_PROGRAM_PATH:PATH="/opt/rh/devtoolset-9/root/bin"
> -endif
> -
>  sample-plugin_configure = \
>cd $(PACKAGE_BUILD_DIR) && \
>$(CMAKE) -G Ninja $(sample-plugin_cmake_args) \
> diff --git a/build-data/packages/vpp.mk b/build-data/packages/vpp.mk
> index 7db450e05..ad1d1fc9a 100644
> --- a/build-data/packages/vpp.mk
> +++ b/build-data/packages/vpp.mk
> @@ -30,16 +30,6 @@ vpp_cmake_args += 
> -DCMAKE_PREFIX_PATH:PATH="$(vpp_cmake_prefix_path)"
>  ifeq ("$(V)","1")


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#20026): https://lists.fd.io/g/vpp-dev/message/20026
Mute This Topic: https://lists.fd.io/mt/85151908/21656
Mute #vpp-dev:https://lists.fd.io/g/vpp-dev/mutehashtag/vpp-dev
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] master branch build failed #vpp-dev

2021-08-26 Thread Damjan Marion via lists.fd.io
Your C compiler is too old….  You should use gcc 8+ or clang 7+

— 
Damjan

> On 26.08.2021., at 03:31, jiangxiaom...@outlook.com wrote:
> 
> Hi all,
>  VPP master branch build failed; does anyone have the same issue?
> 
> @@@ Configuring vpp in 
> /home/dev/code/vpp/build-root/build-vpp_debug-native/vpp 
> -- The C compiler identification is GNU 4.8.5
> -- Check for working C compiler: /usr/lib64/ccache/cc
> -- Check for working C compiler: /usr/lib64/ccache/cc - works
> -- Detecting C compiler ABI info
> -- Detecting C compiler ABI info - done
> -- Detecting C compile features
> -- Detecting C compile features - done
> -- Performing Test compiler_flag_march_haswell
> -- Performing Test compiler_flag_march_haswell - Failed
> -- Performing Test compiler_flag_mtune_haswell
> -- Performing Test compiler_flag_mtune_haswell - Failed
> -- Performing Test compiler_flag_march_tremont
> -- Performing Test compiler_flag_march_tremont - Failed
> -- Performing Test compiler_flag_mtune_tremont
> -- Performing Test compiler_flag_mtune_tremont - Failed
> -- Performing Test compiler_flag_march_skylake_avx512
> -- Performing Test compiler_flag_march_skylake_avx512 - Failed
> -- Performing Test compiler_flag_mtune_skylake_avx512
> -- Performing Test compiler_flag_mtune_skylake_avx512 - Failed
> -- Performing Test compiler_flag_mprefer_vector_width_256
> -- Performing Test compiler_flag_mprefer_vector_width_256 - Failed
> -- Performing Test compiler_flag_march_icelake_client
> -- Performing Test compiler_flag_march_icelake_client - Failed
> -- Performing Test compiler_flag_mtune_icelake_client
> -- Performing Test compiler_flag_mtune_icelake_client - Failed
> -- Performing Test compiler_flag_mprefer_vector_width_512
> -- Performing Test compiler_flag_mprefer_vector_width_512 - Failed
> -- Looking for ccache
> -- Looking for ccache - found
> -- Performing Test compiler_flag_no_address_of_packed_member
> -- Performing Test compiler_flag_no_address_of_packed_member - Success
> -- Looking for pthread.h
> -- Looking for pthread.h - found
> -- Performing Test CMAKE_HAVE_LIBC_PTHREAD
> -- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Failed
> -- Check if compiler accepts -pthread
> -- Check if compiler accepts -pthread - yes
> -- Found Threads: TRUE  
> -- Performing Test HAVE_FCNTL64
> -- Performing Test HAVE_FCNTL64 - Failed
> -- Found OpenSSL: /usr/lib64/libcrypto.so (found version "1.1.1i")  
> -- The ASM compiler identification is GNU
> -- Found assembler: /usr/lib64/ccache/cc
> -- Looking for libuuid
> -- Found uuid in /usr/include
> -- libbpf headers not found - af_xdp plugin disabled
> -- Intel IPSecMB found: 
> /home/dev/code/vpp/build-root/install-vpp_debug-native/external/include
> -- dpdk plugin needs libdpdk.a library - found at 
> /home/dev/code/vpp/build-root/install-vpp_debug-native/external/lib/libdpdk.a
> -- Found DPDK 21.5.0 in 
> /home/dev/code/vpp/build-root/install-vpp_debug-native/external/include
> -- dpdk plugin needs numa library - found at /usr/lib64/libnuma.so
> -- linux-cp plugin needs libnl-3.so library - found at /usr/lib64/libnl-3.so
> -- linux-cp plugin needs libnl-route-3.so.200 library - found at 
> /usr/lib64/libnl-route-3.so.200
> -- Found quicly 0.1.3-vpp in 
> /home/dev/code/vpp/build-root/install-vpp_debug-native/external/include
> -- rdma plugin needs libibverbs.a library - found at 
> /home/dev/code/vpp/build-root/install-vpp_debug-native/external/lib/libibverbs.a
> -- rdma plugin needs librdma_util.a library - found at 
> /home/dev/code/vpp/build-root/install-vpp_debug-native/external/lib/librdma_util.a
> -- rdma plugin needs libmlx5.a library - found at 
> /home/dev/code/vpp/build-root/install-vpp_debug-native/external/lib/libmlx5.a
> -- Performing Test IBVERBS_COMPILES_CHECK
> -- Performing Test IBVERBS_COMPILES_CHECK - Success
> -- -- libdaq headers not found - snort3 DAQ disabled
> -- -- libsrtp2.a library not found - srtp plugin disabled
> -- tlsmbedtls plugin needs mbedtls library - found at /usr/lib64/libmbedtls.so
> -- tlsmbedtls plugin needs mbedx509 library - found at 
> /usr/lib64/libmbedx509.so
> -- tlsmbedtls plugin needs mbedcrypto library - found at 
> /usr/lib64/libmbedcrypto.so
> -- Looking for SSL_set_async_callback
> -- Looking for SSL_set_async_callback - not found
> -- Found picotls in 
> /home/dev/code/vpp/build-root/install-vpp_debug-native/external/include and 
> /home/dev/code/vpp/build-root/install-vpp_debug-native/external/lib/libpicotls-core.a
> -- subunit library not found - vapi tests disabled
> -- Found Python3: /usr/bin/python3.6 (found version "3.6.8") found 
> components: Interpreter 
> -- Configuration:
> VPP version : 21.10-rc0~274-gee04de5
> VPP library version : 21.10
> GIT toplevel dir: /home/dev/code/vpp
> Build type  : debug
> C flags : 
> Linker flags (apps) : 
> Linker flags (libs) : 
> Host processor  : x86_64
> Target processor: x86_64
> Prefix path : /opt/vpp/external/x86_64 
> 

Re: [vpp-dev] Reason for removing SUSE packaging support

2021-07-19 Thread Damjan Marion via lists.fd.io

Simply because nobody was interested in volunteering to maintain it.

— 
Damjan

> 
> On 19.07.2021., at 12:04, Laszlo Király  wrote:
> 
> 
> Hello,
> 
> Could somebody explain why the build support for SUSE was removed? 
> Which was the last release with support for building on openSUSE? I found only 
> this commit mentioning the removal:
> 
> commit bc35f469c89daf0126937580b6972516b5007d3a
> Author: Dave Wallace 
> Date:   Fri Sep 18 15:35:01 2020 +
> 
> build: remove opensuse build infra
>
> - VPP on opensuse has not been supported
>   for several releases.
>
> Type: fix
>
> Signed-off-by: Dave Wallace 
> Change-Id: I2b5316ad5c20a843b8936f4ceb473f932a5338d9
> 
> 
> Is it planned to add it back soon? Or later?
> 
> --
> Laszlo Kiraly
> Ericsson Software Technology
> laszlo.kir...@est.tech
> 
> 
> 

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#19829): https://lists.fd.io/g/vpp-dev/message/19829
Mute This Topic: https://lists.fd.io/mt/84304700/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] compilation error in vpp 20.05

2021-07-16 Thread Damjan Marion via lists.fd.io



> On 16.07.2021., at 16:58, ashish.sax...@hsc.com wrote:
> 
> Hi Devs,
> I am trying to compile VPP 20.05 on centos 8.2 machine. Using the following 
> steps for compilation:
> 
> make wipe-release
> make install-dep
> make install-ext-deps
> make build-release
> make pkg-rpm
> 
> 
>  
> I am getting the following error while creating rpm package from make pkg-rpm 
> command:
>  
>  make[2]: Leaving directory 
> '/opt/vpp/build-root/rpmbuild/vpp-20.05.1/build-root'
> + CFLAGS='-O2 -g -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 
> -Wp,-D_GLIBCXX_ASSERTIONS -fexceptions -fstack-protector-strong 
> -grecord-gcc-switches -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 
> -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -mtune=generic 
> -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection'
> + LDFLAGS='-Wl,-z,relro  -Wl,-z,now 
> -specs=/usr/lib/rpm/redhat/redhat-hardened-ld'
> + /usr/bin/python2 setup.py build '--executable=/usr/bin/python2 -s'
> /usr/bin/python2: can't open file 'setup.py': [Errno 2] No such file or 
> directory
> error: Bad exit status from /var/tmp/rpm-tmp.QsPsls (%build)
> RPM build errors:
> Bad exit status from /var/tmp/rpm-tmp.QsPsls (%build)
> make[1]: *** [Makefile:57: RPM] Error 1
> make[1]: Leaving directory '/opt/vpp/extras/rpm'
> make: *** [Makefile:625: pkg-rpm] Error 2
>  
> We compiled earlier following the same steps, but didn't get the 
> error then. 
> How can I proceed with the compilation ?

RPM packaging has not been maintained for a long time, and we recently decided to 
remove it from CI/CD. I suggest using Ubuntu or Debian; otherwise you will 
likely be on your own.

Also consider using a newer version of VPP; there are 3 releases newer than 
20.05.

— 
Damjan






-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#19810): https://lists.fd.io/g/vpp-dev/message/19810
Mute This Topic: https://lists.fd.io/mt/84250412/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] HQOS in latest VPP

2021-07-15 Thread Damjan Marion via lists.fd.io


> On 15.07.2021., at 18:16, satish amara  wrote:
> 
> Hi,
>    It looks like Hierarchical Queuing (HQoS) is not supported in the latest 
> VPP release. The last release where I see it is 20.01. Any plans to support it 
> again in a future VPP? I don't see the config commands nor the hqos code in 
> the latest VPP release.

No, currently no plans AFAIK.

— 
Damjan


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#19794): https://lists.fd.io/g/vpp-dev/message/19794
Mute This Topic: https://lists.fd.io/mt/84229266/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] Buffer chains and pre-data area

2021-07-15 Thread Damjan Marion via lists.fd.io


> On 15.07.2021., at 18:53, jerome.bay...@student.uliege.be wrote:
> 
> Dear vpp-dev,
> 
> I'm trying to do some IPv6 in IPv6 encapsulation with no tunnel configuration.
> 
> The objective is to encapsulate the received packet in an other IPv6 packet 
> that will also "contain" a Hop-by-hop extension header. In summary, the 
> structure of the final packet will look like this : Outer-IP6-Header -> 
> Hop-by-hop-extension-header -> Original packet.
> 
> The main concern here is that the size of the outer IP6 header + the size of 
> the extension header > 128 bytes sometimes. When it arrives, I cannot write 
> my data inside the buffer pre-data area because it has a size of 128 bytes. I 
> already asked for solutions previously and I was advised to either increase 
> the size of the pre-data area by recompiling VPP or create a new buffer for 
> my data and then chain it to the original one. I was able to create a buffer 
> chain that seemed to work perfectly fine.
> 
> However, when I tried to perform some performance tests I was quite 
> disappointed by the results : the buffer allocation for each packet is not 
> efficient at all. My question is then : Is there any way to increase the 
> performances ? To allocate buffers, I use the function "vlib_buffer_alloc" 
> defined in "buffer_funcs.h" but is it the right function to use ?

I’m quite sure vlib_buffer_alloc() can allocate buffers very fast. Hopefully 
you are not calling that function for one buffer at a time...
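
If it helps, a minimal sketch of batched allocation (standard vlib API; note 
that the return value can be less than requested):

u32 buffers[VLIB_FRAME_SIZE];
u32 n_alloc = vlib_buffer_alloc (vm, buffers, VLIB_FRAME_SIZE);
if (PREDICT_FALSE (n_alloc < VLIB_FRAME_SIZE))
  {
    /* under buffer pressure the pool may return fewer buffers;
       free them back (or proceed with the smaller batch) */
    vlib_buffer_free (vm, buffers, n_alloc);
  }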

> 
> In my case, the best option would be to have more space available in the 
> buffer's pre-data area but VPP does not seem to be built in a way that allows 
> easy modifications of the "PRE_DATA_SIZE" value. Am I right or is there any 
> "clean" method to change this value ?

PRE_DATA_SIZE is a compile-time constant for a good reason: making it 
runtime-configurable would decrease the performance of almost every dataplane component.

— 
Damjan
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#19792): https://lists.fd.io/g/vpp-dev/message/19792
Mute This Topic: https://lists.fd.io/mt/84230132/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] VPP on different Linux Platforms

2021-07-15 Thread Damjan Marion via lists.fd.io


> On 15.07.2021., at 16:51, satish amara  wrote:
> 
> [Edited Message Follows]
> 
> Thanks, I am trying to understand whether the dependencies of VPP are based on the 
> OS kernel/Linux flavor (CentOS, RedHat, Ubuntu), and which versions of the Linux 
> kernel it can run on. 

On pretty much any OS kernel/Linux flavour which has one of the kernel versions 
described in my previous e-mail.

> Downloading and Installing VPP — The Vector Packet Processor 20.01 
> documentation (fd.io),  
> The above link talks about version 7 of CentOS. No documentation about Ubuntu 
> and other Linux flavors.

Doc is outdated….

> If there are dependencies on Linux Kernel what are they?

Can you give me one or a few examples of such a dependency, just to understand your 
question?

> Can I compile the VPP code on CentOS and run the code on RedHat.  

Yes, you likely can. We recently decided to remove CentOS from our CI/CD due to 
the lack of interested
parties to maintain CentOS support, so things may be broken.


> Do I need to compile and run code on the same flavour of OS? 

In theory no, but your life will likely be easier if you do so….

— 
Damjan
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#19786): https://lists.fd.io/g/vpp-dev/message/19786
Mute This Topic: https://lists.fd.io/mt/84184116/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] Prefetches improvement for VPP Arm generic image

2021-07-14 Thread Damjan Marion via lists.fd.io

I spent a bit of time looking at this and trying to come up with a reasonable solution.

First, the 128-byte cacheline is not dead; the recently announced Marvell Octeon 10 
has a 128-byte cacheline.

In the current code, the cacheline size defines both the amount of data a prefetch 
instruction prefetches and the alignment of data structures needed to avoid false sharing.

So I think ideally we should have the following:

- on x86:
  - number of bytes a prefetch instruction prefetches set to 64
  - data structures aligned to 64 bytes
  - given the adjacent-cacheline prefetcher on x86, it may be worth
    investigating whether aligning to 128 brings some value

- on AArch64
  - number of bytes a prefetch instruction prefetches set to 64 or 128, based on 
    the multiarch variant running
  - data structures aligned to 128 bytes, as that value prevents false sharing 
    for both 64- and 128-byte cacheline systems

The main problem is the abuse of the CLIB_PREFETCH() macro in our codebase.
The original idea was good: somebody wanted to provide a macro which transparently 
emits 1-4 prefetch instructions based on data size, recognising that there may be 
systems with different cacheline sizes.

Like:
  CLIB_PREFETCH (p, sizeof (ip6_header_t), LOAD);

But reality is, most of the time we have:
  CLIB_PREFETCH (p, CLIB_CACHE_LINE_BYTES, LOAD);

Where it is assumed that the cacheline size is 64, which just wastes resources on 
systems with a 128-byte cacheline.

Also, most places in our codebase are perfectly fine with whatever the cacheline 
size is, so I’m thinking about the following:

1. set CLIB_CACHE_LINE_BYTES to 64 on x86 and 128 on Arm; that will make sure 
false sharing is not happening

2. introduce CLIB_CACHE_PREFETCH_BYTES, which can be set to a different value for 
each multiarch variant (64 for N1, 128 for ThunderX2)

3. modify the CLIB_PREFETCH macro to use CLIB_CACHE_PREFETCH_BYTES to emit the proper 
number of prefetch instructions for cases where the data size is specified

4. go through the codebase and replace all `CLIB_PREFETCH (p, CLIB_CACHE_LINE_BYTES, 
LOAD);` with `clib_prefetch_load (p);`.
   There may be exceptions, but those lines typically mean: "I want to prefetch a 
few (<=64) bytes at this address and I really don’t care what the cache line 
size is”.

5. analyse the remaining few cases where CLIB_PREFETCH() is used with a size 
specified in terms of CLIB_CACHE_LINE_BYTES.
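
For illustration, (2) and (3) could look roughly like this (a sketch only; 
CLIB_CACHE_PREFETCH_BYTES per multiarch variant is assumed, it is not existing code):

static_always_inline void
clib_prefetch_load_range (void *p, u32 n_bytes)
{
  /* one prefetch per prefetch-line covering n_bytes, instead of
     hardcoding a 64-byte assumption into every call site */
  for (u32 off = 0; off < n_bytes; off += CLIB_CACHE_PREFETCH_BYTES)
    __builtin_prefetch ((u8 *) p + off, /* read */ 0, /* locality */ 3);
}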

Thoughts?

— 
Damjan

> On 06.07.2021., at 03:48, Lijian Zhang  wrote:
> 
> Thanks Damjan for your comments. Some replies in lines.
> 
> Hi Lijian,
>  
> It would be good to know whether the 128-byte cacheline is something Arm platforms 
> will be using in the future or whether it is just historical.
> [Lijian] Existing ThunderX1 and OcteonTX2 CPUs have 128-byte cache-lines. To my 
> knowledge, there may be more CPUs with 128-byte cache-lines in the future.
>  
> The cacheline size problem is not just about prefetching; an even bigger issue is 
> false sharing, so we need to address both.
> [Lijian] Yes, there may be a false-sharing issue when running a VPP image with 
> a 64B definition on 128B cache-line CPUs. We will do some scalability testing 
> for that case and check the multi-core performance.
>  
> Probably the best solution is to have 2 VPP images, one for 128- and one for 
> 64-byte cacheline size.
> [Lijian] For a natively built image, that’s fine. But I’m not sure if it’s 
> possible for cloud binaries installed via “apt-get install”.
>  
> Going across the whole codebase and replacing prefetch macros is something we 
> should definitely avoid.
> [Lijian] I got your concerns about a large-scope replacement. My concern is that 
> when CLIB_PREFETCH() is used to prefetch packet content into the cache, as in the 
> example below, the cache-line (CLIB_CACHE_LINE_BYTES) seems to always be assumed 
> to be 64 bytes.
> CLIB_PREFETCH (p2->data, 3 * CLIB_CACHE_LINE_BYTES, LOAD);
>  
> — 
> Damjan
> 
> 
> On 05.07.2021., at 07:28, Lijian Zhang  wrote:
>  
> Hi Damjan,
> I committed several patches to address some issues around cache-line 
> definitions in VPP.
>  
> Patch [1.1] is to resolve the build error [2] on 64Byte cache line Arm CPUs, 
> e.g., ThunderX2, NeoverseN1, caused by the commit 
> (https://gerrit.fd.io/r/c/vpp/+/32996, build: remove unused files and 
> sections).
> It also supports building Arm generic image (with command of “make 
> build-release”) with 128Byte cache line definition, and building native image 
> with 64Byte cache line definition on some Arm CPUs, e.g., ThunderX2, 
> NeoverseN1 (with command of “make build-release TARGET_PLATFORM=native”).
>  
> Patch [1.5] is to set the default cache line definition in Arm generic image 
> from 128Byte to 64Byte.
> Setting cache line definition to 128Byte for Arm generic image is required 
> for ThunderX1 (with 128Byte physical cache line), which is also the build 
> machine in FD.io lab. I’m thinking for setting 64Byte cache line definition 
> in VPP for Arm image, which will affect ThunderX1 and OcteonTX2 CPUs. So it 
> requires the confirmation by Marvell.
>  
> Arm architecture CPUs 

Re: [EXTERNAL] [vpp-dev] Multi-threading locks and synchronization

2021-07-13 Thread Damjan Marion via lists.fd.io


> On 13.07.2021., at 18:41, satish amara  wrote:
> 
> Sync is needed. It's a question about the design of packet flow in  VPP. 
> Locks can be avoided if the packets in a flow are processed by the same 
> thread.  

You can use handoff to make sure all packets belonging to a specific flow or 
session end up on the same thread.

You can use bihash to store both the thread_index and a per-thread flow/session index 
in the hash result. Bihash has per-bucket locks, so it is safe to use a single hash 
table from different workers.

After the lookup you can simply compare the thread_index from the lookup result with 
the current thread index. If they are different, you simply hand the packet off to 
the other thread; if they are the same, you continue processing the packet on the 
same thread.

After that you can build all your data structures per-thread and avoid 
locking or atomics.
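
A rough sketch of that pattern (assuming a bihash_8_8 flow table whose value packs 
the owning thread index into the upper 32 bits; names are illustrative):

clib_bihash_kv_8_8_t kv = { .key = flow_hash };
if (clib_bihash_search_inline_8_8 (&flow_table, &kv) == 0)
  {
    if ((u32) (kv.value >> 32) != vlib_get_thread_index ())
      {
        /* not our flow: hand the buffer off to the owning thread,
           e.g. with vlib_buffer_enqueue_to_thread() */
      }
    else
      {
        /* our flow: the per-thread session index is in the low 32 bits,
           process it lock-free on this thread */
      }
  }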

— 
Damjan



-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#19761): https://lists.fd.io/g/vpp-dev/message/19761
Mute This Topic: https://lists.fd.io/mt/84186832/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] VPP on different Linux Platforms

2021-07-13 Thread Damjan Marion via lists.fd.io


> On 13.07.2021., at 19:40, satish amara  wrote:
> 
> Hi,
>   Currently, the VPP code can be compiled only on RedHat, CentOS, and Ubuntu. 
> Can I compile the VPP code on other Linux flavors? I see it's hardcoded in 
> the makefile. I am trying to understand whether, by changing the Makefile, the VPP 
> code can be compiled on other Linux platforms, or whether there is a dependency 
> on specific Linux flavors.

Assuming that you have all dependencies installed on your system, you should be 
able to compile vpp. Worst case, you can invoke cmake directly or just use the 
experimental ./configure script.

— 
Damjan


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#19760): https://lists.fd.io/g/vpp-dev/message/19760
Mute This Topic: https://lists.fd.io/mt/84184116/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] Multi-threading locks and synchronization

2021-07-12 Thread Damjan Marion via lists.fd.io

> On 11.07.2021., at 17:10, satish amara wrote:
> 
> [Edited Message Follows]
> 
> Hi,
>I have a few questions about how synchronization is being done when there 
> are multiple workers/threads accessing the same data structure.
> For example, IPsec headers have a seq number that gets incremented.  
> If we have IPsec flow and encrypting packets on VPP do we assume that packets 
> in the same flow go to the same core for encryption?

Yes, we hand off all packets to the owning thread. For fat flows we have the crypto 
scheduler, which offloads crypto operations to multiple cores, 
but ordering is still maintained by the owning thread.

>  
> IP lookup will find the adjacency and store it in the opaque field, then send 
> the packet to another node for rewrite. What will happen if the interface is 
> removed from the forwarding table before the packet gets processed in the next 
> node? Lots of info is stored in opaque fields for a couple of features. How does 
> the code make sure changes in the config data are taken care of while packets 
> are still being processed by intermediate nodes in the graph? 

Interfaces are created/deleted under the barrier, so there are no packets in 
flight.

— 
Damjan


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#19745): https://lists.fd.io/g/vpp-dev/message/19745
Mute This Topic: https://lists.fd.io/mt/84132489/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] Prefetches improvement for VPP Arm generic image

2021-07-05 Thread Damjan Marion via lists.fd.io
Hi Lijian,

It would be good to know whether the 128-byte cacheline is something Arm platforms 
will be using in the future or whether it is just historical.

The cacheline size problem is not just about prefetching; an even bigger issue is 
false sharing, so we need to address both.
Probably the best solution is to have 2 VPP images, one for 128- and one for 
64-byte cacheline size.

Going across the whole codebase and replacing prefetch macros is something we 
should definitely avoid.

— 
Damjan

> On 05.07.2021., at 07:28, Lijian Zhang  wrote:
> 
> Hi Damjan,
> I committed several patches to address some issues around cache-line 
> definitions in VPP.
>
> Patch [1.1] is to resolve the build error [2] on 64Byte cache line Arm CPUs, 
> e.g., ThunderX2, NeoverseN1, caused by the commit 
> (https://gerrit.fd.io/r/c/vpp/+/32996 , 
> build: remove unused files and sections).
> It also supports building Arm generic image (with command of “make 
> build-release”) with 128Byte cache line definition, and building native image 
> with 64Byte cache line definition on some Arm CPUs, e.g., ThunderX2, 
> NeoverseN1 (with command of “make build-release TARGET_PLATFORM=native”).
>
> Patch [1.5] is to set the default cache line definition in Arm generic image 
> from 128Byte to 64Byte.
> Setting cache line definition to 128Byte for Arm generic image is required 
> for ThunderX1 (with 128Byte physical cache line), which is also the build 
> machine in FD.io  lab. I’m thinking for setting 64Byte cache 
> line definition in VPP for Arm image, which will affect ThunderX1 and 
> OcteonTX2 CPUs. So it requires the confirmation by Marvell.
>
> Arm architecture CPUs have 128Byte or 64Byte physical designs. So no matter 
> whether the cache line definition is 128Byte or 64Byte in the VPP source code, the 
> prefetch functions in a generic image will not work properly on all Arm CPUs. 
> Patches [1.2] [1.3] [1.4] resolve the issue.
>
> For example when running Arm generic image (cache-line-size is defined as 
> 128B in Makefile for all Arm architectures) on 64Byte cache-line-size CPUs, 
> e.g., Neoverse-N1, Ampere altra, ThunderX2.
>
> [3] shows the prefetch macro definitions in VPP. Using CLIB_PREFETCH(), you 
> can prefetch data that resides in multiple cache lines.
> [4] shows some usage examples of the prefetch macros in VPP. When running Arm 
> generic image (128B cache-line-size definition) on 64B cache-line CPUs (N1SDP 
> for example), 4.2, 4.3 and 4.4 have issues.
> For 4.2, the input for the size parameter is 68. On N1SDP with 64B 
> cache-line-size, two prefetch instructions should be executed, but because 
> 68 is less than CLIB_CACHE_LINE_BYTES (the 128Byte definition in VPP), only 
> the first prefetch instruction is executed.
> For 4.3, if sizeof (ip0[0]) equals 68 or any other values larger than 64B, 
> there will be the same issue as 4.2.
> For 4.4, the code is trying to prefetch the first 128B of packet content. It 
> assumes CLIB_CACHE_LINE_BYTES is always 64B. In the Arm generic image, the input 
> for the size parameter is 256B, which will execute prefetches on unexpected 
> cache-lines (expected prefetches on 64B-0 and 64B-1, but actually on 64B-0 
> and 64B-2).
> Packet content: [64B-0][64B-1][64B-2][64B-3]
>
> Our proposal is introduce a macro CLIB_N_CACHELINE_BYTES via VPP multi-arch 
> feature (check patch [1.2]), to reflect the runtime CPU cache-line-size in 
> Arm generic image, so that the prefetch instructions can be executed 
> correctly.
> Then for 4.4, we will need to modify the parameter for size, from 
> 2*CLIB_CACHE_LINE_BYTES to 128B, to reflect the actual intention.
>
> Some additional macros [1.3] can be added for users to do prefetch based on 
> number of cache-lines, besides number of bytes.
>
> Could you please suggest on the issue and proposal?
>
> [1]. Patches
> [1.1] build: support 128B/64B cache line size in Arm image, 
> https://gerrit.fd.io/r/c/vpp/+/32968/2 
> 
> [1.2] vppinfra: refactor prefetch macro, 
> https://gerrit.fd.io/r/c/vpp/+/32969/3 
> 
> [1.3] vppinfra: fix functions to prefetch single line, 
> https://gerrit.fd.io/r/c/vpp/+/32970/2 
> 
> [1.4] misc: correct prefetch macro usage, 
> https://gerrit.fd.io/r/c/vpp/+/32971/3 
> 
> [1.5] build: set 64B cache line size in Arm image, 
> https://gerrit.fd.io/r/c/vpp/+/32972/2 
> 
>
> [2]. Error message
> src/plugins/dpdk/device/init.c:1916:3: error: static_assert failed due to 
> requirement '128 == 1 << 6' "DPDK RTE CACHE LINE SIZE does not match with 
> 1<   STATIC_ASSERT (RTE_CACHE_LINE_SIZE == 1 << CLIB_LOG2_CACHE_LINE_BYTES,
>   ^~
> /home/lijian/tasks/plsremove/src/vppinfra/error_bootstrap.h:111:34: note: 
> expanded from macro 'STATIC_ASSERT'
> #define 

Re: [vpp-dev] heap sizes

2021-07-01 Thread Damjan Marion via lists.fd.io


> On 01.07.2021., at 16:12, Matthew Smith  wrote:
> 
> 
> 
> On Thu, Jul 1, 2021 at 6:36 AM Damjan Marion  wrote:
> 
> 
> > On 01.07.2021., at 11:12, Benoit Ganne (bganne) via lists.fd.io 
> >  wrote:
> > 
> >> Yes, allowing dynamic heap growth sounds like it could be better.
> >> Alternatively... if memory allocations could fail and something more
> >> graceful than VPP exiting could occur, that may also be better. E.g. if
> >> I'm adding a route and try to allocate a counter for it and that fails, it
> >> would be better to refuse to add the route than to exit and take the
> >> network down.
> >> 
> >> I realize that neither of those options is easy to do btw. I'm just trying
> >> to figure out how to make it easier and more forgiving for users to set up
> >> their configuration without making them learn about various memory
> >> parameters.
> > 
> > Understood, but setting a very high default will just make users of smaller 
> > config puzzled too  and I think changing all memory allocation callsites 
> > to check for NULL would be a big paradigm change in VPP.
> > That's why I think a dynamically growing heap might be better but I do not 
> > really know what would be the complexity.
> > That said, you can probably change the default in your own build and that 
> > should work.
> > 
> 
> Fully agree with Benoit. We should not increase the heap-size default value.
> 
> Things are actually a bit more complicated. For performance reasons people 
> should use 
> hugepages whenever they are available, but they are also not the default.
> When hugepages are used, all pages are immediately backed with physical memory.
> 
> So different use cases require different heap configurations, and the end user 
> needs to tune that.
> The same applies to other things like the stats segment page size, which again 
> may impact forwarding
> performance significantly.
> 
> If messing with startup.conf is too complicated for the end user, some nice 
> configuration script may be helpful.
> Or just throw a few startup.confs into extras/startup_configs.
> 
> Dynamic heap is possible, but not straightforward, as in some places we use 
> offsets
> from the start of the heap, so additional allocations cannot live just anywhere.
> Also it will not help in some cases, e.g. when a 1G hugepage is used for the 
> heap, growing up to 2G
> will fail if a 2nd 1G page is not pre-allocated.
> 
> 
> Sorry for not being clear. I was not advocating any change to defaults in VPP 
> code in gerrit. I was trying to figure out the impact of changing the default 
> value written in startup.conf by the management plane I work on. And also 
> have a conversation on whether there are ways that it could be made easier to 
> tune memory parameters correctly. 

ok, so let me try to answer your original questions:

> It's my understanding that when you set the size of the main heap or the stat 
> segment in startup.conf, the size you specify is used to set up virtual 
> address space and the system does not actually allocate that full amount of 
> memory to VPP. I think when VPP tries to read/write addresses within the 
> address space, then memory is requested from the system to back the chunk of 
> address space containing the address being accessed. Is my understanding 
> correct(ish)?

The heap-size parameter defines the size of the memory mapping created for the heap. 
With normal 4K pages the mapping is not backed by physical memory. Instead, the first 
time you try to access a specific page the CPU will generate a page fault, and the 
kernel will handle it by allocating a 4K chunk of physical memory to back that 
specific virtual address and setting up the MMU mapping for that page.

In VPP we don’t have the reverse process: even if all memory allocations which use a 
specific 4K page are freed, that 4K page will not be returned to the kernel, as the 
kernel simply doesn’t know that the page is not in use anymore.
A solution would be to somehow track the number of memory allocations sharing a 
single 4K page and call the madvise() system call when the last one is freed...
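
A sketch of that idea (standard Linux madvise(); the hard part, tracking when a 
page becomes fully free, is not shown, and page_addr is assumed page-aligned):

#include <sys/mman.h>

/* once no live allocation touches this 4K page anymore, let the kernel
   reclaim the physical memory backing it; the mapping stays valid and a
   fresh zero page is faulted back in on the next touch */
if (madvise (page_addr, 4096, MADV_DONTNEED) != 0)
  clib_unix_warning ("madvise");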

If you are using hugepages, all virtual memory is immediately backed by 
physical memory, so VPP with a 32G hugepage heap will use 32G of physical 
memory as long as VPP is running.

If you do `show memory main-heap` you will actually see how many physical pages 
are allocated:

vpp# show memory main-heap
Thread 0 vpp_main
  base 0x7f6f95c9f000, size 1g, locked, unmap-on-destroy, name 'main heap'
page stats: page-size 4K, total 262144, mapped 50702, not-mapped 211442
  numa 1: 50702 pages, 198.05m bytes
total: 1023.99M, used: 115.51M, free: 908.49M, trimmable: 905.75M


From this you can see that the heap is using 4K pages, 262144 total, of which 50702 
are mapped to physical memory.
All 50702 pages are using memory on numa node 1.

So effectively VPP is using around 198 MB of physical memory for the heap while 
real heap usage is only 115 MB.
Such a big difference is mainly caused by one place in our code which temporarily 
allocates ~200 MB of memory for a 
temporary vector. 

Re: [vpp-dev] VPP on a Bluefield-2 smartNIC

2021-07-01 Thread Damjan Marion via lists.fd.io


> On 01.07.2021., at 07:35, Pierre Louis Aublin  
> wrote:
> 
> diff --git a/build/external/packages/ipsec-mb.mk 
> b/build/external/packages/ipsec-mb.mk
> index d0bd2af19..119eb5219 100644
> --- a/build/external/packages/ipsec-mb.mk
> +++ b/build/external/packages/ipsec-mb.mk
> @@ -34,7 +34,7 @@ define  ipsec-mb_build_cmds
>   SAFE_DATA=n \
>   PREFIX=$(ipsec-mb_install_dir) \
>   NASM=$(ipsec-mb_install_dir)/bin/nasm \
> - EXTRA_CFLAGS="-g -msse4.2" > $(ipsec-mb_build_log)
> + EXTRA_CFLAGS="-g" > $(ipsec-mb_build_log)

Why do you need this change?

If I get it right, Bluefield uses Arm CPUs, and we don’t compile the Intel ipsecmb 
lib on Arm.

$ git grep ARCH_X86_64 build/external/Makefile
build/external/Makefile:ARCH_X86_64=$(filter x86_64,$(shell uname -m))
build/external/Makefile:install: $(if $(ARCH_X86_64), nasm-install 
ipsec-mb-install) dpdk-install rdma-core-install quicly-install libbpf-install
build/external/Makefile:config: $(if $(ARCH_X86_64), nasm-config 
ipsec-mb-config) dpdk-config rdma-core-config quicly-build

— 
Damjan
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#19682): https://lists.fd.io/g/vpp-dev/message/19682
Mute This Topic: https://lists.fd.io/mt/83910198/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] VPP on a Bluefield-2 smartNIC

2021-07-01 Thread Damjan Marion via lists.fd.io

Might be worth trying our native driver (rdma) instead of using dpdk…..
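
From memory (the exact syntax may differ between VPP versions), something like

  create interface rdma host-if enp94s0f0 name rdma-0

in the CLI (or a startup.conf exec script) should bring the ConnectX port up 
without going through the mlx5 DPDK PMD; the interface name here is an example.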

— 
Damjan


> On 01.07.2021., at 11:07, Pierre Louis Aublin  
> wrote:
> 
> The"Unsupported PCI device 0x15b3:0xa2d6 found at PCI address :03:00.0" 
> message disappears; however the network interface still doesn't show up. 
> Interestingly, vpp on the host also prints this message, yet the interface 
> can be used.
> 
> By any chance, would you have any clue on what I could try to further debug 
> this issue?
> 
> Best
> Pierre Louis
> 
> On 2021/07/01 17:50, Benoit Ganne (bganne) via lists.fd.io wrote:
>> Please try https://gerrit.fd.io/r/c/vpp/+/32965 and reports if it works.
>> 
>> Best
>> ben
>> 
>>> -Original Message-
>>> From: vpp-dev@lists.fd.io  On Behalf Of Pierre Louis
>>> Aublin
>>> Sent: jeudi 1 juillet 2021 07:36
>>> To: vpp-dev@lists.fd.io
>>> Subject: [vpp-dev] VPP on a Bluefield-2 smartNIC
>>> 
>>> Dear VPP developers
>>> 
>>> I would like to run VPP on the Bluefield-2 smartNIC, but even though I
>>> managed to compile it the interface doesn't show up inside the CLI. By
>>> any chance, would you know how to compile and configure vpp for this
>>> device?
>>> 
>>> I am using VPP v21.06-rc2 and did the following modifications so that it
>>> can compile:
>>> ```
>>> diff --git a/build/external/packages/dpdk.mk
>>> b/build/external/packages/dpdk.mk
>>> index c7eb0fc3f..31a5c764e 100644
>>> --- a/build/external/packages/dpdk.mk
>>> +++ b/build/external/packages/dpdk.mk
>>> @@ -15,8 +15,8 @@ DPDK_PKTMBUF_HEADROOM?= 128
>>>   DPDK_USE_LIBBSD  ?= n
>>>   DPDK_DEBUG   ?= n
>>>   DPDK_MLX4_PMD?= n
>>> -DPDK_MLX5_PMD?= n
>>> -DPDK_MLX5_COMMON_PMD ?= n
>>> +DPDK_MLX5_PMD?= y
>>> +DPDK_MLX5_COMMON_PMD ?= y
>>>   DPDK_TAP_PMD ?= n
>>>   DPDK_FAILSAFE_PMD?= n
>>>   DPDK_MACHINE ?= default
>>> diff --git a/build/external/packages/ipsec-mb.mk
>>> b/build/external/packages/ipsec-mb.mk
>>> index d0bd2af19..119eb5219 100644
>>> --- a/build/external/packages/ipsec-mb.mk
>>> +++ b/build/external/packages/ipsec-mb.mk
>>> @@ -34,7 +34,7 @@ define  ipsec-mb_build_cmds
>>>SAFE_DATA=n \
>>>PREFIX=$(ipsec-mb_install_dir) \
>>>NASM=$(ipsec-mb_install_dir)/bin/nasm \
>>> - EXTRA_CFLAGS="-g -msse4.2" > $(ipsec-mb_build_log)
>>> + EXTRA_CFLAGS="-g" > $(ipsec-mb_build_log)
>>>   endef
>>> 
>>>   define  ipsec-mb_install_cmds
>>> ```
>>> 
>>> 
>>> However, when running the VPP CLI, the network interface does not show up:
>>> ```
>>> $ sudo -E make run
>>> clib_sysfs_prealloc_hugepages:261: pre-allocating 6 additional 2048K
>>> hugepages on numa node 0
>>> dpdk   [warn  ]: Unsupported PCI device 0x15b3:0xa2d6 found
>>> at PCI address :03:00.0
>>> 
>>> dpdk/cryptodev [warn  ]: dpdk_cryptodev_init: Failed to configure
>>> cryptodev
>>> vat-plug/load  [error ]: vat_plugin_register: oddbuf plugin not
>>> loaded...
>>>     _______    _        _   _____  ___
>>>  __/ __/ _ \  (_)__    | | / / _ \/ _ \
>>>  _/ _// // / / / _ \   | |/ / ___/ ___/
>>>  /_/ /____(_)_/\___/   |___/_/  /_/
>>> 
>>> DBGvpp# show int
>>>Name   IdxState  MTU
>>> (L3/IP4/IP6/MPLS) Counter  Count
>>> local00 down 0/0/0/0
>>> DBGvpp# sh hard
>>>NameIdx   Link  Hardware
>>> local0 0down  local0
>>>Link speed: unknown
>>>local
>>> ```
>>> 
>>> 
>>> The dpdk-testpmd application seems to start correctly though:
>>> ```
>>> $ sudo ./build-root/install-vpp_debug-native/external/bin/dpdk-testpmd
>>> -l 0-2 -a :03:00.00 -- -i --nb-cores=2 --nb-ports=1
>>> --total-num-mbufs=2048
>>> EAL: Detected 8 lcore(s)
>>> EAL: Detected 1 NUMA nodes
>>> EAL: Detected static linkage of DPDK
>>> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
>>> EAL: Selected IOVA mode 'VA'
>>> EAL: No available 32768 kB hugepages reported
>>> EAL: No available 64 kB hugepages reported
>>> EAL: No available 1048576 kB hugepages reported
>>> EAL: Probing VFIO support...
>>> EAL: VFIO support initialized
>>> EAL:   Invalid NUMA socket, default to 0
>>> EAL: Probe PCI driver: mlx5_pci (15b3:a2d6) device: :03:00.0 (socket
>>> 0)
>>> mlx5_pci: Failed to allocate Tx DevX UAR (BF)
>>> mlx5_pci: Failed to allocate Rx DevX UAR (BF)
>>> mlx5_pci: Size 0x is not power of 2, will be aligned to 0x1.
>>> Interactive-mode selected
>>> testpmd: create a new mbuf pool : n=2048, size=2176, socket=0
>>> testpmd: preferred mempool ops selected: ring_mp_mc
>>> 
>>> Warning! port-topology=paired and odd forward ports number, the last
>>> port will pair with itself.
>>> 
>>> Configuring Port 0 (socket 0)
>>> Port 0: 0C:42:A1:A4:89:B4
>>> Checking link statuses...
>>> Done
>>> testpmd>
>>> ```
>>> 
>>> Is the problem related to the failure to 

Re: [vpp-dev] heap sizes

2021-07-01 Thread Damjan Marion via lists.fd.io


> On 01.07.2021., at 11:12, Benoit Ganne (bganne) via lists.fd.io 
>  wrote:
> 
>> Yes, allowing dynamic heap growth sounds like it could be better.
>> Alternatively... if memory allocations could fail and something more
>> graceful than VPP exiting could occur, that may also be better. E.g. if
>> I'm adding a route and try to allocate a counter for it and that fails, it
>> would be better to refuse to add the route than to exit and take the
>> network down.
>> 
>> I realize that neither of those options is easy to do btw. I'm just trying
>> to figure out how to make it easier and more forgiving for users to set up
>> their configuration without making them learn about various memory
>> parameters.
> 
> Understood, but setting a very high default will just make users of smaller 
> config puzzled too  and I think changing all memory allocation callsites to 
> check for NULL would be a big paradigm change in VPP.
> That's why I think a dynamically growing heap might be better but I do not 
> really know what would be the complexity.
> That said, you can probably change the default in your own build and that 
> should work.
> 

Fully agree with Benoit. We should not increase the heap-size default value.

Things are actually a bit more complicated. For performance reasons people 
should use 
hugepages whenever they are available, but they are also not the default.
When hugepages are used, all pages are immediately backed with physical memory.

So different use cases require different heap configurations, and the end user needs 
to tune that.
The same applies to other things like the stats segment page size, which again may 
impact forwarding
performance significantly.

If messing with startup.conf is too complicated for the end user, some nice 
configuration script may be helpful.
Or just throw a few startup.confs into extras/startup_configs.

Dynamic heap is possible, but not straightforward, as in some places we use 
offsets
from the start of the heap, so additional allocations cannot live just anywhere.
Also it will not help in some cases, e.g. when a 1G hugepage is used for the heap, 
growing up to 2G
will fail if a 2nd 1G page is not pre-allocated.

— 
Damjan


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#19680): https://lists.fd.io/g/vpp-dev/message/19680
Mute This Topic: https://lists.fd.io/mt/83856384/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] vpp in a non-previleged mode

2021-06-18 Thread Damjan Marion via lists.fd.io

I’m asking because in VPP we also have native drivers for some NICs and 
paravirtualized devices, and those drivers work in non-privileged mode.

— 
Damjan

On 18.06.2021., at 09:12, Venumadhav Josyula  wrote:
> 
> 
> Hi Damjan,
> 
> We need dpdk, the reason being that packets from the NICs (poll mode) need 
> to come into our packet processing software (GTPU). So we wanted to use dpdk for 
> that. Now we want to know how to run vpp in a 
> non-privileged pod.
> 
> Now we have questions
> i) is it possible ?
> ii) if yes how ?
> Now we were looking at the link below for examples, but no luck... their 
> non-privileged runs of vpp + dpdk had some problems.
> https://github.com/cncf/cnf-testbed/issues/291
> 
> Hence i am trying to check in the community.
> 
> Thanks,
> Regards,
> Venu
> 
> 
>> On Fri, 18 Jun 2021 at 12:20, Damjan Marion  wrote:
>> 
>> 
>> Why do you need dpdk?
>> 
>> — 
>> Damjan
>> 
 On 18.06.2021., at 06:47, Venumadhav Josyula  wrote:
 
>>> 
>>> Hi Christian,
>>> 
>>> Can you please share the exact steps please ?
>>> 
>>> Thanks,
>>> Regards,
>>> Venu
>>> 
 On Thu, 17 Jun 2021 at 21:25, Christian Hopps  wrote:
 
 "Venumadhav Josyula"  writes:
 
 > Hi All,
 >
 > Can you run vpp + dpdk in non-privileged mode ? This vpp running
 > inside pod as a cnf
 
 I did this at one point, IIRC I had to disable some small bit of code 
 in the dpdk_early_init that required root, but as this code was only 
 required to do something directly with the HW later, it wasn't needed in 
 the container/virtual case.
 
 Thanks,
 Chris.
 
 >
 > Thanks,
 > Regards,
 > Venu
 >
 >
 >
 > 
 
>>> 
>>> 
>>> 
> 
> 
> 

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#19607): https://lists.fd.io/g/vpp-dev/message/19607
Mute This Topic: https://lists.fd.io/mt/83551481/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] vpp in a non-previleged mode

2021-06-18 Thread Damjan Marion via lists.fd.io


Why do you need dpdk?

— 
Damjan

> On 18.06.2021., at 06:47, Venumadhav Josyula  wrote:
> 
> 
> Hi Christian,
> 
> Can you please share the exact steps please ?
> 
> Thanks,
> Regards,
> Venu
> 
>> On Thu, 17 Jun 2021 at 21:25, Christian Hopps  wrote:
>> 
>> "Venumadhav Josyula"  writes:
>> 
>> > Hi All,
>> >
>> > Can you run vpp + dpdk in non-privileged mode ? This vpp running
>> > inside pod as a cnf
>> 
>> I did this at one point, IIRC I had to disable some small bit of code 
>> in the dpdk_early_init that required root, but as this code was only 
>> required to do something directly with the HW later, it wasn't needed in the 
>> container/virtual case.
>> 
>> Thanks,
>> Chris.
>> 
>> >
>> > Thanks,
>> > Regards,
>> > Venu
>> >
>> >
>> >
>> > 
>> 
> 
> 
> 

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#19605): https://lists.fd.io/g/vpp-dev/message/19605
Mute This Topic: https://lists.fd.io/mt/83551481/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] clib_mask_compare_u16_x64 has asan issue

2021-06-17 Thread Damjan Marion via lists.fd.io

Yes, for performance reasons it is written as it is; you can ask ASan to 
ignore it.
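
One way to do that (a sketch, assuming GCC/Clang; the attribute tells ASan not to 
instrument the function, so the deliberate over-read goes unreported; the wrapper 
name is hypothetical):

__attribute__ ((no_sanitize_address)) static void
mask_compare_u16_no_asan (u16 v, u16 *a, u64 *mask, u32 n_elts)
{
  /* clib_mask_compare_u16 intentionally loads full vectors; this is safe
     as long as the arrays come from allocations padded to a vector width */
  clib_mask_compare_u16 (v, a, mask, n_elts);
}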

— 
Damjan

> On 17.06.2021., at 11:28, jiangxiaom...@outlook.com wrote:
> 
> Hi Damjan Marion,
> 
>  vector function: clib_mask_compare_u16_x64 has ASAN Issue,
> 
> clib_mask_compare_u16_x64 (u16 v, u16 *a, u32 n_elts)
> {
>   ...
>   u16x32u *av = (u16x32u *) a;
>   ...
>
>   x = i8x32_pack (v16 == av[0], v16 == av[1]); <-  av[0] will read 64 
> bytes, but a[0] only has 2 bytes
> 
> This function will lead to a session node crash if ASAN is enabled
> 
> =
>
> Program received signal SIGSEGV, Segmentation fault.
> [Switching to Thread 0x7ff96f54d700 (LWP 113687)]
> 0x773de5c1 in __asan::FakeStack::AddrIsInFakeStack(unsigned long, 
> unsigned long*, unsigned long*) () from 
> /home/dev/code/net-base/dist/script/test/../../lib/libasan.so.5
> Missing separate debuginfos, use: debuginfo-install 
> libgcc-4.8.5-44.el7.x86_64 libstdc++-4.8.5-44.el7.x86_64 
> libuuid-2.23.2-65.el7_9.1.x86_64 mbedtls-2.7.17-1.el7.x86_64 
> pkcs11-helper-1.11-3.el7.x86_64
> (gdb) bt
> #0  0x773de5c1 in __asan::FakeStack::AddrIsInFakeStack(unsigned long, 
> unsigned long*, unsigned long*) () from 
> /home/dev/code/net-base/dist/script/test/../../lib/libasan.so.5
> #1  0x774c5a11 in 
> __asan::ThreadStackContainsAddress(__sanitizer::ThreadContextBase*, void*) () 
> from /home/dev/code/net-base/dist/script/test/../../lib/libasan.so.5
> #2  0x774dfdc2 in 
> __sanitizer::ThreadRegistry::FindThreadContextLocked(bool 
> (*)(__sanitizer::ThreadContextBase*, void*), void*) () from 
> /home/dev/code/net-base/dist/script/test/../../lib/libasan.so.5
> #3  0x774c6e5a in __asan::FindThreadByStackAddress(unsigned long) () 
> from /home/dev/code/net-base/dist/script/test/../../lib/libasan.so.5
> #4  0x773d8fb6 in __asan::GetStackAddressInformation(unsigned long, 
> unsigned long, __asan::StackAddressDescription*) () from 
> /home/dev/code/net-base/dist/script/test/../../lib/libasan.so.5
> #5  0x773da3f9 in 
> __asan::AddressDescription::AddressDescription(unsigned long, unsigned long, 
> bool) () from /home/dev/code/net-base/dist/script/test/../../lib/libasan.so.5
> #6  0x773dce51 in __asan::ErrorGeneric::ErrorGeneric(unsigned int, 
> unsigned long, unsigned long, unsigned long, unsigned long, bool, unsigned 
> long) () from /home/dev/code/net-base/dist/script/test/../../lib/libasan.so.5
> #7  0x774c0c2a in __asan::ReportGenericError(unsigned long, unsigned 
> long, unsigned long, unsigned long, bool, unsigned long, unsigned int, bool) 
> () from /home/dev/code/net-base/dist/script/test/../../lib/libasan.so.5
> #8  0x774c2194 in __asan_report_load_n () from 
> /home/dev/code/net-base/dist/script/test/../../lib/libasan.so.5
> #9  0x741c34c5 in clib_mask_compare_u16_x64 (v=2, a=0x7fffd38cb980, 
> n_elts=1) at 
> /home/dev/code/net-base/.vpp-21.06-rc2/src/vppinfra/vector_funcs.h:24
> #10 0x741c374c in clib_mask_compare_u16 (v=2, a=0x7fffd38cb980, 
> mask=0x7ff96ecf5310, n_elts=1) at 
> /home/dev/code/net-base/.vpp-21.06-rc2/src/vppinfra/vector_funcs.h:79
> #11 0x741c3b7b in enqueue_one (vm=0x7fffd1c73080, 
> node=0x7fffd2d21040, used_elt_bmp=0x7ff96ecf5440, next_index=2, 
> buffers=0x7fffd1d3b2d0, nexts=0x7fffd38cb980, n_buffers=1, n_left=1, 
> tmp=0x7ff96ecf5480) at 
> /home/dev/code/net-base/.vpp-21.06-rc2/src/vlib/buffer_funcs.c:30
> #12 0x741fe451 in vlib_buffer_enqueue_to_next_fn_hsw 
> (vm=0x7fffd1c73080, node=0x7fffd2d21040, buffers=0x7fffd1d3b2d0, 
> nexts=0x7fffd38cb980, count=1) at 
> /home/dev/code/net-base/.vpp-21.06-rc2/src/vlib/buffer_funcs.c:110
> #13 0x75aff172 in vlib_buffer_enqueue_to_next (vm=0x7fffd1c73080, 
> node=0x7fffd2d21040, buffers=0x7fffd1d3b2d0, nexts=0x7fffd38cb980, count=1) 
> at /home/dev/code/net-base/.vpp-21.06-rc2/src/vlib/buffer_node.h:344
> #14 0x75b16b0a in session_flush_pending_tx_buffers 
> (wrk=0x7fffd4d1ad40, node=0x7fffd2d21040) at 
> /home/dev/code/net-base/.vpp-21.06-rc2/src/vnet/session/session_node.c:1626
> #15 0x75b1a3db in session_queue_node_fn (vm=0x7fffd1c73080, 
> node=0x7fffd2d21040, frame=0x0) at 
> /home/dev/code/net-base/.vpp-21.06-rc2/src/vnet/session/session_node.c:1793
> #16 0x740a1bfb in dispatch_node (vm=0x7fffd1c73080, 
> node=0x7fffd2d21040, type=VLIB_NODE_TYPE_INPUT, 
> dispatch_state=VLIB_NODE_STATE_POLLING, frame=0x0, 
> last_time_stamp=101201619637438) at 
> /home/dev/code/net-base/.vpp-21.06-rc2/src/vlib/main.c:1024
> #17 0x740a6aef in vlib_main_or_worker_loop (vm=0x7fffd1c73080, 
> is_main=0) at /home/dev/code/net-base/.vpp-21.06-rc2/src/vlib/main.c:1618
> #18 0x740a8713 in vlib_worker_loop (vm=0x7fffd1c73080) at 
> /home/dev/code/net-base/.vpp-21.06-rc2/src/vlib/main.c:1783
> #19 0x7413f573 in vlib_worker_thread_fn (arg=0x7fffd685c500) at 
> 

Re: [vpp-dev] Proposed removal of Centos-8 jobs from master

2021-06-07 Thread Damjan Marion via lists.fd.io


> On 07.06.2021., at 17:24, Dave Wallace  wrote:
> 
> Folks,
> 
> The RPM builds have been unmaintained for a couple years now and the CentOS-8 
> jobs have become the long pole in the CI verification cycle as well as 
> costing time to maintain the builds due to changes upstream.
> 
> I am proposing that the vpp-*-master-centos-8-* jobs be removed from the CI 
> until someone steps up to invest in removal of the technical debt that has 
> accumulated in the RPM build infrastructure.  
> 
> If no one steps up to maintain the RPM build infrastructure, then I would 
> also recommend that it be removed from the VPP build infrastructure as well.
> 
> Damjan, can you please add this to the agenda for tomorrow's VPP Monthly 
> Community meeting?

Fully agree. Will add….

— 
Damjan



-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#19535): https://lists.fd.io/g/vpp-dev/message/19535
Mute This Topic: https://lists.fd.io/mt/83372643/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] What are the available atomic operations in VPP #vpp-dev #vpp #counters

2021-06-07 Thread Damjan Marion via lists.fd.io

VPP is just a C app. So you can use standard C atomics…

Have you tried something like “git grep atomic” in the vpp repo?
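
For instance, the Linux helpers quoted below map directly onto the GCC/Clang 
__atomic builtins that vppinfra's wrappers use under the hood (a sketch, not a 
VPP-specific API):

#include <stdint.h>

/* atomically add i to *v and return the new value,
   equivalent to Linux atomic64_add_return() */
static inline int64_t
atomic64_add_return (int64_t i, int64_t *v)
{
  return __atomic_add_fetch (v, i, __ATOMIC_SEQ_CST);
}

/* atomically subtract i from *v and return the new value */
static inline int64_t
atomic64_sub_return (int64_t i, int64_t *v)
{
  return __atomic_sub_fetch (v, i, __ATOMIC_SEQ_CST);
}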

— 
Damjan

> On 05.06.2021., at 19:39, Mohanty, Chandan (Nokia - IN/Bangalore) 
>  wrote:
> 
> VPP experts,
>  Any pointers on this will be much appreciated and will help someone learning VPP.
> 
> 
> I wanted to know whether, similar to Linux, atomic operations are available in VPP.
> The intention is to use atomic operations in place of locking.
> 
> Linux:
> long atomic64_add_return(int i, atomic64_t *v) Atomically add i to v and 
> return the result
> long atomic64_sub_return(int i, atomic64_t *v) Atomically subtract i from v 
> and return the result
> 
> int atomic_sub_return(int i, atomic_t *v) Atomically subtract i from v and 
> return the result
> int atomic_add_return(int i, atomic_t *v) Atomically add i to v and return 
> the result.
> 
> VPP:
> ??
> -
> 
> Regards
> Chandan 
> 
> 


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#19533): https://lists.fd.io/g/vpp-dev/message/19533
Mute This Topic: https://lists.fd.io/mt/83277899/21656
Mute #vpp:https://lists.fd.io/g/vpp-dev/mutehashtag/vpp
Mute #counters:https://lists.fd.io/g/vpp-dev/mutehashtag/counters
Mute #vpp-dev:https://lists.fd.io/g/vpp-dev/mutehashtag/vpp-dev
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] unformat_vnet_uri not implemented following RFC 3986

2021-05-27 Thread Damjan Marion via lists.fd.io

The same RFC specifies that for IPv6, square brackets should be used to distinguish 
between the address and the port:

 A host identified by an Internet Protocol literal address, version 6
   [RFC3513] or later, is distinguished by enclosing the IP literal
   within square brackets ("[" and "]").  This is the only place where
   square bracket characters are allowed in the URI syntax. 
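
So an RFC 3986 compliant parser would accept, for example:

  tcp://10.0.0.1:500
  tcp://[2001:db8::1]:500

where the brackets make the IPv6 address/port split unambiguous.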
— 
Damjan



> On 27.05.2021., at 13:00, Dave Barach  wrote:
> 
> IIRC it's exactly because ipv6 addresses use ':' (and "::") as chunk 
> separators. If you decide to change unformat_vnet_uri please test ipv6 cases 
> carefully.
> 
> D.
> 
> -Original Message-
> From: vpp-dev@lists.fd.io  On Behalf Of Florin Coras
> Sent: Thursday, May 27, 2021 1:05 AM
> To: 江 晓明 
> Cc: vpp-dev@lists.fd.io
> Subject: Re: [vpp-dev] unformat_vnet_uri not implemented following RFC 3986
> 
> Hi, 
> 
> That unformat function and the associated session layer apis (e.g., 
> vnet_connect_uri) are mainly used for testing and their production use is 
> discouraged. Provided that functionality is not lost, if anybody wants to do 
> the work, I don’t see why we wouldn’t want to make the unformat function rfc 
> compliant. At this point I can’t remember why we settled on the use of “/“ 
> but I suspect it may have to do with easier parsing of ipv6 ips. 
> 
> Regards,
> Florin
> 
>> On May 26, 2021, at 8:04 PM, jiangxiaom...@outlook.com wrote:
>> 
>> Hi Florin:
>> Currently unformat_vnet_uri is not implemented following RFC 3986. The
>> syntax `tcp://10.0.0.1/500` should be `tcp://10.0.0.1:500` in RFC 3986.
>> I noticed there is a comment for `unformat_vnet_uri` in
>> `src/vnet/session/application_interface.c`:
>> ```
>> /**
>> * unformat a vnet URI
>> *
>> * transport-proto://[hostname]ip46-addr:port
>> * eg.  tcp://ip46-addr:port
>> *  tls://[testtsl.fd.io]ip46-addr:port
>> *
>> ...
>> ```
>> Does it mean `unformat_vnet_uri` will be refactored following the RFC in the future?
>> 
>> 
>> 
> 
> 
> 
> 
> 





Re: [vpp-dev] IPv6 in IPv6 Encapsulation

2021-05-21 Thread Damjan Marion via lists.fd.io


> On 21.05.2021., at 17:14, Neale Ranns  wrote:
> 
> Right, there’s only so much space available. You’ll need to recompile VPP to 
> get more space.
> Change the PRE_DATA_SIZE value in src/vlib/CMakeLists.txt.

Changing makefiles is bad. cmake allows specifying custom values with -D; alternatively, running ccmake in the build dir (or with the path to the build dir) lets people modify build arguments using the UI.
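For illustration, from an already configured build directory (the value shown is just an example):

```
cmake -DPRE_DATA_SIZE=256 .   # override the cache variable and re-generate
ccmake .                      # or browse and edit cache variables interactively
```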

— 
Damjan





Re: [vpp-dev] Run VPP as non root user

2021-05-20 Thread Damjan Marion via lists.fd.io


> On 20.05.2021., at 12:26, ashish.sax...@hsc.com wrote:
> 
> Hi All,
> We are using VPP version 21.01 on our setup. We are able to run vpp as the root
> user, but get the following error while running VPP as a non-root user:
> $ vppctl
> clib_socket_init: connect (fd 3, '/run/vpp/cli.sock'): Permission denied
> 
> Can you please let us know how we can run VPP on our machine as a non-root
> user?

Configuring VPP to store cli.sock somewhere the user has write permissions may
help.
Hint: look at the startup.conf file…
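For illustration, a sketch of the relevant startup.conf section (the group name "vpp" is an assumption; use a group your non-root user belongs to):

```
unix {
  # CLI socket path; pick a user-writable location, or keep the
  # default and grant access to the group set below
  cli-listen /run/vpp/cli.sock
  gid vpp
}
```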

— 
Damjan



Re: [vpp-dev] Fail to build vpp locally

2021-05-20 Thread Damjan Marion via lists.fd.io

Yes, you need the Intel ipsecmb library, version 1.0.

We should handle this kind of situation better: display a warning and turn the
feature off instead of failing the build.


Do “make install-ext-dep”….

— 
Damjan


> On 20.05.2021., at 11:46, liuyacan  wrote:
> 
> Hi, All
> 
> I checked out the latest code and tried to build, but encountered the
> following error. Do I need to install/update any package?
> I'm sure it was OK before.
>
> VPP version : 21.06-rc0~755-g785458895
> VPP library version : 21.06
> GIT toplevel dir: /home/liuyacan/vpp_gerrit/vpp
> Build type  : debug
> C flags :
> Linker flags (apps) :
> Linker flags (libs) :
> Host processor  : x86_64
> Target processor: x86_64
> Prefix path : 
> /opt/vpp/external/x86_64;/home/liuyacan/vpp_gerrit/vpp/build-root/install-vpp_debug-native/external
> Install prefix  : 
> /home/liuyacan/vpp_gerrit/vpp/build-root/install-vpp_debug-native/vpp
> -- Configuring done
> -- Generating done
> -- Build files have been written to: 
> /home/liuyacan/vpp_gerrit/vpp/build-root/build-vpp_debug-native/vpp
>  Building vpp in 
> /home/liuyacan/vpp_gerrit/vpp/build-root/build-vpp_debug-native/vpp 
> [1620/2492] Building C object 
> CMakeFiles/plugins/crypto_ipsecmb/CMakeFiles/crypto_ipsecmb_plugin.dir/ipsecmb.c.o
> FAILED: 
> CMakeFiles/plugins/crypto_ipsecmb/CMakeFiles/crypto_ipsecmb_plugin.dir/ipsecmb.c.o
> ccache /usr/bin/clang-9 --target=x86_64-linux-gnu 
> -Dcrypto_ipsecmb_plugin_EXPORTS -I/home/liuyacan/vpp_gerrit/vpp/src 
> -ICMakeFiles -I/home/liuyacan/vpp_gerrit/vpp/src/plugins -ICMakeFiles/plugins 
> -I/opt/vpp/external/x86_64/include -fPIC   -g -fPIC -Werror -Wall 
> -Wno-address-of-packed-member -O0 -fstack-protector -fno-common -DCLIB_DEBUG 
> -march=corei7 -mtune=corei7-avx -fvisibility=hidden -ffunction-sections 
> -fdata-sections -march=silvermont -maes -MD -MT 
> CMakeFiles/plugins/crypto_ipsecmb/CMakeFiles/crypto_ipsecmb_plugin.dir/ipsecmb.c.o
>  -MF 
> CMakeFiles/plugins/crypto_ipsecmb/CMakeFiles/crypto_ipsecmb_plugin.dir/ipsecmb.c.o.d
>  -o 
> CMakeFiles/plugins/crypto_ipsecmb/CMakeFiles/crypto_ipsecmb_plugin.dir/ipsecmb.c.o
>-c /home/liuyacan/vpp_gerrit/vpp/src/plugins/crypto_ipsecmb/ipsecmb.c
> /home/liuyacan/vpp_gerrit/vpp/src/plugins/crypto_ipsecmb/ipsecmb.c:459:5: 
> error: unknown type name 'IMB_CIPHER_DIRECTION'; did you mean 
> 'JOB_CIPHER_DIRECTION'?
>  IMB_CIPHER_DIRECTION dir)
>  ^~~~
>  JOB_CIPHER_DIRECTION
> /opt/vpp/external/x86_64/include/intel-ipsec-mb.h:280:3: note: 
> 'JOB_CIPHER_DIRECTION' declared here
> } JOB_CIPHER_DIRECTION;
>   ^
> /home/liuyacan/vpp_gerrit/vpp/src/plugins/crypto_ipsecmb/ipsecmb.c:548:6: 
> error: unknown type name 'IMB_CIPHER_DIRECTION'; did you mean 
> 'JOB_CIPHER_DIRECTION'?
>  IMB_CIPHER_DIRECTION dir)
>  ^~~~
>  JOB_CIPHER_DIRECTION
> /opt/vpp/external/x86_64/include/intel-ipsec-mb.h:280:3: note: 
> 'JOB_CIPHER_DIRECTION' declared here
> } JOB_CIPHER_DIRECTION;
>   ^
> /home/liuyacan/vpp_gerrit/vpp/src/plugins/crypto_ipsecmb/ipsecmb.c:563:42: 
> error: variable has incomplete type 'struct chacha20_poly1305_context_data'
>   struct chacha20_poly1305_context_data ctx;
> ^
> /home/liuyacan/vpp_gerrit/vpp/src/plugins/crypto_ipsecmb/ipsecmb.c:563:11: 
> note: forward declaration of 'struct chacha20_poly1305_context_data'
>   struct chacha20_poly1305_context_data ctx;
>  ^
> /home/liuyacan/vpp_gerrit/vpp/src/plugins/crypto_ipsecmb/ipsecmb.c:586:4: 
> error: implicit declaration of function 'IMB_CHACHA20_POLY1305_INIT' is 
> invalid in C99 [-Werror,-Wimplicit-function-declaration]
>   IMB_CHACHA20_POLY1305_INIT (m, key, &ctx, op->iv, op->aad,
>   ^
> /home/liuyacan/vpp_gerrit/vpp/src/plugins/crypto_ipsecmb/ipsecmb.c:592:8: 
> error: implicit declaration of function 'IMB_CHACHA20_POLY1305_ENC_UPDATE' is 
> invalid in C99 [-Werror,-Wimplicit-function-declaration]
>   IMB_CHACHA20_POLY1305_ENC_UPDATE (m, key, &ctx, chp->dst,
>   ^
> /home/liuyacan/vpp_gerrit/vpp/src/plugins/crypto_ipsecmb/ipsecmb.c:592:8: 
> note: did you mean 'IMB_CHACHA20_POLY1305_INIT'?
> /home/liuyacan/vpp_gerrit/vpp/src/plugins/crypto_ipsecmb/ipsecmb.c:586:4: 
> note: 'IMB_CHACHA20_POLY1305_INIT' declared here
>   IMB_CHACHA20_POLY1305_INIT (m, key, &ctx, op->iv, op->aad,
>   ^
> /home/liuyacan/vpp_gerrit/vpp/src/plugins/crypto_ipsecmb/ipsecmb.c:597:4: 
> error: implicit declaration of function 'IMB_CHACHA20_POLY1305_ENC_FINALIZE' 
> is invalid in C99 [-Werror,-Wimplicit-function-declaration]
>   IMB_CHACHA20_POLY1305_ENC_FINALIZE (m, &ctx, op->tag, op->tag_len);
>   ^
> /home/liuyacan/vpp_gerrit/vpp/src/plugins/crypto_ipsecmb/ipsecmb.c:597:4: 

Re: [vpp-dev] [DPDK] AF_XDP PMD

2021-05-18 Thread Damjan Marion via lists.fd.io

Our preference is to have native drivers; that is what we already did for many
devices.
DPDK is not modular, so each time you want to use some DPDK feature you end up
with a suboptimal solution.
A notable exception is DPDK crypto, which allows use of crypto PMDs without
being forced to use mbufs, etc...

I’m not very familiar with the DPDK AF_XDP PMD, but my guess is that if it is
not working, it is likely because it needs to do something special with buffer
memory (i.e. register regions through AF_XDP APIs) and that memory is handled
by VPP.



> On 18.05.2021., at 11:25, Catalin Vasile  wrote:
> 
> So shouldn't this hack for also using the vpp buffer manager be generally 
> usable with all PMDs?
> From: Damjan Marion 
> Sent: Tuesday, May 18, 2021 1:17
> To: Catalin Vasile 
> Cc: vpp-dev@lists.fd.io 
> Subject: Re: [vpp-dev] [DPDK] AF_XDP PMD
>  
> 
> No, dpdk PMDs are also using vpp buffer manager, we are cheating a bit by 
> registering fake mempool.
> 
> — 
> Damjan
> 
> > On 17.05.2021., at 23:40, Catalin Vasile  wrote:
> > 





Re: [vpp-dev] [DPDK] AF_XDP PMD

2021-05-17 Thread Damjan Marion via lists.fd.io

No, dpdk PMDs are also using vpp buffer manager, we are cheating a bit by 
registering fake mempool.

— 
Damjan

> On 17.05.2021., at 23:40, Catalin Vasile  wrote:
> 




Re: [vpp-dev] [DPDK] AF_XDP PMD

2021-05-14 Thread Damjan Marion via lists.fd.io
Probably nobody tried, and knowing that VPP doesn’t use the DPDK buffer manager,
I would be very surprised if it works.

What is wrong with using VPP native AF_XDP?
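For illustration, creating a native AF_XDP interface looks roughly like this (the host interface, name, and queue count are assumptions):

```
create interface af_xdp host-if eth0 name xdp0 num-rx-queues 2
set interface state xdp0 up
```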

— 
Damjan

> On 15.05.2021., at 01:26, Catalin Vasile  wrote:
> 
> 
> Hi,
> 
> I know VPP has an AF_XDP plugin, but I'm trying to use the AF_XDP PMD driver 
> from DPDK.
> It's not clear to me: does VPP have a way to use the DPDK AF_XDP PMD driver? 
> I tried looking through the code, but I'm not sure yet.
> 
> 
> 




Re: [vpp-dev] vlib_buffer_clone behavior when trying to send to two interfaces

2021-04-14 Thread Damjan Marion via lists.fd.io


> On 14.04.2021., at 17:44, David Gohberg  wrote:
> 
> using DPDK with Mellanox MT27800 [ConnectX-5] (100G)

It might be worth trying the native rdma driver.

> show buffers indeed shows a leak:
> 
> Pool Name       Index  NUMA  Size  Data Size  Total   Avail   Cached  Used
> default-numa-0  0      0     2496  2048       430185  430185  0       0
> default-numa-1  1      1     2496  2048       430185  71913   101     358171
> 
> this keeps increasing until reaching 100% used


I would suspect your code first for buffer leaks…..

— 
Damjan



Re: [vpp-dev] vlib_buffer_clone behavior when trying to send to two interfaces

2021-04-14 Thread Damjan Marion via lists.fd.io


> On 14.04.2021., at 17:15, David Gohberg  wrote:
> 
> When testing Damjan's version under traffic (using trex with a few thousand
> pps), about 5 minutes into the test vlib_buffer_clone fails to create 2
> copies and vpp crashes due to a double-free error.
> It looks like the buffer pool leaks.

Are you using DPDK? What kind of NIC?

> 1. Is it the node function's responsibility to vlib_buffer_free the cloned
> buffer?

No, the driver should do that….

> 2. How can I inspect the buffer pool memory usage?

show buffers

> I used the 'show memory' cli commands and didn't notice any thread memory 
> increase. 

Buffer memory is pre-allocated on startup and is not taken from the main heap,
so “show memory” will not give you any relevant data.

— 
Damjan






Re: [vpp-dev] TX Queue Placement

2021-04-14 Thread Damjan Marion via lists.fd.io

The general rule is that increasing those values reduces the number of rx drops
caused by VPP being de-scheduled or busy, or by traffic bursts, but it also
degrades performance.

I think the defaults we have set are reasonable, and you should leave them
unless you are experiencing issues…
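Should tuning still be needed, descriptor counts are set per device in the dpdk section of startup.conf; a sketch (PCI address and values are assumptions):

```
dpdk {
  dev 0000:02:00.0 {
    num-rx-desc 1024
    num-tx-desc 1024
  }
}
```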

— 
Damjan


> On 14.04.2021., at 13:45, Marcos - Mgiga  wrote:
> 
> Hello Damjan,
>
> Thank you for clarifying...
>
> I also have a question about the num-rx-desc and num-tx-desc parameters;
> hope you don't mind discussing it in this e-mail.
>
> I would like to understand what values fit my environment best; do you have
> any thoughts about it?
>
> Best Regards
> From: vpp-dev@lists.fd.io On Behalf Of Damjan Marion via lists.fd.io
> Sent: Wednesday, 14 April 2021 08:24
> To: Marcos - Mgiga <mar...@mgiga.com.br>
> Cc: vpp-dev@lists.fd.io
> Subject: Re: [vpp-dev] TX Queue Placement
>
>
> 
> 
> On 14.04.2021., at 13:21, Marcos - Mgiga <mar...@mgiga.com.br> wrote:
>
> Hello,
>
> I increased VPP rx/tx queues in order to enable RSS on VPP instance. Since 
> VPP is running on a NUMA system with two threads with 8 cores each, I would 
> like to pin TX / RX queue to proper NUMA nodes.
>
> Using set interface rx-placement I was able to associate rx queue to desired 
> cores, so I would like to know if is there any possibility to pin tx queue to 
> a certain workers as well.
>
>
> Not at the moment. tx queues are statically mapped (0 to main thread, 1 to 
> worker 0, 2 to worker 1, etc.).
> There are some plans to implement such capability…..
>
>
> 
> 





Re: [vpp-dev] TX Queue Placement

2021-04-14 Thread Damjan Marion via lists.fd.io


> On 14.04.2021., at 13:21, Marcos - Mgiga  wrote:
> 
> Hello,
>  
> I increased VPP rx/tx queues in order to enable RSS on VPP instance. Since 
> VPP is running on a NUMA system with two threads with 8 cores each, I would 
> like to pin TX / RX queue to proper NUMA nodes.
>  
> Using set interface rx-placement I was able to associate rx queue to desired 
> cores, so I would like to know if is there any possibility to pin tx queue to 
> a certain workers as well.
>  

Not at the moment. TX queues are statically mapped (queue 0 to the main thread,
queue 1 to worker 0, queue 2 to worker 1, etc.).
There are plans to implement such a capability…..






Re: [vpp-dev] How to enable Mellanox compilation in VPP 21.01

2021-04-14 Thread Damjan Marion via lists.fd.io


> On 14.04.2021., at 10:07, chetan bhasin  wrote:
> 
> Hi,
> 
> I have to do the following to enable Mellanox compilation under VPP21.01 with 
> dpdk 20.11.
> 
> If I don't comment out the rdma-core dependencies, it will lead to undefined
> symbols.
> 
> Can anybody please confirm whether this is the right way to do it?

The right way to do it is to use the rdma plugin instead of dpdk...
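For illustration, a native rdma interface is created roughly like this (host interface and name are assumptions):

```
create interface rdma host-if enp94s0f0 name rdma-0
set interface state rdma-0 up
```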

— 
Damjan






Re: [vpp-dev] vlib_buffer_clone behavior when trying to send to two interfaces

2021-04-12 Thread Damjan Marion via lists.fd.io


> On 12.04.2021., at 12:50, David Gohberg  wrote:
> 
> Damjan,
> 
> After looking at the vlib_buffer_clone_256 function I realize that it 
> modifies the original buffer pointer, like you said.
> my packets are coming in down a custom node path (originating from an asic 
> data plane), so they will always have the l2 header.
> The node that performs the cloning is the last stop before packets get sent 
> to the hardware interface.
> Is there an "elegant" way to always get a buffer that points to the start of 
> the packet data, regardless of vlan tags and other encapsulations? 

See the VNET_BUFFER_F_L2_HDR_OFFSET_VALID flag in b->flags.
If that flag is set, then vnet_buffer (b)->l2_hdr_offset tells you where the L2
header starts.
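For illustration, a minimal sketch inside a node function, assuming b0 is the vlib_buffer_t * in question:

```c
/* Rewind current_data to the recorded start of the L2 header,
 * but only when the offset is known to be valid. */
if (b0->flags & VNET_BUFFER_F_L2_HDR_OFFSET_VALID)
  vlib_buffer_advance (b0, vnet_buffer (b0)->l2_hdr_offset - b0->current_data);
```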

— 
Damjan





Re: [vpp-dev] vlib_buffer_clone behavior when trying to send to two interfaces

2021-04-12 Thread Damjan Marion via lists.fd.io


> On 12.04.2021., at 10:34, David Gohberg  wrote:
> 
> [Edited Message Follows]
> 
> moving the buffer backwards by 14 looks correct for small packets, but for 
> 1500 byte packets I get `truncated-ip - 14 bytes missing` error from tcpdump. 
> After the clone I'm restoring the original offset:
> vlib_buffer_advance (b0, -14);
> u16 n_cloned = vlib_buffer_clone (vm, bi0, (u32 *) &cbi0, 2,
> VLIB_BUFFER_CLONE_HEAD_SIZE);
> vlib_buffer_advance (b0, 14);



If you look into my explanation of how cloning works, you can see that using
b0 after the clone is a bad idea: b0 points to a buffer which is not the head
buffer anymore.

b0 = vlib_get_buffer (vm, cbi0[0]);

after cloning may help.

Still, this is an ugly hack. If you are executing this code inside the IP path,
there is no guarantee that an L2 header will be there, as the packet may arrive
from an L3 interface (memif, tun) or from an L3 encapsulation (IPsec).
Also, the packet may be dot1q tagged, so the Ethernet header will not be 14
bytes long.

— 
Damjan



Re: [vpp-dev] Creating qemu VMs using memif instead of vhostuser #vpp-memif #vpp #dpdk #vhost

2021-04-10 Thread Damjan Marion via lists.fd.io


> On 10.04.2021., at 09:14, abhinav.mishra via lists.fd.io 
>  wrote:
> 
> Hi everyone,
> 
> I am wondering if we can create VMs using memifs instead of normal vhostuser 
> interfaces. 
> The normal qemu command for creating virtio interfaces in a VM looks like:
> 
> create vhost-user socket /run/vpp/sock1.sock
> 
> -chardev socket,id=char1,path=/var/run/vpp/sock1.sock,server -netdev 
> type=vhost-user,id=net1,chardev=char1,vhostforce -device 
> virtio-net-pci,mac=,netdev=net1,mrg_rxbuf=off
> 
> Now this works flawlessly, but is there a way to create virtio ifs for VM 
> using memif concept of master and slave ?

No, memif is built for a different purpose and it doesn’t support the VM use case.

— 
Damjan



Re: [vpp-dev] vlib_buffer_clone behavior when trying to send to two interfaces

2021-04-08 Thread Damjan Marion via lists.fd.io

OK, what about something like this:


vnet_hw_interface_t *host_intf = vnet_get_sup_hw_interface (vnm, sw_if_index);
vlib_frame_t *to_frame = vlib_get_frame_to_node (vm, node_index);
vlib_frame_t *host_if_frame =
  vlib_get_frame_to_node (vm, host_intf->tx_node_index);
u32 *to_next = vlib_frame_vector_args (to_frame);
u32 *intf_host_to_next = vlib_frame_vector_args (host_if_frame);

while (n_left_from > 0)
  {
    u32 cbi0[2];
    u16 n_cloned =
      vlib_buffer_clone (vm, from[0], cbi0, 2, VLIB_BUFFER_CLONE_HEAD_SIZE);
    /* one clone index goes to each outgoing frame */
    to_next[0] = cbi0[0];
    intf_host_to_next[0] = cbi0[1];

    // check n_cloned == 2

    from++;
    to_next++;
    intf_host_to_next++;
    n_left_from--;
  }

/* adjust both counts if the n_cloned == 2 check above fails */
host_if_frame->n_vectors = from_frame->n_vectors;
to_frame->n_vectors = from_frame->n_vectors;
vlib_put_frame_to_node (vm, host_intf->tx_node_index, host_if_frame);
vlib_put_frame_to_node (vm, node_index, to_frame);

return from_frame->n_vectors;



> On 08.04.2021., at 15:05, David Gohberg  wrote:
> 
> > why do you need to open host_if_frame for each packet
> 
> If you refer to the fact that I can just create one frame outside the loop, 
> you are correct :).
> I tried to get the code to the simplest working example that will correctly 
> mirror the packets so I started to "dumb down" things :) I'm aware
> that it is not efficient.
> 
> > why do you use vlib_get_frame_to_node instead of simply registering next 
> > indices like 99% of VPP nodes are doing….
> 
> I followed this code example from the documentation:
> https://fdio-vpp.readthedocs.io/en/latest/gettingstarted/developers/vnet.html?highlight=quad#enqueueing-packets-for-lookup-and-transmission
> Is there a better way of doing this? 
> 
> 




