Re: [vpp-dev] how to create a session from thin air?

2020-03-25 Thread Florin Coras
Hi Andreas, 

Understood. Let me know if the next_node_index works for you. 

Regards,
Florin

> On Mar 25, 2020, at 12:39 PM, Andreas Schultz <andreas.schu...@travelping.com> wrote:
> 
> On Wed, Mar 25, 2020 at 18:57, Florin Coras <fcoras.li...@gmail.com> wrote:
> Hi Andreas, 
> 
> You have in the tcp connection next_node_index and next_node_opaque which can 
> be used to determine the next node and some additional info you may want to 
> send to a custom next node from tcp_output. You can initialize those as you 
> may see fit in your custom listen node.
> 
> Thanks, I missed that. 
> 
>  For synacks, we use the tcp-output path, but for syns we use a different path 
> because the connections are not fully established, and I guess this is what 
> you’re asking about lower. So, just to be sure we’re on the same page, are 
> you trying to avoid the fib lookup for syn packets? 
> 
> I don't need it for syns, only for synack. I missed the next_node_index in tc.
>
> If yes, first of all, why (out of curiosity)? Second, this becomes cumbersome 
> both because of the api changes and because we’d like to avoid polling more 
> queues of frames (ip_lookup now) in tcp. We might be able to improve what we 
> have but it’s going to be an involved change. 
> 
> The reason for avoiding fib lookups is that I sometimes need to pass IP 
> frames to UEs before I know in which fib the UE will end up. My session logic 
> does know how to encapsulate the packet, but it needs to get the IP packets 
> to the node without going through the fib lookup.
>
> Thanks,
> Andreas
>
> Regards,
> Florin
> 
>> On Mar 25, 2020, at 9:51 AM, Andreas Schultz <andreas.schu...@travelping.com> wrote:
>> 
>> Hi Florin,
>> 
>> I've rebased my changes to your TCP split patch and it kind of works. The 
>> problem that I've encountered now is that TCP always hands the buffers to 
>> ip[46]_lookup.
>> 
>> Would it be acceptable to modify the application, session and tcp logic so 
>> that each application can inject a function that overrides the next-node 
>> lookup in tcp_enqueue_to_ip_lookup_i and 
>> tcp_flush_frames_to_output?
>> 
>> Thanks
>> Andreas
>> 
>> On Fri, Mar 20, 2020 at 22:05, Florin Coras <fcoras.li...@gmail.com> wrote:
>> Hi Andreas, 
>> 
>> I just posted some comments on the patch. I think you can further reduce the 
>> amount of code you need to copy from tcp/session layer. 
>> 
>> Regards,
>> Florin
>> 
>>> On Mar 20, 2020, at 5:00 AM, Andreas Schultz <andreas.schu...@travelping.com> wrote:
>>> 
>>> Hi Florin,
>>> 
>>> I managed to get it working. I still have to copy more code from tcp_input 
>>> and session_stream_accept than I like, but it works.
>>> 
>>> Could you have a look at 
>>> https://gerrit.fd.io/r/c/vpp/+/15798/9/src/plugins/upf/upf_process.c#90 
>>>  
>>> and let me know what you think?
>>> 
>>> Regards
>>> Andreas
>>> 
>>> On Thu, Mar 19, 2020 at 16:18, Florin Coras <fcoras.li...@gmail.com> wrote:
>>> Hi Andreas, 
>>> 
>>> Probably the best option, at this time, would be to completely avoid using 
>>> session lookup and accept infra because you can’t have a generic listener 
>>> for sessions you intercept. Or you could have a generic 0/0 listener but 
>>> that would also intercept connections meant for local termination. That’s 
>>> not to say you can’t use session tables. 
>>> 
>>> Instead, you can manually create sessions in your custom tcp-listen node 
>>> and 1) do the linking with the tcp connection, i.e., fix the 
>>> session/connection indices and 2) assign the sessions to your app’s worker 
>>> and allocate fifos 3) initialize session state in your app either directly 
>>> or by calling app_worker_accept_notify. Practically this custom node will 
>>> bootstrap tcp, session layer and app state and after that you can let the 
>>> sessions “run normally”. 
>>> 
>>> You probably also want to mark the transport connection (tcp “base class”) 
>>> with TRANSPORT_CONNECTION_F_NO_LOOKUP, to avoid session layer attempts to 
>>> look up the connection in builtin session tables. 
>>> 
>>> Regards,
>>> Florin
>>> 
 On Mar 19, 2020, at 4:54 AM, Andreas Schultz <andreas.schu...@travelping.com> wrote:
 
 Hi Florin,
 
 That patch has helped a bit, but now I'm stuck with session_stream_accept.
 
 Creating an application session without having a listener is quite 
 complex. So far I'm resorting to creating a dummy listener, but it would be 
 cleaner not to use that.
 
 I have tried to create a session without a listener, but it turns out that 
 there are too many dependencies in the app worker and segment manager 
 handling.
 
 Regards
 Andreas
 
 On Tue, Mar 17, 2020 at 20:15, Florin Coras <fcoras.li...@gmail.com> wrote:
 Hi Andreas, 
 
 Is this [1] enough for now? I'll eventually do some additional tcp refactor 
 to make sure we have a generic set of functions that are available for use 
 cases when only parts of tcp are re-used.

Re: [vpp-dev] how to create a session from thin air?

2020-03-25 Thread Andreas Schultz
On Wed, Mar 25, 2020 at 18:57, Florin Coras <
fcoras.li...@gmail.com>:

> Hi Andreas,
>
> You have in the tcp connection next_node_index and next_node_opaque which
> can be used to determine the next node and some additional info you may
> want to send to a custom next node from tcp_output. You can initialize
> those as you may see fit in your custom listen node.
>

Thanks, I missed that.

> For synacks, we use the tcp-output path, but for syns we use a different path
> because the connections are not fully established, and I guess this is what
> you’re asking about lower. So, just to be sure we’re on the same page, are
> you trying to avoid the fib lookup for syn packets?
>

I don't need it for syns, only for synack. I missed the next_node_index in
tc.


> If yes, first of all, why (out of curiosity)? Second, this becomes
> cumbersome both because of the api changes and because we’d like to avoid
> polling more queues of frames (ip_lookup now) in tcp. We might be able to
> improve what we have but it’s going to be an involved change.
>

The reason for avoiding fib lookups is that I sometimes need to pass IP
frames to UEs before I know in which fib the UE will end up. My session
logic does know how to encapsulate the packet, but it needs to get the IP
packets to the node without going through the fib lookup.

Thanks,
Andreas


> Regards,
> Florin
>
> On Mar 25, 2020, at 9:51 AM, Andreas Schultz <
> andreas.schu...@travelping.com> wrote:
>
> Hi Florin,
>
> I've rebased my changes to your TCP split patch and it kind of works. The
> problem that I've encountered now is that TCP always hands the buffers to
> ip[46]_lookup.
>
> Would it be acceptable to modify the application, session and tcp logic so
> that each application can inject a function that overrides the next-node
> lookup in tcp_enqueue_to_ip_lookup_i and
> tcp_flush_frames_to_output?
>
> Thanks
> Andreas
>
> On Fri, Mar 20, 2020 at 22:05, Florin Coras <
> fcoras.li...@gmail.com>:
>
>> Hi Andreas,
>>
>> I just posted some comments on the patch. I think you can further reduce
>> the amount of code you need to copy from tcp/session layer.
>>
>> Regards,
>> Florin
>>
>> On Mar 20, 2020, at 5:00 AM, Andreas Schultz <
>> andreas.schu...@travelping.com> wrote:
>>
>> Hi Florin,
>>
>> I managed to get it working. I still have to copy more code from
>> tcp_input and session_stream_accept than I like, but it works.
>>
>> Could you have a look at
>> https://gerrit.fd.io/r/c/vpp/+/15798/9/src/plugins/upf/upf_process.c#90
>> and let me know what you think?
>>
>> Regards
>> Andreas
>>
>> On Thu, Mar 19, 2020 at 16:18, Florin Coras <
>> fcoras.li...@gmail.com>:
>>
>>> Hi Andreas,
>>>
>>> Probably the best option, at this time, would be to completely avoid
>>> using session lookup and accept infra because you can’t have a generic
>>> listener for sessions you intercept. Or you could have a generic 0/0
>>> listener but that would also intercept connections meant for local
>>> termination. That’s not to say you can’t use session tables.
>>>
>>> Instead, you can manually create sessions in your custom tcp-listen node
>>> and 1) do the linking with the tcp connection, i.e., fix the
>>> session/connection indices and 2) assign the sessions to your app’s worker
>>> and allocate fifos 3) initialize session state in your app either directly
>>> or by calling app_worker_accept_notify. Practically this custom node will
>>> bootstrap tcp, session layer and app state and after that you can let the
>>> sessions “run normally”.
>>>
>>> You probably also want to mark the transport connection (tcp “base
>>> class”) with TRANSPORT_CONNECTION_F_NO_LOOKUP, to avoid session layer
>>> attempts to look up the connection in builtin session tables.
>>>
>>> Regards,
>>> Florin
>>>
>>> On Mar 19, 2020, at 4:54 AM, Andreas Schultz <
>>> andreas.schu...@travelping.com> wrote:
>>>
>>> Hi Florin,
>>>
>>> That patch has helped a bit, but now I'm stuck
>>> with session_stream_accept.
>>>
>>> Creating an application session without having a listener is quite
>>> complex. So far I'm resorting to creating a dummy listener, but it would be
>>> cleaner not to use that.
>>>
>>> I have tried to create a session without a listener, but it turns out
>>> that there are too many dependencies in the app worker and segment manager
>>> handling.
>>>
>>> Regards
>>> Andreas
>>>
>>> On Tue, Mar 17, 2020 at 20:15, Florin Coras <
>>> fcoras.li...@gmail.com>:
>>>
 Hi Andreas,

 Is this [1] enough for now? I'll eventually do some additional tcp
 refactor to make sure we have a generic set of functions that are available
 for use cases when only parts of tcp are re-used.

 Regards,
 Florin

 [1] https://gerrit.fd.io/r/c/vpp/+/25961

 On Mar 17, 2020, at 4:20 AM, Andreas Schultz <
 andreas.schu...@travelping.com> wrote:

 Hi Florin,

 I had a look at how tcp_connection_alloc is used and it looks to me like I 
 would need to replicate almost all of tcp46_listen_inline to actually get 
 the TCP connection setup correctly.

Re: [vpp-dev] how to create a session from thin air?

2020-03-25 Thread Florin Coras
Hi Andreas, 

You have in the tcp connection next_node_index and next_node_opaque which can 
be used to determine the next node and some additional info you may want to 
send to a custom next node from tcp_output. You can initialize those as you may 
see fit in your custom listen node. 
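[Editor's note: the per-connection next-node mechanism described above can be sketched roughly as below. The struct and helpers are simplified stand-ins for illustration only, not the real VPP tcp_connection_t or tcp_output code; only the two fields discussed in this thread are modeled.]

```c
#include <stdint.h>

/* Stand-in for the two tcp_connection_t fields discussed in this thread;
 * the real VPP structure has many more members. */
typedef struct
{
  uint32_t next_node_index;  /* node tcp_output should hand packets to; 0 = default ip lookup */
  uint32_t next_node_opaque; /* per-connection data the custom next node can read */
} tcp_connection_stub_t;

/* Hypothetical helper for a custom listen node: steer this connection's
 * output to our own graph node and stash opaque info for it. */
static void
custom_listen_set_next_node (tcp_connection_stub_t *tc,
                             uint32_t custom_node_index, uint32_t opaque)
{
  tc->next_node_index = custom_node_index;
  tc->next_node_opaque = opaque;
}

/* In a tcp_output-like path, the next node is then chosen per connection
 * instead of unconditionally using ip[46]-lookup. */
static uint32_t
output_next_node (const tcp_connection_stub_t *tc, uint32_t ip_lookup_node)
{
  return tc->next_node_index ? tc->next_node_index : ip_lookup_node;
}
```

The point of the design is that the fib-lookup bypass is decided per connection at listen time, so established-connection output needs no extra branch logic beyond this one field check.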

For synacks, we use the tcp-output path, but for syns we use a different path 
because the connections are not fully established, and I guess this is what 
you’re asking about lower. So, just to be sure we’re on the same page, are you 
trying to avoid the fib lookup for syn packets? 

If yes, first of all, why (out of curiosity)? Second, this becomes cumbersome 
both because of the api changes and because we’d like to avoid polling more 
queues of frames (ip_lookup now) in tcp. We might be able to improve what we 
have but it’s going to be an involved change. 

Regards,
Florin

> On Mar 25, 2020, at 9:51 AM, Andreas Schultz <andreas.schu...@travelping.com> wrote:
> 
> Hi Florin,
> 
> I've rebased my changes to your TCP split patch and it kind of works. The 
> problem that I've encountered now is that TCP always hands the buffers to 
> ip[46]_lookup.
> 
> Would it be acceptable to modify the application, session and tcp logic so 
> that each application can inject a function that overrides the next-node 
> lookup in tcp_enqueue_to_ip_lookup_i and tcp_flush_frames_to_output?
> 
> Thanks
> Andreas
> 
> On Fri, Mar 20, 2020 at 22:05, Florin Coras <fcoras.li...@gmail.com> wrote:
> Hi Andreas, 
> 
> I just posted some comments on the patch. I think you can further reduce the 
> amount of code you need to copy from tcp/session layer. 
> 
> Regards,
> Florin
> 
>> On Mar 20, 2020, at 5:00 AM, Andreas Schultz <andreas.schu...@travelping.com> wrote:
>> 
>> Hi Florin,
>> 
>> I managed to get it working. I still have to copy more code from tcp_input 
>> and session_stream_accept than I like, but it works.
>> 
>> Could you have a look at 
>> https://gerrit.fd.io/r/c/vpp/+/15798/9/src/plugins/upf/upf_process.c#90 
>>  
>> and let me know what you think?
>> 
>> Regards
>> Andreas
>> 
>> On Thu, Mar 19, 2020 at 16:18, Florin Coras <fcoras.li...@gmail.com> wrote:
>> Hi Andreas, 
>> 
>> Probably the best option, at this time, would be to completely avoid using 
>> session lookup and accept infra because you can’t have a generic listener 
>> for sessions you intercept. Or you could have a generic 0/0 listener but 
>> that would also intercept connections meant for local termination. That’s 
>> not to say you can’t use session tables. 
>> 
>> Instead, you can manually create sessions in your custom tcp-listen node and 
>> 1) do the linking with the tcp connection, i.e., fix the session/connection 
>> indices and 2) assign the sessions to your app’s worker and allocate fifos 
>> 3) initialize session state in your app either directly or by calling 
>> app_worker_accept_notify. Practically this custom node will bootstrap tcp, 
>> session layer and app state and after that you can let the sessions “run 
>> normally”. 
>> 
>> You probably also want to mark the transport connection (tcp “base class”) 
>> with TRANSPORT_CONNECTION_F_NO_LOOKUP, to avoid session layer attempts to 
>> look up the connection in builtin session tables. 
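[Editor's note: the three bootstrap steps listed above can be sketched as follows. Every type and function here is a simplified, hypothetical stand-in for VPP's session_t, tcp_connection_t, and app_worker_accept_notify(); only TRANSPORT_CONNECTION_F_NO_LOOKUP's role and the index cross-linking from the thread are modeled.]

```c
#include <stdint.h>

/* Flag named in the thread; the bit position here is arbitrary. */
#define TRANSPORT_CONNECTION_F_NO_LOOKUP (1 << 0)

/* Minimal stand-ins for tcp_connection_t and session_t. */
typedef struct { uint32_t c_index; uint32_t s_index; uint32_t flags; } conn_stub_t;
typedef struct { uint32_t session_index; uint32_t connection_index;
                 int fifos_allocated; int app_notified; } session_stub_t;

/* Hypothetical bootstrap performed by a custom tcp-listen node:
 * 1) fix up the session/connection indices,
 * 2) assign the session to the app worker and allocate fifos,
 * 3) notify the app, as app_worker_accept_notify() would. */
static void
custom_listen_bootstrap (session_stub_t *s, conn_stub_t *tc)
{
  /* 1) link session and connection to each other */
  s->connection_index = tc->c_index;
  tc->s_index = s->session_index;

  /* avoid session-layer lookups in builtin session tables */
  tc->flags |= TRANSPORT_CONNECTION_F_NO_LOOKUP;

  /* 2) stand-in for app-worker assignment and fifo allocation */
  s->fifos_allocated = 1;

  /* 3) stand-in for app_worker_accept_notify() */
  s->app_notified = 1;
}
```

After this bootstrap the session can "run normally", exactly as described above: all later processing finds a fully linked session/connection pair.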
>> 
>> Regards,
>> Florin
>> 
>>> On Mar 19, 2020, at 4:54 AM, Andreas Schultz <andreas.schu...@travelping.com> wrote:
>>> 
>>> Hi Florin,
>>> 
>>> That patch has helped a bit, but now I'm stuck with session_stream_accept.
>>> 
>>> Creating an application session without having a listener is quite complex. 
>>> So far I'm resorting to creating a dummy listener, but it would be cleaner 
>>> not to use that.
>>> 
>>> I have tried to create a session without a listener, but it turns out that 
>>> there are too many dependencies in the app worker and segment manager 
>>> handling.
>>> 
>>> Regards
>>> Andreas
>>> 
>>> On Tue, Mar 17, 2020 at 20:15, Florin Coras <fcoras.li...@gmail.com> wrote:
>>> Hi Andreas, 
>>> 
>>> Is this [1] enough for now? I'll eventually do some additional tcp refactor 
>>> to make sure we have a generic set of functions that are available for use 
>>> cases when only parts of tcp are re-used. 
>>> 
>>> Regards,
>>> Florin
>>> 
>>> [1] https://gerrit.fd.io/r/c/vpp/+/25961 
>>> 
>>> 
 On Mar 17, 2020, at 4:20 AM, Andreas Schultz <andreas.schu...@travelping.com> wrote:
 
 Hi Florin,
 
 I had a look at how tcp_connection_alloc is used and it looks to me like I 
 would need to replicate almost all of tcp46_listen_inline to actually get 
 the TCP connection setup correctly. I was hoping that I could reuse more 
 of the existing code.
 
 Would you be ok with moving much of the body of tcp46_listen_inline into a 
 header file, marking it always inline? That way I could reuse it without 
 having to sync changes back all the time.

Re: [vpp-dev] worker barrier state

2020-03-25 Thread Dave Barach via Lists.Fd.Io
vlib_main_t *vm->main_loop_count.

One trip around the main loop accounts for all per-worker local graph edges / 
acyclic graph behaviors. 

As to the magic number E (not to be confused with e): repeatedly handing off 
packets from thread to thread seems like a bad implementation strategy. The 
packet tracer will tell you how many handoffs are involved in a certain path, 
as will a bit of code inspection.

Neale has some experience with this scenario, maybe he can share some 
thoughts...

HTH... Dave 

-Original Message-
From: Christian Hopps  
Sent: Wednesday, March 25, 2020 1:14 PM
To: Dave Barach (dbarach) 
Cc: Christian Hopps ; dmar...@me.com; vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] worker barrier state

I'm not clear on what you mean by table add/del, but I can give you the 
scenario I'm concerned with.

I have a packet P input and it has some state S associated with it.

The API wants to delete state S. When is it safe?

Say P's arc from input to output contains E edges. Each node on the arc could 
conceivably handoff packet P to another worker for processing. So if I read 
things correctly I need to wait at least E laps until I know for sure that P is 
out of the system, and S is safe to delete.

Q: How do I know what value E is?

I am not in control of all nodes along a P's arc and how they might handoff 
packets, and the graph is not acyclic so I couldn't even use a max value like 
the total number of nodes in the graph for E as the packet may loop back.

Q: Which lap counter am I looking at?

As you point out, each vlib_main_t has its own counter (main_loop_count?) so I 
think I have to record every worker's main_loop_count in the state S and wait 
for every counter to be +E before deleting S.

Thanks for the help!

Chris.

> On Mar 25, 2020, at 12:15 PM, Dave Barach (dbarach)  wrote:
> 
> +1. 
>  
> View any metadata subject to table add/del accidents with suspicion. There is 
> a safe delete paradigm: each vlib_main_t has a “lap counter”.  When deleting 
> table entries: atomically update table entries. Record the lap counter and 
> wait until all worker threads have completed a lap. Then, delete (or 
> pool_put) the underlying data structure.
>  
> Dave
>  
>  
> From: vpp-dev@lists.fd.io  On Behalf Of Damjan Marion 
> via Lists.Fd.Io
> Sent: Wednesday, March 25, 2020 12:10 PM
> To: Christian Hopps 
> Cc: vpp-dev@lists.fd.io
> Subject: Re: [vpp-dev] worker barrier state
>  
> 
> 
> > On 25 Mar 2020, at 16:01, Christian Hopps  wrote:
> > 
> > Is it supposed to be the case that no packets are inflight (*) in the graph 
> > when the worker barrier is held?
> > 
> > I think perhaps MP unsafe API code is assuming this.
> > 
> > I also think that the frame queues used by handoff code violate this 
> > assumption.
> > 
> > Can someone with deep VPP knowledge clarify this for me? :)
> 
> 
> correct, there is a small chance that a frame is enqueued right before a 
> worker hits the barrier…
> 
> — 
> Damjan
> 
> 

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#15874): https://lists.fd.io/g/vpp-dev/message/15874
Mute This Topic: https://lists.fd.io/mt/72542383/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] worker barrier state

2020-03-25 Thread Christian Hopps
I'm not clear on what you mean by table add/del, but I can give you the 
scenario I'm concerned with.

I have a packet P input and it has some state S associated with it.

The API wants to delete state S. When is it safe?

Say P's arc from input to output contains E edges. Each node on the arc could 
conceivably handoff packet P to another worker for processing. So if I read 
things correctly I need to wait at least E laps until I know for sure that P is 
out of the system, and S is safe to delete.

Q: How do I know what value E is?

I am not in control of all nodes along a P's arc and how they might handoff 
packets, and the graph is not acyclic so I couldn't even use a max value like 
the total number of nodes in the graph for E as the packet may loop back.

Q: Which lap counter am I looking at?

As you point out, each vlib_main_t has its own counter (main_loop_count?) so I 
think I have to record every worker's main_loop_count in the state S and wait 
for every counter to be +E before deleting S.

Thanks for the help!

Chris.

> On Mar 25, 2020, at 12:15 PM, Dave Barach (dbarach)  wrote:
> 
> +1. 
>
> View any metadata subject to table add/del accidents with suspicion. There is 
> a safe delete paradigm: each vlib_main_t has a “lap counter”.  When deleting 
> table entries: atomically update table entries. Record the lap counter and 
> wait until all worker threads have completed a lap. Then, delete (or 
> pool_put) the underlying data structure.
>
> Dave
>
>
> From: vpp-dev@lists.fd.io  On Behalf Of Damjan Marion 
> via Lists.Fd.Io
> Sent: Wednesday, March 25, 2020 12:10 PM
> To: Christian Hopps 
> Cc: vpp-dev@lists.fd.io
> Subject: Re: [vpp-dev] worker barrier state
>
> 
> 
> > On 25 Mar 2020, at 16:01, Christian Hopps  wrote:
> > 
> > Is it supposed to be the case that no packets are inflight (*) in the graph 
> > when the worker barrier is held?
> > 
> > I think perhaps MP unsafe API code is assuming this.
> > 
> > I also think that the frame queues used by handoff code violate this 
> > assumption.
> > 
> > Can someone with deep VPP knowledge clarify this for me? :)
> 
> 
> correct, there is a small chance that a frame is enqueued right before a 
> worker hits the barrier…
> 
> — 
> Damjan
> 
> 

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#15873): https://lists.fd.io/g/vpp-dev/message/15873
Mute This Topic: https://lists.fd.io/mt/72542383/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] how to create a session from thin air?

2020-03-25 Thread Andreas Schultz
Hi Florin,

I've rebased my changes to your TCP split patch and it kind of works. The
problem that I've encountered now is that TCP always hands the buffers to
ip[46]_lookup.

Would it be acceptable to modify the application, session and tcp logic so
that each application can inject a function that overrides the next-node
lookup in tcp_enqueue_to_ip_lookup_i and
tcp_flush_frames_to_output?

Thanks
Andreas

On Fri, Mar 20, 2020 at 22:05, Florin Coras <
fcoras.li...@gmail.com>:

> Hi Andreas,
>
> I just posted some comments on the patch. I think you can further reduce
> the amount of code you need to copy from tcp/session layer.
>
> Regards,
> Florin
>
> On Mar 20, 2020, at 5:00 AM, Andreas Schultz <
> andreas.schu...@travelping.com> wrote:
>
> Hi Florin,
>
> I managed to get it working. I still have to copy more code from tcp_input
> and session_stream_accept than I like, but it works.
>
> Could you have a look at
> https://gerrit.fd.io/r/c/vpp/+/15798/9/src/plugins/upf/upf_process.c#90
> and let me know what you think?
>
> Regards
> Andreas
>
> On Thu, Mar 19, 2020 at 16:18, Florin Coras <
> fcoras.li...@gmail.com>:
>
>> Hi Andreas,
>>
>> Probably the best option, at this time, would be to completely avoid
>> using session lookup and accept infra because you can’t have a generic
>> listener for sessions you intercept. Or you could have a generic 0/0
>> listener but that would also intercept connections meant for local
>> termination. That’s not to say you can’t use session tables.
>>
>> Instead, you can manually create sessions in your custom tcp-listen node
>> and 1) do the linking with the tcp connection, i.e., fix the
>> session/connection indices and 2) assign the sessions to your app’s worker
>> and allocate fifos 3) initialize session state in your app either directly
>> or by calling app_worker_accept_notify. Practically this custom node will
>> bootstrap tcp, session layer and app state and after that you can let the
>> sessions “run normally”.
>>
>> You probably also want to mark the transport connection (tcp “base
>> class”) with TRANSPORT_CONNECTION_F_NO_LOOKUP, to avoid session layer
>> attempts to look up the connection in builtin session tables.
>>
>> Regards,
>> Florin
>>
>> On Mar 19, 2020, at 4:54 AM, Andreas Schultz <
>> andreas.schu...@travelping.com> wrote:
>>
>> Hi Florin,
>>
>> That patch has helped a bit, but now I'm stuck with session_stream_accept.
>>
>> Creating an application session without having a listener is quite
>> complex. So far I'm resorting to creating a dummy listener, but it would be
>> cleaner not to use that.
>>
>> I have tried to create a session without a listener, but it turns out
>> that there are too many dependencies in the app worker and segment manager
>> handling.
>>
>> Regards
>> Andreas
>>
>> On Tue, Mar 17, 2020 at 20:15, Florin Coras <
>> fcoras.li...@gmail.com>:
>>
>>> Hi Andreas,
>>>
>>> Is this [1] enough for now? I'll eventually do some additional tcp
>>> refactor to make sure we have a generic set of functions that are available
>>> for use cases when only parts of tcp are re-used.
>>>
>>> Regards,
>>> Florin
>>>
>>> [1] https://gerrit.fd.io/r/c/vpp/+/25961
>>>
>>> On Mar 17, 2020, at 4:20 AM, Andreas Schultz <
>>> andreas.schu...@travelping.com> wrote:
>>>
>>> Hi Florin,
>>>
>>> I had a look at how tcp_connection_alloc is used and it looks to me like
>>> I would need to replicate almost all of tcp46_listen_inline to actually get
>>> the TCP connection setup correctly. I was hoping that I could reuse more of
>>> the existing code.
>>>
>>> Would you be ok with moving much of the body of tcp46_listen_inline into
>>> a header file, marking it always inline? That way I could reuse it without
>>> having to sync changes back all the time.
>>>
>>> Andreas
>>>
>>> On Mon, Mar 16, 2020 at 19:09, Florin Coras <
>>> fcoras.li...@gmail.com>:
>>>
 Hi Andreas,

 From the info lower, I guess that you want to build a transparent tcp
 terminator/proxy. For that, you’ll be forced to do a) because ip-local path
 is purely for consuming packets whose destination is local ip addresses.
 Moreover, you’ll have to properly classify/match all packets to connections
 and hand them to tcp-input (or better yet tcp-input-nolookup) for tcp
 processing.

 Regarding the passing of data, is that at connection establishment or
 throughout the lifetime of the connection? If the former, your classifier
 together with your builtin app will have to instantiate tcp connections and
 sessions “manually” and properly initialize them whenever it detects a new
 flow. APIs like session_alloc and tcp_connection_alloc are already exposed.

 Regards,
 Florin

 On Mar 16, 2020, at 10:39 AM, Andreas Schultz <
 andreas.schu...@travelping.com> wrote:

 Hi,

 In our UPF plugin [1], I need to terminate a TCP connection with a
 non-local 

[vpp-dev] ACL question

2020-03-25 Thread Govindarajan Mohandoss
Hello ACL Maintainer,

  We want to measure and optimize the ACL performance for ARM servers. As per 
the following link, there are 4 different implementations of ACLs in VPP.

  https://fd.io/docs/vpp/master/usecases/acls.html

  We would like to start with the most commonly used ACL implementation in VPP 
which can cover L2, L3 and L4 fields. As per the link above and the CSIT reports 
(link below), it looks like the ACL plugin is the right match.

  Can you please confirm? The ACL plugin has 2 variants - Stateful & Stateless. 
Which is more common and widely used in VPP?

  
https://docs.fd.io/csit/master/report/detailed_test_results/vpp_performance_results/index.html



Thanks

Govind

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#15871): https://lists.fd.io/g/vpp-dev/message/15871
Mute This Topic: https://lists.fd.io/mt/72544608/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] worker barrier state

2020-03-25 Thread Dave Barach via Lists.Fd.Io
+1.

View any metadata subject to table add/del accidents with suspicion. There is a 
safe delete paradigm: each vlib_main_t has a “lap counter”.  When deleting 
table entries: atomically update table entries. Record the lap counter and wait 
until all worker threads have completed a lap. Then, delete (or pool_put) the 
underlying data structure.
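[Editor's note: the safe-delete paradigm above can be sketched as below. Worker loop counters are modeled as plain integers rather than real vlib_main_t main_loop_count fields, and the number of workers is an assumed constant; this is an illustration of the pattern, not VPP code.]

```c
#include <stddef.h>
#include <stdint.h>

#define N_WORKERS 4 /* assumed worker count for the sketch */

/* Snapshot every worker's loop counter when the table entry is retired
 * (stand-in for reading each vlib_main_t->main_loop_count). */
static void
record_laps (const uint64_t loop_count[N_WORKERS], uint64_t snap[N_WORKERS])
{
  for (size_t i = 0; i < N_WORKERS; i++)
    snap[i] = loop_count[i];
}

/* The retired entry may be freed (pool_put'd) only once every worker has
 * completed at least one full trip around its main loop since the snapshot,
 * guaranteeing no in-flight frame still references it. */
static int
safe_to_free (const uint64_t loop_count[N_WORKERS],
              const uint64_t snap[N_WORKERS])
{
  for (size_t i = 0; i < N_WORKERS; i++)
    if (loop_count[i] <= snap[i])
      return 0;
  return 1;
}
```

A deleter would atomically unlink the entry, call record_laps(), and then poll safe_to_free() (e.g. from a process node) before actually freeing the memory.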

Dave


From: vpp-dev@lists.fd.io  On Behalf Of Damjan Marion via 
Lists.Fd.Io
Sent: Wednesday, March 25, 2020 12:10 PM
To: Christian Hopps 
Cc: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] worker barrier state



> On 25 Mar 2020, at 16:01, Christian Hopps <cho...@chopps.org> wrote:
>
> Is it supposed to be the case that no packets are inflight (*) in the graph 
> when the worker barrier is held?
>
> I think perhaps MP unsafe API code is assuming this.
>
> I also think that the frame queues used by handoff code violate this 
> assumption.
>
> Can someone with deep VPP knowledge clarify this for me? :)


correct, there is a small chance that a frame is enqueued right before a worker 
hits the barrier…

—
Damjan

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#15870): https://lists.fd.io/g/vpp-dev/message/15870
Mute This Topic: https://lists.fd.io/mt/72542383/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] worker barrier state

2020-03-25 Thread Damjan Marion via Lists.Fd.Io


> On 25 Mar 2020, at 16:01, Christian Hopps  wrote:
> 
> Is it supposed to be the case that no packets are inflight (*) in the graph 
> when the worker barrier is held?
> 
> I think perhaps MP unsafe API code is assuming this.
> 
> I also think that the frame queues used by handoff code violate this 
> assumption.
> 
> Can someone with deep VPP knowledge clarify this for me? :)


correct, there is a small chance that a frame is enqueued right before a worker 
hits the barrier…

— 
Damjan

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#15869): https://lists.fd.io/g/vpp-dev/message/15869
Mute This Topic: https://lists.fd.io/mt/72542383/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] VPP Interfaces mysteriously went down #azure

2020-03-25 Thread Chris King
vpp show log only showed a bunch of dhcp/client timeouts, I believe. I don't 
know if there was a VM migration, but how could I know for sure? The VM has 
been powered on the entire time, but perhaps there was a hot-swap of NICs?
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#15868): https://lists.fd.io/g/vpp-dev/message/15868
Mute This Topic: https://lists.fd.io/mt/72540981/21656
Mute #azure: https://lists.fd.io/mk?hashtag=azure=1480452
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


[vpp-dev] worker barrier state

2020-03-25 Thread Christian Hopps
Is it supposed to be the case that no packets are inflight (*) in the graph 
when the worker barrier is held?

I think perhaps MP unsafe API code is assuming this.

I also think that the frame queues used by handoff code violate this assumption.

Can someone with deep VPP knowledge clarify this for me? :)

Thanks,
Chris.

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#15867): https://lists.fd.io/g/vpp-dev/message/15867
Mute This Topic: https://lists.fd.io/mt/72542383/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] VPP Interfaces mysteriously went down #azure

2020-03-25 Thread Benoit Ganne (bganne) via Lists.Fd.Io
Hi Chris,

I'd expect only 2 dtap too. Do you know if there was a VM migration?
Is there anything in vpp 'show log'?

ben

> -Original Message-
> From: vpp-dev@lists.fd.io  On Behalf Of Chris King
> Sent: mercredi 25 mars 2020 15:00
> To: vpp-dev@lists.fd.io
> Subject: [vpp-dev] VPP Interfaces mysteriously went down #azure
> 
> I am running vpp v20.01-release on Ubuntu 18.04 on an Azure VM. I had been
> forwarding traffic with VPP for about 5 days and today I noticed that my 2
> main interfaces FailsafeEthernet2 and FailsafeEthernet4) had gone down and
> I could not find a reason. I looked at the journalctl logs (which only go
> back about 6 hours and the interfaces went down about 18 hours ago), dmesg
> logs, and ran a few commands in vppctl to no avail.
> 
> I was able to restore the interfaces just by setting their state back to
> 'up'.
> 
> vpp# show int
>           Name            Idx  State  MTU (L3/IP4/IP6/MPLS)   Counter        Count
> FailsafeEthernet2          1    up        9000/0/0/0       rx packets          972
>                                                            rx bytes          72060
>                                                            tx packets      1435428
>                                                            tx bytes       94738701
>                                                            drops               972
>                                                            ip4                 956
>                                                            ip6                  15
> FailsafeEthernet4          2    up        9000/0/0/0       rx packets      1435458
>                                                            rx bytes       94741057
>                                                            drops                31
>                                                            ip4             1435430
>                                                            ip6                  28
> I did, however, notice that I have more Linux network interfaces than I
> expected:
> ifconfig
> dtap2: flags=4675  mtu 1500
> inet6 fe80::20d:3aff:feff:1b82  prefixlen 64  scopeid 0x20
> ether 00:0d:3a:ff:1b:82  txqueuelen 1000  (Ethernet)
> RX packets 2012647  bytes 2878934474 (2.8 GB)
> RX errors 0  dropped 0  overruns 0  frame 0
> TX packets 24822  bytes 1839632 (1.8 MB)
> TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
> 
> dtap3: flags=4675  mtu 1500
> inet6 fe80::20d:3aff:fe84:aa4f  prefixlen 64  scopeid 0x20
> ether 00:0d:3a:84:aa:4f  txqueuelen 1000  (Ethernet)
> RX packets 0  bytes 0 (0.0 B)
> RX errors 0  dropped 0  overruns 0  frame 0
> TX packets 1692203  bytes 2362194140 (2.3 GB)
> TX errors 0  dropped 1096 overruns 0  carrier 0  collisions 0
> 
> dtap4: flags=4675  mtu 1500
> inet6 fe80::20d:3aff:feff:1b82  prefixlen 64  scopeid 0x20
> ether 00:0d:3a:ff:1b:82  txqueuelen 1000  (Ethernet)
> RX packets 0  bytes 0 (0.0 B)
> RX errors 0  dropped 0  overruns 0  frame 0
> TX packets 1527  bytes 113126 (113.1 KB)
> TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
> 
> dtap5: flags=4675  mtu 1500
> inet6 fe80::20d:3aff:fe84:aa4f  prefixlen 64  scopeid 0x20
> ether 00:0d:3a:84:aa:4f  txqueuelen 1000  (Ethernet)
> RX packets 0  bytes 0 (0.0 B)
> RX errors 0  dropped 0  overruns 0  frame 0
> TX packets 7164  bytes 6532230 (6.5 MB)
> TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
> 
> dtap8: flags=4675  mtu 1500
> inet6 fe80::20d:3aff:feff:1b82  prefixlen 64  scopeid 0x20
> ether 00:0d:3a:ff:1b:82  txqueuelen 1000  (Ethernet)
> RX packets 0  bytes 0 (0.0 B)
> RX errors 0  dropped 0  overruns 0  frame 0
> TX packets 1101  bytes 81606 (81.6 KB)
> TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
> 
> dtap9: flags=4675  mtu 1500
> inet6 fe80::20d:3aff:fe84:aa4f  prefixlen 64  scopeid 0x20
> ether 00:0d:3a:84:aa:4f  txqueuelen 1000  (Ethernet)
> RX packets 0  bytes 0 (0.0 B)
> RX errors 0  dropped 0  overruns 0  frame 0
> TX packets 67  bytes 4834 (4.8 KB)
> TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
> 
> eth0: flags=4163  mtu 1500
> inet 10.0.9.4  netmask 255.255.255.0  broadcast 10.0.9.255
> inet6 fe80::20d:3aff:feff:1a42  prefixlen 64  scopeid 0x20
> ether 00:0d:3a:ff:1a:42  txqueuelen 1000  (Ethernet)
> RX packets 9860630  bytes 8395557697 (8.3 GB)
> RX errors 0  dropped 0  overruns 0  frame 0
> TX packets 7300672  bytes 1643536351 (1.6 GB)
> TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
> 
> eth1: flags=4675  mtu 1500
> inet 

[vpp-dev] Coverity run FAILED as of 2020-03-25 14:00:25 UTC

2020-03-25 Thread Noreply Jenkins
Coverity run failed today.

Current number of outstanding issues is 4
Newly detected: 0
Eliminated: 0
More details can be found at  
https://scan.coverity.com/projects/fd-io-vpp/view_defects
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#15865): https://lists.fd.io/g/vpp-dev/message/15865
Mute This Topic: https://lists.fd.io/mt/72541032/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


[vpp-dev] VPP Interfaces mysteriously went down #azure

2020-03-25 Thread Chris King
I am running vpp v20.01-release on Ubuntu 18.04 on an Azure VM. I had been 
forwarding traffic with VPP for about 5 days and today I noticed that my 2 main 
interfaces (FailsafeEthernet2 and FailsafeEthernet4) had gone down and I could 
not find a reason. I looked at the journalctl logs (which only go back about 6 
hours and the interfaces went down about 18 hours ago), dmesg logs, and ran a 
few commands in vppctl to no avail.

I was able to restore the interfaces just by setting their state back to 'up'.

vpp# show int
          Name            Idx  State  MTU (L3/IP4/IP6/MPLS)   Counter        Count
FailsafeEthernet2          1    up        9000/0/0/0       rx packets          972
                                                           rx bytes          72060
                                                           tx packets      1435428
                                                           tx bytes       94738701
                                                           drops               972
                                                           ip4                 956
                                                           ip6                  15
FailsafeEthernet4          2    up        9000/0/0/0       rx packets      1435458
                                                           rx bytes       94741057
                                                           drops                31
                                                           ip4             1435430
                                                           ip6                  28

I did, however, notice that I have more Linux network interfaces than I 
expected:
ifconfig
dtap2: flags=4675  mtu 1500
inet6 fe80::20d:3aff:feff:1b82  prefixlen 64  scopeid 0x20
ether 00:0d:3a:ff:1b:82  txqueuelen 1000  (Ethernet)
RX packets 2012647  bytes 2878934474 (2.8 GB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 24822  bytes 1839632 (1.8 MB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

dtap3: flags=4675  mtu 1500
inet6 fe80::20d:3aff:fe84:aa4f  prefixlen 64  scopeid 0x20
ether 00:0d:3a:84:aa:4f  txqueuelen 1000  (Ethernet)
RX packets 0  bytes 0 (0.0 B)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 1692203  bytes 2362194140 (2.3 GB)
TX errors 0  dropped 1096 overruns 0  carrier 0  collisions 0

dtap4: flags=4675  mtu 1500
inet6 fe80::20d:3aff:feff:1b82  prefixlen 64  scopeid 0x20
ether 00:0d:3a:ff:1b:82  txqueuelen 1000  (Ethernet)
RX packets 0  bytes 0 (0.0 B)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 1527  bytes 113126 (113.1 KB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

dtap5: flags=4675  mtu 1500
inet6 fe80::20d:3aff:fe84:aa4f  prefixlen 64  scopeid 0x20
ether 00:0d:3a:84:aa:4f  txqueuelen 1000  (Ethernet)
RX packets 0  bytes 0 (0.0 B)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 7164  bytes 6532230 (6.5 MB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

dtap8: flags=4675  mtu 1500
inet6 fe80::20d:3aff:feff:1b82  prefixlen 64  scopeid 0x20
ether 00:0d:3a:ff:1b:82  txqueuelen 1000  (Ethernet)
RX packets 0  bytes 0 (0.0 B)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 1101  bytes 81606 (81.6 KB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

dtap9: flags=4675  mtu 1500
inet6 fe80::20d:3aff:fe84:aa4f  prefixlen 64  scopeid 0x20
ether 00:0d:3a:84:aa:4f  txqueuelen 1000  (Ethernet)
RX packets 0  bytes 0 (0.0 B)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 67  bytes 4834 (4.8 KB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth0: flags=4163  mtu 1500
inet 10.0.9.4  netmask 255.255.255.0  broadcast 10.0.9.255
inet6 fe80::20d:3aff:feff:1a42  prefixlen 64  scopeid 0x20
ether 00:0d:3a:ff:1a:42  txqueuelen 1000  (Ethernet)
RX packets 9860630  bytes 8395557697 (8.3 GB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 7300672  bytes 1643536351 (1.6 GB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth1: flags=4675  mtu 1500
inet 10.0.8.4  netmask 255.255.255.0  broadcast 10.0.8.255
inet6 fe80::20d:3aff:feff:1b82  prefixlen 64  scopeid 0x20
ether 00:0d:3a:ff:1b:82  txqueuelen 1000  (Ethernet)
RX packets 45588  bytes 3379906 (3.3 MB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 2012803  bytes 2878946062 (2.8 GB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth2: flags=4675  mtu 1500
inet 10.0.10.4  netmask 255.255.255.0  broadcast 10.0.10.255
inet6 fe80::20d:3aff:fe84:aa4f  prefixlen 64  scopeid 0x20
ether 00:0d:3a:84:aa:4f  txqueuelen 1000  (Ethernet)
RX packets 1651556  bytes 2368775116 (2.3 GB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 155  bytes 11518 (11.5 KB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73  mtu 65536
inet 127.0.0.1  netmask 255.0.0.0
inet6 ::1  prefixlen 128  scopeid 0x10
loop  txqueuelen 1000  (Local Loopback)
RX packets 439044  bytes 59701416 (59.7 MB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 439044  bytes 59701416 (59.7 MB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

rename11: flags=6723  mtu 1500
ether 00:0d:3a:ff:1b:82  txqueuelen 1000  (Ethernet)
RX packets 2  bytes 180 (180.0 B)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 114841  bytes 163438128 (163.4 MB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

rename12: flags=6723  mtu 1500
ether 00:0d:3a:84:aa:4f  txqueuelen 1000  (Ethernet)
RX 

Re: [vpp-dev] VPP Crashes When Executing Large Script Using 'exec'

2020-03-25 Thread Luc Pelletier
It works! Thank you very much, Dave. Much appreciated.


On Wed, Mar 25, 2020 at 09:39, Dave Barach (dbarach)  wrote:

> OK, no need to see the script... Classifier table out of memory... If
> you’re using the “classify table” debug CLI to set up the tables, change
> (or add) “memory-size xxxM” or “memory-size xxxG” to give the classifier
> enough memory. Depending on how many concurrent entries you expect, set the
> number of buckets somewhere between Nconcurrent/2 and Nconcurrent.
>
>
>
> HTH... Dave
>
>
>
> *From:* vpp-dev@lists.fd.io  *On Behalf Of *Luc
> Pelletier
> *Sent:* Wednesday, March 25, 2020 9:21 AM
> *To:* Dave Barach (dbarach) 
> *Cc:* vpp-dev@lists.fd.io
> *Subject:* Re: [vpp-dev] VPP Crashes When Executing Large Script Using
> 'exec'
>
>
>
> 2nd attempt - replying all. Dave - Apologies for the duplicate response.
>
>
>
> Thanks for your response. You're right -- I should have provided more
> details. My script is trying to set up a large number of IPs to block
> using classifiers. I now have a backtrace as well which indicates that it
> seems to run out of memory when creating classifier sessions. Maybe I'm not
> using classifiers correctly, it's been difficult to find documentation on
> how to use that feature. I'd be grateful for any tips or help you can
> provide. Thanks in advance.
>
>
>
> Here's the backtrace:
>
>
>
> #0  __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:51
> #1  0x7fd22f118801 in __GI_abort () at abort.c:79
> #2  0x55be59a05ca3 in os_panic () at
> /usr/src/vpp/src/vpp/vnet/main.c:355
> #3  0x7fd230484cf5 in clib_mem_alloc_aligned_at_offset
> (os_out_of_memory_on_failure=1, align_offset=0, align=64,
> size=<optimized out>) at /usr/src/vpp/src/vppinfra/mem.h:143
> #4  clib_mem_alloc_aligned (align=64, size=<optimized out>) at
> /usr/src/vpp/src/vppinfra/mem.h:163
> #5  vnet_classify_entry_alloc (t=t@entry=0x7fd1ef4ddd80,
> log2_pages=log2_pages@entry=13) at
> /usr/src/vpp/src/vnet/classify/vnet_classify.c:210
> #6  0x7fd23048a224 in split_and_rehash (t=t@entry=0x7fd1ef4ddd80,
> old_values=old_values@entry=0x7fd1c1a99600,
> old_log2_pages=old_log2_pages@entry=11,
> new_log2_pages=new_log2_pages@entry=13)
> at /usr/src/vpp/src/vnet/classify/vnet_classify.c:299
> #7  0x7fd23048ae78 in vnet_classify_add_del (t=t@entry=0x7fd1ef4ddd80,
> add_v=add_v@entry=0x7fd1ef793a50, is_add=is_add@entry=1) at
> /usr/src/vpp/src/vnet/classify/vnet_classify.c:576
> #8  0x7fd23048b73b in vnet_classify_add_del_session
> (cm=cm@entry=0x7fd230bda1a0, table_index=<optimized out>,
> match=0x7fd1ef7b5140 "", hit_next_index=<optimized out>,
> opaque_index=<optimized out>, advance=<optimized out>,
> action=<optimized out>, metadata=<optimized out>, is_add=<optimized out>)
> at /usr/src/vpp/src/vnet/classify/vnet_classify.c:2706
> #9  0x7fd23048dfad in classify_session_command_fn (vm=<optimized out>,
> input=0x7fd1ef793d10, cmd=<optimized out>) at
> /usr/src/vpp/src/vnet/classify/vnet_classify.c:2790
> #10 0x7fd22f9d9a3e in vlib_cli_dispatch_sub_commands
> (vm=vm@entry=0x7fd22fc58380, cm=cm@entry=0x7fd22fc585b0,
> input=input@entry=0x7fd1ef793d10,
> parent_command_index=<optimized out>) at
> /usr/src/vpp/src/vlib/cli.c:568
> #11 0x7fd22f9da1f3 in vlib_cli_dispatch_sub_commands
> (vm=vm@entry=0x7fd22fc58380, cm=cm@entry=0x7fd22fc585b0,
> input=input@entry=0x7fd1ef793d10,
> parent_command_index=parent_command_index@entry=0) at
> /usr/src/vpp/src/vlib/cli.c:528
> #12 0x7fd22f9da475 in vlib_cli_input (vm=vm@entry=0x7fd22fc58380,
> input=input@entry=0x7fd1ef793d10,
> function=function@entry=0x0, function_arg=function_arg@entry=0)
> at /usr/src/vpp/src/vlib/cli.c:667
> #13 0x7fd22fa31999 in unix_cli_exec (vm=0x7fd22fc58380,
> input=<optimized out>, cmd=<optimized out>) at
> /usr/src/vpp/src/vlib/unix/cli.c:3327
> #14 0x7fd22f9d9a3e in vlib_cli_dispatch_sub_commands
> (vm=vm@entry=0x7fd22fc58380, cm=cm@entry=0x7fd22fc585b0,
> input=input@entry=0x7fd1ef793f60,
> parent_command_index=parent_command_index@entry=0) at
> /usr/src/vpp/src/vlib/cli.c:568
> #15 0x7fd22f9da475 in vlib_cli_input (vm=0x7fd22fc58380,
> input=input@entry=0x7fd1ef793f60,
> function=function@entry=0x7fd22fa34bf0,
> function_arg=function_arg@entry=0) at /usr/src/vpp/src/vlib/cli.c:667
> #16 0x7fd22fa37cf6 in unix_cli_process_input (cm=0x7fd22fc58de0,
> cli_file_index=0) at /usr/src/vpp/src/vlib/unix/cli.c:2572
> #17 unix_cli_process (vm=0x7fd22fc58380,
> rt=0x7fd1ef753000, f=<optimized out>) at
> /usr/src/vpp/src/vlib/unix/cli.c:2688
> #18 0x7fd22f9f2c36 in vlib_process_bootstrap (_a=<optimized out>) at
> /usr/src/vpp/src/vlib/main.c:1475
> #19 0x7fd22f4f3bb4 in clib_calljmp () from
> /usr/lib/x86_64-linux-gnu/libvppinfra.so.20.01
> #20 0x7fd1eea76b30 in ?? ()
> #21 0x7fd22f9f8041 in vlib_process_startup (f=0x0, p=0x7fd1ef753000,
> vm=0x7fd22fc58380) at /usr/src/vpp/src/vlib/main.c:1497
> #22 dispatch_process (vm=0x7fd22fc58380,
> p=0x7fd1ef753000, last_time_stamp=0, f=0x0) at
> /usr/src/vpp/src/vlib/main.c:1542
>
>
>
> And here's part of the script (I've eliminated a lot of the lines that are
> duplicated) -- please note IPs 

Re: [vpp-dev] VPP Crashes When Executing Large Script Using 'exec'

2020-03-25 Thread Dave Barach via Lists.Fd.Io
OK, no need to see the script... Classifier table out of memory... If you’re 
using the “classify table” debug CLI to set up the tables, change (or add) 
“memory-size xxxM” or “memory-size xxxG” to give the classifier enough memory. 
Depending on how many concurrent entries you expect, set the number of buckets 
somewhere between Nconcurrent/2 and Nconcurrent.

HTH... Dave
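Concretely, the table setup might look like this; the buckets and memory-size
figures below are illustrative assumptions, not recommendations -- check the
"classify table" CLI help on your build for the exact option spelling:

classify table mask l3 ip4 src buckets 2048 memory-size 64M
classify table mask l3 ip4 dst buckets 2048 memory-size 64M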

From: vpp-dev@lists.fd.io  On Behalf Of Luc Pelletier
Sent: Wednesday, March 25, 2020 9:21 AM
To: Dave Barach (dbarach) 
Cc: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] VPP Crashes When Executing Large Script Using 'exec'

2nd attempt - replying all. Dave - Apologies for the duplicate response.

Thanks for your response. You're right -- I should have provided more details. 
My script is trying to set up a large number of IPs to block using 
classifiers. I now have a backtrace as well which indicates that it seems to 
run out of memory when creating classifier sessions. Maybe I'm not using 
classifiers correctly, it's been difficult to find documentation on how to use 
that feature. I'd be grateful for any tips or help you can provide. Thanks in 
advance.

Here's the backtrace:

#0  __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:51
#1  0x7fd22f118801 in __GI_abort () at abort.c:79
#2  0x55be59a05ca3 in os_panic () at /usr/src/vpp/src/vpp/vnet/main.c:355
#3  0x7fd230484cf5 in clib_mem_alloc_aligned_at_offset 
(os_out_of_memory_on_failure=1, align_offset=0, align=64, size=<optimized out>) 
at /usr/src/vpp/src/vppinfra/mem.h:143
#4  clib_mem_alloc_aligned (align=64, size=<optimized out>) at 
/usr/src/vpp/src/vppinfra/mem.h:163
#5  vnet_classify_entry_alloc (t=t@entry=0x7fd1ef4ddd80, 
log2_pages=log2_pages@entry=13) at 
/usr/src/vpp/src/vnet/classify/vnet_classify.c:210
#6  0x7fd23048a224 in split_and_rehash (t=t@entry=0x7fd1ef4ddd80, 
old_values=old_values@entry=0x7fd1c1a99600, 
old_log2_pages=old_log2_pages@entry=11, new_log2_pages=new_log2_pages@entry=13)
at /usr/src/vpp/src/vnet/classify/vnet_classify.c:299
#7  0x7fd23048ae78 in vnet_classify_add_del (t=t@entry=0x7fd1ef4ddd80, 
add_v=add_v@entry=0x7fd1ef793a50, is_add=is_add@entry=1) at 
/usr/src/vpp/src/vnet/classify/vnet_classify.c:576
#8  0x7fd23048b73b in vnet_classify_add_del_session 
(cm=cm@entry=0x7fd230bda1a0, table_index=<optimized out>, 
match=0x7fd1ef7b5140 "", hit_next_index=<optimized out>,
opaque_index=<optimized out>, advance=<optimized out>, 
action=<optimized out>, metadata=<optimized out>, is_add=<optimized out>) at 
/usr/src/vpp/src/vnet/classify/vnet_classify.c:2706
#9  0x7fd23048dfad in classify_session_command_fn (vm=<optimized out>, 
input=0x7fd1ef793d10, cmd=<optimized out>) at 
/usr/src/vpp/src/vnet/classify/vnet_classify.c:2790
#10 0x7fd22f9d9a3e in vlib_cli_dispatch_sub_commands 
(vm=vm@entry=0x7fd22fc58380, cm=cm@entry=0x7fd22fc585b0, 
input=input@entry=0x7fd1ef793d10,
parent_command_index=<optimized out>) at /usr/src/vpp/src/vlib/cli.c:568
#11 0x7fd22f9da1f3 in vlib_cli_dispatch_sub_commands 
(vm=vm@entry=0x7fd22fc58380, cm=cm@entry=0x7fd22fc585b0, 
input=input@entry=0x7fd1ef793d10,
parent_command_index=parent_command_index@entry=0) at 
/usr/src/vpp/src/vlib/cli.c:528
#12 0x7fd22f9da475 in vlib_cli_input (vm=vm@entry=0x7fd22fc58380, 
input=input@entry=0x7fd1ef793d10, 
function=function@entry=0x0, function_arg=function_arg@entry=0)
at /usr/src/vpp/src/vlib/cli.c:667
#13 0x7fd22fa31999 in unix_cli_exec (vm=0x7fd22fc58380, 
input=<optimized out>, cmd=<optimized out>) at 
/usr/src/vpp/src/vlib/unix/cli.c:3327
#14 0x7fd22f9d9a3e in vlib_cli_dispatch_sub_commands 
(vm=vm@entry=0x7fd22fc58380, cm=cm@entry=0x7fd22fc585b0, 
input=input@entry=0x7fd1ef793f60,
parent_command_index=parent_command_index@entry=0) at 
/usr/src/vpp/src/vlib/cli.c:568
#15 0x7fd22f9da475 in vlib_cli_input (vm=0x7fd22fc58380, 
input=input@entry=0x7fd1ef793f60, function=function@entry=0x7fd22fa34bf0,
function_arg=function_arg@entry=0) at /usr/src/vpp/src/vlib/cli.c:667
#16 0x7fd22fa37cf6 in unix_cli_process_input (cm=0x7fd22fc58de0, 
cli_file_index=0) at /usr/src/vpp/src/vlib/unix/cli.c:2572
#17 unix_cli_process (vm=0x7fd22fc58380, rt=0x7fd1ef753000, 
f=<optimized out>) at /usr/src/vpp/src/vlib/unix/cli.c:2688
#18 0x7fd22f9f2c36 in vlib_process_bootstrap (_a=<optimized out>) at 
/usr/src/vpp/src/vlib/main.c:1475
#19 0x7fd22f4f3bb4 in clib_calljmp () from 
/usr/lib/x86_64-linux-gnu/libvppinfra.so.20.01
#20 0x7fd1eea76b30 in ?? ()
#21 0x7fd22f9f8041 in vlib_process_startup (f=0x0, p=0x7fd1ef753000, 
vm=0x7fd22fc58380) at /usr/src/vpp/src/vlib/main.c:1497
#22 dispatch_process (vm=0x7fd22fc58380, p=0x7fd1ef753000, 
last_time_stamp=0, f=0x0) at /usr/src/vpp/src/vlib/main.c:1542

And here's part of the script (I've eliminated a lot of the lines that are 
duplicated) -- please note IPs below are completely random as I'm only at the 
stage where I'm trying things out:

classify table mask l3 ip4 src
classify table mask l3 ip4 dst
classify session hit-next 0 table-index 0 match l3 ip4 src 174.121.118.15
classify session hit-next 0 table-index 1 match l3 ip4 dst 174.121.118.15
classify session hit-next 0 

Re: [vpp-dev] VPP Crashes When Executing Large Script Using 'exec'

2020-03-25 Thread Luc Pelletier
2nd attempt - replying all. Dave - Apologies for the duplicate response.

Thanks for your response. You're right -- I should have provided more
details. My script is trying to set up a large number of IPs to block
using classifiers. I now have a backtrace as well which indicates that it
seems to run out of memory when creating classifier sessions. Maybe I'm not
using classifiers correctly, it's been difficult to find documentation on
how to use that feature. I'd be grateful for any tips or help you can
provide. Thanks in advance.

Here's the backtrace:

#0  __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:51
#1  0x7fd22f118801 in __GI_abort () at abort.c:79
#2  0x55be59a05ca3 in os_panic () at
/usr/src/vpp/src/vpp/vnet/main.c:355
#3  0x7fd230484cf5 in clib_mem_alloc_aligned_at_offset
(os_out_of_memory_on_failure=1, align_offset=0, align=64,
size=<optimized out>) at /usr/src/vpp/src/vppinfra/mem.h:143
#4  clib_mem_alloc_aligned (align=64, size=<optimized out>) at
/usr/src/vpp/src/vppinfra/mem.h:163
#5  vnet_classify_entry_alloc (t=t@entry=0x7fd1ef4ddd80,
log2_pages=log2_pages@entry=13) at
/usr/src/vpp/src/vnet/classify/vnet_classify.c:210
#6  0x7fd23048a224 in split_and_rehash (t=t@entry=0x7fd1ef4ddd80,
old_values=old_values@entry=0x7fd1c1a99600,
old_log2_pages=old_log2_pages@entry=11,
new_log2_pages=new_log2_pages@entry=13)
at /usr/src/vpp/src/vnet/classify/vnet_classify.c:299
#7  0x7fd23048ae78 in vnet_classify_add_del (t=t@entry=0x7fd1ef4ddd80,
add_v=add_v@entry=0x7fd1ef793a50, is_add=is_add@entry=1) at
/usr/src/vpp/src/vnet/classify/vnet_classify.c:576
#8  0x7fd23048b73b in vnet_classify_add_del_session
(cm=cm@entry=0x7fd230bda1a0, table_index=<optimized out>,
match=0x7fd1ef7b5140 "", hit_next_index=<optimized out>,
opaque_index=<optimized out>, advance=<optimized out>,
action=<optimized out>, metadata=<optimized out>, is_add=<optimized out>)
at /usr/src/vpp/src/vnet/classify/vnet_classify.c:2706
#9  0x7fd23048dfad in classify_session_command_fn (vm=<optimized out>,
input=0x7fd1ef793d10, cmd=<optimized out>) at
/usr/src/vpp/src/vnet/classify/vnet_classify.c:2790
#10 0x7fd22f9d9a3e in vlib_cli_dispatch_sub_commands
(vm=vm@entry=0x7fd22fc58380, cm=cm@entry=0x7fd22fc585b0,
input=input@entry=0x7fd1ef793d10,
parent_command_index=<optimized out>) at /usr/src/vpp/src/vlib/cli.c:568
#11 0x7fd22f9da1f3 in vlib_cli_dispatch_sub_commands
(vm=vm@entry=0x7fd22fc58380, cm=cm@entry=0x7fd22fc585b0,
input=input@entry=0x7fd1ef793d10,
parent_command_index=parent_command_index@entry=0) at
/usr/src/vpp/src/vlib/cli.c:528
#12 0x7fd22f9da475 in vlib_cli_input (vm=vm@entry=0x7fd22fc58380,
input=input@entry=0x7fd1ef793d10,
function=function@entry=0x0, function_arg=function_arg@entry=0)
at /usr/src/vpp/src/vlib/cli.c:667
#13 0x7fd22fa31999 in unix_cli_exec (vm=0x7fd22fc58380,
input=<optimized out>, cmd=<optimized out>) at
/usr/src/vpp/src/vlib/unix/cli.c:3327
#14 0x7fd22f9d9a3e in vlib_cli_dispatch_sub_commands
(vm=vm@entry=0x7fd22fc58380, cm=cm@entry=0x7fd22fc585b0,
input=input@entry=0x7fd1ef793f60,
parent_command_index=parent_command_index@entry=0) at
/usr/src/vpp/src/vlib/cli.c:568
#15 0x7fd22f9da475 in vlib_cli_input (vm=0x7fd22fc58380,
input=input@entry=0x7fd1ef793f60,
function=function@entry=0x7fd22fa34bf0,
function_arg=function_arg@entry=0) at /usr/src/vpp/src/vlib/cli.c:667
#16 0x7fd22fa37cf6 in unix_cli_process_input (cm=0x7fd22fc58de0,
cli_file_index=0) at /usr/src/vpp/src/vlib/unix/cli.c:2572
#17 unix_cli_process (vm=0x7fd22fc58380,
rt=0x7fd1ef753000, f=<optimized out>) at
/usr/src/vpp/src/vlib/unix/cli.c:2688
#18 0x7fd22f9f2c36 in vlib_process_bootstrap (_a=<optimized out>) at
/usr/src/vpp/src/vlib/main.c:1475
#19 0x7fd22f4f3bb4 in clib_calljmp () from
/usr/lib/x86_64-linux-gnu/libvppinfra.so.20.01
#20 0x7fd1eea76b30 in ?? ()
#21 0x7fd22f9f8041 in vlib_process_startup (f=0x0, p=0x7fd1ef753000,
vm=0x7fd22fc58380) at /usr/src/vpp/src/vlib/main.c:1497
#22 dispatch_process (vm=0x7fd22fc58380,
p=0x7fd1ef753000, last_time_stamp=0, f=0x0) at
/usr/src/vpp/src/vlib/main.c:1542

And here's part of the script (I've eliminated a lot of the lines that are
duplicated) -- please note IPs below are completely random as I'm only at
the stage where I'm trying things out:

classify table mask l3 ip4 src
classify table mask l3 ip4 dst
classify session hit-next 0 table-index 0 match l3 ip4 src 174.121.118.15
classify session hit-next 0 table-index 1 match l3 ip4 dst 174.121.118.15
classify session hit-next 0 table-index 0 match l3 ip4 src 93.154.207.221
classify session hit-next 0 table-index 1 match l3 ip4 dst 93.154.207.221
classify session hit-next 0 table-index 0 match l3 ip4 src 48.59.60.149
classify session hit-next 0 table-index 1 match l3 ip4 dst 48.59.60.149

classify session hit-next 0 table-index 1 match l3 ip4 dst 47.50.22.114
classify session hit-next 0 table-index 0 match l3 ip4 src 36.192.94.210
classify session hit-next 0 table-index 1 match l3 ip4 dst 36.192.94.210
classify session hit-next 0 table-index 0 match l3 ip4 src 68.82.35.3
classify session hit-next 0 table-index 1 match l3 ip4 dst 68.82.35.3
set interface input 
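(Aside: a repetitive script like the above can also be generated rather than
hand-edited. A small sketch in plain Python -- nothing VPP-specific, and the
helper name is made up; it assumes table-index 0 is the src table and 1 the
dst table, matching the two "classify table" lines at the top:

```python
def classify_session_lines(ips):
    """Build paired src/dst 'classify session' CLI lines per IPv4 address."""
    lines = []
    for ip in ips:
        # table-index 0: source-address table; table-index 1: destination table
        lines.append(f"classify session hit-next 0 table-index 0 match l3 ip4 src {ip}")
        lines.append(f"classify session hit-next 0 table-index 1 match l3 ip4 dst {ip}")
    return lines

if __name__ == "__main__":
    for line in classify_session_lines(["174.121.118.15", "93.154.207.221"]):
        print(line)
```

The output can be redirected to a file and run with "exec /path/to/file".)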

Re: [vpp-dev] VPP Crashes When Executing Large Script Using 'exec'

2020-03-25 Thread Dave Barach via Lists.Fd.Io
How about: send a backtrace (preferably from a debug image), and put the script 
somewhere so that we can work the problem?

From: vpp-dev@lists.fd.io  On Behalf Of Luc Pelletier
Sent: Wednesday, March 25, 2020 7:53 AM
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] VPP Crashes When Executing Large Script Using 'exec'

Hi all,

I have a large script (2000 lines, 146,459 bytes) that I'm trying to execute 
using 'exec' in vppctl. When I copy+paste commands from the script, it works 
fine. However, if I try to execute the script with 'exec 
/path/to/myscript.txt', VPP crashes.

I'm running VPP v20.01 on Ubuntu 18.04 on Azure.

Any suggestions?

Thanks

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#15860): https://lists.fd.io/g/vpp-dev/message/15860
Mute This Topic: https://lists.fd.io/mt/72538712/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


[vpp-dev] VPP Crashes When Executing Large Script Using 'exec'

2020-03-25 Thread Luc Pelletier
Hi all,

I have a large script (2000 lines, 146,459 bytes) that I'm trying to
execute using 'exec' in vppctl. When I copy+paste commands from the script,
it works fine. However, if I try to execute the script with 'exec
/path/to/myscript.txt', VPP crashes.

I'm running VPP v20.01 on Ubuntu 18.04 on Azure.

Any suggestions?

Thanks
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#15859): https://lists.fd.io/g/vpp-dev/message/15859
Mute This Topic: https://lists.fd.io/mt/72538712/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] DHCPClientDump/DHCPClientDetails not showing correct DomainServer

2020-03-25 Thread carlito nueno
Hi all,

Any ideas I can try? I am not familiar with the dhcp plugin.

Thanks.

On Mon, Mar 23, 2020 at 12:55 AM Carlito Nueno 
wrote:

> Hi all,
>
> I am using vpp v20.01 and govpp - v0.3.1
>
> lease.DomainServer is showing [0 0 0 0] (an empty Address), and the
> conversion to an IP address yields 0.0.0.0.
>
> So it knows there is one DNS server, but the value is all zeros.
>
> while vppctl sh dhcp client shows:
> lan1 state DHCP_BOUND installed 1 addr 10.150.150.21/24 gw 10.150.150.1
> server 10.150.150.1 dns 10.150.150.1
>
> dhcpDetails := &dhcp.DHCPClientDetails{}
> last, err := reqCtx.ReceiveReply(dhcpDetails)
> if last {
> break
> }
> if err != nil {
> return nil, err
> }
> client := dhcpDetails.Client
> lease := dhcpDetails.Lease
>
> When I try that method in vpp_api_test, I receive:
> vat# dhcp_client_dump
> dhcp_client_dump error: Unspecified Error
>
> Any advice? Thanks!
>
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#15858): https://lists.fd.io/g/vpp-dev/message/15858
Mute This Topic: https://lists.fd.io/mt/72486910/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-