Hi Kevin,
Not sure but that does sound like a bug if it’s allowed. The mutex should be
dropped before suspending.
Regards,
Florin
> On Apr 7, 2022, at 7:07 PM, Kevin Yan wrote:
Hi Florin,
Thanks for the quick reply. I think when this issue happened, the main
thread was holding the binary API's queue mutex, and then it was scheduled to
execute another process node, which called barrier sync. Is this a possible
scenario?
BRs,
Kevin
From: Florin
Hi Kevin,
That's a pretty old VPP release, so you should consider updating.
Regarding the deadlock, what is main actually doing? If it didn't lock the
binary API's queue mutex before the barrier sync, it shouldn't deadlock.
Regards,
Florin
> On Apr 7, 2022, at 6:39 PM, Kevin Yan via lists.
Hi,
Recently I hit a VPP crash issue: one worker thread is blocked waiting to
acquire a mutex. The complete call stack is
arp_learn -> vnet_arp_set_ip4_over_ethernet -> vl_api_rpc_call_main_thread ->
vl_msg_api_alloc_as_if_client -> vl_msg_api_alloc_internal -> pthread_m
Hi,
this is the vppctl sh error output:
root@ubuntu2004:/home/ubuntu# vppctl sh error
   Count        Node           Reason              Severity
    1809      l2-output     L2 output packets       error
    1809      l2-input      L
Please check the error counters in VPP (sh errors).
Please also trace the packet using tcpdump on the server side, to see if
packets are being dropped in Linux.
-br
Mohsin
From: on behalf of "long...@gmail.com"
Date: Thursday, April 7, 2022 at 3:08 PM
To: "vpp-dev@lists.fd.io"
Subject: Re: [vpp-dev] Fastest
On Thu, Apr 7, 2022 at 06:44 PM, Mohsin Kazmi wrote:
> Please use tcpdump to trace on Linux side to see what is happening. Please
> also check for error counters on Linux side.
Hi Mohsin,
After the dump, I see that on the tap interface it can receive ping messages from
192.1
Yeah, it looks like ip4_neighbor_probe is sending a packet to a deleted interface:
(gdb) p n->name
$4 = (u8 *) 0x7fff82b47578 "interface-3-output-deleted"
So it is right that this assert kicks in.
Likely what happens is that the batch of commands first triggers generation
of a neighbor probe packet, t
Not enough buckets for the number of active key/value pairs. Buckets are cheap.
Start with nBuckets = nActiveSessions / (BIHASH_KVP_PER_PAGE / 2) or some such
and increase the number of buckets as necessary...
HTH... Dave
From: vpp-dev@lists.fd.io On Behalf Of chetan bhasin
Sent: Thursd
Hi,
As suggested, we added blackhole routes in Linux using the command below:
sudo ip netns exec dataplane ip -6 route add blackhole 2001:50:10:a111::101/64
table 1203 proto bgp
And then we added ipip tunnel route via API.
Below is the output for "vppctl show ip6 fib table 1203 2001:50:10:a111::101
Hi,
The trace looks good to me.
Please use tcpdump to trace on Linux side to see what is happening. Please also
check for error counters on Linux side.
-br
Mohsin
From: on behalf of "long...@gmail.com"
Date: Thursday, April 7, 2022 at 2:57 AM
To: "vpp-dev@lists.fd.io"
Subject: Re: [vpp-dev]
Hi,
We are using a bihash where the key is the 5-tuple of the packet. Different
types of free lists are maintained, and once an entry is allocated and later
removed from the bihash, that memory still remains with the bihash for
future use.
The problem we are seeing is consistent growth in the memory