Hello,
I am trying to set up L3-only RSS for an E810 (ice) NIC by following this
document:
https://doc.dpdk.org/dts/test_plans/fortville_rss_input_test_plan.html
While the l3-src-only and l3-dst-only cases work perfectly fine and
produce the same RSS value for all 10 test packets,
On 2020-12-12 11:46, Alex Kiselev wrote:
On 2020-12-12 11:22, Singh, Jasvinder wrote:
On 12 Dec 2020, at 01:45, Alex Kiselev wrote:
On 2020-12-12 01:54, Alex Kiselev wrote:
On 2020-12-12 01:45, Alex Kiselev wrote:
On 2020-12-12 01:20, Singh, Jasvinder wrote:
On 11 Dec 2020, at 23:37, Alex Kiselev wrote:
On 2020-12-11 23:55, Singh, Jasvinder wrote:
On 11 Dec 2020, at 22:27, Alex Kiselev wrote:
On 2020-12-11 23:06, Singh, Jasvinder wrote:
On 11 Dec 2020, at 21:29, Alex Kiselev wrote:
On 2020-12-08 14:24, Singh, Jasvinder wrote:
> [JS] now, returning to the 1 mbps pipes situation, try reducing the tc period,
> first at the subport and then at the pipe level, if that helps in getting even
> traffic across low bandwidth pipes.
Reducing the subport tc period from 10 to 5 also solved the problem.
On 2020-12-07 23:32, Singh, Jasvinder wrote:
On 7 Dec 2020, at 22:16, Alex Kiselev wrote:
On 2020-12-07 21:34, Alex Kiselev wrote:
On 2020-12-07 20:29, Singh, Jasvinder wrote:
On 7 Dec 2020, at 19:09, Alex Kiselev wrote:
On 2020-12-07 20:07, Alex Kiselev wrote:
On 2020-12-07 19:18, Alex Kiselev wrote:
On 2020-12-07 18:59, Singh, Jasvinder wrote:
On 7 Dec 2020, at 17:45, Alex Kiselev wrote:
On 2020-12-07 18:31, Singh, Jasvinder wrote:
-----Original Message-----
From: Alex Kiselev
Sent: Monday, December 7, 2020 4:50 PM
To: Singh, Jasvinder
Cc: users@dpdk.org; Dumitrescu, Cristian
;
Dharmappa, Savinay
Subject: Re: [dpdk-users] scheduler issue
On 2020-12-07 12:32, Singh, Jasvinder wrote:
-----Original Message-----
From: Alex Kiselev
Sent: Monday, December 7, 2020 10:46 AM
To: Singh, Jasvinder
Cc: users@dpdk.org; Dumitrescu, Cristian
;
Dharmappa, Savinay
Subject: Re: [dpdk-users] scheduler issue
On 2020-12-07 11:00, Singh, Jasvinder wrote:
-----Original Message-----
From: users On Behalf Of Alex Kiselev
Sent: Friday, November 27, 2020 12:12 PM
To: users@dpdk.org
Cc: Dumitrescu, Cristian
Subject: Re: [dpdk-users] scheduler issue
On 2020-11-25 16:04, Alex Kiselev wrote:
On 2020-11-24 16:34, Alex Kiselev wrote:
Hello,
I am facing a problem with the scheduler library DPDK 18.11.10 with
default scheduler settings (RED is off).
It seems like some of the pipes (last time it was 4 out of 600 pipes)
start incorrectly dropping most of the traffic after a couple of days of
successful work.
So far I've
On Sun, Jun 7, 2020, 10:11 Alex Kiselev wrote:
On 2020-06-07 17:21, Cliff Burdick wrote:
The mbuf pool should be configured to be the size of the largest packet
you expect to receive. If you're getting packets longer than that, I
would expect you to see problems. Same goes for transmitting; I
But the crash happened while receiving packets, that's why
I am wondering whether the bugs I found in the TX code could cause the crash
in RX.
On Sun, Jun 7, 2020, 06:36 Alex Kiselev wrote:
On 2020-06-07 15:16, Cliff Burdick wrote:
That shouldn't matter. The mbuf size is allocated when you
you checked to see if it's potentially a hugepage issue?
Please, explain.
The app had been working for two months before the crash
and the load was 3-4 gbit/s, so no, I don't think that
something is wrong with hugepages on that machine.
On Sun, Jun 7, 2020, 02:59 Alex Kiselev wrote:
On 2020-06
that translates nb_rx to vec_size,
since that code is double checked.
My actual question now is about the possible impact of using
incorrect values in the mbuf's pkt_len and data_len fields.
On Sat, Jun 6, 2020 at 5:59 AM Alex Kiselev
wrote:
On 1 June 2020, at 19:17, Stephen Hemminger wrote:
On Mon, 01 Jun 2020 15:24:25 +0200
Alex Kiselev wrote:
Hello,
I've got a segmentation fault error in my data plane path.
I am pretty sure the code where the segfault happened is ok,
so my guess is that I somehow received a corrupted mbuf.
How could I troubleshoot this? Is there any way?
Is it possible that other threads of the application
corrupted
Hi,
I am facing a strange behavior of my DPDK application on one particular
machine.
It takes about 30 seconds for the application to finish DPDK
initialization
procedures, while normally it takes less than 5-6 seconds.
During that time perf top shows that eal_memalloc_is_contig()
is
It looks like I've found a way:
I edited $RTE_SDK/config/defconfig_x86_64-native-linuxapp-gcc
CONFIG_RTE_MACHINE="x86-64"
and then I used
make EXTRA_CFLAGS="-march=x86-64 -mpopcnt -mmmx -msse -msse2 -msse3 -mssse3
-msse4.1 -msse4.2 -mfpmath=sse -mpopcnt"
The application has started
Hi.
Is it possible to build a DPDK application on a machine that does have AVX support
for use on another machine that doesn't have the AVX feature?
Thanks.
series NICs. So, I am looking for
a solution to spread PPPoE flows to different queues
on x520 or mellanox NICs.
> On 18 Dec 2018, at 21:36, Stephen Hemminger
> wrote:
>
> On Tue, 18 Dec 2018 20:02:06 +0300
> Alex Kiselev wrote:
>
>> Hi.
>>
>> Is it po
Hi.
Is it possible to configure the Intel 82599 NIC's RSS function to calculate the
RSS hash value based only on the L2 src and dst addresses for non-IP packets?
It's possible to do so with Intel x710 cards, but I haven't found the same
feature for 82599. Have I missed something? Or is it a unique
Hi.
Is it possible to configure Mellanox NICs' RSS function to calculate the RSS hash
value based on the L2 src and dst addresses only for non-IP packets?
Thanks.
--
Alex
for this issue ?
>
> Really appreciate your response.
>
> Thanks,
> Ananda
>
>
> -----Original Message-----
> From: users On Behalf Of Alex Kiselev
> Sent: Sunday, May 27, 2018 9:09 AM
> To: Xing, Beilei ; users@dpdk.org; Zhang, Qi Z
>
> Subject: Re: [dpdk
Hi.
Is it safe to send an mbuf clone to a KNI interface?
I had a lot of trouble while doing so. At first my KNI
interfaces stopped working after a while, while my application
was still forwarding packets. Then my app started crashing almost
immediately. Then I changed the code a little bit
and rather
> Waiting for lcores to finish...
>
> -- Forward statistics for port 0
> --
> RX-packets: 18 RX-dropped: 0 RX-total: 18
> RX-error: 1
> RX-nombufs:
testpmd>> start
testpmd>> mac_addr add 0 00:00:5E:00:01:0A
testpmd>> stop
testpmd>> start
> After that, packets with destination 01:00:5E:00:00:12 still can be received.
> Best Regards
> Beilei Xing
>> -----Original Message-----
>> From: Alex Kiselev [mailto:kisele...@gmail.com]
>> Sent: Tuesday, May 22, 2018 6:42 P
:5E:00:00:12. When there is no additional mac on a port
everything is ok. Also, there is no such issue when I am using an Intel X520 nic
(ixgbe); I am facing this behavior only with the X710 (i40e) Intel NIC.
DPDK ver dpdk-stable-17.11.1
--
Alex Kiselev.
mmit/drivers/net/bonding/rte_eth_bond_pmd.c?id=55b58a7374554cd1c86f4a13a0e2f54e9ba6fe4d
>
> Are you running with that patch?
No. I wasn't aware of this patch.
I'll try it. Thanks.
>
>> -----Original Message-----
>> From: Alex Kiselev [mailto:kisele...@gmail.com]
>> Sent: Friday, Janua
2018-01-24 17:14 GMT+03:00 Alex Kiselev <kisele...@gmail.com>:
> Hi Kyle.
>
> 2018-01-24 17:01 GMT+03:00 Kyle Larose <klar...@sandvine.com>:
>> Did you set the MTU on the bond port? It has separate configuration IIRC.
I don't see any special API functions for setting MT
regarding the requirement to send LACP packets every N ms)
>
>> -Original Message-
>> From: users [mailto:users-boun...@dpdk.org] On Behalf Of Alex Kiselev
>> Sent: Wednesday, January 24, 2018 8:51 AM
>> To: users
>> Subject: Re: [dpdk-users] xl710 NIC doesn't
problem with the bonding driver. I created
a bond port with four i40e slave ports and placed it in LACP mode.
And the bond port doesn't receive 1518-byte packets.
Please, help me to resolve the issue.
2018-01-24 0:44 GMT+03:00 Alex Kiselev <kisele...@gmail.com>:
> Hi.
>
> It seems t
rte_eth_dev_set_mtu() with different parameters
but nothing has changed.
Have I missed something to configure?
Thanks.
--
Alex Kiselev
that kind of mbuf is used only for passing
messages via rings in my app.
What should I look for in order to find a bug in my app?
Thank you.
P.S.
So far I've got only one occurrence of the bug and I am trying to
reproduce it running
the same stress test.
--
Alex Kiselev
351 is not done (port=1 queue=0)
Jul 14 03:48:56 bizin the_router.lag[22550]: PMD: i40e_xmit_cleanup():
TX descriptor 351 is not
Is it some kind of hardware or firmware problem?
--
Alex Kiselev
was sent 1, slow pkts 0
I still have no clue what could cause such behavior and I am running
out of ideas how to further debug it.
Please, anybody, help! I would love to hear any ideas.
--
Alex Kiselev
re any other things that I can do to troubleshoot this situation?
I would appreciate any help.
Thank you in advance.
--
Alex Kiselev
packets per second. Restarting
my application
helps and tx errors disappear, but only for some time. The tx error rate
is about 200 errors per second.
What could be the cause of those tx errors?
Are there some ways to debug and troubleshoot that situation?
Thank you.
--
Alex Kiselev
(rte_eth_rx_queue_setup)
was ok too.
So, is it a mbuf leak in the bonding driver?
P.S.
Does anybody have a success story working with LACP bonding ports?
--
Alex Kiselev
?
rte_eth_tx_burst(portid, queueid, NULL, 0);
Thank you.
Alex Kiselev.
the bonded interface:
>
> https://github.com/AltraMayor/gatekeeper/blob/master/cps/main.c#L545
>
> Hope that helps,
> Cody
>
> On Thu, Jun 22, 2017 at 5:06 AM, Alex Kiselev <kisele...@gmail.com> wrote:
>>
>> Hello.
>>
>> Is it possible to create a KNI
maintaining one hash table per lcore (thread), and there is no need for
> synchronization in our case. If there are more solutions, please let
> me know. I will compare the solutions, and pick the best suitable one.
>
> Best,
> Qiaobin
>
> On Apr 17, 2017, at 1:30 PM, Alex
I would take a look at:
1) http://preshing.com/20160201/new-concurrent-hash-maps-for-cpp/
2) https://github.com/efficient/libcuckoo
3)
http://high-scale-lib.cvs.sourceforge.net/viewvc/high-scale-lib/high-scale-lib/org/cliffc/high_scale_lib/NonBlockingHashMap.java?view=markup
Hi.
I have just started with the pktgen lua scripting language and got
something that looks like a bug:
pktgen: /usr/src/dpdk-17.02/lib/librte_timer/rte_timer.c:520:
rte_timer_manage: Assertion `lcore_id < 128' failed.
Aborted (core dumped)
I run the pktgen
Hi!
I am working on some stress tests for my packet forwarding engine
and trying to figure out which packet generator tool can help me
to accomplish my tasks with minimum effort. What I need is something
like PktGen but with a kind of traffic flow/session support.
The flow/session in my
freezes for about 5-7 seconds after netlink
commands have been sent to bring up the kni interfaces, and one of the kni interfaces
changes its name to the name of one of the pure linux interfaces
(eth3).
P.S. I am using DPDK 2.2.0.
--
Alex Kiselev
57 matches