@Maxim: after PR327 is merged I'll make the changes to replace
scheduler mode with direct pktio mode.
@Honnappa: Right now the sender mode is using one pktout_queue per
thread. It can be modified to use more interfaces (more pktout_queues),
of course, but I would vote for faster interfaces (40G?).
On Thu, Dec 7, 2017 at 10:12 PM, Honnappa Nagarahalli <
honnappa.nagaraha...@linaro.org> wrote:
On 7 December 2017 at 17:36, Bill Fischofer wrote:
>
>
> On Thu, Dec 7, 2017 at 3:17 PM, Honnappa Nagarahalli
> wrote:
>>
>> This experiment clearly shows the need for providing an API in ODP.
>>
>> On ODP2.0 implementations such an API
Branch: refs/heads/2.0
Home: https://github.com/Linaro/odp
Commit: 80b0d616f750e824a4bf0b0f3406ea3a3173ae0e
https://github.com/Linaro/odp/commit/80b0d616f750e824a4bf0b0f3406ea3a3173ae0e
Author: Honnappa Nagarahalli
Date: 2017-12-08 (Fri, 08 Dec
From: Dmitry Eremin-Solenikov
Require the IPv4/IPv6 flag to be set for IPsec processing. Hardware
usually requires this, so require the application to set the flag, since
it knows the packet type.
Signed-off-by: Dmitry Eremin-Solenikov
---
/** Email created from pull request 328
From: Dmitry Eremin-Solenikov
IPsec allows one to use NH=59 packets to mangle traffic/packet
statistics. Add an event to deliver information about such packets to the
application. Delivery of such events in SYNC mode is TBD.
Signed-off-by: Dmitry Eremin-Solenikov
Support TFC features:
- TFC padding on TX and RX via adding additional data after the packet payload
- TFC packets on RX via ODP_IPSEC_STATUS event. TX and RX SYNC are TBD.
/** Email created from pull request 329 (lumag:ipsec-tfc)
**
From: Dmitry Eremin-Solenikov
It is possible to include TFC padding in ESP tunnel-mode packets.
Document the usage of such padding according to the RFC.
Signed-off-by: Dmitry Eremin-Solenikov
---
/** Email created from pull request 329
On Thu, Dec 7, 2017 at 3:17 PM, Honnappa Nagarahalli <
honnappa.nagaraha...@linaro.org> wrote:
> This experiment clearly shows the need for providing an API in ODP.
>
> On ODP2.0 implementations such an API will be simple enough (constant
> subtraction), requiring no additional storage in VLIB.
>
Michal, can you send a PR to ODP for the API so that we can debate the
feasibility of the API for
On Thu, Dec 7, 2017 at 12:22 PM, Michal Mazur
wrote:
On Thu, Dec 7, 2017 at 12:55 PM, Honnappa Nagarahalli <
honnappa.nagaraha...@linaro.org> wrote:
> On 7 December 2017 at 08:01, Bogdan Pricope
> wrote:
> > TX is at line rate. Probably will get RX at line rate in direct mode,
> too.
> > Problem is how can you see the
On 7 December 2017 at 08:01, Bogdan Pricope wrote:
> TX is at line rate. Probably will get RX at line rate in direct mode, too.
> Problem is how can you see the performance increase/degradation if you
> can process more than line rate with one core?
Any possibility to
Native VPP+DPDK plugin knows the size of rte_mbuf header and subtracts it
from the vlib pointer.
struct rte_mbuf *mb0 = rte_mbuf_from_vlib_buffer (b0);
#define rte_mbuf_from_vlib_buffer(x) (((struct rte_mbuf *)x) - 1)
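The same trick could be sketched for ODP in plain C. The struct layout and names below are purely illustrative (not actual ODP internals): the point is that recovering the owning header from a user-area pointer is a single constant subtraction, with no extra pointer stored in the VLIB buffer.

```c
#include <stddef.h>

/* Hypothetical layout: packet metadata followed by the user area,
 * mirroring how rte_mbuf_from_vlib_buffer() recovers the mbuf. */
struct pkt_hdr {
    long meta[8];        /* stand-in for implementation metadata */
    char user_area[64];  /* the VLIB buffer would live here */
};

/* Recover the packet header from a user-area pointer by constant
 * subtraction -- the same trick as the DPDK macro above. */
#define pkt_from_user_area(ua) \
    ((struct pkt_hdr *)((char *)(ua) - offsetof(struct pkt_hdr, user_area)))
```

As long as the offset is a compile-time constant, the lookup touches no extra cache line.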
On 7 December 2017 at 19:02, Bill Fischofer
wrote:
Ping to others on the mailing list for opinions on this. What does "native"
VPP+DPDK get and how is this problem solved there?
On Thu, Dec 7, 2017 at 11:55 AM, Michal Mazur
wrote:
> The _odp_packet_inline is common for all packets and takes up to two
> cachelines (it
Yes, but _odp_packet_inline.udate is clearly not in the VLIB cache line
either, so it's a separate cache line access. Are you seeing this
difference in real runs or microbenchmarks? Why isn't the entire VLIB being
prefetched at dispatch? Sequential prefetching should add negligible
overhead.
It seems that only the first cache line of the VLIB buffer is in L1; the
new pointer can be placed only in the second cache line.
Using a constant offset between the user area and the ODP header I get 11
Mpps, with the pointer stored in the VLIB buffer only 10 Mpps, and with
this new API 10.6 Mpps.
On 7 December 2017 at 18:04, Bill
Of interest to the ODP community.
-- Forwarded message --
From: P4.org
Date: Thu, Dec 7, 2017 at 10:45 AM
Subject: [P4-announce] [Blog Post] P4 Runtime - Putting the Control Plane
in Charge of the Forwarding Plane.
To: p4-annou...@lists.p4.org, p4-...@lists.p4.org,
How would calling an API be better than referencing the stored data
yourself? A cache line reference is a cache line reference, and presumably
the VLIB buffer is already in L1 since it's your active data.
On Thu, Dec 7, 2017 at 10:45 AM, Michal Mazur
wrote:
From: Maxim Uvarov
Signed-off-by: Maxim Uvarov
---
/** Email created from pull request 309 (muvarov:devel/master_shippable2)
** https://github.com/Linaro/odp/pull/309
** Patch: https://github.com/Linaro/odp/pull/309.patch
** Base sha:
From: Maxim Uvarov
Pass args to cunit to make the code common with other tests.
Signed-off-by: Maxim Uvarov
---
/** Email created from pull request 309 (muvarov:devel/master_shippable2)
** https://github.com/Linaro/odp/pull/309
** Patch:
/** Email created from pull request 309 (muvarov:devel/master_shippable2)
** https://github.com/Linaro/odp/pull/309
** Patch: https://github.com/Linaro/odp/pull/309.patch
** Base sha: c15a810b7a47f2e07200f83aa534163ca06e2b16
** Merge commit sha:
Hi,
For the odp4vpp plugin we need a new API function which, given a user
area pointer, will return a pointer to the ODP packet buffer. It is
needed when packets processed by VPP are sent back to ODP and only a
pointer to the VLIB buffer data (stored inside the user area) is known.
I have tried to store the ODP
https://bugs.linaro.org/show_bug.cgi?id=3051
--- Comment #5 from Dmitry Eremin-Solenikov
---
I think this bug should have been closed as invalid long ago.
--
You are receiving this mail because:
You are on the CC list for the bug.
https://bugs.linaro.org/show_bug.cgi?id=3210
--- Comment #7 from Dmitry Eremin-Solenikov
---
We have to add checksum checking if it was not done by the hardware.
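A software fallback for the IPv4 header check can be sketched in plain C. This is the standard RFC 1071 Internet checksum, shown here only as an illustration, not ODP's actual implementation: a header whose checksum field is correct folds to 0xffff, so the function returns 0 for a valid header.

```c
#include <stdint.h>
#include <stddef.h>

/* RFC 1071 Internet checksum over a buffer. For an IPv4 header with a
 * correct checksum field, the folded sum is 0xffff, so the one's
 * complement returned here is 0. */
static uint16_t ip_checksum(const void *buf, size_t len)
{
    const uint8_t *p = buf;
    uint32_t sum = 0;

    while (len > 1) {                    /* sum 16-bit big-endian words */
        sum += ((uint32_t)p[0] << 8) | p[1];
        p += 2;
        len -= 2;
    }
    if (len)                             /* odd trailing byte */
        sum += (uint32_t)p[0] << 8;
    while (sum >> 16)                    /* fold carries back in */
        sum = (sum & 0xffff) + (sum >> 16);

    return (uint16_t)~sum;
}
```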
https://bugs.linaro.org/show_bug.cgi?id=2903
--- Comment #12 from Bill Fischofer ---
Ping to Bala. RED has now been merged. Are you going to be able to address this
for Tiger Moth?
https://bugs.linaro.org/show_bug.cgi?id=2988
--- Comment #14 from Bill Fischofer ---
Ping to Dmitry. Do we still want to pursue this?
https://bugs.linaro.org/show_bug.cgi?id=3051
--- Comment #4 from Bill Fischofer ---
Ping to Dmitry. Is this something we still need to pursue? It seems to me
./configure has improved in this area recently.
https://bugs.linaro.org/show_bug.cgi?id=3210
Bill Fischofer changed:
What|Removed |Added
Assignee|maxim.uva...@linaro.org
https://bugs.linaro.org/show_bug.cgi?id=3241
Bill Fischofer changed:
What|Removed |Added
Resolution|--- |FIXED
Branch: refs/heads/master
Home: https://github.com/Linaro/odp
Commit: f49d7bad7316ed0be807f37984908fc37da68004
https://github.com/Linaro/odp/commit/f49d7bad7316ed0be807f37984908fc37da68004
Author: Maxim Uvarov
Date: 2017-12-07 (Thu, 07 Dec 2017)
On 7 December 2017 at 17:01, Bogdan Pricope
wrote:
TX is at line rate. Probably will get RX at line rate in direct mode, too.
Problem is how can you see the performance increase/degradation if you
can process more than line rate with one core?
I guess... enable the csum option...?
On 7 December 2017 at 15:46, Maxim Uvarov
From: Dmitry Eremin-Solenikov
This is to remove the following error:
pktio/dpdk.c: In function ‘mbuf_data_off’:
pktio/dpdk.c:126:9: error: cast from pointer to integer of different size
[-Werror=pointer-to-int-cast]
return
From: Dmitry Eremin-Solenikov
The -msse4.2 flag should be used only for i?86/x86_64 targets; it does
not make sense for any other target.
Signed-off-by: Dmitry Eremin-Solenikov
---
/** Email created from pull request 321
From: Dmitry Eremin-Solenikov
Debian unstable ships with DPDK 16.11. Add support for building with
this version.
Signed-off-by: Dmitry Eremin-Solenikov
---
/** Email created from pull request 321 (lumag:dpdk-system-master)
From: Dmitry Eremin-Solenikov
Support using a distro-installed DPDK for Pkt I/O. Distributions (like
Ubuntu, Debian, etc.) have started providing DPDK packages. It is wise to
enable users to use the distro-provided DPDK version rather than
requiring them to always compile it from
From: Dmitry Eremin-Solenikov
Compile and use DPDK when doing cross-compilation tests. This enables us
to test that pktio/dpdk.c works on non-x86 targets.
Signed-off-by: Dmitry Eremin-Solenikov
---
/** Email created from
From: Dmitry Eremin-Solenikov
Separate DPDK macros to top-level file so that they can be reused by
other implementations (like ODP-DPDK).
Signed-off-by: Dmitry Eremin-Solenikov
---
/** Email created from pull request 321
Nice. TX is at line rate, right? The next step is probably to add an RX
path without the scheduler. Then we will have a good testing environment.
On 7 December 2017 at 16:12, Bogdan Pricope
wrote:
More results with odp_generator in the lava setup:
7.6 mpps (TX) / 5.9 mpps (RX) - api-next with PR313 (Petri)
8.3 mpps (TX) / 6.3 mpps (RX) - api-next with PR313 (Petri) +
remove 1 ms sleep + replace atomic counters
14.8 mpps (TX) / 6.5 mpps (RX) - api-next with PR313 (Petri) +
remove 1 ms sleep
From: Bogdan Pricope
Use of atomic counters reduces total packet throughput.
Signed-off-by: Bogdan Pricope
---
/** Email created from pull request 327
(bogdanPricope:master_update_generator_pr)
** https://github.com/Linaro/odp/pull/327
From: Bogdan Pricope
The 1 ms sleep is reducing total packet throughput.
Signed-off-by: Bogdan Pricope
---
/** Email created from pull request 327
(bogdanPricope:master_update_generator_pr)
** https://github.com/Linaro/odp/pull/327
**
From: Bogdan Pricope
Bigger TX bursts may increase packet throughput.
Signed-off-by: Bogdan Pricope
---
/** Email created from pull request 327
(bogdanPricope:master_update_generator_pr)
** https://github.com/Linaro/odp/pull/327
**
Various changes to increase performance of the odp_generator example
application:
- remove the 1 ms sleep in the send loop
- increase the maximum TX burst size to 512
- add a configuration option for checksum support
- replace atomic counters with per-worker counters
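The counter change in the list above can be sketched as follows (a simplified plain-C stand-in; the generator's real structures differ): each worker increments a private, cache-line-padded counter on the hot path, and totals are summed only when statistics are printed, so no atomic read-modify-write bounces a shared cache line between cores.

```c
#include <stdint.h>

#define MAX_WORKERS 32

/* One counter per worker, padded to a (typical) 64-byte cache line so
 * increments on different cores do not false-share a line. */
typedef struct {
    uint64_t pkts;
    uint8_t pad[64 - sizeof(uint64_t)];
} worker_stat_t;

static worker_stat_t stats[MAX_WORKERS];

/* Hot path: plain increment of a private counter, no atomics. */
static inline void count_pkt(int worker_id)
{
    stats[worker_id].pkts++;
}

/* Cold path: aggregate across workers only when reporting. */
static uint64_t total_pkts(int num_workers)
{
    uint64_t sum = 0;

    for (int i = 0; i < num_workers; i++)
        sum += stats[i].pkts;
    return sum;
}
```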
Matias Elo(matiaselo) replied on github web page:
platform/linux-generic/pktio/dpdk.c
line 24
@@ -29,14 +29,27 @@
#include
#include
+#if __GNUC__ >= 7
+#pragma GCC diagnostic push
+#pragma GCC diagnostic warning "-Wimplicit-fallthrough=0"
+#endif
#include
+#if __GNUC__ >= 7
+#pragma GCC
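For reference, the complete guard the diff appears to be building looks like the sketch below. The DPDK include elided in the mail is replaced here by a stand-in function with an intentional fallthrough: the diagnostic state is pushed, -Wimplicit-fallthrough is disabled around the offending code on GCC 7+, then popped so the warning stays active for the rest of the file.

```c
#if defined(__GNUC__) && __GNUC__ >= 7
#pragma GCC diagnostic push
#pragma GCC diagnostic warning "-Wimplicit-fallthrough=0"
#endif

/* Stand-in for the third-party (DPDK) header: a switch with an
 * intentional fallthrough that GCC 7+ would otherwise flag. */
static int fallthrough_demo(int x)
{
    int r = 0;

    switch (x) {
    case 0:
        r += 1; /* intentionally falls through to case 1 */
    case 1:
        r += 2;
        break;
    default:
        r = -1;
    }
    return r;
}

#if defined(__GNUC__) && __GNUC__ >= 7
#pragma GCC diagnostic pop
#endif
```

Note that "-Wimplicit-fallthrough=0" disables the warning entirely; the push/pop pair keeps the change scoped to the include rather than the whole translation unit.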