On 19 Apr 2019, at 0:11, William Tu wrote:
Hi Eelco,
Thanks for your feedback!
Not necessarily all ports.
On an OVS switch, you can have some ports supporting AF_XDP,
while other ports are of other types, e.g. DPDK vhost or tap.
But I’m wondering how you would deal with ports that do not support this
at the driver level? Will you fall back to skb mode, and will you report
this (it’s interesting to know from a performance perspective)?
Guess I just need to look at your code :)
I'm adding an option when adding the port,
something like options:xdpmode=drv or options:xdpmode=skb.
I put the patch here:
https://github.com/williamtu/ovs-ebpf/commit/ef2bfe15db55ecd629cdb75cbc90c7be613745e3
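For example, usage would look something like this (just a sketch of the
intended interface; the bridge and port names are illustrative):
// request native/driver-mode XDP; xdpmode=skb would fall back to the
// generic (skb-based) XDP path
# ovs-vsctl add-port br0 eth0 -- set interface eth0 type="afxdp" \
    options:xdpmode=drv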
Nice! Will review your next patch in detail!
There are also features like traffic shaping that will not work. Maybe
it would be worth adding a table for AF_XDP in
http://docs.openvswitch.org/en/latest/faq/releases/
Right, when using AF_XDP, we don't have QoS support.
If people want to do rate limiting on an AF_XDP port, another
way is to use OpenFlow meter actions.
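For example, something along these lines (a rough sketch; the meter id,
rate, and port number are illustrative, and meters need OpenFlow 1.3+):
// drop traffic above ~1000 kbps on meter 1
# ovs-ofctl -O OpenFlow13 add-meter br0 \
    'meter=1 kbps burst stats bands=type=drop rate=1000 burst_size=100'
// send packets arriving on port 1 through the meter, then forward
# ovs-ofctl -O OpenFlow13 add-flow br0 'in_port=1 actions=meter:1,normal'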
That for me was the only thing that stood out, but I just want to make
sure no other things were abstracted away in the DPDK APIs…
Guess you could use the DPDK meter framework to support the same QoS as
DPDK; the only catch is that you also need DPDK enabled.
Right. We can try
./configure --with-dpdk --with-afxdp
Yes, this way policing is supported; if compiled without DPDK it’s not.
Guess we need to give it some thought to see how to warn about this, etc.
You said that your goal for the next version is to improve performance
and add optimizations. Do you think that is important before we merge the
series? We can continue to improve performance after it is merged.
The previous patch was rather unstable and I could not get it running
with the PVP test without crashing. I think this patchset should get some
proper testing and reviews by others, especially for all the features
being marked as supported in the above-mentioned table.
Yes, Tim has been helping a lot to test this and I have a couple of new
fixes. I will incorporate them into the next version.
Cool, I’ll talk to Tim offline; in addition, copy me on the next patch
and I’ll check it out.
Do you have a time frame, so I can do the review based on that revision?
OK, I plan to incorporate your and Tim's feedback and resubmit the next
version next Monday (4/22).
I’m back from PTO the 30th, so take whatever time you need…
If we set performance aside, do you have a reason to want to wait to
merge this? (I wasn't able to easily apply this series to current master,
so it'll need at least a rebase before we apply it. And I have only
skimmed it, not fully reviewed it.)
Yes, this patchset only allows 1 pmd and 1 queue.
I'm adding multiqueue support.
We need some alignment here on how we add threads for XDP vs DPDK PMDs.
If there are not enough cores for both, the system will not start
(EMERGENCY exit). And the user might also want to control which cores run
DPDK and which run XDP.
Yes, my plan is to use the same command-line interface as OVS-DPDK,
i.e. pmd-cpu-mask and pmd-rxq-affinity.
For example, with 4 pmds:
// 0x36 = cores 1,2,4,5 -> 4 pmd threads
# ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x36
// the AF_XDP device uses 2 of them (cores 1 and 2)
# ovs-vsctl add-port br0 enp2s0 -- set interface enp2s0 type="afxdp" \
    options:n_rxq=2 options:xdpmode=drv \
    other_config:pmd-rxq-affinity="0:1,1:2"
// another DPDK vhost-user device can use the other 2 pmds (cores 4 and 5)
# ovs-vsctl add-port br0 vhost-user0 -- set interface vhost-user0 \
    type="dpdkvhostuser" \
    other_config:pmd-rxq-affinity="0:4,1:5"
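You can then verify the resulting rxq-to-pmd assignment with the existing
pmd-rxq-show command, e.g.:
# ovs-appctl dpif-netdev/pmd-rxq-show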
In real life, people might not always use pmd-rxq-affinity, especially
now that people are looking into dynamic re-assignment based on traffic
patterns.
However, the real problem I was referring to is how to assign specific
cores to DPDK PMDs vs AF_XDP PMDs.
First, if you enable DPDK and AF_XDP and only have a single core, OVS
crashes (forced exit). I think it should just warn in the log and
continue.
Secondly, there is no control over which core is used by which type. If
you have two hyperthreading pairs, you might want to use one sibling set
for AF_XDP and one for DPDK. And that is not even talking about NUMA
awareness yet, which I think also needs to be taken care of.
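One way this could look (purely hypothetical options, sketched only to
illustrate the kind of control I mean; nothing like this exists today):
// hypothetical: dedicate one sibling pair (cores 1,2) to AF_XDP pmds
# ovs-vsctl set Open_vSwitch . other_config:afxdp-pmd-cpu-mask=0x06
// hypothetical: dedicate the other pair (cores 5,6) to DPDK pmds
# ovs-vsctl set Open_vSwitch . other_config:dpdk-pmd-cpu-mask=0x60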
and in DPDK mode we use multiple queues to distribute the load; with
this scenario does it double the number of CPUs used? Can we use the
poll() mode as explained here,
https://linuxplumbersconf.org/event/2/contributions/99/, and how will it
work with multiple queues/pmd threads? What about latency tests? Is it
worse or better than kernel/DPDK? Also, with the AF_XDP datapath there is
no way to leverage hardware offload, like DPDK and TC can. And then there
is the part that it only works on the most recent kernels.
You have lots of good points here.
My experiments show that it's slower than DPDK, but much faster than the
kernel datapath.
Looking forward to your improvement patch, as for me it’s about 10x
slower than the kernel with a single queue (see other email).
Thanks
Regards,
William
To me, looking at this, I would say it’s far from being ready to be
merged into OVS. However, if others decide to go ahead, I think it should
be disabled, not compiled in by default.
I agree. This should be an experimental feature and we're adding
something like
# ./configure --enable-afxdp
so it is not compiled in by default.
Thanks
William