Hi Billy,

See my replies inline.

Br,
Wang Zhike

-----Original Message-----
From: O Mahony, Billy [mailto:[email protected]] 
Sent: Wednesday, September 06, 2017 7:26 PM
To: 王志克; Darrell Ball; [email protected]; [email protected]; 
Kevin Traynor
Subject: RE: [ovs-dev] OVS DPDK NUMA pmd assignment question for physical port

Hi Wang,

You are going to have to take the hit crossing the NUMA boundary at some point 
if your NIC and VM are on different NUMAs.

So are you saying that it is more expensive to cross the NUMA boundary from the 
pmd to the VM than to cross it from the NIC to the PMD?

[Wang Zhike] I do not have such data yet. I hope we can try the new behavior, 
get the test results, and then know whether and by how much performance can be 
improved.
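
For the measurement, something like the following should work (a minimal 
sketch, assuming the DPDK datapath; "avg processing cycles per packet" is 
reported per PMD):

    # reset the PMD counters, run the traffic, then read the stats
    ovs-appctl dpif-netdev/pmd-stats-clear
    ovs-appctl dpif-netdev/pmd-stats-show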

If so, then in that case you'd like to have two (for example) PMDs polling 2 
queues on the same NIC, with a PMD on each of the NUMA nodes forwarding to the 
VMs local to that NUMA node?
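
(A sketch of such a setup, assuming a physical port named dpdk0 with two rx 
queues and a PMD core on each socket; the core ids 1 and 17 below are 
placeholders and depend on the machine's core numbering:

    # two rx queues on the physical port
    ovs-vsctl set Interface dpdk0 options:n_rxq=2
    # pin queue 0 to a socket-0 core and queue 1 to a socket-1 core
    ovs-vsctl set Interface dpdk0 other_config:pmd-rxq-affinity="0:1,1:17"

Note that the pinned PMDs become isolated, as discussed further down the 
thread.)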

Of course your NIC would then also need to be able to know which VM (or at 
least which NUMA node the VM is on) in order to send the frame to the correct 
rxq.

[Wang Zhike] Currently I do not know how to achieve that. From my point of 
view, the NIC does not know which NUMA node should be the destination of a 
packet. Only after OVS processing (e.g. looking up the forwarding rules in 
OVS) can the destination be known. Since the NIC does not know the destination 
NUMA socket, it does not matter which PMD polls it.


/Billy. 

> -----Original Message-----
> From: 王志克 [mailto:[email protected]]
> Sent: Wednesday, September 6, 2017 11:41 AM
> To: O Mahony, Billy <[email protected]>; Darrell Ball
> <[email protected]>; [email protected]; ovs-
> [email protected]; Kevin Traynor <[email protected]>
> Subject: RE: [ovs-dev] OVS DPDK NUMA pmd assignment question for
> physical port
> 
> Hi Billy,
> 
> It depends on the destination of the traffic.
> 
> I observed that if the traffic destination is across the NUMA boundary, the
> "avg processing cycles per packet" increases by about 60% compared to
> traffic to the same NUMA socket.
> 
> Br,
> Wang Zhike
> 
> -----Original Message-----
> From: O Mahony, Billy [mailto:[email protected]]
> Sent: Wednesday, September 06, 2017 6:35 PM
> To: 王志克; Darrell Ball; [email protected]; ovs-
> [email protected]; Kevin Traynor
> Subject: RE: [ovs-dev] OVS DPDK NUMA pmd assignment question for
> physical port
> 
> Hi Wang,
> 
> If you create several PMDs on the NUMA node of the physical port, does that
> have the same performance characteristics?
> 
> /Billy
> 
> 
> 
> > -----Original Message-----
> > From: 王志克 [mailto:[email protected]]
> > Sent: Wednesday, September 6, 2017 10:20 AM
> > To: O Mahony, Billy <[email protected]>; Darrell Ball
> > <[email protected]>; [email protected]; ovs-
> > [email protected]; Kevin Traynor <[email protected]>
> > Subject: RE: [ovs-dev] OVS DPDK NUMA pmd assignment question for
> > physical port
> >
> > Hi Billy,
> >
> > Yes, I want to achieve better performance.
> >
> > The commit "dpif-netdev: Assign ports to pmds on non-local numa node"
> > can NOT meet my needs.
> >
> > I do have a PMD on socket 0 to poll the physical NIC, which is also on
> > socket 0. However, this is not enough, since I also have other PMDs on
> > socket 1. I hope those PMDs on socket 1 can poll the physical NIC as well.
> > In this way, we have more CPUs (in my case, double the CPUs) to poll the
> > NIC, which results in a performance improvement.
> >
> > BR,
> > Wang Zhike
> >
> > -----Original Message-----
> > From: O Mahony, Billy [mailto:[email protected]]
> > Sent: Wednesday, September 06, 2017 5:14 PM
> > To: Darrell Ball; 王志克; [email protected]; ovs-
> > [email protected]; Kevin Traynor
> > Subject: RE: [ovs-dev] OVS DPDK NUMA pmd assignment question for
> > physical port
> >
> > Hi Wang,
> >
> > A change was committed to the head of master on 2017-08-02, "dpif-netdev:
> > Assign ports to pmds on non-local numa node", which, if I understand
> > your request correctly, will do what you require.
> >
> > However, it is not clear to me why you are pinning rxqs to PMDs in the
> > first instance. Currently, if you configure at least one PMD on each
> > NUMA node, there should always be a PMD available. Is the pinning for
> > performance reasons?
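> >
> > (For reference, a minimal sketch of "at least one PMD on each NUMA
> > node"; the mask assumes core 1 is on socket 0 and core 17 on socket 1,
> > which will differ per machine:
> >
> >     # pmd-cpu-mask is a hex mask of core ids: (1<<1)|(1<<17) = 0x20002
> >     ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x20002
> > )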
> >
> > Regards,
> > Billy
> >
> >
> >
> > > -----Original Message-----
> > > From: Darrell Ball [mailto:[email protected]]
> > > Sent: Wednesday, September 6, 2017 8:25 AM
> > > To: 王志克 <[email protected]>; [email protected]; ovs-
> > > [email protected]; O Mahony, Billy <[email protected]>;
> > Kevin
> > > Traynor <[email protected]>
> > > Subject: Re: [ovs-dev] OVS DPDK NUMA pmd assignment question for
> > > physical port
> > >
> > > Adding Billy and Kevin
> > >
> > >
> > > On 9/6/17, 12:22 AM, "Darrell Ball" <[email protected]> wrote:
> > >
> > >
> > >
> > >     On 9/6/17, 12:03 AM, "王志克" <[email protected]> wrote:
> > >
> > >         Hi Darrell,
> > >
> > >         pmd-rxq-affinity has the limitation below (so an isolated PMD
> > > cannot be used for other queues, which is not what I expect; lots of
> > > VMs come and go on the fly, and manual assignment is not feasible):
> > >                   >>After that PMD threads on cores where RX queues
> > > was pinned will become isolated. This means that this thread will
> > > poll only pinned RX queues
> > >
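> > >         (For reference, a minimal sketch of such manual pinning,
> > > with example port/queue/core ids; after this, the PMDs on cores 3
> > > and 7 are isolated and poll only the pinned queues:
> > >
> > >             ovs-vsctl set Interface dpdk0 \
> > >                 other_config:pmd-rxq-affinity="0:3,1:7"
> > >         )
> > >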
> > >         My problem is that I have several CPUs spread across
> > > different NUMA nodes. I hope all of these CPUs can have a chance to
> > > serve the rxqs. However, because the physical NIC is located on one
> > > particular socket, PMDs/CPUs on the non-local NUMA node are excluded.
> > > So I am wondering whether we can have a different behavior for
> > > physical port rxqs:
> > >               round-robin to all PMDs, even PMDs on a different NUMA
> > > socket.
> > >
> > >         I guess this is a common case, and I believe it would
> > > improve rx performance.
> > >
> > >
> > >     [Darrell] I agree it would be a common problem, and some
> > > distribution would seem to make sense, maybe factoring in some
> > > favoring of local-NUMA PMDs?
> > >                     Maybe an optional config to enable it?
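> > >                     (For illustration only, such a config might
> > > look like the following; the option name is hypothetical and does
> > > not exist today:
> > >
> > >                         # hypothetical knob: round-robin phy rxqs
> > >                         # across all PMDs, including non-local NUMA
> > >                         ovs-vsctl set Open_vSwitch . \
> > >                             other_config:pmd-rxq-cross-numa=true
> > >                     )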
> > >
> > >
> > >         Br,
> > >         Wang Zhike
> > >
> > >

_______________________________________________
dev mailing list
[email protected]
https://mail.openvswitch.org/mailman/listinfo/ovs-dev
