Re: [ovs-discuss] Changing Destination IP and MAC Address for TCP Connection

2018-12-06 Thread Darrell Ball


From:  on behalf of karthik karra 

Date: Thursday, December 6, 2018 at 5:35 PM
To: "ovs-discuss@openvswitch.org" 
Subject: [ovs-discuss] Changing Destination IP and MAC Address for TCP 
Connection

Hi All,

I have three VMs
VM1 - 192.168.1.101
VM2 - 192.168.1.102
VM3 - 192.168.1.103
I am using netcat for testing the flows between the VMs.

My scenario is: a netcat server listens on both VM2 and VM3. From VM1, I run 
"nc <server IP> <port>".
But I have a rule in OVS which says: if the source IP is VM1's and the destination 
IP is VM2's, then change the destination IP and MAC address to VM3's, so that 
the flow is redirected to VM3.

This is working fine for UDP flows.

>>> really ?

For TCP, while searching for a solution I found conntrack. I am using OVS 
version 2.6, as I saw in one of the videos that from this version conntrack is 
supported in the userspace datapath.


>> What is the o/p of ‘sudo ovs-vsctl get bridge <bridge> datapath_type’ ?

>> What is the o/p of ‘sudo ovs-vsctl show’ ?
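For illustration, on a userspace (netdev) datapath the first check would typically come back as shown below; the bridge name "ovs_bridge" is taken from the rule further down, and the output is what one would expect, not taken from this thread:

  $ sudo ovs-vsctl get bridge ovs_bridge datapath_type
  netdev

An empty value (the default) or "system" would indicate the bridge is still on the kernel datapath.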


This is the flow rule I am using to match and change the addresses for TCP. I 
have followed the conntrack tutorial provided on the OVS site.
ovs-ofctl add-flow ovs_bridge 
"table=0,priority=50,ct_state=-trk,tcp,nw_src=192.168.1.101,nw_dst=192.168.1.102,tp_dst=8001,idle_timeout=0,actions=ct(table=0),mod_nw_dst=192.168.1.103,mod_dl_dst=08:00:27:be:ce:e0,normal"

>> Are you trying to use conntrack for NATing, or don’t you care? Right now
>> you are doing the NAT outside of conntrack, and in any case conntrack NAT
>> is not in 2.6, assuming you are really using the userspace datapath.

>> The above rule sends one copy of the packet through conntrack, which will
>> not be very successful, while another copy of the packet is subjected to
>> the nw_dst/dl_dst modification and sent on to the other VMs.
>> Normally endpoints talk bidirectionally, but your one rule seems to imply
>> otherwise; can you explain?
>> What is the full o/p of ‘sudo ovs-ofctl dump-flows <bridge>’ ?
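To illustrate the bidirectionality point: if conntrack state is not strictly needed, a purely stateless redirect needs a rewrite rule in each direction, roughly as sketched below. The OpenFlow port numbers (1 for VM1, 3 for VM3) and the "<VM2 MAC>" placeholder are assumptions, not values taken from this thread:

  # VM1 -> "VM2": rewrite the destination to VM3 and deliver to VM3's port
  ovs-ofctl add-flow ovs_bridge "priority=50,tcp,nw_src=192.168.1.101,nw_dst=192.168.1.102,tp_dst=8001,actions=mod_nw_dst:192.168.1.103,mod_dl_dst:08:00:27:be:ce:e0,output:3"
  # VM3 -> VM1 replies: rewrite the source back so VM1 sees them as coming from VM2
  ovs-ofctl add-flow ovs_bridge "priority=50,tcp,nw_src=192.168.1.103,nw_dst=192.168.1.101,tp_src=8001,actions=mod_nw_src:192.168.1.102,mod_dl_src:<VM2 MAC>,output:1"

Without the second rule, VM1 would see SYN-ACKs arriving from 192.168.1.103 rather than the 192.168.1.102 it connected to, so the TCP handshake never completes; UDP with nc does not wait for replies, which is why the UDP case can appear to work with only the first rewrite.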

The SYN packet is still being sent to 102 rather than to 103.

>> You mean the addresses are changed but the packet still does not reach VM3,
>> or the addresses are not even changed from your POV?
>> Maybe the MAC binding is not what you expect; first try sending to the
>> OpenFlow port of VM3 in the above rule and see if that works, i.e. change
>> ‘normal’ to the ‘VM3 port’.
>> You could check ‘sudo ovs-appctl dpif/dump-flows <bridge>’.
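As a concrete way to follow the two suggestions above, something along these lines should work; the interface name "vm3-if" is an assumption and the actual port number will differ:

  # Find VM3's OpenFlow port number
  sudo ovs-vsctl get Interface vm3-if ofport
  # (or list all ports and their numbers with: sudo ovs-ofctl show ovs_bridge)
  # After switching 'normal' to output:<that port>, check what the datapath installed:
  sudo ovs-appctl dpif/dump-flows ovs_bridge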

I have read that even the order of actions is important, and I have also put 
"ct(table=0)" at the end, but it is of no use.

Can anybody please help?

Thanks
Karra


[ovs-discuss] Bridge

2018-12-06 Thread Pranav Godway
Hi,
   I want to understand what is meant by a bridge (br0) exactly.

Regards
Pranav


[ovs-discuss] Changing Destination IP and MAC Address for TCP Connection

2018-12-06 Thread karthik karra
Hi All,

I have three VMs
VM1 - 192.168.1.101
VM2 - 192.168.1.102
VM3 - 192.168.1.103
I am using netcat for testing the flows between the VMs.

My scenario is: a netcat server listens on both VM2 and VM3. From VM1, I run
"nc <server IP> <port>".
But I have a rule in OVS which says: if the source IP is VM1's and the
destination IP is VM2's, then change the destination IP and MAC address to
VM3's, so that the flow is redirected to VM3.

This is working fine for UDP flows.
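For context, the working UDP redirect is presumably a single rewrite rule along these lines; this is my reconstruction for comparison, not the exact rule from the setup:

  ovs-ofctl add-flow ovs_bridge "priority=50,udp,nw_src=192.168.1.101,nw_dst=192.168.1.102,actions=mod_nw_dst:192.168.1.103,mod_dl_dst:08:00:27:be:ce:e0,normal"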

For TCP, while searching for a solution I found conntrack. I am using OVS
version 2.6, as I saw in one of the videos that from this version conntrack is
supported in the userspace datapath.

This is the flow rule I am using to match and change the addresses for TCP.
I have followed the conntrack tutorial provided on the OVS site.
ovs-ofctl add-flow ovs_bridge
"table=0,priority=50,ct_state=-trk,tcp,nw_src=192.168.1.101,nw_dst=192.168.1.102,tp_dst=8001,idle_timeout=0,actions=ct(table=0),mod_nw_dst=192.168.1.103,mod_dl_dst=08:00:27:be:ce:e0,normal"

The SYN packet is still being sent to 102 rather than to 103.

I have read that even the order of actions is important, and I have also put
"ct(table=0)" at the end, but it is of no use.

Can anybody please help?

Thanks
Karra


Re: [ovs-discuss] Fwd: [ovs-dpdk] bandwidth issue of vhostuserclient virtio ovs-dpdk

2018-12-06 Thread Lam, Tiago
On 03/12/2018 10:18, LIU Yulong wrote:
> 
> 
> On Sat, Dec 1, 2018 at 1:17 AM LIU Yulong wrote:
> 
> 
> 
> On Fri, Nov 30, 2018 at 5:36 PM Lam, Tiago wrote:
> 
> On 30/11/2018 02:07, LIU Yulong wrote:
> > Hi,
> >
> > Thanks for the reply, please see my inline comments below.
> >
> >
> > On Thu, Nov 29, 2018 at 6:00 PM Lam, Tiago <tiago@intel.com> wrote:
> >
> >     On 29/11/2018 08:24, LIU Yulong wrote:
> >     > Hi,
> >     >
> >     > We recently tested ovs-dpdk, but we met some bandwidth issue. The
> >     > bandwidth from VM to VM was not close to the physical NIC; it's
> >     > about 4.3Gbps on a 10Gbps NIC. For no-dpdk (virtio-net) VMs, the
> >     > iperf3 test can easily reach 9.3Gbps. We enabled virtio multiqueue
> >     > for all guest VMs. In the dpdk vhostuser guest, we noticed that the
> >     > interrupts are centralized to only one queue, but for the no-dpdk
> >     > VM, interrupts can hash to all queues. For those dpdk vhostuser
> >     > VMs, we also noticed that the PMD usage was centralized to one PMD,
> >     > no matter server (tx) or client (rx), and no matter one PMD or
> >     > multiple PMDs, this behavior always exists.
> >     >
> >     > Furthermore, my colleague added some systemtap hooks on the
> >     > openvswitch functions and found something interesting: the function
> >     > __netdev_dpdk_vhost_send sends all the packets to one
> >     > virtionet-queue. It seems some algorithm/hash-table logic does not
> >     > do the hashing very well.
> >     >
> >
> >     Hi,
> >
> >     When you say "no dpdk VMs", you mean that within your VM you're
> >     relying on the kernel to get the packets, using virtio-net. And when
> >     you say "dpdk vhostuser guest", you mean you're using DPDK inside the
> >     VM to get the packets. Is this correct?
> >
> >
> > Sorry for the inaccurate description. I'm really new to DPDK.
> > No DPDK inside the VM; all these settings are for the host only.
> > (`host` means the hypervisor physical machine in the perspective of
> > virtualization, while `guest` means the virtual machine.)
> > "no dpdk VMs" means the host does not set up DPDK (ovs is working in the
> > traditional way) and the VMs were booted on that. Maybe a new name:
> > `VMs-on-NO-DPDK-host`?
> 
> Got it. Your "no dpdk VMs" really is referred to as OvS-Kernel, while your
> "dpdk vhostuser guest" is referred to as OvS-DPDK.
> 
> >
> >     If so, could you also tell us which DPDK app you're using inside of
> >     those VMs? Is it testpmd? If so, how are you setting the `--rxq` and
> >     `--txq` args? Otherwise, how are you setting those in your app when
> >     initializing DPDK?
> >
> >
> > Inside the VM there is no DPDK app, and the VM kernel also does not set
> > any config related to DPDK. `iperf3` is the tool for bandwidth testing.
> >
> >     The information below is useful in telling us how you're setting your
> >     configurations in OvS, but we are still missing the configurations
> >     inside the VM.
> >
> >     This should help us in getting more information,
> >
> >
> > Maybe you have noticed that we only set up one PMD in the pasted
> > configurations, but the VM has 8 queues. Should the PMD quantity match
> > the number of queues?
> 
> It shouldn't necessarily match the queues inside the VM, per se. But in
> this case, since you have configured 8 rx queues on your physical NICs as
> well, and since you're looking for higher throughput, you should increase
> the number of PMDs and pin those rxqs - take a look at [1] on how to do
> that. Later on, increasing the size of your queues could also help.
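For reference, the rxq pinning and queue sizing mentioned above are typically configured along these lines; the CPU mask, the port name "dpdk-p0" and the descriptor counts are illustrative values, not taken from this thread:

  # Run PMD threads on cores 2-5 (mask 0x3c is illustrative)
  ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x3c
  # Spread the physical port's 8 rx queues across those PMD cores
  ovs-vsctl set Interface dpdk-p0 other_config:pmd-rxq-affinity="0:2,1:3,2:4,3:5,4:2,5:3,6:4,7:5"
  # Larger rx/tx descriptor rings (supported on newer OVS-DPDK releases)
  ovs-vsctl set Interface dpdk-p0 options:n_rxq_desc=2048 options:n_txq_desc=2048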
> 
> 
> I'll test it.
> Yes, as you noticed, the vhostuserclient port has n_rxq="8":
> options: {n_rxq="8", vhost-server-path="/var/lib/vhost_sockets/vhu76f9a623-9f"}
> And the physical NIC has both n_rxq="8" and n_txq="8":
> options: {dpdk-devargs=":01:00.0", n_rxq="8", n_txq="8"}
> options: