We're doing some large-scale quality of service experiments on EC2 using OVS. Since we do not have access to the hypervisor, we are forced to run OVS itself within a virtual machine.
Basically, we reassign the IP address delegated to eth0 to OVS's internal interface. This works fine with ofp_action_output. However, since we're unable to get it working with ofp_action_enqueue, we have to resort to a hack: we create a virtual interface (TAP) and add it to the list of ports. The problem is that you cannot communicate directly over a TAP interface, so to get around this we create yet another virtual interface and interconnect the two using VDE (Virtual Distributed Ethernet). In the end, the topology looks something like this:

|- sw (internal)
|- eth0
|- tap0 <-> vde_switch <-> tap1

But now, due to this extra complexity, we are seeing all sorts of performance degradation. I suspect this is because vde_switch runs in user space. So far I haven't been able to come up with a better solution.

I can see this being useful in a real-world scenario as well: say you would like to rate-limit traffic between a VM and the underlying hypervisor.

Hope this makes sense.

On Mon, Sep 12, 2011 at 03:09:41PM -0700, Ben Pfaff wrote:
> On Mon, Sep 12, 2011 at 03:07:22PM -0700, Vjekoslav Brajkovic wrote:
> > On Mon, Sep 12, 2011 at 07:56:11AM -0700, Ben Pfaff wrote:
> > > It's because OpenFlow 1.0 says that the port in ofp_action_enqueue
> > > should "refer to a valid physical port (i.e. < OFPP_MAX) or
> > > OFPP_IN_PORT." OFPP_LOCAL isn't either of those.
> >
> > In that case, I'll redirect my question to openflow-spec mailing list,
> > since this kind of a behaviour does not make much sense.
>
> Fair enough.
>
> What are you trying to accomplish by queuing to an internal port? It
> does not, off-hand, sound useful.
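For concreteness, here is a rough, untested sketch of the OpenFlow 1.0 enqueue action and the port check behind the behaviour Ben described. The struct and constants follow the 1.0 spec's openflow.h; make_enqueue() is only a hypothetical helper for illustration, not OVS code:

/* Sketch only (untested): a self-contained illustration of the OpenFlow 1.0
 * enqueue action and the port check that makes OFPP_LOCAL invalid for it.
 * Struct and constants are taken from the 1.0 spec's openflow.h;
 * make_enqueue() is a hypothetical helper, not part of OVS. */
#include <arpa/inet.h>   /* htons, htonl */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

enum { OFPAT_ENQUEUE = 11 };
enum { OFPP_MAX = 0xff00, OFPP_IN_PORT = 0xfff8, OFPP_LOCAL = 0xfffe };

struct ofp_action_enqueue {
    uint16_t type;      /* OFPAT_ENQUEUE. */
    uint16_t len;       /* Length is 16. */
    uint16_t port;      /* Must be < OFPP_MAX or OFPP_IN_PORT per the spec. */
    uint8_t  pad[6];    /* Pad for 64-bit alignment. */
    uint32_t queue_id;  /* Where to enqueue the packets. */
};

/* Fill in an enqueue action; returns 0 on success, or -1 if the port is
 * outside what the 1.0 wording allows (why OFPP_LOCAL gets refused). */
static int make_enqueue(struct ofp_action_enqueue *a,
                        uint16_t port, uint32_t queue_id)
{
    if (port >= OFPP_MAX && port != OFPP_IN_PORT) {
        return -1;
    }
    memset(a, 0, sizeof *a);
    a->type = htons(OFPAT_ENQUEUE);
    a->len = htons(sizeof *a);
    a->port = htons(port);
    a->queue_id = htonl(queue_id);
    return 0;
}

int main(void)
{
    struct ofp_action_enqueue a;
    printf("enqueue to port 1:     %d\n", make_enqueue(&a, 1, 0));
    printf("enqueue to OFPP_LOCAL: %d\n", make_enqueue(&a, OFPP_LOCAL, 0));
    return 0;
}

With a check like that, anything >= OFPP_MAX other than OFPP_IN_PORT, including OFPP_LOCAL (0xfffe), is rejected, which is exactly the limitation that pushes us into the TAP/VDE workaround above.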
