Hello everyone,
I am running a small experiment to isolate bandwidth with FlowVisor
(version 1.4), implementing port-based slicing. With Mininet, I create
a topology that consists of a host h1 (10.0.0.1) connected to a switch
s3, which is connected to another host, h2 (10.0.0.2), like this:
h1 ----- s3 ------ h2
To achieve this I use this Mininet command:

sudo mn --custom simpletopo.py --topo mytopo --switch ovsk \
    --controller remote --mac --arp --link tc
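For completeness, simpletopo.py is essentially the standard two-hosts,
one-switch custom topology (a minimal sketch; my actual file may differ
in details):

```python
# simpletopo.py -- minimal custom topology sketch: h1 --- s3 --- h2
from mininet.topo import Topo

class MyTopo(Topo):
    "Two hosts connected through a single switch named s3."
    def build(self):
        h1 = self.addHost('h1')   # gets 10.0.0.1 with --mac/--arp defaults
        h2 = self.addHost('h2')   # gets 10.0.0.2
        s3 = self.addSwitch('s3')
        self.addLink(h1, s3)      # switch side becomes s3-eth1
        self.addLink(h2, s3)      # switch side becomes s3-eth2

# --topo mytopo on the mn command line refers to this entry
topos = {'mytopo': (lambda: MyTopo())}
```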
I want all traffic from h1 to h2 destined to port 666 to be limited to
1 Mbps, and all traffic from h1 to h2 destined to port 555 to be limited
to 1 Gbps. To achieve this, I create two queues in switch s3, q0 and q1:
q0 for the 1 Mbps traffic and q1 for the 1 Gbps traffic.
I create the queues like this:
sudo ovs-vsctl -- set Port s3-eth1 qos=@newqos \
  -- --id=@newqos create QoS type=linux-htb \
       other-config:max-rate=10000000000 queues=0=@q0,1=@q1 \
  -- --id=@q0 create Queue other-config:min-rate=1000000 \
       other-config:max-rate=1000000 \
  -- --id=@q1 create Queue other-config:min-rate=1000000000 \
       other-config:max-rate=1000000000

sudo ovs-vsctl -- set Port s3-eth2 qos=@newqos \
  -- --id=@newqos create QoS type=linux-htb \
       other-config:max-rate=10000000000 queues=0=@q0,1=@q1 \
  -- --id=@q0 create Queue other-config:min-rate=1000000 \
       other-config:max-rate=1000000 \
  -- --id=@q1 create Queue other-config:min-rate=1000000000 \
       other-config:max-rate=1000000000
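(As a sanity check, I believe the new database rows and the resulting
HTB classes should be visible with something like the following; this is
just for inspection:)

```shell
# List the QoS and Queue rows now stored in OVSDB
sudo ovs-vsctl list qos
sudo ovs-vsctl list queue

# Inspect the HTB classes Open vSwitch actually programmed on the port
sudo tc class show dev s3-eth1
```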
I have assumed that it is necessary to create a QoS record on each of
the two interfaces that connect the switch to the hosts, i.e. one for
s3-eth1 and one for s3-eth2, but I am not sure about this. Can anyone
tell me if I am wrong?
Then I created the two slices: slice1 for traffic going to and from
port 666, and slice2 for traffic going to and from port 555.
fvctl add-slice slice1 tcp:127.0.0.2:6643 gene...@email.com
fvctl add-slice slice2 tcp:127.0.0.3:6653 gene...@email.com
fvctl add-flowspace --forced-queue=0 flow1 3 100 \
    in_port=1,nw_src=10.0.0.1,nw_dst=10.0.0.2,tp_dst=666 slice1=7
fvctl add-flowspace --forced-queue=1 flow2 3 100 \
    in_port=1,nw_src=10.0.0.1,nw_dst=10.0.0.2,tp_dst=555 slice2=7
fvctl add-flowspace --forced-queue=0 flow3 3 100 \
    in_port=2,nw_src=10.0.0.2,nw_dst=10.0.0.1,tp_src=666 slice1=7
fvctl add-flowspace --forced-queue=1 flow4 3 100 \
    in_port=2,nw_src=10.0.0.2,nw_dst=10.0.0.1,tp_src=555 slice2=7
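(To confirm the entries were accepted, listing the flowspace back from
FlowVisor should show all four rules, e.g.:)

```shell
# Dump the slices and the flowspace FlowVisor currently enforces
fvctl list-slices
fvctl list-flowspace
```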
After this, I run the controller for each slice:

./pox.py --verbose pox.openflow.of_01 --address=127.0.0.2 --port=6643 \
    pox.forwarding.l2_learning
./pox.py --verbose pox.openflow.of_01 --address=127.0.0.3 --port=6653 \
    pox.forwarding.l2_learning
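(To check that the switch and the two slice controllers are actually
connected to FlowVisor, something like this should help:)

```shell
# Datapaths currently connected to FlowVisor
fvctl list-datapaths

# Per-slice details, including controller connection status
fvctl list-slice-info slice1
fvctl list-slice-info slice2
```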
Then I start the topology in Mininet with the command given above, and
finally I test the bandwidth with iperf. However, I always measure a
bandwidth of 17-18 Gbps, so FlowVisor's force-enqueue option is not
taking effect. What is wrong with my procedure? Could it be a problem
with the queues I have created? I'd appreciate any help.
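For reference, the iperf test I run is roughly this (one server/client
pair per port, pinned to the sliced ports so each flow can match one of
the flowspaces above; exact options are illustrative):

```shell
# From the Mininet CLI: server on h2, client on h1, pinned to port 666
mininet> h2 iperf -s -p 666 &
mininet> h1 iperf -c 10.0.0.2 -p 666 -t 10

# And the same for the other slice, on port 555
mininet> h2 iperf -s -p 555 &
mininet> h1 iperf -c 10.0.0.2 -p 555 -t 10
```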
Thank you.
Best regards,
Guillermo Chica.
_______________________________________________
openflow-discuss mailing list
openflow-discuss@lists.stanford.edu
https://mailman.stanford.edu/mailman/listinfo/openflow-discuss