Hello,

Can you try changing the order of the queue and output actions, and also using
a single apply instruction?
Like this:
sudo dpctl unix:/tmp/s1 flow-mod cmd=add,table=3 meta=0x6,in_port=2 apply:queue1,output=1
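
For completeness, a sketch of both of your flow entries rewritten that way (table numbers, metadata values, and port/queue ids are taken from your earlier commands in this thread; adjust if your setup differs):

```shell
# Re-add both flows with a single apply instruction,
# placing the set-queue action before the output action.
sudo dpctl unix:/tmp/s1 flow-mod cmd=add,table=3 meta=0x6,in_port=2 apply:queue1,output=1
sudo dpctl unix:/tmp/s1 flow-mod cmd=add,table=4 meta=0x11,in_port=2 apply:queue2,output=1
```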

Zoltan


>-----Original Message-----
>From: Aravindhan Dhanasekaran [mailto:adha...@ncsu.edu]
>Sent: Saturday, October 11, 2014 6:17 PM
>To: Zoltán Lajos Kis; openflow-discuss@lists.stanford.edu
>Subject: Re: [openflow-discuss] Using queues in OpenFlow soft switch 13
>
>On 10/11/2014 01:22 AM, Zoltán Lajos Kis wrote:
>> Hello Aravind,
>>
>> You still need to use an output action in step 3. The output action selects 
>> the
>port to use, and the queue id selects the queue on the given output port.
>
>Thanks, Zoltan.
>
>My traffic now makes it to the other host, but it's not following the queue
>characteristics.
>
>Rate at which traffic is seen at the server before programming the queues:
>[ 13]  0.0- 1.0 sec  6026 KBytes  49368 Kbits/sec   0.069 ms   52/ 4250 (1.2%)
>[ 13]  1.0- 2.0 sec  6092 KBytes  49909 Kbits/sec   0.057 ms   11/ 4255 (0.26%)
>[ 13]  2.0- 3.0 sec  6074 KBytes  49757 Kbits/sec   0.093 ms   15/ 4246 (0.35%)
>
>Program queues and flows to use the queue:
>$ sudo dpctl unix:/tmp/s1 queue-mod 1 1 4
>$ sudo dpctl unix:/tmp/s1 queue-mod 1 2 1
>$ sudo dpctl unix:/tmp/s1 flow-mod cmd=add,table=3 meta=0x6,in_port=2 apply:output:1 apply:queue1
>$ sudo dpctl unix:/tmp/s1 flow-mod cmd=add,table=4 meta=0x11,in_port=2 apply:output:1 apply:queue2
>
>Rate at which traffic is seen at the server after programming the queues:
>[ 13] 59.0-60.0 sec  6006 KBytes  49204 Kbits/sec   0.069 ms   64/ 4248 (1.5%)
>[ 13] 60.0-61.0 sec  6075 KBytes  49768 Kbits/sec   0.082 ms   16/ 4248 (0.38%)
>[ 13] 61.0-62.0 sec  6003 KBytes  49180 Kbits/sec   0.072 ms   75/ 4257 (1.8%)
>
>Thus, my TCP traffic still suffers.
>[ 13] 1546.0-1547.0 sec  5551 KBytes  45474 Kbits/sec
>[ 13] 1547.0-1548.0 sec  5638 KBytes  46186 Kbits/sec
>[ 13] 1548.0-1549.0 sec  5571 KBytes  45640 Kbits/sec
>[ 13] 1549.0-1550.0 sec   992 KBytes  8124 Kbits/sec
>[ 13] 1550.0-1551.0 sec   566 KBytes  4634 Kbits/sec
>[ 13] 1551.0-1552.0 sec   546 KBytes  4471 Kbits/sec
>
>Thanks,
>
>
>> BR,
>> Zoltan
>>
>>
>>> -----Original Message-----
>>> From: openflow-discuss [mailto:openflow-discuss-
>>> boun...@lists.stanford.edu] On Behalf Of Aravindhan Dhanasekaran
>>> Sent: Saturday, October 11, 2014 6:50 AM
>>> To: openflow-discuss@lists.stanford.edu
>>> Subject: [openflow-discuss] Using queues in OpenFlow soft switch 13
>>>
>>> [I believe this to be more of a ofsoftswitch specific question, but
>>> I'm posting it here as ofsoftswitch folks follow this list. Apologies
>>> for the wide distribution.]
>>>
>>> Hello,
>>>
>>> I'm trying to use queues in ofsoftswitch13 for a small traffic
>>> shaping/limiting experiment as discussed at
>>> http://archive.openflow.org/wk/index.php/Slicing
>>>
>>> I'm using top-of-trunk versions of Mininet and ofsoftswitch13 in my
>>> testbed and have two streams of TCP and UDP traffic.
>>>
>>> My topology:
>>> h1 [eth1] <-----> [s1-eth1] s1 [s1-eth2] <-----> [eth1] h2
>>>
>>> My flow tables:
>>> $ sudo dpctl unix:/tmp/s1 stats-flow
>>>     {table="0", match="oxm{eth_type="0x800", ip_proto="6"}",
>>> dur_s="3740", dur_ns="249000", prio="32768", idle_to="0",
>>> hard_to="0", cookie="0x0", pkt_cnt="4000521", byte_cnt="4093372606",
>>> insts=[meta{meta="0x6", mask="0xffffffffffffffff"},
>>> goto{table="3"}]},
>>>
>>>     {table="0", match="oxm{eth_type="0x800", ip_proto="17"}",
>>> dur_s="2622", dur_ns="608000", prio="32768", idle_to="0",
>>> hard_to="0", cookie="0x0", pkt_cnt="65574", byte_cnt="99147888",
>>> insts=[meta{meta="0x11", mask="0xffffffffffffffff"}, goto{table="4"}]},
>>>     {table="3", match="oxm{in_port="1", metadata="0x6"}", dur_s="3728",
>>> dur_ns="131000", prio="32768", idle_to="0", hard_to="0",
>>> cookie="0x0", pkt_cnt="1355622", byte_cnt="90368912",
>>> insts=[apply{acts=[out{port="2"}]}]},
>>>     {table="3", match="oxm{in_port="2", metadata="0x6"}", dur_s="3728",
>>> dur_ns="114000", prio="32768", idle_to="0", hard_to="0",
>>> cookie="0x0", pkt_cnt="2644899", byte_cnt="4003003694",
>>> insts=[apply{acts=[out{port="1"}]}]},
>>>     {table="4", match="oxm{in_port="1", metadata="0x17"}", dur_s="2670",
>>> dur_ns="800000", prio="32768", idle_to="0", hard_to="0",
>>> cookie="0x0", pkt_cnt="3", byte_cnt="4536",
>>> insts=[apply{acts=[out{port="2"}]}]},
>>>     {table="4", match="oxm{in_port="2", metadata="0x17"}", dur_s="1951",
>>> dur_ns="392000", prio="32768", idle_to="0", hard_to="0",
>>> cookie="0x0", pkt_cnt="1968", byte_cnt="2975616",
>>> insts=[apply{acts=[out{port="1"}]}]}]}
>>>
>>>
>>> Test setup:
>>>    1. Initially, I have both TCP and UDP traffic, originating at h2 and
>>> destined to h1, using the same egress interface (s1-eth1), whose
>>> bandwidth has been limited to 10kbps. UDP takes over all the
>>> bandwidth, as expected.
>>>    2. I then created two queues on the egress interface with minimum
>>> guaranteed rates of 7 and 3 kbps, to be used for TCP and UDP respectively.
>>>
>>> $ sudo dpctl unix:/tmp/s1 queue-mod 1 1 7
>>> $ sudo dpctl unix:/tmp/s1 queue-mod 1 2 3
>>> $ sudo dpctl unix:/tmp/s1 stats-queue
>>> SENDING (xid=0xF0FF00F0):
>>> stat_req{type="queue", flags="0x0", port="any", q="all"}
>>>
>>> RECEIVED (xid=0xF0FF00F0):
>>> stat_repl{type="queue", flags="0x0", stats=[{port="1", q="1",
>>> tx_bytes="0", tx_pkt="0", tx_err="0"}, {port="1", q="2",
>>> tx_bytes="0", tx_pkt="0", tx_err="0"}]}
>>>
>>>    3. I modified my flows to use the queue instead of directly
>>> sending the packets out of the port. After this:
>>> $ sudo dpctl unix:/tmp/s1 flow-mod cmd=add,table=3 meta=0x6,in_port=2 apply:queue1
>>> $ sudo dpctl unix:/tmp/s1 flow-mod cmd=add,table=4 meta=0x11,in_port=2 apply:queue2
>>> $ sudo dpctl unix:/tmp/s1 stats-flow | grep "q="
>>>     {table="3", match="oxm{in_port="2", metadata="0x6"}", dur_s="31",
>>> dur_ns="516000", prio="32768", idle_to="0", hard_to="0",
>>> cookie="0x0", pkt_cnt="62", byte_cnt="93868",
>>> insts=[apply{acts=[queue{q="1"}]}]},
>>>     {table="4", match="oxm{in_port="2", metadata="0x17"}", dur_s="21",
>>> dur_ns="535000", prio="32768", idle_to="0", hard_to="0",
>>> cookie="0x0", pkt_cnt="0", byte_cnt="0",
>>> insts=[apply{acts=[queue{q="2"}]}]}]}
>>>
>>>    4. After step 3, my traffic (both TCP and UDP) seems to be dropped
>>> at s1-eth1. I verified this using tcpdump on the ingress interface
>>> (s1-eth2) and I could see my traffic there.
>>>
>>> Can anyone please explain what's wrong with my setup?
>>>
>>> Also, how would the switch know which queue of which port should be
>>> used for egress if we just specify the queue ID alone as done in step
>>> 3? Or should queue IDs be globally (across all datapath ports) unique?
>>>
>>> Thanks,
>>> /Aravind
>>> _______________________________________________
>>> openflow-discuss mailing list
>>> openflow-discuss@lists.stanford.edu
>>> https://mailman.stanford.edu/mailman/listinfo/openflow-discuss
