After configuring a slice (slice1_controller1) by adding a FlowSpace that
includes a <match> field, e.g. tp_dst=5000,
controller1 gets an "OFPET_FLOW_MOD_FAILED (3), OFPFMFC_EPERM (2)" error
whenever it sends a flow_mod message to any connected switch.
If the FlowSpace does not include such a <match> rule (i.e. the match is "any"),
this error does not occur.
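For what it's worth, the error shows up even for the simplest flow_mods. A
minimal sketch of the kind of pox component I mean (a hypothetical component
written against pox's stock openflow API; it just installs a fully wildcarded
flow when a switch connects):

from pox.core import core
import pox.openflow.libopenflow_01 as of

def _handle_ConnectionUp(event):
    # Fully wildcarded match; send everything to the controller.
    fm = of.ofp_flow_mod()
    fm.match = of.ofp_match()
    fm.actions.append(of.ofp_action_output(port=of.OFPP_CONTROLLER))
    event.connection.send(fm)

def launch():
    core.openflow.addListenerByName("ConnectionUp", _handle_ConnectionUp)

With the tp_dst FlowSpace in place, every flow_mod like this one is answered
with the EPERM error above; with the match removed, the same component runs
without errors.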

Controller: pox
Switches: Mininet virtual switches
FV version: FlowVisor 1.0.0 obtained directly from source

Any idea or comment would be highly appreciated.
Thanks.

Mehmet Fatih Aktas



On Thu, Apr 4, 2013 at 11:20 AM, mehmet fatih Aktaş
<mfatihak...@gmail.com> wrote:

> Hi Ali,
>
> I am getting another strange error, so I wanted to share it here.
> Using the new JSON API, I configured FV as:
>
> fvctl -f fvpasswd_file add-flowspace myflowspace1 all 1 any myslice1=6
> fvctl -f fvpasswd_file add-flowspace myflowspace2 all 10 tp_dst=1000 myslice2=6
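> (For readers: the argument order is add-flowspace <name> <dpid> <priority>
> <match> <slice>=<perm>; if I read the docs correctly, permission 6 here is
> READ (2) plus WRITE (4), with DELEGATE being 1.)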
>
> It created the FlowSpaces correctly, as can be seen from "list-flowspace":
> Configured Flow entries:
> {"force-enqueue": -1, "name": "myflowspace1", "slice-action": [{"slice-name": "myslice1", "permission": 6}], "queues": [], "priority": 1, "dpid": "all_dpids", "id": 12, "match": {"wildcards": 4194303}}
> {"force-enqueue": -1, "name": "myflowspace2", "slice-action": [{"slice-name": "myslice2", "permission": 6}], "queues": [], "priority": 10, "dpid": "all_dpids", "id": 13, "match": {"wildcards": 4194175, "tp_dst": 1000}}
>
> When I run the controller for myslice1, it connects to the switches
> successfully, but when I run the controller for myslice2, it logs a
> connection error for every switch (the following log is for the switch
> with dpid :04):
>
> ERROR:openflow.of_01:[00-00-00-00-00-04 1] OpenFlow Error:
> [00-00-00-00-00-04 1] Error: header:
> [00-00-00-00-00-04 1] Error:   version: 1
> [00-00-00-00-00-04 1] Error:   type:    1 (OFPT_ERROR)
> [00-00-00-00-00-04 1] Error:   length:  84
> [00-00-00-00-00-04 1] Error:   xid:     0
> [00-00-00-00-00-04 1] Error: type: OFPET_FLOW_MOD_FAILED (3)
> [00-00-00-00-00-04 1] Error: code: OFPFMFC_EPERM (2)
> [00-00-00-00-00-04 1] Error: datalen: 72
> [00-00-00-00-00-04 1] Error: 0000: 01 0e 00 48 00 00 01 02  00 10 00 1f 00 00 00 00   ...H............
> [00-00-00-00-00-04 1] Error: 0010: 00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00   ................
> [00-00-00-00-00-04 1] Error: 0020: 00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00   ................
> [00-00-00-00-00-04 1] Error: 0030: 00 00 00 00 00 00 01 00  00 03 00 00 00 00 80 00   ................
> [00-00-00-00-00-04 1] Error: 0040: ff ff ff ff ff ff 00 00                            ........
> INFO:openflow.of_01:[00-00-00-00-00-04 1] connected
> Connection [00-00-00-00-00-04 1]
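> (For reference: the 72 bytes echoed after "datalen" are the rejected message
> itself; 01 0e 00 48 decodes as OpenFlow version 1, message type 14, i.e.
> OFPT_FLOW_MOD, length 0x48 = 72.)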
> ________________________________________________________
> Despite the error, the switches appear to be connected to the controller and
> the slicing logic seems to work fine.
> I am just wondering what might cause this error log.
>
> Mehmet Fatih Aktas
>
>
>
>
> On Wed, Apr 3, 2013 at 8:00 PM, mehmet fatih Aktaş
> <mfatihak...@gmail.com> wrote:
>
>> Hi Ali,
>>
>> Thanks for letting me know about this. I created an issue and did my best
>> to follow the guidelines you sent me.
>>
>> I will try to use the old XMLRPC API in the meantime. Thanks for writing
>> out the steps for that.
>> Regards.
>>
>> Mehmet Fatih Aktas
>>
>>
>>
>> On Wed, Apr 3, 2013 at 5:58 PM, Ali Al-Shabibi <
>> ali.al-shab...@stanford.edu> wrote:
>>
>>> Hi Mehmet,
>>>
>>> This is a bug with the new JSON API. Could you please create an issue
>>> for it and I will address it ASAP.
>>>
>>> In the meantime, you can add your flowspace using the old XMLRPC API. To
>>> do this, follow these steps:
>>>
>>> 0. Backup your config if you have things you don't want to lose.
>>>         0.1 fvctl save-config /etc/flowvisor/config.json
>>> 1. Re-enable the XMLRPC interface
>>>         1.01 Stop flowvisor
>>>         1.1 edit the /etc/flowvisor/config.json
>>>         1.2 set api_webserver_port to 8081
>>>         1.3 run fvconfig load /etc/flowvisor/config.json
>>>         1.4 start flowvisor
>>> 2. Add your flowspace using the fvctl-xml command.
>>>         2.1 fvctl-xml --url=https://localhost:8081 addFlowSpace any 10 nw_dst=10.0.0.255/32 Slice:mySlice2=7
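>>> (To double-check that it took, the old API should also let you list the
>>> flowspace, e.g. fvctl-xml --url=https://localhost:8081 listFlowSpace,
>>> assuming your build still ships the listFlowSpace call.)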
>>>
>>> Apologies for this, we will fix this soon. Please create an issue for
>>> this on https://github.com/OPENNETWORKINGLAB/flowvisor/issues?state=open
>>> and follow these steps as much as possible ->
>>> https://github.com/OPENNETWORKINGLAB/flowvisor/wiki/Filing-New-Issues-or-bugs
>>>
>>> Cheers.
>>>
>>> --
>>> Ali
>>>
>>> On Apr 3, 2013, at 2:24 PM, mehmet fatih Aktaş <mfatihak...@gmail.com>
>>> wrote:
>>>
>>> > Ali, I will try to follow another approach similar to what you
>>> > suggested. Your advice helped me a lot; thank you very much.
>>> >
>>> > I have another simple question: I am configuring FV as
>>> > fvctl -f fvpasswd_file add-flowspace myflowspace1 all 1 any myslice1=7
>>> > fvctl -f fvpasswd_file add-flowspace myflowspace2 all 10 nw_dst=10.0.0.255/32 myslice2=7
>>> >
>>> > As far as I understand, this means that packets not matching
>>> > nw_dst=10.0.0.255/32 should be forwarded not to myslice2 but to myslice1.
>>> > However, all of the packets are again delivered to myslice2, I guess
>>> > because of its relatively high priority.
>>> >
>>> > Actually this behavior is expected, because after FV is configured this
>>> > way, "list-flowspace" gives:
>>> > Configured Flow entries:
>>> > {"force-enqueue": -1, "name": "myflowspace1", "slice-action": [{"slice-name": "myslice1", "permission": 6}], "queues": [], "priority": 1, "dpid": "all_dpids", "id": 296, "match": {"wildcards": 4194303}}
>>> > {"force-enqueue": -1, "name": "myflowspace2", "slice-action": [{"slice-name": "myslice2", "permission": 6}], "queues": [], "priority": 10, "dpid": "all_dpids", "id": 297, "match": {"wildcards": 4194303}}
>>> >
>>> > As this output shows, there is no difference in the "match" fields. So I
>>> > think either I am doing something wrong with nw_dst=10.0.0.255/32 or it
>>> > is not working correctly.
>>> > (When I add another match, e.g. in_port=3, it does show up in
>>> > myflowspace2's match field, unlike that of myflowspace1.)
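>>> > A quick check of the wildcards values (plain Python; the constants are
>>> > from the OpenFlow 1.0 spec, where nw_dst is the 6-bit field at bits
>>> > 14-19) supports this:
>>> >
>>> > OFPFW_NW_DST_SHIFT = 14
>>> > nw_dst_wild = (4194303 >> OFPFW_NW_DST_SHIFT) & 0x3f
>>> > assert nw_dst_wild == 63   # >= 32 means fully wildcarded; a /32 match would store 0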
>>> >
>>> > Any help would be highly appreciated. Thanks for the time.
>>> >
>>> > Mehmet Fatih Aktas
>>> >
>>> >
>>> >
>>> > On Wed, Apr 3, 2013 at 4:18 PM, Ali Al-Shabibi <
>>> ali.al-shab...@stanford.edu> wrote:
>>> > [Responses inline]
>>> >
>>> > > After I created the slices as described in the previous email, I first
>>> > > run controller:8002; then three of the switches do not get connected,
>>> > > only the switch whose dpid was added with the last flowspace entry.
>>> > > E.g., if FV is configured as:
>>> > > fvctl -f fvpasswd_file add-slice myslice1 tcp:192.168.56.1:8001 mfa
>>> > > fvctl -f fvpasswd_file add-slice myslice2 tcp:192.168.56.1:8002 mfa
>>> > >
>>> > > fvctl -f fvpasswd_file add-flowspace myflowspace1 all 1 any myslice1=7
>>> > > fvctl -f fvpasswd_file add-flowspace myflowspace2 00:00:00:00:00:02 3 any myslice2=7
>>> > > fvctl -f fvpasswd_file add-flowspace myflowspace2 00:00:00:00:00:03 3 any myslice2=7
>>> > > fvctl -f fvpasswd_file add-flowspace myflowspace2 00:00:00:00:00:01 3 any myslice2=7
>>> > > Then only the switch with dpid :01 is connected to myslice2. I listed
>>> > > the FV datapaths and it shows:
>>> > > Connected switches:
>>> > >   1 : 00:00:00:00:00:00:00:01
>>> > >   2 : 00:00:00:00:00:00:00:02
>>> > >   3 : 00:00:00:00:00:00:00:03
>>> > >   4 : 00:00:00:00:00:00:00:04
>>> > > So there is no connectivity problem. Also, as you suggested, I traced
>>> > > the OF packet exchanges between the switches and the controllers, but
>>> > > in this case the connection messages (Hello, Features Request, Set
>>> > > Config, etc.) are exchanged only for the connected switch, dpid :01.
>>> > >
>>> >
>>> > Are you sure flowvisor is connecting dpid 0x01 to myslice2? From your
>>> description I understand that initially only controller :8001 is running.
>>> Therefore, FlowVisor will create a connection for dpid 0x01 to myslice1
>>> only. So it is normal that you only see traffic for dpid 0x01. Does this
>>> make sense to you?
>>> >
>>> > >
>>> > > > Slicing on the dpid only will be tricky because having two entire
>>> > > > datapaths in two different slices is nearly impossible; you need some
>>> > > > other variable to discriminate on. I don't know what kind of virtual
>>> > > > subnets you want to build, but have you considered slicing on IPs or
>>> > > > even vlans? Another alternative which is quite simple is to slice on
>>> > > > a combination of dpids and ports.
>>> > > This is a good suggestion, thanks, but what I want to have is a
>>> > > slightly more dynamic way of slicing the network using FlowVisor. That
>>> > > is why I was trying to slice in a fine-grained manner: by specifying
>>> > > dpids individually for each slice. Also, I am not trying to slice the
>>> > > datapath entirely here.
>>> > > Do you have any idea that might be useful to achieve this type of
>>> > > slicing? Otherwise I will try what you suggested, VLAN or IP slicing.
>>> > > Thanks.
>>> > >
>>> >
>>> > Dpid slicing isn't very fine grained because once you allocate a dpid
>>> > to a slice (i.e. with no other discriminant), the flowspace (and
>>> > therefore the slice it is in) with the highest priority will always have
>>> > control of that dpid. From what I can tell, VLAN or IP (or MAC) slicing
>>> > is your best bet here.
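>>> > (Concretely, with your entries above: a flow on dpid :02 matches both
>>> > the priority-1 "all" flowspace and the priority-3 dpid-specific one, the
>>> > priority-3 entry wins, and so myslice2 ends up owning that entire
>>> > switch.)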
>>> >
>>> > > Mehmet Fatih Aktas
>>> > >
>>> > >
>>> > >
>>> > > On Wed, Apr 3, 2013 at 2:26 PM, Ali Al-Shabibi <
>>> ali.al-shab...@stanford.edu> wrote:
>>> > > Hi Mehmet,
>>> > >
>>> > > For your instability issue, could you check that the connection
>>> > > between the switches and flowvisor is stable? That'll give us a better
>>> > > idea of where to start looking. You can verify this by running fvctl
>>> > > list-datapaths and confirming that all 4 switches remain connected. If
>>> > > they are always connected then you should probably capture a packet
>>> > > trace between flowvisor and the controllers to see what is actually
>>> > > going on. This can be done with wireshark.
>>> > >
>>> > >
>>> > > >
>>> > > > Also, even though the flowspaces of myslice1&2 are successfully
>>> > > > created and all switches are getting connected successfully, FV does
>>> > > > not send packet_ins to both slices' controllers but only to one,
>>> > > > e.g. controller:8001.
>>> > > >
>>> > >
>>> > > So flowvisor does not do this. It will only forward control traffic
>>> to one slice. In your case myslice1 takes precedence because it has a
>>> higher priority and matches all dpids.
>>> > >
>>> > > > Overall, what I am trying to do is slice the network into virtual
>>> > > > subnets, and here I explained the problems I ran into while doing
>>> > > > that. What I am doing may not be the best way; I would appreciate
>>> > > > any help or comment.
>>> > >
>>> > > Slicing on the dpid only will be tricky because having two entire
>>> > > datapaths in two different slices is nearly impossible; you need some
>>> > > other variable to discriminate on. I don't know what kind of virtual
>>> > > subnets you want to build, but have you considered slicing on IPs or
>>> > > even vlans? Another alternative which is quite simple is to slice on a
>>> > > combination of dpids and ports.
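>>> > > For instance (hypothetical flowspace name, using the same fvctl
>>> > > add-flowspace syntax as elsewhere in this thread), something like
>>> > > fvctl -f fvpasswd_file add-flowspace myflowspace3 00:00:00:00:00:02 10 in_port=3 myslice2=7
>>> > > would hand myslice2 only the traffic entering port 3 of that datapath.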
>>> > >
>>> > > Let me know if this helps.
>>> > >
>>> > > > Thanks for the time.
>>> > > >
>>> > > > Mehmet Fatih Aktas
>>> > > >
>>> > > >
>>> > > > On Tue, Feb 26, 2013 at 3:08 PM, Ali Al-Shabibi <
>>> ali.al-shab...@stanford.edu> wrote:
>>> > > > Hi Mehmet,
>>> > > >
>>> > > > FlowVisor can reside anywhere really (within a reasonable
>>> latency), so you could have it running in the mininet VM or on another
>>> machine. Just point your mininet network to the FlowVisor.
>>> > > >
>>> > > > This can be done by giving the --controller remote option to
>>> mininet. That said, I'd be interested to know what problems you had
>>> installing FlowVisor.
>>> > > >
>>> > > > >
>>> > > > > Is there any simple tutorial or any resource that can help me to
>>> get on board quickly ?
>>> > > >
>>> > > > Unfortunately not yet, but I will be putting the tutorials up
>>> > > > online officially soon, although they may not be very different from
>>> > > > the ones you have already found.
>>> > > >
>>> > > > Hope this helps!
>>> > > >
>>> > > > > Thanks.
>>> > > > >
>>> > > > > Mehmet Fatih Aktas
>>> > > >
>>> > > >
>>> > > >
>>> > > >
>>> > > >
>>> > >
>>> > >
>>> >
>>> >
>>>
>>>
>>
>
_______________________________________________
openflow-discuss mailing list
openflow-discuss@lists.stanford.edu
https://mailman.stanford.edu/mailman/listinfo/openflow-discuss
