Re: [pox-dev] POX on planetlab

2013-04-12 Thread Kouvakas Alexandros
I still don't entirely understand your topology; it seems like you're doing
something a bit unusual.  Are the nodes in your topology switches or hosts?
 Or both (that is, they're acting as hosts running services or hosting
users or whatever, but they've also got OVS installed)?  If they're
switches, are you doing in-band control?  Are they connected with tunnels?
 What are these IP addresses actually the IP addresses of?

I am making a bunch of guesses here.  Tell me which of them are wrong.

I am guessing that they're all running OVS.  The IP addresses you've shown
are the IPs of the machines' main physical interfaces (e.g., eth0).  You've
created tunnels along the paths you've shown and added the tunnel endpoints
to OVS.

You are almost right about what I have created in my network. As I told you
in the beginning, I run my experiments on PlanetLab nodes. Every sliver has
OVS pre-installed. So, yes, every node that I call a host also has OVS
installed, but I need them to act as hosts. I am also using the Makefile
(from sliver-ovs)
http://git.onelab.eu/?p=sliver-openvswitch.git;a=tree;f=planetlab;hb=HEAD ,
in order to create my overlay network. So, to be clear, here is the conf.mk
that describes the overlay network (the IPs are a subnet of the overlay
network). Note that only the SENDER (the OF switch) has the controller. By
this I mean that I run ovs-vsctl set-controller only on the SENDER node.

#
SLICE=inria_nepi
HOST_SENDER=planetlab1.informatik.uni-erlangen.de
IP_SENDER=192.168.3.1/24
HOST_2=planetlab-1.ida.liu.se
IP_2=192.168.3.2/24
HOST_3=planetlab1.cyfronet.pl
IP_3=192.168.3.3/24
HOST_4=planetlab1.thlab.net
IP_4=192.168.3.4/24
HOST_5=lsirextpc01.epfl.ch
IP_5=192.168.3.5/24
HOST_6=planetlab1.tlm.unavarra.es
IP_6=192.168.3.6/24
HOST_7=plab1-itec.uni-klu.ac.at
IP_7=192.168.3.7/24

LINKS:=
LINKS+= SENDER-2
LINKS+= SENDER-3
LINKS+= SENDER-6
LINKS+= SENDER-7
LINKS+= 3-4
LINKS+= 4-5
LINKS+= 5-6
#

On the OF switch, the output of *ovs-vsctl show* is

Bridge inria_nepi
    Controller tcp:131.188.44.100:6633
    Port LSENDER-6
        Interface LSENDER-6
            type: tunnel
            options: {remote_ip=130.206.158.138, remote_port=41604}
    Port LSENDER-3
        Interface LSENDER-3
            type: tunnel
            options: {remote_ip=149.156.5.114, remote_port=53192}
    Port inria_nepi
        Interface inria_nepi
            type: internal
            options: {local_ip=192.168.3.1, local_netmask=24}
    Port LSENDER-2
        Interface LSENDER-2
            type: tunnel
            options: {remote_ip=192.36.94.2, remote_port=36643}
    Port LSENDER-7
        Interface LSENDER-7
            type: tunnel
            options: {remote_ip=143.205.172.11, remote_port=57190}

and the ifconfig output is

eth0  Link encap:Ethernet  HWaddr 00:19:99:2B:09:3D
  inet addr:131.188.44.100  Bcast:131.188.44.255  Mask:255.255.255.0
  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
  RX packets:109053530 errors:0 dropped:0 overruns:0 frame:0
  TX packets:107580812 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:1000
  RX bytes:39367698836 (36.6 GiB)  TX bytes:22924985597 (21.3 GiB)
  Interrupt:23 Memory:d002-d004

loLink encap:Local Loopback
  inet addr:127.0.0.1  Mask:255.0.0.0
  UP LOOPBACK RUNNING  MTU:16436  Metric:1
  RX packets:2000835 errors:0 dropped:0 overruns:0 frame:0
  TX packets:2000835 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:0
  RX bytes:1062552304 (1013.3 MiB)  TX bytes:1062552304 (1013.3 MiB)

tap1109-0 Link encap:Ethernet  HWaddr 1E:D8:04:47:45:D9
  inet addr:192.168.3.1  Bcast:192.168.3.255  Mask:255.255.255.0
  UP BROADCAST RUNNING PROMISC MULTICAST  MTU:1500  Metric:1
  RX packets:3387 errors:0 dropped:0 overruns:0 frame:0
  TX packets:435 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:500
  RX bytes:234206 (228.7 KiB)  TX bytes:18598 (18.1 KiB)

The ovs-vsctl show output of node 192.168.3.4 is

###
Bridge inria_nepi
    Port L4-5
        Interface L4-5
            type: tunnel
            options: {remote_ip=192.33.210.16, remote_port=52703}
    Port L3-4
        Interface L3-4
            type: tunnel
            options: {remote_ip=149.156.5.114, remote_port=39454}
    Port inria_nepi
        Interface inria_nepi
            type: internal
            options: {local_ip=192.168.3.4, local_netmask=24}
###

and the output of node 192.168.3.3 is
###
Bridge inria_nepi
    Port inria_nepi
        Interface inria_nepi
            type: internal
            options: {local_ip=192.168.3.3, local_netmask=24}
    Port L3-4
        Interface L3-4
            type: tunnel

Re: [pox-dev] POX on planetlab

2013-04-12 Thread Murphy McCauley
On Apr 12, 2013, at 2:30 AM, Kouvakas Alexandros wrote:

 Note that only the SENDER (the OF switch) has the controller. By this I mean 
 that I run ovs-vsctl set-controller only on the SENDER node.

So each machine is running OVS, but only one of the OVS instances is connecting 
to a controller?  In this case, you may be able to get them to act as learning 
switches, but you can't program them from the controller, which is going to 
prevent you from doing interesting path selection using OpenFlow.  If that's 
what you want to do, connect them to the controller either via the controller's 
external IP (131.188.44.100), or by setting up in-band control and having the 
control traffic go through your tunnels.
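
If it helps, here is a rough, untested sketch in Python (not part of POX or
sliver-ovs; it assumes you can SSH into each sliver as the slice user and
that the bridge is named inria_nepi on every node, as in your ovs-vsctl
output) of pointing the remaining nodes at the controller's external IP:

#!/usr/bin/env python
# Hypothetical helper: run the same ovs-vsctl set-controller command you
# already ran on SENDER on every other node, out-of-band over eth0.
import subprocess

SLICE = "inria_nepi"
CONTROLLER = "tcp:131.188.44.100:6633"   # POX listening on SENDER's eth0
BRIDGE = "inria_nepi"                    # bridge name from ovs-vsctl show

SLIVERS = [
    "planetlab-1.ida.liu.se",
    "planetlab1.cyfronet.pl",
    "planetlab1.thlab.net",
    "lsirextpc01.epfl.ch",
    "planetlab1.tlm.unavarra.es",
    "plab1-itec.uni-klu.ac.at",
]

for host in SLIVERS:
    # SSH in as the slice user and set the controller on that node's bridge.
    subprocess.check_call(
        ["ssh", "%s@%s" % (SLICE, host),
         "ovs-vsctl", "set-controller", BRIDGE, CONTROLLER])

With in-band control you would instead point the other nodes at an address
reachable through the tunnels, at the cost of the control traffic sharing
the paths it is controlling.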

-- Murphy

[pox-dev] Some questions about discovery module.

2013-04-12 Thread Weiyang Mo
Hi,

I always use openflow.discovery as my topology module; however, I have run
into some strange behavior recently and have some questions.

The strange behavior is that link time_outs appear unexpectedly, which
causes flow entries to be deleted (I'm using l2_multi), even though the link
is actually fine.

The unexpected link time_out may happen in the following cases, and more
frequently if several of them occur at the same time:

(1)  If I keep requesting info from the switches (e.g., port status
requests). I'm wondering why the requests cause this. Is it because the
requests flush out the LLDP packets?

(2)  If new traffic is introduced. Is it because the traffic sent to the
controller blocks LLDP during learning, before the flows are installed?

(3)  If I do flow entry modifications.  I guess the modification takes some
time, and during this time many data packets are forwarded to the controller
and occupy the control channel.

Unfortunately, my program is doing all of the above for some intelligent
routing. However, the unexpected link time_out will flush everything... I
still need the time_out, because sometimes it really is a link
disconnection. I'm requesting port_status every 2 seconds and using the
feedback for the intelligent routing.
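
For reference, my polling looks roughly like the following standalone POX
component (a simplified, untested sketch of my code; the handler names and
log messages are just placeholders), launched together with
openflow.discovery and l2_multi:

from pox.core import core
from pox.lib.recoco import Timer
import pox.openflow.libopenflow_01 as of

log = core.getLogger()

def _poll_port_stats ():
  # One stats request per connected switch every 2 seconds; the replies
  # come back as PortStatsReceived events.
  for con in core.openflow._connections.values():
    con.send(of.ofp_stats_request(body=of.ofp_port_stats_request()))

def _handle_PortStatsReceived (event):
  log.debug("port stats from %s: %i ports", event.connection, len(event.stats))
  # ... feed the stats into the routing logic here ...

def _handle_LinkEvent (event):
  # Raised both for newly discovered links and for links that timed out.
  if event.removed:
    log.warning("link removed (timed out?): %s", event.link)

def _start ():
  core.openflow.addListenerByName("PortStatsReceived", _handle_PortStatsReceived)
  core.openflow_discovery.addListenerByName("LinkEvent", _handle_LinkEvent)
  Timer(2, _poll_port_stats, recurring=True)

def launch ():
  core.call_when_ready(_start, ("openflow", "openflow_discovery"))

(If the timeouts really are just delayed LLDPs, maybe raising discovery's
link timeout would also help, assuming my version of openflow.discovery
exposes its link_timeout launch option.)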

Is it because the LLDPs are blocked by other packets in the control channel
and the links cannot be updated? Is it possible to give LLDP the highest
priority in the control channel, ahead of everything else? If the data
channel is almost fully occupied, will the LLDPs be blocked in that channel
and be treated as a link time_out?

And another question: why not just use port_status as the link event, rather
than the LLDP-based link updates? The main concern I can think of is
probably that some cables are really bad but are still treated as connected
by the switches?
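
To make the question concrete, I mean something like this rough, untested
sketch: reacting to PortStatus messages as a link-down hint, alongside the
LLDP-based updates rather than replacing them:

from pox.core import core
import pox.openflow.libopenflow_01 as of

log = core.getLogger()

def _handle_PortStatus (event):
  # event.ofp is the ofp_port_status message from the switch.
  if event.deleted:
    log.warning("switch %s removed port %s", event.dpid, event.port)
  elif event.modified:
    desc = event.ofp.desc
    down = (desc.config & of.OFPPC_PORT_DOWN) or (desc.state & of.OFPPS_LINK_DOWN)
    if down:
      log.warning("switch %s reports port %s down", event.dpid, event.port)

def launch ():
  core.openflow.addListenerByName("PortStatus", _handle_PortStatus)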

Thanks very much.

Weiyang