> I still don't entirely understand your topology; it seems like you're
> doing something a bit unusual.  Are the nodes in your topology switches
> or hosts?  Or both (that is, they're acting as hosts running services or
> hosting users or whatever, but they've also got OVS installed)?  If
> they're switches, are you doing in-band control?  Are they connected
> with tunnels?  What are these IP addresses actually the IP addresses of?
>
> I am making a bunch of guesses here.  Tell me which of them are wrong.
>
> I am guessing that they're all running OVS.  The IP addresses you've
> shown are the IP of the machine's main physical interface (e.g., eth0).
> You've created tunnels along the paths you've shown and added the tunnel
> endpoints to OVS.

You are almost right about what I have created in my network. As I told
you at the beginning, I run my experiments on PlanetLab nodes. Every
sliver has OVS pre-installed. So yes, every node that I call a host also
has OVS installed, but I need them to act as hosts. I am also using the
Makefile (from sliver-ovs) at
http://git.onelab.eu/?p=sliver-openvswitch.git;a=tree;f=planetlab;hb=HEAD
in order to create my overlay network. To be clear, here is the conf.mk
which describes the overlay network (the IPs are a subnet of the overlay
network). Note that only the SENDER (the OF switch) has a controller; by
this I mean that I run "ovs-vsctl set-controller" only on the SENDER node.

#####
SLICE=inria_nepi
HOST_SENDER=planetlab1.informatik.uni-erlangen.de
IP_SENDER=192.168.3.1/24
HOST_2=planetlab-1.ida.liu.se
IP_2=192.168.3.2/24
HOST_3=planetlab1.cyfronet.pl
IP_3=192.168.3.3/24
HOST_4=planetlab1.thlab.net
IP_4=192.168.3.4/24
HOST_5=lsirextpc01.epfl.ch
IP_5=192.168.3.5/24
HOST_6=planetlab1.tlm.unavarra.es
IP_6=192.168.3.6/24
HOST_7=plab1-itec.uni-klu.ac.at
IP_7=192.168.3.7/24

LINKS:=
LINKS+= SENDER-2
LINKS+= SENDER-3
LINKS+= SENDER-6
LINKS+= SENDER-7
LINKS+= 3-4
LINKS+= 4-5
LINKS+= 5-6
#####

On the OF switch, the output of "ovs-vsctl show" is:

Bridge inria_nepi
        Controller "tcp:131.188.44.100:6633"
        Port "LSENDER-6"
            Interface "LSENDER-6"
                type: tunnel
                options: {remote_ip="130.206.158.138", remote_port="41604"}
        Port "LSENDER-3"
            Interface "LSENDER-3"
                type: tunnel
                options: {remote_ip="149.156.5.114", remote_port="53192"}
        Port inria_nepi
            Interface inria_nepi
                type: internal
                options: {local_ip="192.168.3.1", local_netmask="24"}
        Port "LSENDER-2"
            Interface "LSENDER-2"
                type: tunnel
                options: {remote_ip="192.36.94.2", remote_port="36643"}
        Port "LSENDER-7"
            Interface "LSENDER-7"
                type: tunnel
                options: {remote_ip="143.205.172.11", remote_port="57190"}*

and the ifconfig output is:

eth0      Link encap:Ethernet  HWaddr 00:19:99:2B:09:3D
          inet addr:131.188.44.100  Bcast:131.188.44.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:109053530 errors:0 dropped:0 overruns:0 frame:0
          TX packets:107580812 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:39367698836 (36.6 GiB)  TX bytes:22924985597 (21.3 GiB)
          Interrupt:23 Memory:d0020000-d0040000

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:2000835 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2000835 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1062552304 (1013.3 MiB)  TX bytes:1062552304 (1013.3 MiB)

tap1109-0 Link encap:Ethernet  HWaddr 1E:D8:04:47:45:D9
          inet addr:192.168.3.1  Bcast:192.168.3.255  Mask:255.255.255.0
          UP BROADCAST RUNNING PROMISC MULTICAST  MTU:1500  Metric:1
          RX packets:3387 errors:0 dropped:0 overruns:0 frame:0
          TX packets:435 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:500
          RX bytes:234206 (228.7 KiB)  TX bytes:18598 (18.1 KiB)

The "ovs-vsctl show" output of the node .3.4 is

###
Bridge inria_nepi
        Port "L4-5"
            Interface "L4-5"
                type: tunnel
                options: {remote_ip="192.33.210.16", remote_port="52703"}
        Port "L3-4"
            Interface "L3-4"
                type: tunnel
                options: {remote_ip="149.156.5.114", remote_port="39454"}
        Port inria_nepi
            Interface inria_nepi
                type: internal
                options: {local_ip="192.168.3.4", local_netmask="24"}
###

and the output of node .3.3 is:
###
Bridge inria_nepi
        Port inria_nepi
            Interface inria_nepi
                type: internal
                options: {local_ip="192.168.3.3", local_netmask="24"}
        Port "L3-4"
            Interface "L3-4"
                type: tunnel
                options: {remote_ip="141.11.0.165", remote_port="57268"}
        Port "LSENDER-3"
            Interface "LSENDER-3"
                type: tunnel
                options: {remote_ip="131.188.44.100", remote_port="38224"}
###
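
Drawn out, the LINKS in conf.mk give this topology (my own rendering of
the config above; the layout is arbitrary):

2 ---- SENDER ---- 7
        |    |
        3    6
        |    |
        4 -- 5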

I hope now the topology is clear.

> Now what you'd like to do is when a fine-grained (e.g. exact match) flow
> comes in on some non-tunnel interface, you'd like to set up a specific
> path from source (you know where this is because the flow showed up as a
> packet_in from a non-tunnel port) to the destination.  How do you know
> where the destination is (which switch and which port)?  The most
> obvious answers are that you flood until you learn it, or you ARP for it
> (in many cases, these end up being pretty similar in terms of the
> traffic).  A third option -- which I think you may be thinking of --
> would be that you think you already know the destination switch because
> your actual destination address coincides with one of your tunnel
> endpoint addresses (e.g., you're trying to ping 192.168.3.4, so the
> destination is on the host with the eth0 with 192.168.3.4).

Yes, that is true. Because I build the network topology myself, I know
the IP and also the MAC of the destination (and also of the source).

> This strikes me as fairly odd, because in this case I don't immediately
> see what you gain by having tunnels in the first place since the
> addresses on the inside are the same as the addresses on the outside.

What do you mean by this? Do you think that I should choose a different
type of connection between the nodes? The nodes are located in different
countries, and I need to create an overlay network consisting of some of
them. Within this network I need a kind of virtual cable between the
nodes, so as to create a subnet. In fact, the Makefile I mentioned above
creates tunnels between the nodes. Do you think I should have chosen a
different kind of connection?

My goal (more or less) is to run some bandwidth experiments on PlanetLab
nodes using OVS. After that, I will repeat them with physically connected
nodes.

> Once you know the destination, you want to pick a path.  How do you know
> the topology?  You could either statically configure it, or you could
> try to discover it.  But either way, this path is in terms of
> *switches*.  It's always in terms of switches.  Maybe the switches have
> IP addresses, but that's sort of irrelevant -- IP addresses don't
> forward packets; switches do.  If you want to think of it in terms of IP
> addresses, you just need to figure out which IP address corresponds to
> which switch, and then figure out which port leads to that switch.
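
To make the "path in terms of switches" point concrete, here is a rough,
untested sketch (install_path and the (dpid, out_port) list are my own
invention; it also assumes every switch on the path has a connection to
the controller, which is not the case right now since only the SENDER
has one):

###
from pox.core import core
import pox.openflow.libopenflow_01 as of

def install_path (path, packet):
  # path is an ordered list of (dpid, out_port) pairs -- the switch-level
  # version of something like SENDER -> 6 -> 5.  The match is built from
  # the first packet of the flow; in_port is left wildcarded so the same
  # match can be reused on every switch along the path.
  match = of.ofp_match.from_packet(packet)
  for dpid, out_port in path:
    msg = of.ofp_flow_mod()
    msg.match = match
    msg.actions.append(of.ofp_action_output(port = out_port))
    core.openflow.sendToDPID(dpid, msg)
###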

> If this is all right, I still don't understand why you're doing it this
> way.  If I'm wrong about something, please clue me in. :)

> (Incidentally, pieces to do most or all of the things I've mentioned
> exist.  openflow.discovery figures out which ports connect switches.
> forwarding.l3_learning ARPs for unknown destinations and learns them.
> forwarding.l2_multi constructs end-to-end paths for fine-grained flows
> in networks of switches.  You may just need to hack the right pieces
> together.)
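
In case it helps, a rough sketch (again my own, untested) of consuming
openflow.discovery's LinkEvents to build the switch-level adjacency that
such a path computation needs; it would be launched together with the
openflow.discovery component:

###
from pox.core import core

# (dpid1, dpid2) -> the port on dpid1 that leads to dpid2
adjacency = {}

def _handle_LinkEvent (event):
  link = event.link
  if event.added:
    adjacency[(link.dpid1, link.dpid2)] = link.port1
  elif event.removed:
    adjacency.pop((link.dpid1, link.dpid2), None)

def launch ():
  def start ():
    core.openflow_discovery.addListenerByName("LinkEvent",
                                              _handle_LinkEvent)
  core.call_when_ready(start, ("openflow", "openflow_discovery"))
###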


Sorry for the big post.
Thank you very much for the help.
Alexandros


2013/4/11 Murphy McCauley <[email protected]>

> I still don't entirely understand your topology; it seems like you're
> doing something a bit unusual.  Are the nodes in your topology switches or
> hosts?  Or both (that is, they're acting as hosts running services or
> hosting users or whatever, but they've also got OVS installed)?  If they're
> switches, are you doing in-band control?  Are they connected with tunnels?
>  What are these IP addresses actually the IP addresses of?
>
> I am making a bunch of guesses here.  Tell me which of them are wrong.
>
> I am guessing that they're all running OVS.  The IP addresses you've shown
> are the IP of the machine's main physical interface (e.g., eth0).  You've
> created tunnels along the paths you've shown and added the tunnel endpoints
> to OVS.
>
> Now what you'd like to do is when a fine-grained (e.g. exact match) flow
> comes in on some non-tunnel interface, you'd like to set up a specific path
> from source (you know where this is because the flow showed up as a
> packet_in from a non-tunnel port) to the destination.  How do you know
> where the destination is (which switch and which port)?  The most obvious
> answers are that you flood until you learn it, or you ARP for it (in many
> cases, these end up being pretty similar in terms of the traffic).  A third
> option -- which I think you may be thinking of -- would be that you think
> you already know the destination switch because your actual destination
> address coincides with one of your tunnel endpoint addresses (e.g., you're
> trying to ping 192.168.3.4, so the destination is on the host with the eth0
> with 192.168.3.4).  This strikes me as fairly odd, because in this case I
> don't immediately see what you gain by having tunnels in the first place
> since the addresses on the inside are the same as the addresses on the
> outside.
>
> Once you know the destination, you want to pick a path.  How do you know
> the topology?  You could either statically configure it, or you could try
> to discover it.  But either way, this path is in terms of *switches*.  It's
> always in terms of switches.  Maybe the switches have IP addresses, but
> that's sort of irrelevant -- IP addresses don't forward packets; switches
> do.  If you want to think of it in terms of IP addresses, you just need to
> figure out which IP address corresponds to which switch, and then figure
> out which port leads to that switch.
>
> If this is all right, I still don't understand why you're doing it this
> way.  If I'm wrong about something, please clue me in. :)
>
> (Incidentally, pieces to do most or all of the things I've mentioned
> exist.  openflow.discovery figures out which ports connect switches.
>  forwarding.l3_learning ARPs for unknown destinations and learns them.
>  forwarding.l2_multi constructs end-to-end paths for fine-grained flows in
> networks of switches.  You may just need to hack the right pieces together.)
>
> -- Murphy
>
> On Apr 11, 2013, at 2:58 AM, Kouvakas Alexandros wrote:
>
> I will try to explain to you what I am trying to do with a different
> example.
> Let's say that we have the topology below<Diagram1.png>
>
> Let's say that I choose to ping from host .2 to the host .5.
> For some reason (possibly due to traffic or something else) I choose to
> instruct the package to follow the path :
> 192.168.3.2 --> 192.168.3.1 --> 192.168.3.6 --> 192.168.3.5    and not the
> path 3.2 --> 3.1 --> 3.3 --> 3.4 --> 3.5.
>
> First of all I suppose that I have to find the MAC addresses which are
> associated with the IPs of each host. Probably, this can be done with
> l3_learning.py code.
>
> The big question is how can I add manually the exact path that the ping
> package should follow according to my criteria.
> In the future, I will have to check the traffic between the nodes *and
> then* decide the path that should be followed.
>
> But for now, it's enough to begin with the determination of the path in
> the code.
>
> Thanks for your help
> Alexandros
>
>
> 2013/4/11 Murphy McCauley <[email protected]>
>
>> It's not entirely clear to me what you're trying to accomplish here.  Is
>> it that when you get a packet to any of the .2/.3./.4, you want to
>> duplicate it and send it to the other two also?  If so, I think this is
>> just three rules, one which matches on each address.  The actions for these
>> are two rewrites and two outputs.  You know the IP address you want to
>> rewrite to, but you'll also need to figure out the MAC address which goes
>> with the IP address, and you'll need to figure out which port to send on.
>>  The former is the job of ARP (though you could also potentially just learn
>> it), and the latter is basic learning behavior.  You could implement these
>> in the controller (the l3_learning component does this), but it may also be
>> possible to get some help from the OFPP_NORMAL virtual output port.
>>
>> -- Murphy
>>
>> On Apr 10, 2013, at 8:53 AM, Kouvakas Alexandros wrote:
>>
>> Hello again,
>> I want to install some flows manually to the OF switch. I have an overlay
>> network with a subnet 192.168.3.1/24. The central node with the OVS is
>> the one with the IP 192.168.3.1 and there are 3 hosts connected directly to
>> the central. It's more or less like the Openflow tutorial with mininet.
>>
>> What I want to do, is when the OF switch receives a packet, let's say
>> from the node 192.168.3.2, to forward it to the nodes for example
>> 192.168.3.4 and 192.168.3.3. I would like to use the IPs of the nodes and
>> not the MAC addresses or ports. In this kind of topology there is not any
>> real usefulness, but in the future I am planning to have a more complicated
>> overlay network with many nodes connected to each other. In the latter case
>> I will try to direct the packet through a path that I will choose.
>>
>> Is there any example of how I can do that? Do you think it is better to
>> alter the code of l2_learning.py or start from the scratch?
>>
>>
>> 2013/3/29 Murphy McCauley <[email protected]>
>>
>>> On Mar 29, 2013, at 3:28 AM, Felician Nemeth wrote:
>>>
>>> >>> My node has python version 2.6.2.
>>> >>
>>> >> Note that you may run into some problems here -- POX's requirement is
>>> Python 2.7.
>>> >
>>> > I'd like to mention that it is sometimes not straightforward to install
>>> > a new python version.  In which case, pythonbrew is really useful.
>>> >
>>> > https://github.com/utahta/pythonbrew
>>>
>>> This is a good tip, so I added it to the manual.  Thanks.
>>>
>>> Between pythonbrew, PyPy, and the recent patches, I think the world for
>>> 2.6-ers is decent.  The manual could probably be refactored a bit to put
>>> all the Python-version stuff in one place, but that's a project for another
>>> time or person. :)
>>>
>>> It sure would have been nice if 2.6 had been phased out before Python 3
>>> really started getting deployed, but it's looking like that's not going to
>>> happen.  Oh well. :)
>>>
>>> -- Murphy
>>
>>
>>
>>
>> --
>> Kouvakas Alexandros
>>
>>
>>
>
>
> --
> Kouvakas Alexandros
>
>
>


-- 
Kouvakas Alexandros
