I just pulled the latest head from the git repo, and ovs-openflowd now starts as expected.

My test case was:

1. Start a controller on host A:
ovs-controller ptcp:6633:<controller listen address>
It started with no output, but it listens on the specified IP and port, so I
assumed it was probably working.
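Rather than inferring from silence, a quick sanity check that the controller really is listening. This is just a sketch: the `ss` output line below is canned (with a placeholder address) so the snippet is self-contained, but on a real host you would capture the output of `ss -ltn` (or `netstat -ltn`) instead:

```shell
# Canned example of one `ss -ltn` output line (hypothetical address);
# on host A you would capture this with: ss_output=$(ss -ltn)
ss_output="LISTEN 0 10 192.0.2.1:6633 0.0.0.0:*"

# Is anything listening on the OpenFlow controller port 6633?
case "$ss_output" in
  *:6633\ *) status="controller listening on 6633" ;;
  *)         status="nothing listening on 6633" ;;
esac
echo "$status"
```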

2. Configure host B.
I connect via SSH to host B, which has two physical interfaces: eth0 and
eth1.

ovs-dpctl add-dp hostb-sw0
ovs-dpctl add-if hostb-sw0 eth1 (my SSH session uses eth0)

I'm using in-band control, so the next step is the IP configuration:
ifconfig hostb-sw0 <hostb openflow switch address>

ovs-openflowd hostb-sw0 tcp:<controller listen address>
ovs-openflowd reports that it connected, with no errors.

After that I disconnect my SSH session and reconnect using hostb-sw0's IP
address; the connection succeeds.
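For anyone trying to reproduce this, here is the whole host B setup in one place. This is only a sketch of what I ran, assuming root privileges and the Open vSwitch kernel module already loaded; the bracketed addresses are placeholders:

```
# Create the datapath and attach the second NIC (my SSH session rides on eth0).
ovs-dpctl add-dp hostb-sw0
ovs-dpctl add-if hostb-sw0 eth1

# In-band control: the switch's own IP lives on the datapath's internal device.
ifconfig hostb-sw0 <hostb openflow switch address>

# Point the switch at the controller on host A.
ovs-openflowd hostb-sw0 tcp:<controller listen address>
```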

Then I configured eth0 to serve as a datapath port for the OpenFlow switch as well:

ifconfig eth0 0.0.0.0 up
ovs-dpctl add-if hostb-sw0 eth0

ovs-dpctl show
sys...@dp0:
        flows: cur:15, soft-max:512, hard-max:262144
        ports: cur:3, max:1024
        groups: max:16
        lookups: frags:0, hit:2516, missed:1598, lost:726
        queues: max-miss:100, max-action:100
        port 0: vm1-sw0 (internal)
        port 1: eth0
        port 2: eth1

And it works fine.
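As an aside, the lost:726 counter in that output is already suspicious: it counts packets the datapath wanted to hand up to userspace but dropped because the queue overflowed. A small, hypothetical shell helper to pull the counters out of the lookups line, here fed the exact line from the output above:

```shell
# The lookups line exactly as printed by `ovs-dpctl show` above.
line="lookups: frags:0, hit:2516, missed:1598, lost:726"

# Extract each counter with plain parameter expansion (no external tools).
hit=${line#*hit:};       hit=${hit%%,*}
missed=${line#*missed:}; missed=${missed%%,*}
lost=${line#*lost:}

echo "hit=$hit missed=$missed lost=$lost"
```

This prints hit=2516 missed=1598 lost=726; a nonzero lost count means misses are being dropped before ovs-openflowd ever sees them.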

When I perform the same steps on host C, I lose communication with both host B
and host C, and the controller writes these log messages:

Jul 15 13:04:19|00001|timeval|WARN|45 ms poll interval (0 ms user, 30 ms system) is over 72 times the weighted mean interval 1 ms (237 samples)
Jul 15 13:04:19|00002|timeval|WARN|context switches: 0 voluntary, 1 involuntary
Jul 15 13:04:19|00003|coverage|INFO|Event coverage (epoch 237/entire run), hash=22c700f9:
Jul 15 13:04:19|00004|coverage|INFO|flow_extract                 2 / 252
Jul 15 13:04:19|00005|coverage|INFO|poll_fd_wait                 5 / 1014
Jul 15 13:04:19|00006|coverage|INFO|rconn_queued                 2 / 256
Jul 15 13:04:19|00007|coverage|INFO|rconn_sent                   2 / 256
Jul 15 13:04:19|00008|coverage|INFO|util_xalloc                 21 / 2115
Jul 15 13:04:19|00009|coverage|INFO|vconn_received               2 / 257
Jul 15 13:04:19|00010|coverage|INFO|vconn_sent                   2 / 258
Jul 15 13:04:19|00011|coverage|INFO|hmap_expand                  0 / 1
Jul 15 13:04:19|00012|coverage|INFO|mac_learning_expired         0 / 6
Jul 15 13:04:19|00013|coverage|INFO|mac_learning_learned         0 / 18
Jul 15 13:04:19|00014|coverage|INFO|pstream_open                 0 / 1
Jul 15 13:04:19|00015|coverage|INFO|vconn_open                   0 / 1
Jul 15 13:04:19|00016|coverage|INFO|82 events never hit
Jul 15 13:06:00|00017|timeval|WARN|11 ms poll interval (0 ms user, 10 ms system) is over 17 times the weighted mean interval 1 ms (499 samples)
Jul 15 13:06:00|00018|timeval|WARN|context switches: 0 voluntary, 1 involuntary
Jul 15 13:06:00|00019|coverage|INFO|Event coverage (epoch 499/entire run), hash=57205ddd:
Jul 15 13:06:00|00020|coverage|INFO|flow_extract               100 / 741
Jul 15 13:06:00|00021|coverage|INFO|poll_fd_wait                 6 / 2418
Jul 15 13:06:00|00022|coverage|INFO|rconn_queued               100 / 747
Jul 15 13:06:00|00023|coverage|INFO|rconn_sent                 100 / 747
Jul 15 13:06:00|00024|coverage|INFO|util_xalloc                412 / 5517
Jul 15 13:06:00|00025|coverage|INFO|vconn_received             100 / 750
Jul 15 13:06:00|00026|coverage|INFO|vconn_sent                 100 / 750
Jul 15 13:06:00|00027|coverage|INFO|hmap_expand                  0 / 1
Jul 15 13:06:00|00028|coverage|INFO|mac_learning_expired         0 / 13
Jul 15 13:06:00|00029|coverage|INFO|mac_learning_learned         0 / 36
Jul 15 13:06:00|00030|coverage|INFO|pstream_open                 0 / 1
Jul 15 13:06:00|00031|coverage|INFO|vconn_open                   0 / 1
Jul 15 13:06:00|00032|coverage|INFO|82 events never hit
Jul 15 13:06:00|00033|timeval|WARN|13 ms poll interval (10 ms user, 10 ms system) is over 11 times the weighted mean interval 1 ms (500 samples)
Jul 15 13:06:00|00034|timeval|WARN|context switches: 0 voluntary, 1 involuntary
Jul 15 13:06:00|00035|coverage|INFO|Skipping details of duplicate event coverage for hash=57205ddd in epoch 500
Jul 15 13:06:01|00036|timeval|WARN|12 ms poll interval (0 ms user, 10 ms system) is over 10 times the weighted mean interval 1 ms (1042 samples)
Jul 15 13:06:01|00037|coverage|INFO|Skipping details of duplicate event coverage for hash=57205ddd in epoch 1042
Jul 15 13:06:29|00038|rconn|WARN|tcp:x.x.x.x:50924: connection dropped (Connection reset by peer)

Can anybody explain what happened?
Packets are being lost, and afterwards my physical switch (which is not an
OpenFlow switch) reports huge traffic on all of its ports.

Thanks for your answers,
Lenard

2010/7/15 Pásztor Lénárd Zoltán <[email protected]>

> Hi All,
>
> I've started testing Open vSwitch in a virtualized environment.
> I followed this guide:
> http://openvswitch.org/cgi-bin/gitweb.cgi?p=openvswitch;a=blob_plain;f=INSTALL.OpenFlow;hb=HEAD
>
> When I try to start ovs-openflowd, I get the following messages:
>
> ovs-openflowd dp0 tcp:x.x.x.x
> Jul 15 11:12:17|00001|openflowd|INFO|Open vSwitch version 1.0.1
> Jul 15 11:12:17|00002|openflowd|INFO|OpenFlow protocol version 0x01
> Jul 15 11:12:17|00003|ofproto|INFO|using datapath ID 0000002320b94533
> Jul 15 11:12:17|00004|rconn|INFO|tcp:x.x.x.x: connecting...
> Jul 15 11:12:17|00005|netdev|WARN|attempted to create a device that may not be created: eth1
> Jul 15 11:12:17|00006|ofproto|WARN|ignoring port eth1 (1) because netdev eth1 cannot be opened (No such device)
> Jul 15 11:12:17|00007|ofproto|WARN|packet-in on unknown port 1
> Jul 15 11:12:17|00008|rconn|INFO|tcp:x.x.x.x: connected
> Jul 15 11:12:17|00009|ofproto|WARN|packet-in on unknown port 1
>
> more information about dp0:
>
> ovs-dpctl show
> sys...@dp0:
>         flows: cur:3, soft-max:512, hard-max:262144
>         ports: cur:2, max:1024
>         groups: max:16
>         lookups: frags:0, hit:61126, missed:21868, lost:6060
>         queues: max-miss:100, max-action:100
>         port 0: dp0 (internal)
>         port 1: eth1
>
>
> I have multiple dom0 hosts with multiple physical interfaces (used for availability
> and bonding) and many hosted guests. I would like to set up virtual switches
> and networks for our guest OSes, and to manage them centrally if possible.
> Will OpenFlow be a good fit for me?
>
> Thanks for your answers,
> Lenard
>
>


-- 
Regards,

 Lénárd
_______________________________________________
discuss mailing list
[email protected]
http://openvswitch.org/mailman/listinfo/discuss_openvswitch.org
