Re: [Openstack] Rebooted, now can't ping my guest

2013-03-18 Thread The King in Yellow
So...is this the root of my problem?  The MAC addresses in my quantum
port-list do not match my real MAC addresses.  Any time I restart
openvswitch/quantum-plugin-openvswitch-agent, my br-tun blocks my traffic
with incorrect flows.

What went wrong here...should these interfaces actually have fa:16:3e MAC
addresses, or is the database wrong?

root@os-network:~# quantum port-list
+--------------------------------------+------+-------------------+-------------------------------------------------------------------------------------+
| id                                   | name | mac_address       | fixed_ips                                                                           |
+--------------------------------------+------+-------------------+-------------------------------------------------------------------------------------+
| 9f9041ce-654d-4706-a208-60cf5fca5d90 |      | fa:16:3e:e2:38:da | {"subnet_id": "0617c874-9a95-40dc-ae4f-bb44eec806f6", "ip_address": "10.5.5.1"}     |
| 28108125-119c-4ce4-a3a3-537639589791 |      | fa:16:3e:a9:a3:4c | {"subnet_id": "cde12bce-eeed-4041-87e3-5a1b905b3c98", "ip_address": "10.42.36.130"} |
| 45ffdc5f-dad9-444a-aff4-3d39b607f828 |      | fa:16:3e:36:2e:54 | {"subnet_id": "0617c874-9a95-40dc-ae4f-bb44eec806f6", "ip_address": "10.5.5.2"}     |
:
+--------------------------------------+------+-------------------+-------------------------------------------------------------------------------------+
root@os-network:~# ifconfig qr-9f9041ce-65
qr-9f9041ce-65 Link encap:Ethernet  HWaddr 4e:bf:4a:70:a7:9e
  inet addr:10.5.5.1  Bcast:10.5.5.255  Mask:255.255.255.0
:
root@os-network:~# ifconfig tap45ffdc5f-da
tap45ffdc5f-da Link encap:Ethernet  HWaddr 16:83:e6:27:a4:b8
  inet addr:10.5.5.2  Bcast:10.5.5.255  Mask:255.255.255.0
:
root@os-network:~# service openvswitch-switch restart
:
root@os-network:~# ovs-ofctl dump-flows br-tun
NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=35.154s, table=0, n_packets=123, n_bytes=8526,
priority=0 actions=NORMAL
root@os-network:~# service quantum-plugin-openvswitch-agent restart
quantum-plugin-openvswitch-agent stop/waiting
quantum-plugin-openvswitch-agent start/running, process 1419
root@os-network:~# ovs-ofctl dump-flows br-tun
NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=4.022s, table=0, n_packets=3, n_bytes=418,
priority=3,tun_id=0x1,dl_dst=01:00:00:00:00:00/01:00:00:00:00:00
actions=mod_vlan_vid:1,output:1
 cookie=0x0, duration=4.056s, table=0, n_packets=5, n_bytes=548,
priority=4,in_port=1,dl_vlan=1 actions=set_tunnel:0x1,NORMAL
 cookie=0x0, duration=3.816s, table=0, n_packets=0, n_bytes=0,
priority=3,tun_id=0x1,dl_dst=fa:16:3e:36:2e:54 actions=mod_vlan_vid:1,NORMAL
 cookie=0x0, duration=3.987s, table=0, n_packets=0, n_bytes=0,
priority=3,tun_id=0x1,dl_dst=fa:16:3e:e2:38:da actions=mod_vlan_vid:1,NORMAL
 cookie=0x0, duration=4.573s, table=0, n_packets=9, n_bytes=1828,
priority=1 actions=drop
root@os-network:~# ovs-ofctl add-flow br-tun priority=3,tun_id=0x1,dl_dst=16:83:e6:27:a4:b8 actions=mod_vlan_vid:1,NORMAL
root@os-network:~# ovs-ofctl add-flow br-tun priority=3,tun_id=0x1,dl_dst=4e:bf:4a:70:a7:9e actions=mod_vlan_vid:1,NORMAL
root@os-network:~#
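If the per-MAC flows ever need to be restored by hand again, the commands can be generated from the `quantum port-list` output instead of typed one by one. A rough sketch, assuming the table format shown above (the heredoc sample stands in for real CLI output; tun_id 0x1 and vlan 1 are the values from this thread):

```shell
# Emit one ovs-ofctl add-flow command per fa:16:3e MAC found on stdin.
flow_cmds() {
  grep -Eo 'fa:16:3e(:[0-9a-f]{2}){3}' |
  while read -r mac; do
    echo "ovs-ofctl add-flow br-tun priority=3,tun_id=0x1,dl_dst=$mac,actions=mod_vlan_vid:1,NORMAL"
  done
}

# Demo against a sample row; in practice: quantum port-list | flow_cmds | sh
flow_cmds <<'EOF'
| 9f9041ce-654d-4706-a208-60cf5fca5d90 |      | fa:16:3e:e2:38:da | ... |
EOF
```

Review the generated commands before piping them to sh.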
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Rebooted, now can't ping my guest

2013-03-15 Thread The King in Yellow
Perhaps somebody could give me the contents of their quantum node's ovs-ofctl
dump-flows br-tun and I could figure out what mine *should* look like?


On Tue, Mar 12, 2013 at 4:24 PM, The King in Yellow yellowk...@gmail.com wrote:

 Okay, I have worked around my problem-- but I don't quite understand it,
 and hope somebody can help me.  It appears to be a problem with the Open
 vSwitch flows in br-tun on both compute and network.  Here are the flows as
 they are now, working.  I have manually added the priority=5 lines.
 Without those added manually on both sides, traffic from the guests
 doesn't work properly.

 root@os-network:~# ovs-ofctl dump-flows br-tun
 NXST_FLOW reply (xid=0x4):
  cookie=0x0, duration=5331.934s, table=0, n_packets=, n_bytes=171598,
 priority=3,tun_id=0x1,dl_dst=01:00:00:00:00:00/01:00:00:00:00:00
 actions=mod_vlan_vid:1,output:1
  cookie=0x0, duration=3.119s, table=0, n_packets=6, n_bytes=496,
 priority=5,dl_vlan=1 actions=NORMAL
  cookie=0x0, duration=10.759s, table=0, n_packets=0, n_bytes=0,
 priority=4,in_port=1,dl_vlan=1 actions=set_tunnel:0x1,NORMAL
  cookie=0x0, duration=5331.725s, table=0, n_packets=0, n_bytes=0,
 priority=3,tun_id=0x1,dl_dst=fa:16:3e:36:2e:54 actions=mod_vlan_vid:1,NORMAL
  cookie=0x0, duration=5331.898s, table=0, n_packets=0, n_bytes=0,
 priority=3,tun_id=0x1,dl_dst=fa:16:3e:e2:38:da actions=mod_vlan_vid:1,NORMAL
  cookie=0x0, duration=5332.499s, table=0, n_packets=3502, n_bytes=286312,
 priority=1 actions=drop
 root@os-network:~#

 root@os-compute-01:~# ovs-ofctl dump-flows br-tun
 NXST_FLOW reply (xid=0x4):
  cookie=0x0, duration=22348.618s, table=0, n_packets=20165,
 n_bytes=991767,
 priority=3,tun_id=0x1,dl_dst=01:00:00:00:00:00/01:00:00:00:00:00
 actions=mod_vlan_vid:1,output:1
  cookie=0x0, duration=177.949s, table=0, n_packets=151, n_bytes=21830,
 priority=5,dl_vlan=1 actions=NORMAL
  cookie=0x0, duration=411.826s, table=0, n_packets=80, n_bytes=9566,
 priority=4,in_port=1,dl_vlan=1 actions=set_tunnel:0x1,NORMAL
  cookie=0x0, duration=22348.567s, table=0, n_packets=, n_bytes=252718,
 priority=3,tun_id=0x1,dl_dst=fa:16:3e:ee:9e:b2 actions=mod_vlan_vid:1,NORMAL
  cookie=0x0, duration=22348.128s, table=0, n_packets=1107, n_bytes=123234,
 priority=3,tun_id=0x1,dl_dst=fa:16:3e:8d:6d:13 actions=mod_vlan_vid:1,NORMAL
  cookie=0x0, duration=22348.353s, table=0, n_packets=1494, n_bytes=124036,
 priority=3,tun_id=0x1,dl_dst=fa:16:3e:95:94:9c actions=mod_vlan_vid:1,NORMAL
  cookie=0x0, duration=22347.912s, table=0, n_packets=3334, n_bytes=425776,
 priority=3,tun_id=0x1,dl_dst=fa:16:3e:7b:e3:ee actions=mod_vlan_vid:1,NORMAL
  cookie=0x0, duration=22349.47s, table=0, n_packets=879, n_bytes=75279,
 priority=1 actions=drop
 root@os-compute-01:~#

 Here is a sample packet that would have been blocked, sniffed in GRE.  The
 yellow background (if you can see the color) is the GRE header.  Inside the
 GRE payload, MAC (red) is 1272.590f.cf56 and MAC (orange) is fa16.3e7b.e3ee;
 the two are exchanging ping packets.

 0000   00 50 56 81 44 e7 00 50 56 81 25 73 08 00 45 00  .PV.D..PV.%s..E.
 0010   00 82 64 93 40 00 40 2f ad a3 0a 0a 0a 02 0a 0a  ..d.@.@/
 0020   0a 01 20 00 65 58 00 00 00 00 12 72 59 0f cf 56  .. .eX.rY..V
 0030   fa 16 3e 95 94 9c 81 00 00 01 08 00 45 00 00 54  ...E..T
 0040   00 00 40 00 40 01 1d 00 0a 05 05 04 0a 2a 04 77  ..@.@*.w
 0050   08 00 00 9f 0e 17 00 08 17 87 3f 51 00 00 00 00  ..?Q
 0060   d3 96 00 00 00 00 00 00 10 11 12 13 14 15 16 17  
 0070   18 19 1a 1b 1c 1d 1e 1f 20 21 22 23 24 25 26 27   !#$%'
 0080   28 29 2a 2b 2c 2d 2e 2f 30 31 32 33 34 35 36 37  ()*+,-./01234567

 0000   00 50 56 81 25 73 00 50 56 81 44 e7 08 00 45 00  .PV.%s.PV.D...E.
 0010   00 82 95 e3 40 00 40 2f 7c 53 0a 0a 0a 01 0a 0a  @.@/|S..
 0020   0a 02 20 00 65 58 00 00 00 00 fa 16 3e 95 94 9c  .. .eX.
 0030   12 72 59 0f cf 56 81 00 00 01 08 00 45 00 00 54  .rY..V..E..T
 0040   1e 24 00 00 3e 01 40 dc 0a 2a 04 77 0a 05 05 04  .$...@..*.w
 0050   00 00 08 9f 0e 17 00 08 17 87 3f 51 00 00 00 00  ..?Q
 0060   d3 96 00 00 00 00 00 00 10 11 12 13 14 15 16 17  
 0070   18 19 1a 1b 1c 1d 1e 1f 20 21 22 23 24 25 26 27   !#$%'
 0080   28 29 2a 2b 2c 2d 2e 2f 30 31 32 33 34 35 36 37  ()*+,-./01234567

 The source MAC does match the MAC for his gateway, 10.5.5.1:

 root@os-network:~# ovs-ofctl show br-int
 OFPT_FEATURES_REPLY (xid=0x1): ver:0x1, dpid:862cf391d546

 n_tables:255, n_buffers:256
 features: capabilities:0xc7, actions:0xfff
  1(qr-9f9041ce-65): addr:12:72:59:0f:cf:56
  config: 0
  state:  0
 :

 ...which is the understandable problem.  That MAC address is not
 specifically entered in the OVS bridge br-tun.  Any clue why?  This persists
 across reboots, service restarts, etc...  I guess it is the
 /etc/openvswitch/conf.db that is corrupted?

 What I don't understand at this point is why removing the priority 5 flow
 on the compute

Re: [Openstack] question on the GRE Communication

2013-03-14 Thread The King in Yellow
That's the best way I found to see if GRE is up: just watching for two-way
proto gre traffic.

Here's how you can match the IP addresses *inside* the GRE packet, which
you probably will want.  Note that 0x0a050505 is hexadecimal for my desired
IP address of 10.5.5.5:

root@os-network:~# tcpdump -i eth1 'proto gre and ( ip[58:4] = 0x0a050505
or ip[62:4] = 0x0a050505 )'
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth1, link-type EN10MB (Ethernet), capture size 65535 bytes
16:18:17.434378 IP os-network-d > os-compute-01-d: GREv0, key=0x0, length
110: IP opskzlp119.snops.net > 10.5.5.5: ICMP echo request, id 21321, seq
488, length 64
16:18:17.436190 IP os-compute-01-d > os-network-d: GREv0, key=0x0, length
110: IP 10.5.5.5 > opskzlp119.snops.net: ICMP echo reply, id 21321, seq
488, length 64
16:18:18.435750 IP os-network-d > os-compute-01-d: GREv0, key=0x0, length
110: IP opskzlp119.snops.net > 10.5.5.5: ICMP echo request, id 21321, seq
489, length 64
16:18:18.437798 IP os-compute-01-d > os-network-d: GREv0, key=0x0, length
110: IP 10.5.5.5 > opskzlp119.snops.net: ICMP echo reply, id 21321, seq
489, length 64
:
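The 58/62 offsets fall out of the encapsulation layout: tcpdump's ip[N] counts from the start of the outer IP header, the GRE header here is 8 bytes because the key bit is set, and the inner frame carries a 4-byte 802.1Q tag. A quick sketch of the arithmetic, matching the capture above:

```shell
# Byte offsets of the inner src/dst IPv4 addresses inside this GRE capture:
# outer IP (20) + GRE with key (8) + inner Ethernet (14) + 802.1Q tag (4)
# puts the inner IP header at offset 46; src/dst sit 12/16 bytes into it.
outer_ip=20 gre_key=8 eth=14 dot1q=4
inner_ip=$((outer_ip + gre_key + eth + dot1q))
echo "match inner src with ip[$((inner_ip + 12)):4], dst with ip[$((inner_ip + 16)):4]"
# 10.5.5.5 as the 32-bit hex literal used in the filter:
printf '0x%02x%02x%02x%02x\n' 10 5 5 5
```

If the inner frame were untagged, drop the 4-byte 802.1Q term and the offsets become 54 and 58.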


On Thu, Mar 14, 2013 at 9:55 AM, Robert van Leeuwen 
robert.vanleeu...@spilgames.com wrote:

  Thanks for the reply. I have one more question.
  How we will check whether the tunneling is established or not?

 tcpdump can show you the GRE traffic:
 tcpdump -i ethX proto gre

 Cheers,
 Robert


[Openstack] Can't register for forums.openstack.org

2013-03-12 Thread The King in Yellow
If anybody from the forums is on here, or can forward this message to the
forum admins, I have been trying for weeks to register.  This is from
multiple IP address ranges-- work, home, and cellular.  I keep getting the
following error:

Your IP 96.60.255.159 has been blocked because it is blacklisted. For
details please see http://search.atlbl.com/search.php?q=96.60.255.159. {
IP_BLACKLISTED_INFO }
search.atlbl.com goes to a parked website, so I'm guessing the registration
blacklist is actually broken.


Re: [Openstack] Rebooted, now can't ping my guest

2013-03-08 Thread The King in Yellow
A little more information, or a completely different problem...I just
realized my network node can't ping its public gateway.  (Instead of
7.7.7.0/24, I am using 10.42.36.0/23.  My gateway is .1 (and has
non-redundant IP of .10), and my Network Node IP is .130).  This did work
at one time.

root@os-network:~# ping 10.42.36.1
PING 10.42.36.1 (10.42.36.1) 56(84) bytes of data.
From 10.42.36.130 icmp_seq=1 Destination Host Unreachable
From 10.42.36.130 icmp_seq=2 Destination Host Unreachable
From 10.42.36.130 icmp_seq=3 Destination Host Unreachable
^C
--- 10.42.36.1 ping statistics ---
5 packets transmitted, 0 received, +3 errors, 100% packet loss, time 4023ms
pipe 3
root@os-network:~# arp -an
? (10.42.36.1) at <incomplete> on qg-28108125-11
? (10.10.10.2) at 00:50:56:81:25:73 [ether] on eth1
? (10.42.38.11) at 40:55:39:25:a5:c1 [ether] on eth3
? (192.168.0.1) at 00:50:56:81:48:02 [ether] on eth0
? (10.5.5.4) at fa:16:3e:95:94:9c [ether] on tap45ffdc5f-da
? (10.42.36.10) at <incomplete> on qg-28108125-11
? (10.5.5.3) at fa:16:3e:8d:6d:13 [ether] on tap45ffdc5f-da
? (10.42.38.1) at 00:07:b4:01:b5:01 [ether] on eth3
root@os-network:~#

Nor does it work the other way:


From my gateway:

sniktc-nx10A-7010-KXDC-E12# sho run int vlan402

!Command: show running-config interface Vlan402
!Time: Fri Mar  8 14:11:45 2013

version 5.1(3)

interface Vlan402
  vrf member Enterprise
  no ip redirects
  ip address 10.42.4.10/22
  ip router eigrp 1
  ip pim sparse-mode
  glbp 402
ip 10.42.4.1
priority 200
authentication text KXDCglbp402
  no shutdown
  mtu 9216

sniktc-nx10A-7010-KXDC-E12# ping 10.42.36.130 vrf Enterprise source
10.42.36.10
PING 10.42.36.130 (10.42.36.130) from 10.42.36.10: 56 data bytes
Request 0 timed out
Request 1 timed out
Request 2 timed out
Request 3 timed out
Request 4 timed out

--- 10.42.36.130 ping statistics ---
5 packets transmitted, 0 packets received, 100.00% packet loss
sniktc-nx10A-7010-KXDC-E12#



root@os-network:~# ifconfig qg-28108125-11
qg-28108125-11 Link encap:Ethernet  HWaddr fa:16:3e:a9:a3:4c
  inet addr:10.42.36.130  Bcast:10.42.37.255  Mask:255.255.254.0
  inet6 addr: fe80::f816:3eff:fea9:a34c/64 Scope:Link
  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
  RX packets:34 errors:0 dropped:0 overruns:0 frame:0
  TX packets:354 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:0
  RX bytes:4840 (4.8 KB)  TX bytes:22619 (22.6 KB)

root@os-network:~# ovs-vsctl show
e232f8c8-1cb8-4cf5-9de5-49f41e59fd38
    Bridge br-tun
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port br-tun
            Interface br-tun
                type: internal
        Port gre-2
            Interface gre-2
                type: gre
                options: {in_key=flow, out_key=flow, remote_ip=10.10.10.2}
    Bridge br-ex
        Port qg-28108125-11
            Interface qg-28108125-11
                type: internal
        Port br-ex
            Interface br-ex
                type: internal
        Port eth2
            Interface eth2
    Bridge br-int
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port br-int
            Interface br-int
                type: internal
        Port tap45ffdc5f-da
            tag: 1
            Interface tap45ffdc5f-da
                type: internal
        Port qr-9f9041ce-65
            tag: 1
            Interface qr-9f9041ce-65
                type: internal
    ovs_version: "1.4.0+build0"
root@os-network:~#

I suppose something has gone wrong inside quantum on the network node?  Is
there a good way to blow everything away and rebuild?  I tried the
following, but the quantum-networking script doesn't really work this way,
as it assumes it is creating items (and getting the uuids in return), which
doesn't work the second time through.

ovs-vsctl del-br br-int
ovs-vsctl del-br br-ex

service openvswitch-switch restart

ovs-vsctl add-br br-int
ovs-vsctl add-br br-ex
ovs-vsctl add-port br-ex eth2
ip link set up br-ex

service quantum-plugin-openvswitch-agent restart
service quantum-dhcp-agent restart
service quantum-l3-agent restart

sh ./quantum-networking-filled.sh

service quantum-l3-agent restart
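One way around the create-only script would be to generate matching delete commands from the list output before rerunning it. A rough sketch; the table parsing assumes the CLI output format shown in this thread, and `quantum router-delete` is assumed to be the Folsom-era client subcommand (the heredoc sample stands in for real output):

```shell
# Pull the id column out of a quantum CLI table.
ids_from_table() {
  awk -F'|' 'NF > 2 && $2 ~ /[0-9a-f]+-[0-9a-f]+-/ { gsub(/ /, "", $2); print $2 }'
}

# Demo; in practice: quantum router-list | ids_from_table | while ... done
ids_from_table <<'EOF' | while read -r id; do echo "quantum router-delete $id"; done
+--------------------------------------+-----------------+
| id                                   | name            |
+--------------------------------------+-----------------+
| 9f9041ce-654d-4706-a208-60cf5fca5d90 | provider-router |
+--------------------------------------+-----------------+
EOF
```

The same pattern works for net-list/subnet-list; review the generated commands before running them.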


Here is what quantum looks like right now-- note the additional
provider-routers, I'm guessing from failed runs of the quantum-networking
script.  Note also ...771's external gateway of ...3de0, which isn't
showing up anywhere...could that be the problem?  Should that be subnet
...06f6?

root@os-network:~# quantum subnet-list
+--------------------------------------+------+---------------+------------------+
| id                                   | name | cidr          | allocation_pools |

Re: [Openstack] Rebooted, now can't ping my guest

2013-03-06 Thread The King in Yellow
On Wed, Mar 6, 2013 at 10:15 AM, Sylvain Bauza
sylvain.ba...@digimind.com wrote:

 On 05/03/2013 18:14, The King in Yellow wrote:



  I'm not clear on what the interfaces are, but qr-9f9041ce-65 is 10.5.5.1
 on the network node, so he seems to be seeing the traffic.  tap45ffdc5f-da
 is listed as 10.5.5.2, and I have no idea what function that is performing.



 qr-X is the internal router IP interface (as you correctly guessed,
 ie. 10.5.5.1) and bound to br-int.
 tap- is the DHCP server IP interface (ie. 10.5.5.2), also bound to
 br-int.

 'brctl show' gives you the output.

 Could you please provide us 'route -n' output on the network node?


os@os-network:~$ route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.42.36.1      0.0.0.0         UG    0      0        0 qg-28108125-11
0.0.0.0         10.42.38.1      0.0.0.0         UG    0      0        0 eth3
10.5.5.0        0.0.0.0         255.255.255.0   U     0      0        0 tap45ffdc5f-da
10.5.5.0        0.0.0.0         255.255.255.0   U     0      0        0 qr-9f9041ce-65
10.10.10.0      0.0.0.0         255.255.255.0   U     0      0        0 eth1
10.42.36.0      0.0.0.0         255.255.254.0   U     0      0        0 qg-28108125-11
10.42.38.0      0.0.0.0         255.255.254.0   U     1      0        0 eth3
169.254.0.0     0.0.0.0         255.255.0.0     U     1000   0        0 eth0
192.168.0.0     0.0.0.0         255.255.255.0   U     0      0        0 eth0
os@os-network:~$

Could you also make sure (with 'ovs-vsctl show') that ports 'qr-' and
 'tap' have the same tag number as the VMs? (should be tag: 1)


They are:

root@os-network:~# ovs-vsctl show
e232f8c8-1cb8-4cf5-9de5-49f41e59fd38
    Bridge br-tun
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port gre-2
            Interface gre-2
                type: gre
                options: {in_key=flow, out_key=flow, remote_ip=10.10.10.2}
    Bridge br-int
        Port br-int
            Interface br-int
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port qr-9f9041ce-65
            tag: 1
            Interface qr-9f9041ce-65
                type: internal
        Port tap45ffdc5f-da
            tag: 1
            Interface tap45ffdc5f-da
                type: internal
    Bridge br-ex
        Port qg-28108125-11
            Interface qg-28108125-11
                type: internal
        Port eth2
            Interface eth2
        Port br-ex
            Interface br-ex
                type: internal
    ovs_version: "1.4.0+build0"
root@os-network:~#


Re: [Openstack] Rebooted, now can't ping my guest

2013-03-05 Thread The King in Yellow
That didn't quite do it.  Rebooted 10.5.5.5/6 and they did not get IPs.
Brought one up manually and could not ping anything else.  I note that I'm
missing the tag statement on those recreated interfaces in ovs-vsctl
show, so I deleted the interfaces and reran the statements you gave with
tag=1 appended.  Now, my manually configured 10.5.5.5 COULD ping my
working 10.5.5.7, and I could ssh between the two.  However, 10.5.5.5 still
cannot get a DHCP address or (with a hardcoded IP) reach 10.5.5.1 on the
network node, whereas 10.5.5.7 can.  Here's how things look now:

root@os-compute-01:~# ovs-dpctl show br-int
system@br-int:
        lookups: hit:236399 missed:45742 lost:0
        flows: 1
        port 0: br-int (internal)
        port 2: qvo7dcd14b3-70
        port 9: qvo0b459c65-a0
        port 10: qvo4f36c3ea-5c
        port 11: qvo62721ee8-08
        port 12: qvocf833d2a-9e
        port 13: patch-tun (patch: peer=patch-int)
root@os-compute-01:~# ovs-vsctl show
3a52a17f-9846-4b32-b309-b49faf91bfc4
    Bridge br-int
        Port br-int
            Interface br-int
                type: internal
        Port qvo0b459c65-a0
            tag: 1
            Interface qvo0b459c65-a0
        Port qvo62721ee8-08
            tag: 1
            Interface qvo62721ee8-08
        Port qvo4f36c3ea-5c
            tag: 1
            Interface qvo4f36c3ea-5c
        Port qvocf833d2a-9e
            tag: 1
            Interface qvocf833d2a-9e
        Port qvo7dcd14b3-70
            tag: 1
            Interface qvo7dcd14b3-70
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
    Bridge br-tun
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port br-tun
            Interface br-tun
                type: internal
        Port gre-1
            Interface gre-1
                type: gre
                options: {in_key=flow, out_key=flow, remote_ip=10.10.10.1}
    ovs_version: "1.4.0+build0"
root@os-compute-01:~# brctl show
bridge name     bridge id               STP enabled     interfaces
br-int          8000.222603554b47       no              qvo0b459c65-a0
                                                        qvo4f36c3ea-5c
                                                        qvo62721ee8-08
                                                        qvo7dcd14b3-70
                                                        qvocf833d2a-9e
br-tun          8000.3abeb87cdb47       no
qbr0b459c65-a0  8000.3af05347af11       no              qvb0b459c65-a0
                                                        vnet2
qbr4f36c3ea-5c  8000.e6a5faf9a181       no              qvb4f36c3ea-5c
                                                        vnet1
qbr62721ee8-08  8000.8af675d45ed7       no              qvb62721ee8-08
                                                        vnet0
qbr7dcd14b3-70  8000.aabc605c1b2c       no              qvb7dcd14b3-70
                                                        vnet4
qbrcf833d2a-9e  8000.36e77dfc6018       no              qvbcf833d2a-9e
                                                        vnet3
root@os-compute-01:~#
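The delete/re-add with tag=1 described above can be generated for all of the qvo ports at once instead of typed per port. A sketch, using the port names from this thread; `--if-exists` keeps the delete leg from failing if a port is already gone:

```shell
# Emit the ovs-vsctl commands that re-create each qvo port on br-int
# with the VLAN tag the agent would normally have set.
retag_cmds() {
  for port in "$@"; do
    echo "ovs-vsctl -- --if-exists del-port br-int $port -- add-port br-int $port tag=1"
  done
}

# Review the output, then pipe to sh on the compute node:
retag_cmds qvo0b459c65-a0 qvo4f36c3ea-5c qvo62721ee8-08 qvocf833d2a-9e
```

This is only a stopgap for the reboot bug discussed in this thread; the agent normally manages these ports and tags itself.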


On Tue, Mar 5, 2013 at 8:02 AM, Sylvain Bauza sylvain.ba...@digimind.com wrote:

  You get it.  This is the bug I mentioned related to compute nodes.  Folks,
 anyone know the bug tracking number, btw?

 'ovs-dpctl show' shows you that only qvo7dcd14b3-70 is bridged to br-int
 (and mapped to vnet4, which I guess is the vnet device for the correct VM).

 Could you please try :
 sudo ovs-vsctl add-port br-int qvo0b459c65-a0
 sudo ovs-vsctl add-port br-int qvo4f36c3ea-5c
 sudo ovs-vsctl add-port br-int qvo62721ee8-08
 sudo ovs-vsctl add-port br-int qvocf833d2a-9e
 sudo service quantum-plugin-openvswitch-agent restart

 and check that your VMs get network back ?

 -Sylvain



Re: [Openstack] Rebooted, now can't ping my guest

2013-03-05 Thread The King in Yellow
On Tue, Mar 5, 2013 at 11:19 AM, Sylvain Bauza
sylvain.ba...@digimind.com wrote:

  You should be close to the solution. Looking at your GRE tunnels, I only
 see a one-to-one tunnel in between your compute node and your network node
 (provided your netnode is 10.10.10.1). Could you please confirm that your
 controller is either on the compute node or on the network node ?


My network node is 10.10.10.1.  My controller is an independent node, and
not on the 10.10.10.x network (following this architecture:
http://docs.openstack.org/folsom/basic-install/content/basic-install_requirements.html
although I have changed the external subnets).


 One suggestion would be to restart nova-compute and check.


I tried, and it did nothing.

I think something else might be going on...I do seem to be getting the
correct ARP entries for 10.5.5.5 on my network node:

root@os-network:/var/log/quantum# arp -an
? (10.5.5.4) at <incomplete> on tap45ffdc5f-da
? (10.5.5.3) at <incomplete> on tap45ffdc5f-da
? (10.10.10.2) at 00:50:56:81:25:73 [ether] on eth1
? (192.168.0.1) at 00:50:56:81:48:02 [ether] on eth0
? (10.5.5.7) at fa:16:3e:06:17:3c [ether] on tap45ffdc5f-da
? (10.5.5.5) at fa:16:3e:8d:2c:f9 [ether] on qr-9f9041ce-65
? (10.5.5.5) at fa:16:3e:8d:2c:f9 [ether] on tap45ffdc5f-da
? (10.42.38.1) at 00:07:b4:01:b5:01 [ether] on eth3
root@os-network:/var/log/quantum# arp -i qr-9f9041ce-65 -d 10.5.5.5
root@os-network:/var/log/quantum# arp -i tap45ffdc5f-da -d 10.5.5.5
root@os-network:/var/log/quantum# arp -an
? (10.5.5.4) at <incomplete> on tap45ffdc5f-da
? (10.5.5.3) at <incomplete> on tap45ffdc5f-da
? (10.10.10.2) at 00:50:56:81:25:73 [ether] on eth1
? (192.168.0.1) at 00:50:56:81:48:02 [ether] on eth0
? (10.5.5.7) at fa:16:3e:06:17:3c [ether] on tap45ffdc5f-da
? (10.5.5.5) at fa:16:3e:8d:2c:f9 [ether] on tap45ffdc5f-da
? (10.42.38.1) at 00:07:b4:01:b5:01 [ether] on eth3
root@os-network:/var/log/quantum# arp -an
? (10.5.5.4) at <incomplete> on tap45ffdc5f-da
? (10.5.5.3) at <incomplete> on tap45ffdc5f-da
? (10.10.10.2) at 00:50:56:81:25:73 [ether] on eth1
? (192.168.0.1) at 00:50:56:81:48:02 [ether] on eth0
? (10.5.5.7) at fa:16:3e:06:17:3c [ether] on tap45ffdc5f-da
? (10.5.5.5) at fa:16:3e:8d:2c:f9 [ether] on qr-9f9041ce-65
? (10.5.5.5) at fa:16:3e:8d:2c:f9 [ether] on tap45ffdc5f-da
? (10.42.38.1) at 00:07:b4:01:b5:01 [ether] on eth3
root@os-network:/var/log/quantum#

I'm not clear on what the interfaces are, but qr-9f9041ce-65 is 10.5.5.1 on
the network node, so he seems to be seeing the traffic.  tap45ffdc5f-da is
listed as 10.5.5.2, and I have no idea what function that is performing.

root@os-network:/var/log/quantum# ping 10.5.5.7
PING 10.5.5.7 (10.5.5.7) 56(84) bytes of data.
64 bytes from 10.5.5.7: icmp_req=1 ttl=64 time=1.93 ms
64 bytes from 10.5.5.7: icmp_req=2 ttl=64 time=2.08 ms
^C
--- 10.5.5.7 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 1.931/2.007/2.083/0.076 ms
root@os-network:/var/log/quantum# arp -an
? (10.5.5.4) at fa:16:3e:e0:17:f0 [ether] on tap45ffdc5f-da
? (10.5.5.3) at <incomplete> on tap45ffdc5f-da
? (10.10.10.2) at 00:50:56:81:25:73 [ether] on eth1
? (192.168.0.1) at 00:50:56:81:48:02 [ether] on eth0
? (10.5.5.7) at fa:16:3e:06:17:3c [ether] on tap45ffdc5f-da
? (10.5.5.5) at fa:16:3e:8d:2c:f9 [ether] on tap45ffdc5f-da
? (10.42.38.1) at 00:07:b4:01:b5:01 [ether] on eth3
root@os-network:/var/log/quantum# ping 10.5.5.5
PING 10.5.5.5 (10.5.5.5) 56(84) bytes of data.
^C
--- 10.5.5.5 ping statistics ---
9 packets transmitted, 0 received, 100% packet loss, time 8062ms

root@os-network:/var/log/quantum#


 Also, could you please tcpdump your network node on your management IP and
 check if you see GRE packets coming from your compute node (while pinging
 or trying to get a lease) ?


Threw a sniff up at http://pastebin.com/giwZysxW.  There were 4 pings from
10.5.5.7 (starting line 47), followed by 4 pings from 10.5.5.5.
Interesting to see the 10.5.5.3 and .4 references...I don't have passwords
for those images (sshed in with the keys), so I rebooted them while
sniffing here: http://pastebin.com/xpbgnhxu  The network node ARP table
never populated with .3 or .4, either.

It looks like quantum-openvswitch-agent is started:

root@os-compute-01:~# ps -ef | egrep quantum | egrep -v grep
quantum  11504 1  1 09:27 ?00:01:44 python
/usr/bin/quantum-openvswitch-agent --config-file=/etc/quantum/quantum.conf
--config-file=/etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini
--log-file=/var/log/quantum/openvswitch-agent.log
root@os-compute-01:~#


[Openstack] Fwd: Rebooted, now can't ping my guest

2013-03-05 Thread The King in Yellow
Sorry, replied directly:

-- Forwarded message --
From: The King in Yellow yellowk...@gmail.com
Date: Tue, Mar 5, 2013 at 12:56 PM
Subject: Re: [Openstack] Rebooted, now can't ping my guest
To: Sylvain Bauza sylvain.ba...@digimind.com


In fact, when I ping 10.5.5.2, tcpdump on the network node actually shows
responses...

root@os-network:/var/log/quantum# tcpdump -i tap45ffdc5f-da host 10.5.5.5
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on tap45ffdc5f-da, link-type EN10MB (Ethernet), capture size
65535 bytes
:
12:50:12.936292 ARP, Request who-has os-network.local tell 10.5.5.5, length
28
12:50:12.936329 ARP, Reply os-network.local is-at fa:16:3e:36:2e:54 (oui
Unknown), length 28
12:50:14.936338 ARP, Request who-has os-network.local tell 10.5.5.5, length
28
12:50:14.936378 ARP, Reply os-network.local is-at fa:16:3e:36:2e:54 (oui
Unknown), length 28
12:50:15.936299 ARP, Request who-has os-network.local tell 10.5.5.5, length
28
12:50:15.936343 ARP, Reply os-network.local is-at fa:16:3e:36:2e:54 (oui
Unknown), length 28
12:50:16.936008 ARP, Request who-has os-network.local tell 10.5.5.5, length
28
12:50:16.936045 ARP, Reply os-network.local is-at fa:16:3e:36:2e:54 (oui
Unknown), length 28

above, pinging 10.5.5.1 ; below, pinging 10.5.5.2

12:50:17.044161 IP 10.5.5.5 > os-network.local: ICMP echo request, id
18433, seq 0, length 64
12:50:17.044222 IP os-network.local > 10.5.5.5: ICMP echo reply, id 18433,
seq 0, length 64
12:50:18.044399 IP 10.5.5.5 > os-network.local: ICMP echo request, id
18433, seq 1, length 64
12:50:18.044455 IP os-network.local > 10.5.5.5: ICMP echo reply, id 18433,
seq 1, length 64
12:50:19.044847 IP 10.5.5.5 > os-network.local: ICMP echo request, id
18433, seq 2, length 64
12:50:19.044899 IP os-network.local > 10.5.5.5: ICMP echo reply, id 18433,
seq 2, length 64
12:50:20.045370 IP 10.5.5.5 > os-network.local: ICMP echo request, id
18433, seq 3, length 64
12:50:20.045427 IP os-network.local > 10.5.5.5: ICMP echo reply, id 18433,
seq 3


Unfortunately, the CirrOS image 10.5.5.5 is using doesn't seem to have the
arp utility, nor does 'ip -n' work.  I guess it isn't getting the ARP
replies for 10.5.5.1, and somehow got an ARP entry for 10.5.5.2 and isn't
seeing the echo replies.  Seems to imply some sort of issue from the
network node back to him?

Found ARP entries via cat /proc/1/net/arp, which seems to confirm that--
entry for 10.5.5.1 is 00:00:00:00:00:00.
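Since the CirrOS image lacks arp(8), the kernel ARP table can still be read straight out of procfs. A small sketch; the sample file below mirrors the all-zeros 10.5.5.1 entry described above, and on the guest the real path would be /proc/net/arp:

```shell
# Print IP, MAC, and device from a /proc/net/arp-style table.
# Columns: IP address, HW type, Flags, HW address, Mask, Device.
show_arp() { awk 'NR > 1 { print $1, $4, $6 }' "$1"; }

# Sample mirroring the unresolved gateway entry seen on the guest:
cat > /tmp/arp.sample <<'EOF'
IP address       HW type     Flags       HW address            Mask     Device
10.5.5.1         0x1         0x0         00:00:00:00:00:00     *        eth0
10.5.5.2         0x1         0x2         fa:16:3e:36:2e:54     *        eth0
EOF
show_arp /tmp/arp.sample
```

An all-zeros HW address with Flags 0x0 means the entry never resolved, which matches the symptom above.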


Re: [Openstack] Rebooted, now can't ping my guest

2013-03-04 Thread The King in Yellow
On Mon, Mar 4, 2013 at 3:18 AM, Sylvain Bauza sylvain.ba...@digimind.com wrote:

  Is the network node also acting as a Compute node ?


No, I am running three separate nodes-- network, compute and controller.


 The issue you were mentioning was related to the tap virtual device (for
 DHCP leases): if the network node goes down, then the DHCP lease expires
 on the VM without being re-acked, and then your instance loses its IP
 address.
 By recreating the bridges upon reboot on the network node, the tap
 interface will be back up.  On the VMs, only a DHCP request is needed, not a
 reboot (or even a compute node reboot).

 I know there is also a second bug related to virtio bridges on the compute
 nodes.  This is still a bit unclear to me, but upon compute node reboot,
 virtio bridges are also not reattached; only new instances created
 afterwards get attached.

 Could you please run 'ovs-dpctl show br-int' (provided br-int is the right
 bridge), 'ovs-vsctl show' and 'brctl show' ?


This is on the compute node, where I assume the issue is.  For the record,
I have five vms running here-- four created before rebuilding the
networking, and one after.  Only the one after is working.

root@os-compute-01:/var/log# ovs-dpctl show br-int
system@br-int:
        lookups: hit:235944 missed:33169 lost:0
        flows: 0
        port 0: br-int (internal)
        port 1: patch-tun (patch: peer=patch-int)
        port 2: qvo7dcd14b3-70
root@os-compute-01:/var/log# ovs-vsctl show
3a52a17f-9846-4b32-b309-b49faf91bfc4
    Bridge br-tun
        Port br-tun
            Interface br-tun
                type: internal
        Port gre-1
            Interface gre-1
                type: gre
                options: {in_key=flow, out_key=flow, remote_ip=10.10.10.1}
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
    Bridge br-int
        Port br-int
            Interface br-int
                type: internal
        Port qvo7dcd14b3-70
            tag: 1
            Interface qvo7dcd14b3-70
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
    ovs_version: "1.4.0+build0"
root@os-compute-01:/var/log# brctl show
bridge name     bridge id               STP enabled     interfaces
br-int          8000.222603554b47       no              qvo7dcd14b3-70
br-tun          8000.36c126165e42       no
qbr0b459c65-a0  8000.3af05347af11       no              qvb0b459c65-a0
                                                        vnet2
qbr4f36c3ea-5c  8000.e6a5faf9a181       no              qvb4f36c3ea-5c
                                                        vnet1
qbr62721ee8-08  8000.8af675d45ed7       no              qvb62721ee8-08
                                                        vnet0
qbr7dcd14b3-70  8000.aabc605c1b2c       no              qvb7dcd14b3-70
                                                        vnet4
qbrcf833d2a-9e  8000.36e77dfc6018       no              qvbcf833d2a-9e
                                                        vnet3
root@os-compute-01:/var/log#

Thank you for the assistance!  There's a lot of new stuff here I'm trying to
come up to speed on.


 On 01/03/2013 21:28, The King in Yellow wrote:

  On Fri, Mar 1, 2013 at 10:11 AM, Sylvain Bauza 
 sylvain.ba...@digimind.com wrote:

  There is a known bug for the network bridges, when rebooting :
 https://bugs.launchpad.net/quantum/+bug/1091605

 Try to delete/recreate your br-int/br-ex and then restart
 openvswitch_plugin/l3/dhcp agents, it should fix the issue.


  Thanks!  Now, I can create a new instance, and that works.  My previous
 instances don't work, however.  What do I need to do to get them reattached?


 root@os-network:/var/log/quantum# ping 10.5.5.6
 PING 10.5.5.6 (10.5.5.6) 56(84) bytes of data.
 ^C
 --- 10.5.5.6 ping statistics ---
 2 packets transmitted, 0 received, 100% packet loss, time 1008ms

 root@os-network:/var/log/quantum# ping 10.5.5.7
 PING 10.5.5.7 (10.5.5.7) 56(84) bytes of data.
 64 bytes from 10.5.5.7: icmp_req=1 ttl=64 time=2.13 ms
 64 bytes from 10.5.5.7: icmp_req=2 ttl=64 time=1.69 ms
 64 bytes from 10.5.5.7: icmp_req=3 ttl=64 time=1.93 ms
 64 bytes from 10.5.5.7: icmp_req=4 ttl=64 time=1.01 ms
 ^C
 --- 10.5.5.7 ping statistics ---
 4 packets transmitted, 4 received, 0% packet loss, time 3003ms
 rtt min/avg/max/mdev = 1.013/1.692/2.132/0.424 ms
 root@os-network:/var/log/quantum#





Re: [Openstack] Rebooted, now can't ping my guest

2013-03-01 Thread The King in Yellow
In my case, it actually appears that my VMs aren't up-- the Instances panel
says they are running, but looking at the console, it appears they aren't
getting an IP address.  This is a new instance:

Begin: Running /scripts/init-bottom ... done.
[2.849416] EXT4-fs (vda1): re-mounted. Opts: (null)
cloud-init start-local running: Thu, 28 Feb 2013 13:29:09 +. up
10.41 seconds

no instance data found in start-local

cloud-init-nonet waiting 120 seconds for a network device.

cloud-init-nonet gave up waiting for a network device.

ci-info: lo: 1 127.0.0.1   255.0.0.0   .

ci-info: eth0  : 1 .   .   fa:16:3e:e0:17:f0

route_info failed

Waiting for network configuration...

It looks like it made an OVS port, though.  This is on the compute node,
openvswitch-agent.log:


2013-02-28 08:34:19 DEBUG [quantum.agent.linux.utils]
Command: ['sudo', '/usr/bin/quantum-rootwrap',
'/etc/quantum/rootwrap.conf', 'ovs-vsctl', '--timeout=2', 'get',
'Interface', 'qvo4f36c3ea-5c', 'external_ids']
Exit code: 0
Stdout: '{attached-mac=fa:16:3e:e0:17:f0,
iface-id=4f36c3ea-5c49-4625-a830-0c81f27ba139, iface-status=active,
vm-uuid=239d3051-255e-4213-9511-af0a82fcc744}\n'
Stderr: ''
2013-02-28 08:34:19 DEBUG [quantum.agent.linux.utils] Running command:
sudo /usr/bin/quantum-rootwrap /etc/quantum/rootwrap.conf ovs-vsctl
--timeout=2 get Interface qvo62721ee8-08 external_ids
2013-02-28 08:34:19 DEBUG [quantum.agent.linux.utils]
:
root@os-compute-01:/var/log/quantum# ovs-vsctl show
3a52a17f-9846-4b32-b309-b49faf91bfc4
Bridge br-int
Port qvo62721ee8-08
tag: 1
Interface qvo62721ee8-08
Port qvo1ed73bcc-9d
tag: 1
Interface qvo1ed73bcc-9d
Port qvoce0c94a9-ef
tag: 1
Interface qvoce0c94a9-ef
Port qvo135e78dd-8e
tag: 4095
Interface qvo135e78dd-8e
Port qvof37b7a55-a3
tag: 1
Interface qvof37b7a55-a3

Port br-int
Interface br-int
type: internal
Port patch-tun
Interface patch-tun
type: patch
options: {peer=patch-int}
Port qvoaed25b41-9c
tag: 1
Interface qvoaed25b41-9c
Port qvo4f36c3ea-5c
tag: 1
Interface qvo4f36c3ea-5c
Bridge br-tun

Port patch-int
Interface patch-int
type: patch
options: {peer=patch-tun}
Port gre-1
Interface gre-1
type: gre
options: {in_key=flow, out_key=flow, remote_ip=10.10.10.1}

Port br-tun
Interface br-tun
type: internal
ovs_version: 1.4.0+build0
root@os-compute-01:/var/log/quantum#


I suppose it should be getting an address via DHCP from quantum-dhcp-agent on
the network node?  The agent was running, but there was nothing regarding this
MAC in its logs.  I restarted quantum-dhcp-agent and rebooted; no change.
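One way to see whether the guest's DHCP request ever reaches the server is to
grep the network node's syslog for the guest MAC (fa:16:3e:e0:17:f0, from the
console output above). The dnsmasq lines in the here-doc are made-up samples
standing in for syslog, purely for illustration.

```shell
# Count dnsmasq DHCP log lines mentioning the guest MAC; zero hits means the
# DHCPDISCOVER never reached the DHCP server. On the node you would run:
#   grep -ci 'fa:16:3e:e0:17:f0' /var/log/syslog
hits=$(grep -ci 'fa:16:3e:e0:17:f0' <<'EOF'
Feb 28 13:29:10 os-network dnsmasq-dhcp[1234]: DHCPDISCOVER(tap45ffdc5f-da) fa:16:3e:e0:17:f0
Feb 28 13:29:10 os-network dnsmasq-dhcp[1234]: DHCPOFFER(tap45ffdc5f-da) 10.5.5.6 fa:16:3e:e0:17:f0
EOF
)
echo "$hits"
```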

In fact, I brought up two CirrOS VMs, logged in on the console, and manually
assigned them IPs (10.5.5.10/24 and 10.5.5.11/24), and they can't ping each
other.  I would expect them to be able to, right?  They should both be
connected to the OVS switch br-int, right?
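One thing worth looking for in the ovs-vsctl show output is a port stuck on
tag 4095 -- the OVS plugin parks a port on that "dead VLAN" when it cannot
bind it to a network, and such a port will not pass traffic. A sketch that
scans a saved capture for these ports (sample input taken from the ovs-vsctl
show output above; on the node you'd pipe the live output in instead):

```shell
# Print any br-int ports parked on the dead VLAN (tag 4095) in a saved
# `ovs-vsctl show` capture. On the node: ovs-vsctl show | awk '...'
dead_ports=$(awk '/^ *Port /{port=$2} /tag: 4095/{print port}' <<'EOF'
        Port qvo135e78dd-8e
            tag: 4095
            Interface qvo135e78dd-8e
        Port qvo4f36c3ea-5c
            tag: 1
            Interface qvo4f36c3ea-5c
EOF
)
echo "$dead_ports"
```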

Any pointers?


[Openstack] Rebooted, now can't ping my guest

2013-02-27 Thread The King in Yellow
I have been working on creating an OpenStack environment according to the Basic
Install doc (http://docs.openstack.org/folsom/basic-install/content/index.html).
 It was working fine last night!  To make sure I didn't mess
anything up, I downed the controller/network/compute nodes and cloned them
(they are nested on ESXi 5.0u1).

Upon coming back up, I can't ping my guests.  I'm on the network node,
pinging 10.5.5.3, which is a running guest.  I'm guessing the GRE tunnel
isn't coming up between the compute and network nodes, since the br-*
interfaces are down?  (After this, I manually brought all the br-*
interfaces up with ip link set on both compute and network-- nothing.)
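If the GRE path is the suspect, one symptom to check is whether br-tun carries
only the single default NORMAL flow after a restart -- that would mean the OVS
agent never programmed the tunnel flows. A sketch that tests a saved
dump-flows capture for that condition (the sample line is the single-flow
output seen earlier in this thread):

```shell
# Count flow entries in an `ovs-ofctl dump-flows br-tun` capture; a single
# priority=0 NORMAL entry means the agent never installed tunnel flows.
# On the node: ovs-ofctl dump-flows br-tun | grep -c 'cookie='
flows=$(grep -c 'cookie=' <<'EOF'
NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=35.154s, table=0, n_packets=123, n_bytes=8526, priority=0 actions=NORMAL
EOF
)
if [ "$flows" -le 1 ]; then
    echo "br-tun has only the default flow"
fi
```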

I have no experience with either Quantum or Open vSwitch, so I don't know
what this is telling me.  I'm rather at a loss-- can anybody point me in
the right direction here?  I don't see anything in the quantum logs right
now that seems to indicate an error-- openvswitch-agent.log is cycling
through things like the following, though:

2013-02-27 18:19:43 DEBUG [quantum.agent.linux.utils] Running command:
sudo /usr/bin/quantum-rootwrap /etc/quantum/rootwrap.conf ovs-vsctl
--timeout=2 get Interface qr-9f9041ce-65 external_ids
2013-02-27 18:19:43 DEBUG [quantum.agent.linux.utils]
Command: ['sudo', '/usr/bin/quantum-rootwrap',
'/etc/quantum/rootwrap.conf', 'ovs-vsctl', '--timeout=2', 'get',
'Interface', 'qr-9f9041ce-65', 'external_ids']
Exit code: 0
Stdout: '{attached-mac=fa:16:3e:e2:38:da,
iface-id=9f9041ce-654d-4706-a208-60cf5fca5d90, iface-status=active}\n'
Stderr: ''
2013-02-27 18:19:43 DEBUG [quantum.agent.linux.utils] Running command:
sudo /usr/bin/quantum-rootwrap /etc/quantum/rootwrap.conf ovs-vsctl
--timeout=2 get Interface tap45ffdc5f-da external_ids
2013-02-27 18:19:43 DEBUG [quantum.agent.linux.utils]
Command: ['sudo', '/usr/bin/quantum-rootwrap',
'/etc/quantum/rootwrap.conf', 'ovs-vsctl', '--timeout=2', 'get',
'Interface', 'tap45ffdc5f-da', 'external_ids']
Exit code: 0
Stdout: '{attached-mac=fa:16:3e:36:2e:54,
iface-id=45ffdc5f-dad9-444a-aff4-3d39b607f828, iface-status=active}\n'
Stderr: ''
2013-02-27 18:19:45 DEBUG [quantum.agent.linux.utils] Running command:
sudo /usr/bin/quantum-rootwrap /etc/quantum/rootwrap.conf ovs-vsctl
--timeout=2 list-ports br-int
2013-02-27 18:19:45 DEBUG [quantum.agent.linux.utils]
Command: ['sudo', '/usr/bin/quantum-rootwrap',
'/etc/quantum/rootwrap.conf', 'ovs-vsctl', '--timeout=2', 'list-ports',
'br-int']
Exit code: 0
Stdout: 'patch-tun\nqr-9f9041ce-65\ntap45ffdc5f-da\n'
Stderr: ''
2013-02-27 18:19:45 DEBUG [quantum.agent.linux.utils] Running command:
sudo /usr/bin/quantum-rootwrap /etc/quantum/rootwrap.conf ovs-vsctl
--timeout=2 get Interface patch-tun external_ids
2013-02-27 18:19:45 DEBUG [quantum.agent.linux.utils]
Command: ['sudo', '/usr/bin/quantum-rootwrap',
'/etc/quantum/rootwrap.conf', 'ovs-vsctl', '--timeout=2', 'get',
'Interface', 'patch-tun', 'external_ids']
Exit code: 0
Stdout: '{}\n'
Stderr: ''


Here is the output of ifconfig -a, ovs-vsctl show, and ovs-ofctl for each
bridge on the network node:

root@os-network:~# ifconfig -a
br-ex Link encap:Ethernet  HWaddr 00:50:56:81:66:d8
  BROADCAST MULTICAST  MTU:1500  Metric:1
  RX packets:23 errors:0 dropped:0 overruns:0 frame:0
  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:0
  RX bytes:1380 (1.3 KB)  TX bytes:0 (0.0 B)

br-intLink encap:Ethernet  HWaddr 5e:5a:c3:07:44:42
  BROADCAST MULTICAST  MTU:1500  Metric:1
  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:0
  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

br-tunLink encap:Ethernet  HWaddr 56:2d:9f:6c:ac:4f
  BROADCAST MULTICAST  MTU:1500  Metric:1
  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:0
  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

eth0  Link encap:Ethernet  HWaddr 00:50:56:81:28:f4
  inet addr:192.168.0.2  Bcast:192.168.0.255  Mask:255.255.255.0
  inet6 addr: fe80::250:56ff:fe81:28f4/64 Scope:Link
  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
  RX packets:535 errors:0 dropped:10 overruns:0 frame:0
  TX packets:554 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:1000
  RX bytes:137612 (137.6 KB)  TX bytes:108783 (108.7 KB)

eth1  Link encap:Ethernet  HWaddr 00:50:56:81:44:e7
  inet addr:10.10.10.1  Bcast:10.10.10.255  Mask:255.255.255.0
  inet6 addr: fe80::250:56ff:fe81:44e7/64 Scope:Link
  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
  RX packets:76 errors:0 dropped:9 overruns:0 frame:0
  TX packets:40 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:1000
  RX bytes:14531 (14.5 KB)  TX