Re: [qubes-users] Oddness in sys-net's VIF startup

2016-08-22 Thread johnyjukya
> In trying to figure out why my ProxyVM has no VIF (on Qubes 3.2-testing) I
> was looking at the dmesg output of the ServiceVMs, and noticed something
> that looked a bit odd (running rapidly through vif interface numbers) in
> sys-net (fedora23 template).
> Similarly, iptables-save shows duplicate rules for the vifs:
>
> -A PR-QBS -d 10.137.1.1/32 -p udp -m udp --dport 53 -j DNAT
> --to-destination x.x.x.x
> -A PR-QBS -d 10.137.1.1/32 -p tcp -m tcp --dport 53 -j DNAT
> --to-destination x.x.x.x
> -A PR-QBS -d 10.137.1.254/32 -p udp -m udp --dport 53 -j DNAT
> --to-destination y.y.y.y
> -A PR-QBS -d 10.137.1.254/32 -p tcp -m tcp --dport 53 -j DNAT
> --to-destination y.y.y.y

Whoops, there's one each for tcp and udp, so the iptables rules are cool. 
But the duplicate interfaces still seem weird.
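For anyone wanting to check their own chain: since each address legitimately gets one udp and one tcp rule, only byte-identical repeated lines would indicate a real duplicate, and `sort | uniq -d` catches those. A sketch with the rules quoted above inlined as sample data (on a live sys-net you would feed it `iptables-save -t nat` instead, as root):

```shell
# Sample data: the PR-QBS rules quoted above.
rules='-A PR-QBS -d 10.137.1.1/32 -p udp -m udp --dport 53 -j DNAT --to-destination x.x.x.x
-A PR-QBS -d 10.137.1.1/32 -p tcp -m tcp --dport 53 -j DNAT --to-destination x.x.x.x
-A PR-QBS -d 10.137.1.254/32 -p udp -m udp --dport 53 -j DNAT --to-destination y.y.y.y
-A PR-QBS -d 10.137.1.254/32 -p tcp -m tcp --dport 53 -j DNAT --to-destination y.y.y.y'

# Byte-identical repeated rules would be printed here; the udp/tcp pairs
# differ in protocol, so nothing shows up and the chain is fine.
dups=$(printf '%s\n' "$rules" | sort | uniq -d)
echo "exact duplicates: ${dups:-none}"
```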

Also FYI, /proc/net/dev (column whitespace got mangled in the mail; re-aligned):

Inter-|   Receive                                                  |  Transmit
 face |    bytes packets errs drop fifo frame compressed multicast|    bytes packets errs drop fifo colls carrier compressed
    lo:      6788     730    0    0    0     0          0         0      6788     730    0    0    0     0       0          0
enp0s0: 229463366  164972    0    0    0     0          0         0  19397772   61227    0    0    0     0       0          0
vif104.0:  3336560   22909    0    0    0     0          0         0  83284242   57902    0    0    0     0       0          0
vif107.0:   309449    1216    0    0    0     0          0         0    133229    1840    0    0    0     0       0          0

JJ

-- 
You received this message because you are subscribed to the Google Groups 
"qubes-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to qubes-users+unsubscr...@googlegroups.com.
To post to this group, send email to qubes-users@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/qubes-users/c72427d3b153c2163d08e799f6ac165c.webmail%40localhost.
For more options, visit https://groups.google.com/d/optout.


[qubes-users] Oddness in sys-net's VIF startup

2016-08-22 Thread johnyjukya
In trying to figure out why my ProxyVM has no VIF (on Qubes 3.2-testing) I
was looking at the dmesg output of the ServiceVMs, and noticed something
that looked a bit odd (running rapidly through vif interface numbers) in
sys-net (fedora23 template).

Is this normal?:

[   42.978214] IPv6: ADDRCONF(NETDEV_UP): vif46.0: link is not ready
[   48.041487] vif vif-46-0 vif46.0: Guest Rx ready
[   48.041526] IPv6: ADDRCONF(NETDEV_CHANGE): vif46.0: link becomes ready
[  124.526920] IPv6: ADDRCONF(NETDEV_UP): vif47.0: link is not ready
[  127.644833] vif vif-47-0 vif47.0: Guest Rx ready
[  127.644877] IPv6: ADDRCONF(NETDEV_CHANGE): vif47.0: link becomes ready
[  269.128078] hrtimer: interrupt took 8109090 ns
[  308.271586] IPv6: ADDRCONF(NETDEV_UP): vif48.0: link is not ready
[  311.147584] vif vif-48-0 vif48.0: Guest Rx ready
[  311.147618] IPv6: ADDRCONF(NETDEV_CHANGE): vif48.0: link becomes ready
[  417.183606] IPv6: ADDRCONF(NETDEV_UP): vif49.0: link is not ready
[  420.387280] vif vif-49-0 vif49.0: Guest Rx ready
[  420.387321] IPv6: ADDRCONF(NETDEV_CHANGE): vif49.0: link becomes ready
[  610.402585] IPv6: ADDRCONF(NETDEV_UP): vif50.0: link is not ready
[  615.104469] vif vif-50-0 vif50.0: Guest Rx ready
[  615.104504] IPv6: ADDRCONF(NETDEV_CHANGE): vif50.0: link becomes ready
[  662.987747] IPv6: ADDRCONF(NETDEV_UP): vif51.0: link is not ready
[  665.578436] vif vif-51-0 vif51.0: Guest Rx ready
[  665.578471] IPv6: ADDRCONF(NETDEV_CHANGE): vif51.0: link becomes ready
[  868.758325] IPv6: ADDRCONF(NETDEV_UP): vif52.0: link is not ready
[  871.811326] vif vif-52-0 vif52.0: Guest Rx ready
[  871.811363] IPv6: ADDRCONF(NETDEV_CHANGE): vif52.0: link becomes ready
[ 2069.213008] IPv6: ADDRCONF(NETDEV_UP): vif62.0: link is not ready
[ 2080.027605] vif vif-62-0 vif62.0: Guest Rx ready
[ 2080.027648] IPv6: ADDRCONF(NETDEV_CHANGE): vif62.0: link becomes ready
[ 2145.558791] IPv6: ADDRCONF(NETDEV_UP): vif63.0: link is not ready
[ 2148.843475] vif vif-63-0 vif63.0: Guest Rx ready
[ 2148.843517] IPv6: ADDRCONF(NETDEV_CHANGE): vif63.0: link becomes ready
[ 2801.434340] IPv6: ADDRCONF(NETDEV_UP): vif65.0: link is not ready
[ 2805.179778] vif vif-65-0 vif65.0: Guest Rx ready
[ 2805.179817] IPv6: ADDRCONF(NETDEV_CHANGE): vif65.0: link becomes ready
[ 2969.658272] IPv6: ADDRCONF(NETDEV_UP): vif67.0: link is not ready
[ 2973.655697] vif vif-67-0 vif67.0: Guest Rx ready
[ 2973.655736] IPv6: ADDRCONF(NETDEV_CHANGE): vif67.0: link becomes ready
[ 3086.652456] IPv6: ADDRCONF(NETDEV_UP): vif69.0: link is not ready
[ 3090.197062] vif vif-69-0 vif69.0: Guest Rx ready
[ 3090.197102] IPv6: ADDRCONF(NETDEV_CHANGE): vif69.0: link becomes ready
[ 3833.675114] IPv6: ADDRCONF(NETDEV_UP): vif73.0: link is not ready
[ 3836.622944] vif vif-73-0 vif73.0: Guest Rx ready
[ 3836.622995] IPv6: ADDRCONF(NETDEV_CHANGE): vif73.0: link becomes ready
[ 3930.095741] IPv6: ADDRCONF(NETDEV_UP): vif74.0: link is not ready
[ 3933.490802] vif vif-74-0 vif74.0: Guest Rx ready
[ 3933.490840] IPv6: ADDRCONF(NETDEV_CHANGE): vif74.0: link becomes ready
[ 4017.723451] IPv6: ADDRCONF(NETDEV_UP): vif76.0: link is not ready
[ 4020.985417] vif vif-76-0 vif76.0: Guest Rx ready
[ 4020.985455] IPv6: ADDRCONF(NETDEV_CHANGE): vif76.0: link becomes ready
[ 4099.247372] IPv6: ADDRCONF(NETDEV_UP): vif77.0: link is not ready
[ 4102.325647] vif vif-77-0 vif77.0: Guest Rx ready
[ 4102.325698] IPv6: ADDRCONF(NETDEV_CHANGE): vif77.0: link becomes ready

It goes on up to vif107.
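A one-liner makes the churn easy to quantify. This sketch counts distinct vif names in a saved log; a few of the dmesg lines above are inlined as sample input, and on sys-net you would pipe `dmesg` in directly:

```shell
# Sample: a few of the dmesg lines above.
log='[   42.978214] IPv6: ADDRCONF(NETDEV_UP): vif46.0: link is not ready
[  124.526920] IPv6: ADDRCONF(NETDEV_UP): vif47.0: link is not ready
[  127.644833] vif vif-47-0 vif47.0: Guest Rx ready'

# Pull out every vifNN.0 name, de-duplicate, and count.
count=$(printf '%s\n' "$log" | grep -oE 'vif[0-9]+\.0' | sort -u | wc -l | tr -d ' ')
echo "distinct vifs seen: $count"
```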

Also, I end up with two vifs, identical except for the name, with the same IP
address, 10.137.1.1:

vif104.0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.137.1.1  netmask 255.255.255.255  broadcast 0.0.0.0
        inet6 fe80::fcff:ffff:feff:ffff  prefixlen 64  scopeid 0x20<link>
        ether fe:ff:ff:ff:ff:ff  txqueuelen 32  (Ethernet)
        RX packets 22858  bytes 3301473 (3.1 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 57831  bytes 83238868 (79.3 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

vif107.0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.137.1.1  netmask 255.255.255.255  broadcast 0.0.0.0
        inet6 fe80::fcff:ffff:feff:ffff  prefixlen 64  scopeid 0x20<link>
        ether fe:ff:ff:ff:ff:ff  txqueuelen 32  (Ethernet)
        RX packets 1207  bytes 309005 (301.7 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1831  bytes 132659 (129.5 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
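The shared address is also easy to spot mechanically. A sketch over saved `ip -4 -o addr show` output; the two sample lines below approximate what sys-net would print (the interface indices are made up for the example), and field 4 of the one-line `-o` format is the address:

```shell
# Sample: `ip -4 -o addr show` style lines for the two vifs above
# (indices 5 and 7 are hypothetical).
sample='5: vif104.0    inet 10.137.1.1/32 scope global vif104.0
7: vif107.0    inet 10.137.1.1/32 scope global vif107.0'

# Print any IPv4 address assigned to more than one interface.
shared=$(printf '%s\n' "$sample" | awk '{print $4}' | sort | uniq -d)
echo "addresses on more than one interface: $shared"
```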

Similarly, iptables-save shows duplicate rules for the vifs:

-A PR-QBS -d 10.137.1.1/32 -p udp -m udp --dport 53 -j DNAT
--to-destination x.x.x.x
-A PR-QBS -d 10.137.1.1/32 -p tcp -m tcp --dport 53 -j DNAT
--to-destination x.x.x.x
-A PR-QBS -d 10.137.1.254/32 -p udp -m udp --dport 53 -j DNAT
--to-destination y.y.y.y
-A PR-QBS -d 10.137.1.254/32 -p tcp -m tcp --dport 53 -j DNAT
--to-destination y.y.y.y

etc.

Again, is this normal behavior?  Looks like something's fighting with itself.

JJ
