Re: [Dnsmasq-discuss] DHCPv6 doesn't work on Linux interfaces enslaved to a VRF

2023-10-11 Thread Simon Kelley

On 10/10/2023 11:25, Luci Stanescu wrote:

Hi Simon,


On 10 Oct 2023, at 00:17, Simon Kelley  wrote:

I've implemented option 1 here and it's currently running and being dogfooded 
on my home network. There are no VRF interfaces there: this is a test 
mainly to check that nothing breaks. So far, so good.


The patch I used is attached. It would be interesting to see if it 
solves the problem for you.


Many thanks for this! I can confirm that it works as expected with 
VRF-enslaved interfaces now.


Excellent. I've elaborated the patch slightly so that it logs when doing 
the fixup. If it turns out that there are cases where it's doing that 
inappropriately, the log will make it easier to see what's going on.


The patch is in the git public repo master branch now, so if anyone on 
the lists starts seeing "Working around kernel bug." messages, 
please reply here ASAP.
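For anyone curious what the fixup amounts to, here is a minimal Python sketch 
of the idea. dnsmasq does this in C inside its DHCPv6 receive path; the helper 
name and the exact log wording below are purely illustrative.

import socket
import struct

# Sketch of the workaround: if the kernel reports the VRF master in
# sin6_scope_id, replace it with the real L3 interface index taken from the
# IPV6_PKTINFO ancillary data before building the reply.
def fixup_source(source, cmsg_list):
    addr, port, flowinfo, scope_id = source
    for level, ctype, data in cmsg_list:
        if level == socket.IPPROTO_IPV6 and ctype == socket.IPV6_PKTINFO:
            _, ifindex = struct.unpack('@16sI', data)
            if ifindex and ifindex != scope_id:
                # Roughly the situation the new log line reports.
                print("working around kernel bug: scope_id {} -> {}".format(
                    scope_id, ifindex))
                scope_id = ifindex
    return (addr, port, flowinfo, scope_id)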




2. Finding authoritative information that the interface index from 
IPV6_PKTINFO is always set to the L3 interface on which a datagram 
was received. The kernel mailing list might be a start? I'd certainly 
be happy to help think about and test various scenarios.


Please enquire about 2.


I've tested chains of bond- and bridge-enslaved interfaces (e.g. veth in 
bond in bridge in bond) and ipi6_ifindex seems to be set to the 
highest-up master, excluding VRF devices, so that seems promising and 
should cover the empirical bit. Joining a multicast group on an enslaved 
interface (if the master isn't a VRF) doesn't seem to work anyway.


I'll ask on the netdev kernel mailing list and see if I can get any 
assurances, but I'll have to wait for my DMARC record to expire first.




Thanks for that.


Simon.


Cheers,
Luci

--
Luci Stanescu
Information Security Consultant



___
Dnsmasq-discuss mailing list
Dnsmasq-discuss@lists.thekelleys.org.uk
https://lists.thekelleys.org.uk/cgi-bin/mailman/listinfo/dnsmasq-discuss


Re: [Dnsmasq-discuss] DHCPv6 doesn't work on Linux interfaces enslaved to a VRF

2023-10-10 Thread Luci Stanescu via Dnsmasq-discuss
Hi Simon,

> On 10 Oct 2023, at 00:17, Simon Kelley  wrote:
> 
> I've implemented option 1 here and it's currently running and being dogfooded on my 
> home network. There are no VRF interfaces there: this is a test mainly to 
> check that nothing breaks. So far, so good.
> 
> The patch I used is attached. It would be interesting to see if it solves the 
> problem for you.

Many thanks for this! I can confirm that it works as expected with VRF-enslaved 
interfaces now.

>> 2. Finding authoritative information that the interface index from 
>> IPV6_PKTINFO is always set to the L3 interface on which a datagram was 
>> received. The kernel mailing list might be a start? I'd certainly be happy to 
>> help think about and test various scenarios.
> 
> Please enquire about 2.

I've tested chains of bond- and bridge-enslaved interfaces (e.g. veth in bond 
in bridge in bond) and ipi6_ifindex seems to be set to the highest-up master, 
excluding VRF devices, so that seems promising and should cover the empirical 
bit. Joining a multicast group on an enslaved interface (if the master isn't a 
VRF) doesn't seem to work anyway.
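In case anyone wants to repeat that check, a trimmed-down probe along the 
lines of the receiver script further down the thread is enough. 'veth1' and 
port 2000 below are placeholders for whatever enslaved interface is under test:

import socket
import struct

# For every datagram received, print the interface reported by IPV6_PKTINFO
# (ipi6_ifindex) next to the one recvmsg() puts in sin6_scope_id, so the
# "highest-up master, excluding VRF devices" behaviour can be checked directly.
s = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
s.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_RECVPKTINFO, 1)
s.setsockopt(socket.SOL_SOCKET, socket.SO_BINDTODEVICE, b'veth1')
s.bind(('', 2000, 0, 0))

while True:
    data, cmsg_list, flags, source = s.recvmsg(4096, 4096)
    for level, ctype, cmsg_data in cmsg_list:
        if level == socket.IPPROTO_IPV6 and ctype == socket.IPV6_PKTINFO:
            _, ifindex = struct.unpack('@16sI', cmsg_data)
            scope = source[3]
            print("ipi6_ifindex={} sin6_scope_id={}".format(
                socket.if_indextoname(ifindex),
                socket.if_indextoname(scope) if scope else 0))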

I'll ask on the netdev kernel mailing list and see if I can get any assurances, 
but I'll have to wait for my DMARC record to expire first.

Cheers,
Luci

-- 
Luci Stanescu
Information Security Consultant




Re: [Dnsmasq-discuss] DHCPv6 doesn't work on Linux interfaces enslaved to a VRF

2023-10-09 Thread Simon Kelley

On 09/10/2023 11:40, Luci Stanescu wrote:

Hi Simon,

Thank you for your response and your openness to this issue! My thoughts 
below, inline (and apologies for the rather long email).



On 9 Oct 2023, at 01:05, Simon Kelley  wrote:
1) Even if this is a kernel bug, kernel bug fixes take a long time to 
spread, so working around them in dnsmasq is a good thing to do, as 
long as it doesn't leave us with long-term technical debt. This 
wouldn't be the first time a kernel bug has been worked around.


I agree, that's why I've opened this discussion here.


2) https://docs.kernel.org/networking/vrf.html says:

Applications that are to work within a VRF need to bind their socket 
to the VRF device:

setsockopt(sd, SOL_SOCKET, SO_BINDTODEVICE, dev, strlen(dev)+1);
or to specify the output device using cmsg and IP_PKTINFO.

Which kind of implies that this might not be a kernel bug, rather 
we're just not doing what's required to work with VRF.


I'm not convinced this isn't a kernel bug. The VRF implementation has 
been developed in stages over several years. It is indeed the case that 
initially the sockets had to be bound to the VRF device or to specify it 
via IP_PKTINFO/IPV6_PKTINFO. But then came support for 
net.ipv4.*_l3mdev_accept sysctls (which confusingly also affect AF_INET6 
sockets) as well as a series of patches in 2018 that allowed specifying 
a VRF slave device for several operations. Before that series of 
patches, it certainly made sense for sin6_scope_id in msg_name for 
recvmsg() to be the VRF device (it had to be) – but I'm not convinced it 
shouldn't have been changed after the rules for connect() and sendmsg() 
were relaxed. The thing is, as it stands, the kernel code works well for 
everything except IPv6 link-local communication, so it wouldn't be 
surprising for this to be a simple oversight.


I had tracked this down while trying to figure out what's going on here 
and detailed a bit in the kernel bug report, which you can find here:


https://lore.kernel.org/netdev/06798029-660d-454e-8628-3a9b9e1af...@safebits.tech/T/#u


Setting the device to send to using IP_PKTINFO rather than relying on 
the flowinfo field in the destination address would be quite possible, 
and the above implies that it will work.


Apologies for being pernickety, but it's the scope_id field which is 
relevant here, rather than flowinfo. And since we're talking AF_INET6, 
shouldn't it be IPv6_PKTINFO?


My bad. I meant scope_id. It was late :(




I *think* it should work. I have been unable to find a situation where 
the scope received in the IPV6_PKTINFO cmsg to recvmsg() cannot be used 
to reliably send a response out the same interface (which I believe is 
exactly what DHCPv6 code will always want to do), but my word is 
certainly no guarantee. More about this towards the end of the email.


However, it'll only work as long as you either do NOT specify a scope in the 
destination of the sendmsg() call or the scope you specify is exactly the 
same as in the IPV6_PKTINFO ancillary message. Specifically, you cannot 
specify the VRF master device index. I've adapted my earlier scripts to 
test this and I've pasted them at the end of this email.



This brings us on to

3) IPv4. Does DHCPv4 work with VRF devices? It would be nice to test, 
and fix any similar problems in the same patch. Interestingly, the 
DHCPv4 code already sets the outgoing device via IP_PKTINFO (there 
being no flowinfo field in an IPv4 sockaddr) so it stands a chance of 
just working.


DHCPv4 works just fine. My dnsmasq configuration uses 'interface' to 
specify the VRF slave interface (which in my case is a bridge) and 
DHCPv4 messages are sent out correctly.


Copying the interface index into the flowinfo of the destination or 
setting IP_PKTINFO are both easy patches to make and try. The 
difficult bit is being sure that they won't break existing installations.


My tests seem to imply that leaving the received scope_id field (which 
is the VRF master device index) unchanged and setting IPV6_PKTINFO won't 
work. Three options seem to work:
1. Overwrite scope_id of source address from recvmsg() with the 
interface index from the received IPV6_PKTINFO.
2. When performing the sendmsg(), set the scope_id of the 
destination to 0 and add IPV6_PKTINFO with the empty address (since 
the received IPV6_PKTINFO specifies the multicast address and that won't 
do as a source) and the interface index from the received IPV6_PKTINFO.
3. If the socket is bound to an L3 interface (not the VRF master 
device), just set the scope_id in the destination to 0 and IPV6_PKTINFO 
is not required. I'm not sure this'll work for dnsmasq, but I thought of 
including it for the sake of completeness.




This is good information.

I've implemented option 1 here and it's currently running and being dogfooded 
on my home network. There are no VRF interfaces there: this is a test mainly 
to check that nothing breaks. So far, so good.

Re: [Dnsmasq-discuss] DHCPv6 doesn't work on Linux interfaces enslaved to a VRF

2023-10-09 Thread Luci Stanescu via Dnsmasq-discuss
Hi Simon,

Thank you for your response and your openness to this issue! My thoughts below, 
inline (and apologies for the rather long email).

> On 9 Oct 2023, at 01:05, Simon Kelley  wrote:
> 1) Even if this is a kernel bug, kernel bug fixes take a long time to 
> spread, so working around them in dnsmasq is a good thing to do, as long as 
> it doesn't leave us with long-term technical debt. This wouldn't be the first 
> time a kernel bug has been worked around.

I agree, that's why I've opened this discussion here.

> 2) https://docs.kernel.org/networking/vrf.html says:
> 
> Applications that are to work within a VRF need to bind their socket to the 
> VRF device:
> setsockopt(sd, SOL_SOCKET, SO_BINDTODEVICE, dev, strlen(dev)+1);
> or to specify the output device using cmsg and IP_PKTINFO.
> 
> Which kind of implies that this might not be a kernel bug, rather we're just 
> not doing what's required to work with VRF.
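For reference, the documented bind looks like this from Python; 'myvrf' is 
simply the VRF device name used in the reproduction scripts further down this 
thread:

import socket

# Put the socket inside the VRF, as the kernel VRF documentation describes.
s = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
s.setsockopt(socket.SOL_SOCKET, socket.SO_BINDTODEVICE, b'myvrf')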

I'm not convinced this isn't a kernel bug. The VRF implementation has been 
developed in stages over several years. It is indeed the case that initially 
the sockets had to be bound to the VRF device or to specify it via 
IP_PKTINFO/IPV6_PKTINFO. But then came support for net.ipv4.*_l3mdev_accept 
sysctls (which confusingly also affect AF_INET6 sockets) as well as a series of 
patches in 2018 that allowed specifying a VRF slave device for several 
operations. Before that series of patches, it certainly made sense for 
sin6_scope_id in msg_name for recvmsg() to be the VRF device (it had to be) – 
but I'm not convinced it shouldn't have been changed after the rules for 
connect() and sendmsg() were relaxed. The thing is, as it stands, the kernel 
code works well for everything except IPv6 link-local communication, so it 
wouldn't be surprising for this to be a simple oversight.

I had tracked this down while trying to figure out what's going on here and 
detailed a bit in the kernel bug report, which you can find here:

https://lore.kernel.org/netdev/06798029-660d-454e-8628-3a9b9e1af...@safebits.tech/T/#u

> Setting the device to send to using IP_PKTINFO rather than relying on the 
> flowinfo field in the destination address would be quite possible, and the 
> above implies that it will work.

Apologies for being pernickety, but it's the scope_id field which is relevant 
here, rather than flowinfo. And since we're talking AF_INET6, shouldn't it be 
IPv6_PKTINFO?

I *think* it should work. I have been unable to find a situation where the 
scope received in the IPV6_PKTINFO cmsg to recvmsg() cannot be used to reliably 
send a response out the same interface (which I believe is exactly what DHCPv6 
code will always want to do), but my word is certainly no guarantee. More about 
this towards the end of the email.

However, it'll only work as long as you either do NOT specify a scope in the 
destination of the sendmsg() call or the scope you specify is exactly the same 
as in the IPV6_PKTINFO ancillary message. Specifically, you cannot specify the 
VRF master device index. I've adapted my earlier scripts to test this and I've 
pasted them at the end of this email.

> This brings us on to
> 
> 3) IPv4. Does DHCPv4 work with VRF devices? It would be nice to test, and fix 
> any similar problems in the same patch. Interestingly, the DHCPv4 code 
> already sets the outgoing device via IP_PKTINFO (there being no flowinfo 
> field in an IPv4 sockaddr) so it stands a chance of just working.

DHCPv4 works just fine. My dnsmasq configuration uses 'interface' to specify 
the VRF slave interface (which in my case is a bridge) and DHCPv4 messages are 
sent out correctly.
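For completeness, pinning the outgoing device on an IPv4 socket looks roughly 
like this from Python. The in_pktinfo structure is packed by hand, IP_PKTINFO 
falls back to its Linux value of 8 where the constant isn't exposed, and the 
interface name and destination address are placeholders rather than anything 
dnsmasq actually does:

import socket
import struct

# struct in_pktinfo is { int ipi_ifindex; struct in_addr ipi_spec_dst;
# struct in_addr ipi_addr; }; leaving the addresses zeroed lets the kernel
# pick the source while the interface index pins the outgoing device.
IP_PKTINFO = getattr(socket, 'IP_PKTINFO', 8)

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
pktinfo = struct.pack('@I4s4s', socket.if_nametoindex('veth1'),
                      bytes(4), bytes(4))
s.sendmsg([b'payload'], [(socket.IPPROTO_IP, IP_PKTINFO, pktinfo)],
          0, ('192.0.2.1', 2000))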

> Copying the interface index into the flowinfo of the destination or setting 
> IP_PKTINFO are both easy patches to make and try. The difficult bit is being 
> sure that they won't break existing installations.

My tests seem to imply that leaving the received scope_id field (which is the 
VRF master device index) unchanged and setting IPV6_PKTINFO won't work. Three 
options seem to work:
1. Overwrite scope_id of source address from recvmsg() with the 
interface index from the received IPV6_PKTINFO.
2. When performing the sendmsg(), set the scope_id of the destination 
to 0 and add IPV6_PKTINFO with the empty address (since the received 
IPV6_PKTINFO specifies the multicast address and that won't do as a source) and 
the interface index from the received IPV6_PKTINFO (see the sketch after this 
list).
3. If the socket is bound to an L3 interface (not the VRF master 
device), just set the scope_id in the destination to 0 and IPV6_PKTINFO is not 
required. I'm not sure this'll work for dnsmasq, but I thought of including it 
for the sake of completeness.
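To make option 2 concrete, here is a minimal sketch of such a reply, assuming 
'source' and 'cmsg_list' come from a recvmsg() call like the ones in the 
scripts below; the payload is a placeholder:

import socket
import struct

# Reply with the destination scope_id forced to 0 and an IPV6_PKTINFO
# ancillary message carrying the unspecified address (so the kernel picks the
# source address) plus the interface index taken from the received
# IPV6_PKTINFO.
def reply(s, source, cmsg_list, payload=b'reply'):
    for level, ctype, data in cmsg_list:
        if level == socket.IPPROTO_IPV6 and ctype == socket.IPV6_PKTINFO:
            _, ifindex = struct.unpack('@16sI', data)
            pktinfo = struct.pack('@16sI', bytes(16), ifindex)
            dest = (source[0], source[1], 0, 0)
            s.sendmsg([payload],
                      [(socket.IPPROTO_IPV6, socket.IPV6_PKTINFO, pktinfo)],
                      0, dest)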

I've adapted my scripts slightly to allow easier testing of the behaviour. The 
receiver socket now binds to the VRF device instead (you can even not bind to 
any device and just set the net.ipv4.udp_l3mdev_accept sysctl to 1). The 
interface configuration is as before:

ip link add myvrf type vrf table 42

Re: [Dnsmasq-discuss] DHCPv6 doesn't work on Linux interfaces enslaved to a VRF

2023-10-08 Thread Simon Kelley



On 07/10/2023 14:02, Luci Stanescu via Dnsmasq-discuss wrote:

Hi,

I've discovered that DHCPv6 doesn't work on Linux interfaces enslaved to 
a VRF. Now, I believe this to be a bug in the kernel and I've reported 
it, but in case you'd like to implement a workaround in dnsmasq, this is 
quite trivial, as I'll explain in a bit.


The issue is that when a datagram is received from an interface enslaved 
to a VRF device, the sin6_scope_id of the msg_name field returned from 
recvmsg() points to the interface index of the VRF device, instead of 
the enslaved device. Unfortunately, this is completely useless when the 
source address is a link-local address, as a subsequent sendmsg() which 
specifies that scope will fail with ENETUNREACH, as expected, 
considering the interface index of the enslaved device would have to be 
specified as the scope (there can of course be multiple interfaces 
enslaved to a single VRF device).


With DHCPv6, a DHCPSOLICIT is received from a link-local address and a 
DHCPADVERTISE is sent back to that source address, with the scope 
taken from the msg_name field returned by 
recvmsg(). I've debugged this using strace, as dnsmasq doesn't print any 
errors when the send fails. Here is the recvmsg() call:


recvmsg(6, {msg_name={sa_family=AF_INET6, sin6_port=htons(546), 
sin6_flowinfo=htonl(0), inet_pton(AF_INET6, "fe80::216:3eff:fed0:4e7d", 
&sin6_addr), sin6_scope_id=if_nametoindex("myvrf")}, msg_namelen=28, 
msg_iov=[{iov_base="\1\203\273\n\0\1\0\16\0\1\0\1,\262\320k\0\26>\320N}\0\6\0\10\0\27\0\30\0'"..., 
iov_len=548}], msg_iovlen=1, msg_control=[{cmsg_len=36, cmsg_level=SOL_IPV6, 
cmsg_type=0x32}], msg_controllen=40, msg_flags=0}, MSG_PEEK|MSG_TRUNC) = 56


and the sending of the response later on:

sendto(6, 
"\2\203\273\n\0\1\0\16\0\1\0\1,\262\320k\0\26>\320N}\0\2\0\16\0\1\0\1,\262"..., 114, 0, {sa_family=AF_INET6, sin6_port=htons(546), sin6_flowinfo=htonl(0), inet_pton(AF_INET6, "fe80::216:3eff:fed0:4e7d", _addr), sin6_scope_id=if_nametoindex("myvrf")}, 28) = -1 ENETUNREACH (Network is unreachable)


Please notice that the scope is the index of the VRF master device, so 
the sendto() call is certain to fail.


When reporting the issue as a kernel bug, I reproduced the issue using 
local communication with unicast and a couple of simple Python scripts. 
Here's reproduction using local communication, but with multicast, to 
make it closer to home:


First, set up a VRF device and a veth pair, with one end enslaved to the 
VRF master (on which we'll be receiving datagrams) and the other end 
used to send datagrams.


ip link add myvrf type vrf table 42
ip link set myvrf up
ip link add veth1 type veth peer name veth2
ip link set veth1 master myvrf up
ip link set veth2 up

# ip link sh dev myvrf
110: myvrf:  mtu 65575 qdisc noqueue state UP 
mode DEFAULT group default qlen 1000

     link/ether da:ca:c9:2b:6e:02 brd ff:ff:ff:ff:ff:ff
# ip addr sh dev veth1
112: veth1@veth2:  mtu 1500 qdisc 
noqueue master myvrf state UP group default qlen 1000

     link/ether 32:63:cf:f5:08:35 brd ff:ff:ff:ff:ff:ff
     inet6 fe80::3063:cfff:fef5:835/64 scope link
        valid_lft forever preferred_lft forever
# ip addr sh dev veth2
111: veth2@veth1:  mtu 1500 qdisc 
noqueue state UP group default qlen 1000

     link/ether 1a:8f:5a:85:3c:c0 brd ff:ff:ff:ff:ff:ff
     inet6 fe80::188f:5aff:fe85:3cc0/64 scope link
        valid_lft forever preferred_lft forever

The receiver:
import socket
import struct

s = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
s.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_RECVPKTINFO, 1)
s.setsockopt(socket.SOL_SOCKET, socket.SO_BINDTODEVICE, b'veth1')
s.bind(('', 2000, 0, 0))
mreq = struct.pack('@16sI', socket.inet_pton(socket.AF_INET6, 'ff02::1:2'),
                   socket.if_nametoindex('veth1'))
s.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_JOIN_GROUP, mreq)

while True:
    data, cmsg_list, flags, source = s.recvmsg(4096, 4096)
    for level, type, cmsg_data in cmsg_list:
        if level == socket.IPPROTO_IPV6 and type == socket.IPV6_PKTINFO:
            dest_address, dest_scope = struct.unpack('@16sI', cmsg_data)
            dest_address = socket.inet_ntop(socket.AF_INET6, dest_address)
            dest_scope = socket.if_indextoname(dest_scope)
            print("PKTINFO destination {} {}".format(dest_address, dest_scope))

    source_address, source_port, source_flow, source_scope = source
    source_scope = socket.if_indextoname(source_scope)
    print("name source {} {}".format(source_address, source_scope))

And the sender:
import socket

s = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
dest = ('ff02::1:2', 2000, 0, socket.if_nametoindex('veth2'))
s.sendto(b'foo', dest)

The receiver will print:
PKTINFO destination ff02::1:2 veth1
name source fe80::188f:5aff:fe85:3cc0 myvrf

Please notice that the receiver gets the right address, the one 
associated with veth2, but the scope identifies the VRF master.
[Dnsmasq-discuss] DHCPv6 doesn't work on Linux interfaces enslaved to a VRF

2023-10-07 Thread Luci Stanescu via Dnsmasq-discuss
Hi,

I've discovered that DHCPv6 doesn't work on Linux interfaces enslaved to a VRF. 
Now, I believe this to be a bug in the kernel and I've reported it, but in case 
you'd like to implement a workaround in dnsmasq, this is quite trivial, as I'll 
explain in a bit.

The issue is that when a datagram is received from an interface enslaved to a 
VRF device, the sin6_scope_id of the msg_name field returned from recvmsg() 
points to the interface index of the VRF device, instead of the enslaved 
device. Unfortunately, this is completely useless when the source address is a 
link-local address, as a subsequent sendmsg() which specifies that scope will 
fail with ENETUNREACH, as expected, considering the interface index of the 
enslaved device would have to be specified as the scope (there can of course be 
multiple interfaces enslaved to a single VRF device).

With DHCPv6, a DHCPSOLICIT is received from a link-local address and a 
DHCPADVERTISE is sent back to that source address, with the scope taken 
from the msg_name field returned by recvmsg(). I've 
debugged this using strace, as dnsmasq doesn't print any errors when the send 
fails. Here is the recvmsg() call:

recvmsg(6, {msg_name={sa_family=AF_INET6, sin6_port=htons(546), 
sin6_flowinfo=htonl(0), inet_pton(AF_INET6, "fe80::216:3eff:fed0:4e7d", 
&sin6_addr), sin6_scope_id=if_nametoindex("myvrf")}, msg_namelen=28, 
msg_iov=[{iov_base="\1\203\273\n\0\1\0\16\0\1\0\1,\262\320k\0\26>\320N}\0\6\0\10\0\27\0\30\0'"...,
 iov_len=548}], msg_iovlen=1, msg_control=[{cmsg_len=36, cmsg_level=SOL_IPV6, 
cmsg_type=0x32}], msg_controllen=40, msg_flags=0}, MSG_PEEK|MSG_TRUNC) = 56

and the sending of the response later on:

sendto(6, 
"\2\203\273\n\0\1\0\16\0\1\0\1,\262\320k\0\26>\320N}\0\2\0\16\0\1\0\1,\262"..., 
114, 0, {sa_family=AF_INET6, sin6_port=htons(546), sin6_flowinfo=htonl(0), 
inet_pton(AF_INET6, "fe80::216:3eff:fed0:4e7d", &sin6_addr), 
sin6_scope_id=if_nametoindex("myvrf")}, 28) = -1 ENETUNREACH (Network is 
unreachable)

Please notice that the scope is the index of the VRF master device, so the 
sendto() call is certain to fail.

When reporting the issue as a kernel bug, I reproduced the issue using local 
communication with unicast and a couple of simple Python scripts. Here's 
a reproduction using local communication, but with multicast, to make it closer 
to home:

First, set up a VRF device and a veth pair, with one end enslaved to the VRF 
master (on which we'll be receiving datagrams) and the other end used to send 
datagrams.

ip link add myvrf type vrf table 42
ip link set myvrf up
ip link add veth1 type veth peer name veth2
ip link set veth1 master myvrf up
ip link set veth2 up

# ip link sh dev myvrf
110: myvrf:  mtu 65575 qdisc noqueue state UP mode 
DEFAULT group default qlen 1000
link/ether da:ca:c9:2b:6e:02 brd ff:ff:ff:ff:ff:ff
# ip addr sh dev veth1
112: veth1@veth2:  mtu 1500 qdisc noqueue 
master myvrf state UP group default qlen 1000
link/ether 32:63:cf:f5:08:35 brd ff:ff:ff:ff:ff:ff
inet6 fe80::3063:cfff:fef5:835/64 scope link
   valid_lft forever preferred_lft forever
# ip addr sh dev veth2
111: veth2@veth1:  mtu 1500 qdisc noqueue 
state UP group default qlen 1000
link/ether 1a:8f:5a:85:3c:c0 brd ff:ff:ff:ff:ff:ff
inet6 fe80::188f:5aff:fe85:3cc0/64 scope link
   valid_lft forever preferred_lft forever

The receiver:
import socket
import struct

s = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
s.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_RECVPKTINFO, 1)
s.setsockopt(socket.SOL_SOCKET, socket.SO_BINDTODEVICE, b'veth1')
s.bind(('', 2000, 0, 0))
mreq = struct.pack('@16sI', socket.inet_pton(socket.AF_INET6, 'ff02::1:2'), 
socket.if_nametoindex('veth1'))
s.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_JOIN_GROUP, mreq)

while True:
    data, cmsg_list, flags, source = s.recvmsg(4096, 4096)
    for level, type, cmsg_data in cmsg_list:
        if level == socket.IPPROTO_IPV6 and type == socket.IPV6_PKTINFO:
            dest_address, dest_scope = struct.unpack('@16sI', cmsg_data)
            dest_address = socket.inet_ntop(socket.AF_INET6, dest_address)
            dest_scope = socket.if_indextoname(dest_scope)
            print("PKTINFO destination {} {}".format(dest_address, dest_scope))
    source_address, source_port, source_flow, source_scope = source
    source_scope = socket.if_indextoname(source_scope)
    print("name source {} {}".format(source_address, source_scope))

And the sender:
import socket

s = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
dest = ('ff02::1:2', 2000, 0, socket.if_nametoindex('veth2'))
s.sendto(b'foo', dest)

The receiver will print:
PKTINFO destination ff02::1:2 veth1
name source fe80::188f:5aff:fe85:3cc0 myvrf

Please notice that the receiver gets the right address, the one associated 
with veth2, but the scope identifies the VRF master. However, I've noticed that 
the scope in PKTINFO actually identifies the index of the enslaved interface on 
which the datagram was received (veth1 here).