Fernando Gont <[email protected]>
24 January 2015 08:19
Hi, Ray,

Thanks so much for your usual great feedback! -- Please find my comments
in-line....

On 01/23/2015 07:54 AM, Ray Hunter wrote:
On the whole, this draft is already in a very good state.

Thanks!



I have some additional input for this draft.

I think it's also useful to distinguish between on-link and on-net attacks.

New reconnaissance method:

3.1.2.1 Snooping of DHCPv6 relay packets

DHCPv6 deployment in enterprise networks often relies on one or more
centrally located DHCPv6 servers that serve the entire enterprise from
one or more highly-available sites.

Although the end host interacts with local on-link routers, these routers
may forward the DHCPv6 requests over the WAN to the centrally located
DHCPv6 server(s) using DHCPv6 relay
[https://tools.ietf.org/html/rfc3315#section-7].

Dumb question: Doesn't this imply that if you have e.g. issues with the
WAN link you cannot even bootstrap your local nodes?

That is correct. However, commercial pressure means that many sites do not have any local IT presence, and may not have any servers on site at all. IPAM systems are already widely used, but are expensive per appliance, and so are centralised for everything except the largest business-critical sites. On the other hand, WAN links have become much cheaper and more reliable over the years, so "zero servers on site" has become a common deployment mode in my experience. People can't work without a WAN link anyway, as the biggest corporate apps are concentrated in data centres.

The relay packets are not encrypted and thus may be vulnerable to on-net
snooping, potentially allowing an attacker to leverage existing on-net
access to gain additional knowledge about remote LANs that they do not
yet have access to. Obtaining access to a DHCPv6 server would also be a
very high value target, and such servers should be tightly secured.

I'm unsure whether to include this as 3.1.2.*, or rather include a more
general top-level section on snooping, which mentions DHCPv6 as a
particular case -- e.g., ND snooping is another interesting case.

Thoughts?
I think a general section on passive monitoring/snooping is a good way to go.

It could include snooping of many protocols that can be leveraged:
HTTP to discover proxies or servers, DNS to discover more hosts and DNS servers, DHCPv6 relay to discover more end nodes, and ND to discover local neighbours.


Some potential additional text for 3.2.1
3.2.1.  Reducing the subnet ID search space
There are a number of documents available online, e.g.
[http://www.ietf.org/rfc/rfc5375.txt
http://www.ripe.net/ripe/docs/ripe-552], that provide recommendations
for allocation of address space in the /3 to /64 range. These address
various operational considerations, including: RIR assignment policy,
ability to delegate reverse DNS zones to different servers, ability to
aggregate routes efficiently, address space preservation, ability to
delegate address assignment within the organization, ability to
allocate new sites/prefixes to existing entities without updating ACLs,
and ability to de-aggregate and advertise sub-spaces via various AS
interfaces.

Some of the allocation databases may even be publicly searchable.
Allocation schemes may also be algorithmic, e.g. simple incremental
allocation from 1 upwards, or sparse allocation over a fixed bit range
[http://www.ripe.net/ripe/docs/ripe-343#3].

The net effect of these administrative policies is that the address
space from 2000::/3 down to the /64 prefix assigned to a LAN may be
highly structured, and allocations of individual elements within this
structure may be predictable once other elements are known. For example,
if an attacker assumes or knows that the address space contains a region
of 6 bits that is sparsely assigned, then region 1 may be 1, region 2
may be 32, and region 3 may be 16. This assumption may be easily tested
with just a few probes (e.g. by waiting for an ICMP unreachable reply
from an upstream router).

Thus the amount of entropy for this portion of the address search space
from /3 to /64 may be vastly reduced compared to uniformly random /64
allocations.

I like the text you've suggested. My only observation would be: not sure
why you mention /3... Unless the attacker means to scan the whole IPv6
Internet (unlikely), he'll probably start with a /32 or /48...

Sure. I'd also start with the local LAN and local site, but the scanner may also want to perform off-LAN/off-site reconnaissance once that is exhausted.

The starting assumption I'm making is that the scanning device / code has been dropped onto a network without any further knowledge, except for its own address and that of its default router (learned via RA). Of course, if more information is available then the scanning process can be seeded with priority targets.

The premise of IPv6 being un-scannable seems to be based on the fact that 2^128 is a big number.

However, there are some major holes in this premise:
1) the address space has significant structure
2) the address space is administered by people
3) brute-force scans do not have to be exhaustive or run independently of other attacks; they can be successful if they merely provide additional hints/targets to other attack vectors, which can in turn re-seed the scan.

So, looking at the entire 128 bit space, an attacker can divide and conquer.

The first level of hierarchy in the 128 bit space that can be leveraged is that the IETF has allocated all current unicast address space out of 2000::/3. So all scan engines can immediately limit the target space to the 2000::/3 space for all practical purposes. Hence the /3 I mentioned.

The second assumption is that the /64 boundary is very strong (and has been re-affirmed/explained by the IETF as being important).

So a scanning engine can legitimately assume that there are two independent address spaces to scan: the upper 64 bits [LAN ID] and lower 64 bits [IID].

The third assumption is that CIDR is still deployed. So knowledge about a shorter mask can be re-used for all longer masks.

In IPv4, address space was scarce, and nodes were densely packed, so that when brute-force scanning, a positive response from one node could be leveraged to find surrounding nodes.

This can be leveraged in IPv6 because the exact opposite is true: address space is large and nodes are sparsely allocated. So an uncertain negative response, indicating that a /48 is not routed in the network, can be leveraged to mean that all longer prefixes within it are probably not allocated either. Almost no one runs flat networks, as slots in the routing table are still expensive.

ICMPv6 is a much more ingrained part of IPv6, in that PTB and other messages are required for normal operation of the protocol, so at the moment ICMPv6 information is more valuable than ICMP information was in IPv4.

The fourth assumption that can be leveraged is that address space at the top level is allocated by an RIR.

If a scanning node can either contact an online RIR database, or comes shipped with a copy of the RIR database, the scanning node can learn which prefixes have been allocated to the organisation simply based on its own IPv6 GUA. The total target scanning space in the upper 64 bits can then initially be safely limited to the /32 to /64 range = 32 bits.

Also, an attacker can easily look up the local 2001:DB8::/32 and find any other associated entries, e.g. /32 allocations from other regions, /48 allocations, routing DB entries, etc.

The fifth assumption is that address space is aggregated, or allocated by humans, and humans are lazy.

The address space in the /32 to /64 range is also likely to be allocated on 4 bit nibble boundaries (to allow easy reverse DNS delegation, simple ACLs, route aggregation and de-aggregation, organisational delegation to site/regional IT managers, etc.).

So you now have 32 bits to scan as top priority [from 2001:DB8::/64 to 2001:DB8:FFFF:FFFF::/64] (after the local /64 LAN). That problem can then be tackled by subdividing the space between /32 and /64 into eight 4 bit nibbles.

It's irrelevant to the scanner what these nibbles mean, e.g. whether they are a region, or a product division, or a /48 site allocation.

What is key to the scanner is to be able to perform a binary search on this space and discover differentiated results.

So a Patricia Tree can be used to store results of structured probes, with each branch of the tree keyed on a 4 bit nibble. Such a structure can efficiently store the results of millions of probes and pick out patterns in the /32 to /64 space on nibble boundaries.
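To make that concrete, here is a rough sketch (in Python) of the kind of nibble-keyed trie I have in mind; the class, helper names and outcome labels are purely illustrative, not from any existing tool:

    # Minimal sketch of a nibble-keyed trie for storing probe results.
    # Names and outcome labels are illustrative only.

    class NibbleTrie:
        def __init__(self):
            self.children = {}   # nibble value (0-15) -> NibbleTrie
            self.outcomes = {}   # outcome label -> count

        def insert(self, nibbles, outcome):
            """Record an outcome (e.g. 'no-route', 'hop-limit', 'answered')
            along the path of nibbles identifying the probed prefix."""
            node = self
            for n in nibbles:
                node.outcomes[outcome] = node.outcomes.get(outcome, 0) + 1
                node = node.children.setdefault(n, NibbleTrie())
            node.outcomes[outcome] = node.outcomes.get(outcome, 0) + 1

        def dominant(self):
            """Most common outcome under this node, so patterns at nibble
            boundaries stand out."""
            return max(self.outcomes, key=self.outcomes.get) if self.outcomes else None

    def nibbles_of(subnet_id, width=32):
        """Split the 32 bits between /32 and /64 into 4 bit nibbles."""
        return [(subnet_id >> shift) & 0xF for shift in range(width - 4, -1, -4)]

    # Example: two probes into subnet-IDs under first nibble 0 both drew
    # a 'no route' response from an aggregation router.
    trie = NibbleTrie()
    trie.insert(nibbles_of(0x00010000), "no-route")
    trie.insert(nibbles_of(0x00010001), "no-route")
    print(trie.children[0].dominant())   # -> no-route

Millions of probe results stored this way stay cheap to query, and any nibble position where the dominant outcome changes is a hint of an allocation boundary.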

Using assumptions 3, 4, and 5 together, the attacker can send various probes, e.g. HTTP connect to 2001:DB8:0:1::node_1/32, telnet connect to 2001:DB8:0:1::node_2/32, HTTP connect to 2001:DB8:1:1::node_1/32, with various hop count limits and various prefix lengths.

By examining the results of each probe at each node in the Patricia tree, common patterns at bit/nibble boundaries will emerge: e.g. a Type 3 code 0 (hop limit exceeded) response from a common router will show what address space is allocated behind that router, and that space can be searched either actively or passively using binary prefix length expansion; or an ICMPv6 Type 1 code 0 may be received from regional aggregation routers or site boundary routers. Given how sparsely IPv6 address space is allocated, it's pretty likely that many "no route to destination" responses can be obtained with relatively few probes.
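As a rough illustration of the binary prefix length expansion driven by those responses, here is a sketch; send_probe() is a hypothetical stand-in for whatever probe engine is actually used (HTTP/telnet connects with varying hop limits, as above) and is assumed to report the ICMPv6 type/code it drew, or None if an end host answered:

    # Sketch of binary prefix-length expansion between /32 and /64.
    # send_probe(prefix_bits, length) is a hypothetical callback that
    # returns (icmp_type, icmp_code, responding_router), or None when an
    # end host answered directly.

    def classify(prefix_bits, length, send_probe):
        resp = send_probe(prefix_bits, length)
        if resp is None:
            return "answered"                 # something is alive here
        icmp_type, icmp_code, router = resp
        if icmp_type == 1 and icmp_code == 0:
            return "no-route"                 # prefix probably not allocated
        return "other"                        # e.g. Type 3 hop limit exceeded

    def expand(prefix_bits, length, send_probe, results, max_len=64):
        """Descend only into halves that were not rejected with
        'no route to destination'."""
        verdict = classify(prefix_bits, length, send_probe)
        results[(prefix_bits, length)] = verdict
        if verdict == "no-route" or length >= max_len:
            return                            # prune the whole subtree
        for half in (0, 1):
            expand((prefix_bits << 1) | half, length + 1, send_probe, results)

The surviving (prefix, length) pairs in results approximate the routing prefixes actually in use, which is exactly the next point.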

It will then be possible to reproduce a portion of the routing prefixes used in the network and their prefix length.

Once an attacker sees that a site has a /48 boundary, the scanner can then again probe binary variations of the range /32 to /48 to find other site boundary routers, or the existence of a second aggregation point, e.g. a regional allocation boundary.

This is especially true since the allocations within the bit block will likely be based on one of three methods:
i) linear allocation: 1, 2, 3, ...
ii) sparse allocation on a specific block length, right justified, enumerated as 0, 8, 4, 12, ... for a 4 bit field or 0, 32, 16, 48, ... for a 6 bit field
iii) site/LAN/region 0 reserved for infrastructure services (so that these addresses have long runs of 0:0 that can be easily typed as ::)
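For the sparse case, the enumeration order for an n-bit field is just the bit-reversal of a simple counter; a quick sketch (the field widths are only examples):

    # The sparse (binary-chop) allocation order for an n-bit field is the
    # bit-reversal of 0, 1, 2, 3, ...  Field widths below are just examples.

    def sparse_order(bits):
        for i in range(1 << bits):
            yield int(format(i, f'0{bits}b')[::-1], 2)   # reverse the low bits

    print(list(sparse_order(4))[:4])   # [0, 8, 4, 12]
    print(list(sparse_order(6))[:4])   # [0, 32, 16, 48]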

These assumptions can be easily tested using a relatively small number of probes compared to the address space covered.

For example, the allocation method may be established or guessed as [32 bits from RIR]:[4 bits region][12 bits site ID][16 bits LAN ID]:[64 bit IID] or [32 bits from RIR]:[8 bits region][8 bits site ID][16 bits LAN ID]:[64 bit IID].

Once that structure has been established, the individual ranges within it can be scanned separately, using different techniques and different methods of enumeration.

It now becomes relatively easy to independently enumerate regions [4 or 8 bits], sites within a region [12 or 8 bits], and LAN IDs within a site [16 bits], and to test for their existence.

Once it is confirmed that a particular region doesn't exist (e.g. via an ICMPv6 "no route to destination" response), or that a site doesn't exist, all further scans of longer prefixes are unnecessary.
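To make that field-structured enumeration and pruning concrete, here is a sketch assuming the [4 bits region][12 bits site ID][16 bits LAN ID] layout; region_exists() and site_exists() are hypothetical callbacks wrapping the probe logic sketched earlier (e.g. "did this range draw anything other than no-route?"):

    # Sketch: enumerate a guessed [region][site][LAN] structure in the 32
    # bits between /32 and /64, pruning whole regions/sites as soon as an
    # existence test fails. region_exists()/site_exists() are hypothetical
    # callbacks around the probe logic described above.

    REGION_BITS, SITE_BITS, LAN_BITS = 4, 12, 16

    def candidate_subnet_ids(region_exists, site_exists):
        for region in range(1 << REGION_BITS):
            if not region_exists(region):
                continue                      # skip 2^28 subnet-IDs at once
            for site in range(1 << SITE_BITS):
                if not site_exists(region, site):
                    continue                  # skip 2^16 subnet-IDs at once
                for lan in range(1 << LAN_BITS):
                    yield (region << (SITE_BITS + LAN_BITS)) | (site << LAN_BITS) | lan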

The [64 bit IID] can also be split into [6 hex nibbles manufacturer OUI][4 hex nibbles pad][6 hex nibbles NIC-specific] if stable privacy addresses are not in use. The number of manufacturer OUIs per target company is also usually limited. And clusters do occur in the IID on machines shipped on similar dates.
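As a quick illustration of that split, assuming MAC-derived (modified EUI-64) IIDs, the candidates for one known OUI can be enumerated as below; the OUI value is just an example:

    # Sketch: candidate modified EUI-64 IIDs for one manufacturer OUI.
    # The MAC's OUI appears with the universal/local bit flipped, followed
    # by the fixed pad FF:FE and the 24-bit NIC-specific part.

    def eui64_iids(oui):
        flipped = oui ^ 0x020000             # flip the universal/local bit
        for nic in range(1 << 24):           # 6 hex nibbles to enumerate
            yield (flipped << 40) | (0xFFFE << 24) | nic

    print(f"{next(eui64_iids(0x001122)):016x}")   # 021122fffe000000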

Also all site LAN routers may be assigned [32 bits from RIR]:[8 bits region][8 bits site ID][16 bits LAN ID]::1, so that they can easily be pinged from a central network management station.


An attacker can also effectively hide their own identity by rate limiting probes, and regularly re-assigning multiple privacy addresses.

If an attacker can send 1024 probes per second (not unreasonable as noise on large networks), a 6 nibble NIC-specific space can be scanned in about 5 hours once the upper 64 bits have been discovered. If there are 5-16 or so common manufacturer OUIs in place, a /64 could be roughly scanned in a matter of days.
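A back-of-the-envelope check of that arithmetic (the probe rate and OUI count are the assumptions stated above):

    # Back-of-the-envelope check of the scan-time estimate above.
    probes_per_second = 1024
    nic_space = 1 << 24                            # 6 hex nibbles = 24 bits
    hours_per_oui = nic_space / probes_per_second / 3600
    print(f"{hours_per_oui:.1f} hours per OUI")              # ~4.6 hours
    print(f"{16 * hours_per_oui / 24:.1f} days for 16 OUIs") # ~3.0 days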

This is orders of magnitude different to a full /128 space without structure, and is certainly feasible IMHO.

12. Obtaining Network Information with traceroute6

As well as using traceroute6 as a source of information, if an
organization allows ICMP unreachable messages from routers, an on-net
attacker could probe the subnet search space to gain knowledge of the
network structure, and thus the address assignment policy. For example,
if a large number of traceroutes, or indeed any other connection probe,
consistently generate a response with an ICMP unreachable Type 1 code 0
"no route to destination", all originating from a common router on the
path, this could indicate that the router is either a boundary router,
or a router that performs route aggregation. This then gives a hint
of how the address space is structured, which helps reduce the subnet ID
search space.

I'm not sure I followed the part "This then gives a hint...". Would you
mind elaborating a bit?
see above

11. Gleaning Information from switch MAC tables and other equipment
using SNMP

If the underlying infrastructure is not properly secured, an attacker
can use knowledge gained from the switch TCAM forwarding table to learn
network structure, as well as MAC addresses in use in the network, which
can in many cases be mapped back to IPv6 addresses and machines.
Obviously SNMP and other management access should be secured.

This makes sense. Now, since SNMP is also mentioned for the Neighbor
Cache, I wonder how to include this info. -- e.g., keep the document "as
is" and just add a top-level section entitled "Gleaning Information from
network devices using SNMP" and have that section cover  switch TCAM
table, Neighbor Cache, routing table and others?

Thanks!
Agreed
Best regards,


--
Regards,
RayH

_______________________________________________
OPSEC mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/opsec
