Re: Question about IPAM tools for v6

2014-01-31 Thread Alexandru Petrescu

On 31/01/2014 18:13, Fernando Gont wrote:

Alex,

On 01/31/2014 01:47 PM, Alexandru Petrescu wrote:

It's as straightforward as this: whenever you're coding something,
enforce limits. And set it to a sane default. And allow the admin to
override it when necessary.


I tend to agree, but I think you're talking about a different kind of limit.
This kind of limit, to avoid memory overflow and thrashing, is not the same
as one to protect against security attacks.


What's the difference between the two? -- intention?


Mostly intention, yes, but there are some differences.

For example, if we talk limits of data structures then we talk mostly
implementations on the end nodes, the Hosts.


Enforce, say, 16K, 32K, or 64K. And document it.


Well, it would be strange to enforce a 16K limit on a sensor which only 
has 4K of memory.  Enforcing that limit already means writing new code 
to enforce limits (ifs and the like are the most cycle-consuming).


On another hand, the router which connects to that sensor may very well 
need a higher limit.


And there's only one stack.

I think this is the reason why it would be hard to come up with such a 
limit.



For ND, if one puts a limit on the ND cache size on the end Host, one
would need a different kind of limit for same ND cache size but on the
Router.  The numbers would not be the same.


64K probably accommodates both, and brings a minimum level of sanity.


Depends on whether it's Host or Router... sensor or server, etc.


The protocol limit set at 64 (subnet size) is not something to prevent
attacks.  It is something that allows new attacks.


What actually allows attacks are bad programming habits.


We're too tempted to put that on the back of the programmer.


It's the programmer's fault not to think about limits. And it's our
fault (IETF) that we do not make the programmer's life easy -- he shouldn't
have to figure out what a sane limit would be.


:-)


But a
kernel programmer (where ND sits) can hardly be supposed to have bad
habits.


The infamous "blue screen of death" would suggest otherwise (and this is
just *one* example)...


The fault for the blue-screen-of-death is put on the _other_ programmers 
(namely the third-party device-driver programmers). :-) Hell is the others.



If one looks at the IP stack in the kernel one notices that
people are very conservative and very strict about what code gets there.


.. in many cases, after... what? 10? 20? 30 years?



  These are not the kinds of people to blame for stupid errors such as
forgetting to set some limits.


Who else?

And no, I don't just blame the programmer. FWIW, it's a shame that some
see the actual implementation of an idea as less important stuff. A good
spec goes hand in hand with good code.


I agree.


You cannot be something that you cannot handle. I can pretend to be
Superman... but if after jumping over the window somehow I don't start
flying, the thing ain't working and won't be funny when I hit the
floor.

Same thing here: Don't pretend to be able to handle a /32 when you can't.
In practice, you won't be able to handle 2**32 in the NC.


I'd say it depends on the computer?  The memory size could handle it, I believe.


References, please :-)


Well, I am thinking of a simple computer with RAM, virtual memory and 
terabyte disks.  That would fit a 2^64-entry NC well, no?



Take the /64 as "Addresses could be spread all over this /64" rather
than "you must be able to handle 2**64 addresses on your network".


It is tempting.  I would like to take it so.

But what about the holes?  Will the holes be subject to new attacks?
Will the holes represent address waste?


"Unused address space". In the same way that the Earth's surface is not
currently accommodating as many many as it could. But that doesn't meant
that it should, or that you'd like it to.


Hmm, intriguing... I could talk about the Earth and its resources, the 
risks, how long we must stay here together, the rate of population 
growth, and so on.


But this 'unused address space' is something one can't simply just live 
with.


Without much advertising, there are some predictions talking of 80 billion 
devices arriving soon.  Something like the QR codes on objects, etc. 
These'd be connected directly or through intermediaries.  If one 
compares these figures one realizes that such holes may not be welcome. 
They'd be barriers to deployment.



If we come up with a method to significantly distribute these holes such
that we, the inventors, understand it, won't another attacker
understand it too, and attack it?


Play both sides. And attack yourself. scan6
(http://www.si6networks.com/tools/ipv6toolkit) exploits current
addressing techniques. draft-ietf-6man-stable-privacy-addresses is meant
to defeat it.

Maybe one problem is the usual disconnect between the two: Folks
building stuff as if nothing wrong is ever going to happen. And folks
breaking stuff without ever thinking about how things could be made
better.  -- But not much of a surprise: pointing out weaknesses usually
hurts egos, and fixing stuff doesn't get as much credit as breaking it
in the security world.

Re: Question about IPAM tools for v6

2014-01-31 Thread Alexandru Petrescu
Messages quoted for reference (if nothing follows, end of message): On 
31/01/2014 16:59, Fernando Gont wrote:

On 01/31/2014 12:26 PM, Alexandru Petrescu wrote:

And it's not just the NC. There are implementations that do not limit
the number of addresses they configure, that do not limit the number of
entries in the routing table, etc.

There are some different needs with this limitation.

It's good to rate-limit a protocol exchange (to avoid DDoS), it's good
to limit the size of the buffers (to avoid buffer overflows), but it may
be arguable whether to limit the dynamic sizes of the instantiated data
structures, especially when facing requirements of scalability - they'd
rather be virtually infinite, like in virtual memory.

This means that the underlying hard limit will hit you in the back.

You should enforce limits that at the very least keep the system usable.

At the end of the day, at the very least you want to be able to ssh to it.


I agree.  Or I'd say even less, such as rsh or telnet or SLIP into it; 
because ssh is a rather heavy exchange.



This is not a problem of implementation, it is a problem of unspoken
assumption that the subnet prefix is always 64.

Do you know what they say about assumptions? -- "It's the mother of all f* ups".

It's as straightforward as this: whenever you're coding something,
enforce limits. And set it to a sane default. And allow the admin to
override it when necessary.


I tend to agree, but I think you're talking about a different kind of limit.  
This kind of limit, to avoid memory overflow and thrashing, is not the same 
as one to protect against security attacks.


The protocol limit set at 64 (subnet size) is not something to prevent 
attacks.  It is something that allows new attacks.


An implementation that will restrict the size of an instantiation of a 
data structure (say, limit its size to a maximum of 2^32 nodes) will be 
a clear limit to something else: subnets that want to be of that 
particular 2^32 size.


Also, consider that people who develop IP stacks don't necessarily think 
Ethernet; they think of many other link layers.  Once that stack gets into 
an OS as widespread as Linux, there is little control over which link 
layer the IP stack will run on.  Actually, there they want no limit at all.


It is not as simple as saying it is the programmer's fault.


It is unspoken because
it is hardly required (almost not at all) by RFCs.  Similar to assuming
that the router on the link is always .1.

That's about sloppy programming.

Train yourself to do the right thing. I do. When I code, I always
enforce limits. If anything, just pick one, and then tune it.


I am trained, thank you.

Alex


Speaking of scalability - is there any link layer (e.g. Ethernet) that
supports 2^64 nodes on the same link?  Any such link deployed? I doubt it.

Scan Google's IPv6 address space, and you'll find one. (scan6 of
 is your friend :-) )

Cheers,





Re: Question about IPAM tools for v6

2014-01-31 Thread Alexandru Petrescu

On 31/01/2014 17:35, Fernando Gont wrote:

On 01/31/2014 01:12 PM, Alexandru Petrescu wrote:



This is not a problem of implementation, it is a problem of unspoken
assumption that the subnet prefix is always 64.

Do you know what they say about assumptions? -- "It's the mother of all f*
ups".

It's as straightforward as this: whenever you're coding something,
enforce limits. And set it to a sane default. And allow the admin to
override it when necessary.


I tend to agree, but I think you're talking about a different kind of limit.
This kind of limit, to avoid memory overflow and thrashing, is not the same
as one to protect against security attacks.


What's the difference between the two? -- intention?


Mostly intention, yes, but there are some differences.

For example, if we talk limits of data structures then we talk mostly 
implementations on the end nodes, the Hosts.


But if we talk limits of protocol, then we may talk implementations on 
the intermediary routers.


For ND, if one puts a limit on the ND cache size on the end Host, one 
would need a different kind of limit for same ND cache size but on the 
Router.  The numbers would not be the same.



The protocol limit set at 64 (subnet size) is not something to prevent
attacks.  It is something that allows new attacks.


What actually allows attacks are bad programming habits.


We're too tempted to put that on the back of the programmer.  But a 
kernel programmer (where ND sits) can hardly be supposed to have bad 
habits.  If one looks at the IP stack in the kernel, one notices that 
people are very conservative and very strict about what code gets there. 
These are not the kinds of people to blame for stupid errors such as 
forgetting to set some limits.



The /64 has exposed bad programming habits.. that's it.




An implementation that will restrict the size of an instantiation of a
data structure (say, limit its size to a maximum of 2^32 nodes) will be
a clear limit to something else: subnets that want to be of that
particular 2^32 size.


You cannot be something that you cannot handle. I can pretend to be
Superman... but if after jumping over the window somehow I don't start
flying, the thing ain't working and won't be funny when I hit the floor.

Same thing here: Don't pretend to be able to handle a /32 when you can't.
In practice, you won't be able to handle 2**32 in the NC.


I'd say it depends on the computer?  The memory size could handle it, I believe.

What is hard to imagine is 2^32 computers sitting together on 
the same Ethernet link.



Take the /64 as "Addresses could be spread all over this /64" rather
than "you must be able to handle 2**64 addresses on your network".


It is tempting.  I would like to take it so.

But what about the holes?  Will the holes be subject to new attacks? 
Will the holes represent address waste?


If we come up with a method to significantly distribute these holes such 
that we, the inventors, understand it, won't another attacker 
understand it too, and attack it?



Also, consider that people who develop IP stacks don't necessarily think
Ethernet; they think of many other link layers.  Once that stack gets into
an OS as widespread as Linux, there is little control over which link
layer the IP stack will run on.  Actually, there they want no limit at all.

It is not as simple as saying it is the programmer's fault.


Not enforcing limits is a programmer's fault. Most security exploits
rely on that.


I tend to agree.


It is unspoken because
it is hardly required (almost not at all) by RFCs.  Similar to assuming
that the router on the link is always .1.

That's about sloppy programming.

Train yourself to do the right thing. I do. When I code, I always
enforce limits. If anything, just pick one, and then tune it.


I am trained, thank you.


What I meant was: one should train oneself such that you don't really
need to think about it. Enforcing limits is one of those. The first thing
your brain must be trained to do is to check, before you allocate a data
structure, how big the thing is and how big it's supposed to be.

And it's not just limits. e.g., how many *security* tools need superuser
privileges, but will never give up such superuser privileges once they
are not needed anymore?

"Know thyself" (http://en.wikipedia.org/wiki/Know_thyself). I know my
code is not going to be as good as it should. So I better limit the
damage that it can cause: enforce limits, and release unnecessary
privileges. And fail on the safe side. You could see it as
"compartmentalization", too.


Interesting.

Alex









Re: Question about IPAM tools for v6

2014-01-31 Thread Alexandru Petrescu
Messages quoted for reference (if nothing follows, end of message): On 
31/01/2014 16:59, Fernando Gont wrote:

On 01/31/2014 12:26 PM, Alexandru Petrescu wrote:

And it's not just the NC. There are implementations that do not limit
the number of addresses they configure, that do not limit the number of
entries in the routing table, etc.

There are some different needs with this limitation.

It's good to rate-limit a protocol exchange (to avoid DDoS), it's good
to limit the size of the buffers (to avoid buffer overflows), but it may
be arguable whether to limit the dynamic sizes of the instantiated data
structures, especially when facing requirements of scalability - they'd
rather be virtually infinite, like in virtual memory.

This means that the underlying hard limit will hit you in the back.

You should enforce limits that at the very least keep the system usable.

At the end of the day, at the very least you want to be able to ssh to it.




This is not a problem of implementation, it is a problem of unspoken
assumption that the subnet prefix is always 64.

Do you know what they say about assumptions? -- "It's the mother of all f* ups".

It's as straightforward as this: whenever you're coding something,
enforce limits. And set it to a sane default. And allow the admin to
override it when necessary.



It is unspoken because
it is hardly required (almost not at all) by RFCs.  Similar to assuming
that the router on the link is always .1.

That's about sloppy programming.

Train yourself to do the right thing. I do. When I code, I always
enforce limits. If anything, just pick one, and then tune it.




Speaking of scalability - is there any link layer (e.g. Ethernet) that
supports 2^64 nodes on the same link?  Any such link deployed? I doubt it.

Scan Google's IPv6 address space, and you'll find one. (scan6 of
 is your friend :-) )


Do you think they have, somewhere, a single link on which 2^64 nodes 
connect simultaneously?  (2^64 is a relatively large number, larger than 
the current Internet).


Or is it some fake reply?

Alex




Re: Question about IPAM tools for v6

2014-01-31 Thread Alexandru Petrescu
Messages quoted for reference (if nothing follows, end of message): On 
31/01/2014 16:13, Fernando Gont wrote:

On 01/31/2014 10:59 AM, Aurélien wrote:

I personally verified that this type of attack works with at least one
major firewall vendor, provided you know/guess reasonably well the
network behind it. (I'm not implying that this is a widespread attack type).

I also found this paper: http://inconcepts.biz/~jsw/IPv6_NDP_Exhaustion.pdf

I'm looking for other information sources, do you know other papers
dealing with this problem ? Why do you think this is FUD ?

The attack does work. But the reason it works is because the
implementations are sloppy in this respect: they don't enforce limits on
the size of the data structures they manage.

The IPv4 subnet size enforces an artificial limit on things such as the
ARP cache. A /64 removes that artificial limit. However, you shouldn't
be relying on such a limit. You should enforce a real one in the
implementation itself.

And it's not just the NC. There are implementations that do not limit
the number of addresses they configure, that do not limit the number of
entries in the routing table, etc.


There are some different needs with this limitation.

It's good to rate-limit a protocol exchange (to avoid DDoS), it's good 
to limit the size of the buffers (to avoid buffer overflows), but it may 
be arguable whether to limit the dynamic sizes of the instantiated data 
structures, especially when facing requirements of scalability - they'd 
rather be virtually infinite, like in virtual memory.


This is not a problem of implementation, it is a problem of unspoken 
assumption that the subnet prefix is always 64.  It is unspoken because 
it is hardly required (almost not at all) by RFCs.  Similar to assuming 
that the router on the link is always .1.


Speaking of scalability - is there any link layer (e.g. Ethernet) that 
supports 2^64 nodes on the same link?  Any such link deployed? I doubt it.


I suppose the largest number of nodes on a single link may reach 
somewhere into the thousands, but not 2^64.


The limitation on the number of nodes on a single link comes not only 
from the access-contention algorithms, but also from the implementation of 
the core of the highest-performance switches; these are limited in terms 
of bandwidth.  With these figures in mind, one realizes that it is 
hardly reasonable to imagine subnets of 2^64 nodes.


Alex



If you want to play, please take a look at the ipv6toolkit:
. On the same page, you'll
also find a PDF that discusses ND attacks, and that tells you how to
reproduce the attack with the toolkit.

Besides, each manual page of the toolkit (ra6(1), na6(1), etc.) has an
EXAMPLES section that provides popular ways to run each tool.

Thanks!

Cheers,





smime.p7s
Description: Signature cryptographique S/MIME


Re: Question about IPAM tools for v6

2014-01-31 Thread Alexandru Petrescu
Messages quoted for reference (if nothing follows, end of message): On 
31/01/2014 14:07, Ole Troan wrote:

Consensus around here is that we support DHCPv6 for non-/64 subnets
(particularly in the context of Prefix Delegation), but the immediate
next question is "Why would you need that?"

/64 netmask opens up nd cache exhaustion as a DoS vector.

FUD.


Sigh... as usual with brief statements it's hard to see clearly.

I think ND attacks may be eased by an always-same prefix length (64).

Some attacks may use unsolicited NAs to deny others the ability to configure a 
particular address.  That's easier if the attacker can assume the prefix 
length is, as usual, 64.


Additionally, an always-64 prefix length gives a _scanning_ perspective 
to the security dimension, as per Section 2.2, "Target Address Space for 
Network Scanning", of RFC 5157.


As a side note, security is not the only reason why people would like to 
configure prefixes longer than 64 on some subnets... some of the most 
obvious being the address exhaustion at the very edge.


Alex




cheers,
Ole





Question on DHCPv6 address assignment

2014-01-31 Thread Fernando Gont
Folks,

I'm wondering about the following two aspects of different DHCPv6
implementations out there:

1) What's the pattern with which addresses are generated/assigned? Are
they sequential (fc00::1, fc00::2, etc.)?  Random? Something else?

2) What about their stability? Is there any intent/mechanism for them to
be as "stable" as possible? Or is it usual for hosts to get a new
address for each lease?
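
To make the two patterns in question 1 (and the stability in question 2)
concrete, here is a toy sketch -- not how any particular DHCPv6 server is
implemented, and the prefix, secret, and pool handling are assumptions for
illustration only:

    # Toy illustration of two DHCPv6-style assignment patterns (not real server code).
    import hashlib, ipaddress

    POOL = ipaddress.IPv6Network("2001:db8:1::/64")   # assumed example pool

    def sequential(counter):
        # fc00::1-style: next address in order; trivially predictable.
        return POOL.network_address + counter

    def stable_hashed(duid, secret=b"server-secret"):
        # Same client DUID -> same address across leases, but not sequential.
        h = hashlib.sha256(secret + duid).digest()
        return POOL.network_address + int.from_bytes(h[:8], "big")

    print(sequential(1), sequential(2))     # consecutive addresses
    print(stable_hashed(b"duid-client-A"))  # stable but opaque address

Real implementations presumably sit somewhere between these two extremes,
which is what the questions above are trying to find out.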

P.S.: I understand this is likely to vary from one implementation to
another... so please describe which implementation/version you're
referring to.

Thanks!

Best regards,
-- 
Fernando Gont
e-mail: ferna...@gont.com.ar || fg...@si6networks.com
PGP Fingerprint: 7809 84F5 322E 45C7 F1C9 3945 96EE A9EF D076 FFF1





RE: Question about IPAM tools for v6

2014-01-31 Thread Templin, Fred L
Hi Erik,

> -Original Message-
> From: Erik Kline [mailto:e...@google.com]
> Sent: Friday, January 31, 2014 10:46 AM
> To: Templin, Fred L
> Cc: Nick Hilliard; Cricket Liu; ipv6-ops@lists.cluenet.de; 
> draft-carpenter-6man-wh...@tools.ietf.org;
> Mark Boolootian
> Subject: Re: Question about IPAM tools for v6
> 
> On 31 January 2014 10:22, Templin, Fred L  wrote:
> >> Not if you route a /64 to each host (the way 3GPP/LTE does for mobiles).  
> >> :-)
> >
> > A /64 for each mobile is what I would expect. It is then up to the
> > mobile to manage the /64 responsibly by either black-holing the
> > portions of the /64 it is not using or by assigning the /64 to a
> > link other than the service provider wireless access link (and
> > then managing the NC appropriately).
> 
> 
> 
> Yep.  My point, though, was that we can do the same kind of thing in
> the datacenter.

Sure, that works for me too.

> 
> 
> In general, I think ND exhaustion is one of those "solve it at Layer
> 3" situations, since we have the bits to do so.
> 
> IPv6 gives us a large enough space to see new problems of scale, and
> sometimes the large enough space can be used to solve these problems
> too, albeit with non-IPv4 thinking.

Right - thanks for clarifying.

Thanks - Fred
fred.l.temp...@boeing.com


RE: Question about IPAM tools for v6

2014-01-31 Thread Templin, Fred L
> Not if you route a /64 to each host (the way 3GPP/LTE does for mobiles).  :-)

A /64 for each mobile is what I would expect. It is then up to the
mobile to manage the /64 responsibly by either black-holing the
portions of the /64 it is not using or by assigning the /64 to a
link other than the service provider wireless access link (and
then managing the NC appropriately).

Thanks - Fred
fred.l.temp...@boeing.com


RE: SI6 Networks' IPv6 Toolkit v1.5.2 released!

2014-01-31 Thread Templin, Fred L
Hi Fernando,

I don't know if you are looking to add to your toolkit from outside
sources, but Sascha Hlusiak has created a tool called 'isatapd' that
sends RS messages to an ISATAP router and processes RA messages that
come back:

http://www.saschahlusiak.de/linux/isatap.htm

Does this look like something you might want to add to the toolkit?

Thanks - Fred
fred.l.temp...@boeing.com

> -Original Message-
> From: ipv6-ops-bounces+fred.l.templin=boeing@lists.cluenet.de 
> [mailto:ipv6-ops-
> bounces+fred.l.templin=boeing@lists.cluenet.de] On Behalf Of Fernando Gont
> Sent: Friday, January 31, 2014 8:03 AM
> To: ipv6-ops@lists.cluenet.de
> Subject: SI6 Networks' IPv6 Toolkit v1.5.2 released!
> 
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
> 
> Folks,
> 
> [I had forgotten to send a heads-up to this list -- hopefully some of
> you will find this useful]
> 
> This is not meant to be a "big release", but it does fix some issues
> present in previous versions, and adds some new features (please find
> the changelog below).
> 
> So if you're using the ipv6toolkit, please upgrade to version 1.5.2.
> 
> Tarballs (plain one, and gpg-signed with my key below) can be found
> at: ).
> 
> * Tools:
> 
> If you want to find out which tools the ipv6toolkit comprises, just
> do a "man 7 ipv6toolkit".
> 
> 
> * Platforms:
> 
> We currently support these platforms: FreeBSD, NetBSD, OpenBSD, Debian
> GNU/Linux, Debian GNU/kfreebsd, Gentoo Linux, Ubuntu, and Mac OS.
> 
> Some of these platforms now feature the ipv6toolkit in their package
> system -- credits for that can be found below. :-)
> 
> 
> = CREDITS ==
> CONTRIBUTORS
> - 
> 
> ** Contributors **
> 
> The following people sent patches that were incorporated into this
> release of the toolkit:
> 
> Octavio Alvarez 
> Alexander Bluhm 
> Alistair Crooks 
> Declan A Rieb   
> 
> 
> ** Package maintainers **
> 
> Availability of packages for different operating systems makes it
> easier for users to install and update the toolkit, and for the toolkit
> to integrate better with the operating systems.
> 
> These are the maintainers for each of the different packages:
> 
>   + Debian
> 
> Octavio Alvarez , sponsored by Luciano Bello
> 
> 
>   + FreeBSD
> 
> Hiroki Sato 
> 
>   + Gentoo Linux
> 
> Robin H. Johnson 
> 
>   + Mac OS
> 
> Declan A Rieb  tests the toolkit on multiple Mac
> OS versions, to ensure clean compiles on such platforms.
> 
>   + NetBSD (pkgsrc framework)
> 
> Alistair Crooks 
> 
>   + OpenBSD
> 
> Alexander Bluhm 
> 
> 
> ** Troubleshooting/Debugging **
> 
> Spotting bugs in networking tools can be tricky, since at times they
> only show up in specific network scenarios.
> 
> The following individuals provided great help in identifying bugs in
> the toolkit (thus leading to fixes and improvements):
> 
> Stephane Bortzmeyer 
> Marc Heuse 
> Erik Muller 
> Declan A Rieb 
> Tim 
> = CREDITS =
> 
> 
> = CHANGELOG =
> SI6 Networks IPv6 Toolkit v1.5.2
> 
>* All: Add support for GNU Debian/kfreebsd
>  The toolkit would not build on GNU Debian/kfreebsd before this
>  release.
> 
>* tcp6: Add support for TCP/IPv6 probes
>  tcp6 can now send TCP/IPv6 packets ("--probe-mode" option), and
>  read the TCP response packets, if any. This can be leveraged for
>  port scans, and miscellaneous measurements.
> 
> SI6 Networks IPv6 Toolkit v1.5.1
>* Fix Mac OS breakage
>  libipv6.h had incorrect definitions for "struct tcp_hdr".
> 
> SI6 Networks IPv6 Toolkit v1.5
> 
>* All: Improved the next-hop determination
>  Since the toolkit employs libpcap (as there is no portable way to
>  forge IPv6 addresses and do other tricks), it was relying on the
>  user specifying a network interface ("-i" was mandatory for all
>  tools) and that routers would send Router Advertisements on the
>  local links. This not only was rather inconvenient for users
>  (specifying a network interface was not warranted), but also meant
>  that in setups where RAs were not available (e.g., manual
>  configuration), the tools would fail. The toolkit now employs
>  routing sockets (in BSDs) or Netlink (in Linux), and only uses
>  "sending RAs" as a fall-back in case of failure (IPv6 not
>  configured on the local host).
> 
>* All: Improved source address selection
>  This is closely related to the previous bullet.
> 
>* All: More code moved to libipv6
>  More and more code was moved to libipv6 and removed from the
>  individual tool source files. As with some of the above, this was
>  painful and time-consuming, but was necessary -- and in the long
>  run it will make code maintenance easier.
> 
>* All: libipv6 used throughout all tools
>  This was rather painful and non-exciting, but necessary.
> 
> 
> SI6 Networks' IPv

Re: Question about IPAM tools for v6

2014-01-31 Thread Fernando Gont
On 01/31/2014 02:30 PM, Alexandru Petrescu wrote:
> I tend to agree, but I think you're talking about a different kind of limit.
> This kind of limit, to avoid memory overflow and thrashing, is not the
> same
> as one to protect against security attacks.

 What's the difference between the two? -- intention?
>>>
>>> Mostly intention, yes, but there are some differences.
>>>
>>> For example, if we talk limits of data structures then we talk mostly
>>> implementations on the end nodes, the Hosts.
>>
>> Enforce, say, 16K, 32K, or 64K. And document it.
> 
> Well, it would be strange to enforce a 16K limit on a sensor which only
> has 4K of memory.

That's why it should be configurable. -- Set a better one at system startup.


> Enforcing that limit already means writing new code
> to enforce limits (ifs and the like are the most cycle-consuming).

That's the minimum pain you should pay for not doing it in the first place.

And yes, writing sloppy code always requires less effort.



> On another hand, the router which connects to that sensor may very well
> need a higher limit.
> 
> And there's only one stack.
> 
> I think this is the reason why it would be hard to come up with such a
> limit.

Make a good default that handles the general case, and make it
configurable so that non-general cases can be addressed.



>>> For ND, if one puts a limit on the ND cache size on the end Host, one
>>> would need a different kind of limit for same ND cache size but on the
>>> Router.  The numbers would not be the same.
>>
>> 64K probably accommodates both, and brings a minimum level of sanity.
> 
> Depends on whether it's Host or Router... sensor or server, etc.

Do you run a host or router that needs more than 64K entries?



>>> But a
>>> kernel programmer (where ND sits) can hardly be supposed to have bad
>>> habits.
>>
>> The infamous "blue screen of death" would suggest otherwise (and this is
>> just *one* example)...
> 
> The fault for the blue-screen-of-death is put on the _other_ programmers
> (namely the third-party device-driver programmers). :-) Hell is the others.

I don't buy that. Win 95 (?) infamously crashed in front of Bill Gates
himself upon connection of a scanner.

And W95 was infamous for one-packet-of-death crashes (the "nukes" from
the '90s).



 You cannot be something that you cannot handle. I can pretend to be
 Superman... but if after jumping over the window somehow I don't start
 flying, the thing ain't working and won't be funny when I hit the
 floor.

 Same thing here: Don't pretend to be able to handle a /32 when you
 can't.
 In practice, you won't be able to handle 2**32 in the NC.
>>>
>>> I'd say it depends on the computer?  The memory size could handle it, I believe.
>>
>> References, please :-)
> 
> Well, I am thinking of a simple computer with RAM, virtual memory and
> terabyte disks.  That would fit a 2^64-entry NC well, no?

Consider yourself lucky if your implementation can gracefully handle,
say, 1M entries.



 Take the /64 as "Addresses could be spread all over this /64" rather
 than "you must be able to handle 2**64 addresses on your network".
>>>
>>> It is tempting.  I would like to take it so.
>>>
>>> But what about the holes?  Will the holes be subject to new attacks?
>>> Will the holes represent address waste?
>>
>> "Unused address space". In the same way that the Earth's surface is not
>> currently accommodating as many many as it could. But that doesn't meant
>> that it should, or that you'd like it to.
> 
> Hmm, intriguing... I could talk about the Earth and its resources, the
> risks, how long we must stay here together, the rate of population
> growth, and so on.
> 
> But this 'unused address space' is something one can't simply just live
> with.
> 
> Without much advertising, there are some predictions talking of 80 billion
> devices arriving soon.  Something like the QR codes on objects, etc.
> These'd be connected directly or through intermediaries.  If one
> compares these figures one realizes that such holes may not be welcome.
> They'd be barriers to deployment.

mm.. what's the problem here?

Cheers,
-- 
Fernando Gont
e-mail: ferna...@gont.com.ar || fg...@si6networks.com
PGP Fingerprint: 7809 84F5 322E 45C7 F1C9 3945 96EE A9EF D076 FFF1





Re: Question about IPAM tools for v6

2014-01-31 Thread Fernando Gont
Alex,

On 01/31/2014 01:47 PM, Alexandru Petrescu wrote:
 It's as straightforward as this: whenever you're coding something,
 enforce limits. And set it to a sane default. And allow the admin to
 override it when necessary.
>>>
>>> I tend to agree, but I think you're talking about a different kind of limit.
>>> This kind of limit, to avoid memory overflow and thrashing, is not the same
>>> as one to protect against security attacks.
>>
>> What's the difference between the two? -- intention?
> 
> Mostly intention, yes, but there are some differences.
> 
> For example, if we talk limits of data structures then we talk mostly
> implementations on the end nodes, the Hosts.

Enforce, say, 16K, 32K, or 64K. And document it.


> For ND, if one puts a limit on the ND cache size on the end Host, one
> would need a different kind of limit for same ND cache size but on the
> Router.  The numbers would not be the same.

64K probably accommodates both, and brings a minimum level of sanity.



>>> The protocol limit set at 64 (subnet size) is not something to prevent
>>> attacks.  It is something that allows new attacks.
>>
>> What actually allows attacks are bad programming habits.
> 
> We're too tempted to put that on the back of the programmer.

It's the programmer's fault not to think about limits. And it's our
fault (IETF) that we do not make the programmer's life easy -- he shouldn't
have to figure out what a sane limit would be.


> But a
> kernel programmer (where ND sits) can hardly be supposed to have bad
> habits.

The infamous "blue screen of death" would suggest otherwise (and this is
just *one* example)...



> If one looks at the IP stack in the kernel one notices that
> people are very conservative and very strict about what code gets there.

.. in many cases, after... what? 10? 20? 30 years?


>  These are not the kinds of people to blame for stupid errors such as
> forgetting to set some limits.

Who else?

And no, I don't just blame the programmer. FWIW, it's a shame that some
see the actual implementation of an idea as less important stuff. A good
spec goes hand in hand with good code.


>> You cannot be something that you cannot handle. I can pretend to be
>> Superman... but if after jumping over the window somehow I don't start
>> flying, the thing ain't working and won't be funny when I hit the
>> floor.
>>
>> Same thing here: Don't pretend to be able to handle a /32 when you can't.
>> In practice, you won't be able to handle 2**32 in the NC.
> 
> I'd say it depends on the computer?  The memory size could handle it, I believe.

References, please :-)



>> Take the /64 as "Addresses could be spread all over this /64" rather
>> than "you must be able to handle 2**64 addresses on your network".
> 
> It is tempting.  I would like to take it so.
> 
> But what about the holes?  Will the holes be subject to new attacks?
> Will the holes represent address waste?

"Unused address space". In the same way that the Earth's surface is not
currently accommodating as many many as it could. But that doesn't meant
that it should, or that you'd like it to.



> If we come up with a method to significantly distribute these holes such
> that we, the inventors, understand it, won't another attacker
> understand it too, and attack it?

Play both sides. And attack yourself. scan6
(http://www.si6networks.com/tools/ipv6toolkit) exploits current
addressing techniques. draft-ietf-6man-stable-privacy-addresses is meant
to defeat it.
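
For reference, the idea in that draft is to derive the interface identifier
from a keyed hash over the prefix, the interface, a network identifier, and a
locally kept secret, so addresses are stable within a network but opaque to an
outside scanner. A rough sketch of that idea follows (the hash choice, field
encoding, and example inputs are assumptions for illustration, not the draft's
exact algorithm):

    # Illustrative only: stable, opaque IIDs in the spirit of
    # draft-ietf-6man-stable-privacy-addresses (encoding details assumed).
    import hmac, hashlib, ipaddress

    def stable_iid(prefix, iface, network_id, dad_counter, secret_key):
        msg = "|".join([prefix, iface, network_id, str(dad_counter)]).encode()
        digest = hmac.new(secret_key, msg, hashlib.sha256).digest()
        return int.from_bytes(digest[:8], "big")      # take 64 bits as the IID

    def stable_address(prefix, iface, network_id, secret_key, dad_counter=0):
        iid = stable_iid(prefix, iface, network_id, dad_counter, secret_key)
        net = ipaddress.IPv6Network(prefix)
        return ipaddress.IPv6Address(int(net.network_address) | iid)

    # Same inputs -> same address; a different prefix or secret -> unrelated address.
    print(stable_address("2001:db8:1::/64", "eth0", "example-net", b"local-secret"))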

Maybe one problem is the usual disconnect between the two: Folks
building stuff as if nothing wrong is ever going to happen. And folks
breaking stuff without ever thinking about how things could be made
better.  -- But not much of a surprise: pointing out weaknesses usually
hurts egos, and fixing stuff doesn't get as much credit as breaking it in
the security world.

Cheers,
-- 
Fernando Gont
SI6 Networks
e-mail: fg...@si6networks.com
PGP Fingerprint:  31C6 D484 63B2 8FB1 E3C4 AE25 0D55 1D4E 7492






Re: Question about IPAM tools for v6

2014-01-31 Thread Fernando Gont
On 01/31/2014 01:02 PM, Alexandru Petrescu wrote:
>>> Speaking of scalability - is there any link layer (e.g. Ethernet) that
>>> supports 2^64 nodes on the same link?  Any such link deployed? I
>>> doubt it.
>> Scan Google's IPv6 address space, and you'll find one. (scan6 of
>>  is your friend :-) )
> 
> Do you think they have, somewhere, a single link on which 2^64 nodes
> connect simultaneously?  (2^64 is a relatively large number, larger than
> the current Internet).
> 
> Or is it some fake reply?

Apparently, it's not fake (although I didn't scan the *whole* space). I
bet there's some trick there, though. -- I don't expect them to be
running 2**64 servers...

With a little bit more of research, it shouldn't be hard to check
whether the responses are legitimate or not (TCP timestamps, IP IDs,
etc. are usually your friends here).

Thanks,
-- 
Fernando Gont
SI6 Networks
e-mail: fg...@si6networks.com
PGP Fingerprint:  31C6 D484 63B2 8FB1 E3C4 AE25 0D55 1D4E 7492






Re: Question about IPAM tools for v6

2014-01-31 Thread Fernando Gont
On 01/31/2014 01:12 PM, Alexandru Petrescu wrote:
> 
>>> This is not a problem of implementation, it is a problem of unspoken
>>> assumption that the subnet prefix is always 64.
>> Do you know what they say about assumptions? -- "It's the mother of all f*
>> ups".
>>
>> It's as straightforward as this: whenever you're coding something,
>> enforce limits. And set it to a sane default. And allow the admin to
>> override it when necessary.
> 
> I tend to agree, but I think you're talking about a different kind of limit. 
> This kind of limit, to avoid memory overflow and thrashing, is not the same
> as one to protect against security attacks.

What's the difference between the two? -- intention?



> The protocol limit set at 64 (subnet size) is not something to prevent
> attacks.  It is something that allows new attacks.

What actually allows attacks are bad programming habits.

The /64 has exposed bad programming habits.. that's it.



> An implementation that will restrict the size of an instantiation of a
> data structure (say, limit its size to a maximum of 2^32 nodes) will be
> a clear limit to something else: subnets that want to be of that
> particular 2^32 size.

You cannot be something that you cannot handle. I can pretend to be
Superman... but if after jumping over the window somehow I don't start
flying, the thing ain't working and won't be funny when I hit the floor.

Same thing here: Don't pretend to be able to handle a /32 when you can't.
In practice, you won't be able to handle 2**32 in the NC.

Take the /64 as "Addresses could be spread all over this /64" rather
than "you must be able to handle 2**64 addresses on your network".



> Also, consider that people who develop IP stacks don't necessarily think
> Ethernet; they think of many other link layers.  Once that stack gets into
> an OS as widespread as Linux, there is little control over which link
> layer the IP stack will run on.  Actually, there they want no limit at all.
> 
> It is not as simple as saying it is the programmer's fault.

Not enforcing limits is a programmer's fault. Most security exploits
rely on that.



>>> It is unspoken because
>>> it is hardly required (almost not at all) by RFCs.  Similar to assuming
>>> that the router on the link is always .1.
>> That's about sloppy programming.
>>
>> Train yourself to do the right thing. I do. When I code, I always
>> enforce limits. If anything, just pick one, and then tune it.
> 
> I am trained, thank you.

What I meant was: one should train oneself such that you don't really
need to think about it. Enforcing limits is one of those. The first thing
your brain must be trained to do is to check, before you allocate a data
structure, how big the thing is and how big it's supposed to be.
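
As a rough illustration of that habit (a sketch only -- not any particular
stack's ND code; the class name, the 64K default, and the eviction choice are
assumptions): check the size before adding, with a sane default and an admin
override:

    # Illustrative only: a bounded cache with a configurable maximum.
    from collections import OrderedDict

    class BoundedCache:
        def __init__(self, max_entries=64 * 1024):   # sane default, admin-tunable
            self.max_entries = max_entries
            self._entries = OrderedDict()            # insertion order, for eviction

        def add(self, key, value):
            if key not in self._entries and len(self._entries) >= self.max_entries:
                # Fail on the safe side: evict the oldest entry instead of
                # growing without bound when under pressure.
                self._entries.popitem(last=False)
            self._entries[key] = value

        def get(self, key):
            return self._entries.get(key)

    nc = BoundedCache(max_entries=16 * 1024)         # e.g. a smaller, sensor-sized override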

And it's not just limits. e.g., how many *security* tools need superuser
privileges, but will never give up such superuser privileges once they
are not needed anymore?

"Know thyself" (http://en.wikipedia.org/wiki/Know_thyself). I know my
code is not going to be as good as it should. So I better limit the
damage that it can cause: enforce limits, and release unnecessary
privileges. And fail on the safe side. You could see it as
"compartmentalization", too.


-- 
Fernando Gont
e-mail: ferna...@gont.com.ar || fg...@si6networks.com
PGP Fingerprint: 7809 84F5 322E 45C7 F1C9 3945 96EE A9EF D076 FFF1





SI6 Networks' IPv6 Toolkit v1.5.2 released!

2014-01-31 Thread Fernando Gont
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Folks,

[I had forgotten to send a heads-up to this list -- hopefully some of
you will find this useful]

This is not meant to be a "big release", but it does fix some issues
present in previous versions, and adds some new features (please find
the changelog below).

So if you're using the ipv6toolkit, please upgrade to version 1.5.2.

Tarballs (plain one, and gpg-signed with my key below) can be found
at: ).

* Tools:

If you want to find out which tools the ipv6toolkit comprises, just
do a "man 7 ipv6toolkit".


* Platforms:

We currently support these platforms: FreeBSD, NetBSD, OpenBSD, Debian
GNU/Linux, Debian GNU/kfreebsd, Gentoo Linux, Ubuntu, and Mac OS.

Some of these platforms now feature the ipv6toolkit in their package
system -- credits for that can be found below. :-)


= CREDITS ==
CONTRIBUTORS
- 

** Contributors **

The following people sent patches that were incorporated into this
release of the toolkit:

Octavio Alvarez 
Alexander Bluhm 
Alistair Crooks 
Declan A Rieb   


** Package maintainers **

Availability of packages for different operating systems makes it
easier for users to install and update the toolkit, and for the toolkit
to integrate better with the operating systems.

These are the maintainers for each of the different packages:

  + Debian

Octavio Alvarez , sponsored by Luciano Bello


  + FreeBSD

Hiroki Sato 

  + Gentoo Linux

Robin H. Johnson 

  + Mac OS

Declan A Rieb  tests the toolkit on multiple Mac
OS versions, to ensure clean compiles on such platforms.

  + NetBSD (pkgsrc framework)

Alistair Crooks 

  + OpenBSD

Alexander Bluhm 


** Troubleshooting/Debugging **

Spotting bugs in networking tools can be tricky, since at times they
only show up in specific network scenarios.

The following individuals provided great help in identifying bugs in
the toolkit (thus leading to fixes and improvements):

Stephane Bortzmeyer 
Marc Heuse 
Erik Muller 
Declan A Rieb 
Tim 
= CREDITS =


= CHANGELOG =
SI6 Networks IPv6 Toolkit v1.5.2

   * All: Add support for GNU Debian/kfreebsd
 The toolkit would not build on GNU Debian/kfreebsd before this
 release.

   * tcp6: Add support for TCP/IPv6 probes
 tcp6 can now send TCP/IPv6 packets ("--probe-mode" option), and
 read the TCP response packets, if any. This can be leveraged for
 port scans, and miscellaneous measurements.

SI6 Networks IPv6 Toolkit v1.5.1
   * Fix Mac OS breakage
 libipv6.h had incorrect definitions for "struct tcp_hdr".

SI6 Networks IPv6 Toolkit v1.5

   * All: Improved the next-hop determination
 Since the toolkit employs libpcap (as there is no portable way to
 forge IPv6 addresses and do other tricks), it was relying on the
 user specifying a network interface ("-i" was mandatory for all
 tools) and that routers would send Router Advertisements on the
 local links. This not only was rather inconvenient for users
 (specifying a network interface was not warranted), but also meant
 that in setups where RAs were not available (e.g., manual
 configuration), the tools would fail. The toolkit now employs
 routing sockets (in BSDs) or Netlink (in Linux), and only uses
 "sending RAs" as a fall-back in case of failure (IPv6 not
 configured on the local host).

   * All: Improved source address selection
 This is closely related to the previous bullet.

   * All: More code moved to libipv6
 More and more code was moved to libipv6 and removed from the
 individual tool source files. As with some of the above, this was
 painful and time-consuming, but was necessary -- and in the long
 run it will make code maintenance easier.

   * All: libipv6 used throughout all tools
 This was rather painful and non-exciting, but necessary.


SI6 Networks' IPv6 Toolkit v1.4.1

   * frag6: Fixed bug that prevented Ethernet header from being filled
 A bug in the code caused Ethernet frames to go on the wire without
 any of their header fields completed.

   * All: Use of library to avoid code replication
 An "libipv6" library was created, such that common functions do
 not need to be replicated for each tool. ni6, ns6, rs6, and tcp6
 now employ such library.


SI6 Networks' IPv6 Toolkit v1.4 release

   * frag6: Fixed the flooding option
 Fixed the fragment size used when employing the flooding option.
 It was previously sending fragment sizes that were not a multiple
 of eight, and hence these fragments were dropped.

   * scan6: Added support for 64-bit encoding of IPv4 addresses
 Option "--tgt-ipv4" was augmented to support both encodings (32 bit
 and 64 bit) of embedded IPv4 addresses.

   * tcp6: Fixed response to Neighbor Solicitations
 tcp6 was not responding to incoming Neighbor Solicitations. H


Re: Question about IPAM tools for v6

2014-01-31 Thread Fernando Gont
On 01/31/2014 12:26 PM, Alexandru Petrescu wrote:
>>
>> And it's not just the NC. There are implementations that do not limit
>> the number of addresses they configure, that do not limit the number of
>> entries in the routing table, etc.
> 
> There are some different needs with this limitation.
> 
> It's good to rate-limit a protocol exchange (to avoid DDoS), it's good
> to limit the size of the buffers (to avoid buffer overflows), but it may
> be arguable whether to limit the dynamic sizes of the instantiated data
> structures, especially when facing requirements of scalability - they'd
> rather be virtually infinite, like in virtual memory.

This means that the underlying hard limit will hit you in the back.

You should enforce limits that at the very least keep the system usable.

At the end of the day, at the very least you want to be able to ssh to it.



> This is not a problem of implementation, it is a problem of unspoken
> assumption that the subnet prefix is always 64.

Do you know what they say about assumptions? -- "It's the mother of all f* ups".

It's as straightforward as this: whenever you're coding something,
enforce limits. And set it to a sane default. And allow the admin to
override it when necessary.


> It is unspoken because
> it is hardly required (almost not at all) by RFCs.  Similar to assuming
> that the router on the link is always .1.

That's about sloppy programming.

Train yourself to do the right thing. I do. When I code, I always
enforce limits. If anything, just pick one, and then tune it.



> Speaking of scalability - is there any link layer (e.g. Ethernet) that
> supports 2^64 nodes on the same link?  Any such link deployed? I doubt it.

Scan Google's IPv6 address space, and you'll find one. (scan6 of
 is your friend :-) )

Cheers,
-- 
Fernando Gont
SI6 Networks
e-mail: fg...@si6networks.com
PGP Fingerprint:  31C6 D484 63B2 8FB1 E3C4 AE25 0D55 1D4E 7492






graphic display of IPv6 table

2014-01-31 Thread Antonio Prado
hello,

anyone aware of a tool like ASpath-tree for IPv6 table?

thank you
--
antonio


Re: Neighbor Cache Exhaustion, was Re: Question about IPAM tools for v6

2014-01-31 Thread Fernando Gont
On 01/31/2014 11:16 AM, Enno Rey wrote:
> Hi Guillaume,
> 
> willing to share your lab setup / results? We did some testing
> ourselves in a Cisco-only setting and couldn't cause any problems.
> [for details see here:
> http://www.insinuator.net/2013/03/ipv6-neighbor-cache-exhaustion-attacks-risk-assessment-mitigation-strategies-part-1/]
>
>  After that I asked for other practical experience on the
> ipv6-hackers mailing list, but got no responses besides some "I heard
> this is a problem in $SOME_SETTING" and references to Jeff Wheeler's
> paper (which works on the - wrong - assumption that an "incomplete"
> entry can stay in the cache for a long time, which is not true for
> stacks implementing ND in conformance with RFC 4861). So your
> statement is actually the first first-hand proof of NCE being a
> real-world problem I ever hear of. thanks in advance for any
> additional detail.

Are we talking about Ciscos, specifically?

I recall reproducing this sort of thing on BSDs, Linux, and Windows.

Note: In some cases, the problem is that even when the entries in the
INCOMPLETE state are timed out, if that rate is lower than the rate at
which you "produce" them, it's still a problem.

Too bad -- we do have plenty of experience with this.. e.g., managing
the IP reassembly queue.

Thanks,
-- 
Fernando Gont
e-mail: ferna...@gont.com.ar || fg...@si6networks.com
PGP Fingerprint: 7809 84F5 322E 45C7 F1C9 3945 96EE A9EF D076 FFF1





Re: Question about IPAM tools for v6

2014-01-31 Thread Fernando Gont
On 01/31/2014 09:33 AM, Mohacsi Janos wrote:
> 
>> On 29/01/2014 22:19, Cricket Liu wrote:
>>> Consensus around here is that we support DHCPv6 for non-/64 subnets
>>> (particularly in the context of Prefix Delegation), but the immediate
>>> next question is "Why would you need that?"
>>
>> /64 netmask opens up nd cache exhaustion as a DoS vector.
> 
> ND cache size should be limited by HW/SW vendors - limiting the number
> of ND cache entries per MAC address, limiting the number of outstanding
> ND requests, etc.

+1

Don't blame the subnet size for sloppy implementations.

Cheers,
-- 
Fernando Gont
e-mail: ferna...@gont.com.ar || fg...@si6networks.com
PGP Fingerprint: 7809 84F5 322E 45C7 F1C9 3945 96EE A9EF D076 FFF1





Re: Question about IPAM tools for v6

2014-01-31 Thread Fernando Gont
On 01/31/2014 10:59 AM, Aurélien wrote:
> 
> I personally verified that this type of attack works with at least one
> major firewall vendor, provided you know/guess reasonably well the
> network behind it. (I'm not implying that this is a widespread attack type).
> 
> I also found this paper: http://inconcepts.biz/~jsw/IPv6_NDP_Exhaustion.pdf
> 
> I'm looking for other information sources, do you know other papers
> dealing with this problem ? Why do you think this is FUD ?

The attack does work. But the reason it works is because the
implementations are sloppy in this respect: they don't enforce limits on
the size of the data structures they manage.

The IPv4 subnet size enforces an artificial limit on things such as the
ARP cache. A /64 removes that artificial limit. However, you shouldn't
be relying on such a limit. You should enforce a real one in the
implementation itself.
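
To put rough numbers on that (the 64K figure is just an example of an
implementation-side cap, not a recommendation): an IPv4 /24 bounds the ARP
cache to a couple hundred entries as a side effect, while a /64 leaves 2^64
candidate addresses, so only an explicit limit bounds the NC:

    # Rough numbers only: why the subnet no longer bounds the cache in IPv6.
    ipv4_24_hosts = 2 ** (32 - 24) - 2        # 254 usable addresses in a /24
    ipv6_64_candidates = 2 ** 64              # candidate addresses in a /64
    example_nc_limit = 64 * 1024              # an example implementation-side cap

    print(ipv4_24_hosts)                      # 254
    print(ipv6_64_candidates)                 # 18446744073709551616
    print(ipv6_64_candidates // example_nc_limit)   # ~2.8e14 times the example cap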

And it's not just the NC. There are implementations that do not limit
the number of addresses they configure, that do not limit the number of
entries in the routing table, etc.

If you want to play, please take a look at the ipv6toolkit:
. On the same page, you'll
also find a PDF that discusses ND attacks, and that tells you how to
reproduce the attack with the toolkit.

Besides, each manual page of the toolkit (ra6(1), na6(1), etc.) has an
EXAMPLES section that provides popular ways to run each tool.

Thanks!

Cheers,
-- 
Fernando Gont
e-mail: ferna...@gont.com.ar || fg...@si6networks.com
PGP Fingerprint: 7809 84F5 322E 45C7 F1C9 3945 96EE A9EF D076 FFF1





Neighbor Cache Exhaustion, was Re: Question about IPAM tools for v6

2014-01-31 Thread Enno Rey
Hi Guillaume,

willing to share your lab setup / results?
We did some testing ourselves in a Cisco-only setting and couldn't cause any 
problems. [for details see here: 
http://www.insinuator.net/2013/03/ipv6-neighbor-cache-exhaustion-attacks-risk-assessment-mitigation-strategies-part-1/]

After that I asked for other practical experience on the ipv6-hackers mailing 
list, but got no responses besides some "I heard this is a problem in 
$SOME_SETTING" and references to Jeff Wheeler's paper (which works on the - 
wrong - assumption that an "incomplete" entry can stay in the cache for a long 
time, which is not true for stacks implementing ND in conformance with RFC 
4861).
So your statement is actually the first first-hand proof of NCE being a 
real-world problem I ever hear of. thanks in advance for any additional detail.

best

Enno





On Fri, Jan 31, 2014 at 02:59:24PM +0100, Aurélien wrote:
> On Fri, Jan 31, 2014 at 2:07 PM, Ole Troan  wrote:
> 
> > >> Consensus around here is that we support DHCPv6 for non-/64 subnets
> > >> (particularly in the context of Prefix Delegation), but the immediate
> > >> next question is "Why would you need that?"
> > >
> > > /64 netmask opens up nd cache exhaustion as a DoS vector.
> >
> > FUD.
> >
> >
> Hi Ole,
> 
> I personally verified that this type of attack works with at least one
> major firewall vendor, provided you know/guess reasonably well the network
> behind it. (I'm not implying that this is a widespread attack type).
> 
> I also found this paper: http://inconcepts.biz/~jsw/IPv6_NDP_Exhaustion.pdf
> 
> I'm looking for other information sources, do you know other papers dealing
> with this problem ? Why do you think this is FUD ?
> 
> Thanks,
> -- 
> Aurélien Guillaume

-- 
Enno Rey

ERNW GmbH - Carl-Bosch-Str. 4 - 69115 Heidelberg - www.ernw.de
Tel. +49 6221 480390 - Fax 6221 419008 - Cell +49 173 6745902 

Handelsregister Mannheim: HRB 337135
Geschaeftsfuehrer: Enno Rey

===
Blog: www.insinuator.net || Conference: www.troopers.de
Twitter: @Enno_Insinuator
===


Re: Question about IPAM tools for v6

2014-01-31 Thread Aurélien
On Fri, Jan 31, 2014 at 2:07 PM, Ole Troan  wrote:

> >> Consensus around here is that we support DHCPv6 for non-/64 subnets
> >> (particularly in the context of Prefix Delegation), but the immediate
> >> next question is "Why would you need that?"
> >
> > /64 netmask opens up nd cache exhaustion as a DoS vector.
>
> FUD.
>
>
Hi Ole,

I personally verified that this type of attack works with at least one
major firewall vendor, provided you know/guess reasonably well the network
behind it. (I'm not implying that this is a widespread attack type).

I also found this paper: http://inconcepts.biz/~jsw/IPv6_NDP_Exhaustion.pdf

I'm looking for other information sources, do you know other papers dealing
with this problem ? Why do you think this is FUD ?

Thanks,
-- 
Aurélien Guillaume


Re: show ipv6 destination cache on BSD host

2014-01-31 Thread Ignatios Souvatzis
On Thu, Jan 30, 2014 at 09:20:18PM +0100, Matjaz Straus Istenic wrote:
> On 30. jan. 2014, at 21:13, Nick Hilliard  wrote:
> 
> > ndp -an
> Well, this is for the local IPv6 ND cache only. I'm looking for a command to 
> display the _destination_ cache in order to check for changed Path MTU. Rui's 
> suggestion works fine:

Ah. For NetBSD, this seems to be what you want:

agent:> netstat -f inet6 -nr | grep D
Destination             Gateway                           Flags  Refs  Use  Mtu   Interface
2001:638:e813:a00::d25  fe80::20d:61ff:fe46:50ad%xennet0  UGHD   0     1    1280  xennet0
2a01:170:1012:77::25    fe80::20d:61ff:fe46:50ad%xennet0  UGHD   1     14   1280  xennet0





Re: Question about IPAM tools for v6

2014-01-31 Thread Ole Troan
>> Consensus around here is that we support DHCPv6 for non-/64 subnets
>> (particularly in the context of Prefix Delegation), but the immediate
>> next question is "Why would you need that?"
> 
> /64 netmask opens up nd cache exhaustion as a DoS vector.

FUD.

cheers,
Ole


signature.asc
Description: Message signed with OpenPGP using GPGMail


Re: Question about IPAM tools for v6

2014-01-31 Thread Mohacsi Janos




On Fri, 31 Jan 2014, Nick Hilliard wrote:


On 29/01/2014 22:19, Cricket Liu wrote:

Consensus around here is that we support DHCPv6 for non-/64 subnets
(particularly in the context of Prefix Delegation), but the immediate
next question is "Why would you need that?"


/64 netmask opens up nd cache exhaustion as a DoS vector.


ND cache size should be limited by HW/SW vendors - limiting the number of 
ND cache entries per MAC address, limiting the number of outstanding ND 
requests, etc.
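
As one concrete example of such a vendor-side limit (Linux only; the paths are
as found on typical kernels and the values shown are just commonly seen
defaults): the IPv6 neighbour table is bounded by the gc_thresh1/2/3 sysctls,
which an operator can inspect and tune per deployment. A small sketch to read
them:

    # Illustrative only: read the Linux IPv6 neighbour-table thresholds.
    BASE = "/proc/sys/net/ipv6/neigh/default"

    def read_thresholds():
        values = {}
        for name in ("gc_thresh1", "gc_thresh2", "gc_thresh3"):
            with open(f"{BASE}/{name}") as f:
                values[name] = int(f.read().strip())
        return values

    print(read_thresholds())   # e.g. {'gc_thresh1': 128, 'gc_thresh2': 512, 'gc_thresh3': 1024}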



Best Regards,
Janos Mohacsi


Re: Question about IPAM tools for v6

2014-01-31 Thread Nick Hilliard
On 29/01/2014 22:19, Cricket Liu wrote:
> Consensus around here is that we support DHCPv6 for non-/64 subnets
> (particularly in the context of Prefix Delegation), but the immediate
> next question is "Why would you need that?"

/64 netmask opens up nd cache exhaustion as a DoS vector.

Nick



Re: Question about IPAM tools for v6

2014-01-31 Thread Cricket Liu
Hi Mark.

On Jan 29, 2014, at 11:07 AM, Mark Boolootian  wrote:

>> Can anyone say whether existing IP Address Management tools that
>> support IPv6 have built-in assumptions or dependencies on the
>> /64 subnet prefix length, or whether they simply don't care about
>> subnet size?
> 
> We use Infoblox's IPAM.  There aren't any limitations of which I'm
> aware in terms of allocating space and IPv6 prefix length in the IPAM.
> However, I don't know if there are restrictions when it comes to
> DHCPv6, as we've only set up /64s.

Consensus around here is that we support DHCPv6 for non-/64 subnets 
(particularly in the context of Prefix Delegation), but the immediate next 
question is "Why would you need that?"

cricket