On 03/11/2013 06:34 PM, Kevin Chadwick wrote:
>> On 03/09/2013 07:53 AM, Kevin Chadwick wrote:
>>>> "There is no reason to believe that IPv6 will result in an 
>>>> increased use of IPsec."
>>>> 
>>>> Bull. The biggest barrier to IPsec use has been NAT! If an 
>>>> intermediate router has to rewrite the packet to change the 
>>>> apparent source and/or destination addresses, then the 
>>>> cryptographic signature will show it, and the packet will be 
>>>> correctly identified as having been tampered with!
>>>> 
> 
> http://marc.info/?l=openbsd-misc&m=135325641430178&w=2

I believe you've misunderstood what Brauer is saying there.

"" NAT needs to process every packets
"
" opposed to the !NAT case, where a router doesn't have to "process"
" every packet. rrright.
"

Here, when Brauer is talking about processing, he's not talking about
tampering with (modifying) packets; he's talking about inspecting them
as part of connection-state tracking and similar bookkeeping.

This is absolutely distinct from *modifying* the packet, which is what
IPsec is intended to detect. I also wouldn't count 'dropping' packets as
modification, as:

A) an intermediate firewall that intends to block a stream will
generally block it from the first packet, rather than letting part of
the stream through and dropping the rest.
B) handling of dropped packets is the responsibility of the transport
layer: UDP is expected to take losses in stride, and TCP is supposed to
notice and retransmit.

> 
>>> 
>>> It's hardly difficult to get around that now is it.
>> 
>> Sure, you can use an IP-in-IP tunnel...but that's retarded. IPSec 
>> was designed from the beginning to allow you to do things like sign
>> your IP header and encrypt everything else (meaning your UDP, TCP,
>> SCTP or what have you).
>> 
>> Setting up a tunnel just so your IP header can be signed wastes 
>> another 40 bytes for every non-fragmented packet. Ask someone 
>> trying to use data in a cellular context how valuable that 40 bytes
>> can be.
>> 
>>> You are wrong the biggest barrier is that it is not desirable to
>>>  do this as there are many reasons for firewalls to inspect 
>>> incoming packets. I don't agree with things like central virus 
>>> scanning especially by damn ISPs using crappy Huawei hardware, 
>>> deep inspection traffic shaping rather than pure bandwidth usage
>>>  tracking or active IDS myself but I do agree with scrubbing 
>>> packets.
>> 
>> It's not the transit network's job to scrub packets. Do your 
>> scrubbing at the VPN endpoint, where the IPSec packets are 
>> unwrapped.
>> 
>> Trusting the transit network to scrub packets is antithetical to 
>> the idea of using security measures to avoid MITM and traffic 
>> sniffing attacks in the first place!
>> 
> 
> I never said it was. I was more thinking of IPSEC relaying which 
> would be analogous to a VPN end point but without losing the end-end,
> neither are desirable,

Please, explain to me what the heck you mean, then? When you say

>>> You are wrong the biggest barrier is that it is not desirable to
>>>  do this as there are many reasons for firewalls to inspect 
>>> incoming packets.

I can't possibly understand what you're talking about except with the
context you've given me.

The only other thing I can take from what you're saying up to this point
is that you believe VPNs are bad, which I find, well, laughable.

> NAT has little to do with the lack of IPSEC deployment.

You keep saying this, but saying a thing doesn't make it understood; you
have to explain why.

> 
> What do you gain considering the increased resources,

You mean the bandwidth overhead of the ESP and/or AH headers? As opposed
to, what, TLS? GRE? IP-in-TLS-in-IP?

Let me have a clean, cheap TCP-on-ESP-on-IP stack for my
campus-to-campus connections!
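
Just to make the overhead argument concrete, here's a rough
back-of-the-envelope tally (a Python sketch; every header size in it is
an illustrative assumption, and the ESP figure in particular swings
with the cipher suite chosen):

# Rough per-packet header overhead, in bytes, for a few tunnel shapes.
# All sizes are illustrative assumptions: IPv4/IPv6/TCP with no options,
# a ballpark TLS record overhead, and ESP taken as 8 bytes (SPI +
# sequence) plus ~26 for IV, padding and ICV -- the real ESP figure
# depends entirely on the cipher suite.
IPV4, IPV6, TCP, UDP, ETHERNET, TLS_RECORD = 20, 40, 20, 8, 14, 25
ESP = 8 + 26

stacks = {
    "TCP-on-ESP-on-IPv6 (transport mode)":
        IPV6 + ESP + TCP,
    "TCP-in-IP-in-ESP-in-UDP-in-IPv4 (NAT-T tunnel mode)":
        IPV4 + UDP + ESP + IPV4 + TCP,
    "IP-in-Ethernet-in-TLS-in-TCP-in-IPv4 (typical TLS VPN)":
        IPV4 + TCP + TLS_RECORD + ETHERNET + IPV4 + TCP,
}

for name, overhead in stacks.items():
    print(f"{name}: ~{overhead} bytes of headers per packet")

The exact byte counts will move around with ciphers, options and MTU,
but the ordering won't: the transport-mode ESP stack is the lean one.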

> pointlessly increasing chances of cryptanalysis and pointlessly 
> increasing the chances of exploitation due to the fact that the more
>  complex IPSEC itself can have bugs like Openssl does,

If I read your argument correctly, you would view encryption in general
as harmful?

> not to mention amplifying DDOS without the attacker doing anything, 
> which is the biggest and more of a threat than ever,

One of my servers is currently undergoing a SYN flood. I'm well aware
that the Internet is a dangerous place.

Honestly, if someone wants to DDOS you, the increased amplification
factor of DNSSEC isn't going to be the deciding factor in whether your
server stays up or goes down.

> or are you going to stop using the internet.

Use hyperbole much?

> When ipv4 can utilise encryption without limitations including IPSEC 
> but more appropriately like ssh just fine when needed you see it is 
> simply not desirable and a panacea that will not happen. You are 
> simply in a bubble as the IETF were.

For the purposes of tunnels, I've used IPsec on IPv4, SSH and TLS.

Quite frankly? IPsec on IPv6 is the least painful option of all of these.

IPsec on IPv4 is frustrating because the VPN clients are poorly
implemented, and you *must* use TCP/UDP-in-ESP/AH-in-(optional TCP or
UDP in)-IP, or you're not going to get through NAT without getting the
network administrator to explicitly set up a forward for you.

TLS works (and is very common for tunneling), but you're again stuck
with whatever-in-IP-in-TLS-in-TCP/UDP-in-IP. Or, more commonly,
whatever-in-IP-in-Ethernet-in-TLS-in-TCP/UDP-in-IP. I'd be amazed if you
didn't think that was a royal mess.

SSH for tunneling? *Royal* PITA, especially when you have to connect to
multiple places at once. If there's only one endpoint you have to
connect through to get where you're going, you can use SOCKS. Otherwise,
you're forwarding ports. If you're forwarding ports, you're OK, so long
as you don't have to use the same local port for two different purposes.
And, yes, I've *seen* setups so convoluted that there were explicit
instructions stating which local port would have to be used for each
remote service, with server-side accommodations to suit, simply so that
those local ports wouldn't conflict.

With IPsec on IPv6, in *all* of the circumstances where I've required a
VPN, the setup would have been a clean (whatever)-in-ESP/AH-in-IP. That
would be *much* cleaner, and *much* more efficient at all levels of the
network.

> 
>>> 
>>>> With IPsec, NAT is unnecessary. (You can still use it if you 
>>>> need it...but please try to avoid it!)
>>>> 
>>> 
>>> Actually it is no problem at all and is far better than some of 
>>> the rubbish ipv6 encourages client apps to do. (See the links I 
>>> sent in the other mail)
>> 
>> Please read the links before you send them, and make specific 
>> references to the content you want people to look at. I've read and
>> responded to the links you've offered (which were links to archived
>> messages on mailing lists, and the messages were opinion pieces
>> with little (if any) technical material.)
>> 
> 
> 
>>> 
>>>> Re "DNS support for IPv6"
>>>> 
>>>> "Increased size of DNS responses due to larger addresses might
>>>>  be exploited for DDos attacks"
>>>> 
>>>> That's not even significant. Have you looked at the size of DNS
>>>> responses? The increased size of the address pales in 
>>>> comparison to the amount of other data already stuffed into the
>>>> packet.
>>> 
>>> It's been ages since I looked at that link and longer addresses 
>>> would certainly be needed anyway but certainly with DNSSEC again 
>>> concocted by costly unthoughtful and unengaging groups who chose
>>> to ignore DJB and enable amplification attacks.
>> 
>> What from DJB did they ignore? I honestly don't know what you're 
>> talking about.
>> 
> 
> They completely ignored dnscurve.org

You know, I managed to read up on DNSCurve thanks to the link Michael
Orlitzky provided, and you know what I think of it?

I think it would do more harm to the DNS infrastructure than good. The
implementation advocated would do away with intermediary DNS servers,
and that is a *very bad thing*, because it eliminates caching.

Caching isn't just about being able to get a response to the client
quickly. It's also about being able to get a response to the client *at
all*.

By default, a DNS resolver will make a query, wait 30s for a timeout,
retry its query, wait significantly longer for a second timeout, and
then try a third time, waiting longer still.

If the resolver's query reaches the DNS server fine, and the DNS
server's response reaches the resolver fine, all is well. If the query
or response packets get lost along the way, then the resolver (which is
probably someone's web browser) waits 30s before it tries again. 30s is
a *long* time. Your average user will only wait 10s before hitting F5 or
moving along.

Now, consider a typical modern residential network:

{( internet )} -> ISP -> ADSL -> home router -> end user.

That ADSL link is an absolutely treacherous beast. It drops packets
without mercy, and the more you can accomplish without having to send
packets across it, the better. Unfortunately, your average modern
website includes files from up to a dozen different domains, with a few
redirections involved for most of them.
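
To put a rough number on that (the loss rate and domain count below are
assumptions for illustration, not measurements): if each DNS round trip
across the ADSL link has, say, a 4% chance of going missing, and a page
needs lookups for a dozen domains, then:

# Chance that at least one DNS lookup on a page load eats a full
# timeout, given an assumed per-round-trip loss rate on the lossy link.
loss_per_roundtrip = 0.04   # assumption: ~4% of query/response pairs lost
domains_per_page = 12       # assumption: a dozen domains per modern page

p_stall = 1 - (1 - loss_per_roundtrip) ** domains_per_page
print(f"~{p_stall:.0%} of page loads wait out at least one DNS timeout")
# -> roughly 39% with these made-up numbers

Which is exactly why caching on your side of the lossy link matters so
much: a cache hit never has to cross it at all.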

The typical modern home router has a caching DNS server built-in because
DNS commonly operates over UDP, and UDP does not provide delivery
guarantee. The end user's resolver (which is in their web browser)
queries the home router, which then recurses and makes the query of the
upstream servers on the end-user's behalf. Once the recursive resolver
gets a response, it caches that response.

I use a Debian PC as my home router. When I was on DSL, I initially
didn't set up a caching recursive resolver. (I also didn't have caching
recursive resolvers installed locally on my machines.) The Internet felt
terrible. Half the resources on any given social media site or complex
web page didn't load until the second or third refresh, simply because
the DNS requests to find the relevant servers got lost along the way.

Once I set up a caching recursive resolver on my router, my Internet
experience improved dramatically.

Now consider a more painful (yet still typical) network:

{( internet )} -> ISP -> ADSL -> wifi hot spot -> end user

That wifi hot spot hammers packets harder than the ADSL link that's also
taking its due. Between having too many APs in a given area, too many
clients on a public network and too much ambient noise in the 2.4GHz ISM
band, getting a packet through that network is like playing frogger
blind; you only accomplish it because your computer plays the game with
infinite lives and doesn't let you see all the times the frog got squished.

This is why all Windows systems have had caching recursive resolvers
built in for years, and why NetworkManager likes to use dnsmasq: caching
has been pushed onto local systems to work around terrible local
physical layers.

Do we need to get into using cellular connections and tethering in
non-urban areas? I could tell you about a six-mile stretch on my wife's
daily commute that the phone claims to have data service in, but can't
get a packet through to save its battery.

> or that RSA768 was not strong enough to be a good choice

I don't think many people seriously anticipated how rapidly Moore's law
would catch up with public-key cryptography. I'm not sure *anybody*
without a security clearance or heavy NDAs could have anticipated it
through any sane line of reasoning.

> and ECDSA should be looked at and most importantly the DOS 
> amplification (we are talking years ago).

I understand that this is a key issue for you. I can't comprehend why;
I don't see it as being anywhere near as significant an issue as you
seem to believe.

> I even had a discussion with a dns caching tools (that I do like a 
> lot) author who completely dismissed the potential of RSA being 
> broken for years and years. Guess what's come to light since.

As I said, Moore's law managed to surprise a great number of very
reasonable people. There's *no* way that anyone in 1995 could have
reasonably anticipated the rise of fully-programmable
mass-parallel-calculation engines whose development was driven and made
cheap (and thus highly, highly scalable) by *video games*.

> 
>>> 
>>> His latest on the "DNS security mess"
>>> 
>>> http://cr.yp.to/talks/2013.02.07/slides.pdf
>> 
>> I've never before in my life seen someone animate slideshow 
>> transitions and save off intermediate frames as individual PDF 
>> pages. That was painful.
>> 
> 
> Yeah, xpdf worked well though. I actually couldn't find the link and
>  looked it up and thought it was just an update of 2012 as it had the
>  same title and only got around to reading it about an hour later.
> 
>> So, I read what was discussed there. First, he describes failings 
>> of HTTPSEC. I don't have any problem with what he's talking about 
>> there, honestly; it makes a reasonable amount of sense,
>> considering intermediate caching servers aren't very common for
>> HTTP traffic, and HTTPS traffic makes intermediate caching
>> impossible. (unless you've already got serious security problems by
>> way of a MITM opening.)
>> 
>> Then he turns around and dedicates two slides showing a DNS 
>> delegation sequence...and then states in a single slide that DNSSEC
>> has all the same problems as HTTPSEC.
>> 
>> DNSSEC doesn't have the same problems as HTTPSEC, because almost 
>> *every* recursive resolving DNS server (which is most of the DNS 
>> servers on the Internet) employs caching.
>> 
> 
> I suggest you read the 2012 slides.

Provide me with a link, and I will. I'm not going to go digging without
a strong idea of what I'm looking for, and where to find it.

> 
>>> 
>>>> "An attacker can connect to an IPv4-only network, and forge 
>>>> IPv6 Router Advertisement messages. (*)"
>>> 
>>>> Again, this depends on them being on the same layer 2 network 
>>>> segment.
>>> 
>>>> The same class of attacks would be possible for any IPv4 
>>>> successor that implemented either RAs or DHCP.
>>> 
>>> Neither of which I use.
>> 
>> You're telling me you don't use DHCP? Seriously, that you 
>> statically configure the IPv4 addresses of all the hosts on your 
>> network?
>> 
>> With one exception, I haven't personally seen a network configured
>>  in that way since 1998! Certainly, every network has some hosts 
>> configured statically, but virtually no network I've observed (and
>>  I've seen private networks between 2 and 50 hosts, and commercial
>>  networks between 5 and 30k hosts) managed completely statically.
>> 
> 
> None of my networks for way over a decade have ever employed DHCP
> and boy am I glad in avoiding many security issues. That was one of
> the first decisions in network design I made as a teenager.
> 
> In fact the only networks that do use DHCP by definition are not
> well cared for such as roaming or tethering a laptop I trust little
> to my phone which I also trust little and for many good, sorry very
> bad mobile network reasons.

You're pretty much in a class of your own, there. I'm sorry to say I
can't empathize or agree in the slightest...

> 
>>> 
>>> As I said we would be here all day and that link wasn't as good 
>>> as the one I was actually looking for.
>>> 
>>> local NAT done right is no problem and actually a good thing and
>>>  I have no issues playing games, running servers or anything else
>>>  behind NAT.
>> 
>> See others' responses about port standardization, and about how it 
>> enforces a distinction between 'clients' and 'servers' that's 
>> unnecessary (and even harmful) for a variety of applications.
>> 
> 
> Boy should there be a distinction between clients and servers, 
> without it we would be in a world of pain.

Why? You keep making statements without giving reasons for them.

> However I still atest that there is no problem and far less than
> what ipv6 advocates.

Are you referring to the problems identified by the slides you've
referenced so far? And that I've largely addressed, debunked or pointed
out are no worse than IPv4? You never gave me a point-by-point
response...instead you indicated the slide deck wasn't as good as the
one you were looking for. Apart from your finding any dynamic network
configuration appalling, I don't know what you're talking about.

> 
>>> Global NAT works well enough
>> 
>> With global NAT, anything you do that depends on port forwarding is
>> broken.
>> 
>>> but isn't a good thing and wouldn't exist if they had simply 
>>> added more addresses quickly. The hardware uptake would have been
>>> no issue rather than a decade of pleads.
>> 
>> There's almost nothing you've pointed at so far that would have 
>> prevented backbones from IPv6 uptake...and the backbones dragged 
>> their heels long enough. IPv4 with expanded address bits would have
>> been worse; IPv6 explicitly includes features intended to ease the
>> strain on the size of a global routing view!
>> 
> 
> And much more which is the problem

Wait, what? No!

I'm talking about route aggregation, which comes from hierarchical,
CIDR-style address allocation. Through route aggregation, you reduce the
number of entries in the routing table. Yes, the byte size of the global
routing view will grow with IPv6. IPv4 with 128 address bits would have
been far worse, unless route aggregation had been made part of its
design (as was done with IPv6).
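
As a concrete illustration of the kind of aggregation I mean (the
prefixes are documentation-space examples), Python's ipaddress module
shows how four adjacent delegations collapse into a single
routing-table entry:

import ipaddress

# Four adjacent /48 delegations, as an upstream might hand to one region.
prefixes = [ipaddress.ip_network(f"2001:db8:{i:x}::/48") for i in range(4)]

# With hierarchical allocation, a transit router only has to carry the
# aggregate, not each individual delegation.
print(list(ipaddress.collapse_addresses(prefixes)))
# -> [IPv6Network('2001:db8::/46')]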

> and yet still including the bad parts of ipv4 apparently.

Are there any parts of IPv4 that IPv6 carries that you consider bad
parts, or are we just talking about DHCP again?

> 
>>> 
>>> We haven't even touched on the code yet and so all the vulnerable
>>> especially home hardware which yes often has vulnerable sps
>>> anyway but by no way just home hardware.
>>> 
>>> The ipvshit links give an insight into the code complexity.
>> 
>> You call that code complex? (and I still don't know what 'ipvshit'
>>  is, except possibly one guy's pejorative description of IPv6)
> 
> Think about what it is doing from the comment not the actual code.

You mean this seemingly harmless comment?

/*
 * sin6_len is the size of the sockaddr so substract the offset of
 * the possibly truncated sin6_addr struct.
 */

Or do you mean this advocacy of NAT444?

" who sez that your made up isp has to hand out network-wide unique IPs
" to his customers?

Or this attack on the person he's replying to:

" why do i even waste time on some ipvshit advocate that acts like a
" politician claiming we have to eat shit because there wouldn't be an
" alternative, making up a case out of nothing to "prove" his case?

Or his diatribe about IPv6?

" look at the oh so bright future yourself, look at the code required to
" deal with that misdesigned piece of shit.
" did i just say "designed"? sorry. it's obvious that nothing remotely
" related to design was involved.

I'm not seeing what I've misinterpreted.

> 
>> 
>>> Note OpenBSDs kernel which is very secure (unlike Linux whose 
>>> primary goal is function)
>> 
>> I have no complaints about OpenBSD.
>> 
>>> and has had just a few remote holes in well over a decade, one of
>>> which was in ipv6
>> 
>> So OpenBSD, who you venerate, had a bug in its IPv6 implementation,
>> and for that you view IPv6 as an abomination? Is that it? (Or is it
>> because the guy who wrote the buggy code thinks so, and you
>> venerate him? Because that seems plausible, too.)
>> 
> 
> You seem to underestimate the gravity or such a bug making it into 
> the OpenBSD kernel.

You seem to labor under the misunderstanding that OpenBSD is perfect.

> 
> However, not just that, try searching osvdb.org for ipv4 = < 1 page
> 
> ipv6 = > 3 pages

You know what I'm noticing in those three pages? Most of those bugs are
in hardware vendor firmware. I don't think it would have been any better
if they'd had to implement an "IPv4 with extended address bits" protocol.
Say what you want about IPv6 being complex; the truth is that IPv6 was
explicitly designed to simplify and streamline packet routing.
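
A small example of what I mean by "streamline" (an illustrative sketch,
not anyone's firmware): the IPv6 header is a fixed 40 bytes, with no
header checksum to recompute and no in-router fragmentation, so a
forwarding path can unpack it in one shot; with IPv4 you have to read
the IHL field before you even know where the header ends.

import struct

def parse_ipv6_fixed_header(packet: bytes) -> dict:
    """Unpack the fixed 40-byte IPv6 header in a single struct call."""
    ver_tc_flow, payload_len, next_header, hop_limit, src, dst = \
        struct.unpack("!IHBB16s16s", packet[:40])
    return {
        "version":        ver_tc_flow >> 28,
        "payload_length": payload_len,
        "next_header":    next_header,
        "hop_limit":      hop_limit,
        "src":            src,
        "dst":            dst,
    }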

> 
>>> and which I had avoided without down time because I won't and 
>>> what's more shouldn't use ipv6 wherever possible
>> 
>> Do you mean that as "I'll avoid IPv6 as much as possible" or do you
>> mean that as "I won't use IPv6 in all places where it's possible to
>> use IPv6"?
>> 
> 
> Former
> 
>> If the former, I'm afraid I still haven't seen a solid technical 
>> case against it.
>> 
> 
> 
>>> and had actually removed it from the kernel all together.
>> 
>> That's sensible; if you're not going to use code, remove it.
>> 
> 
> You would be surprised, many security books used to say it was a 
> waste of time. I ignored them.
> 
>>> 
>>> If I am Trolling rather than simply trying to make people aware 
>>> then stating ipv6 is wonderful is Trolling just as much or more.
>> 
>> IPv6 fixes things about the Internet that are currently broken by 
>> IPv4 address exhaustion. I try to point out where IPv6 fixes 
>> things, I try to clear up popular misconceptions about IPv6, and I
>>  try to help people understand a thing rather than simply fear it.
>> 
>> If you take a highly competent IPv4 network administrator and drop
>>  him into IPv6 territory without ensuring that he has knowledge of
>>  IPv6 best practices and practical concerns, you're likely to get a
>>  broken network out of it. I try to help keep that from happening.
> 
> It is more than that. Though I am glad you are reducing the problems
>  out there.

I want to highlight this next bit.

> IPV6 unarguably is worse than IPV4. You can argue it is 
> also better

...

> but where if it is it serves no useful purpose for me 
> that would make me even consider using it unless it is redesigned.

I can't say nobody is going to force you to use IPv6. That would be a
lie. Eventually, there will be services on the Internet which you won't
be able to access without using IPv6 in some way.

It's far better to learn how to use the thing (better still, learn how
to use it properly!) than to get pushed into the deep end of the pool
before you've learned how to swim.

