Re: SHA1 collisions proven possible

2017-03-01 Thread Peter Kristolaitis

On 3/1/2017 10:50 PM, James DeVincentis via NANOG wrote:

Realistically any hash function *will* have collisions when two items are 
specifically crafted to collide after expending insane amounts of computing 
power, money, and… I wonder how much power they burned for this little stunt.


Easy enough to estimate.

A dual-socket server with 2 X5675 CPUs (12 cores total) draws about 225W 
under full load, or about 18.75W per core.


0.01875 kW * 8766 h/y * 6500 y = about 1,070,000 kWh

For the GPU side, an NVIDIA Tesla K80 GPU accelerator draws 300W at full 
load.


0.3 kW * 8766 h/y * 110 y = about 290,000 kWh.

So the total calculation consumed about 1.36M kWh.

A quick Google search tells me the US national average industrial rate 
for electricity is $0.0667/kWh, for a cost of $90,712. That's not 
counting AC-DC conversion loss, or the power to run the cooling.  Or the 
cost of the hardware, though it's fair to assume that in Google's case 
they didn't have to buy any new hardware just for this.
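
For anyone who wants to rerun the arithmetic, a minimal Python sketch (the
wattages, the 8766 h/y figure, and the $0.0667/kWh rate are the assumptions
stated above; the rest is unit conversion):

    CPU_KW_PER_CORE = 0.01875   # 225 W / 12 cores
    GPU_KW = 0.300              # Tesla K80 at full load
    HOURS_PER_YEAR = 8766       # 365.25 days * 24 h

    cpu_kwh = CPU_KW_PER_CORE * HOURS_PER_YEAR * 6500   # 6500 CPU-years
    gpu_kwh = GPU_KW * HOURS_PER_YEAR * 110             # 110 GPU-years
    total_kwh = cpu_kwh + gpu_kwh                       # ~1.36M kWh

    print(f"total: {total_kwh:,.0f} kWh")
    print(f"cost:  ${total_kwh * 0.0667:,.0f}")         # roughly $90k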




Re: Consumer networking head scratcher

2017-03-01 Thread Chuck Anderson
On Thu, Mar 02, 2017 at 12:24:38PM +0700, Roland Dobbins wrote:
> On 2 Mar 2017, at 9:55, Oliver O'Boyle wrote:
> 
> >Currently, I have 3 devices connected. :)
> 
> What about DNS issues?  Are you sure that you really have a
> networking issue, or are you having intermittent DNS resolution
> problems caused by flaky/overloaded/attacked recursives, EDNS0

This reminded me of another possibility related to NAT table
exhaustion.  Are you running a full recursive resolver on a system
behind the NAT?  Especially one like unbound, possibly with DNSSEC?  I had
some strange issues during the time when unbound was priming
its cache from a cold start...


Re: SHA1 collisions proven possible

2017-03-01 Thread Royce Williams
On Wed, Mar 1, 2017 at 7:57 PM, James DeVincentis via NANOG wrote:

[ reasonable analysis snipped :) ]

> With all of these reasons wrapped up, it clearly shows the level of hype 
> around this attack is the result of sensationalist articles and clickbait 
> titles.

I have trouble believing that Sleevi, Whalley et al spent years
championing the uphill slog of purging the global web PKI
infrastructure of SHA-1 to culminate in a flash-in-the-pan clickbait
party.

Instead, consider how long it has historically taken to pry
known-to-be-weak hashes and crypto from entrenched implementations.

If this round of hype actually scares CxOs and compliance bodies into
doing The Right Thing in advance ... then the hype doesn't bother me
in the slightest.

Royce


Re: Consumer networking head scratcher

2017-03-01 Thread Roland Dobbins

On 2 Mar 2017, at 9:55, Oliver O'Boyle wrote:


Currently, I have 3 devices connected. :)


You could have one or more botted machines launching outbound DDoS 
attacks, potentially filling up the NAT translation table and/or getting 
squelched by your broadband access provider with layer-4 granularity.  
And the boxes themselves could be churning away due to being compromised 
(look at CPU and memory stats over time).  Aggressive horizontal 
scanning is often a hallmark of botted machines, and it can interrupt 
normal network access on the botted hosts themselves.


I don't actually think that's the case, given the symptomology you 
report, but just wanted to put it out there for the list archive.


What about DNS issues?  Are you sure that you really have a networking 
issue, or are you having intermittent DNS resolution problems caused by 
flaky/overloaded/attacked recursives, EDNS0 problems (i.e., filtering on 
DNS responses > 512 bytes), or TCP/53 blockage?  Different host 
OSes/browsers/apps exhibit differing re-query characteristics.  Are the 
Windows boxes and the other boxes set to use the same recursors?  Can 
you resolve DNS requests during the outages?
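
One cheap way to answer that last question is to log resolution attempts
over time from a host behind the NAT. A stdlib-only Python sketch (the
hostnames and the probe interval are arbitrary choices, not recommendations):

    import datetime
    import socket
    import time

    NAMES = ["google.com", "example.com", "comcast.net"]

    while True:
        for name in NAMES:
            try:
                socket.getaddrinfo(name, 80)   # system resolver lookup
            except socket.gaierror as exc:
                print(datetime.datetime.now(), "FAIL", name, exc)
        time.sleep(60)

If names keep resolving while other traffic dies, the problem is below the
resolver.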


Are your boxes statically-addressed, or are they using DHCP?  
Periodically-duplicate IPs can cause intermittent symptoms, too.  If 
you're using the consumer router as a DHCP server, DHCP-lease nonsense 
could be a contributing factor.


Are the Windows boxes running some common application/service which 
updates and/or churns periodically?  Are they members of a Windows 
workgroup?  All kinds of strange name-resolution stuff goes on with 
Windows-specific networking.


Also, be sure to use -n with traceroute.  tcptraceroute is useful, too.  
netstat -rn should work on Windows boxes, IIRC.


---
Roland Dobbins 


Re: SHA1 collisions proven possible

2017-03-01 Thread James DeVincentis via NANOG
Let me add some context to the discussion.

I run threat and vulnerability management for a large financial institution. 
This attack falls under our realm. We’ve had a plan in progress for several 
years to migrate away from SHA-1. We’ve been carefully watching the progression 
of the weakening of SHA-1 as computing power has increased and access to 
large-scale computing has become standard over the last 5 years. This does 
nothing to change our timeline. 

The attack proves nothing we didn’t already know: as computing power 
increases, we must change hashing mechanisms every few years, which is why 
this is no surprise to those of us in the security sphere. However, the 
presentation of this particular information follows a very troublesome trend 
we’ve been seeing in security: naming a vulnerability something silly but 
easily remembered by management types. ‘HeartBleed’, ‘Shattered’, 
‘CloudBleed’, ‘SomethingBleed’… This is a publicity stunt by Google to whip 
up hype, and it worked. Case in point: some of the posts in this thread 
completely dismiss fact for assumption and embellishment. 

With specific regard to SSL certificates: "Are TLS/SSL certificates at risk? 
Any Certification Authority abiding by the CA/Browser Forum regulations is not 
allowed to issue SHA-1 certificates anymore. Furthermore, it is required that 
certificate authorities insert at least 64 bits of randomness inside the serial 
number field. If properly implemented this helps preventing a practical 
exploitation." (https://shattered.it/). It seems not all of the news outlets 
read the entire page before typing up a sensationalist post claiming all your 
data is suddenly at risk.

Here’s why this is sensationalist. If anyone with *actual* hands-on work in the 
*security* sphere in *recent* years disagrees, I’ll be happy to discuss. 

- Hardened SHA1 exists to prevent this exact type of attack. 

- Every hash function will eventually have collisions. It’s literally 
impossible to create a hash function that never collides: there are an 
infinite number of possible inputs but only a finite number of outputs, so by 
the pigeonhole principle collisions must exist. All a hash function can do is 
make collisions implausible to find. That’s it. There is no 100% certainty 
that any hash function will not have collisions (a toy demonstration follows 
this list).

- Google created a weak example. The difference in the document they 
generated was a background color. They didn’t even go a full RGBA difference; 
they went from red to blue, a difference of 4 bytes (the R and B values). It 
took them nine quintillion computations to generate the correct spurious data 
to create a collision, and that was with control of both documents and only a 
4-byte difference. That spurious data inflated the PDF by at least a few 
hundred KB. Imagine the computations it would take for the examples they 
give. Anyone know? No. They didn’t dare attempt it, because they knew it 
isn’t possible.

- This wasn’t even an attack on a cryptographic method that utilizes SHA1. 
This was a unique-identifier / integrity attack. Comparing an SHA1 hash is 
not the correct way to verify the authenticity of a document; comparing an 
SHA1 hash is how you verify the integrity of a document, looking for 
corruption. Authenticity is derived from having the data signed by a trusted 
source, or encrypted by a trusted source using, say, PGP.

- And last but not least, the point which takes all of the bite out of the 
attack: Google also showed it was easily detectable. Is a weakness or attack 
on a hash function really viable if it’s easily and readily detectable? No, 
it’s not. (See IDS and WAFs: they filter and detect attacks against systems 
that may be vulnerable and prevent them by checking for the attacks.) So if I 
see a hash collision, I’ll modify the algorithm… Wait, this sounds awfully 
familiar… Oh yeah… Hardened SHA1.
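
A toy demonstration of the pigeonhole point above: truncate SHA-1 far enough
and a plain birthday search finds a collision in moments. A minimal Python
sketch (my own illustration; the SHAttered attack exploits structural
weaknesses in SHA-1 and is not a birthday search):

    import hashlib
    import os

    # Keep only 24 bits of SHA-1; the birthday bound says a collision is
    # expected after roughly sqrt(2**24) ~ 4096 random inputs.
    seen = {}
    while True:
        msg = os.urandom(16)
        tag = hashlib.sha1(msg).digest()[:3]
        if tag in seen and seen[tag] != msg:
            print("collision:", seen[tag].hex(), "vs", msg.hex())
            break
        seen[tag] = msg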

With all of these reasons wrapped up, it clearly shows the level of hype 
around this attack is the result of sensationalist articles and clickbait 
titles.

It also appears the majority of those who embrace fact also abandoned this 
thread fairly early once it began to devolve into embracing sensationalism. I’m 
going to join them. 

*micdrop* *unsubscribe*


> On Mar 1, 2017, at 9:49 PM, Matt Palmer  wrote:
> 
> On Thu, Mar 02, 2017 at 03:42:12AM +, Nick Hilliard wrote:
>> James DeVincentis via NANOG wrote:
>>> On top of that, the calculations they did were for a stupidly simple
>>> document modification in a type of document where hiding extraneous
>>> data is easy. This will get exponentially computationally more
>>> expensive the more data you want to mask. It took nine quintillion
>>> computations in order to mask a background color change in a PDF.
>>> 
>>> And again, the main counter-point is being missed. Both the good and
>>> bad documents have to be brute forced which largely defeats the
> >> purpose. Those numbers of 

Re: SHA1 collisions proven possible

2017-03-01 Thread James DeVincentis via NANOG
I like the footnote they attached specifically for SHA1. 

"[3] Google spent 6500 CPU years and 110 GPU years to convince everyone we need 
to stop using SHA-1 for security critical applications. Also because it was 
cool."

It’s also not a preimage attack; this isn’t even a FIRST preimage attack. That 
table needs an additional field type: “First non-preimage deliberately crafted 
collision created”. 

However, it demonstrates a technique that maybe, with some refining, *could* 
turn into a preimage attack. 

Realistically any hash function *will* have collisions when two items are 
specifically crafted to collide after expending insane amounts of computing 
power, money, and… I wonder how much power they burned for this little stunt.

> On Mar 1, 2017, at 9:42 PM, Nick Hilliard  wrote:
> 
> James DeVincentis via NANOG wrote:
>> On top of that, the calculations they did were for a stupidly simple
>> document modification in a type of document where hiding extraneous
>> data is easy. This will get exponentially computationally more
>> expensive the more data you want to mask. It took nine quintillion
>> computations in order to mask a background color change in a PDF.
>> 
>> And again, the main counter-point is being missed. Both the good and
>> bad documents have to be brute forced which largely defeats the
>> purpose. Those numbers of computing hours are a brute force. It may
>> be a simplified brute force, but still a brute force.
>> 
>> The hype being generated is causing management at many places to cry
>> exactly what Google wanted, “Wolf! Wolf!”.
> 
> The Reaction state table described in
> https://valerieaurora.org/hash.html appears to be entertainingly accurate.
> 
> Nick



Re: SHA1 collisions proven possible

2017-03-01 Thread Matt Palmer
On Thu, Mar 02, 2017 at 03:42:12AM +, Nick Hilliard wrote:
> James DeVincentis via NANOG wrote:
> > On top of that, the calculations they did were for a stupidly simple
> > document modification in a type of document where hiding extraneous
> > data is easy. This will get exponentially computationally more
> > expensive the more data you want to mask. It took nine quintillion
> > computations in order to mask a background color change in a PDF.
> > 
> > And again, the main counter-point is being missed. Both the good and
> > bad documents have to be brute forced which largely defeats the
> > purpose. Those numbers of computing hours are a brute force. It may
> > be a simplified brute force, but still a brute force.
> > 
> > The hype being generated is causing management at many places to cry
> > exactly what Google wanted, “Wolf! Wolf!”.
> 
> The Reaction state table described in
> https://valerieaurora.org/hash.html appears to be entertainingly accurate.

With particular reference to the "slashdotter" column.

- Matt



Re: SHA1 collisions proven possible

2017-03-01 Thread Nick Hilliard
James DeVincentis via NANOG wrote:
> On top of that, the calculations they did were for a stupidly simple
> document modification in a type of document where hiding extraneous
> data is easy. This will get exponentially computationally more
> expensive the more data you want to mask. It took nine quintillion
> computations in order to mask a background color change in a PDF.
> 
> And again, the main counter-point is being missed. Both the good and
> bad documents have to be brute forced which largely defeats the
> purpose. Those numbers of computing hours are a brute force. It may
> be a simplified brute force, but still a brute force.
> 
> The hype being generated is causing management at many places to cry
> exactly what Google wanted, “Wolf! Wolf!”.

The Reaction state table described in
https://valerieaurora.org/hash.html appears to be entertainingly accurate.

Nick


Re: Consumer networking head scratcher

2017-03-01 Thread Oliver O'Boyle
Next -->

On March 1, 2017, at 9:31 PM, Ryan Pugatch  wrote:




On Wed, Mar 1, 2017, at 09:29 PM, Oliver O'Boyle wrote:

Each device associated with the AP consumes memory. Small low-end routers don't 
typically come with much memory. If you've got a lot of devices associated with 
the AP you will run out of memory. I'm not sure how many devices you're 
connecting, though. Three will not cause this problem. 30 might.


O.



Currently, I have 3 devices connected. :)




Re: Consumer networking head scratcher

2017-03-01 Thread Ryan Pugatch




On Wed, Mar 1, 2017, at 09:29 PM, Oliver O'Boyle wrote:

> Each device associated with the AP consumes memory. Small low-end
> routers don't typically come with much memory. If you've got a lot of
> devices associated with the AP you will run out of memory. I'm not
> sure how many devices you're connecting, though. Three will not cause
> this problem. 30 might.
> 

> O.

> 



Currently, I have 3 devices connected. :)




Re: Consumer networking head scratcher

2017-03-01 Thread Oliver O'Boyle
Each device associated with the AP consumes memory. Small low-end routers
don't typically come with much memory. If you've got a lot of devices
associated with the AP you will run out of memory. I'm not sure how many
devices you're connecting, though. Three will not cause this problem. 30
might.

O.

On Wed, Mar 1, 2017 at 9:22 PM, Ryan Pugatch  wrote:

>
>
> On Wed, Mar 1, 2017, at 06:35 PM, Jean-Francois Mezei wrote:
> > On 2017-03-01 11:28, Ryan Pugatch wrote:
> >
> > > At random times, my Windows machines (Win 7 and Win 10, attached to the
> > > network via WiFi, 5GHz) lose connectivity to the Internet.
> >
> > > For what it's worth, the router is a Linksys EA7300 that I just picked
> > > up.
> >
> >
> > Way back when, I had a Netgear router. It ended up having a limit on its
> > NAT translation table, and when I had too many connections going at the same
> > time (or not yet timed out), I would lose connectivity. There was an
> > unofficial patch to the firmware (literally a patch in the code that
> > defined the table size) to increase that table to 1000, as I recall.
> >
> > Does the Linksys have a means to display the NAT translation table and
> > see if maybe connections are lost when that table is full and lots of
> > connections have not yet timed out?
> >
>
>
> It doesn't seem to provide visibility into the NAT tables.  However, I'm
> starting to think you might be on to something.
>
> The issue actually happened to my Mac tonight, and sure enough the
> traceroute dies at the same time.  So, it isn't just the Windows
> machines impacted.
>
> I did a packet capture on my end, and on a server somewhere that I
> control and sent pings from my laptop to the server.
>
> The server received my ICMP packets and responded, but those responses
> never made it back to my laptop.
>
> Meanwhile, my Roku is actively streaming from the Internet, so it's not
> like the Internet was down.
>



-- 
:o@>


Re: Consumer networking head scratcher

2017-03-01 Thread Ryan Pugatch


On Wed, Mar 1, 2017, at 06:35 PM, Jean-Francois Mezei wrote:
> On 2017-03-01 11:28, Ryan Pugatch wrote:
> 
> > At random times, my Windows machines (Win 7 and Win 10, attached to the
> > network via WiFi, 5GHz) lose connectivity to the Internet. 
> 
> > For what it's worth, the router is a Linksys EA7300 that I just picked
> > up.
> 
> 
> Way back when, I had a Netgear router. It ended up having a limit on its
> NAT translation table, and when I had too many connections going at the same
> time (or not yet timed out), I would lose connectivity. There was an
> unofficial patch to the firmware (literally a patch in the code that
> defined the table size) to increase that table to 1000, as I recall.
> 
> Does the Linksys have a means to display the NAT translation table and
> see if maybe connections are lost when that table is full and lots of
> connections have not yet timed out?
> 


It doesn't seem to provide visibility into the NAT tables.  However, I'm
starting to think you might be on to something.

The issue actually happened to my Mac tonight, and sure enough the
traceroute dies at the same time.  So, it isn't just the Windows
machines impacted.

I did a packet capture on my end, and on a server somewhere that I
control and sent pings from my laptop to the server.

The server received my ICMP packets and responded, but those responses
never made it back to my laptop.

Meanwhile, my Roku is actively streaming from the Internet, so it's not
like the Internet was down.


Re: SHA1 collisions proven possible

2017-03-01 Thread James DeVincentis via NANOG
Keep in mind botnets that large consist largely of IoT devices, which have 
very little processing power compared to the massive multi-core, 
high-frequency, high-memory-bandwidth (especially important for cryptographic 
operations) CPUs in data centers. It doesn’t take much processing power to 
launch DDoS attacks, which is why IoT is perfect for botnets. Those botnets 
that do contain desktop-grade systems are typically made up of older machines 
that go unpatched and do not have high-end server CPUs or GPUs. A botnet is 
also not going to get you the high-end GPUs you need for phase 2. Generally, 
the people with hardcore GPUs are gamers and workstation users who push those 
GPUs; they’re going to notice their GPUs being utilized abnormally. 

On top of that, the calculations they did were for a stupidly simple document 
modification in a type of document where hiding extraneous data is easy. This 
will get exponentially computationally more expensive the more data you want to 
mask. It took nine quintillion computations in order to mask a background color 
change in a PDF.

And again, the main counter-point is being missed. Both the good and bad 
documents have to be brute forced, which largely defeats the purpose. Those 
numbers of computing hours are a brute force. It may be a simplified brute 
force, but still a brute force. 

The hype being generated is causing management at many places to cry exactly 
what Google wanted, “Wolf! Wolf!”.

> On Mar 1, 2017, at 6:22 PM, valdis.kletni...@vt.edu wrote:
> 
> On Wed, 01 Mar 2017 15:28:23 -0600, "james.d--- via NANOG" said:
> 
>> Those statistics are nowhere near real world for ROI. You'd have to invest
>> at least 7 figures (USD) in resources. So the return must be millions of
>> dollars before anyone can detect the attack. Except, it's already
>> detectable.
> 
> *Somebody* has to invest 7 figures in resources.  Doesn't have to be you.
> 
> Remember that if you have access to a 1M node botnet, you could have
> 56,940,000 hours of CPU time racked up in... under 60 hours.
> 



Re: SHA1 collisions proven possible

2017-03-01 Thread valdis . kletnieks
On Wed, 01 Mar 2017 15:28:23 -0600, "james.d--- via NANOG" said:

> Those statistics are nowhere near real world for ROI. You'd have to invest
> at least 7 figures (USD) in resources. So the return must be millions of
> dollars before anyone can detect the attack. Except, it's already
> detectable.

*Somebody* has to invest 7 figures in resources.  Doesn't have to be you.

Remember that if you have access to a 1M node botnet, you could have 56,940,000
hours of CPU time racked up in... under 60 hours.





Re: Consumer networking head scratcher

2017-03-01 Thread Jean-Francois Mezei
On 2017-03-01 11:28, Ryan Pugatch wrote:

> At random times, my Windows machines (Win 7 and Win 10, attached to the
> network via WiFi, 5GHz) lose connectivity to the Internet. 

> For what it's worth, the router is a Linksys EA7300 that I just picked
> up.


Way back when, I had a Netgear router. It ended up having a limit on its
NAT translation table, and when I had too many connections going at the same
time (or not yet timed out), I would lose connectivity. There was an
unofficial patch to the firmware (literally a patch in the code that
defined the table size) to increase that table to 1000, as I recall.

Does the Linksys have a means to display the NAT translation table and
see if maybe connections are lost when that table is full and lots of
connections have not yet timed out?
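
The failure mode is easy to model: a fixed-size table where new flows fail
once it fills, while established flows keep working. A toy Python sketch
(the table size and timeout are made-up numbers, not the EA7300's actual
limits):

    import time

    TABLE_SIZE = 1024   # hypothetical limit
    TIMEOUT = 300       # seconds before an idle entry is reclaimed

    table = {}          # flow 5-tuple -> last-seen timestamp

    def translate(flow):
        now = time.time()
        for k in [k for k, t in table.items() if now - t > TIMEOUT]:
            del table[k]          # expire idle entries
        if flow in table or len(table) < TABLE_SIZE:
            table[flow] = now
            return True           # packet forwarded, entry refreshed
        return False              # table full: new flow silently dropped

Note how this matches the symptoms elsewhere in the thread: an established
stream (the Roku) keeps running while new connections die.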



RE: SHA1 collisions proven possible

2017-03-01 Thread james.d--- via NANOG
> The what?  RFC5280 does not contain the string "finger".

The fingerprint (or thumbprint) is the hash (sha1/sha256) of the certificate
data in DER format, it's not part of the actual certificate. The fingerprint
is largely used in the security and development community in order to
quickly identify a unique certificate. Application developers (See: Google,
Microsoft, Apple, etc) also hard-code fingerprints into applications to
defend against anyone attempting to MITM the traffic (for obscurity or
security purposes). 

Fingerprints are used instead of serial numbers since two CAs can issue two
certificates with the same serial number. It's also the fastest way to
determine the identity of a certificate without examining individual fields
in the certificate. 
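
That computation is easy to reproduce. A minimal Python sketch (fetching
google.com's certificate here is just an example target; any DER-encoded
certificate works):

    import hashlib
    import ssl

    pem = ssl.get_server_certificate(("google.com", 443))  # PEM text
    der = ssl.PEM_cert_to_DER_cert(pem)                    # raw DER bytes
    print("sha1:  ", hashlib.sha1(der).hexdigest())
    print("sha256:", hashlib.sha256(der).hexdigest())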

> The CA doesn't "change" the serial number (a CSR doesn't have a place to
even ask for a serial), they pick one, and while it's *supposed* to be at
least partially random, given the largely appalling state of CA operations
(and, even worse, the competence of the auditors who are supposed to be
making sure they're doing the right thing), I'd be awfully surprised if
there wasn't at least one CA in a commonly-used trust store which was
issuing certificates with predictable serial numbers.

Predictable serial numbers still wouldn't help you here, and certificates
contain multiple unique identifiers. There's a massive brute-force component
to this attack as well: both the "good" and the "bad" certificate would have
to be brute forced. Let's also remember the ONLY example of this so far is
PDF documents, where massive amounts of data can be hidden in order to
manipulate the hashes. This isn't the case with certificates. 

On the subject of the example documents: the documents given are
unbelievably basic in their differing appearances, and the attack is
_easily_ detected.  

> Except all the ones that the payment industry (there's a group with no
stake in good security, huh?) have managed to convince browsers to allow
(thankfully, they get a good counter-cryptanalysis over them first), and all
the ones that have been issued "by mistake" to inconsequential organizations
like, say, HMRC (which just appear in CT logs, and the vigilance of the
community finds and brings to the attention of trust stores).

Again, this attack doesn't work on any existing arbitrary item and is easily
detected. So any existing item is safe until a preimage attack is found. 

The sky is not falling. The most this will affect is generation of unique
identifiers (which is not security related) using the SHA1 algorithm. This
has already been seen when trying to commit both of the example PDF
documents to a git repository. 

This whole situation is being blown way out of proportion and significantly
oversimplified. This is a PR stunt by Google to keep to their timeline from
when they cried the sky was falling years ago about SHA1
(https://security.googleblog.com/2014/09/gradually-sunsetting-sha-1.html). 

Nine quintillion (9,223,372,036,854,775,808) SHA1 computations in total.
6,500 years of CPU computation to complete the attack's first phase =
56,940,000 hours of CPU time.
110 years of GPU computation to complete the second phase = 963,600 hours of
GPU time.
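
A quick sanity check of those conversions (8,760 hours per non-leap year;
the quintillion figure is exactly 2^63):

    print(6500 * 8760)   # 56940000 -> 56,940,000 CPU-hours
    print(110 * 8760)    # 963600   -> 963,600 GPU-hours
    print(2 ** 63)       # 9223372036854775808 SHA1 computations

So the quoted figures are internally consistent.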

Those statistics are nowhere near real world for ROI. You'd have to invest
at least 7 figures (USD) in resources. So the return must be millions of
dollars before anyone can detect the attack. Except, it's already
detectable. 

Google nullified their point of demonstrating the attack by showing it was
easily detectable.

-Original Message-
From: NANOG [mailto:nanog-boun...@nanog.org] On Behalf Of Matt Palmer
Sent: Wednesday, March 1, 2017 1:34 PM
To: nanog@nanog.org
Subject: Re: SHA1 collisions proven possisble

On Tue, Feb 28, 2017 at 01:16:23PM -0600, James DeVincentis via NANOG wrote:
> The CA signing the cert actually changes the fingerprint

The what?  RFC5280 does not contain the string "finger".

> (and serial number, which is what is checked on revocation lists)

The CA doesn't "change" the serial number (a CSR doesn't have a place to
even ask for a serial), they pick one, and while it's *supposed* to be at
least partially random, given the largely appalling state of CA operations
(and, even worse, the competence of the auditors who are supposed to be
making sure they're doing the right thing), I'd be awfully surprised if
there wasn't at least one CA in a commonly-used trust store which was
issuing certificates with predictable serial numbers.

> Beyond that, SHA1 signing of certificates has long been deprecated and 
> no new public CAs will sign a CSR and cert with SHA1.

Except all the ones that the payment industry (there's a group with no stake
in good security, huh?) have managed to convince browsers to allow
(thankfully, they get a good counter-cryptanalysis over them first), and all
the ones that have been issued "by mistake" to inconsequential organisations
like, say, HMRC (which just appear in CT logs, and the 

Re: Consumer networking head scratcher

2017-03-01 Thread Ryan Pugatch


On Wed, Mar 1, 2017, at 03:58 PM, iam...@gmail.com wrote:
> On many non-Windows OSes (Mac OSX, Linux, FreeBSD, etc.) you can specify
> ICMP traceroute using -I:
> 
> traceroute -I google.com
> 
> I wonder if this would replicate your experience with Windows tracert.


Definitely on my list to test.

Thanks.


Re: Consumer networking head scratcher

2017-03-01 Thread iam...@gmail.com
On many non-Windows OSes (Mac OSX, Linux, FreeBSD, etc.) you can specify ICMP
traceroute using -I:

traceroute -I google.com

I wonder if this would replicate your experience with Windows tracert.


Re: Consumer networking head scratcher

2017-03-01 Thread Ryan Pugatch


On Wed, Mar 1, 2017, at 02:57 PM, William Herrin wrote:
> On Wed, Mar 1, 2017 at 2:31 PM, Ryan Pugatch  wrote:
> > So in that case, I would be back to my original issue where I stop being
> > able to pass traffic to the Internet, and when that happens my
> > traceroute always dies at the same hop.  After disconnecting and
> > reconnecting, the same traceroute will go all the way through.
> 
> Hi Ryan,
> 
> Next step: run Wireshark and see what you see during the traceroutes.
> Are they leaving with a reasonable TTL? Is it certain that nothing
> returns? Are the packets going to the ethernet MAC address you expect
> them to?
> 
> I had a fun problem once when I cloned some VMs but neglected to
> change the source MAC address. They all seemed to work under light
> load but get two downloading at once and suddenly they both
> experienced major packet loss.
> 
> Regards,
> Bill
> 

Definitely the direction I'm going.  Even aside from the traceroutes,
I'm going to capture some regular web traffic to see what is happening. 
Planning to send traffic to a machine I control to see if any packets
are actually making it through at all.

I'm not sure if this new Linksys router has any packet capture ability
that is exposed to the end user, but I'd also love to be able to see what's
actually going through the router itself.

Thanks,
Ryan


Re: Consumer networking head scratcher

2017-03-01 Thread William Herrin
On Wed, Mar 1, 2017 at 2:31 PM, Ryan Pugatch  wrote:
> So in that case, I would be back to my original issue where I stop being
> able to pass traffic to the Internet, and when that happens my
> traceroute always dies at the same hop.  After disconnecting and
> reconnecting, the same traceroute will go all the way through.

Hi Ryan,

Next step: run Wireshark and see what you see during the traceroutes.
Are they leaving with a reasonable TTL? Is it certain that nothing
returns? Are the packets going to the ethernet MAC address you expect
them to?

I had a fun problem once when I cloned some VMs but neglected to
change the source MAC address. They all seemed to work under light
load but get two downloading at once and suddenly they both
experienced major packet loss.

Regards,
Bill



-- 
William Herrin  her...@dirtside.com  b...@herrin.us
Owner, Dirtside Systems . Web: 


Re: SHA1 collisions proven possible

2017-03-01 Thread Matt Palmer
On Tue, Feb 28, 2017 at 01:16:23PM -0600, James DeVincentis via NANOG wrote:
> The CA signing the cert actually changes the fingerprint

The what?  RFC5280 does not contain the string "finger".

> (and serial number, which is what is checked on revocation lists)

The CA doesn't "change" the serial number (a CSR doesn't have a place to
even ask for a serial), they pick one, and while it's *supposed* to be at
least partially random, given the largely appalling state of CA operations
(and, even worse, the competence of the auditors who are supposed to be
making sure they're doing the right thing), I'd be awfully surprised if
there wasn't at least one CA in a commonly-used trust store which was
issuing certificates with predictable serial numbers.

> Beyond that, SHA1 signing of certificates has long been deprecated and no
> new public CAs will sign a CSR and cert with SHA1.

Except all the ones that the payment industry (there's a group with no stake
in good security, huh?) have managed to convince browsers to allow
(thankfully, they get a good counter-cryptanalysis over them first), and all
the ones that have been issued "by mistake" to inconsequential organisations
like, say, HMRC (which just appear in CT logs, and the vigilance of the
community finds and brings to the attention of trust stores).

- Matt

-- 
 I remember going to my first tutorial in room 404. I was most upset
when I found it.



Re: Consumer networking head scratcher

2017-03-01 Thread Ryan Pugatch


On Wed, Mar 1, 2017, at 02:04 PM, William Herrin wrote:
> > On Wed, Mar 1, 2017, at 01:23 PM, Aaron Gould wrote:
> >> That's strange... it's like the TTL on all Windows IP packets is
> >> decrementing more and more as time goes on, causing you to get fewer and
> >> fewer hops into the internet
> 
> Hi Ryan,
> 
> Windows tracert uses ICMP echo-request packets to trace the path. It
> expects either an ICMP destination unreachable message or an ICMP echo
> response message to come back. The final hop in the trace will return
> an ICMP echo-response or an unreachable-prohibited. The ones prior to
> the final hop will return an unreachable-time-exceeded if they return
> anything at all.
> 
> If the destination does not respond to ping, if those pings are
> dropped, or if it responds with an unreachable that's dropped you will
> not receive a response and the tracert will not find its end. That's
> why you're seeing the "decrementing" behavior you describe.
> 
> I have no information about whether comcast blocks pings to its routers.
> 
> Regards,
> Bill Herrin
> 

I see what you're saying, and that could explain the decrementing
behavior I'm seeing, which ultimately is not a real indicator of the
problem I am having.

So in that case, I would be back to my original issue where I stop being
able to pass traffic to the Internet, and when that happens my
traceroute always dies at the same hop.  After disconnecting and
reconnecting, the same traceroute will go all the way through.

Thanks for the thoughts.


Re: Consumer networking head scratcher

2017-03-01 Thread valdis . kletnieks
On Wed, 01 Mar 2017 14:04:07 -0500, William Herrin said:

> I have no information about whether comcast blocks pings to its routers.

All the Comcast gear in the path from my home router to non-Comcast addresses
will quite cheerfully answer both pings and traceroutes, subject to
rate-limiting.




Re: Consumer networking head scratcher

2017-03-01 Thread William Herrin
> On Wed, Mar 1, 2017, at 01:23 PM, Aaron Gould wrote:
>> That's strange... it's like the TTL on all Windows IP packets is
>> decrementing more and more as time goes on, causing you to get fewer and
>> fewer hops into the internet

Hi Ryan,

Windows tracert uses ICMP echo-request packets to trace the path. It
expects either an ICMP destination unreachable message or an ICMP echo
response message to come back. The final hop in the trace will return
an ICMP echo-response or an unreachable-prohibited. The ones prior to
the final hop will return an unreachable-time-exceeded if they return
anything at all.

If the destination does not respond to ping, if those pings are
dropped, or if it responds with an unreachable that's dropped you will
not receive a response and the tracert will not find its end. That's
why you're seeing the "decrementing" behavior you describe.
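
For anyone who wants to watch this happen, a rough Python sketch of the same
probe logic using scapy (assumptions: scapy is installed and the script has
raw-socket privileges; this is an illustration, not Windows' actual
implementation):

    from scapy.all import IP, ICMP, sr1

    def trace(dst, max_hops=16):
        for ttl in range(1, max_hops + 1):
            reply = sr1(IP(dst=dst, ttl=ttl) / ICMP(), timeout=2, verbose=0)
            if reply is None:
                print(ttl, "*")             # probe or reply was dropped
            elif reply[ICMP].type == 0:     # echo-reply: reached the target
                print(ttl, reply.src, "(done)")
                return
            else:                           # time-exceeded (or other ICMP)
                print(ttl, reply.src)

    trace("8.8.8.8")

A hop that returns time-exceeded for transit probes but ignores echo-requests
addressed to itself would produce exactly the "dies one hop earlier" pattern
described in this thread.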

I have no information about whether comcast blocks pings to its routers.

Regards,
Bill Herrin



-- 
William Herrin  her...@dirtside.com  b...@herrin.us
Owner, Dirtside Systems . Web: 


Re: Consumer networking head scratcher

2017-03-01 Thread Ryan Pugatch
The issue doesn't happen with my previous router, and I've tested
multiple computers (one of which isn't mine).

It doesn't seem like it decrements over time; it just dies sooner as I
trace further up the path.  I can consistently die at the 7th hop if I
try to go to Google, but if I trace to the 6th hop, it'll die at the 5th
hop!


On Wed, Mar 1, 2017, at 01:23 PM, Aaron Gould wrote:
> That's strange... it's like the TTL on all Windows IP packets is
> decrementing more and more as time goes on, causing you to get fewer and
> fewer hops into the internet
> 
> I wonder if it's a bug/virus/malware affecting only your windows
> computers.
> 
> -Aaron
> 
> 


Consumer networking head scratcher

2017-03-01 Thread Ryan Pugatch
Hi everyone,

I've got a real head scratcher that I have come across after replacing
the router on my home network.

I thought I'd share because it is a fascinating issue to me.

At random times, my Windows machines (Win 7 and Win 10, attached to the
network via WiFi, 5GHz) lose connectivity to the Internet.  They can
continue to access internal resources, such as the router's admin
interface.  Other devices including Macs, iPhones, Android phones, and
Rokus never have this issue.

I realized that on the Windows machines, when the connection drops, if I
run a traceroute, it dies at a certain hop every time (out in Comcast's
network, which is my ISP) even though a Mac sitting right next to it is
able to go all the way through to the destination.

The even stranger thing I discovered last night is that if I trace to
the hop before the hop that it dies at, it then dies at the hop before
that (and as I trace to closer and closer hops, it dies at the hop before
that!)

This is illustrated in the traces I've captured here:
http://pastebin.com/raw/R1UHLi0U

For what it's worth, the router is a Linksys EA7300 that I just picked
up.

I can't even imagine what would cause this issue at this point.  If
anyone has any thoughts, I'd love to hear them!

I'm going to start studying some packet captures to see if I can spot an
issue.

Best,
Ryan


Re: SHA1 collisions proven possible

2017-03-01 Thread James DeVincentis via NANOG
The CA signing the cert actually changes the fingerprint (and serial number, 
which is what is checked on revocation lists), so this is not a viable 
scenario. Beyond that, SHA1 signing of certificates has long been deprecated 
and no new public CAs will sign a CSR and cert with SHA1.

> On Feb 27, 2017, at 8:18 AM, Chris Adams  wrote:
> 
> Once upon a time, valdis.kletni...@vt.edu  said:
>> There's only 2 certs.  You generate 2 certs with the same hash, and *then* 
>> get
>> the CA to sign one of them.
> 
> The point is that the signed cert you get back from the CA will have a
> different hash, and the things that they change that cause the hash to
> change are outside your control and prediction.
> 
> -- 
> Chris Adams 


Even with massive computing power, the tampering is still detectable, since 
this attack does not allow for the creation of a hash collision against an 
arbitrary existing document. It requires specific manipulation of both inputs 
to produce the collision.

> On Feb 27, 2017, at 7:39 AM, valdis.kletni...@vt.edu wrote:
> 
> On Mon, 27 Feb 2017 07:23:43 -0500, Jon Lewis said:
>> On Sun, 26 Feb 2017, Keith Medcalf wrote:
>> 
>>> So you would need 6000 years of computer time to compute the collision
>>> on the SHA1 signature, and how much additional time to compute the
>>> trapdoor (private) key, in order for the cert to be of any use?
>> 
>> 1) Wasn't the 6000 years estimate from an article >10 years ago?
>> Computers have gotten a bit faster.
> 
> No, Google's announcement last week said their POC took 6500 CPU-years
> for the first phase and 110 GPU-accelerated years for the second phase.
> 
> You are totally on target on your second point.  A million node botnet
> reduces it to right around 60 hours.




Research project and survey: Network filtering and IP spoofing

2017-03-01 Thread Franziska Lichtblau
Hi,

We are a team of researchers from TU Berlin [1] working on a measurement project
to assess the ramifications of traffic with spoofed source IP addresses in the
Internet.

To better understand the operational challenges that you as network operators
face when deploying (or not deploying) source IP address filtering techniques,
we'd like to invite you to participate in our survey.

If you could spare 5 minutes of your time, we'd be delighted if you could fill
out our survey form and tell us about your current practices regarding network
filtering.

To participate, please visit:
[2] http://filteringsurvey.inet.tu-berlin.de/

If you have any concerns or questions, you can reply on-list or contact us via
[3] filtering-sur...@inet.tu-berlin.de. We will only publish anonymized results
of this study, and once we've analyzed your feedback we'll publish a digest of
the results on-list, if you're interested.

As you are probably subscribed to other network operator lists, you might
encounter this mail multiple times. We apologize for cross-posting, but in
order to get results that will give us meaningful insights we need the widest
coverage we can get. 

Thank you very much for your support!

Franziska Lichtblau

[1] www.inet.tu-berlin.de
[2] http://filteringsurvey.inet.tu-berlin.de/
[3] filtering-sur...@inet.tu-berlin.de

-- 
Franziska Lichtblau, M.A.   building MAR, 4th floor, room 4.004
Fachgebiet INET - Sekr. MAR 4-4  phone: +49 30 314 757 33
Technische Universität Berlin   gpg-fp: 4FA0 F1BC 8B9A 7F64 797C
Marchstrasse 23 - 10587 Berlin  221C C6C6 2786 91EC 5CD5




Re: IRR database for local usage

2017-03-01 Thread Job Snijders
On Wed, Mar 01, 2017 at 10:49:07AM +, Nagarjun Govindraj via NANOG wrote:
> Is it possible to maintain an IRR database locally for querying route
> objects from various RIRs and do a regular sync, like what the RPKI validator
> does for ROAs?

IRRExplorer's database is available as a JSON blob; if you are only
interested in route objects & as-sets, this might be of use to you.
IRRExplorer talks NRTM with various databases, and these dumps are
refreshed every few minutes.

wget http://irrexplorer.nlnog.net/static/dumps/irrexplorer-routes.json.bz2
wget http://irrexplorer.nlnog.net/static/dumps/irrexplorer-as_sets.json.bz2
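
To poke at one of these dumps without standing up an IRR daemon, a
stdlib-only Python sketch (I make no assumption about the JSON's internal
structure; inspect it before relying on it):

    import bz2
    import json
    import urllib.request

    URL = ("http://irrexplorer.nlnog.net/static/dumps/"
           "irrexplorer-routes.json.bz2")

    with urllib.request.urlopen(URL) as resp:
        routes = json.loads(bz2.decompress(resp.read()))

    print(type(routes))   # check the shape before building on it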

Kind regards,

Job


Re: IRR database for local usage

2017-03-01 Thread Rubens Kuhl
Yeap. If you look at http://irr.net/docs/list.html, all of them list FTP
sites where you can get all the information in bulk, load it into your IRR
daemon, and have fast look-ups over all that data.


Rubens




On Wed, Mar 1, 2017 at 7:49 AM, Nagarjun Govindraj via NANOG <
nanog@nanog.org> wrote:

> Hi nanog,
>
> Is it possible to maintain an IRR database locally for querying route
> objects from various RIRs and do a regular sync, like what the RPKI validator
> does for ROAs?
>
> - Nagarjun
>


IRR database for local usage

2017-03-01 Thread Nagarjun Govindraj via NANOG
Hi nanog,

Is it possible to maintain an IRR database locally for querying route
objects from various RIRs and do a regular sync, like what the RPKI validator
does for ROAs?

- Nagarjun