Re: Yahoo DMARC breakage

2014-04-10 Thread Michael Thomas

On 04/09/2014 06:04 PM, Miles Fidelman wrote:


Especially after reading some of the discussions on the DMARC mailing 
list where it's clear that issues of breaking mailing lists were 
explicitly ignored and dismissed.


There's been 10 years of ostrichism about policy and mailing lists, especially from the crowd who were against ADSP until they were for it.

Mike



Re: Fwd: Serious bug in ubiquitous OpenSSL library: Heartbleed

2014-04-08 Thread Michael Thomas
Just as a data point, I checked the servers I run and it's a good thing I didn't reflexively update them first. On CentOS 6.0, the default openssl is 1.0.0, which supposedly doesn't have the vulnerability, but the ones queued up for update do. I assume that Red Hat will ship the patched version soon, but be careful!
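A quick way to sanity-check a box is to ask which OpenSSL your tooling is actually linked against; a hedged sketch (Python, stdlib only; it reports the library the interpreter links, which may differ from what your daemons use, so treat it as a hint rather than an audit):

    # Report the OpenSSL version this Python is linked against and whether
    # it falls in the Heartbleed-affected range (1.0.1 through 1.0.1f).
    import ssl

    version = ssl.OPENSSL_VERSION            # e.g. "OpenSSL 1.0.1e-fips 11 Feb 2013"
    print("Linked against:", version)

    affected = ["1.0.1" + suffix for suffix in ["", "a", "b", "c", "d", "e", "f"]]
    parts = version.split()
    base = parts[1].split("-")[0] if len(parts) > 1 else ""
    if base in affected:
        print("Potentially vulnerable to Heartbleed (CVE-2014-0160); patch and verify.")
    else:
        print("Not in the 1.0.1-1.0.1f range, but check the system packages too.")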


Mike

On 04/07/2014 10:06 PM, Paul Ferguson wrote:


I'm really surprised no one has mentioned this here yet...

FYI,

- - ferg



Begin forwarded message:


From: Rich Kulawiec r...@gsp.org
Subject: Serious bug in ubiquitous OpenSSL library: Heartbleed
Date: April 7, 2014 at 9:27:40 PM EDT

This reaches across many versions of Linux and BSD and, I'd
presume, into some versions of operating systems based on them.
OpenSSL is used in web servers, mail servers, VPNs, and many other
places.

Writeup: Heartbleed: Serious OpenSSL zero day vulnerability
revealed
http://www.zdnet.com/heartbleed-serious-openssl-zero-day-vulnerability-revealed-728166/

  Technical details: Heartbleed Bug http://heartbleed.com/

OpenSSL versions affected (from link just above):
  OpenSSL 1.0.1 through 1.0.1f (inclusive) are vulnerable
  OpenSSL 1.0.1g is NOT vulnerable (released today, April 7, 2014)
  OpenSSL 1.0.0 branch is NOT vulnerable
  OpenSSL 0.9.8 branch is NOT vulnerable



-- 
Paul Ferguson

VP Threat Intelligence, IID
PGP Public Key ID: 0x54DC85B2





Re: misunderstanding scale

2014-03-24 Thread Michael Thomas

On 03/24/2014 09:20 AM, William Herrin wrote:

On Mon, Mar 24, 2014 at 3:00 AM, Karl Auer ka...@biplane.com.au wrote:

Addressable is not the same as
accessible; routable is not the same as routed.

Indeed. However, all successful security is about _defense in depth_.
If it is inaccessible, unrouted, unroutable and unaddressable then you
have four layers of security. If it is merely inaccessible and
unrouted you have two.




A distinction without a difference, IMHO. Either I can send you an 
incoming SYN or I can't.


The real battle here, IMHO, is to get the next-gen CPE vendors to do the right thing. NANOG folks ought to be keeping tabs on the Homenet working group and then DEMAND that any CPE support its security, etc., baselines.

Mike



Re: misunderstanding scale

2014-03-24 Thread Michael Thomas

On 3/24/14 10:08 AM, William Herrin wrote:

On Mon, Mar 24, 2014 at 12:28 PM, Michael Thomas m...@mtcc.com wrote:

On 03/24/2014 09:20 AM, William Herrin wrote:

On Mon, Mar 24, 2014 at 3:00 AM, Karl Auer ka...@biplane.com.au wrote:

Addressable is not the same as
accessible; routable is not the same as routed.

Indeed. However, all successful security is about _defense in depth_.
If it is inaccessible, unrouted, unroutable and unaddressable then you
have four layers of security. If it is merely inaccessible and
unrouted you have two.

A distinction without a difference, IMHO. Either I can send you an incoming
SYN or I can't.

Hi Mike,

You can either press the big red button and fire the nukes or you
can't, so what difference how many layers of security are involved
with the Football?

I say this with the utmost respect, but you must understand the
principle of defense in depth in order to make competent security
decisions for your organization. Smart people disagree on the details
but the principle is not only iron clad, it applies to all forms of
security, not just IP network security.




The point here is that your depth is the same with or without NAT. The act of address translation does not alter routability; it's the firewall rules that say no incoming SYNs without existing connection state, etc. That is, and always has been, the business end of firewalls.

The other thing about v6 is that counting on addressability in any way, shape or form is a fool's errand: hosts want desperately to number their interfaces with whatever GUAs they can, given RAs, etc. So you may think you're only giving out ULAs, but I wouldn't count on that from a security perspective. v6 is not like DHCPv4 even a little in that respect: if the hosts can get a GUA, they will configure it and use it.

Mike





Re: misunderstanding scale

2014-03-24 Thread Michael Thomas

On 3/24/14 10:37 AM, valdis.kletni...@vt.edu wrote:

On Mon, 24 Mar 2014 13:13:43 -0400, William Herrin said:


You'd expect folks to give up two layers of security at exactly the
same time as they're absorbing a new network protocol with which
they're yet unskilled? Does that make sense to you from a
risk-management standpoint?

The problem is that the two layers of security that they're giving up
are made from the same fabric as the Emperor's new clothes

Made of neutrinos, which nobody is entirely sure have mass.

Mike



Re: misunderstanding scale

2014-03-24 Thread Michael Thomas

On 03/24/2014 06:05 PM, Owen DeLong wrote:


So ULA the printers (if you must).

That doesn’t create a need for ULA on anything that talks to the internet, nor 
does it create a requirement to do NPT or NAT66.



From a security perspective, I wouldn't trust my printer not to number itself with a GUA. Unlike v4 with DHCP, any kind of glitch causing leakage of RAs bearing global prefixes (I'm sure there is a Greek tragedy written about this) will cause it to number the interface with that prefix. You can argue that's misconfiguration and I wouldn't disagree, but it's just way too easy for the (printer) host to do, and it wouldn't be very apparent to anything but the host (printer).
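A crude way to see whether a given host has quietly picked up a global address is just to enumerate its interfaces; a sketch (Python, using the third-party psutil package plus the stdlib ipaddress module; it inspects the host it runs on, so you'd run it on the device in question rather than across the network):

    # List this host's IPv6 addresses and flag any globally-scoped ones,
    # i.e. not link-local fe80::/10, not ULA fc00::/7, not loopback.
    import socket
    import ipaddress
    import psutil  # third-party: pip install psutil

    for ifname, addrs in psutil.net_if_addrs().items():
        for a in addrs:
            if a.family != socket.AF_INET6:
                continue
            addr = ipaddress.ip_address(a.address.split("%")[0])  # drop scope id
            if addr.is_link_local or addr.is_private or addr.is_loopback:
                continue
            print("%s has a global address: %s" % (ifname, addr))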

I'm not entirely sure what the whole answer is to this. We're still talking about raw IP addresses here, so somebody would have to know the GUA the printer numbered itself with. Naming autodiscovery doesn't currently traverse subnets, though homenet and others are trying to relax that. Some sort of logic like "if I can't add my address to DNS then don't listen to incoming requests on my GUA" might be helpful, but as I said... people interested in this really should pay attention to the homenet working group, which is charged, for better or worse, with sorting a lot of this out.

Mike



Re: misunderstanding scale

2014-03-23 Thread Michael Thomas

[]

It seems to me that the only thing that really matters in the v6 wars for enterprise is whether their content side has a v6 face. Who really cares whether they migrate away from v4 so long as they make their outward-facing content (e.g., web) available over v6? That's really the key.


Mike



Re: spamassassin

2014-02-18 Thread Michael Thomas

On 02/18/2014 05:52 PM, Randy Bush wrote:

in the last 3-4 days, a *massive* amount of spam is making it past
spamassassin to my users and to me.  see appended for example.  not
all has dkim.




It's been a while since I've been in this world, but I wonder whether Bayes filters are using the public key of the DKIM selector as a token. If they don't change selectors/keys, they'd probably be s-canned pretty quickly. It would require that the DKIM subsystem talk to the Bayes subsystem, since the public key isn't in the signature, so I'm guessing not.
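For what it's worth, fetching the selector's public key out of DNS is the easy part; a hedged sketch (Python with the third-party dnspython package; the selector and domain are placeholders you'd pull from a DKIM-Signature header's s= and d= tags):

    # Fetch the DKIM public key record for a selector/domain pair, as a
    # Bayes engine would need to do if it wanted to use the key as a token.
    # Sketch only: dnspython 2.x spells this resolve(); older versions use query().
    import dns.resolver  # third-party: pip install dnspython

    selector = "selector1"        # from the s= tag of a DKIM-Signature header
    domain = "example.com"        # from the d= tag

    name = "%s._domainkey.%s" % (selector, domain)
    for rr in dns.resolver.resolve(name, "TXT"):
        txt = b"".join(rr.strings).decode("ascii", "replace")
        print(txt)                # something like "v=DKIM1; k=rsa; p=MIGfMA0GCSq..."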

Mike



Re: NSA able to compromise Cisco, Juniper, Huawei switches

2013-12-30 Thread Michael Thomas

On 12/30/2013 08:03 AM, Dobbins, Roland wrote:

On Dec 30, 2013, at 10:44 PM, valdis.kletni...@vt.edu 
valdis.kletni...@vt.edu wrote:


What percentage of Cisco gear that supports a CALEA lawful intercept mode is installed in situations where CALEA doesn't apply, and thus there's a high likelihood that said support is misconfigured and abusable without being noticed?

AFAIK, it must be explicitly enabled in order to be functional.  It isn't the 
sort of thing which is enabled by default, nor can it be enabled without making 
explicit configuration changes.




Also, the way that things are integrated, it's usually an explicit decision to pull a piece of functionality in rather than inheriting it. Product managers don't willingly want to waste time pulling things in that a) don't make them money, and b) require support. So I doubt very seriously that CALEA functionality is accidentally included in inappropriate things. Doubly so because of the performance implications.

Mike



Re: Caps (was Re: ATT UVERSE Native IPv6, a HOWTO)

2013-12-06 Thread Michael Thomas

On 12/06/2013 05:54 AM, Mark Radabaugh wrote:


I realize most of the NANOG operators are not running end user 
networks anymore.   Real consumption data:


Monthly GB    Count   Percent
< 100          3658   90%
100-149         368   10%
150-199         173   4.7%
200-249          97   2.6%
250-299          50   1.4%
300-399          27   0.7%
400-499           9   0.25%
500-599           4   0.1%
600-699           4   0.1%
700-799           3   0.1%
800+              1   0.03%

Overall average:  36GB/mo


The user at 836GB per month is on a 3.5Mbps plan paying $49.95/mo. Do we do anything about it? No - because our current AUP and policies say he can do that.




Thanks for the stats, real life is always refreshing :)

It seems to me -- all things being equal -- that the real question is whether Mr. Hog is impacting your other users. If he's not, then what difference does it make if he consumes the bits, or if the bits over the air are not consumed at all? Is it because of transit costs? That seems unlikely, because Mr. Hog's 800GB is dwarfed by your 3658*36GB (more than two orders of magnitude).

If he is impacting other users, doesn't this devolve into a shaping problem which is there regardless of whether it's him or 4 people at 200GB?

Mike



Re: latest Snowden docs show NSA intercepts all Google and Yahoo DC-to-DC traffic

2013-11-02 Thread Michael Thomas

On 11/01/2013 07:18 PM, Mike Lyon wrote:

So even if Goog or Yahoo encrypt their data between DCs, what stops
the NSA from decrypting that data? Or would it be done simply to make
their lives a bit more of a PiTA to get the data they want?




My bet is that when they said they were partially capable of intercepting things, that means they haven't broken any of the usual suspects in a spectacular way, but instead are using anything they can think of to do what they want to do. So all of the known crypto vulnerabilities, backdoors, break-ins, etc., etc. are added to the partial bucket. And it wouldn't surprise me that that partial is an impressive amount, because so much of internet security is a big old Maginot line.

Mike



Re: Happy Birthday, ARPANET!

2013-10-29 Thread Michael Thomas

On 10/29/2013 07:51 PM, Jay Ashworth wrote:

The Paley Center for Media reminds us that on this day in 1969 at 2230 PST, the first link was turned up between UCLA's Sigma 7 and SRI's 940.




OMG: I didn't know that I've actually worked on one of the net's first machines. Not at that time, but on a Sigma 7 at UCI in the late 70's.

Mike



Re: If you're on LinkedIn, and you use a smart phone...

2013-10-26 Thread Michael Thomas

Chris Hartley wrote:

Anyone who has access to logs for their email infrastructure ought
probably to check for authentications to user accounts from linkedin's
servers.  Likely, people in your organization are entering their
credentials into linkedin to add to their contact list.  Is it a
problem if a social media company has your users' credentials?  I
guess it depends on your definition of is.  The same advice might
apply to this perversion of trust as well, but I'm not sure how
linkedin is achieving this feat.



Heck, it ought to show in the Received headers. Of course, they may purposefully not be adding a Received header, in which case their sleaze factor goes up even more.
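A rough illustration of that check (Python, stdlib only; the filename and the "linkedin" match string are assumptions for the example, and the absence of a hit proves nothing):

    # Scan a saved RFC 822 message and print any Received headers that
    # mention linkedin, which would betray their servers in the path.
    import email

    with open("message.eml", "rb") as fh:
        msg = email.message_from_binary_file(fh)

    for hop in msg.get_all("Received") or []:
        if "linkedin" in hop.lower():
            print("Suspicious hop:", " ".join(hop.split()))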

Mike



Re: If you're on LinkedIn, and you use a smart phone...

2013-10-26 Thread Michael Thomas

Scott Howard wrote:


Have you actually confirmed it's NOT opt-in?  The screenshots on the
Linked-in engineering blog referenced earlier certainly make it look like
it is.

http://engineering.linkedin.com/sites/default/files/intro_installer_0.png

Of course, you could argue there's a difference between opting-in for
enhancing your email with Intro and opting-in for Please MITM all of my
email and dynamic modify it, but that's really just semantics - it
definitely appears to be opt-in.


There's consent and then there's informed consent. Unless they explicitly disclaim that WE CAN AND DO READ EVERY PIECE OF MAIL YOU SEND AND RECEIVE AND USE IT FOR WHATEVER WE WANT, then it isn't informed consent. My guess is that the confirmation dialogs are more along the lines of DO YOU LIKE CUTE KITTENS?

Mike



Re: nanog.org website - restored

2013-10-07 Thread Michael Thomas

On 10/7/13 4:24 PM, Andrew Koch wrote:

Working with onsite personnel to upgrade the server with additional
memory failed during the first announced maintenance.  Compatible memory
was located and tested leading to the second maintenance when it was
successfully installed.

At this time we have increased the memory on the server and are at a
stable point.



How primitive. When I want more memory I just log into the provider's web console and tell it I want more geebees.

Mike



Re: Internet Surveillance and Boomerang Routing: A Call for Canadian Network Sovereignty

2013-09-08 Thread Michael Thomas

On 9/8/13 12:58 AM, Randy Bush wrote:

Quite frankly, all this chatter about technical 'calls to arms' and
whatnot is pointless and distracting (thereby calling into question
the motivations behind continued agitation for technical remedies,
which clearly won't have any effect whatsoever).

cool.  then i presume you will continue to run using rc4 and rsa 1024.
smart folk over there at arbor.



Even if you believe that it's pretty futile to try to protect yourself against 
~$50b,
there's a long tail of others to worry about.

Mike



Re: The US government has betrayed the Internet. We need to take it back

2013-09-06 Thread Michael Thomas

On 09/06/2013 12:14 PM, Eugen Leitl wrote:

On Fri, Sep 06, 2013 at 12:03:56PM -0700, Michael Thomas wrote:

On 09/06/2013 11:19 AM, Nicolai wrote:

That's true -- it is far easier to subvert email than most other
services, and in the case of email we probably need a wholly new
protocol.


Uh, a first step might be to just turn on [START]TLS. We're not using the
tools that have been implemented and deployed for a decade at least.


Of course:

Received: from sc1.nanog.org (sc1.nanog.org [50.31.151.68])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (Client did not present a certificate)


doesn't instill a lot of confidence :) It's better than nothing though.

Mike



Re: The US government has betrayed the Internet. We need to take it back

2013-09-06 Thread Michael Thomas

On 09/06/2013 12:52 PM, Nicolai wrote:

On Fri, Sep 06, 2013 at 12:03:56PM -0700, Michael Thomas wrote:

On 09/06/2013 11:19 AM, Nicolai wrote:

That's true -- it is far easier to subvert email than most other
services, and in the case of email we probably need a wholly new
protocol.


Uh, a first step might be to just turn on [START]TLS. We're not using the
tools that have been implemented and deployed for a decade at least.

Agreed.  Although some people are uncomfortable with OpenSSL's track record,
and don't want to trade system security for better-than-plaintext
network security.

But the deeper issue is coercing providers to give up mail stored on
private servers, bypassing the network altogether.  TLS doesn't address
this problem.  Short term: deploy [START]TLS.  Long term: we need a new
email protocol with E2E encryption.




I'd say we already have those things too, in the form of PGP and S/MIME. Who knows what the NSA can break, but it's just not right to say that we need new protocols. The means to secure email (fsvo 'secure') have been there for many years; it's just that it's not terribly convenient, so for the most part we don't.

Mike



Re: The US government has betrayed the Internet. We need to take it back

2013-09-06 Thread Michael Thomas

On 09/06/2013 11:19 AM, Nicolai wrote:

That's true -- it is far easier to subvert email than most other
services, and in the case of email we probably need a wholly new
protocol.



Uh, a first step might be to just turn on [START]TLS. We're not using the
tools that have been implemented and deployed for a decade at least.
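Even checking whether a given MTA offers STARTTLS is only a few lines; a sketch with the stdlib smtplib (the hostname is a placeholder, and this is a polite probe only):

    # Probe an MX host, report whether it advertises STARTTLS, and
    # negotiate it if offered. Sketch only; mx.example.com is a placeholder.
    import smtplib
    import ssl

    host = "mx.example.com"
    with smtplib.SMTP(host, 25, timeout=15) as s:
        s.ehlo()
        if s.has_extn("starttls"):
            s.starttls(context=ssl.create_default_context())
            s.ehlo()   # re-EHLO over the now-encrypted channel
            print("%s: STARTTLS negotiated, cipher %s" % (host, s.sock.cipher()[0]))
        else:
            print("%s: no STARTTLS offered" % host)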

Mike



Re: Yahoo is now recycling handles

2013-09-05 Thread Michael Thomas

On 09/04/2013 09:17 PM, valdis.kletni...@vt.edu wrote:

On Wed, 04 Sep 2013 20:47:40 -0500, Leo Bicknell said:

There's still the much more minor point that when I tried to self
serve I ended up at a blank page on the Yahoo! web site, hopefully they
will figure that out as well.

I'm continually amazed at the number of web designers that don't test
their pages with NoScript enabled.  Just sayin'.



NoScript users are even less important than IE6 users. Welcome to the long tail of irrelevance.

Mike



Re: Super Space Self Storage : At The Heart of what was to become the epicenter of Silicon Valley.

2013-07-28 Thread Michael Thomas

On 07/28/2013 07:20 AM, jamie rishaw wrote:

http://www.theatlantic.com/technology/archive/13/07/not-even-silicon-valley-escapes-history/277824/


-j


Yeah, that's a fun article. My guess is that in 20 years the current boom in SF will revert to the wild type, and instead of Twitter on mid-Market the Tenderloin will be as it always was, and skinny jeans will no longer fit.
Mike



Re: ARIN WHOIS for leads

2013-07-26 Thread Michael Thomas

On 7/26/13 9:54 AM, Alex Rubenstein wrote:

Case in point.. And I'm going to name drop, but do not consider this a shame.
I have been looking at various filtering technologies, and was looking at Barracuda's site. I went on with my day, but noticed that filtering vendors started showing up on random websites. Fast forward 24 hours later..

You know what I am waiting for?

The LED billboards on the side of the road displaying targeted advertisements, 
based on your proximity to them, because your android phone is telling the sign 
where you are.

Who thinks I am crazy?



Now, if it broadcast the grocery list and all I had to do was blink twice to approve it so that all I had to do was drive through... that I might be able to live with.
Mike



Re: Google's QUIC

2013-06-29 Thread Michael Thomas

On 06/28/2013 09:54 PM, shawn wilson wrote:

On Jun 29, 2013 12:23 AM, Christopher Morrow morrowc.li...@gmail.com
wrote:

On Fri, Jun 28, 2013 at 10:12 PM, Octavio Alvarez
alvar...@alvarezp.ods.org wrote:

On Fri, 28 Jun 2013 17:20:21 -0700, Christopher Morrow
morrowc.li...@gmail.com wrote:


Runs in top of UDP... Is not UDP...

If it has protocol set to 17 it is UDP.


So QUIC is an algorithm instead of a protocol?

it's as much a protocol as http is.. I suppose my point is that it's
some protocol which uses udp as the transport.

Because of this I don't see any (really) kernel/stack changes
required, right? it's just something the application needs to work out
with it's peer(s). No different from http vs smtp...


SCTP was layer 4; if QUIC is the same, then it will require stack changes too. If QUIC is layer 5 and up, it won't. That might be the difference (I haven't looked into QUIC).



From the FAQ, QUIC is an experiment that they're building on top of UDP to test out what a new layer 4 transport protocol, built with modern HTTP in mind, ought to look like. They say that they eventually want to fold the results back into TCP (if possible; but given their goals it doesn't seem especially likely that the result would still be TCP).

Mike



Google's QUIC

2013-06-28 Thread Michael Thomas

http://arstechnica.com/information-technology/2013/06/google-making-the-web-faster-with-protocol-that-reduces-round-trips/?comments=1

Sorry if this is a little more on the dev side, and less on the ops side, but since it's Google, it will almost certainly affect the ops side eventually.

My first reaction to this was why not SCTP, but apparently they think that middle boxen/firewalls make it problematic. That may be, but UDP-based port filtering is probably not far behind on the flaky front.

The second justification was TLS layering inefficiencies. That definitely has my sympathies, as TLS (especially cert exchange) is bloated and the way that it was grafted onto TCP wasn't exactly the most elegant. Interestingly enough, their main justification wasn't a security concern so much as helpful middle boxen getting their filthy mitts on the traffic and screwing it up.

The last thing that occurs to me reading their FAQ is that they are seemingly trying to send data with 0 round trips. That is, SYN, data, data, data... That really makes me wonder about security/DoS considerations. As in, it sounds too good to be true. But maybe that's just the security cruft? But what about SYN cookies/DoS? Hmmm.

Other comments or clue?

Mike



Re: Google's QUIC

2013-06-28 Thread Michael Thomas

On 06/28/2013 01:16 PM, Josh Hoppes wrote:

My first question is, how are they going to keep themselves from
congesting links?


The FAQ claims they're paying attention to that, but I haven't read the
details. I sure hope they grok that not understanding Van Jacobson dooms
you to repeat it.

https://docs.google.com/document/d/1lmL9EF6qKrk7gbazY8bIdvq3Pno2Xj_l_YShP40GLQE/preview?sle=true#heading=h.h3jsxme7rovm

Mike









Re: Google's QUIC

2013-06-28 Thread Michael Thomas

On 06/28/2013 02:07 PM, Jay Ashworth wrote:

- Original Message -

From: Michael Thomas m...@mtcc.com
My first reaction to this was why not SCTP, but apparently they think

Simple Computer Telephony Protocol?  Did anyone ever actually implement that?


No:


 http://en.wikipedia.org/wiki/Stream_Control_Transmission_Protocol


*Mike*




Re: Google's QUIC

2013-06-28 Thread Michael Thomas

On 06/28/2013 02:28 PM, joel jaeggli wrote:

On 6/28/13 2:15 PM, Michael Thomas wrote:

On 06/28/2013 02:07 PM, Jay Ashworth wrote:

- Original Message -

From: Michael Thomas m...@mtcc.com
My first reaction to this was why not SCTP, but apparently they think

Simple Computer Telephony Protocol?  Did anyone ever actually implement that?


No:


http://en.wikipedia.org/wiki/Stream_Control_Transmission_Protocol


SCTP is used successfully for the purpose for which it was intended, which is 
connecting SS7 switches over IP. It's not as much a posterchild for an 
application agnostic transport as some people would like to think.


Well, I'm pretty sure that Randy Stewart has a more expansive take on SCTP than SS7-over-IP, but I get the impression that there just weren't enough other things that cared about head-of-line blocking. But now, 15 years later, it seems like maybe there are.

In any case, if what you're worried about is head-of-line blocking, SCTP definitely addresses that, and there are kernel implementations, so for an *experiment* it seems like you could get a long way by ignoring its warts.

But I think the most provocative thing is the idea of sending data payloads prior to the handshake. That sort of scares me because of the potential for an amplification attack. But I haven't actually read the protocol paper itself.

Mike



Re: huawei

2013-06-15 Thread Michael Thomas

On 06/15/2013 05:13 AM, Rich Kulawiec wrote:

First: this is a fascinating discussion.  Thank you.

Second:

On Sat, Jun 15, 2013 at 01:56:34AM -0500, Jimmy Hess wrote:

There will be indeed be _plenty_ of ways that a low bit rate channel
can do everything the right adversary needs.

A few bits for second is plenty of data rate for  sending control
commands/rule changes to a router backdoor mechanism, stealing
passwords, or leaking cryptographic keys   required to decrypt the VPN
data stream intercepted from elsewhere on the network,   leaking
counters, snmp communities, or interface descriptions,   or
criteria-selected forwarded data samples, etc

I was actually thinking much slower: a few bits per *day*.  Maybe slower yet.

(So what if it takes a month to transmit a single 15-character password?)

For people who think in terms of instant gratification, or perhaps in next-quarter terms, or perhaps in next-year terms, that might be unacceptable. But for people who think in terms of next-decade or beyond, it might suffice.

And if the goal is not get the password for router 12345 but get as
many as possible, then a scattered, random, slow approach might yield
the best results -- *because* it's scattered, random, and slow.



And all of us here, by virtue of talking about it, do not have a day job which involves thinking all of this stuff up. A lot of the stuff the DoD is willing to talk about is seriously brilliant, and that's just the public stuff.

Information really, really wants to be free. Getting access to poorly defended routers is probably the easy part for them. Masking the payloads is something that they get paid the big bux for in general, so it is seriously naive to think they don't have dozens of tricks they employ on a daily basis. The only thing we really have to counter their ingenuity, IMO, are laws and other layer 8 impediments.

Mike, still wonders if this phenomenon is just a restatement of entropy



Re: huawei

2013-06-14 Thread Michael Thomas

On 06/14/2013 10:51 AM, valdis.kletni...@vt.edu wrote:

On Fri, 14 Jun 2013 13:21:09 -0400, Scott Helms said:


How?  There is truly not that much room in the IP packet to play games and
if you're modifying all your traffic this would again be pretty easy to
spot.  Again, the easiest/cheapest method is that there is a backdoor there
already.

Do you actually examine your traffic and drop packets that have non-zeros
in reserved fields?  (Remember what that did to the deployment of ECN?)

And there's plenty of room if you stick a TCP or IP option header in there. Do
you actually check for those too?

How fast can you send data to a cooperating router down the way if you splat the low 3 bits of TCP timestamps on a connection routed towards the cooperating router? (Sure, you just busted somebody's RTT calculation, but it will just decide it's a high-jitter path and deal with it.)



Right. The asymmetry here is staggering. That's why they are hugely advantaged, aside from the staggering asymmetry in funding. The only thing that we have on our side is that with enough eyeballs low probabilities become better, but the military has known that problem for centuries, I'm sure.
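To make the arithmetic of the timestamp trick above concrete, a toy sketch (pure Python, made-up numbers, nothing protocol-specific; just the bit packing and the resulting throughput):

    # Toy model of a low-rate covert channel: hide 3 bits of a secret in the
    # low bits of each TCP timestamp value. Purely illustrative numbers.
    BITS_PER_SEGMENT = 3

    def embed(ts_value, bits):
        """Overwrite the low 3 bits of a timestamp value with covert bits."""
        return (ts_value & ~0b111) | (bits & 0b111)

    def extract(ts_value):
        return ts_value & 0b111

    assert extract(embed(123456789, 0b101)) == 0b101    # round-trips

    secret = b"hunter2"                                  # 7 bytes = 56 bits
    segments = (len(secret) * 8 + BITS_PER_SEGMENT - 1) // BITS_PER_SEGMENT
    print("Doctored segments needed:", segments)         # 19
    print("Days at one doctored segment per day:", segments)   # under three weeks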

Mike



Re: huawei

2013-06-14 Thread Michael Thomas

On 06/14/2013 11:35 AM, Scott Helms wrote:
In $random_deployment they have no idea what the topology is and odd behavior is *always* noticed over time. The amount of time it would take to transmit useful information would nearly guarantee someone noticing, and the more successful the exploit was, the more chance for discovery there would be.


As a software developer for many, many years, I can guarantee you that is categorically wrong. I'd venture to say you probably don't even notice half. And that's for things that are just bugs or misfeatures. Something that was purposeful and done by people who know what they're doing... your odds in Vegas are better, IMO.

Mike, who's seen way too many "how in the hell did that ever work?"



Re: huawei

2013-06-14 Thread Michael Thomas

On 06/14/2013 05:34 PM, Scott Helms wrote:

Is it possible?  Yes, but it's not feasible because the data rate would be
too low.  That's what I'm trying to get across.  There are lots things that
can be done but many of those are not useful.

I could encode communications in fireworks displays, but that's not
effective for any sort of communication system.



You're really hung up on bit rate, and you really shouldn't be. Back in the days before gigabit pipes, tapping out Morse was considered a data rate beyond belief. Ships used flags and signaling lights well into the Second World War at least. The higher the value of the information, the lower the bit rate you need to transmit it (I think this might formally be information entropy, but I'm not certain).

You might think that there is nothing of particularly high value to be had within the confines of what a (compromised) router can produce, but I'd say prepare to be surprised. I'm not much of a military guy, but some of the stuff they dream up makes you go "how on earth did you think that up?". And that's just the unclassified, widely known stuff. Part of the issue is that saying it could be done cheaper somewhere else presupposes we know the economics of what they're trying to do. We don't, so we should assume that routers, just like everything else, are a target, and that you almost certainly won't notice it if they are.

Mike



Re: huawei

2013-06-13 Thread Michael Thomas

On 06/13/2013 09:31 AM, Saku Ytti wrote:

On (2013-06-13 12:22 -0400), Patrick W. Gilmore wrote:


Do you think Huawei has a magic ability to transmit data without you noticing?

I always found it dubious that the public sector can drop them from a tender, publicly citing spying, when AFAIK Huawei hasn't ever actually been to court about it, much less been found guilty of it.

It's a convenient way to devalue one competitor. I'm just not sure if it's actually legal in $my_locale to invent reasons to exclude a vendor in public sector RFQs.



Er, um, there are more ways to spy than virtual wires back to the mothership...

Mike



Re: huawei

2013-06-13 Thread Michael Thomas

On 06/13/2013 09:35 AM, Patrick W. Gilmore wrote:


I am assuming a not-Hauwei-only network.

The idea that a router could send things through other routers without someone 
who is looking for it noticing is ludicrous.



::cough:: steganography ::cough::

Mike



Re: huawei

2013-06-13 Thread Michael Thomas

On 06/13/2013 10:20 AM, Scott Helms wrote:


Not really, no one has claimed it's impossible to hide traffic. What is true is that it's not feasible to do so at scale without it becoming obvious. Steganography is great for hiding traffic inside of legitimate traffic between two hosts, but if one of my routers starts sending cat photos somewhere, no matter how cute, I'm gonna consider that suspicious. That's an absurd example (hopefully funny), but _any_ such traffic from one of my routers over time would be obvious, especially since to be effective this would have to go on much of the time and in many routers. Hiding all that isn't feasible for a really technically astute company, and they're not in that category yet (IMO).



It all depends on what you're trying to accomplish. Hijacking many cat photos to
send your cat photo... how deep is your DPI?

Remember also, the answer to the universe fits in 6 bits...

Mike



Re: huawei

2013-06-13 Thread Michael Thomas

On 06/13/2013 05:28 PM, Scott Helms wrote:

Bill,

Certainly everything you said is correct, and at the same time it is not useful for the kinds of traffic interception that have been implied. 20 packets of random traffic capture is extraordinarily unlikely to contain anything of interest, and even if you do happen to get a juicy fragment your chances of getting more are virtually nil. An effective system must either capture and transmit large numbers of packets or have a command and control system in order to target smaller captures against a shifting list of addresses. Either of those things is very detectable. I've spent a significant amount of time looking at botnet traffic, which has the same kind of requirements.



I think you're having a failure of imagination in assuming that anything less than a massive amount of information sent back to the attacker couldn't be useful. I think there are lots and lots of things that could be extremely useful that would only require a simple "got here" message back to the attacker, if the "got here" condition was sufficiently interesting. Spying doesn't have the same motivations as typical botnets run for illicit commerce.
Mike



Re: huawei

2013-06-13 Thread Michael Thomas

On 06/13/2013 06:11 PM, Scott Helms wrote:


Not at all Michael, but that is a targeted piece of data, and that means a command and control system. I challenge your imagination to come up with a common scenario where a non-targeted "I'm/they're here" is useful to either the company or the Chinese government, keeping in mind that you have no foreknowledge of where these devices might be deployed. Also, no one seems to want to touch the fact that doing this kind of snooping would be several orders of magnitude easier on laptops and desktops, which have been sold by Lenovo for much longer than networking gear by Huawei.




Non-targeted? Why be so narrow? For a targeted use, something that detects, oh say, "we [the Syrians] gassed the rebels" in some stream and sends it out a covert channel would be very interesting. Remember that vast sums of money are spent on these intelligence-gathering systems. Whether they're targeting routers is really hard to say -- the attacker has the advantage of knowing what they're looking for and we don't. So in a router? It may just be opportunistic that they're easier or safer to penetrate? We really don't know. Things are rarely as they appear on the surface.

Mike, I just heard the Syria example from the NewsHour as I typed... this isn't hard



Re: huawei

2013-06-13 Thread Michael Thomas

On 06/13/2013 06:57 PM, Scott Helms wrote:


What you're describing is a command and control channel, unless you're suggesting that the router itself had the capacity to somehow discern that. That's the problem with all the pixie dust theories. The router can't; it doesn't know who the rebels are, much less their net block, ahead of time. Something has to pass rules to the box to be able to trigger off of.



I think you're misunderstanding: the router is watching traffic and gives a clue that "we're gassing the rebels" has been seen, because that phrase was surreptitiously added to the DPI terms unbeknownst to the owner, and the hit is sent back to the attacker. That's enormously powerful. All it takes is sufficient money and motivation. Is this speculative? Of course -- I'm not a spook. Is it possible? You bet.

Mike



Re: IPv6 and HTTPS

2013-04-29 Thread Michael Thomas

On 04/29/2013 11:00 AM, Jack Bates wrote:


If the existing cards handle CGN without additional licensing, then the only 
real cost is personal, my sanity, and the company need/will not factor that in.


One thing to consider is what the new support load will be from issues dealing
with CGN causing new breakage. That might be baked into your support already,
but at larger scale it probably isn't. Maybe it's marginal, but it's worth 
asking.

Mike



Re: It's the end of the world as we know it -- REM

2013-04-25 Thread Michael Thomas

So here is the question I have: when we run out, is there *anything* that will reasonably allow an ISP to *not* deploy carrier grade NAT? Assuming that it's death for the ISP to just say no to the long tail of legacy v4-only sites?

One thing that occurs to me, though, is that it's sort of in an ISP's interest to deploy v6 on the client side, because each new v6 site that lights up on the internet side is less traffic forced through the CGN gear, which is ultimately a cost down. So maybe an alternative to a death penalty is a molasses penalty: make the CGN experience operable but bad/congested/slow :)

Mike

On 04/25/2013 07:12 AM, Arturo Servin wrote:

Yes.

We figured this out and we are starting a program (or a set of
activities) to promote the deployment of IPv6 in what we call End-users
organizations (basically enterprises, universities). We are seeing much
lower adoption numbers than our ISP's categories.

One basic problem that we have found when talking with enterprises is
that the perceived value of deploy v6 is near to zero as they have v4
addresses (universities) or NAT.

Regards,
as

On 4/24/13 6:26 PM, Fred Baker (fred) wrote:

If we really want to help the cause, I suspect that focusing attention on 
enterprise, and finding ways to convince them that address shortages are also 
their problem, will help the most.





Re: It's the end of the world as we know it -- REM

2013-04-25 Thread Michael Thomas

On 04/25/2013 10:10 AM, Brandon Ross wrote:

On Thu, 25 Apr 2013, Michael Thomas wrote:


So here is the question I have: when we run out, is there *anything* that
will reasonably allow an ISP to *not* deploy carrier grade NAT?


Do you count NAT64 or MAP as carrier grade NAT?


I suppose that the way to frame this is: does it require the ISP to carry flow statefulness in their network in places where they didn't have to before? That, to my mind, is the big hit.




One thing that occurs to me though is that it's sort of in an ISP's interest
to deploy v6 on the client side because each new v6 site that lights up on
the internet side is less traffic forced through the CGN gear which is 
ultimately
a cost down. So maybe an alternative to a death penalty is a molasses penalty:
make the CGN experience operable but bad/congested/slow :)


Hm, sounds like NAT64 or MAP to me (although, honestly, we may end up making MAP 
too good.)



I was going to say that NAT64 could be helpful, but thought better of it because it may have its own set of issues. For example, are all of the resources *within* the ISP available over v6? They may be a part of the problem as well as a part of the solution. I would think that just the prospect of having a less expensive/complex infrastructure would be appealing as v6 adoption ramps up, and gives ISPs an incentive to nudge the laggards along.

Mike



Re: It's the end of the world as we know it -- REM

2013-04-25 Thread Michael Thomas

On 04/25/2013 11:09 AM, Owen DeLong wrote:

On Apr 25, 2013, at 11:24 AM, Michael Thomas m...@mtcc.com wrote:


So here is the question I have: when we run out, is there *anything* that
will reasonably allow an ISP to *not* deploy carrier grade NAT? Assuming
that it's death for the ISP to just say no to the long tail of legacy v4-only
sites?

This assumes facts not in evidence. However, given that assumption, it's
not so much a question of whether to CGN, but how. It looks like it may
be far better, for example, to do something like 464xlat with an all IPv6
network than to run dual-stack with NAT444 or DS-LITE.

There's no shortage of possible ways to run IPv4 life support, but they're
all life support. You have all the same risks as human life support…

Intracranial pressure, disseminated intravascular coagulopathy (DIC), stroke (CVA), embolisms, etc. In the network, we refer to these as router instability, state table overflow, packet loss, bottlenecks, etc.

Other options include NAT64/DNS64, A+P, etc.

Bottom line… The more IPv6 gets deployed on the content side, the less
this is going to hurt. Eyeballs will be forced to deploy soon enough. It's
content and consumer electronics that are going to be the most painful
laggards.


At some level, I wonder how much the feedback loop of "providers won't deploy ipv6 because everybody says they won't deploy ipv6" has caused this self-fulfilling prophecy :/

On the other hand, there is The Cloud. I assume that AWS and all of the other major VM farms have native v6 networks by now (?). I hooked up v6 support on Linode in, oh, less than an hour for my site. Maybe part of this is just evangelizing with the Cloud folks to get the word out that v6 is both supported *and* beneficial for your site? And it might give them a leg up with legacy web infrastructure data centers to lure them? "Oh, your corpro IT guys won't light up v6? Let me show you how easy it is on $MEGACLOUD."

Mike



Re: It's the end of the world as we know it -- REM

2013-04-25 Thread Michael Thomas

On 04/25/2013 07:27 PM, Owen DeLong wrote:

At some level, I wonder how much the feedback loop of providers
won't deploy ipv6 because everybody says they won't deploy ipv6
has caused this self-fulfilling prophecy :/

It's a definite issue. The bigger issue is the financial incentives are all in 
the
wrong direction.

Eyeball networks have an incentive not to deploy IPv6 until content providers
have done so or until they have no other choice.


Yet, eyeball networks are the ones being asked to pony up all of the cost to put in place the hacks to keep v4 running so they don't get support center calls. That's pretty asymmetric, and a potential opportunity.




On the other hand, there is The Cloud. I assume that aws and all of the
other major vm farms have native v6 networks by now (?). I hooked up

You again assume facts not in evidence. Many cloud providers have done
IPv6. Rackspace stands out as exemplary in this regard. Linode has done
some good work in this space.

AWS stands out as a complete laggard in this area.


Heh... that's why I put in all kinds of question marks and hedges :) That's disappointing about AWS. On the other hand, if AWS lights up v6, a huge amount of content will be v6 capable in one swell foop. Which leaves a different problem: the death by a thousand cuts of corpro data centers and racked-up servers in no-name cages.




v6 support on linode in, oh, less than an hour for my site. Maybe part
of this just evangelizing with the Cloud folks to get the word out that
v6 is both supported *and* beneficial for your site? And it might give them
a leg up with legacy web infrastructure data centers to lure them? Oh,
your corpro IT guys won't light up v6? let me show you how easy it is on
$MEGACLOUD.

+1 -- I encourage people to seek providers that support IPv6.



Name. And. Shame. At some level, some amount of BS is probably useful to scare the suits: "OMG, VZW'S PHONES SUPPORT V6. DO WE DO THAT?" Roll your eyes, but, well, remember they're suits.

Mike



Re: It's the end of the world as we know it -- REM

2013-04-24 Thread Michael Thomas

On 04/24/2013 03:26 PM, Fred Baker (fred) wrote:


Frankly, the ISPs likely to be tracking this list aren't the people holding 
back there. To pick on one that is fairly public, Verizon Wireline is running 
dual stack for at least its FIOS customers, and also deploying CGN, and being 
pretty up front about the impacts of CGN. Verizon Wireless, if I understand the 
statistics available, is estimated to have about 1/4 of its client handsets 
accessing Google/Yahoo/Facebook using IPv6.




Fred, isn't the larger problem those enterprises' outward-facing web presences, etc.? As great as it is that VZW is deploying on handsets, don't they also need to dual-stack and, by inference, CGN (eventually) so that their customers can get at the long tail of non-v6 sites?

Mike



Re: It's the end of the world as we know it -- REM

2013-04-24 Thread Michael Thomas

On 04/24/2013 05:34 PM, Fred Baker (fred) wrote:

On Apr 24, 2013, at 4:50 PM, Michael Thomas m...@mtcc.com
  wrote:


On 04/24/2013 03:26 PM, Fred Baker (fred) wrote:

Frankly, the ISPs likely to be tracking this list aren't the people holding 
back there. To pick on one that is fairly public, Verizon Wireline is running 
dual stack for at least its FIOS customers, and also deploying CGN, and being 
pretty up front about the impacts of CGN. Verizon Wireless, if I understand the 
statistics available, is estimated to have about 1/4 of its client handsets 
accessing Google/Yahoo/Facebook using IPv6.

Fred, isn't the larger problem those enterprise's outward facing web presence,
etc? As great as it is that vzw is deploying on handsets, don't they also need
to dual-stack and by inference cgn (eventually) so that their customers can get
at the long tail of non-v6 sites?

Kind of my point. I hear a lot of complaining of the form "I don't need to deploy IPv6 because there is relatively little traffic out there". Well, surprise surprise. That's a little like me saying that I don't need to learn Chinese because nobody speaks to me in Chinese. There are a lot of Chinese speakers; they don't speak to me in Chinese, which they often prefer to speak among themselves, because I wouldn't have a clue what they were saying.

The Verizon Wireless numbers, and a list of others on the same page, tell me that if offered a AAAA record, networks and the equipment that use them will in fact use IPv6. What is in the way is the residential gateway, which is often IPv4-only, and the enterprise web and email service, which is often IPv4-only. You want to fix that? Fix the residential gateway, the enterprise load balancer, and the connectivity between them and their respective upstreams.


It's a pity that it's probably not realistic for VZW or another large-enough carrier to say "We have deployed ipv6, we will not deploy CGN" and let the chips fall where they may. Basically the ipv4 death penalty. Of course VZW-wireless probably couldn't even do that, because they'd be waging war on VZW-wireline. Sigh.

Mike



Re: home network monitoring and shaping

2013-02-13 Thread Michael Thomas

On 02/12/2013 04:46 PM, Joel Maslak wrote:

Large buffers have broken the average home internet.  I can't tell you how
many people are astonished when I say one of your family members
downloading a huge Microsoft ISO image (via TCP or other congestion-aware
algorithm) shouldn't even be noticed by another family member doing web
browsing.  If it is noticed, the network is broke.  Even if it's at the end
of a slow DSL line.


This is true only to a point: if you have 5 people streaming movies on a 2-person broadband link, you're going to have problems regardless of the queuing discipline. That said, it's pretty awful that in this day and age router vendors can't be bothered to set the default Linux kernel queuing parameters to something reasonable.

In any case, my point was really about wanting to deal with what happens when your ISP bandwidth is saturated, and being able to track it down and/or kill off the offenders. I haven't bought a router in the last year or two as apps have become de rigueur, but it sure seems like it would be nice to be able to do that. I'm pretty sure that I still can't (= being a dumb consumer, not a net geek jockey), but would like to hear otherwise.

Mike



home network monitoring and shaping

2013-02-12 Thread Michael Thomas


O oracle of nanog: unlike things like rogue processes eating tons of CPU, it seems to me that network monitoring is essentially a black art for the average schmuck home network operator (of which I count myself). That is: if the network is slow, it's really hard to tell why that might be and which of the eleventy-seven devices on my wifi is sucking the life out of my bandwidth. And then even if I get an idea of who the perp is, my remediation choice seems to be "find that device, smash it with a sledgehammer".

It seems that there really ought to be a better way here to manage my home network. Like, for example, the ability to get stats from the router and tell it to shape various devices/flows to play nice. Right now, it seems to me that the state of the art is pretty bad -- static-y kinds of setups for static-y kinds of flows that people-y kinds of users don't understand or touch on their home routers.

The ob-nanog is that "my intertoobs r slow" is most likely a call to your support desks, which is expensive, of course. Is anything happening on this front? Is openwrt, for example, paying much attention to this problem?
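Even a crude per-device byte count answers the "who is sucking the life out of my bandwidth" question; a hedged sketch (Python with the third-party scapy package, assuming it runs somewhere that actually sees the LAN's traffic, such as the router, a bridge, or a mirror port; the interface name is a placeholder):

    # Tally bytes seen per source MAC for a minute and print the top talkers.
    # Sketch only: needs root and scapy (pip install scapy).
    from collections import Counter
    from scapy.all import sniff  # third-party: scapy

    bytes_by_mac = Counter()

    def account(pkt):
        if hasattr(pkt, "src"):
            bytes_by_mac[pkt.src] += len(pkt)

    sniff(iface="br-lan", prn=account, store=False, timeout=60)

    for mac, nbytes in bytes_by_mac.most_common(5):
        print("%s  %8d bytes" % (mac, nbytes))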

Mike



Re: home network monitoring and shaping

2013-02-12 Thread Michael Thomas

On 02/12/2013 02:07 PM, Warren Bailey wrote:

Someone created an application for uverse users that goes into the gateway and pulls relevant 
information. The information (link retrain, for example) is then color coded for caution and 
out of range. The application is called up real time, not something peddled by att to 
show how great your connection is. People unfortunately believe a speed test is a 
reliable way to measure a connection quality. There may be utilities out there like this that 
look at signal levels and statistics to tell the user their connection blows. I believe the 
uvrealtime application actually shows the provider sending resets as a deterrent for using 
bit torrent.




It would be nice for such a thing to tell me that my ISP connection is having trouble too, but I'm mostly interested in understanding the things that are nominally under my control on my home network. It seems that most routers have (gratuitous?) apps these days, but given the awfulness of their web UIs and their configurability, I don't have much hope that they do what I want.

Mike



Re: Programmers can't get IPv6 thus that is why they do not have IPv6 in their applications....

2013-01-30 Thread Michael Thomas

On 01/30/2013 01:51 PM, Cutler James R wrote:

On Jan 30, 2013, at 12:43 PM, joel jaeggli joe...@bogus.com wrote:


As a product of having a Motorola SB6121 and a Netgear WNDR3700, both of which I bought at Fry's, I have IPv6 in my house with DHCP-PD courtesy of Comcast. If it was any simpler, somebody else would have had to install it.


Except that Apple Airport Extreme users must have one of the newer hardware 
versions, that is my experience as well.

And, even before Comcast and new AEBS, Hurricane Electric removed all other excuses for 
claiming no IPv6.


Remove excuses != create incentive. There are an infinite number of things I can do to remove excuses. Unless they're in my face (read: causing me headaches), they do not create incentive. My using my or my company's software which doesn't work in my own environment (= work, home, phone, etc.) creates incentive. Lecturing me about how I can get an HE tunnel, and that if I don't I'm ugly and my mother dresses me funny, otoh, just creates vexation.

Mike



Re: Suggestions for the future on your web site: (was cookies, and

2013-01-26 Thread Michael Thomas

Rich Kulawiec wrote:

On Thu, Jan 24, 2013 at 09:50:15AM -0600, Joe Greco wrote:

However, as part of a defense in depth strategy, it can still make
sense.  


Brother, you're preaching to the choir.  I've argued for defense in depth
for longer than I can remember.  Still am.

But defenses have to be *meaningful* defenses.  Captchas are a pretend
defense.  They're wishful thinking.  They're faith-based security.


Oh, I dunno. I run a website with a fairly low-volume forum that occasionally gets a drive-by spamming. I'm pretty sure that if I implemented even a naive captcha it would go back to zero. Same thing with proof-of-mailbox-control things, which have to be even easier to automate. Would they bother? I doubt it -- it was never particularly worth their effort to even do the easy drive-bys.

Mike



Re: OOB core router connectivity wish list

2013-01-10 Thread Michael Thomas

On 01/10/2013 07:02 AM, Jared Mauch wrote:

On Jan 10, 2013, at 9:51 AM, Mikael Abrahamsson swm...@swm.pp.se wrote:

I certainly want to use something more modern, having run Xmodem to load images 
into devices or net-booted systems with very large images in the past…

I've seen all sorts of creative ways to do this (e.g.: DSL for OOB, 3G, private 
VPLS network via outside carrier).  It is a challenge in the modern network 
space.  Plus I have to figure that 9600 modems are going to be harder to find 
as time goes by.. at some point folks will stop making them.




Isn't the biggest issue here resilience? If you have ethernet/IP as your OOB mechanism, how sure can you be that it's really OOB? This is, I'm assuming, the fallback for when things are really, really hosed. What would happen if you needed to physically get hands into many, many pops?

Mike



Re: Gmail and SSL

2013-01-03 Thread Michael Thomas

On 01/02/2013 09:14 PM, Damian Menscher wrote:

Back on topic: encryption without knowing who you're talking to is worse
than useless (hence no self-signed certs which provide a false sense of
security),


In fact, it's very useful -- what do you think the initial Diffie-Hellman exchanges are doing with PFS? Encryption without (strong) authentication is still useful for dealing with passive listening. It's a shame, for example, that wifi security doesn't encrypt everything on an open AP, to require that attacks be active rather than passive. It's really easy to just scan the airwaves, but I probably don't need to remind you of that.
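A toy version of that point (pure Python, deliberately textbook, deliberately tiny parameters): both ends derive the same secret without either proving who they are, which is exactly why it defeats a passive listener but not an active one:

    # Textbook, unauthenticated Diffie-Hellman. An eavesdropper who only
    # sees g, p, A and B cannot recover the shared secret; a man in the
    # middle still could. Toy parameters, sketch only.
    import secrets

    p = 2**127 - 1          # a Mersenne prime; fine for a toy, far too small for real use
    g = 5

    a = secrets.randbelow(p - 2) + 2        # Alice's private exponent
    b = secrets.randbelow(p - 2) + 2        # Bob's private exponent

    A = pow(g, a, p)        # sent in the clear
    B = pow(g, b, p)        # sent in the clear

    assert pow(B, a, p) == pow(A, b, p)     # both ends now share a secret
    print("shared secret established without any authentication")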

Mike



Re: why haven't ethernet connectors changed?

2012-12-21 Thread Michael Thomas

On 12/21/2012 04:08 AM, Aled Morris wrote:
Good luck with that! :-)

Referring back to the original question and the reference to Raspberry Pi... The latest HDMI has Ethernet capability and the connector is already on the Pi, so there's a possible (future) solution that would work for all manner of consumer applications - even ones that don't need video or audio - just use the network capability of HDMI.

Aled


Interesting.

I'd turn this back the other way though: in this day and age, why do we have any interconnection/bus that isn't just ethernet/IP? IP, as we all know, doesn't imply global reachability. What we far too often do with specialized I/O channels is recreate networking, usually poorly.

That too would solve the Raspberry Pi problem.

Mike, naming being one big issue which is getting short shrift in homenet



Re: why haven't ethernet connectors changed?

2012-12-21 Thread Michael Thomas

On 12/21/2012 09:29 AM, Tony Finch wrote:

Michael Thomas m...@mtcc.com wrote:

I'd turn this back the other way though: in this day and age, why do we
have any interconnection/bus that isn't just ethernet/IP?

The need for isochronous transmission and more bandwidth.




That's why G*d invented RTP, of course. And all of these buses are slow
by the time they're popular enough to worry about. In any case, delete
the ethernet part if you want to still play with the mac/phy.

Mike



Re: why haven't ethernet connectors changed?

2012-12-21 Thread Michael Thomas

On 12/21/2012 12:00 PM, Aled Morris wrote:

On 21 December 2012 18:22, Chris Adams cmad...@hiwaay.net wrote:


I will say that one nice thing about having different connectors for
different protocols (on consumer devices anyway) is that you don't have
to worry about somebody plugging the Internet into the Video 1 port
and wondering why they aren't getting a picture.




I do agree but I also think that for HDMI Ethernet your TV (which is the
device with lots of HDMI sockets) will act as an Ethernet switch, so there
shouldn't be any Ethernet enabled vs. Video Enabled ports.

Now of course that means you probably need Spanning Tree in your domestic
appliances.



In this day and age exactly how hard is this? Since it's all linux
under the hood, isn't it just a brctl away?

Mike



why haven't ethernet connectors changed?

2012-12-20 Thread Michael Thomas

I was looking at a Raspberry Pi board and was struck by how large the ethernet connector is in comparison to the board as a whole. It strikes me: ethernet connectors haven't changed, that I'm aware of, in pretty much 25 years. Every other cable has changed several times in that time frame. I imagine that if anybody cared, ethernet cables could be many times smaller. Looking at wiring closets, etc., it seems like it might be a big win for density too.

So why, oh why, nanog the omniscient, do we still use RJ45s?

Mike



Re: why haven't ethernet connectors changed?

2012-12-20 Thread Michael Thomas

On 12/20/2012 10:28 AM, Michael Loftis wrote:

It's not all about density.  You *Must* have positive retention and alignment.  
None of the USB nor firewire standards provide for positive retention.  eSATA 
does sort of in some variants but the connectors for USB are especially 
delicate and easy to break off and destroy.  There's the size of the Cat5/5e/6 
cable to be considered too.

Then you must consider that the standard must allow for local termination, the 
RJ45 (And it's relatives) are pretty good at this.  Fast, reliable, repeatable 
termination with a single simple tool that requires only a little bit of 
mechanical input from the user of the tool.


If you look at the Raspberry Pi, though, the connector takes a substantial
piece of real estate. Not everything needs industrial-strength connectors, as
witnessed by USB and HDMI -- if they fail I'm just as unhappy as if ethernet
fails. Surely we want to keep shrinking these cute little purpose-built
controller-like things and not *have* to rely on wireless as the only other
space-saving means?

Mike



Re: why haven't ethernet connectors changed?

2012-12-20 Thread Michael Thomas

On 12/20/2012 11:43 AM, William Herrin wrote:

Also, RJ45 is around the minimum size where you can hand-terminate a
cable. How would you go about quickly making a 36.5 foot 8 conductor
cable with, say, micro USB ends?



You're assuming that that's a universal requirement. Most people
in retail situations just buy the cables, or they are shipped with the
widget. They're also pretty used to being screwed over by greedy
manufacturers for whom cable churn is a profit center (I'm looking
at you, Apple).

Mike



Re: why haven't ethernet connectors changed?

2012-12-20 Thread Michael Thomas

On 12/20/2012 12:01 PM, William Herrin wrote:
On the other hand, I wonder if it would be worth asking the 802.3 committee look at defining a single-pair ethernet standard that would interoperate with a normal 4-pair switch. So, you'd have two conductors into some kind of 2P2C micro-RJ connector on one end of the cable but into a full RJ45 connector on the other. A single-pair pair cable would run at best at a quarter of the speed of a four pair cable but for something like the Raspberry Pi that's really not a problem. Regards, Bill Herrin 


Yeah, that's kind of along the lines I'm thinking too. In the home of the 
future,
say, I probably would like to have power/network for little sensors, etc, where
you already have a gratuitous digital controller now, and then some. Do these
things need to have gig-e speeds? Probably not... for a lot even Bluetooth 
speeds
are probably fine. But they do want to be really small and really inexpensive.

(Yes, I know about zigbee, but there's room for a variety of solutions depending
on the situation.)

Mike



Re: Advisory — D-root is changing its IPv4 address on the 3rd of January.

2012-12-14 Thread Michael Thomas

Matthew Newton wrote:

On Fri, Dec 14, 2012 at 04:42:46PM +, Nick Hilliard wrote:

On 13/12/2012 22:54, Jason Castonguay wrote:

Advisory — D-root is changing its IPv4 address on the 3rd of January.

You've just given 3 weeks notice for a component change in one of the few
critical part of the Internet's infrastructure, at a time when most


I think that /was/ the advance notification - you've got 6 months :)

 The old address will continue to work for at least six months
  after the transition, but will ultimately be retired from
  service.


So really stupid question, and hopefully it's just me, do I need to do something
on my servers?

Second question: I know that renumbering is important in the abstract, but is 
there
really an overwhelming reason why renumbering the root servers is critical? 
Shouldn't
they all be in PI space for starters?

Mike



Re: Programmers can't get IPv6 thus that is why they do not have IPv6 in their applications....

2012-12-02 Thread Michael Thomas

On 12/01/2012 11:55 PM, Owen DeLong wrote:

ps. I work for a division of my employer that does not yet have IPv6 support in 
its rather popular consumer software product. Demand for IPv6 from our rather 
large customer base is, at present, essentially nonexistent, and other things 
would be way above it in the stack-ranked backlog(s) anyway. One could argue 
that until we add IPv6 support throughout our systems, consumers will continue 
to demand IPv4 connectivity from operators in order to run software like ours, 
rather than us being cut off from any meaningful proportion of customers.


Yes, but unlike Skype, most popular applications have competitors and whichever 
competitor provides the better user experience will cut the others off from a 
meaningful proportion of their customers.



You have a really strange metric of what constitutes a better user experience.
I look at things like enrollment/take rates, friending, reviews, etc, to
determine whether people are having a better user experience. I can say with
all sincerity that ipv6 is not something that provides a better user
experience. I enabled it for my site just because I was curious and Linode
makes getting v6 dead simple, and I didn't think I had anything that would
puke on v6 (I didn't). If it took any more effort than that, I'd have gone and
found something else to be curious about, because it wouldn't have been worth
my time given things that really do impact user experience.

Mike



Re: Programmers can't get IPv6 thus that is why they do not have IPv6 in their applications....

2012-11-29 Thread Michael Thomas

On 11/28/2012 09:40 PM, Jeroen Massar wrote:

On 2012-11-28 18:26, Michael Thomas wrote:


It's very presumptuous for you to tell me what my development/test
priorities ought to be, and I can tell you for absolute certain that any
such badgering will be met with rolled eyes and quick dismissal.

You are missing the point that people have been told already for a
decade to add IPv6 support to their products.


Programmers are routinely told all manner of apocalyptic things,
and that they're idiots for not heeding the soothsayers. Ho Hum.
At least Y2K had a finite stopping point. When v6 gets one of those,
maybe you'll have more luck.


The
only way that things will get fixed is if there's a perceived need to
fix them.

I fully agree, but instead of waiting till the last moment you can also
plan ahead and be ahead of the game.


Please. When there's deployment, there will be fixes. The *vast*
majority of the problem is with ISP's. This isn't even an 80-20 problem,
it's a 1% problem. All you're managing to do here is tick off developers
as if they are in any way, shape, or form responsible for the lack of
v6 deployment.



Phone Apps btw are only something from the last few years, thus you
can't even claim there is a 'legacy' there and IPv6 didn't exist yet
arguments don't go either. Note also that most devkits (Android/IOS)
provide IP-agnostic APIs, thus if used you at least have nothing
IPv6-specific in that code.


Phone apps, by and large, are designed by people in homes or
small companies. They do not have v6 connectivity. Full stop.
They don't care about v6. Full stop. It's not their fault, even if
you think they should invest a significant amount of time to fix
theoretical problems.


The only way things are going to change is to make v6 a part of everybody's
day-to-day life. That means ISP's giving me and every other developer a
/64 at home at the very least.

And that is happening, I hope you are ready to support those users
because well, everybody told you it would happen, thus don't cry when
you are too late at the game...


It sure isn't happening in Silly Valley or San Francisco that I've seen.



(of course, some people simply do not care about the job they deliver,
but in that case, it is also wise to not comment on a public list about
things ;)



All your patronizing tone does is mark you as a religious zealot. You're
a dime a dozen and ignorable. Telling people they're incompetent because
they won't fix your hobby-horse theoretical problem does exactly the
opposite of what you want. Developers and the companies that employ
them react to perceived need for bugs and features. When there is a
perceived need, the bugs will be fixed. Until then, by all means rant on
while the ISP's -- the actual problem -- do nothing.

Mike



Re: Programmers can't get IPv6 thus that is why they do not have IPv6 in their applications....

2012-11-29 Thread Michael Thomas

On 11/29/2012 10:36 AM, Cameron Byrne wrote:


Got some bad data here.  Let me help.

Sent from ipv6-only Android
On Nov 29, 2012 8:22 AM, Michael Thomas m...@mtcc.com 
mailto:m...@mtcc.com wrote:

 Phone apps, by and large, are designed by people in homes or
 small companies. They do not have v6 connectivity. Full stop.
 They don't care about v6. Full stop. It's not their fault, even if
 you think they should invest a significant amount of time to fix
 theoretical problems.


Phone apps generally work with ipv6 since  they are developed using high level 
and modern sdk's.

My sample says over 85% of Android Market top apps work fine on ipv6. For folks 
to really get in trouble they need to be using NDK... that is where the 
ipv4-only apis live, not SDK afaik ... NDK implies greater knowledge and risk 
in Android.

The apps that fail are not from noobies in a garage. The failures are  from 
Microsoft/Skype , Netflix , Amazon Prime streaming , Spotify and other well 
heeled folks that are expected to champion technology evolution. And,  
Microsoft and Netflix were certainly part of world v6 launch. They just have 
more work to do.



I.e., the referral problem. One would expect those to have problems because
referrals suck generally, and are tangled up horrifically with NAT traversal.
I don't really worry about those guys so much because it's just a business
case rather than cluelessness. The fact that they aren't getting bit hard
enough to make that business case says something.

Which is why all of this gnashing of the teeth toward developers is
wildly off the mark. It's the network that's the problem.


So, please note: most Android apps work on v6. Millions of mobile phone 
subscribers have ipv6 (all vzw LTE by default, all t-mobile samsung by phone 
configuration). The problem apps are from top tech companies,  not garage devs.



Yeah, I just checked, having switched to VZW yesterday: Galaxy S3 IPv6,
iPhone 5 IPv4.

Mike



Re: Programmers can't get IPv6 thus that is why they do not have IPv6 in their applications....

2012-11-28 Thread Michael Thomas

On 11/28/2012 09:00 AM, Jeroen Massar wrote:


And still, if you as a proper engineer where not able to test/add IPv6
code in the last 10++ years, then you did something very very wrong in
your job, the least of which is to file a ticket for IPv6 support in the
ticket tracking system so that one could state I thought of it, company
did not want it.



It's very presumptuous for you to tell me what my development/test
priorities ought to be, and I can tell you for absolute certain that any
such badgering will be met with rolled eyes and quick dismissal. The
only way that things will get fixed is if there's a perceived need to
fix them. Getting corpro-IT to upgrade to v6 -- as if there is even a
corpro-anything with most phone apps -- just to be able to test against
v6 is a fantasy. Any developer who told me that we can't ship because
we don't have a v6 testbed without clear feedback via bug reports, etc
would be instructed on the difficulties of applying sufficient thermal
energy to large bodies of water.

The only way things are going to change is to make v6 a part of everybody's
day-to-day life. That means ISP's giving me and every other developer a
/64 at home at the very least.

Mike



Re: Programmers can't get IPv6 thus that is why they do not have IPv6 in their applications....

2012-11-28 Thread Michael Thomas

On 11/28/2012 10:30 AM, david peahi wrote:

On the practical side: Have all programmers created a 128 bit field to
store the IPv6 address, where IPv4 programs use a 32 bit field to store the
IP address? This would seem to be similar to the year 2000 case where
almost all programs required auditing to see if they took into account
dates after 1999.



Surely you mean varchar(15), right? :)
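Joking aside, the point stands: a dotted quad fits in 15 characters, an IPv6
address doesn't. A quick sketch with Python's ipaddress module shows the sizes
involved (the column types mentioned are only illustrative):

    # IPv4 text form fits varchar(15); IPv6 needs up to 39 chars as text,
    # or a fixed 16-byte binary column if you store the packed form.
    import ipaddress

    v4 = ipaddress.ip_address("192.0.2.1")
    v6 = ipaddress.ip_address("2001:db8:85a3:8d3:1319:8a2e:370:7348")

    print(len(str(v4)), len(v4.packed))    # 9 4
    print(len(str(v6)), len(v6.packed))    # 36 16
    # Worst case as text: "255.255.255.255" -> 15, fully expanded IPv6 -> 39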

Mike



Re: Big day for IPv6 - 1% native penetration

2012-11-27 Thread Michael Thomas

On 11/27/2012 11:58 AM, Cameron Byrne wrote:

On Tue, Nov 27, 2012 at 11:28 AM, mike m...@mtcc.com wrote:



Is this the app's fault? What are they doing wrong?



Yes, it is the app's fault.

They are either doing IPv4 literals or IPv4-only sockets

The IPv4 literal issues is when they do wget http://192.168.1.1; ...
hard coding IPv4 addresses... instead of using an FQDN like wget
http://example.com;.  Using an FQDN allows the DNS64 to work
correctly.


I can understand Spotify, but don't really understand why Waze
would have a problem unless they're doing some sort of rendezvous-like
protocol that embeds IP addresses. That said, I'd say that the
vast majority of apps don't have this sort of problem and will quite
unknowingly and correctly work with v6.
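The fix on the app side is boring: resolve by name and let getaddrinfo hand
back whatever the network offers (A, AAAA, or a DNS64-synthesized AAAA). A
minimal sketch -- not anyone's actual app code -- looks like this:

    # Address-family-agnostic connect: works over IPv4, IPv6, or NAT64/DNS64,
    # as long as the app uses a hostname instead of an IPv4 literal.
    import socket

    def connect(host, port):
        last_err = None
        for family, socktype, proto, _, sockaddr in socket.getaddrinfo(
                host, port, type=socket.SOCK_STREAM):
            try:
                s = socket.socket(family, socktype, proto)
                s.connect(sockaddr)
                return s
            except OSError as err:
                last_err = err
        raise last_err or OSError("no usable addresses for %s" % host)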

Mike



Re: Big day for IPv6 - 1% native penetration

2012-11-27 Thread Michael Thomas

On 11/27/2012 12:41 PM, Mark Andrews wrote:

In message 50b512b6.1010...@mtcc.com, mike writes:

On 11/26/12 9:32 PM, Mikael Abrahamsson wrote:

The main problem with IPv6 only is that most app developers (most programmers
totally) do not really have access to this, so no testing is being done.

This is a point that is probably more significant than is appreciated. If the
app, IT, and networking ecosystem don't even have access to ipv6 to play
around with, you can be guaranteed that they are going to be hesitant about
lighting v6 up in real life.

Mike

I've had IPv6 for nearly a decade with no help from my ISP.  I
needed it to do IPv6 development.  It isn't hard to get IPv6 if
you need it.



The point is that most developers and others don't think they
need it so they don't seek it out. If there were far more native
v6 (ie, that it's there without having to do something proactive),
the app problems would almost certainly work themselves out
because it would become apparent.

Mike



Re: Programmers can't get IPv6 thus that is why they do not have IPv6 in their applications....

2012-11-27 Thread Michael Thomas

On 11/27/2012 01:07 PM, Jeroen Massar wrote:

On 2012-11-27 20:21, mike wrote:

This is a point that is probably more significant than is
appreciated. If the app, IT, and networking ecosystem don't even have
access to ipv6 to play around with, you can be guaranteed that they
are going to be hesitant about lighting v6 up in real life.

I cannot be sad for the people who claim to be programmers who do things
with networking and who do not care to follow the heavy hints that they
have been getting for at least the last 10 years that their applications
need to start supporting IPv6. Especially as APIs like getaddrinfo()
make it really easy to do so.



I think you vastly overestimate that developers will a) know about
v6 and b) care even if they do if it's not affecting them. Asking mortals
to understand tunnel brokers -- even developer mortals -- just isn't going
to happen. If we want the small percentage of apps that break with v6
to be fixed, it needs to a) show up as a bug report from enough people
to matter and b) be testable by your average developer.

This chicken and egg problem can really only be solved by ISP's, IMO.

Mike



Re: Programmers can't get IPv6 thus that is why they do not have IPv6 in their applications....

2012-11-27 Thread Michael Thomas

On 11/27/2012 03:44 PM, Owen DeLong wrote:


I would think that a developer of corporate network-based applications that is 
worth his salt would be one of the people pushing the IT/Neteng group to give 
him the tools to do his job. If he waits until they are implementing IPv6 on 
corporate desktops, he guarantees himself a really bad game of catch-up once 
that time arrives.



The only pushing that is generally possible is of the 'on a string' variety.

Mike



Re: Programmers can't get IPv6 thus that is why they do not have IPv6 in their applications....

2012-11-27 Thread Michael Thomas

On 11/27/2012 09:00 PM, Mark Andrews wrote:

In message 20121128041816.gf1...@dyn.com, Andrew Sullivan writes:

On Wed, Nov 28, 2012 at 08:41:13AM +1100, Mark Andrews wrote:

If they are writing network based code a tunnel broker should not
be a issue.  Tunnel brokers are not that hard to use.  They are
after all just a VPN and millions of road warriers use them everyday.

Oh, for crumb's sake.  You're quite right: millions of road worriers
use VPNs every day, because they involve downloading a program and
the config your IT dept says to use and that's it.

And using some tunnel brokers are just as easy.

Even manual config isn't that hard and is a lot easier that getting
dialup networking was before ppp was available.


Let's be clear: nobody sets up a VPN because they want to.

Mike



Re: Big day for IPv6 - 1% native penetration

2012-11-26 Thread Michael Thomas

On 11/26/2012 03:18 PM, Dobbins, Roland wrote:


Apple and Microsoft are application developers as well as OS vendors.  How much 
of a priority do you think IPv6 capabilities are to their application 
development organizations?  How much of a priority do you think IPv6 
capabilities are to their customer bases?


I don't see either Apple or Microsoft as being the hindrance. In fact, both
of them seem pretty ready, fsvo ready. Unlike ISP's by and large. But I'm
pretty sure that both iPhones and Androids are pretty happy about being
in v6 land since I see them showing up in my logs all the time, for the few
providers that have lit up v6.

I'm all for bagging on those two, but it seems pretty unjustified here.



How much of a priority do you think IPv6 capabilities are for corporate IT 
departments, beyond a checklist item on RFPs in order to CYA?

Where are the IPv6-only SQL Server deployments within enterprises, for example? 
 In fact, where are the IPv6-enabled client access LANs within enterprises?  Or 
even the *plans* for these types of deployments/capabilities?


Er, uh, huh? v6 has been available forever on the usual suspect host operating
systems, and most server-side apps don't need to do much to light v6 up that
I can think of.  I turned it on and it was pretty much a big ho-hum, cool, it
works.

Mike




Re: Big day for IPv6 - 1% native penetration

2012-11-26 Thread Michael Thomas

On 11/26/2012 04:24 PM, Dobbins, Roland wrote:

On Nov 27, 2012, at 6:56 AM, Michael Thomas wrote:


Er, uh, huh? v6 has been available forever on the usual suspect host operating 
systems, and most server side apps don't need to do much to support lighting
v6 support up that I can think of.

Where are the *deployments*, though?


Google and Facebook support ipv6. What more do we need?


And lighting up IPv6 within enterprises is not a trivial task.



Not on the server side that I can see. It's a network problem first
and foremost, and starts by having the excuse that they can't get
v6 upstream from their ISP's.

Mike



Re: Big day for IPv6 - 1% native penetration

2012-11-26 Thread Michael Thomas

On 11/26/2012 04:38 PM, Dobbins, Roland wrote:

On Nov 27, 2012, at 7:35 AM, Michael Thomas wrote:


Not on the server side that I can see. It's a network problem first and 
foremost, and starts by having the excuse that they can't get v6 upstream from 
their ISP's.

It's hugely problematic to accomplish internally, never mind for external 
connectivity.



But not because servers and client devices don't support it; they do.
Bag on where the problem actually is: the death spiral of network vendors,
ISP's and IT departments not wanting to commit and blaming each other.
I primarily fault ISP's because they are, you know, the backbone. If they
don't commit, the game of chicken continues.

Mike



Re: IPv4 address length technical design

2012-10-05 Thread Michael Thomas

On 10/05/2012 05:25 PM, Barry Shein wrote:

5. Bits is bits.

I don't know how to say that more clearly.

An ipv6 address is a string of 128 bits with some segmentation
implications (net part, host part.)

A host name is a string of bits of varying length. But it's still just
ones and zeros, an integer, however you want to read it.


Wasn't David Cheriton proposing something like this?

http://www-dsg.stanford.edu/triad/

Mike



Re: IPv6 Ignorance

2012-09-18 Thread Michael Thomas

On 09/18/2012 08:08 AM, Jared Mauch wrote:


We've been doing this for years on both Juniper  IOS/IOS-XR devices.  Must be 
someone else.

We do run into this whole feature parity thing often.  The vendors seem to be 
challenged in this space.  I suspect a significant part of it is they don't 
actually *use* IPv6 internally or in their lab.  We have been operating our 
network with IPv6 for many years now.  I believe in most cases our connection 
to the management plane go IPv6 only as well.

It's been fun to see the few SSH over IPv6 defects and other elements arise as 
time has passed, but those days are over.  It's just tiring now and no longer 
amusing.  (hey you kids, get off my lawn?).



Of course they're challenged. There's a finite amount of dev they can
do at any one time, and they go for what is going to make revenue. If
you tell them that the way to your wallet is to implement some new
feature in v4 and you're not emphatic that it be v6 also, they are going
to do the utterly predictable thing. If you really want to make progress
instead of bellyache, list off the features you need to run your network.

Better yet, deploy v6 instead of saying that you'll only do it when it's
perfect. That just tells your account critter that v6 isn't important to
you. I'll bet you'll find features that you want that are v6 specific
that you'd open your wallet for *way* before features that don't interest
you that you're requiring in the name of parity.

Mike



Re: IPv6 Ignorance

2012-09-16 Thread Michael Thomas

On 09/16/2012 08:23 PM, Randy Bush wrote:
and don't bs me with how humongous the v6 address space is. we once thought 32 bits was humongous. randy 


No, we didn't.

Mike



Re: The End-To-End Internet (was Re: Blocking MX query)

2012-09-05 Thread Michael Thomas

On 09/05/2012 05:56 AM, Daniel Taylor wrote:


On 09/04/2012 03:52 PM, Michael Thomas wrote:

On 09/04/2012 09:34 AM, Daniel Taylor wrote:

If you are sending direct SMTP on behalf of your domain from essentially random 
locations, how are we supposed to pick you out from spammers that do the same?



Use DKIM.

You say that like it's a lower bar than setting up a fixed SMTP server and 
using that.


I say it like it addresses your concern.

Mike



Re: The End-To-End Internet (was Re: Blocking MX query)

2012-09-05 Thread Michael Thomas

On 09/05/2012 07:50 AM, Henry Stryker wrote:
Not only that, but a majority of spam I receive lately has a valid DKIM signature. They are adaptive, like cockroaches. 


The I part of DKIM is Identified. That's all it promises. It's a
feature, not a bug, that spammers use it.

Mike



Re: The End-To-End Internet (was Re: Blocking MX query)

2012-09-05 Thread Michael Thomas

On 09/05/2012 08:49 AM, Sean Harlow wrote:

2. The reason port 25 blocks remain effective is that there really isn't a 
bypass.


In the Maginot Line sense,  manifestly.

Mike



Re: The End-To-End Internet (was Re: Blocking MX query)

2012-09-05 Thread Michael Thomas

On 09/05/2012 12:50 PM, Daniel Taylor wrote:


On 09/05/2012 10:19 AM, Michael Thomas wrote:

On 09/05/2012 05:56 AM, Daniel Taylor wrote:


On 09/04/2012 03:52 PM, Michael Thomas wrote:

On 09/04/2012 09:34 AM, Daniel Taylor wrote:

If you are sending direct SMTP on behalf of your domain from essentially random 
locations, how are we supposed to pick you out from spammers that do the same?



Use DKIM.

You say that like it's a lower bar than setting up a fixed SMTP server and 
using that.


I say it like it addresses your concern.


Well, if you've got proper forward and reverse DNS, and your portable SMTP 
server identifies itself properly, and you are using networks that don't filter 
outbound port 25, AND you have DKIM configured correctly and aren't using it 
for a situation for which it is inappropriate, then you'll get the same results 
with a portable SMTP server that you would sending through a properly 
configured static server.

So, no, use DKIM does not address the delivery difficulties inherent to using 
a portable SMTP server.


My, how the goalposts are moving. DKIM solves the problem of producing
a stable identifier for a mail stream, which is what your originally positioned
goalposts were asking for. It also makes reverse DNS lookups even more
useless than they already are.

Mike



Re: Blocking MX query

2012-09-04 Thread Michael Thomas

On 09/04/2012 05:05 AM, William Herrin wrote:

There are no good subscribers trying to send email direct to a
remote port 25 from behind a NAT. The good subscribers are either
using your local smart host or they're using TCP port 587 on their
remote mail server. You may safely block outbound TCP with a
destination of port 25 from behind your NAT without harming reasonable
use of your network.



Would that were true going forward. Consider a world where your
home is chock full of purpose-built devices, most likely with an
embedded web browser for configuration, where you have a
username/password for each. In the web world this works because
there is a hidden assumption that you can use email for user/password
reset/recovery and that it works well. When your boxen can't do that
because email is blocked, what are you going to do? Reset to factory
defaults? Every time I forget? And please let's not get another useless
lecture on why the unwashed masses should be using password vaults.
They won't.

This hidden assumption of a reliable out-of-band mechanism for account
recovery is going to come to the fore as v6 rolls out and IP is as gratuitously
added to cheap devices as digital controls are now. Email is the glue that
keeps the web world functioning. Until there's something else, it will
continue to be needed and its role will expand in the home too.

Mike



Re: Blocking MX query

2012-09-04 Thread Michael Thomas

On 09/04/2012 11:55 AM, William Herrin wrote:

On Tue, Sep 4, 2012 at 12:59 PM, Michael Thomas m...@mtcc.com wrote:

On 09/04/2012 05:05 AM, William Herrin wrote:

There are no good subscribers trying to send email direct to a
remote port 25 from behind a NAT. The good subscribers are either
using your local smart host or they're using TCP port 587 on their
remote mail server. You may safely block outbound TCP with a
destination of port 25 from behind your NAT without harming reasonable
use of your network.

Would that were true going forward. Consider a world where your
home is chock full of purpose built devices, most likely with an
embedded web browser for configuration where you have a
username/password for each. In the web world this works because
there is a hidden assumption that you can use email for user/password
reset/recovery and that it works well.

Hi Mike,

A. What device do you offer as an example of this? I haven't stumbled
across one yet. Web sites yes. Physical home devices, no.

What I *have* seen is devices that call out to a web server, you make
an account on the remote web server to configure them and then all the
normal rules about accounts on remote web servers apply.


I want to buy hardware from people, not their ill-conceived cloud
service that dies when there's no more business case for it and is probably
evil anyway.



B. Bad hidden assumption. Expect it to fail as more than a few cable
and DSL providers are blocking random port 25 outbound. Besides, some
folks change email accounts like they change underwear. Relying on
that email address still working a year from now is not smart.



I'm well aware of port 25 blocking. I'm saying your assumption
that there is *never* any reason for a home to originate port 25
traffic is a bad one. It's never been a good one, but the collateral
damage was pretty low when NAT's were in the way. v6 will change
that, and the collateral damage will rise. Unless you can come up
with another ubiquitous out-of-band method for account recovery,
expect the tension -- and help desk calls -- to increase.

Mike



Re: The End-To-End Internet (was Re: Blocking MX query)

2012-09-04 Thread Michael Thomas

On 09/04/2012 01:07 PM, David Miller wrote:



There is no requirement that all endpoints be *permitted* to connect to
and use any service of any other endpoint.  The end-to-end design
principle does not require a complete lack of authentication or
authorization.

I can refuse connections to port 25 on my endpoint (mail server) from
hosts that do not conform to my requirements (e.g. those that do not
have forward-confirmed reverse DNS) without violating the end-to-end
design principle in any way.




The thing that has never sat well with me about ISP blanket port 25
blocking is that the fate sharing is not correct. If I have a mail server
and I refuse to take incoming connects from dynamic home IP
blocks, the fate sharing is correct: I'm only hurting myself if there's
collateral damage. When ISP's have blanket port 25 blocks, the two parties
to the intended conversation never get a say: things just break
mysteriously as far as both parties are concerned, but the ISP isn't
hurt at all. So they have no incentive to drop their false positive
rate. That's not good.

Mike





Re: The End-To-End Internet (was Re: Blocking MX query)

2012-09-04 Thread Michael Thomas

On 09/04/2012 09:34 AM, Daniel Taylor wrote:

If you are sending direct SMTP on behalf of your domain from essentially random 
locations, how are we supposed to pick you out from spammers that do the same?



Use DKIM.

Mike



Re: DNS caches that support partitioning ?

2012-08-17 Thread Michael Thomas

On 08/17/2012 01:32 PM, valdis.kletni...@vt.edu wrote:

On Fri, 17 Aug 2012 15:32:11 -0400, Andrew Sullivan said:

On Fri, Aug 17, 2012 at 04:13:09PM -, John Levine wrote:

The application I have in mind is to see if it helps to keep DNSBL
traffic, which caches poorly, from pushing other stuff out of the
cache, but there are doubtless others.

If it's getting evicted from cache because other things are getting
used more often, why do you want to put your thumb on that scale? The
other queries are presumably benefitting just as much from the caching.

I think John's issue is that he's seeing those other queries *not* benefiting
from the caching because they get pushed out by DNSBL queries that will likely
not ever be used again.  You don't want your cached entry for www.google.com
to get pushed out by a lookup for a dialup line somewhere in Africa.

If the DNSBL queries are not likely to be used again, why don't they
set their TTL way down?

In any case, DNSBL's use of DNS has always been a hack. If v6
causes the hack to blow up, they should create their own protocol
rather than ask how we can make the global DNS accommodate
their misuse of DNS.
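For reference, a DNSBL lookup is just a reversed-octet A query, and the TTL on
the answer is entirely under the list operator's control. A minimal sketch,
assuming dnspython and using a made-up zone name purely as an example:

    # DNSBL query sketch: NXDOMAIN means "not listed"; the rrset TTL is
    # whatever the list operator chose, and a short TTL keeps one-off
    # answers from crowding shared caches.
    import dns.resolver  # dnspython

    def dnsbl_lookup(ipv4, zone="dnsbl.example.org"):
        name = ".".join(reversed(ipv4.split("."))) + "." + zone
        try:
            answer = dns.resolver.resolve(name, "A")
        except dns.resolver.NXDOMAIN:
            return None, None
        return [r.address for r in answer], answer.rrset.ttl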

Mike



Re: using reserved IPv6 space

2012-07-18 Thread Michael Thomas

On 07/18/2012 06:10 AM, valdis.kletni...@vt.edu wrote:

On Wed, 18 Jul 2012 10:04:05 +0300, Saku Ytti said:


However I'm not sure what would be good seed? ISO3166 alpha2 +
domestic_business_id + 0..n (for nth block you needed)

You want to roll in at some entropy by adding in the current date or
something, so two Joes' Burritos and Internet in 2 different states don't
generate the same value.  There's a reason that 4193 recommends
a 64bit timestamp and an EUI64.



ulamart.com is available for the enterprising amongst us...

Mike




Re: job screening question

2012-07-10 Thread Michael Thomas

On 07/10/2012 03:56 AM, Bret Clark wrote:

On 07/10/2012 03:32 AM, goe...@anime.net wrote:

On Mon, 9 Jul 2012, Jeroen van Aart wrote:

William Herrin wrote:

This is, incidentally, is a detail I'd love for one of the candidates
to offer in response to that question. Bonus points if you discuss MSS
clamping and RFC 4821.

The less precise answer, path MTU discovery breaks, is just fine.

I would say that the ability to quickly understand, troubleshoot and find a
solution to a problem (and document it) is a far better skill to have than
having ready made answers to interview questions learned by heart.

It should take a skilled person less than 30 minutes to find the answer to
that question and understand it too. The importance of knowing many things by
heart has become incredibly moot.

If you are applying for a network position, you better know the *basics*.
Having to look up the basics is not a good sign.

Do you really want to hire someone who is going to have to look up basic
networking concepts for 30 minutes every time they are in a meeting and
asked a question?

-Dan


Hence the reason he mentioned skilled person...


This all has to be tempered with the zeitgeist: what is basic knowledge
now will be charming history at some point. All of it. No, a vampire tap has
nothing to do with Twilight. No, the difference between 74 and 54 series
logic is not 20. All of us oldsters would do well to try to keep up with what's
new and hip coming out of schools and grill them in an intelligent fashion.
Better yet, let them teach you something which shows if they understand
or whether they're just parroting stuff back.

Mike




Re: F-ckin Leap Seconds, how do they work?

2012-07-02 Thread Michael Thomas

On 07/02/2012 09:04 AM, Jay Ashworth wrote:

- Original Message -

From: Alex Harrowell a.harrow...@gmail.com
On 02/07/12 16:47, AP NANOG wrote:

Do you happen to know all the kernels and versions affected by this?

2.6.26 to 3.3 inclusive per news.ycombinator.com/item?id=4183122

Well, my 2.6.32 CentOS6/64 machine, which is not running Java, just purred
right along, logging the leapsecond at 7pm, and not even blinking, so...
(Amazon EC2, NTP enabled; 3 strat-2s from us.pool)



My CentOS 6/64 box running 3.0 seemed to weather it too. I'm not quite
clear on what I should be looking for to classify it as being broken though.

Mike



Re: LinkedIn password database compromised

2012-06-23 Thread Michael Thomas

On 06/23/2012 05:52 PM, Keith Medcalf wrote:

Leo,

This will never work.  The vested profiteers will all get together and make it a condition that 
in order to use this method the user has to have purchased a verified key from them.  
Every site will use different profiteers (probably whoever gives them the biggest kickback).


What is their leverage to extort? A web site just needs to make the
client and server end agree on a scheme, and they control both ends.
They can't force me to use their scheme any more than they can force
me to not use Basic and use their scheme instead. Keep in mind that
asymmetric keys do not imply certification, and asymmetric keys are
cheap and plentiful -- as in a modest amount of CPU time these days.
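How cheap? A self-generated, unsigned keypair -- the kind of key a browser
could mint on the user's behalf -- is a few milliseconds of CPU. A minimal
sketch, assuming the Python "cryptography" package:

    # Sketch: generate an unsigned keypair locally, sign a login challenge,
    # and let the site verify it -- no CA, no profiteers in the loop.
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    key = Ed25519PrivateKey.generate()          # takes milliseconds
    challenge = b"login-nonce-from-site"
    signature = key.sign(challenge)

    # The site stored key.public_key() at signup; verify() raises on failure.
    key.public_key().verify(signature, challenge)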

Mike


  You will end up paying thousands of dollars a year (as a user) to buy multiple keys from the profiteers, and provide them all sorts of 
private information in the process.  They will then also insist that web sites using this method provide tracking information 
on key usage back to the profiteers.  The profiteers will then sell all this information to whomever they want, for whatever purpose, so 
long as it results in a profit.  Some of this will get kicked back to participating web sites.  Then there will be affiliate 
programs where any web site can sign up with the profiteers to silently do key exchanges without the users 
consent so that more tracking information can be collected, for which the participating affiliate web site will get a kickback.  Browser 
vendors will assist by making the whole process transparent.  Contracts in restraint of trade will be enforced by the 
profiteers to prevent any browser from using the method unless it is completely invisible to the user.

Then (in the US) the fascist government will step in and require registration 
of issued keys along with the verified real-world identity linkage.

If it does not use self-generated unsigned keys, it will never fly.

---
()  ascii ribbon campaign against html e-mail
/\  www.asciiribbon.org



-Original Message-
From: Leo Bicknell [mailto:bickn...@ufp.org]
Sent: Wednesday, 20 June, 2012 15:39
To: nanog@nanog.org
Subject: Re: LinkedIn password database compromised

In a message written on Wed, Jun 20, 2012 at 02:19:15PM -0700, Leo Vegoda
wrote:

Key management: doing it right is hard and probably beyond most end users.

I could not be in more violent disagreement.

First time a user goes to sign up on a web page, the browser should
detect it wants a key uploaded and do a simple wizard.

   - Would you like to create an online identity for logging into web
 sites?Yes, No, Import

User says yes, it creates a key, asking for an e-mail address to
identify it.  Import to drag it in from some other program/format,
No and you can't sign up.

Browser now says would you like to sign up for website 'foobar.com',
and if the user says yes it submits their public key including the
e-mail they are going to use to log on.  User doesn't even fill out
a form at all.

Web site still does the usual e-mail the user, click this link to verify
you want to sign up thing.

User goes back to web site later, browser detects auth needed and
public key foo accepted, presents the cert, and the user is logged in.

Notice that these steps _remove_ the user filling out forms to sign up
for simple web sites, and filling out forms to log in.  Anyone who's
used cert-based auth at work is already familiar, the web site
magically knows you.  This is MUCH more user friendly.

So the big magic here is the user has to click on yes to create a key
and type in an e-mail once.  That's it.  There's no web of trust.  No
identity verification (a-la ssl).  I'm talking a very SSH like system,
but with more polish.

Users would find it much more convenient and wonder why we ever used
passwords, I think...

--
Leo Bicknell - bickn...@ufp.org - CCIE 3440
 PGP keys at http://www.ufp.org/~bicknell/










Re: Dear Linkedin,

2012-06-10 Thread Michael Thomas

On 06/10/2012 11:22 AM, John T. Yocum wrote:

A merchant can offer a cash discount.


I believe the law just recently changed on that account; what Barry
says was the old reality.

Mike


--John

On 6/10/2012 11:16 AM, Barry Shein wrote:


I was under the impression (I should dig out my contract) that
merchant contracts also forbid charging more for a charge than for
cash or conversely discount for cash! but I see so many violations
of that particularly at gas stations I wonder if that's negotiable in
the contract.

I remember my father buying a car and pulling out a credit card asking
if they accepted them? The dealer said sure no problem so he said fine
then take another 3% (whatever) off I'll pay cash/check.








Re: OT: Credit card policies (was Re: Dear Linkedin,)

2012-06-10 Thread Michael Thomas

On 06/10/2012 11:33 AM, Jay Ashworth wrote:

- Original Message -

From: Michael Thomasm...@mtcc.com
On 06/10/2012 11:22 AM, John T. Yocum wrote:

A merchant can offer a cash discount.

I believe that the law just recently changed on that account. I
believe that what Barry says was the old reality.

Perhaps, but Cash/Credit for gas dates back to before I moved to Florida in
1981.  Even Further Off-Topic, isn't debit supposed to be cash?  Why do
I pay the Credit price for it?



I dunno, maybe they're an exception? Maybe it had something to do
with competing with the old oil company credit cards?

Mike



Dear Linkedin,

2012-06-08 Thread Michael Thomas

Linkedin has a blog post that ends with this sage advice:

 * Make sure you update your password on LinkedIn (and any site that you visit 
on the Web) at least once every few months.

I have accounts at probably 100's of sites. Am I to understand that I am 
supposed to remember
each one of them and dutifully update them every month or two?

 * Do not use the same password for multiple sites or accounts.

So the implication is that I have 100's of passwords all unique and that I must
change every one of them to be something new and unique every few months.
And remember each of them. And not write them down.

 * Create a strong password for your account, one that includes letters, 
numbers, and other characters.

And that each of those passwords needs to be really hard to guess that I change 
to every
few months on 100's of web sites.

I'm sorry, my brain doesn't hold that many passwords. Unless you're a savant, 
neither does
yours. So what you're telling me and the rest of the world is impossible.

What's most pathetic about this is that somebody actually believes that we all 
really
deserve this finger wagging.

Mike




Re: Dear Linkedin,

2012-06-08 Thread Michael Thomas

On 06/08/2012 12:56 PM, Paul Graydon wrote:

Use a password safe.  Simple.  Most of them even include secure password 
generators.  That way you only have one password to remember stored in a 
location you have control over (and is encrypted), and you get to adopt secure 
practices with websites.

The only real inconvenience might be having to log into each of whatever sites 
it is you're concerned about and changing the password on them.


Does your password safe know how to change the password on each
website every several months?

Mike



Re: Dear Linkedin,

2012-06-08 Thread Michael Thomas

On 06/08/2012 01:24 PM, Paul Graydon wrote:

On 06/08/2012 10:22 AM, Michael Thomas wrote:

On 06/08/2012 12:56 PM, Paul Graydon wrote:

Use a password safe.  Simple.  Most of them even include secure password 
generators.  That way you only have one password to remember stored in a 
location you have control over (and is encrypted), and you get to adopt secure 
practices with websites.

The only real inconvenience might be having to log into each of whatever sites 
it is you're concerned about and changing the password on them.


Does your password safe know how to change the password on each
website every several months?

Mike

Oh come on.. now you're just being ridiculous, even bordering on childish.
LinkedIn are offering solid advice, rooted in safe practices.  If you don't 
want to do it that's your problem.  Stop bitching just because security is hard.


Uh, I'm not the one saying you should change your passwords every
month, Linkedin is. If you think it's childish, take it up with them.

Mike



Re: Dear Linkedin,

2012-06-08 Thread Michael Thomas

On 06/08/2012 01:24 PM, Paul Graydon wrote:

Oh come on.. now you're just being ridiculous, even bordering on childish.
LinkedIn are offering solid advice, rooted in safe practices.  If you don't 
want to do it that's your problem.  Stop bitching just because security is hard.


PS: when security is hard, people simply don't do it. Blaming the victim
of poor engineering that leads people to not be able to perform best
practices is not the answer.

Mike



Re: Dear Linkedin,

2012-06-08 Thread Michael Thomas

On 06/08/2012 01:35 PM, Lyndon Nerenberg wrote:

On 2012-06-08, at 1:22 PM, Michael Thomas wrote:


Does your password safe know how to change the password on each
website every several months?

Yes.


I run a website. If it can change it on mine, I'd like to understand
how it manages to do that.

Mike



Re: Dear Linkedin,

2012-06-08 Thread Michael Thomas

On 06/08/2012 01:41 PM, Alec Muffett wrote:

PS: when security is hard, people simply don't do it. Blaming the victim
of poor engineering that leads people to not be able to perform best
practices is not the answer.

Passwords suck, but they are the best that we have at the moment in terms of 
being cheap and free from infrastructure - see http://goo.gl/3lggk

We've been in a bubble for the past few years, where Moore's law hardware had 
not quite caught up with the speed of SHA and MD5 password hashing throughput 
for effective brute force guessing; that bubble is well and truly burst.

Welcome back to 1995 where the advice is to change your passwords frequently, 
because it has a half-life of usefulness imposed upon it from (a) day to day 
external exposure and (b) the march of technology - and keep your hashing 
algorithms up to date, too.  See http://goo.gl/iL9EP for suggestions.



A lot has changed from 1995, and still we're using technology that
is essentially unchanged from the 1960's. For my part, on my app/website
(Phresheez), the app actually auto-generates passwords for the user
so that they don't have to type one in. I do this mainly because people
hate typing on phones, but it has the nice property that if you have
a password exposure event, you do not have the cascading failure
mode that Linkedin has now unleashed. With apps and browsers that
can remember passwords, why are we still insisting that users generate
and remember their own bad passwords? That's one reason that I
find the finger wagging tone of that Linkedin post extremely problematic --
they have obviously never even considered thinking beyond the current
bad practice.
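For the curious, the generation side is trivial -- a minimal sketch (not the
actual Phresheez code) using Python's secrets module:

    # Per-site, per-user random password; the app or browser remembers it,
    # the human never has to.
    import secrets
    import string

    def generate_password(length=16):
        alphabet = string.ascii_letters + string.digits + "-_.!"
        return "".join(secrets.choice(alphabet) for _ in range(length))

    print(generate_password())   # e.g. 'q3X-tx_9RkEw.bZ1'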

Mike





Re: Password Safes

2012-06-08 Thread Michael Thomas

On 06/08/2012 02:01 PM, Lyndon Nerenberg wrote:

On 2012-06-08, at 1:41 PM, Michael Thomas wrote:


I run a website. If it can change it on mine, I'd like to understand
how it manages to do that.

I log in to your website, change my password, and the software picks up that 
I've changed the password and updates the safe accordingly.  The software 
doesn't initiate the password change, it just notices it and updates its 
database accordingly.  Sorry, I should have explained that more clearly.

If you have a Mac or a Windows box, download the 1Password 30 day trial and 
take it for a run.  It really is a useful bit of software.  No, it doesn't work 
on my *BSD, Solaris, or Plan 9 machines. But it does sync across all my Mac, 
Windows, and Android gear, and the Android client lets me pull up passwords on 
my phone when I'm on one of the systems that doesn't have a native 1Password 
client, or when I am on the road.



Ah, ok. Still, Linkedin's contention that I should log in to every account
that I've created and change the password is silly -- nobody's going
to do that.

That said, if there were a standardized way to get these password vault
software -- or whatever else wanted to manage them -- to do key refresh,
I'd be happy to implement it for my site. To my knowledge, such a protocol
does not exist.

Mike



Re: Dear Linkedin,

2012-06-08 Thread Michael Thomas

On 06/08/2012 05:59 PM, Ted Cooper wrote:


They have some things correct in this and some are complete hogwash.

Changing your password does not provide any additional security. It is
meant to give protection against your credentials having being
discovered, but if they have been compromised in that way, they'll have
the one you change it to in next to no time too. If the hashes have been
compromised, then yes, it's time to change the password.

Having a different password for every website is very important though,
as demonstrated many times when these lists of passwords and associated
usernames turn up. Anyone who uses the same password on multiple sites
will find that they have their accounts on multiple services accessed
instead of just the original.



I agree that it's important, but everything about the current state
of affairs makes that impossible except for geeks that care about
password vaults, apparently. The great unwashed masses, however,
do not do this and there is no reason to expect that they will do
it any time soon.

My own experience with auto-generating hard passwords and dealing
with password recovery is that it seems to work really well, and that
it puts the onus on the *website* instead of the user. Every browser
has a password rememberer these days that happily fills in your username
and password. Every app that needs access can do the same thing. It
doesn't get you key rotation [*], but with passwords which are essentially
random and unique per site it's less necessary because you don't have
the cross-site contamination vulnerability.

Mike

[*] key rotation is largely orthogonal, but I suppose that it's feasible to
cook up a scheme that even got you that.



Re: Wacky Weekend: The '.secure' gTLD

2012-05-31 Thread Michael Thomas

On 05/31/2012 05:43 PM, Grant Ridder wrote:

I think this is an interesting concept, but i don't know how well it will
hold up in the long run.  All the initial verification and continuous
scanning will undoubtedly give the .secure TLD a high cost relative to
other TLD's.



Countries would never all agree on what the definition of secure
was, so clearly we'll have to have

secure.ly
secure.it
secure.us
secure.no
...

Mike



Re: Wacky Weekend: The '.secure' gTLD

2012-05-31 Thread Michael Thomas

On 05/31/2012 06:16 PM, Fred Baker wrote:


not necessarily. It can be done with a laptop that does dig and sends email 
to the place.

What will drive the price up is the lawsuits that come out of the woodwork when they 
start trying to enforce their provisions. What? I have already printed my 
letterhead! What do you mean my busted DKIM service is a problem?

BTW, getting DKIM on stuff isn't the issue. I'm already getting spam with DKIM 
headers in it. It's getting the policy in place that if a domain is known to be 
using DKIM, to drop traffic from it that isn't signed or for which the 
signature fails.


Wow, I wouldn't have expected such a deep dive on DKIM here, but...

Last I heard, where we're at is sort of bilateral agreements between the
paypals of the world telling the gmails of the world to drop broken/missing
DKIM signatures. And that is between pretty specialized situations -- it
doesn't apply to corpro-paypal denizens, just their transactional mail.
The good news is that even though it's specialized, it's both high volume
and high value.

The big problem with a larger scope -- as we found out when I was still
at Cisco -- is that it's very difficult for $MEGACORP to hunt down
all of the sources of legitimate email that is sent in the name of
$MEGACORP. Some of these mail producers are ages old, unowned,
unmaintained, and still needed. It's very difficult to find them all,
let alone remediate them. So posting some policy like DROP IF
NOT SIGNED will send false positives to an unacceptable level
for $MEGACORP.  So the vast majority of Cisco's email is signed, but
not all of it. After 4 years away, I would be very surprised to hear that
has changed because IT really doesn't have much motivation to hunt
it all down even if it ultimately led to being able to make a stronger
statement.

One other thing:


That particular one is from an email sent to me by a colleague named Tony 
Lit...@cisco.com, who is a Cisco employee. It gives you proof that the 
message originated from Cisco, and in this case, that Cisco believes that it was 
originated by Tony Li.


In reality, Cisco doesn't know that it's really coming from Tony Li. We
never required authentication to submission servers. And even if we
did, it wouldn't be conclusive, of course.

A valid DKIM signature really says: we Cisco take responsibility for this
email. If it's spam, if it's spoofed from a bot, if it's somebody having
dubious fun spoofing Tony Li... you get no guarantee as the receiving
MTA that it isn't one of those, but you can reasonably complain to
Cisco if you're getting them because it's going through their
infrastructure. I think that's an incremental improvement because it
was far too easy for the $ISP's of the world to blow off complaints of
massive botnets on their networks because they could just say that
it must have been spoofed. If they sign their mail, it's now their
responsibility.

Mike



