Re: An update on the AfriNIC situation

2021-08-27 Thread Laszlo Hanyecz


On 2021-08-28 00:58, Tom Beecher wrote:
Fundamentally I think everyone should care about this situation. As I 
read it, it breaks down as:


- AFRINIC and Cloud Innovation are engaged in a dispute over number 
assignment policies.
- AFRINIC invokes the clause that they are reclaiming the space in 
question.
- Cloud Innovation files for garnishment, stating that AFRINIC is 
'taking' IP addresses worth millions of dollars, and that it is 
therefore entitled to damages.
- The courts grant the garnishment, although such garnishment is at 
Cloud Innovation's risk. (Meaning if they are challenged and lose, 
they are on the hook for damages for taking the action.)
- However, in the process, since Cloud Innovation has claimed enough 
damages to freeze all of AFRINIC's accounts, AFRINIC now has no 
accessible money to protect its legal rights over the IP addresses, or 
to defend itself against the damages seizure.


As Mr. Curran put forth a week ago, 'property rights' of IP 
allocations are an unsettled area. What happens if this holds up and 
the court there rules that you 'own' the IPs assigned to you by an 
RIR until you give them back? That sure creates a mess for any RIR's 
view that allocations are simply entries in a database.


This thing about entries in a database seems to be used when it's 
convenient to downplay the market value and operational importance of IP 
addresses.  Even the people leading the crusade against Cloud Innovation 
in this thread are claiming that it's about millions of addresses for 
dramatic effect.  This kind of dispute is going to happen more as the 
value of addresses goes up and the RIRs realize what a gold mine they 
have by shaking down spammers and other unpopular resource holders.  I 
imagine some of the operators on this list have resources they could 
return if they wanted to.  Maybe some will, but they'd be better off 
selling them before the RIRs decide to expand their scope and start mass 
reclaiming for profit. Hopefully RPKI, AS0, etc. won't be abused to 
reclaim resources just because they're valuable.


How long would it take before some clown in a boardroom decides they 
want to latch onto that ruling and do something stupid to 'maximize 
shareholder value' and there's an expensive legal brawl over that? How 
long before people start popping up and laying claim as the 'rightful 
owner' of addresses from the origin days? Do we really want to see 
RIRs and our companies dealing with BS lawsuits for things like this? 
It only has to work once...






On Fri, Aug 27, 2021 at 11:37 AM Bill Woodcock wrote:


As many of you are aware, AfriNIC is under legal attack by Heng Lu
/ “Cloud Innovation.”

John Curran just posted an excellent summary of the current state
of affairs here:


https://teamarin.net/2021/08/27/afrinic-and-the-stability-of-the-internet-number-registry-system/



If, like me, you feel like chipping in a little bit of money to
help AfriNIC make payroll despite Heng having gotten their bank
accounts frozen, some of the African ISP associations have put
together a fund, which you can donate to here:

https://www.tespok.co.ke/?page_id=14001


It’s an unfortunate situation, but the African Internet community
has really pulled together to defend themselves, and they’ve got a
lot less resources than most of us do.

                               -Bill





Re: "Hacking" these days - purpose?

2020-12-14 Thread Laszlo Hanyecz




On 2020-12-14 16:48, Mark Tinka wrote:



On 12/14/20 18:38, David Bass wrote:

It becomes more clear when you think about the options out there, and 
get a little creative.  Nowadays it's definitely chess that's being 
played.


You're right, it really doesn't take much. Preying on humanity can 
yield great results.


One that has started springing up in my neck of the woods - to 
simplify car-jacking - is to obtain a list of customers that 
subscribe to a vehicle tracking service. The thugs will then call a 
customer, claiming their tracking device is faulty and needs to be 
checked physically. The thugs will come to your home or office, tell 
you that in order to finalize the fix, they need to test drive your 
car. And boom, that's your car gone!


The hacking, now, IMHO, is to obtain user information to profile who 
is exploitable, and how. After that, low-tech rules.


Mark.



This stuff is definitely the most visible type of scamming, but it's 
not any different from swindling people at a flea market.  It isn't so 
much hacking as just using the internet to communicate with people and 
then tricking them.  I think that's a different skill set than gaining 
access to personal data, though.


Gaining access to someone else's computer files has historically not 
been a big deal; I'm guessing it never became a huge problem because 
there was little to gain from doing it.  It might be inconvenient for 
people, and it might be used as part of a larger con against a victim, 
but it still requires a lot more steps to profit from it.  We all know 
that we can't stop it from happening, yet even going back to the early 
90s we've had malware protection vendors making money off this fear, 
and the problem has now reached a point where placebo security won't 
cut it and we'll have to start figuring this problem out.


The impact of these kinds of breaches has always been minor, but in the 
past 10 years we've placed more and more things into primary storage on 
a computer, including cryptographic secrets which only function if 
they're kept secret.  Losing a wallet full of credit cards isn't as bad 
as losing a wallet full of cash.  There wasn't any way to put money into 
computer files before, but now there is.  Even if only a few people 
carry money, if it's easy to steal millions of wallets and costs 
nothing, it's worth doing in the hope of eventually hitting a money 
holder.


-Laszlo






Re: "Hacking" these days - purpose?

2020-12-14 Thread Laszlo Hanyecz

Bitcoin.

There wasn't much purpose to 'hacking' for a long time.  Even when 
talking about DDoS, it's still just temporary vandalism: only an 
inconvenience, and it can be undone pretty quickly.  The whole idea of 
providing security has been turned into a wink-wink scam where people 
pretend to do busy work for money, but everyone knows you'll still get 
breached, and it doesn't really matter so long as you can blame it on 
someone else and it's in the fine print.  Look at what a business DDoS 
has become, both on the provider and the protection side.


Stealing data is also a thing but even that is not inherently valuable 
unless you can blackmail the victim or sell it to a buyer. That kind of 
business requires more skills than just computer hacking to pull off, 
and carries a lot of risk in dealing with other humans who already know 
you're a data thief.


This all changed with bitcoin, because now simply gaining access and 
finding the data is the pay dirt and it can be claimed anonymously 
without dealing with any other humans.


-Laszlo


On 2020-12-12 22:26, Peter E. Fry wrote:


Simple question: What's the purpose of obtaining illicit access to 
random devices on the Internet these days, considering that a large 
majority of attacks are now launched from cheap, readily available and 
poorly managed/overseen "cloud" services?  Finding anything worthwhile 
to steal on random machines on the Internet seems unlikely, as does 
obtaining access superior (in e.g. location, bandwidth, anonymity, 
etc.) to the service from which the attack was launched.



I was thinking about this the other day as I was poking at my 
firewall, and hopped onto the archives (here and elsewhere) to see if 
I could find any discussion.  I found a few mentions (e.g. "Microsoft 
is hacking my Asterisk???"), but I didn't catch any mention of 
purpose.  Am I missing something obvious (either a purpose or a 
discussion of such)?  Have I lost my mind entirely? (Can't hurt to 
check, as I'd likely be the last to know.)



Peter E. Fry






Re: Abuse Desks

2020-04-29 Thread Laszlo Hanyecz




On 2020-04-29 17:51, Mukund Sivaraman wrote:

On Wed, Apr 29, 2020 at 01:49:14PM -0400, Tom Beecher wrote:

What if I am at home, and while working on a project, fire off a wide
ranging nmap against say a /19 work network to validate something
externally? Should my ISP detect that and make a decision that I shouldn't
be doing that, even though it is completely legitimate and authorized
activity? What if I fat fingered a digit and accidentally ran that same
scan against someone else's /19? Should that accidental destination of
non-malicious scans be able to file an abuse report against me and get my
service disconnected because they didn't like it?

Abuse departments should be properly handling LEGITIMATE abuse complaints.
Not crufty background noise traffic that is never going away.

Sure. Handling legitimate abuse complaints would be quite sufficient. :)

Mukund


Since this is a distributed network and there's no central authority 
to rule on whether each incident is legitimate, the only way to stay 
out of the politics is to ignore people's abuse complaints.  Someone's SSH 
server is being spammed with probes?  That's pretty low bandwidth, not 
much threat to the network from a cracking script.  Maybe you don't like 
it, maybe it's criminal or whatever else, but ostensibly it's some 
paying customer's traffic and it should be delivered unmolested.  When 
someone's infrastructure is getting packeted or having their routers 
crashed repeatedly, they respond to that, usually without having to be 
emailed, because it's actual abuse of their network.  A lot of this 
other stuff is just people abusing the abuse contacts to get someone 
else taken offline.  Phishing websites fall into this category - it's 
not network abuse, it's just content someone doesn't like, and one way 
to get it taken down is to threaten the network that carries the traffic 
for it.


-Laszlo





Re: OT: Tech bag

2019-08-02 Thread Laszlo Hanyecz




On 2019-08-02 16:42, James Downs wrote:

On Fri, Aug 02, 2019 at 11:19:08AM -0500, Hunter Fuller wrote:


This one has since been released, and it has a laptop compartment. My

Yeah, I definitely look for some sort of laptop compartment. If not
padded on its own, I stick the laptop into a padded sleeve. I run one
of these: https://tacticalgear.com/511-all-hazards-prime-backpack-black

And subdivide for a particular loadout with various smaller cases like:
https://countycomm.com/collections/view-all-storage-products/products/apx-multi-purpose-dual-zip-case-by-maratac

or something similar to these: 
https://www.casesbysource.com/category/soft-padded-cases

Unfortunately Deep Outdoors, who made a number of great soft-sided padded cases
has gone out of business...



I use GORUCK GR1
https://www.goruck.com/gr1
It has a padded laptop compartment.

-Laszlo



Re: It's been 20 years today (Oct 16, UTC). Hard to believe.

2018-10-17 Thread Laszlo Hanyecz




On 2018-10-17 02:35, Michael Thomas wrote:
I believe that the IETF party line these days is that Postel was wrong 
on this point. Security is one consideration, but there are others.


Postel's maxim also allowed extensibility.  If our network code rejects 
(or crashes) on things we don't currently understand and use, it ensures 
that they can't be used by apps that come along later either.  The 
attitude of rejecting everything in the name of security is what has 
forced app developers to tunnel APIs and everything else inside HTTP/DNS.
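
To make the extensibility point concrete, here's a minimal sketch in 
Python, using a made-up type-length-value format: the liberal parser 
skips unknown field types and stays forward-compatible, while the 
strict one rejects anything a later app might add.

    def parse_tlvs(buf, strict=False):
        # buf is a sequence of (type, length, value...) byte triples
        i, fields = 0, []
        while i + 2 <= len(buf):
            ftype, flen = buf[i], buf[i + 1]
            value = buf[i + 2:i + 2 + flen]
            if ftype in (1, 2):            # the types this version understands
                fields.append((ftype, value))
            elif strict:
                # "reject everything" mode: future extensions break here
                raise ValueError("unknown field type %d" % ftype)
            i += 2 + flen                  # liberal mode: skip and move on
        return fields

    # type 3 is unknown today; liberal parsing ignores it, strict parsing dies
    frame = bytes([1, 2, 0xAA, 0xBB, 3, 1, 0xCC])
    print(parse_tlvs(frame))               # [(1, b'\xaa\xbb')]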




Mike

On 10/16/2018 07:18 PM, b...@theworld.com wrote:

What it's trying to say is that you have control over your own code
but not others', in general.

So make your own code (etc) robust and forgiving since you can't edit
others' code to conform to your own understanding of what they should
be sending you.

I suppose that pre-dates github but nonetheless much of the code which
generates bits flung at you is proprietary and otherwise out of your
control but what you can control is your code's reaction to it.

And of course the bits you generate should make conservative
assumptions about what others might accept, so they interpret them as
you expect.

For example just because they sent you a seemingly malformed HTTP
request, and given that 4xx is for error codes, doesn't mean you
should return "420 You must be high!" and expect to be understood.







Re: Waste will kill ipv6 too

2017-12-28 Thread Laszlo Hanyecz



On 2017-12-28 17:55, Michael Crapse wrote:

Yes, let's talk about waste. Let's waste 2^64 addresses for a ptp.
If that was ipv4, you could recreate the entire internet with that many
addresses.


After all these years people still don't understand IPv6 and that's why 
we're back to having to do NAT again, even though we now have a 
practically endless supply of integers.  If we could have all agreed to 
just do /64+/48 to every edge host/router, no questions asked, we'd 
never have to talk about this again.  Playing tetris with addresses had 
to be done with IPv4 but it's not even remotely a concern with IPv6 - 
the idea of waste and sizing networks is a chore that doesn't need to be 
thought about anymore.  As you say, if you have a /64, you could run the 
entire internet with it, if you really wanted to do the kinds of hacks 
we've been doing with v4, but the idea is that you don't need to do any 
of that.
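
To put numbers on "practically endless", a quick back-of-the-envelope 
using only Python's standard library (the /48 and /64 sizes are the 
ones discussed in this thread):

    import ipaddress

    link = ipaddress.ip_network("2001:db8::/64")   # one point-to-point link

    print(2 ** (64 - 48))                  # 65536 /64s inside a single /48
    print(link.num_addresses)              # 18446744073709551616 per link
    print(link.num_addresses // 2 ** 32)   # ~4.3 billion IPv4 internets per /64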


-Laszlo



On 28 December 2017 at 10:39, Owen DeLong  wrote:


On Dec 28, 2017, at 09:23, Octavio Alvarez wrote:

On 12/20/2017 12:23 PM, Mike wrote:

On 12/17/2017 08:31 PM, Eric Kuhnke wrote:
Call this the 'shavings'. In IPv4, for example, when you assign a P2P
link with a /30, you are using 2 and wasting 2 addresses. But in IPv6,
due to ping-pong and so many technical manuals and other advice,
you are told to 'just use a /64' for your point to points.

Isn't it a /127 nowadays, per RFC 6547 and RFC 6164? I guess the
exception would be if a router does not support it.

Best regards,
Octavio.

Best practice used most places is to assign a /64 and put a /127 on the
interfaces.

Owen







Re: Geolocation: IPv4 Subnet blocked by HULU, and others

2017-12-27 Thread Laszlo Hanyecz



On 2017-12-27 22:38, Jima wrote:

On 2017-12-27 14:10, Jared Mauch wrote:
On Dec 27, 2017, at 3:50 PM, Grant Taylor via NANOG  
wrote:
Doesn't Hulu (et al) have an obligation to provide service to their 
paying customers?


Does this obligation extend to providing service independent of the 
carrier that paying customers use?


Or if Hulu choose to exclude known problem carriers (i.e. VPN 
providers) don't they have an obligation to confirm that their 
exclusions are accurate?  Further, to correct problems if their data 
is shown to be inaccurate?


I have a suspicion that these folks acquired IP space that was 
previously marked as part of a VPN provider, or Hulu is detecting it 
wrongly as VPN provider IP space.


I was sitting on this, but what the heck.

I personally am curious as to what bug and/or feature allowed a random 
WISP in Utah (or the parent-ish ISP in New Jersey) to have IP space 
allocated from AfriNIC.


One might consider Hulu et al not so at-fault with that fact in 
consideration.


- Jima


Addresses aren't an identity, nor are they tied to a physical location, 
so this is pretty irrelevant.  What Hulu should be doing is asking the 
user where they're located, instead of trying to tell them.  This thread 
happens here a couple of times a week, and the frequency will increase 
as addresses are recycled.  Clearly there is a lot of collateral damage 
from using GeoIP, but it mostly works on the big national ISPs, so they 
still make money.  The WISPs and other small ISPs are an acceptable 
amount of loss, I guess.  The problem is that this is Hulu's fault but 
the pain is felt by everyone except them, so they have no reason to 
want to stop doing this.


-Laszlo



Re: Long AS Path

2017-06-20 Thread Laszlo Hanyecz


On 2017-06-20 23:12, James Braunegg wrote:

Dear All

Just wondering if anyone else saw this yesterday afternoon ?

Jun 20 16:57:29:E:BGP: From Peer 38.X.X.X received Long AS_PATH= AS_SEQ(2) 174 
12956 23456 23456 23456 23456 23456 23456 23456 23456 23456 23456 23456 23456 
23456 23456 23456 23456 23456 23456 23456 23456 23456 23456 23456 23456 23456 
23456 23456 23456 ... attribute length (567) More than configured MAXAS-LIMIT

Jun 20 16:15:26:E:BGP: From Peer 78.X.X.X received Long AS_PATH= AS_SEQ(2) 5580 
3257 12956 23456 23456 23456 23456 23456 23456 23456 23456 23456 23456 23456 
23456 23456 23456 23456 23456 23456 23456 23456 23456 23456 23456 23456 23456 
23456 23456 23456 ... attribute length (568) More than configured MAXAS-LIMIT

Someone is having fun, creating weird and wonderful long AS paths based around 
AS 23456, we saw the same pattern of data from numerous upstream providers.

Kindest Regards,

James Braunegg









Could be this: 
http://blog.ipspace.net/2009/02/root-cause-analysis-oversized-as-paths.html
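
For anyone unfamiliar with the knob in those log lines, a minimal 
sketch of what a MAXAS-LIMIT style check amounts to; the limit value 
here is made up, and AS 23456 is AS_TRANS, the placeholder ASN that 
appears where 4-byte AS numbers meet 2-byte-only BGP speakers:

    MAXAS_LIMIT = 75   # hypothetical configured ceiling

    def violates_maxas(as_path, limit=MAXAS_LIMIT):
        # count the ASNs in the received AS_PATH; drop/log if too long
        return len(as_path) > limit

    # AS 23456 repeated by runaway prepending, as in the logs above:
    path = [174, 12956] + [23456] * 140
    print(violates_maxas(path))   # True -> the router logs and ignores it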




Re: Spitballing IoT Security

2016-10-27 Thread Laszlo Hanyecz

On 2016-10-27 23:24, Ronald F. Guilmette wrote:

I put forward what I think is a reasonably modest scheme to try to get
IoT things to place hard limits on their "unsolicited" packet output at
the kernel level, and I'm going to go off now and try to find and then
engage some Linux embedded kernel people and see what they think.  Maybe
the whole thing is a dumb idea and not worth pursuing, but I'm not
convinced of that yet.  So I'll go off, investigate in some more
appropriate forum, and report back here if/when I have anything useful
to say.

Hacking embedded kernels to make them fault-tolerant, even in the event
of attackers getting a root shell prompt, isn't going to save the world
from DDoS attacks, but it may be one small part of the solution.


Regards,
rfg


This doesn't make sense to me.  When the device is compromised, the 
default software with the restrictions will just be reconfigured or 
replaced.  This process is similar to installing DD-WRT, or even a 
simple update from the vendor, for example.  Botnets download and 
install the software they require and often they close the original 
infection vector to prevent another botnet from reinfecting.  Check out 
the Mirai source code that was posted.


-Laszlo



Re: Death of the Internet, Film at 11

2016-10-21 Thread Laszlo Hanyecz


On 2016-10-22 00:39, Ronald F. Guilmette wrote:

P.S.  To all of you Ayn Rand devotees out there who still vociferously
argue that it's nobody else's business how you monitor or police your
"private" networks, and who still refuse to take even minimalist steps
(like BCP 38), congratulations.


What does BCP38 have to do with this?  All that does is block one 
specific type of attack (and cause a lot of collateral damage).  The IoT 
devices do not need to spoof addresses - they can just generate attack 
traffic directly.  This is even better, because you can't cut those 
eyeball addresses off - those are the same addresses your target 
audience is using.  If you cut off the eyeball networks there's not much 
point to running an internet business website anymore.


-Laszlo



Re: Request for comment -- BCP38

2016-09-26 Thread Laszlo Hanyecz



On 2016-09-26 18:03, John Levine wrote:

If you have links from both ISP A and ISP B and decide to send traffic
out ISP A's link sourced from addresses ISP B allocated to you, ISP A
*should* drop that traffic on the floor.

This is a legitimate and interesting use case that is broken by BCP38.

I don't agree that this is legitimate.

Also we're talking about typical mom & pop home users here.

There are SOHO modems that will fall back to a second connection if
the primary one fails, but that's not what we're talking about here.

The customers I'm talking about are businesses large enough to have
two dedicated upstreams, and a chunk of address spaced SWIP'ed from
each.  Some run BGP but I get the impression as likely as not they
have static routes to the two upstreams.

For people who missed it the last time, I said $50K/mo, not $50/mo.  Letters 
matter.


This doesn't have to be $50k/mo though.  If the connections weren't 
source address filtered for BCP38 and you could send packets down either 
one, the CPE could simply start with 2 default routes and take one out 
when it sees a connection go down.  This could work even with a cable + 
DSL combination.  It would be easy to further refine which connection to 
use for a particular service by simply adding a specific route for that 
service's address.  This would be a lot better than having to restart 
everything after one of the connections fails, and it would provide 
functionality similar to the BGP setup without any additional work from 
the service provider.  People can't build CPE software that does this 
type of connection balancing because they can't rely on it working, due 
to BCP38 implementations.  In my experience the only way you can get 
people to stop source address filtering is if you mention BGP, but BGP 
shouldn't be required to do this.
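
A minimal sketch of that CPE behavior, assuming a Linux-based CPE and 
the iproute2 "ip" command; the interface names, gateways, and the 
reachability probe are all illustrative.  Note it only helps if the 
upstreams accept packets regardless of source address, which is the 
point being made:

    import subprocess
    import time

    LINKS = {
        "cable": {"dev": "eth0", "gw": "192.0.2.1",    "metric": 100},
        "dsl":   {"dev": "eth1", "gw": "198.51.100.1", "metric": 200},
    }

    def probe(link):
        # crude health check: one ping to the far-end gateway on this interface
        r = subprocess.run(["ping", "-c", "1", "-W", "1",
                            "-I", link["dev"], link["gw"]], capture_output=True)
        return r.returncode == 0

    def set_default(link, up):
        # install or withdraw this link's default route; metrics pick the winner
        action = "replace" if up else "delete"
        subprocess.run(["ip", "route", action, "default", "via", link["gw"],
                        "dev", link["dev"], "metric", str(link["metric"])],
                       capture_output=True)

    while True:
        for link in LINKS.values():
            set_default(link, probe(link))   # drop a default when its link dies
        time.sleep(10)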


-Laszlo



R's,
John





Re: Request for comment -- BCP38

2016-09-26 Thread Laszlo Hanyecz


On 2016-09-26 15:12, Hugo Slabbert wrote:


On Mon 2016-Sep-26 10:47:24 -0400, Ken Chase  wrote:

This might break some of those badly-behaving "dual ISP" COTS routers
out there that use different inbound from outbound paths, since each is
the fastest of either link.


As it should.

If you have links from both ISP A and ISP B and decide to send traffic 
out ISP A's link sourced from addresses ISP B allocated to you, ISP A 
*should* drop that traffic on the floor.  There is no automated or 
scalable way for ISP A to distinguish this "legitimate" use from 
spoofing; unless you consider it scalable for ISP A to maintain 
thousands if not more "exception" ACLs to uRPF and BCP38 egress 
filters to cover all of the cases of customers X, Y, and Z sourcing 
traffic into ISP A's network using IPs allocated to them by other ISPs?




This is a legitimate and interesting use case that is broken by BCP38.  
The effectiveness of BCP38 at reducing abuse is dubious, but the 
benefits of asymmetric routing are well understood.  Why should everyone 
have to go out of their way to break this?  It works fine if you just 
don't mess with it.


If you want to play asymmetry tricks, get some PI space and make 
arrangements.  If that's outside your wheelhouse, get an ISP that will 
sell this to you as a service either with dissimilar links they 
provide to you or over-the-top with tunnels etc.


Playing NAT games with different classes of traffic to e.g. send 
traffic type 1 over ISP A and traffic type 2 over ISP B *BUT* using 
the corresponding source addresses in each case and having the traffic 
return back over the same links is fine and dandy.  If you send 
traffic into an ISP-provided link using addresses from another 
provider, though, that ISP *should* be dropping that traffic.  If they 
don't, send them here so we can yell at them.




So instead of being able to use simple destination-based routes to 
direct their traffic, like the service provider can, the CPE operator 
has to learn and implement policy-based routing and manage state to 
juggle each of the IP addresses they are assigned.  It's orders of 
magnitude harder to do this with the current ecosystem of routers/CPEs 
than it is to add a destination route.  I think stuff like this is one 
of the reasons why many are hesitant to implement this type of 
filtering.  It makes a specific type of abuse easier to track down *for 
someone else*, but it doesn't help you much, and it can cause debugging 
nightmares when something doesn't work due to filtering.
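
For contrast, a minimal sketch of the source-policy routing the CPE 
operator gets pushed into, again assuming Linux iproute2; the prefixes, 
gateways, and table numbers are illustrative:

    import subprocess

    def ip(*args):
        subprocess.run(["ip", *args], check=True)

    # one routing table per upstream, each with its own default route
    ip("route", "add", "default", "via", "192.0.2.1",    "dev", "eth0", "table", "101")  # ISP A
    ip("route", "add", "default", "via", "198.51.100.1", "dev", "eth1", "table", "102")  # ISP B

    # source-address rules steer packets to the table matching the prefix
    # they were sourced from, so each ISP only sees its own space leaving
    ip("rule", "add", "from", "192.0.2.16/28",    "table", "101")
    ip("rule", "add", "from", "198.51.100.16/28", "table", "102")

A plain destination-based setup needs none of this per-source state; 
that asymmetry in effort is the complaint.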


-Laszlo

I did this manually when I was messing around with multiple broadband
links on a fbsd router years ago, was glad it worked at the time.

/kc


On Mon, Sep 26, 2016 at 07:11:42AM -0700, Paul Ferguson said:
> No -- BCP38 only prescribes filtering outbound to ensure that no
> packets leave your network with IP source addresses which are not
> from within your legitimate allocation.
>
> - ferg
>
> On September 26, 2016 7:05:49 AM PDT, Stephen Satchell wrote:
>> Is this an accurate thumbnail summary of BCP38 (ignoring for the
>> moment the issues of multi-home), or is there something I missed?
>>
>>> The basic philosophy of BCP38 boils down to two axioms:
>>>
>>> Don't let the "bad stuff" into your router
>>> Don't let the "bad stuff" leave your router
>>>
>>> The original definition of "bad stuff" is limited to source-
>>> address grooming both inbound and outbound. I've expanded on the
>>> original definition by including rule generation to control
>>> broadcast address abuse.
>
> --
> Sent from my Android device with K-9 Mail. Please excuse my brevity.

--
Ken Chase - m...@sizone.org Toronto Canada






Re: Looking for recommendations for a dedicated ping responder

2016-09-09 Thread Laszlo Hanyecz


On 2016-09-09 19:52, Dan White wrote:

Are there any products you're using which are dedicated to responding to
customer facing pings?


PaaS (pong-as-a-service)?




Re: Handling of Abuse Complaints

2016-08-29 Thread Laszlo Hanyecz
I know this is against the popular religion here but how is this abuse 
on the part of your customer?  Google, Level3 and many others also run 
open resolvers, because they're useful services. This is why we can't 
have nice things.



On 2016-08-29 15:55, Jason Lee wrote:

NANOG Community,

I was curious how various players in this industry handle abuse complaints.
I'm drafting a policy for the service provider I'm working for about
handling of complaints registered against customer IP space. In this example
I have a customer who is running an open resolver and have received a few
complaints now regarding it being used as part of a DDoS attack.

My initial response was to inform the customer and ask them to fix it. Now
that it's still ongoing over a month later, I'd like to take action to
remediate the issue myself with ACLs but our customer facing team is
pushing back and without an idea of what the industry best practice is,
management isn't sure which way to go.

I'm hoping to get an idea of how others handle these cases so I can develop
our formal policy on this and have management sign off and be able to take
quicker action in the future.

Thanks,

Jason




Re: Netflix banning HE tunnels

2016-06-08 Thread Laszlo Hanyecz



On 2016-06-08 18:57, Javier J wrote:

Tony, I agree 100% with you. Unfortunately I need ipv6 on my media subnet
because it's part of my lab. And now that my teenage daughter is
complaining about Netflix not working on her Chromebook, I'm starting to
think consumers should just start complaining to Netflix. Why should I have
to change my damn network to fix Netflix?

In her eyes it's "daddy fix Netflix" but the heck with that. The man hours
of the consumers who are affected to work around this issue is less than
the man hours it would take for Netflix to redirect you with a 301 to an
ipv4 only endpoint.

If Netflix needs help with this point me in the right direction. I'll be
happy to fix it for them and send them a bill.



They're doing the same thing with IPv4 (banning people based on the 
apparent IP address).  Your IPv4 numbers may not be on their blacklist 
at the moment, and disabling IPv6 might work for you, but the underlying 
problem is the practice of GeoIP/VPN blocking, and the HE.net tunnels 
are just one example of the collateral damage.


I don't know why Netflix and other GeoIP users can't just ask customers 
where they are located, instead of telling them.  It is possible that 
some user might lie, but what about "assume good faith"?  It shows how 
much they value you as a customer if they would rather dump you than 
trust you to tell them where you are located.


-Laszlo




Re: Netflix VPN detection - actual engineer needed

2016-06-08 Thread Laszlo Hanyecz



On 2016-06-08 16:12, Owen DeLong wrote:


It’s a link, just like any other link, over which IPv6 can be transmitted.
You can argue that it’s a lower quality link than some alternatives, but I have
to tell you I’ve gotten much more reliable service at higher bandwidth from
that link than from my T-Mobile LTE service, so I’d argue that it is a higher
quality service than T-Mobile.




Well there is one good thing that might come out of this if you're a 
tunnel user.. the tunnels can have even more bandwidth now, with all the 
Netflix traffic moving off them.  I have no special visibility into how 
(over)loaded they are, just speculating.


-Laszlo



Re: Netflix VPN detection - actual engineer needed

2016-06-06 Thread Laszlo Hanyecz

On 2016-06-06 19:39, Christopher Morrow wrote:


​Doing any sort of 'authentication' or 'authorization' on src-IP is just ..
broken.​




This.

Netflix is pretending to have a capability (geolocation by src ip) that 
doesn't exist and there is collateral damage from the application of 
their half baked solution.  Those who end up getting dropped as 
collateral damage are rightly upset about the discrimination.


-Laszlo



Re: Netflix VPN detection - actual engineer needed

2016-06-06 Thread Laszlo Hanyecz


On 2016-06-06 15:21, Tore Anderson wrote:


But Netflix shouldn't have any need to ask in the first place. Their
customers need to log in to their own personal accounts in order to
access any content, when they do Netflix can discover their addresses.

Tore


Hey there's an idea, how about they ASK the users where they are 
located, instead of telling them where they are located.  Presumably a 
user will have a new billing address when they move to a new place.  
That ought to be a lot more accurate than lookup based on a static map 
of number -> location.  I don't think this is too crazy of an idea.. my 
car insurance company asks me what zip code I keep my cars in.  Netflix 
could ask people what zip code they watch video from.


-Laszlo



Re: Netflix VPN detection - actual engineer needed

2016-06-05 Thread Laszlo Hanyecz


On 2016-06-05 23:45, Damian Menscher wrote:
Who are these non-technical Netflix users who accidentally stumbled 
into having a HE tunnel broker connection without their knowledge?  I 
wasn't aware this sort of thing could happen without user consent, and 
would like to know if I'm wrong.  Only thing I can imagine is if ISPs 
are using HE as a form of CGN.


Another question: what benefit does one get from having a HE tunnel 
broker connection?  Is it just geek points, or is there a practical 
benefit too?


Damian


Well, you could use the HE.net tunnels to work around the problem if 
their GeoIP checks block you in the first place.
HE.net tunnelbroker is commonly used by home users on ISPs which don't 
provide v6 on their own, like Verizon's fios.  Home routers generally 
have support for this built in and it doesn't take someone with a lot of 
technical knowledge to set it up.


You can also set up BGP with HE and they will give you free transit on 
the free tunnel and accept your announcements.  Personally I have set it 
up with and without BGP at small office locations as a way to provide 
IPv6 to the office workers, when only v4 was available.  You just click 
to get a HE.net /48.


For P2P stuff it's a way to get around NAT - you can get inbound torrent 
connections or host a shooting game match on your desktop behind the NAT 
router.


-Laszlo



Re: Netflix VPN detection - actual engineer needed

2016-06-05 Thread Laszlo Hanyecz



On 2016-06-05 22:48, Damian Menscher wrote:


What *is* standard about them?  My earliest training as a sysadmin taught
me that any time you switch away from a default setting, you're venturing
into the unknown.  Your config is no longer well-tested; you may experience
strange errors; nobody else will have seen the same bugs.

That's exactly what's happening here -- people are setting up IPv6 tunnel
broker connections, then complaining that there are unexpected side
effects.



Damian,

If we were talking about some device that is outputting incorrect 
packets and failing to work with Netflix, I would agree with you, but in 
this case the packets are standard and everything works fine.  Netflix 
went out of their way to find a way to make it not work.  It isn't the 
users and geeks who are breaking stuff and expecting others to work 
around their broken setup - that is actually what Netflix is doing.  
All Netflix can look at is the content of the packet, and so they're 
using the source address to discriminate.  It is true that some users 
might be able to work around it if they can get on an ISP that gives 
them an allowed address, but that isn't a good solution for an open 
internet.


There are a lot of non technical Netflix users who are being told to 
turn off IPv6, switch ISPs, get a new VPN, etc. because Netflix has a 
broken system.  Those users don't care what IPv6 is, they just learn 
that it's bad because it breaks Netflix.  Most users have no way to 
change these things and they just aren't going to be able to use Netflix 
anymore.  That's a very selfish way to operate, a huge step backwards, 
and it's a kick in the balls to everyone who works to make technological 
progress on the internet.   The simple truth is that Netflix is trying 
to figure out where people are located, but this is not possible to do 
reliably with current internet technology.  Instead they did something 
that is unreliable, and many customers become collateral damage through 
no fault of their own. All the breakage is on the Netflix side.


-Laszlo



Re: Netflix VPN detection - actual engineer needed

2016-06-05 Thread Laszlo Hanyecz


On 2016-06-05 21:18, Damian Menscher wrote:

This entire thread confuses me. Are there normal home users who are being
blocked from Netflix because their ISP forces them through a HE VPN?  Or is
this massive thread just about a handful of geeks who think IPv6 is cool
and insist they be allowed to use it despite not having it natively?  I
could certainly understand ISP concerns that they are receiving user
complaints because they failed to provide native IPv6 (why not?), but
whining that the non-standard network setup you've managed to create
doesn't work with some providers seems a bit silly.

Damian


I think this thread is specifically about bashing Netflix for blocking 
HE, but the root of the problem is in trying to use the apparent source 
address of a packet to determine where a person might be located.


In this case Netflix is deliberately trying to fight VPNs and the users 
understand what's going on.  Usually a blocked user can't even load the 
website they are blocked from, so they can't even complain, unless they 
happen to notice that it works from some other ISP (at home/work 
perhaps).  In these situations people blame the network/ISP, and that's 
the part that ticks off the admins of those networks.  Try explaining to 
a complaining user that it's the website's fault while it works from 
their friend's connection.


For another example, some CDN hosts offer their customers the ability to 
block requests based on GeoIP country - this is a terrible idea for 
obvious reasons but that doesn't stop CDNs from offering it, and of 
course website owners fall for it and enable it.  Then what happens is 
there are a bunch of users who can't access the site at all.  It makes 
no sense because they are not 'bad guys' and they're not from the wrong 
country, so what gives?  Well it's just collateral damage, they can move 
to a major city and use a major national ISP that's in the database.  
Maybe they're on a HE tunnel, maybe they're on a new ISP who just got 
their netblocks.. in the end they are blacklisted and to those users it 
just looks like the website operator went out of business.  How 
widespread is this problem?  For me, the websites of the local public 
school system and a major local grocery store block based on GeoIP and I 
can't access them because my numbers aren't in their db.  There are city 
services sites that I can't access without jumping through hoops with 
proxies or VPNs.  I've personally tried to complain to several of these 
website operators and even after escalations the best I can get is "did 
you try clearing your cookies".  It's not good.


-Laszlo



Re: Netflix VPN detection - actual engineer needed

2016-06-03 Thread Laszlo Hanyecz


On 2016-06-03 19:37, Matthew Huff wrote:

I would imagine it was done on purpose. The purpose of the Netflix VPN 
detection was to block users from outside of different regions due to 
content providers' requests. Since HE provides free ipv6 tunnels, it's 
an easy way to get around the blockage, hence the restriction.




I know this isn't news to anyone on the list but I want to point out 
that the root of this problem is in trying to attach an Earth location 
to a network packet.  The only good solution we have for this is to ASK 
the user where they are located.  Netflix has a broken system that is 
causing a lot of collateral damage because the whole thing is based on 
the premise that they can determine where the users are by guessing.  If 
you just got your netblock it's probably going to be banned because it's 
not in their GeoIP database.  Maybe if you jump through all the right 
hoops, in a few months time they will update the database.


Working around it just sends the message that this is an acceptable 
practice, and you will own the problems they caused.  This is a 
widespread problem and not specific to Netflix.


There's also another angle to this in that old IP addresses (that work 
with Netflix/youtube/whatever) become more valuable and newly registered 
netblocks (like the ones everyone should be getting for IPv6) are not 
useful.  This might be a good way to keep new ISPs out too, unless they 
can pay for a well aged IPv4 block so their subscribers can access 
Netflix and friends.


-Laszlo



Re: NIST NTP servers

2016-05-13 Thread Laszlo Hanyecz


On 2016-05-13 14:12, Lamar Owen wrote:

On 05/11/2016 09:46 PM, Josh Reynolds wrote:

maybe try [setting up an NTP server] with an odroid?


...


You really have to have at least a temperature compensated quartz 
crystal oscillator (TCXO) to even begin to think about an NTP server, 
for anything but the most rudimentary of timing.




There are WWVB clocks that try to sync nightly.  Many of them don't even 
have a second indicator, but they give reliable time to the minute.  NTP 
is a lot better than this as it continuously disciplines the clock 
instead of just lining it up once a day, but we're talking about doing 
this over the internet where we measure latency in milliseconds.  If 
you're working down at the picosecond level you will probably not be 
using NTP to distribute your clock signal.  Running an NTP client 
against pool servers is a lot better than not running it at all, but 
running it against a fancy local server with a GPSDO hooked up to it is 
only marginally better than the pool servers.


It all depends on what you want to do, but a cheap ARM or Intel Atom 
computer works well for an NTP server (remember, millisecond-level 
accuracy).  If you can afford to build a secure bunker with armed guards 
and redundant everything for your time server, that's good, but a few 
RPi-style computers with GPS hats are almost as good, and you can buy a 
lot of them for very little money.


-Laszlo



Re: NIST NTP servers

2016-05-10 Thread Laszlo Hanyecz


On 2016-05-10 15:36, Mike wrote:

On 5/10/2016 11:22 AM, Leo Bicknell wrote:

In a message written on Mon, May 09, 2016 at 11:01:23PM -0400, b f wrote:

In search of stable, disparate stratum 1 NTP sources.

http://wpollock.com/AUnix2/NTPstratum1PublicServers.htm


We tried using “time.nist.gov” which returns varying round-robin addresses
(as the link says), but Cisco IOS resolved the FQDN and embedded the
numeric address in the “ntp server” config statement.

Depending on your hardware platform your Cisco Router is likely not
a great NTP server.  IOS is not designed for hyper-accuracy.


After letting the new server config go through a few days of update cycles,
the drift, offset and reachability stats are not anywhere as good as what
the stats for the Navy time server are - 192.5.41.41 / tock.usno.navy.mil.

The correct answer here is to run multiple NTP servers in your
network.  ...
[snip]


I think the correct answer here starts with a question --- what level of
time accuracy is required for the local NTP server(s)? Which then begs
the question, what level of accuracy is needed for the clients?

A shop with a client need for nanosecond accuracy begs for an entirely
different solution set than a shop where a millisecond of accuracy is
needed on the clients, and still a different solution set that a shop
where "a few milliseconds either way" is quite OK.




You can get pretty nutty with this, and it's fun to do, but regular NTP 
over the internet is good enough for millisecond accuracy.  A default 
install with pool servers is pretty good.  A custom config with a local 
NTP server (with less, possibly more symmetric, network latency) is a 
little better, but for common sysadmin needs like cron jobs and log 
correlation, you likely won't notice a difference.  I would worry more 
about having that config distributed correctly and monitoring all your 
servers to make sure their NTP is healthy, rather than worrying about 
the source of your NTP sync.  The pool servers are fine, and NTP is good 
at deciding when they're acting up.

The computer running NTP doesn't have to be anything special, but beware 
of VMs - depending on the virtualization type and the guest OS, you may 
not even be able to get NTP to engage because of the clock instability.  
Chrony might work better for VMs.  For a linux NTP server, I prefer 
modern Intel CPUs with an invariant TSC - linux will use it as a 
clocksource (cat 
/sys/devices/system/clocksource/clocksource0/current_clocksource).

A Raspberry Pi or something in between also works equally well if you're 
going to be syncing this over a jittery shared network anyway.  I would 
suggest having more than one server, in different locations if you can, 
and if you're able to supplement with pool servers, add those too.  The 
most likely failure modes are that it doesn't work at all because it's 
not running, that it can't correct the local clock because of excess 
instability, or that you lost all network connections.  Worrying about 
whether you have 4 or 8 servers is kind of moot given any of those (much 
more likely) faults.
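
A minimal monitoring sketch in that spirit, using the third-party 
ntplib package (pip install ntplib); the hostnames and threshold are 
placeholders:

    import ntplib

    SERVERS = ["ntp1.example.net", "ntp2.example.net", "0.pool.ntp.org"]
    MAX_OFFSET = 0.050   # 50 ms, generous for millisecond-level goals

    client = ntplib.NTPClient()
    for host in SERVERS:
        try:
            resp = client.request(host, version=3, timeout=2)
            state = "ok" if abs(resp.offset) < MAX_OFFSET else "DRIFTING"
            print("%s: offset %+.6fs stratum %d %s"
                  % (host, resp.offset, resp.stratum, state))
        except Exception as exc:   # NTPException, socket timeouts, DNS failures
            print("%s: unreachable (%s)" % (host, exc))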


-Laszlo




Re: Friday's Random Comment - About: Arista and FIB/RIB's

2016-04-29 Thread Laszlo Hanyecz

On 2016-04-29 12:48, Nick Hilliard wrote:

Alain Hebert wrote:

 PS: "Superfluous" is a nice way to say that the best path of a
subnet is the same as his supernet.

... from the point of view of the paths that you see, which is to say
two egress paths.  Someone else on the internet may have a different set
of bgp views which will give a different set of results for the bgp
decision process.  The more paths you receive from different sources,
the more likely it is that this list of 120k "superfluous" prefixes will
converge towards zero.

You're right that it's often not necessary to accept all paths, and your
fib view can be optimised in a way that your rib shouldn't be.  All these
things can be used to drop the forwarding lookup engine resource
requirements, although it is important to understand that there is no
such thing as a free lunch and if you do this, there might well be edge
cases which could cause your optimisation to fail and things to blow up
horribly in your face.  Still, it's an interesting thing to examine.

Nick


What Nick said is basically what I was asking about in the Arista 
thread.  Are there new edge cases and new failure modes that are 
introduced by this strategy?  It seems like you'd have to recompute the 
minimal set of forwarding rules each time a prefix is added or removed, 
and a single update may cause you to have to do many adds/removes to 
bring your compressed rules into sync, like when a hole is punched in an 
aggregated prefix.
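
A minimal sketch of the kind of compression being discussed, under the 
simplest possible rule - suppress a prefix when its longest covering 
prefix in the FIB already points at the same next hop.  The prefixes and 
next hops are illustrative, and real implementations are far more 
involved:

    import ipaddress

    def compress(rib):
        # rib: {prefix: next_hop}; process supernets first
        fib = {}
        for p in sorted(rib, key=lambda n: n.prefixlen):
            covers = [c for c in fib if c != p and p.subnet_of(c)]
            best = max(covers, key=lambda c: c.prefixlen, default=None)
            if best is None or fib[best] != rib[p]:
                fib[p] = rib[p]   # keep: uncovered, or next hop differs
        return fib

    rib = {
        ipaddress.ip_network("10.0.0.0/8"):  "A",
        ipaddress.ip_network("10.1.0.0/16"): "A",  # same NH as the /8: suppressed
        ipaddress.ip_network("10.2.0.0/16"): "B",  # punches a hole: kept
    }
    print(compress(rib))

    # one withdrawal forces a recompute: without the /8, the suppressed
    # /16 has to be reinstalled, which is exactly the churn in question
    del rib[ipaddress.ip_network("10.0.0.0/8")]
    print(compress(rib))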


I'm curious about specific failure modes that can result from this, if 
anyone can share examples/experience with it.


Thanks,
Laszlo





Re: Arista Routing Solutions

2016-04-28 Thread Laszlo Hanyecz


On 2016-04-28 11:06, Alain Hebert wrote:

 Well,

 Once you eliminate the ~160k superfluous prefixes (last time I
checked)...  This is a non-issue.

 Some work on some sort of summary function would keep those devices
alive...  but we all know there is more money to be made the faster the
device becomes obsolete :(




Can you explain how this works?  How can a router determine which prefix 
is superfluous?  How does it cope when a suppressed prefix is withdrawn 
or a more specific prefix is added? Is this just one of those 'it works 
some of the time' solutions or is this something that can be done safely 
with an appropriate algorithm?


Thanks,
Laszlo



Re: GeoIP database issues and the real world consequences

2016-04-13 Thread Laszlo Hanyecz


On 2016-04-13 05:57, Todd Crane wrote:

As to a solution, why don’t we just register the locations (more or less) with 
ARIN? Hell, with the amount of money we all pay them in annual fees, I can’t 
imagine it would be too hard for them to maintain. They could offer it as part 
of their public whois service or even just make raw data files public.

Just a thought

—Todd




Ultimately these services want to locate users, not routers, servers, 
tablets and such.  If you want to answer the question "where is the 
user?" then you have to ask them - only they know the answer - not their 
ISP, not ARIN, not DNS.  If you really insist on using the IP address, 
then maybe you could connect to it and ask it, like an identd scheme.  
This could be built into a web browser and prompt the user asking 
permission.  As long as we're using a static list of number -> location 
we will just be guessing and hoping they stay near the assumed location 
and we're not too wrong.  This whole practice of trying to map network 
numbers is the problem.


Also note that one of the things that wasn't explicitly mentioned in the 
original article but was hinted at was the use of something similar to 
Skyhook, another static list of address -> location. It sounded like the 
'find my phone' services were leading people to an Atlanta home based on 
having a wireless access point that was recorded as being there.  This 
is similarly wrong, but not the same as geolocating IP addresses.  It 
geolocates wireless AP MAC addresses.  You can really see this break 
down when the wireless AP is on a bus.


-Laszlo



Re: GeoIP database issues and the real world consequences

2016-04-11 Thread Laszlo Hanyecz


On 2016-04-11 18:15, John Levine wrote:



Bodies of water probably are the least bad alternative.  I wonder if
they're going to hydrolocate all of the unknown addresses, or only the
ones where they get publicly shamed.

R's,
John


I imagine some consumers of the data will 'correct' the position to fall 
on the nearest road in front of the nearest house.


-Laszlo





Re: GeoIP database issues and the real world consequences

2016-04-11 Thread Laszlo Hanyecz
Why not use the locations of their own homes?  They're indirectly 
sending mobs to randomly chosen locations.  There's enough middle men 
involved so they can all say they're doing nothing wrong, but wrong is 
being done.


-Laszlo


On 2016-04-11 17:34, Steve Mikulasik wrote:

Just so everyone is clear, Maxmind is changing their default locations.

" Now that I’ve made MaxMind aware of the consequences of the default locations it’s 
chosen, Mather says they’re going to change them. They are picking new default locations 
for the U.S. and Ashburn, Virginia that are in the middle of bodies of water, rather than 
people’s homes."






Re: announcement of freerouter

2015-12-28 Thread Laszlo Hanyecz

Mike,

Csaba's front page previously described the software as being a 
'routerOS', like in the very first sentence on the page.  I'm assuming 
that the person who complained about that didn't read past the first 
sentence and just wanted to troll.  It's obvious to me that decades of 
work have gone into this free router software, and the term router OS 
was just being used to describe what the software does - an OS for a 
router.


It looks to me like the author has a deep understanding of networking to 
be able to implement all this from scratch and I think we can learn a 
lot from reading this code.  He's also giving it away for free, which is 
hard to argue with.


-Laszlo


Re: Question re session hijacking in dual stack environments w/MacOS

2015-09-28 Thread Laszlo Hanyecz

On 2015-09-27 12:24, John Schimmel wrote:

Most Web application firewalls have cross-site request forgery protection.
When a form is downloaded, the firewall inserts a hidden field or cookie
that contains the IP address of the request.  When the form is submitted,
the firewall then verifies that the post is sent from the same address.


This reminds me of ICMP blocking which breaks path MTU discovery and 
thus blocks all users with < 1500 MTU.


The technique described here doesn't sound like it would protect from 
XSS or CSRF; it would just introduce seemingly random failures like the 
OP described.  The point of trying to tie the apparent network address 
to a session is to make session hijacking harder, not to stop local 
scripting attacks (which could come from the same address anyway), but 
it's a bad idea regardless: there is not normally a reason for a session 
to be 'sticky' in this way, so no effort is made to keep the same 
address - it just happens by accident sometimes.  Making this work so 
the WAF can be happy is in conflict with actually useful things like 
load balancing, cache proxies, privacy addresses, etc.  It probably 
works some percentage of the time for some users, and those who it 
doesn't work for just get blamed for having a bad 
browser/computer/ISP/whatever.  I hope that as the failure rate 
increases, people using these solutions eventually realize that they're 
blocking themselves off from the net.


-Laszlo




Re: Question re session hijacking in dual stack environments w/MacOS

2015-09-26 Thread Laszlo Hanyecz


On 2015-09-26 14:34, David Hubbard wrote:

Websites that require some type of authentication that is handled via
session cookies have been booting our users out randomly with "your ip
address has changed" type message.  This occurs when their Mac decides
to switch between protocols because the site views it as a session
hijacking attempt when Joe User with session ID xyz switches from
192.0.2.10 to 2001:db8::1:1:a or vice versa.




This sounds like a really poor practice on the part of the website 
operators.  Users on wireless devices may be switching networks 
throughout the same session (wifi/LTE), or there could be a cluster of 
proxies, or short DHCP leases, or tor circuit changes, or privacy 
extensions, etc.  This is almost as bad as using GeoIP databases to 
authenticate.
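
For illustration, the check being criticized amounts to something like 
this minimal sketch; the names and addresses are made up:

    sessions = {}   # session_id -> first-seen source address

    def same_origin(session_id, remote_addr):
        first = sessions.setdefault(session_id, remote_addr)
        return first == remote_addr   # False -> "your ip address has changed"

    print(same_origin("xyz", "192.0.2.10"))       # True: first request over IPv4
    print(same_origin("xyz", "2001:db8::1:1:a"))  # False: same Mac, now on IPv6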


-Laszlo




Re: Dual stack IPv6 for IPv4 depletion

2015-07-09 Thread Laszlo Hanyecz
On Jul 9, 2015, at 11:08 PM, Owen DeLong o...@delong.com wrote:

 
 On Jul 9, 2015, at 15:55 , Ricky Beam jfb...@gmail.com wrote:
 
 On Thu, 09 Jul 2015 18:23:29 -0400, Naslund, Steve snasl...@medline.com 
 wrote:
 That would be Tivo's fault wouldn't it.
 
 Partially, even mostly... it's based on Bonjour. That's why the shit doesn't 
 work over the internet.
 
 (It's just http/https, so it will, in fact, work, but their apps aren't 
 designed to work that way. Many 3rd party control apps have no problems.)
 
 Correct… It _IS_ TiVO’s fault. However, the reality I’m trying to point out 
 is that application developers make assumptions based
 on the commonly deployed environment that they expect in the world.
 
 If we create a limited environment, then that is what they will code to.
 
 If we deliver /48s, then they will come up with innovative ways to make use 
 of those deployments. If we deliver /56s, then innovation will be constrained 
 to what can be delivered to /56s, even for sites that have /48s.
 
 Owen
 

I would love to see things go Owen's way.. /48s everywhere, everyone agrees 
it's a good idea, and we can just assume that it will work.  We can move on, 
this is one less thing to stress about.

On the other hand, I do wonder how this will work, even if most people are 
getting /48s.  Perhaps in a few years we'll be past all this, and there will be 
a well accepted standard way.  Maybe it will be RIPng.  Maybe something 
we haven't seen yet.  Or maybe there will be 800 ways of doing it, because the 
protocol spec allows that, and we'll complicate our lives further by 
requiring everyone to support every possible combination.  This is the 
worst possible outcome because it means unnecessary complexity, more work for 
everyone involved, and less reliability.

If you're writing an application, do you bother supporting /48, /56, RA, DHCP, 
etc, while also supporting an https polling mechanism for the times when none 
of that stuff works?  We can pretend that it doesn't matter and that software 
should 'just work' with any network, but that's simply not possible for many 
applications.  I think as an application developer, you're much better off 
aiming for the least common denominator, accepting the limitations of that, and 
just moving on.  This means polling, reflectors, NAT, proxyarp, etc.  Things 
that you can control, to make your app work.  Supporting a bunch of different 
ways is a waste of everyone's time and just makes your application less 
reliable and harder to test.  Unless your specific application benefits greatly 
from a more capable network, it's probably not worth even thinking about, as 
long as you know that you will still have to support the 'bad' ones.

A music streaming application can use a hardcoded well known server name to 
access a centralized service.  It can even communicate with other users by 
using that central server as a database or reflector.  It would be 'nice' if it 
could ask the network for a prefix and use a different address for each peer it 
talks to, but what's the point in developing that, if you still have to support 
the other case?

A wifi hotspot device would benefit from prefix delegation, but it could of 
course use NAT or proxying without the cooperation of the network.  This is one 
application where it might be worth supporting all the different combinations, 
but it means that all those different methods need to be tested, and they can 
break in different ways, and there's no way to be sure it's right.

Choice is good, you can run your own network any way you want, etc, but it's 
not good when people are making choices just for the sake of being different 
and incompatible.  After all, the point of the internet is to communicate with 
everyone else, which means we all need to agree on how we will communicate.  
How can we expect everyone to embrace IPv6 if we can't even provide a 
straightforward procedure to get connected to it?

-Laszlo





Re: Android (lack of) support for DHCPv6

2015-06-11 Thread Laszlo Hanyecz
Lorenzo is probably not going to post anymore because of this.

It looks to me like Lorenzo wants the same thing as most everyone here, aside 
from the university net nazis, and he's got some balls to come defend his 
position against the angry old men of NANOG.  Perhaps the approach of attacking 
DHCP is not the right one, but it sounds like his goal is to make IPv6 better 
than how IPv4 turned out.

Things like privacy extensions, multiple addresses and PD are great because 
they make it harder for people to do address based tracking, which is generally 
regarded as a desirable feature except by the people who want to do the 
tracking.  DHCPv6 is a crutch that allows operators to simply implement IPv6 
with all the same hacks as IPv4 and continue to do address based access 
control, tracking, etc.  It's like a 'goto' statement - it can be used to do 
clever things, but it can also be used to hack stuff and create very hard to 
fix problems down the road.  I think what Lorenzo is trying to do is to use his 
influence/position to forcefully prevent people from doing this, and while that 
may not be the most diplomatic way, I admire his courage in posting here and 
trying to reason with the mob.

-Laszlo


On Jun 10, 2015, at 10:24 PM, Michael Thomas m...@mtcc.com wrote:

 On 06/10/2015 02:51 PM, Paul B. Henson wrote:
 From: Lorenzo Colitti
 Sent: Wednesday, June 10, 2015 8:27 AM
 
 please do not construe my words on this thread as being Google's position
 on anything. These messages were sent from my personal email address, and I
 do not speak for my employer.
 Can we construe your postings on the issue thread as being Google and/or 
 Android's official position? They are posted by lore...@google.com with a tag 
 of "Project Member", and I believe you also declined the request in the 
 issue under that mantle.
 
 
 Oh, stop this. The only thing this will accomplish is a giant black hole of 
 silence from anybody at Google and any other $MEGACORP
 in a similar situation.
 
 Mike



Re: Android (lack of) support for DHCPv6

2015-06-11 Thread Laszlo Hanyecz
On Jun 12, 2015, at 12:51 AM, Ray Soucy r...@maine.edu wrote:

 That's really not the case at all.  
 
 You're just projecting your own views about not thinking DHCPv6 is valid and 
 making yourself and Lorenzo out to be some sort of victims of NANOG and 
 the ... 
 

DHCPv6 and Android are just collateral damage here but I think the argument is 
about steering what the generally accepted form of end user IPv6 on WiFi will 
be.  It would be great if we could agree on that so we don't all have to write 
support for many different ways and provide complicated user interfaces for 
configuring it, right?  Plug and play?

  university net nazis
 
 Did you really just write that?  
 

As far as 'net nazi' goes, I meant it in the same sense as a BOFH: someone who 
is intentionally degrading a user's experience by using technical means to 
block specifically targeted applications or behaviors.  And 'angry old men' is 
also not meant literally; it's an observation of how this has turned into a 
flame war, with a lot of seemingly angry people mobbing the Android developer.

 What we're arguing for here is choice, the exact opposite of the association 
 you're trying to make here.  It's incredibly poor taste to throw that term 
 around in this context, and adds nothing to the discussion.
 
 People are not logical.  They adopt a position and then look for information 
 to support it rather than counter it; they even go as far as to ignore or 
 dismiss relevant information in the face of logic.  That's religion.  And 
 this entire discussion continues to be rooted in religion rather than 
 pragmatism.
 
 DHCPv6 is a tool, just as SLAAC is a tool.  IPv6 was designed to support both 
 options because they both have valid use cases.  Please allow network 
 operators to use the best tool for the job instead of telling us all we're 
 required to do it your way (can you even see how ridiculous this whole nazi 
 name calling is given the position you're taking)

Without getting into all the 'actually, there is edge case X' discussions: when 
you connect to a WiFi network at an office, home or public place today, it's 
pretty 'standard' to find a DHCP server handing out RFC 1918 IPv4 addresses 
and recursive name servers, with the network doing some form of NAT or proxying.  
This is pretty much what we expect when we open up a laptop and connect to a 
network, and if it doesn't work, we call the help desk and ask why it doesn't do 
what we expect.  Every user application that wants to do peer-to-peer 
networking has to come up with some complicated workaround to communicate 
through the various forms of NAT and proxies.

What do we expect to happen with regard to IPv6? I think it would be great if 
end to end connectivity was common enough that application developers could 
assume it will be there, and avoid having to do those workarounds.  On the 
other hand, if it becomes common and acceptable to use DHCPv6 to provide a 
single address only, then applications will just circumvent it once again with 
things like NAT, VPNs and reflector servers, which actually makes it worse for 
everyone involved.

 
 You don't get to just say I'm not going to implement this because I don't 
 agree with it, which is what Google is doing in the case of Android.
 
 The reason Lorenzo has triggered such a backlash on NANOG is that his 
 fundamental argument on why he doesn't see DHCPv6 as valid for the Android is 
 quite frankly a very weak argument at best.  If you're going to stand up and 
 say you're not going to do what everyone else has already done, especially 
 when it comes to implementation of fundamental standards that everything 
 depends upon, you need to have a better reason for it than the one Lorenzo 
 provided.
 

It seems like several people have taken the position that they will use their 
influence to steer others away from Android because it doesn't work with their 
chosen network configuration.  This to me sounds very much like Android taking 
the position that the network should support their chosen address configuration 
protocol instead of that other one.  I think in the end we're going to find 
that both the network side and the client side end up having to support the 
whole matrix of possible configurations, if the end goal is to provide a good 
user experience.  But this is not a good OS-developer or network-operator 
experience, because it creates more work for everyone, and more trouble for 
users when the complicated workarounds don't work.

-Laszlo

 I honestly hope he collects himself and takes the time to respond, because it 
 really is a problem.
 
 As much as you may not want DHCPv6 to be a thing, it's already a thing.
 
 
 
 
 
 On Thu, Jun 11, 2015 at 7:42 PM, Laszlo Hanyecz las...@heliacal.net wrote:
 Lorenzo is probably not going to post anymore because of this.
 
 It looks to me like Lorenzo wants the same thing as most everyone here, aside 
 from the university net nazis, and he's got some balls to come defend

Re: Android (lack of) support for DHCPv6

2015-06-11 Thread Laszlo Hanyecz
"Your phone doesn't work with our network, so you should buy one that does"
vs.
"Hey, we can't connect; fix your network."

Kind of similar to the streaming video vs. eyeball network thing: blaming the 
bad user experience on the other guy.

-Laszlo



On Jun 12, 2015, at 2:18 AM, Matthew Petach mpet...@netflight.com wrote:

 On Wed, Jun 10, 2015 at 8:26 AM, Lorenzo Colitti lore...@colitti.com wrote:
 Ray,
 
 please do not construe my words on this thread as being Google's position
 on anything. These messages were sent from my personal email address, and I
 do not speak for my employer.
 
 Regards,
 Lorenzo
 
 
 Ah, Lorenzo, Lorenzo...
 
 I was going to just let the thread go quietly by until you pulled
 out the "I'm not speaking for my employer" card.  :(
 
 Can we take what you posted here
 https://code.google.com/p/android/issues/detail?id=32621#c53
 from your google.com account to be official Google
 position, when you closed the issue requesting DHCPv6
 support as "Declined"?
 
 Again, in comment #109
 https://code.google.com/p/android/issues/detail?id=32621#c109
 you speak from your Google.com account when you repeat
 *twice* the position that you won't support stateful DHCPv6:
 "and not via stateful DHCPv6 address assignment" followed by
 "while continuing not to support DHCPv6 address assignment."
 
 It's hard to not see _that_ as being Google's position, when you
 post it from your google.com account in response to an issue raised
 about broken functionality on the Android platform.  So perhaps
 you're right, and the words you use on _this_ thread are your
 personal opinion; unfortunately, they seem to be the same
 words and opinions you use from your google.com account when
 denying input from Android users who don't seem to want
 their devices to be crippled by incomplete DHCPv6 support.
 
 I wonder at what point large enterprises will simply say
 "sorry, without working DHCPv6 support, Android devices
 will not be supported on this network" -- at which point this
 will stop being a religious issue, and will shift to being a
 business issue, as Google will have to decide whether
 being stubbornly dogmatic while losing large customers
 is worth it or not.
 
 Thanks!
 
 Matt
 
 PS--just because some poor unfortunate soul found a
 way to scrape neighbor tables to work around the lack
 of DHCPv6 lease logs does *not* make it a practical
 or wise alternative.   A certain network has been trying
 to test out that workaround, and every time they scrape
 the neighbor table, the CPU on the routers pegs at 100%.
 
 PPS--I am likewise posting this from my personal
 account (which is still running an old enough Cisco
 image that it pre-dates IPv6 support entirely, making
 most of this a moot point for me personally).   The
 opinions expressed here are purely my own, and
 should in no way be construed to apply to anyone
 but myself, and possibly the mice living in the garage.



Re: Small IX IP Blocks

2015-04-04 Thread Laszlo Hanyecz
Mike,

I think it's fine to cut it up smaller than /24, and it might actually help in 
keeping people from routing the IX prefix globally.

-Laszlo


On Apr 5, 2015, at 12:35 AM, Mike Hammett na...@ics-il.net wrote:

 Okay, so I decided to look at what current IXes are doing. 
 
 It looks like AMS-IX, Equinix and Coresite as well as some of the smaller 
 IXes are all using /64s for their IX fabrics. Seems to be a slam dunk then as 
 how to handle the IPv6. We've got a /48, so a /64 per IX. For all of those 
 advocating otherwise, do you have much experience with IXes? Multiple people 
 talked about routing. There is no routing within an IX. I may grow, but an IX 
 in a tier-2 American city will never scale larger than AMS-IX. If it's good 
 enough for them, it's good enough for me. 
 
 Back to v4, I went through a few pages of PeeringDB and most everyone used a 
 /24 or larger. INEX appears to use a /25 for each of their segments. IX 
 Australia uses mainly /24s, but two locations split a /24 into /25s. A couple 
 of the smaller single location US IXes used /25s and /26s. It seems there's 
 precedent for people using smaller than /24s, but it's not overly common. 
 Cash and address space preservation. What does the community think about IXes 
 on smaller than /24s? 
 
 
 
 
 
 
 - 
 Mike Hammett 
 Intelligent Computing Solutions 
 http://www.ics-il.com 
 
 
 
 - Original Message -
 
 From: Brendan Halley bren...@halley.net.au 
 To: Mike Hammett na...@ics-il.net 
 Cc: nanog@nanog.org 
 Sent: Saturday, April 4, 2015 6:10:34 PM 
 Subject: Re: Small IX IP Blocks 
 
 
 IPv4 and IPv6 subnets are different. While a single IPv4 is taken to be a 
 single device, an IPv6 /64 is designed to be treated as an end user subnet. 
 https://tools.ietf.org/html/rfc3177 section 3. 
 On 05/04/2015 9:05 am, Mike Hammett  na...@ics-il.net  wrote: 
 
 
 That makes sense. I do recall now reading about having that 8 bit separation 
 between tiers of networks. However, in an IX everyone is supposed to be able 
 to talk to everyone else. Traditionally (AFAIK), it's all been on the same 
 subnet. At least the ones I've been involved with have been single subnets, 
 but that's v4 too. 
 
 
 
 
 - 
 Mike Hammett 
 Intelligent Computing Solutions 
 http://www.ics-il.com 
 
 
 
 - Original Message - 
 
 From: Valdis Kletnieks  valdis.kletni...@vt.edu  
 To: Mike Hammett  na...@ics-il.net  
 Cc: NANOG  nanog@nanog.org  
 Sent: Saturday, April 4, 2015 5:49:37 PM 
 Subject: Re: Small IX IP Blocks 
 
 On Sat, 04 Apr 2015 16:06:02 -0500, Mike Hammett said: 
 
 I am starting up a small IX. The thought process was a /24 for every IX 
 location (there will be multiple of them, geographically disparate), even 
 though we never expected anywhere near that many on a given fabric. Then 
 okay, how do we do v6? We got a /48, so the thought was a /64 for each. 
 
 You probably want a /56 for each so you can hand a /64 to each customer. 
 
 That way, customer isolation becomes easy because it's a routing problem. 
 If customers share a subnet, it gets a little harder.
 
 
 
 



Re: Purpose of spoofed packets ???

2015-03-10 Thread Laszlo Hanyecz
Is it possible that they are getting return traffic and it's just a localized 
activity?  The attacker could announce that prefix directly to the target 
network in an IXP peering session (maybe with no-export) so that it wouldn't 
set off your bgpmon.  I guess that would make more sense if they were doing 
email spamming instead of ssh though.

-Laszlo

On Mar 10, 2015, at 11:51 PM, Roland Dobbins rdobb...@arbor.net wrote:

 
 On 11 Mar 2015, at 6:40, Matthew Huff wrote:
 
 I assume the source address was spoofed, but this leads to my question. 
 Since the person that submitted the report didn't mention a high packet rate 
 (it was on ssh port 22), it doesn't look like some sort of SYN attack, but 
 any OS fingerprinting or doorknob twisting wouldn't be useful from the 
 attacker if the traffic doesn't return to them, so what gives?
 
 Highly-distributed, pseudo-randomly spoofed SYN-flood happened to momentarily 
 use one of your addresses as a source.  pps/source will be relatively low, 
 whilst aggregate at the target will be relatively high.
 
 Another very real possibility is that the person or thing which sent you the 
 abuse email doesn't know what he's/it's talking about.
 
 ;
 
 ---
 Roland Dobbins rdobb...@arbor.net



RE: Industry standard bandwidth guarantee?

2014-10-31 Thread Laszlo Hanyecz
If you're selling to end users, under-promise and over-deliver.  Tell them
20 Mbit but provision for 25.  That way, when they run their speedtest,
they're delighted that they're getting more, instead of being disappointed
and feeling screwed.  In practice they will leave it idle most of the time
anyway.
This isn't a technical problem; it's just a matter of setting expectations
and satisfying them.  Some of the customers might be completely clueless,
but if your goal is to make them happy, then explaining protocol overhead is
probably not the right way.
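
For what it's worth, the overhead arithmetic itself is short.  A
back-of-the-envelope sketch in Python, assuming a standard 1500-byte Ethernet
MTU and full-size TCP/IPv4 segments with no options:

    payload = 1500 - 20 - 20           # MTU minus IPv4 and TCP headers = 1460
    on_wire = 1500 + 14 + 4 + 8 + 12   # + Ethernet header, FCS, preamble, gap
    print(payload / on_wire)           # ~0.949, i.e. ~5% framing overhead
    print(20 / (payload / on_wire))    # ~21.1 Mbit/s on the wire for 20 Mbit/s goodput

So provisioning 25 for a promised 20 covers the framing overhead with plenty
of headroom left over.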

-Laszlo


-Original Message-
From: NANOG [mailto:nanog-boun...@nanog.org] On Behalf Of Jeff Sorrels
Sent: Friday, October 31, 2014 16:14
To: nanog@nanog.org
Subject: Re: Industry standard bandwidth guarantee?

And if you look at it from the provider's perspective, they have lots of 
customers who want 12 gallons of gas worth of driving time, but only 
want to pay for 11 gallons (or worse, went to gasspeedtest.net and it 
showed their purchased gas only gave them 10 gallons worth of driving time).

Consider a better analogy from the provider side:  A customer bakes a 
nice beautiful fruit cake for their Aunt Eddie in the wilds of 
Saskatchewan.   The cake is 10 kg - but they want to make sure it gets 
to Eddie properly, so they wrap it in foil, then bubble wrap, then put 
it in a box.  They have this 10 kg cake and 1 kg of packaging to get it 
up north.  They then go to the ISP store to get it delivered - and are 
surprised that, to get it there, they have to pay to ship 11 kg.  But the 
cake is only 10 kg!  If they pay to ship 11 kg for a 10 kg cake, obviously 
the ISP is trying to screw them.  The ISP should deliver the 10 kg cake at 
the 10 kg rate and eat the cost of the rest - no matter how many kg the 
packaging is or how much space they actually have on the delivery truck.

And then the customer goes to the Internet to decry the nerve of the ISP 
for not explaining the concept of packaging up front and in big 
letters.  Why, they should tell you - to ship 10 kg, buy 11 kg up front!  
Or better yet, they shouldn't count the box when weighing for 
shipping!  I should pay for the contents, and the wrapping, no matter how 
much it is, shouldn't even be considered!  It's plain robbery.  Harrumph.

Jeff

On 10/31/2014 6:02 AM, Joe Greco wrote:
 That's fine as long as they're giving you a resource that can potentially
 transfer the 20Mbps.

 That *is* a silly example.

 A more proper analogy would be that you buy 12 gallons of gas, but the
 station only deposits 11 gallons in your tank because the pumps are
 operated by gasoline engines and they feel it is fine to count the
 number of gallons pulled out of their tank instead of the amount given
 to the customer.


 Finding new ways to give the customer less while making it look like more
 has a long, proud history, yes.

 ... JG

-- 
Jeff Sorrels
Network Administrator
KanREN, Inc
jlsorr...@kanren.net
785-856-9820, #2




Re: Ars Technica on IPv4 exhaustion

2014-06-22 Thread Laszlo Hanyecz

On Jun 23, 2014, at 3:32 AM, Kalnozols, Andris and...@hpl.hp.com wrote:

 
 On 6/22/2014 7:41 PM, Frank Bulk wrote:
 Did they ever explain why?  Did the SMC function as a router, and act as the
 customer side of a stub network that allowed that /29 to hang off the
 router?  If that was the case, and the Motorola D3 modem was L2-only, that
 might explain the change in capability. 
 

The Comcast business SMC gateway speaks RIP to make the routed /29 work.  In 
theory it could be put into bridge mode and you could do the RIP yourself, but 
they don't support that configuration (you'd need the key to configure it 
successfully, and they didn't want to do that when I asked).  If you poke around 
in the web UI, it does support IPv6 in some form, but it doesn't seem to be 
active for me.

If you don't have a static IP block from them, and thus don't have the need to 
use RIP, you can just use a regular DOCSIS 3 cable modem and get IPv6, but you 
only get one IPv4 number that way.

-Laszlo


 They didn't really go into detail.  Your theory sounds correct; the
 four ports on the SMC router default to 10.1.10.0/24 but will also
 handle a routable /29 address from the WAN side of another router
 plugged into it.
 
 Since Comcast now charges $19.95 instead of $9.95/month for a /29,
 I inquired about the cost of an IPv6 assignment; same price as I
 recall being told.  I then asked if that was for a /60 or /56 and
 he said no, eight IPv6 addresses (/125?).  I politely thanked him
 and ended the phone call.  I realize that I could have gotten a
 more realistic answer from another Comcast rep with more v6-fu
 but I didn't pursue it.
 
 Andris
 
 
 
 -Original Message-
 From: NANOG [mailto:nanog-boun...@nanog.org] On Behalf Of Kalnozols, Andris
 Sent: Sunday, June 22, 2014 9:29 PM
 To: nanog@nanog.org
 Subject: Re: Ars Technica on IPv4 exhaustion
 
 snip
 
 My experience as a Comcast Business customer with a /29 IPv4 subnet was
 that swapping out the SMC modem/router for an IPV6-capable Motorola
 DOCSIS 3 modem meant that I could no longer have the /29.
 
 Andris
 
 
 



Re: Credit to Digital Ocean for ipv6 offering

2014-06-19 Thread Laszlo Hanyecz
On Jun 19, 2014, at 12:18 PM, STARNES, CURTIS 
curtis.star...@granburyisd.org wrote:

 
 At 18,446,744,073,709,551,616 addresses per /64, that is a lot of address space.
 Right now I cannot get IPv6 at home, so I will take getting screwed with a 
 /56 or /60 and be ecstatic about it.
 
 Curtis
 
 
 

Would be nice if everyone kept it simple and just stuck to /48s though.

It's complicated enough without everyone deploying different prefix sizes.  
Even the /64 net/host split isn't standard enough.  Think of something like 
DHCP - if there's an understanding that it's 'standard', then you can build 
software/hardware around this assumption and provide an easy-to-use system, 
without forcing the user to make subnetting decisions.  Software that has to 
work with arbitrary prefix combinations necessarily involves a complex UI, and 
if certain unusual combinations don't work, then people cry that it doesn't 
support IPv6.

The way that it's standard to receive one IPv4 address by DHCP and just 
plug in a laptop, imagine if in a few years it was standard to receive a /48 
IPv6 prefix on the local router, and end user devices could request as many 
/64s as they want.  You could assign a /64 to each app on your cell phone or 
computer, and this could happen automatically when possible.  Maybe an app 
wants many /64s - that's fine too.  We've gotten used to multiplexing everything 
onto a single overloaded address because it's a scarce resource.  In IPv6, 
addresses are not scarce, and in time this can be leveraged to simplify 
applications.  Yes, you can overload a single address; we do it all the time in 
IPv4 with proxies and NATs.  There are even hacks for having multiple SSL 
websites on one IPv4 address.  These things came about because addresses 
are scarce, but it's not correct to use the same justifications in IPv6, where 
unique addresses are practically unlimited.

If we have to assume that /64s might be scarce and they have to be manually 
managed, then applications end up having to ask that question and configuration 
becomes complex.  If we know we can get at least a few hundred of them 
dynamically anywhere we go, then we only have to bother the user when we run 
out, and things 'just work'.
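
The numbers behind this are easy to sanity-check.  A quick sketch using
Python's ipaddress module, with the 2001:db8::/48 documentation prefix
standing in for a delegated site prefix:

    import ipaddress

    site = ipaddress.ip_network("2001:db8::/48")  # one delegated site prefix
    print(site.num_addresses)                     # 2**80 addresses
    print(2 ** (64 - site.prefixlen))             # 65,536 /64s to hand out

    # Carve off the first few /64s, e.g. one per device or app:
    subnets = site.subnets(new_prefix=64)
    for _ in range(3):
        print(next(subnets))   # 2001:db8::/64, 2001:db8:0:1::/64, ...

Handing a few hundred /64s to every visiting device barely dents the 65,536 
that a single /48 contains.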

-Laszlo

Re: Observations of an Internet Middleman (Level3)

2014-05-16 Thread Laszlo Hanyecz
I'd just like to point out that a lot of people are in fact using their 
upstream capability, and the operators always throw a fit and try to cut off 
specific applications to force it back into the idle state.  For example, P2P 
things like torrents, and most recently the open NTP and DNS servers.  How about 
SMTP?  Not sure about you guys, but my local broadband ISP has cut me off and 
told me that my 'unlimited internet' is in fact limited.  The reality is that 
those people who are not using it (99.8%?) are just being ripped off - paying 
for something they were told they need, thinking that it's there when they want 
it, then getting cut off when they actually try to use it.

It's not like whining about it here will change anything, but the prices are 
severely distorted.  Triple play packages are designed to force people to pay 
for stuff they don't need or want - distorting the price of a service hoping to 
recover it elsewhere; then if the gamble doesn't pan out, the customer loses 
again.  The whole model is based on people buying stuff that they won't 
actually come to collect, so then you can sell it an infinite number of times.  
The people who do try to collect what was sold to them literally end up getting 
called names and cut off - terms like "excessive bandwidth user" and "network 
abuser" are used to describe paying customers.  With regard to the peering 
disputes, it's hardly surprising that their business partners are treated with 
the same attitude as their customers.  Besides, if you cut off the customers 
and peers who are causing that saturation, then the existing peering links can 
support an infinite number of idle subscribers.  The next phase is 
usage-based billing, which is kind of like having to pay a fine for using it, so 
they can artificially push the price point lower and hopefully get some more 
idle customers.  That will help get the demand down and keep the infrastructure 
nice and idle.  When you're paying for every cat video, maybe you realize you 
can live without it instead.

Everyone has been trained so well, they don't even flinch anymore when they 
hear about oversubscription, and they apologize for the people who are doing 
it to them.  The restaurant analogy is incorrect - you can go to the restaurant 
next door if a place is busy, so they have pressure to increase their 
capacity if they want to sell more meals.  With broadband you can't go anywhere 
else; for most people there's only one restaurant, and there's a week-long 
waiting list.  If you don't like it, you're probably an abuser or excessive 
eater anyway.

-Laszlo


On May 16, 2014, at 5:34 PM, Scott Helms khe...@zcorum.com wrote:

 Michael,
 
 No, it's not too much to ask, and any end user who has that kind of
 requirement can order a business service to get symmetrical service, but the
 reality is that symmetrical service costs more and the vast majority of
 customers don't use the upstream capacity they have today.  I have personal
 insight into about half a million devices and the percentage of people who
 bump up against their upstream rate is less than 0.2%.  I have the ability
 to get data on another 10 million and the last time I checked their rates
 were similar.
 
 This kind of question has been asked of operators since long before cable
 companies could offer internet service.  What happens if everyone in an
 area uses their telephone (cellular or land line) at the same time?  A fast
 busy or a recorded "All circuits are busy" message.  Oversubscription is a
 fact of economics in virtually everything we do.  By this logic restaurants
 should be massively over built so that there is never a waiting line,
 highways should always be a speed limit ride, and all of these things would
 cost much more money than they do today.
 
 
 Scott Helms
 Vice President of Technology
 ZCorum
 (678) 507-5000
 
 http://twitter.com/kscotthelms
 
 
 
 On Sun, Apr 27, 2014 at 8:21 PM, Michael Thomas m...@mtcc.com wrote:
 
 Scott Helms wrote:
 
 Mark,
 
 Bandwidth use trends are actually increasingly asymmetical because of the
 popularity of OTT video.
 
 
 Until my other half decides to upload a video.
 
 Is it too much to ask for a bucket of bits that I can use in whichever
 direction happens
 to be needed at the moment?
 
 Mike
 



Re: US patent 5473599

2014-05-07 Thread Laszlo Hanyecz
This CARP thing is the best troll I've seen yet.  Over a decade old and people 
are still on about it.

-Laszlo


On May 8, 2014, at 1:15 AM, Blake Dunlap iki...@gmail.com wrote:

 Except for that whole mac address thing, that crashes networks...
 
 -Blake
 
 On Wed, May 7, 2014 at 8:03 PM, Constantine A. Murenin
 muren...@gmail.com wrote:
 On 7 May 2014 17:56,  valdis.kletni...@vt.edu wrote:
 On Wed, 07 May 2014 17:10:32 -0700, Constantine A. Murenin said:
 
 Also, would you please be so kind as to finally explain to us why
 Google can squat on the https port with SPDY,
 
 Because it doesn't squat on the port.  It politely asks "Do you speak SPDY,
 or just https?" and then listens to what the other end replies.
 
 Same for CARP -- it has its own version number, so, there's no
 conflict with the VRRP spec, either.
 
 C.



Re: Best practices IPv4/IPv6 BGP (dual stack)

2014-05-02 Thread Laszlo Hanyecz
Two different sessions using two different transport protocols.  The v4 BGP 
session should have address family v6 disabled and vice versa.  Exchange v4 
routes over a v4 TCP connection, exchange v6 routes over a v6 TCP connection.  
Just treat them as independent protocols. 

-Laszlo


On May 2, 2014, at 7:44 PM, Deepak Jain dee...@ai.net wrote:

 
 Between peering routers on a dual-stacked network, is it considered best 
 practices to have two BGP sessions (one for v4 and one for v6) between them? 
 Or is it better to put v4 in the v6 session or v6 in the v4 session?
 
 According to docs, obviously all of these are supported and if both sides are 
 dual stacked, even the next-hops don't need to be overwritten.
 
 Is there any community-approach to best practices here? Any FIB weirdness 
 (e.g. IPv4 routes suddenly start sucking up IPv6 TCAM space, etc)  that 
 results with one solution over the other?
 
 Thanks in advance,
 
 DJ



Re: AT&T / Verizon DNS Flush?

2014-04-16 Thread Laszlo Hanyecz
The generally accepted and scalable way to accomplish this is to advertise your 
freshness preferences using the SOA record of your domain.  It would be pretty 
tricky to make this work with a swivel-chair type system for every domain and 
host on the internet.  You would have to contact every user and ask them to 
invalidate the caches, after asking their recursive server operator to do the 
same.
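
The knobs in question are easy to inspect.  A minimal sketch using the
third-party dnspython package (assuming dnspython 2.x, where the call is
dns.resolver.resolve), showing the timers a zone's SOA advertises; the
'minimum' field is what governs negative caching per RFC 2308:

    import dns.resolver  # third-party: pip install dnspython

    soa = dns.resolver.resolve("proofpoint.com", "SOA")[0]
    print("serial:", soa.serial)
    print("refresh / retry / expire:", soa.refresh, soa.retry, soa.expire)
    print("negative-cache TTL (minimum):", soa.minimum)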

-Laszlo


On Apr 16, 2014, at 6:15 AM, Steven Briggs stevenbri...@gmail.com wrote:

 Hello,
 
 Not sure where to point this... I was wondering if anybody knows an inroad
 to reach AT&T and Verizon systems people to flush their caches for 
 proofpoint.com?
 
 Any help is greatly appreciated!
 
 Steven Briggs




Re: DMARC - CERT?

2014-04-14 Thread Laszlo Hanyecz
I don't see what the big deal is here.  They don't want your messages and they 
made that clear.  Their policy considers these messages spam.  If you really 
want to get your mailing list messages through, then you need to evade their 
filters just like every other spammer has to.

-Laszlo


On Apr 14, 2014, at 4:32 PM, Miles Fidelman mfidel...@meetinghouse.net wrote:

 Well... how about this, from Yahoo's own posting:
 "We know there are about 30,000 affected email sending services, but we also 
 know that the change needed to support our new DMARC policy is important and 
 not terribly difficult to implement."
 
 To me - this sure looks, smells, and quacks like a denial-of-service attack 
 against a system I operate, and the subscribers to the lists that I support -- 
 somewhat akin to exploding a bomb in a public square, and then taking credit 
 for it.
 
 Miles Fidelman
 
 -- 
 In theory, there is no difference between theory and practice.
 In practice, there is.    Yogi Berra
 
 




Re: DMARC - CERT?

2014-04-14 Thread Laszlo Hanyecz
By their statement, it's obvious that Yahoo doesn't care about what they broke.  
It's unfortunate that email has become so centralized that one entity can cause 
so much 'trouble'.  Maybe it's a good opportunity to encourage the affected 
mailing list subscribers to use their own domains for email, and host it 
themselves if possible.

-Laszlo


On Apr 14, 2014, at 5:05 PM, Miles Fidelman mfidel...@meetinghouse.net wrote:

 Isn't it the other way around?  They don't want their users to be able to 
 send to mailing lists.  They receive traffic from the lists just fine.  Their 
 policy only affects mail originating from their users.  Yahoo 
 subscribers can receive messages from nanog just fine, but they can't send to 
 it.
 
 Miles
 
 Laszlo Hanyecz wrote:
 I don't see what the big deal is here.  They don't want your messages and 
 they made that clear.  Their policy considers these messages spam.  If you 
 really want to get your mailing list messages through, then you need to 
 evade their filters just like every other spammer has to.
 
 -Laszlo
 
 
 On Apr 14, 2014, at 4:32 PM, Miles Fidelman mfidel...@meetinghouse.net 
 wrote:
 
 Well... how about this, from Yahoo's own posting:
 We know there are about 30,000 affected email sending services, but we also 
 know that the change needed to support our new DMARC policy is important 
 and not terribly  difficult to implement.
 
 To me - this sure looks, smells, and quacks like a denial-of-service attack 
 against a system I operate, and the subscriber to the lists that I support 
 -- somewhat akin to exploding a bomb in a public square, and then taking 
 credit for it.
 
 Miles Fidelman
 
 -- 
 In theory, there is no difference between theory and practice.
 In practice, there is.    Yogi Berra
 
 
 
 
 -- 
 In theory, there is no difference between theory and practice.
 In practice, there is.    Yogi Berra
 
 




Re: Serious bug in ubiquitous OpenSSL library: Heartbleed

2014-04-08 Thread Laszlo Hanyecz
You can still potentially access all the same information, since it all goes 
through the load balancer.  Interesting bits of info are things like Cookie: 
headers being sent by clients and sitting in a buffer.  Try one of the testing 
tools mentioned and see if you can see any info from other clients.  It's 
almost like having a remote tcpdump on the web server - you can copy down the 
in-memory process image.

-Laszlo


On Apr 8, 2014, at 7:12 PM, Frank Bulk frnk...@iname.com wrote:

 If we would front our HTTPS services with a (OpenSSL vulnerable)
 load-balancer that does the SSL work and we just use HTTP to the service,
 will that mitigate information loss that's possible with this exploit?  Or
 will the OpenSSL code on the load-balancer also store or cache content?
 
 Frank
 
 -Original Message-
 From: Paul Ferguson [mailto:fergdawgs...@mykolab.com] 
 Sent: Tuesday, April 08, 2014 12:07 AM
 To: NANOG
 Subject: Fwd: Serious bug in ubiquitous OpenSSL library: Heartbleed
 
 
 I'm really surprised no one has mentioned this here yet...
 
 FYI,
 
 - ferg
 
 
 
 Begin forwarded message:
 
 From: Rich Kulawiec r...@gsp.org Subject: Serious bug in
 ubiquitous OpenSSL library: Heartbleed Date: April 7, 2014 at
 9:27:40 PM EDT
 
 This reaches across many versions of Linux and BSD and, I'd
 presume, into some versions of operating systems based on them.
 OpenSSL is used in web servers, mail servers, VPNs, and many other
 places.
 
 Writeup: Heartbleed: Serious OpenSSL zero day vulnerability
 revealed 
 
 http://www.zdnet.com/heartbleed-serious-openssl-zero-day-vulnerability-revealed-728166/
 
 Technical details: Heartbleed Bug http://heartbleed.com/
 
 OpenSSL versions affected (from link just above):
 OpenSSL 1.0.1 through 1.0.1f (inclusive) are vulnerable
 OpenSSL 1.0.1g is NOT vulnerable (released today, April 7, 2014)
 OpenSSL 1.0.0 branch is NOT vulnerable
 OpenSSL 0.9.8 branch is NOT vulnerable
 
 
 
 -- 
 Paul Ferguson
 VP Threat Intelligence, IID
 PGP Public Key ID: 0x54DC85B2
 
 
 
 




Re: BGPMON Alert Questions

2014-04-02 Thread Laszlo Hanyecz
They're just leaking every route, right?
Is it possible to poison the AS paths you announce with their own AS, to get 
them to let go of your prefixes until it's fixed?
Would that work, or is there some other trick that can be done without their 
cooperation?

Thanks,
Laszlo




Re: why IPv6 isn't ready for prime time, SMTP edition

2014-03-27 Thread Laszlo Hanyecz
Scott,

You are exactly right: in the current environment, the things I'm suggesting 
seem unrealistic.  My point is that it doesn't have to work the way it does 
today, with the webmail providers, the mail originators and the spam warriors 
all scratching each others' backs.  There has been a LOT of work done to make 
webmail easy and everything else practically impossible, even if you do know 
how it works.

What if Google, Apple, Sony or some other household brand sold a TV with local 
mail capabilities, instead of pushing everyone to use their hosted services?  
If it doesn't work because we are blocking it on purpose, customers would 
demand that we make it work.  Since this isn't a well-known option today, 
casual (non-tech) users don't know that they should be demanding it.

As far as why someone would want an MTA, it doesn't take long to explain the 
benefits of having control over your own email instead of having a third party 
reading it all.  The problem is that instead users are told they can't have it. 
 MTAs are built into every user operating system and they would work just fine 
if the email community wasn't going out of their way to exclude them.  The lack 
of rDNS is just one of the many ways to identify and discriminate against end 
users who haven't bought their way into the club.

Spam is not a big problem for everyone.  It's at a different scale for 
individuals and for large sites with many users.

-Laszlo


On Mar 26, 2014, at 2:58 PM, Scott Buettner sbuett...@frii.net wrote:

 This is totally ignoring a few facts.
 
 A: That the overwhelming majority of users don't have the slightest idea what 
 an MTA is, why they would want one, or how to install/configure one. ISP/ESP 
 hosted email is prevalent only partly due to technical reasons, and a 
 lot due to technical apathy on the part of the user base at large. Web 
 hosting is the same way. A dedicated mailbox appliance would be another cost 
 to the user that they would not understand why they need, and thus would not 
 want. In a hypothetical tech-utopia, where everyone was fluent in bash (or 
 powershell, take your pick), and read RFCs over breakfast instead of the 
 newspaper, this would be an excellent solution. Meanwhile, in reality, 
 technology frightens most people, and they are more than happy to pay someone 
 else to deal with it for them.
 
 B: The relevant technical reason can be summarized as "good luck getting a 
 residential internet connection with a static IP"
 
 (If your response includes the words "dynamic DNS" then please see point A)
 
 (Also I'm just going to briefly touch the fact that this doesn't address spam 
 as a problem at all, and in fact would make that problem overwhelmingly 
 worse, as MTAs would be expected to accept mail from everywhere, and we 
 obviously can't trust end user devices or ISP CPE to be secure against 
 intrusion)
 
 Scott Buettner
 Front Range Internet Inc
 NOC Engineer
 
 On 3/26/2014 8:33 AM, Laszlo Hanyecz wrote:
 Maybe you should focus on delivering email instead of refusing it.  Or just 
 keep refusing it and trying to bill people for it, until you make yourself 
 irrelevant.  The ISP based email made more sense when most end users - the 
 people that we serve - didn't have persistent internet connections.  Today, 
 most users are always connected, and can receive email directly to our own 
 computers, without a middle man.  With IPv6 it's totally feasible since 
 unique addressing is no longer a problem - there's no reason why every user 
 can't have their own MTA.  The problem is that there are many people who are 
 making money off of email - whether it's the sending of mail or the blocking 
 of it - and so they're very interested in breaking direct email to get 'the 
 users' to rely on them.  It should be entirely possible to build 'webmail' 
 into home user CPEs or dedicated mailbox appliances, and let everyone deal 
 with their own email delivery.  The idea of having to pay other people to 
 host email for you is as obsolete as NAT-for-security, and this IPv6 SMTP 
 thread is basically covering the same ground.  It boils down to: we have an 
 old crappy system that works, and we don't want to change, because we've 
 come to rely on the flaws of it and don't want them fixed.  In the email 
 case, people have figured out how to make money doing it, so they certainly 
 want to keep their control over it.
 
 -Laszlo
 
 
 On Mar 26, 2014, at 2:07 PM, Lamar Owen lo...@pari.edu wrote:
 
 On 03/25/2014 10:51 PM, Jimmy Hess wrote:
 [snip]
 
 I would suggest the formation of an IPv6 SMTP Server operator's club,
 with a system for enrolling certain IP address source ranges as  Active
 mail servers, active IP addresses and SMTP domain names under the
 authority of a member.
 
 ...
 
 As has been mentioned, this is old hat.
 
 There is only one surefire way of doing away with spam for good, IMO.  No 
 one is currently willing to do it, though.
 
 That way?  Make e-mail cost

Re: why IPv6 isn't ready for prime time, SMTP edition

2014-03-26 Thread Laszlo Hanyecz
Maybe you should focus on delivering email instead of refusing it.  Or just 
keep refusing it and trying to bill people for it, until you make yourself 
irrelevant.  ISP-based email made more sense when most end users - the 
people that we serve - didn't have persistent internet connections.  Today, 
most users are always connected, and can receive email directly on their own 
computers, without a middle man.  With IPv6 it's totally feasible since unique 
addressing is no longer a problem - there's no reason why every user can't have 
their own MTA.  The problem is that there are many people who are making money 
off of email - whether it's the sending of mail or the blocking of it - and so 
they're very interested in breaking direct email to get 'the users' to rely on 
them.  It should be entirely possible to build 'webmail' into home user CPEs or 
dedicated mailbox appliances, and let everyone deal with their own email 
delivery.  The idea of having to pay other people to host email for you is as 
obsolete as NAT-for-security, and this IPv6 SMTP thread is basically covering 
the same ground.  It boils down to: we have an old crappy system that works, 
and we don't want to change, because we've come to rely on the flaws of it and 
don't want them fixed.  In the email case, people have figured out how to make 
money doing it, so they certainly want to keep their control over it.

-Laszlo


On Mar 26, 2014, at 2:07 PM, Lamar Owen lo...@pari.edu wrote:

 On 03/25/2014 10:51 PM, Jimmy Hess wrote:
 
 [snip]
 
 I would suggest the formation of an IPv6 SMTP Server operator's club,
 with a system for enrolling certain IP address source ranges as  Active
 mail servers, active IP addresses and SMTP domain names under the
 authority of a member.
 
 ...
 
 As has been mentioned, this is old hat.
 
 There is only one surefire way of doing away with spam for good, IMO.  No one 
 is currently willing to do it, though.
 
 That way?  Make e-mail cost; have e-postage.  No, I don't want it either.  
 But where is the pain point for spam where this becomes less painful?  If an 
 enduser gets a bill for sending several thousand e-mails because they got 
 owned by a botnet they're going to do something about it; get enough endusers 
 with this problem and you'll get a class-action suit against OS vendors that 
 allow the problem to remain a problem; you can get rid of the bots.  This 
 will trim out a large part of spam, and those hosts that insist on sending 
 unsolicited bulk e-mail will get billed for it.  That would also eliminate a 
 lot of traffic on e-mail lists, too, if the subscribers had to pay the costs 
 for each message sent to a list; I wonder what the cost would be for each 
 post to a list the size of this one.  If spam ceases to be profitable, it 
 will stop.
 
 Of course, I reserve the right to be wrong, and this might all just be a pipe 
 dream.  (and yes, I've thought about what sort of billing infrastructure 
 nightmare this could be.)
 




Re: why IPv6 isn't ready for prime time, SMTP edition

2014-03-25 Thread Laszlo Hanyecz
The usefulness of reverse DNS in IPv6 is dubious.  Maybe the idea is to cause 
enough pain that eventually you fold and get them to host your email too.

-Laszlo


On Mar 25, 2014, at 8:57 PM, Brielle Bruns br...@2mbit.com wrote:

 On 3/25/14, 11:56 AM, John Levine wrote:
 I think this would be a good time to fix your mail server setup.
 You're never going to get much v6 mail delivered without rDNS, because
 receivers won't even look at your mail to see if it's authenticated.
 
 CenturyLink is reasonably technically clued so it shouldn't be
 impossible to get them to fix it.
 
 
 Nothing wrong with my mail server setup, except the lack of RDNS.  Lacking 
 reverse should be one of many things to consider when rejecting e-mails, but 
 should not be the only condition.
 
 That would be like outright refusing mail unless it had both SPF and DKIM on 
 every single message.
 
 Sure, great in theory, does not work in reality and will result in lost mail 
 from legit sources.
 
 Already spoken to CenturyLink about RDNS for ipv6 - won't have rdns until 
 native IPv6.  Currently, IPv6 seems to be delivered for those who want it, 
 via 6rd.
 
 And, frankly, I'm not going to get in a fight with CenturyLink over IPv6 
 RDNS, considering that I am thankful that they are even offering IPv6 when 
 other large providers aren't even trying to do so to their residential and 
 small business customers.
 
 It is very easy for some to forget that not everyone has a gigabit fiber 
 connection to their homes with ARIN assigned IPv4/IPv6 blocks announced over 
 BGP.  Some of us actually have to make do with (sometimes very) limited 
 budgets and what the market is offering us and has made available.
 
 
 -- 
 Brielle Bruns
 The Summit Open Source Development Group
 http://www.sosdg.org/ http://www.ahbl.org
 




Re: why IPv6 isn't ready for prime time, SMTP edition

2014-03-25 Thread Laszlo Hanyecz
The OP doesn't have control over the reverse DNS on the AT&T 6rd.  Spam 
crusades aside, it can be seen as just another case of 'putting people in their 
place', reinforcing that your end user connection is lesser and doesn't entitle 
you to participate in the internet with the big boys.  How does one dare run 
a 'server' without being a member of a RIR?

One would hope that with IPv6 this would change, but the attitude of looking 
down on end subscribers has been around forever.  As seen in the other thread 
being discussed here, people are already looking for ways to block end users 
from participating.

-Laszlo


On Mar 25, 2014, at 10:38 PM, Rich Kulawiec r...@gsp.org wrote:

 On Tue, Mar 25, 2014 at 02:57:15PM -0600, Brielle Bruns wrote:
 Nothing wrong with my mail server setup, except the lack of RDNS.
 Lacking reverse should be one of many things to consider with
 rejecting e-mails, but should not be the only condition.
 
 Lack of rDNS means either (a) there is something temporarily wrong with
 rDNS/DNS or (b) it's a spam source or (c) someone doesn't know how to set
 up rDNS/DNS for a mail server.  Over the past decade, (b) has been the
 answer to about five or six 9's (depending on how I crunch the numbers),
 so deferring on that alone is not only sensible, but quite clearly a
 best practice.  If it turns out that it looks like (b) but is actually
 (a), then as long as the DNS issue clears up before SMTP retries stop,
 mail is merely delayed, not rejected.  And although *sometimes* it's
 (c), why would I want to accept mail from a server run by people who
 don't grasp basic email server operation best practices?   (Doubly so 
 since long experience strongly suggests people that botch this will very 
 likely botch other things as well, some of which can result in negative 
 outcomes *for me* if I accommodate them.)
 
 Of all the things that we need to do in order to make our mail servers
 play nice with the rest of the world, DNS/rDNS (and HELO) are among
 the simplest and easiest.
 
 ---rsk
 
 p.s. I also reject on mismatched and generic rDNS.  Real mail servers have
 real names, so if [generic] you insist on making yours look like a bot,
 I'll believe you and treat it like one.
 




Re: why IPv6 isn't ready for prime time, SMTP edition

2014-03-25 Thread Laszlo Hanyecz
Maybe we could give everyone globally unique numbers and end-to-end 
connectivity.  Then maybe the users themselves could send email directly to 
each other without going through this ESP cartel.

-Laszlo


On Mar 26, 2014, at 2:51 AM, Rob McEwen r...@invaluement.com wrote:

 On 3/25/2014 10:25 PM, Brielle Bruns wrote:
 
 Like I said in a previous response, if you are going to make rdns a
 requirement, why not make SPF and DKIM mandatory as well? 
 
 many ISPs ALREADY require rDNS. So making that standard official for
 IPv6 isn't asking for much! It is a NATURAL progression. As I
 mentioned in a previous message, i think IPv6 should go farther and
 require FCrDNS, with the host name ending with the sender's actual real
 domain so that proper identity is conveyed. (then when a spammer uses a
 throwaway domain or known spammy domain... as the domain at the end of
 the rDNS, they have only themselves to blame when the message is rejected!)
 
 SPF is somewhat dead... because it breaks e-mail forwarding
 situations. Anyone who blocks on a bad SPF is going to have significant
 FPs. And by the time you've dialed down the importance of SPF to prevent
 FPs (either by the receiver not making too big of a deal about it, or
 the sender using a NOT strict SPF), it then becomes impotent. About the
 only good usage of SPF is to change a domain's record to strict in
 situations where some e-mail on that domain is being picked on by a
 joe job where their address is forged into MANY spams over a period of
 time. (not just the occasional hit that everyone gets). otherwise, SPF
 is worthless.
 
 Maybe we should require DKIM for IPv6, too? But what I suggested about
 FCrDNS seems like a 1st step to me.
 
 -- 
 Rob McEwen
 +1 (478) 475-9032
 
 




Re: misunderstanding scale

2014-03-24 Thread Laszlo Hanyecz

On Mar 24, 2014, at 5:05 PM, Patrick W. Gilmore patr...@ianai.net wrote:

 On Mar 24, 2014, at 12:21, William Herrin b...@herrin.us wrote:
 On Sun, Mar 23, 2014 at 11:07 PM, Naslund, Steve snasl...@medline.com 
 wrote:
 
 I am not sure I agree with the basic premise here.   NAT or Private 
 addressing does not equal security.
 
 Many of the folks you would have deploy IPv6 do not agree. They take
 comfort in the mathematical impossibility of addressing an internal
 host from an outside packet that is not part of an ongoing session.
 These folks find that address-overloaded NAT provides a valuable
 additional layer of security.
 
 Some folks WANT to segregate their networks from the Internet via a
 general-protocol transparent proxy. They've had this capability with
 IPv4 for 20 years. IPv6 poorly addresses their requirement.
 

It's unfortunate that it is the way it is, but many enterprise people have this 
ingrained in them - they don't want to be connected to the internet except for 
a few exceptions.  Just the fact that they can't ping their machines gives them 
a warm and fuzzy.  In a run-of-the-mill default NAT setup, you can deploy a 
network printer with no security and nobody from the internet can print to it.  
It's default deny, even without setting anything else up, by virtue of not 
being on the internet and not having an address.  I know there are ways to 
subvert a NAT but that applies to perimeter and host firewalls too.  IPv6 
global numbers are great for those of us that actually want to connect to the 
internet, but enterprise people with rfc1918 numbering have gotten used to 
being disconnected, and while most of us know that it's trivial to firewall 
IPv6, it's still a big jump from using a NAT/proxy to being 'on the internet'.  
It's even more complex if it's only halfway and there are two different 
protocols to manage.

People will always resist change, and in this case, why should they change when 
it's only going to make their job harder?  Makes sense to me, but I wish it 
weren't that way.  They will probably just find ways to proxy and NAT IPv6 too, 
so that it fits the IPv4 model with 'private' addresses.

Just look at what's been happening with UDP floods.  It's scared people enough 
that some are just blocking certain UDP ports or UDP completely.  I imagine we 
will soon see some big IPv6-specific attacks that result in crashing 
hosts/routers, and that will just make people resist it harder, because why 
would they want that headache?  I think in a lot of situations, unless their 
business is networking specifically, the network is considered good enough if 
you can browse (most) webpages.  For IPv6-only sites, that could be 
accomplished with a web proxy setting on all the desktops.  It's not really 
right - it's inefficient, error-prone, and a bunch of other things - but that 
doesn't mean people won't do it.  They do all this today with v4 anyway, so if 
anything, the 'wrong way' is easier there since they're used to doing it.

There has to be some big compelling reason to convince people that global 
addressing is the right way.  We all know the reasons but they're obviously not 
good enough for enterprise security people.

-Laszlo



 NAT is not required for the above. Any firewall can stop incoming packets 
 unless they are part of an established session. NAT doesn't add much of 
 anything, especially given that you can have one-to-one NAT.
 
 -- 
 TTFN,
 patrick
 
 




Re: misunderstanding scale (was: Ipv4 end, its fake.)

2014-03-23 Thread Laszlo Hanyecz


On Mar 23, 2014, at 4:57 PM, Mark Andrews ma...@isc.org wrote:

 
 
 Basically because none of them have ever been on the Internet proper
 where they can connect to their home machines from wherever they
 are in the world directly.  If you don't know what it should be
 like you don't complain when you are not getting it.
 

It's ironic that those of us who do understand this are mostly the same ones 
saying that it's ok to give 'the users' NAT.  The reality is that some 
(many/most/all?) of our 'users' are probably smarter than us, and they just get 
around it with VPNs/tunnels just like we do.  Just because they aren't 
complaining directly to us doesn't mean they are satisfied.  Every gamer with 
a console is basically screwed - they have to jump through hoops trying to 
figure out how to forward ports or whatever else, because these home routers 
all give them NAT.  We can probably argue cause/effect on this, but it's all 
tied together - those routers wouldn't have had to do NAT if they could somehow 
request unique numbers for each device.  But now carriers are doing that same 
NAT internally, because hey, 'the users' are already used to it anyway, from 
having done it on their home gateways.

It's not that the users are ok with NAT, or that they prefer it, it's just all 
they can get.
IPv6 is far from perfect, but it's a direct answer to the resource exhaustion 
problem.  It seems unlikely that IPv4 will ever be dropped, but it can be made 
largely irrelevant by building out IPv6 networks.

As far as the enterprise side of things, many of the people working in that 
area today have likely never known any other kind of network except the NAT 
kind.  A lot of these guys say things like 'private ip' and 'public ip' - 
they've had this ingrained in them for the past 15+ years, and the idea of the 
real internet is scary.  I'm not sure how this problem of education gets 
addressed, and it might sound stupid, but it's a real problem.

The other side of things is that some software vendors with large market share 
are doing their own share of actively trying to undermine IPv6 deployment in 
subtle ways.  You can read RFC 6555 for the details.  Just as an example, on Mac 
OS, users accessing a dual-stack website from a dual-stack host may not ever 
actually take the IPv6 path, so if there are people auditing how many clients 
are using v4 vs. v6, they would get skewed results.
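
That selection behavior is the RFC 6555 'Happy Eyeballs' algorithm.  The racing
logic has since landed in standard libraries; here is a minimal sketch of it as
exposed by Python 3.8+ asyncio (the 250 ms delay is an arbitrary choice):

    import asyncio

    async def fetch(host="example.com", port=80):
        # Races address families: if the first (usually IPv6) attempt has
        # not connected within the delay, the next family is tried too.
        reader, writer = await asyncio.open_connection(
            host, port, happy_eyeballs_delay=0.25)
        print("connected via", writer.get_extra_info("peername"))
        writer.close()
        await writer.wait_closed()

    asyncio.run(fetch())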

I know everyone has their own parameters that define what's worth it and what's 
not, but I think most people's lives would be made easier by embracing IPv6.

-Laszlo


 ISPs have done a good job of brainwashing their customers into
 thinking that they shouldn't be able to run services from home.
 That all their machines shouldn't have a globally unique address
 that is theoretically reachable from everywhere.  That NAT is normal
 and desirable.
 
 I was at work last week and because I have IPv6 at both ends I could
 just log into the machines at home as easily as if I was there.
 When I'm stuck using an IPv4-only service on the road I have to jump
 through lots of hoops to reach the internal machines.
 
 Mark
 
 R's,
 John
 
 
 -- 
 Mark Andrews, ISC
 1 Seymour St., Dundas Valley, NSW 2117, Australia
 PHONE: +61 2 9871 4742 INTERNET: ma...@isc.org
 




Re: L6-20P - L6-30R

2014-03-18 Thread Laszlo Hanyecz
It's temporary unless it works.

-Laszlo


On Mar 18, 2014, at 11:30 PM, Jay Ashworth j...@baylink.com wrote:

 - Original Message -
 From: Stephen Sprunk step...@sprunk.org
 
 On 18-Mar-14 17:54, Niels Bakker wrote:
 * w...@typo.org (Wayne E Bouchard) [Tue 18 Mar 2014, 23:53 CET]:
 I have had to do this at times but it is not strictly allowed by
 codes and not at all recommended.
 
 It's an active fire hazard. The cables aren't rated (= built) for
 the power draw.
 
 That's a problem in the other direction, but plugging a 20A device
 into a 30A feed shouldn't be a hazard at all.
 
 Plugging a 20A *PDU* into a 30A receptacle can be dangerous if 
 
 a) there is more than 20A of load plugged into it
 b) it has no breaker, and 
 c) the cordset is only 12A, which is what you would expect on a 20A PDU.
 
 Cheers,
 -- jr 'up the voltage' a
 -- 
 Jay R. Ashworth  Baylink   
 j...@baylink.com
 Designer The Things I Think   RFC 2100
 Ashworth  Associates   http://www.bcp38.info  2000 Land Rover DII
 St Petersburg FL USA  BCP38: Ask For It By Name!   +1 727 647 1274
 




Re: new DNS forwarder vulnerability

2014-03-15 Thread Laszlo Hanyecz
Good question, but the reality is that a lot of them are this way.  They just 
forward everything from any source.  Maybe it was designed that way to support 
DDoS as a use case.

Imagine a simple iptables rule like -p udp --dport 53 -j DNAT --to 4.2.2.4
I think some forwarders work this way - the LAN addresses can be reconfigured, 
so it's probably easier if the rule doesn't check the source address... or 
maybe it was designed to work this way on purpose, because it's easier to 
explain as a 'bug' or oversight than as deliberate action.  Of course, it's 
crazy to think that some person or organization deliberately did this so they 
would have a practically unlimited supply of DoS sources.
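
To make that concrete, here's roughly what the broken rule and the one-line 
fix might look like - a sketch only, not taken from any actual firmware, and 
br-lan is a hypothetical LAN bridge interface:

# Forwards any DNS query that hits the box, from any source, WAN included:
iptables -t nat -A PREROUTING -p udp --dport 53 -j DNAT --to-destination 4.2.2.4
# Matching on the LAN interface instead closes the WAN-side hole without
# having to track what the LAN addresses currently are:
iptables -t nat -A PREROUTING -i br-lan -p udp --dport 53 -j DNAT --to-destination 4.2.2.4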

-Laszlo


On Mar 15, 2014, at 4:26 PM, Gary Baribault g...@baribault.net wrote:

 Why would a CPE have an open DNS resolver from the WAN side?
 
 Gary Baribault
 
 On 03/14/2014 12:45 PM, Livingood, Jason wrote:
 Well, at least all this CPE checks in for security updates every night so
 this should be fixable. Oh wait, no, nevermind, they don't. :-(
 
 
 This is getting to be the vulnerability of the week club for home gateway
 devices - quite concerning.
 
 JL
 
 On 3/14/14, 12:05 PM, Merike Kaeo mer...@doubleshotsecurity.com wrote:
 
 On Mar 14, 2014, at 7:06 AM, Stephane Bortzmeyer bortzme...@nic.fr
 wrote:
 
 On Fri, Mar 14, 2014 at 01:59:27PM +,
 Nick Hilliard n...@foobar.org wrote
 a message of 10 lines which said:
 
 did you characterise what dns servers / embedded kit were
 vulnerable?
 He said "We have not been able to nail this vulnerability down to a
 single box or manufacturer" so it seems the answer is "No".
 
 
 It is my understanding that many CPEs work off of the same reference
 implementation(s).  I haven't
 had any cycles for this but with all the CPE issues out there it would be
 interesting to have
 a matrix of which CPEs utilize which reference implementation.  That may
 start giving some clues.
 
 Has someone / is someone doing this?
 
 - merike
 
 
 
 
 




Re: Filter NTP traffic by packet size?

2014-02-20 Thread Laszlo Hanyecz
Filtering will always break something.  Filtering 'abusive' network traffic is 
intentionally difficult - the abuse imitates legitimate traffic, so you either 
just let it be, or you filter it along with the 'good' network traffic that 
it's pretending to be.  How can you even tell it's NTP traffic - by the port 
numbers?  What if someone is running OpenVPN on those ports?  What about IP 
options?  Maybe some servers legitimately return extra data?

This is really not a network operator problem; it's an application problem if 
anything.  While it makes sense to temporarily filter a large flood to keep the 
rest of your customers online, it's a very blunt instrument - the affected 
customer is usually still taken offline, and that's with specific, targeted 
filters.  Blanket filtering based on packet sizes is sure to generate some 
really hard-to-debug failure cases that you didn't account for.
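
For what it's worth, here's what such a blanket size filter might look like 
with iptables - a sketch only, using the 200-byte figure from the question 
quoted below.  Note that the length match counts the whole IP packet, so 200 
bytes of UDP payload is 228 bytes on the wire for IPv4:

# Drop 'large' NTP replies: anything sourced from port 123 whose total
# IPv4 packet length exceeds 228 bytes (i.e. 200 bytes of UDP payload):
iptables -A FORWARD -p udp --sport 123 -m length --length 229:65535 -j DROP

...and it will happily drop that OpenVPN-on-port-123 traffic too.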

Unfortunately, as long as Facebook loads, most of the users are happy, and so 
these kinds of practices will likely be implemented in many places, with some 
people opting to completely filter NTP or UDP.  Maybe it will buy you a little 
peace and quiet today, but tomorrow it's just going to be happening on a 
different port/protocol that you can't inspect deeply, and you don't dare 
block.  I can imagine 10 years from now, where we're writing code that 
fragments replies into 100 byte packets to get past this, and everyone loses.  
Your filter is circumvented, the application performs slower, and the 'bad 
guys' found another way that you can't filter.  When all that's left is TCP 
port 443, that's what all the 'abuse' traffic will be using too.

Laszlo


On Feb 20, 2014, at 8:41 PM, Edward Roels edwardro...@gmail.com wrote:

 Curious if anyone else thinks filtering out NTP packets above a certain
 packet size is a good or terrible idea.
 
 From my brief testing it seems 90 bytes for IPv4 and 110 bytes for IPv6 are
 typical for a client to successfully synchronize to an NTP server.
 
 If I query a server for its list of peers (ntpq -np ip) I've seen
 packets as large as 522 bytes in a single packet in response to a 54 byte
 query.  I'll admit I'm not 100% clear on what is happening
 protocol-wise when I perform this query.  I see there are multiple packets
 back and forth between me and the server depending on the number of peers it
 has?
 
 
 Would I be breaking something important if I started to filter NTP packets
 over 200 bytes into my network?




Re: TWC (AS11351) blocking all NTP?

2014-02-04 Thread Laszlo Hanyecz
Why not just provide a public API that lets users specify which of your 
customers they want to null route?  It would save operators the trouble of 
having to detect the flows... and you could sell premium access that allows the 
API user to null route all your other customers at once.

Once everyone implements these awesome flow detectors it will just take short 
bursts of flooding to DoS their customers.  If you can detect them in less than 
a second, it might not even show up on any interface graphs.  I think this is 
already the case at a lot of VPS and hosting providers, since they're such 
popular sources as well as targets.

I don't know what, if anything, is the answer to these problems, but building 
complex auto-filtering contraptions is not it.  Filtering NTP or UDP or any 
other specific application will just break things more, which is the goal of a 
'denial of service' attack.  Eventually everything will just be stuffed into 
TCP port 80 packets and the arms race will continue.

The recent abuse of NTP is unfortunate, but it will get fixed.  I just wonder 
if UDP will have to be tunneled inside HTTP by then.

Laszlo





Re: TWC (AS11351) blocking all NTP?

2014-02-04 Thread Laszlo Hanyecz
I was joking - I meant that the operator provides an API for attackers, so they 
can accomplish their goal of taking the customer offline without having to 
spoof or flood or whatever else.  Automatically installing ACLs in response to 
observed flows accomplishes almost the same thing.  As a concrete example, say 
a customer is running a game server that uses UDP port 12345.  An attacker 
sends a large flow to customer:12345, and your switches and routers all start 
filtering anything with destination customer:12345 for, say, 2 hours.  The 
attacker can then just repeat in 2 hours, sending only a few seconds' worth of 
flooding each time.
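
In effect, the detector installs something like this on the attacker's behalf 
(hypothetical addresses, and iptables syntax just for illustration):

# 'Protect' the customer by finishing the attacker's job for them:
iptables -A FORWARD -d 192.0.2.10 -p udp --dport 12345 -j DROP
# ...removed two hours later, then re-triggered by the next short burst.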

On Feb 4, 2014, at 6:52 PM, William Herrin b...@herrin.us wrote:

 On Tue, Feb 4, 2014 at 1:45 PM, Laszlo Hanyecz las...@heliacal.net wrote:
 Why not just provide a public API that lets users specify which
 of your customers they want to null route?
 
 They're spoofed packets. There's no way for anyone outside your AS to
 know which of your customers the packets came from. It's not
 particularly easy to trace inside your AS either.
 
 Regards,
 Bill Herrin
 
 
 
 -- 
 William D. Herrin  her...@dirtside.com  b...@herrin.us
 3005 Crane Dr. .. Web: http://bill.herrin.us/
 Falls Church, VA 22042-3004




Re: Will a single /27 get fully routed these days?

2014-01-25 Thread Laszlo Hanyecz
Yes, a /27 is too small.  You need at least a /24 - most networks still filter 
IPv4 announcements longer than that, so a /27 won't propagate.

On Jan 25, 2014, at 9:17 PM, Drew Linsalata drew.linsal...@gmail.com wrote:

 Yeah, its been a while since I had to get involved in this.  We have a
 customer with their own IPv4 allocation that wants us to announce a /27 for
 them. Back in the day, it was /24 or larger or all bets were off.  Is
 that still the case now?




Re: IPv6 /48 advertisements

2013-12-18 Thread Laszlo Hanyecz
It's standard to filter out anything longer than /48.

Your /36 prefix was chosen based on the number of sites, with a /48 per site, 
so just keep it simple.  Trying to manage it the way IPv4 addresses were 
managed will just ensure that you have the same headaches of micro-managing 
sub-allocations and trying to guess the right sizes.  The address space in v6 
is large enough that you don't have to spend your time worrying about this, 
and that's one of the reasons for using a /48 at each site.

Think of an IPv6 /48 like you would think of an IPv4 /24 - except that it's the 
right size for either your house or your university campus.
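
The arithmetic is simple enough to keep in your head, but for reference (plain 
shell, just as a calculator):

echo $((2 ** (48 - 36)))   # /48 sites in your /36: 4096
echo $((2 ** (64 - 48)))   # /64 subnets in each /48: 65536
echo $((2 ** (64 - 60)))   # /64 subnets in a /60: 16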

Laszlo


On Dec 18, 2013, at 4:11 PM, Cliff Bowles cliff.bow...@apollogrp.edu wrote:

 I accidentally sent this to nanog-request yesterday. I could use some 
 feedback from anyone that can help, please.
 
 Question: will carriers accept IPv6 advertisements smaller than /48?
 
 Our org was approved a /36 based on number of locations. The bulk of those 
 IPs will be in the data centers. As we were chopping up the address space, it 
 was determined that the remote campus locations would be fine with a /60 per 
 site. (16 networks of /64). There are usually less than 50 people at the 
 majority of these locations and only about 10 different functional VLANs 
 (Voice, Data, Local Services, Wireless, Guest Wireless, etc...).
 
 Now, there has been talk about putting an internet link in every campus 
 rather than back hauling it all to the data centers via MPLS. However, if we 
 do this, then would we need a /48 per campus? That is massively wasteful, at 
 65,536 networks per location.  Is the /48 requirement set in stone? Will any 
 carriers consider longer prefixes?
 
 I know some people are always saying that the old mentality of conserving 
 space needs to go away, but I was bitten by that IPv4 issue back in the day 
 and have done a few VLSM network overhauls. I'd rather not massively allocate 
 unless it's a requirement.
 
 Thanks in advance.
 
 CWB
 
 
 
 
 
 




Re: If you're on LinkedIn, and you use a smart phone...

2013-10-26 Thread Laszlo Hanyecz
When a user signs up for a social media account they generally do so by 
providing an email address like vic...@freewebmailsite.com and selecting a 
password.  The social media site can obviously probe freewebmailsite.com and 
attempt to authenticate using the same password that you just provided to them 
(for the purpose of logging into their social media site).  I guess offering an 
email proxy or asking if it's ok to worm through your email for contacts is 
merely a formality.  How many social media users do you guess use the same 
password on the social media site as they do for freewebmailsite.com (and 
likely their employer's email)?  It's kind of like when Google asks their users 
with Android phones to provide their mobile phone number for SMS password 
recovery.
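
If you run your own mail and want to check for this, here's a rough sketch - 
the log path and the rip= field are Dovecot-style and just an assumption about 
your setup:

# Top source addresses for IMAP logins; eyeball the list for netblocks
# that belong to social media companies rather than your users:
grep 'imap-login: Login' /var/log/mail.log | grep -o 'rip=[0-9a-f.:]*' \
  | sort | uniq -c | sort -rn | head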

Laszlo

On Oct 25, 2013, at 11:43 PM, Chris Hartley hartl...@gmail.com wrote:

 Anyone who has access to logs for their email infrastructure probably
 ought to check for authentications to user accounts from linkedin's
 servers.  Likely, people in your organization are entering their
 credentials into linkedin to add to their contact list.  Is it a
 problem if a social media company has your users' credentials?  I
 guess it depends on your definition of is.  The same advice might
 apply to this perversion of trust as well, but I'm not sure how
 linkedin is achieving this feat.
 
 On Fri, Oct 25, 2013 at 7:25 PM, Phil Bedard bedard.p...@gmail.com wrote:
 I saw some antectdotal stuff on this yesterday but reading their
 engineering blog entry makes me feel all warm and fuzzy inside.  Oh
 nevermind, that's just the alcohol.  This is perhaps one of the worst
 ideas I've seen concocted by a social media company yet.
 
 
 -Phil
 
 On 10/25/13, 6:56 PM, George Bakos gba...@alpinista.org wrote:
 
 next thing you know, Google is going to be offering free email so they
 can do the same thing.
 
 On Fri, 25 Oct 2013 08:45:40 -0700
 Shrdlu shr...@deaddrop.org wrote:
 
 I hate to do this, but it's something that anyone managing email
 servers (or just using a smart phone to update LI) needs to know
 about. I just saw this on another list I'm on, and I know that there
 are folks on NANOG that are on LinkedIn.
 
 ++
 http://www.bishopfox.com/blog/2013/10/linkedin-intro/
 
 LinkedIn released a new product today called Intro.  They call it
 "doing the impossible", but some might call it "hijacking email".
 Why do we say this?  Consider the following:
 
 Intro reconfigures your iOS device (e.g. iPhone, iPad) so that all of
 your emails go through LinkedIn's servers. You read that right. Once
 you install the Intro app, all of your emails, both sent and received,
 are transmitted via LinkedIn's servers. LinkedIn is forcing all your
 IMAP and SMTP data through their own servers and then analyzing and
 scraping your emails for data pertaining to... whatever they feel like.
 
 ++
 
 Read the full article. If you're using LI via your smart phone, and
 you have already installed this app, you probably need to save off
 your contacts and data, and wipe the phone. I wouldn't trust
 uninstalling as enough, myself. In the long run, I'll be deleting my
 account.
 
 No, I don't use a smart phone to update any social media. No, I
 especially do not trust LI (never have, never will). BTW, they're
 currently adding back any contacts you've deleted. Thanks for
 reminding me that Joe Barr, Len Sassaman, and Jay D Dyson are gone
 from this world.
 
 --
 Life may not be the party we hoped for, but while we are here,
 we might as well dance.
 
 
 
 
 --