Re: Tata Scenic routing in LAX area?

2018-11-15 Thread John Weekes

Marcus,

From the route-views output, it looks like AS9498/Airtel is probably 
leaking your route between two of its upstreams (AS6453/Tata and 
AS4637/Telstra), funneling some of your traffic overseas through its 
network.


route-views>sh ip bgp 23.92.178.22 | i 9498
  3356 6453 9498 4637 29791
  1403 6453 9498 4637 29791
  3549 3356 6453 9498 4637 29791
  19214 3257 6453 9498 4637 29791
  1403 6453 9498 4637 29791
  286 6453 9498 4637 29791
  53364 3257 6453 9498 4637 29791
  3257 6453 9498 4637 29791
  1239 6453 9498 4637 29791
  2497 6453 9498 4637 29791
  57866 6453 9498 4637 29791
  7660 2516 6453 9498 4637 29791
  701 6453 9498 4637 29791
  3561 209 6453 9498 4637 29791

You might try halting advertisements to your AS4637/Telstra peer while 
you contact AS9498.
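
For reference, the leak signature is just the leaking AS sandwiched between two of its transits in the AS path. A rough sketch of checking for that pattern in saved route-views output (the filename is an assumption; point it at whatever you collected):

# Rough sketch: flag AS paths where a suspected leaker (AS9498) sits between
# two of its transit providers (AS6453 and AS4637) -- the provider-to-provider
# leak signature visible in the route-views output above.
LEAKER = "9498"
TRANSITS = {"6453", "4637"}

def looks_like_leak(as_path):
    hops = as_path.split()
    for i in range(1, len(hops) - 1):
        if hops[i] == LEAKER and hops[i - 1] in TRANSITS and hops[i + 1] in TRANSITS:
            return True
    return False

# "routeviews.txt" is assumed to hold the saved "sh ip bgp ... | i 9498" output
with open("routeviews.txt") as f:
    for line in f:
        path = line.strip()
        if path and looks_like_leak(path):
            print("possible leak:", path)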


-John

On 11/15/2018 10:43 AM, Marcus Josephson wrote:


Anyone else seeing odd scenic routing in the LAX/SJE area for Tata?

traceroute to 23.92.178.22 (23.92.178.22), 30 hops max, 52 byte packets
 1  if-ae-13-2.tcore2.lvw-los-angeles.as6453.net (64.86.252.34)  180.698 ms  180.610 ms  181.712 ms
     MPLS Label=344269 CoS=0 TTL=1 S=1
 2  if-ae-7-2.tcore2.svw-singapore.as6453.net (180.87.15.25)  189.327 ms  if-ae-7-2.tcore2.svw-singapore.as6453.net (64.86.252.37)  176.800 ms  if-ae-7-2.tcore2.svw-singapore.as6453.net (64.86.252.39)  174.631 ms
     MPLS Label=609315 CoS=0 TTL=1 S=1
 3  if-ae-20-2.tcore1.svq-singapore.as6453.net (180.87.96.21)  174.287 ms  173.370 ms  173.804 ms
 4  120.29.215.202 (120.29.215.202)  179.104 ms  179.367 ms  179.324 ms
 5  182.79.152.247 (182.79.152.247)  180.164 ms  182.79.152.253 (182.79.152.253)  184.816 ms  182.79.152.247 (182.79.152.247)  250.928 ms
 6  unknown.telstraglobal.net (202.127.73.101) [AS 4637]  173.974 ms  173.986 ms  173.484 ms
 7  i-93.sgpl-core02.telstraglobal.net (202.84.224.189) [AS 4637]  175.094 ms  175.699 ms  174.343 ms
 8  i-10850.eqnx-core02.telstraglobal.net (202.84.140.46) [AS 4637]  280.686 ms  288.703 ms  280.836 ms
 9  i-92.eqnx03.telstraglobal.net (202.84.247.17) [AS 4637]  278.021 ms  276.637 ms  302.249 ms
10  equinix-ix.sjc1.us.voxel.net (206.223.116.4)  174.139 ms  174.163 ms  174.067 ms


Marcus Josephson

IP Operations

mjoseph...@inap.com







Re: Amazon network engineering contact? re: DDoS traffic

2018-11-08 Thread John Weekes

Zach,

Yes, RTBH is used to distribute the null-routes that I mentioned.

Unfortunately, even brief saturation events lasting just 5-10 seconds (a 
typical amount of time to detect the loss, issue the null-route, and see 
the traffic start to fall off as it is distributed upstream) can cause 
real damage to those customers who are sensitive to latency and packet 
loss. So while null-routes limit the duration of the impact, they can't 
eliminate it entirely. And, of course, the actual target of the attack 
-- the now-null-routed IP address -- becomes unreachable, which was 
presumably the goal of the attacker.
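
If anyone wants to automate the trigger side, a minimal sketch using ExaBGP's process API is below. The next-hop and the community are assumptions to check against your own setup: 65535:666 is the RFC 7999 BLACKHOLE community, but many upstreams expect their own value.

#!/usr/bin/env python3
# Rough sketch of an ExaBGP "process" script that injects a /32 blackhole.
# ExaBGP reads announce/withdraw commands from this process's stdout.
import sys
import time

def blackhole(prefix, next_hop="192.0.2.1", community="65535:666"):
    # 65535:666 is the RFC 7999 BLACKHOLE community; many transits use their own.
    sys.stdout.write(
        "announce route %s next-hop %s community [%s]\n" % (prefix, next_hop, community)
    )
    sys.stdout.flush()

if __name__ == "__main__":
    blackhole("203.0.113.45/32")   # hypothetical attack target
    while True:                    # stay alive so ExaBGP keeps the route announced
        time.sleep(60)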


-John

On 11/8/2018 12:54 PM, Zach Puls wrote:

No idea about an Amazon abuse contact, but do you have RTBH communities enabled 
with your upstream provider(s)? As a general practice, when you detect a (D)DoS 
attack in progress, it would help to automatically advertise that prefix to 
your upstream(s) with the black-hole community. This would at least help 
mitigate the effects of the attacks when they do occur, even if they come from 
a different source than AWS.

Thanks,

Zach Puls
Network Engineer | MEF-CECP
KsFiberNet

-Original Message-
From: NANOG  On Behalf Of John Weekes
Sent: Thursday, November 08, 2018 14:44
To: nanog@nanog.org
Subject: Amazon network engineering contact? re: DDoS traffic

We've been seeing significant attack activity from Amazon over the last two 
months, involving apparently compromised instances that commonly send 1-10G of 
traffic per source and together generate Nx10G of total traffic. Even when our 
overall upstream capacity exceeds an attack's overall size, the nature of 
load-balancing over multiple 10G upstream links means that an individual link 
can be saturated by multiple large flows, forcing our systems to null-route the 
target to limit impact.

We've sent an abuse notification about every traffic source to Amazon, and 
specific sources seem to stop their involvement over time (suggesting that 
abuse teams are following up on them), but there is an endless parade of new 
attackers, and each source participates in many damaging attacks before it is 
shut down.

Is there anyone at Amazon who can help with an engineering solution in terms of 
programmatically detecting and rate-limiting attack traffic sources, to our 
networks or overall? Or applying the kludge of a rate-limit for all Amazon 
traffic to our networks? Or working with us on some other option?

At least one other large cloud provider has an automatic rate-limiting system 
in place that is effective in reducing the damage from repeat high-volume 
attacks.

Emails to the Amazon NOC, peering contacts (since that would be another 
possible solution), and abuse department have not connected me with anyone.

Thanks,
John





Amazon network engineering contact? re: DDoS traffic

2018-11-08 Thread John Weekes
We've been seeing significant attack activity from Amazon over the last 
two months, involving apparently compromised instances that commonly 
send 1-10G of traffic per source and together generate Nx10G of total 
traffic. Even when our overall upstream capacity exceeds an attack's 
overall size, the nature of load-balancing over multiple 10G upstream 
links means that an individual link can be saturated by multiple large 
flows, forcing our systems to null-route the target to limit impact.
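
To illustrate the load-balancing point: member links are normally chosen by hashing the flow's 5-tuple, so a handful of multi-gigabit flows can land on the same 10G link even when the aggregate attack size is below aggregate capacity. A rough sketch of the effect -- the hash, link count, and flow sizes here are purely illustrative:

# Rough sketch of the load-balancing problem: per-flow (5-tuple) hashing means
# a few large flows can pile onto one member link even though the total attack
# is smaller than the total upstream capacity.
import random
import zlib

LINKS = 4                 # e.g. four 10G upstreams = 40G aggregate
LINK_CAPACITY_G = 10.0
GBPS_PER_FLOW = 5.0       # six 5G flows = 30G total, well under 40G
flows = [("198.51.100.%d" % random.randint(1, 254), "203.0.113.10",
          17, random.randint(1024, 65535), 80) for _ in range(6)]

usage = [0.0] * LINKS
for flow in flows:
    link = zlib.crc32(repr(flow).encode()) % LINKS   # stand-in for the router's hash
    usage[link] += GBPS_PER_FLOW

for i, g in enumerate(usage):
    flag = "  <-- saturated" if g > LINK_CAPACITY_G else ""
    print("link %d: %.0fG of %.0fG%s" % (i, g, LINK_CAPACITY_G, flag))
# Run it a few times: some placements put 15G or more on a single 10G link.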


We've sent an abuse notification about every traffic source to Amazon, 
and specific sources seem to stop their involvement over time 
(suggesting that abuse teams are following up on them), but there is an 
endless parade of new attackers, and each source participates in many 
damaging attacks before it is shut down.


Is there anyone at Amazon who can help with an engineering solution in 
terms of programmatically detecting and rate-limiting attack traffic 
sources, to our networks or overall? Or applying the kludge of a 
rate-limit for all Amazon traffic to our networks? Or working with us on 
some other option?


At least one other large cloud provider has an automatic rate-limiting 
system in place that is effective in reducing the damage from repeat 
high-volume attacks.


Emails to the Amazon NOC, peering contacts (since that would be another 
possible solution), and abuse department have not connected me with anyone.


Thanks,
John


Re: Spitballing IoT Security

2016-10-30 Thread John Weekes

On 10/29/2016 9:43 PM, Eric S. Raymond wrote:

I in turn have to call BS on this.  If it were really that easy, we'd
be inundated by Mirais -- we'd have several attacks a*day*.


Some of us are seeing many significant attacks a day.

That's because botnets are frequently used to hit game servers and game 
players. In fact, the Mirai-targeted devices were not newly-seen; 
easily-exploited devices like older DVRs have been observed for years in 
attacks on game servers. The main difference in the recent botnet 
attacks (mostly, 2016) is that they have been larger and more frequent, 
likely because of incremental improvements to scanners (including in 
time-to-exploitation, which is important to building the botnet because 
these devices are so frequently rebooted) and payloads (to better block 
further exploitation by competitors). If you run a honeypot and take a 
look at what happens to one of these devices over time, you'll see an 
interesting tug-of-war between many different actors that are 
compromising them and running their own binaries.


Reflection attacks are still common, as well, of course. Previously, 
those were the ones that created the largest flows. But, the 
higher-amplification-factor reflection attacks can be mostly mitigated 
upstream with basic ACLs (as long as the upstream is willing to help, 
and has the internal capacity to do it; many NSPs do not). It is not 
uncommon to see a botnet attack at the same time as a reflection attack.
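
For anyone wondering what those ACLs actually key on, it's essentially the UDP source port (optionally combined with packet size). A rough sketch of the same test applied to sampled flow data -- the ports, threshold, and record format here are illustrative only:

# Rough sketch: flag likely reflection/amplification traffic in sampled flow
# records by UDP source port -- the same criterion a basic upstream ACL matches.
REFLECTION_PORTS = {19: "chargen", 53: "dns", 123: "ntp", 161: "snmp", 1900: "ssdp"}
UDP = 17

def classify(src_port, proto, avg_pkt_bytes):
    """Return a service label if a sampled flow looks like reflection traffic."""
    if proto == UDP and src_port in REFLECTION_PORTS and avg_pkt_bytes > 200:
        return REFLECTION_PORTS[src_port]
    return None

# hypothetical sampled flow records: (src_port, proto, average packet size)
samples = [(123, UDP, 468), (1900, UDP, 300), (443, 6, 1400)]
for src_port, proto, size in samples:
    label = classify(src_port, proto, size)
    if label:
        print("likely %s reflection: src_port=%d avg_size=%d" % (label, src_port, size))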


-John


Re: Death of the Internet, Film at 11

2016-10-25 Thread John Weekes

On 10/24/2016 9:37 PM, b...@theworld.com wrote:

As I've suggested before how much would you attribute this to a lack
of English skills by recipients?


I do not think that is a significant factor.

Here are some points along those lines:

- ab...@cnc-noc.net times out. It's not a matter of whether they know 
English; they just don't accept the email.
- Some Hong Kong ISPs /do/ respond and ask questions. In English. (As 
does a sampling of other foreign ISPs around the world, including those 
in Japan, Europe, Russia, etc. -- but mainland China is consistently 
silent.)
- The major Chinese players (including China Mobile, China Telecom, and 
China Unicom) are some of the largest companies in the world, with just 
China Mobile having 241,550 employees, according to their 2014 annual 
report. It is unlikely that they don't have internal translation 
capabilities. I also have no doubt that they have a large NOC, and they 
could have a large abuse team (but perhaps choose not to). Large teams 
are more likely to have some bilingual members, and English is a very 
common second language.
- These large Chinese companies are global, with PoPs inside the U.S. and 
peering with U.S. providers. They sell services to, and interact with, 
companies around the world, including in English.
- I have had others tell me that engineers at these Chinese providers 
contact them for peering upgrades in English -- but that they ignore 
abuse concerns communicated over the same channels.
- Knowing English is not necessary to read tcpdump output, recognize 
attack traffic, and check IP addresses. Recipients don't have to respond, 
so that is most of what they need to do.

- It's not hard to use online translation services.
- It's not hard to respond back and say "Use Mandarin" (or the 
equivalent, in their preferred language).
- I tried sending emails to Russian providers in Russian for a time. I 
received quite a few responses back along the lines of "please just use 
English." This has made me think twice about trying to pre-translate.



Are they all sent in English?


Currently, mine are.


Just curious but one wonders what most here would do with an abuse
complaint sent to them in Chinese?


If I were to receive one in Chinese, I would personally paste it into 
Google Translate. That is what I do with Japanese complaints/responses, 
which are the main ones I see that aren't in English. Most other ISPs 
seem to use straight English, or both English and another language.


-John


Re: Death of the Internet, Film at 11

2016-10-23 Thread John Weekes

On 10/23/2016 4:19 PM, Ronald F. Guilmette wrote:



... I've recorded
about 2.4 million IP addresses involved in the last two months (a number
that is higher than the number of actual devices, since most seem to
have dynamic IP addresses). The ISPs behind those IP addresses have
received notifications via email...

Just curious... How well is that working out?


For the IoT botnets, most of the emails are ignored or rejected, because 
most go to providers who either quietly bitbucket them or flat-out 
reject all abuse emails. Most emails sent to mainland China, for 
instance, are in that category (Hong Kong ISPs are somewhat better).


For other botnets, such as those using compromised webservers running 
outdated phpMyAdmin installs at random hosts, harnessing spun-up 
services at reputable VPS providers (Amazon, Microsoft, Rackspace, 
etc.), or harnessing devices at large and small US and Canadian ISPs, we 
have had better luck. Usually, we don't hear a response back, but those 
emails are often forwarded to the end-user, who takes action (and may 
ask us for help, which is how we know they are being forwarded). The 
fixes can be enough to reduce attack volumes to more manageable levels.


Kudos go out to the large and small ISPs and NSPs who have started 
policing SSDP and other reflection traffic, which we also send out some 
notifications for. In some cases, it may be that our emails spurred them 
to notice how much damage those attacks were doing and how much it was 
costing them to carry the attack traffic.



I've tried this myself a few times in the past, when I've found things
that appear to be seriously compromised, and for my extensive trouble
I've mostly received back utter silence and no action.  I remember that
after properly notifying security@ some large end-luser cable network
in the SouthEast (which shall remain nameless) I got back something
along the lines of "Thank you.  We'll look into it." and was disgusted
to find, two months later, that the boxes in question were still utterly
pwned and in the exact same state they were two months prior, when I
had first reported them.


We do get our share of that, as well, unfortunately, along with our 
share of people who send angry responses calling the notifications spam 
(I disagree with them that sending a legitimate abuse notification to a 
publicly-posted, designated abuse account should be considered spam) or 
who flame us for acting like "internet police". But, we persist. Some 
people change their minds after receiving multiple notifications or 
after we explain that DoS traffic costs them money and hurts their 
customers, who will be experiencing degraded service and may silently 
switch providers over it.



I guess that's just an example of what somebody else already noted here,
i.e. that providers don't care to spend the time and/or effort and/or
money necessary to actually -do- anything about compromised boxes, and
anyway, they don't want to lose a paying customer.

So, you know, let's just say for the sake of argument that right now,
today, I know about a botnet consisting of a quarter million popped
boxes, and that I have in-hand all of the relevant IPs, and that I
have no trouble finding contact email addresses for all of the relevant
ASNs.  So then what?


I use scripts to send out an abuse notification to some percentage of 
the compromised hosts -- the ones sending some significant amount of the 
traffic. The notification includes a description of what we saw and 
timestamped example attack traffic, as interpreted by tcpdump. If 
further traffic is seen later from the same host, another notification 
will be sent, after a cool-off period.
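
The cool-off bookkeeping is the only part that needs state; a stripped-down sketch of that piece is below (the state file, window, and send function are hypothetical stand-ins, not our actual tooling):

# Rough sketch of the cool-off logic: only re-notify about a given source IP
# if enough time has passed since the last notification for it.
import json
import time

COOL_OFF_SECONDS = 7 * 24 * 3600      # illustrative window
STATE_FILE = "notified.json"          # hypothetical local state file

def load_state():
    try:
        with open(STATE_FILE) as f:
            return json.load(f)
    except FileNotFoundError:
        return {}

def save_state(state):
    with open(STATE_FILE, "w") as f:
        json.dump(state, f)

def should_notify(state, src_ip, now=None):
    now = time.time() if now is None else now
    return (now - state.get(src_ip, 0)) > COOL_OFF_SECONDS

state = load_state()
for src_ip in ("192.0.2.10", "192.0.2.11"):          # hypothetical attack sources
    if should_notify(state, src_ip):
        # send_abuse_email(src_ip, tcpdump_excerpt)   # hypothetical sender
        state[src_ip] = time.time()
save_state(state)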


The emails are plain text, and we don't try to use them as advertising. 
We also don't force a link to be clicked to see more details or to 
respond. I don't like to receive such emails myself and have found that 
those types are more likely to be ignored.



The question is:  Why should I waste my time informing all, or even any
of these ASNs about the popped boxes on their networks when (a) I am
not their customer... as many of them have been only too happy to
gleefully inform me in the past... and when (b) the vast majority
simply won't do anything with the information?


I'm not saying that everyone should send abuse notifications like we do, 
since it can be a big task. But, in response to someone wondering if 
their network is being used for attacks, or asking how they could help 
to police their own network, I am saying that making sure that inbound 
abuse notifications are arriving at the right place and being handled 
appropriately is important.



And while we are on the subject, I just have to bring up one of my
biggest pet peeves.  Why is it that every time some public-spirited
altrusitc well-meaning citizen such as myself reports any kind of a
problem to any kind of a company on the Internet, the report itself
gets immediately labeled and categorized as a "complaint".  

Re: Death of the Internet, Film at 11

2016-10-22 Thread John Weekes




Ok, so this mailing list is a list of network operators.  Swell.  Every
network operator who can do so, please raise your hand if you have
*recently* scanned your own network and if you can -honestly- attest
that you have taken all necessary steps to ensure that none of the
numerous specific types of CCTV thingies that Krebs and others identified
weeks or months ago as being fundamentally insecure can emit a single
packet out onto the public Internet.


Most of the time, scanning of your customers isn't strictly necessary, 
though it certainly won't hurt.


That's because attackers will scan your network /for/ you, compromise 
the hosts, and use them to attack. When they inevitably attack one of my 
customers, I'll send you an abuse email. Some other networks do the 
same. So if you want to help, the real keys are to make sure that you 
disallow spoofing, that the RIR has up-to-date contact information for 
your organization, and that you handle abuse notifications effectively.
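
A quick way to check which abuse address the RIRs are actually publishing for your space is an RDAP query. A rough sketch follows -- I'm assuming the rdap.org bootstrap redirector here; you can hit your RIR's RDAP server directly instead, and some registries nest the abuse entity deeper than this walks:

# Rough sketch: look up the published abuse contact for an IP via RDAP.
import json
import urllib.request

def abuse_emails(ip):
    with urllib.request.urlopen("https://rdap.org/ip/%s" % ip) as resp:
        data = json.load(resp)
    emails = []
    for ent in data.get("entities", []):
        if "abuse" in ent.get("roles", []):
            vcard = ent.get("vcardArray", ["vcard", []])[1]
            emails += [item[3] for item in vcard if item[0] == "email"]
    return emails

print(abuse_emails("198.51.100.1"))   # substitute an address in your own space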


Large IoT botnets have been used extensively this year, launching 
frequent 100+ Gbps attacks (they were also used in prior years, but it 
wasn't to the degree that we've seen since January 2016). I've recorded 
about 2.4 million IP addresses involved in the last two months (a number 
that is higher than the number of actual devices, since most seem to 
have dynamic IP addresses). The ISPs behind those IP addresses have 
received notifications via email, so if you haven't heard anything, 
you're probably in good shape, assuming the RIR has the right abuse 
address on file for you.


The bulk of the compromised devices are outside North America. In a relatively small 40 
Gbps IoT attack a couple of days ago, we saw about 20k devices, for 
instance, and most were from a mix of China, Brazil, Russia, Korea, and 
Venezuela.


-John


Re: 20-30Gbps UDP 1720 traffic appearing to originate from CN in last 24 hours

2015-07-20 Thread John Weekes

Ca,


Folks, it may be time to  take the next step and admit that UDP is too
broken to support

https://tools.ietf.org/html/draft-byrne-opsec-udp-advisory-00

Your comments have been requested


My comment would be that UDP is still widely used for game server 
traffic. This is unlikely to change in the near future because TCP (by 
default) is not well-suited for highly time-sensitive data, as even a 
small amount of packet loss causes significant delays.


In light of this, it is a bad idea for network operators to apply 
overall rate-limits to UDP traffic right now. Rate-limiting specific UDP 
/ports/ that are frequently seen in reflection attacks -- such as 19, 
123, and 1900 -- is a more reasonable practice, however, and it is 
becoming more common.


UDP-based application protocols can be implemented correctly, such that 
they also have handshakes that limit their ability to be used for 
reflection attacks, and modern services (including modern game servers) 
do this.
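
For anyone curious what "implemented correctly" looks like, the core idea is source-address validation: never send a response larger than the request to an address you haven't verified. A stripped-down sketch (the port and message format are made up for illustration):

# Rough sketch of an anti-reflection UDP handshake: the first packet from any
# source gets only a small challenge cookie; the large response is sent only
# after the client echoes the cookie, proving it owns its source address.
import hashlib
import hmac
import os
import socket

SECRET = os.urandom(16)

def cookie_for(addr):
    # stateless cookie bound to the client's IP and port
    msg = ("%s:%d" % addr).encode()
    return hmac.new(SECRET, msg, hashlib.sha256).digest()[:8]

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 27015))                 # illustrative port

while True:
    data, addr = sock.recvfrom(2048)
    if data.startswith(b"QUERY") and data[5:13] == cookie_for(addr):
        # verified source: safe to send the big reply
        sock.sendto(b"FULL_RESPONSE" + b"." * 400, addr)
    else:
        # unverified source: small, bounded reply only
        sock.sendto(b"CHALLENGE" + cookie_for(addr), addr)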


TCP and UDP can both be spoofed and used for direct attacks; we see this 
all the time. UDP is preferred due to many application protocols' 
susceptibility to amplification attacks, but spoofed TCP attacks are 
often a bit thornier to deal with from the standpoint of a host 
attempting to externally mitigate, because tracking the three-way 
handshake requires keeping state.


I spoke with Drew earlier and his attacks do not appear to be reflected, 
so this is orthogonal to his concern today. He is seeing 
directly-generated traffic, which could use any protocol.


-John


Re: Cogent / Internap issue ??

2014-05-27 Thread John Weekes

On 5/27/2014 11:24 AM, Matthew Huff wrote:

We are having troubles reaching services on the other side of cogent/internap 
peering. Anyone else seeing issues?


We haven't seen Cogent-related issues at SEF today and that IP address 
is currently pingable through the Cogent looking glass. From your trace, 
Internap's equipment is reachable and responding (the two last hops are 
Internap-controlled), so the actual endpoint network (beyond Internap) 
seems to be either explicitly filtering your source or sending traffic 
back to it over a different and broken path.


The Internap NOC is responsive when it comes to investigating routing 
problems, so they're a good place to turn for such concerns. If you're 
having problems reaching them or would like some external help in 
exploring the problem from inside and outside that PNAP, shoot me an 
email off-list.


-John


Re: Filter NTP traffic by packet size?

2014-02-20 Thread John Weekes

On 2/20/2014 12:41 PM, Edward Roels wrote:

Curious if anyone else thinks filtering out NTP packets above a certain
packet size is a good or terrible idea.

 From my brief testing it seems 90 bytes for IPv4 and 110 bytes for IPv6 are
typical for a client to successfully synchronize to an NTP server.

If I query a server for its list of peers (ntpq -np ip) I've seen 
packets as large as 522 bytes in a single packet in response to a 54 byte 
query.  I'll admit I'm not 100% clear on what is happening protocol-wise 
when I perform this query.  I see there are multiple packets back and 
forth between me and the server depending on the number of peers it has?

Would I be breaking something important if I started to filter NTP 
packets > 200 bytes into my network?


If your equipment supports this, and you're seeing reflected NTP 
attacks, then it is an effective stopgap to block nearly all of the 
inbound attack traffic to affected hosts. Some still comes through from 
NTP servers running on nonstandard ports, but not much.


Standard IPv4 NTP response packets are 76 bytes (plus any link-level 
headers), based on my testing. I have been internally filtering packets 
of other sizes against attack targets for some time now with no ill effect.
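
If you want to sanity-check that figure yourself: the NTP payload in a normal client exchange is 48 bytes, which plus 20 bytes of IPv4 header and 8 bytes of UDP header gives the 76 on the wire. A rough sketch (the server name is just an example):

# Rough sketch: measure a normal NTP client exchange. Expect a 48-byte payload,
# i.e. 76 bytes on the wire once the 20-byte IPv4 and 8-byte UDP headers are added.
import socket

request = b"\x23" + 47 * b"\x00"     # LI=0, VN=4, Mode=3 (client), rest zeroed
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(3)
sock.sendto(request, ("pool.ntp.org", 123))   # example server
response, _ = sock.recvfrom(512)

print("NTP payload: %d bytes" % len(response))            # expect 48
print("IPv4 packet: %d bytes" % (20 + 8 + len(response))) # expect 76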


-John