Re: Dropping support for the .ru top level domain

2022-03-14 Thread Denys Fedoryshchenko
As bad as it is to break an Internet service, the technical side of your 
idea is even worse.
Given that there is an agency in Russia with the ability to intercept and 
modify all DNS queries, countering your "idea" is trivial. They will just 
route the root servers locally and set up their own zones.

And even if they don't, replacing the root hints in a recursor is trivial.
It will take a lot less time than reaching an "authoritative consensus".
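A minimal sketch of how trivial that replacement is, assuming a resolver that accepts a BIND-format hints file; the server names and addresses below are hypothetical placeholders in documentation address space, not real national root mirrors:

```python
# Sketch: generating a replacement root-hints file for a recursive resolver.
# Any common recursor (BIND, Unbound, PowerDNS Recursor) can be pointed at
# such a file instead of the IANA root hints.

HYPOTHETICAL_ROOTS = {
    "a.root.example.net.": "203.0.113.1",  # TEST-NET-3 documentation addresses
    "b.root.example.net.": "203.0.113.2",
}

def make_root_hints(roots: dict) -> str:
    """Render a BIND-style hints zone: NS records for '.', then glue A records."""
    lines = [f".\t3600000\tIN\tNS\t{name}" for name in roots]
    lines += [f"{name}\t3600000\tIN\tA\t{addr}" for name, addr in roots.items()]
    return "\n".join(lines) + "\n"

print(make_root_hints(HYPOTHETICAL_ROOTS))
```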

But the violation of neutrality will cause colossal harm: once each 
country starts building sovereign root servers "just in case", along with 
its own DNSSEC, RIRs, CAs and so on, the damage to the rest of the world 
will be far more significant.

Please, people who generate such delusional ideas: stop trying to disrupt 
the neutrality of the Internet.
If you want to get involved in a war, go there; do not drag the rest of 
the world into the conflict.


On 2022-03-12 12:47, Patrick Bryant wrote:

I don't like the idea of disrupting any Internet service. But the
current situation is unprecedented.

The Achilles Heel of general public use of Internet services has
always been the functionality of DNS.

Unlike Layer 3 disruptions, dropping or disrupting support for the .ru
TLD can be accomplished without disrupting the Russian population's
ability to access information and services in the West.

The only countermeasure would be the distribution of Russian national
DNS zones to a multiplicity of individual DNS resolvers within Russia.
Russian operators are in fact implementing this countermeasure, but it
is a slow and arduous process, and it will entail many of the
operational difficulties that existed with distributing Host files,
which DNS was implemented to overcome.

The .ru TLD could be globally disrupted by dropping the .ru zone from
the 13 DNS root servers. This would be the most effective action, but
would require an authoritative consensus. One level down in DNS
delegation are the 5 authoritative servers. I will leave it to the
imagination of others to envision what action could be taken
there...

ru  nameserver = a.dns.ripn.net [1]
ru  nameserver = b.dns.ripn.net [2]
ru  nameserver = d.dns.ripn.net [3]
ru  nameserver = e.dns.ripn.net [4]
ru  nameserver = f.dns.ripn.net [5]

The impact of any action would take time (days) to propagate.



Links:
--
[1] http://a.dns.ripn.net
[2] http://b.dns.ripn.net
[3] http://d.dns.ripn.net
[4] http://e.dns.ripn.net
[5] http://f.dns.ripn.net


Re: Is soliciting money/rewards for 'responsible' security disclosures when none is stated a thing now?

2022-03-04 Thread Denys Fedoryshchenko

This is a typical "beg bounty":
https://www.troyhunt.com/beg-bounties/

On 2022-03-03 00:30, Brie wrote:

I just got this in my e-mail...

--
From: xxx 
Date: Thu, 3 Mar 2022 03:14:03 +0500
Message-ID: 
Subject: Found Security Vulnerability
To: undisclosed-recipients:;
Bcc: sxx...@ahbl.org

Hi  Team

I am a web app security hunter. I spent some time on your website and 
found

some vulnerabilities. I see on your website you take security very
passionately.

 Tell me will you give me rewards for my finding and responsible
disclosure? if Yes, So tell me where I send those vulnerability 
reports?

share email address.

Thank you

Good day, I truly hope it treats you awesomely on your side of the 
screen :)


x Security
--


Is soliciting for money/rewards when the site makes no indication they
offer them a common thing now?

If you want to see a copy of the original message, let me know off
list and I'll send it to you.


Re: Russian aligned ASNs?

2022-02-28 Thread Denys Fedoryshchenko

AFAIK they stopped doing that only because they are no longer being 
droned. When they were being killed, it was because cell towers were 
being used by coordinators and as a source of information.

Which once again reminds us that if a telecom does not stay as neutral as 
possible, or worse, sides with one of the conflicting parties, it will
become a legitimate target.
To some extent, it resembles the situation with medics.


On 2022-02-25 23:33, Eric Kuhnke wrote:

The four LTE (3GPP rev-whatever) based networks in Afghanistan are all
still operational. Roshan, AWCC, MTN, Etisalat.

In .AF the line between ISP and MNO is very blurry since 98% of
internet using customers do not have fixed line service at home or
office and use a mobile network instead.

These have developed a great deal of institutional knowledge operating
in very difficult conditions. The major change now is that the Taliban
is no longer burning tower site cabinets/shelters.

On Fri, 25 Feb 2022 at 12:20, scott  wrote:


My friend just got a phone call.  Electricity, cell phones and
internet are all functional at this time.


--

Just imagine what it must be like trying to keep those IP networks
functional at a time like this.  Configuring routers while under
fire...
Those engineers should get some kind of award...

scott


Re: IPv6 woes - RFC

2021-09-22 Thread Denys Fedoryshchenko

On 2021-09-19 09:20, Masataka Ohta wrote:

John Levine wrote:


Unless their infrastructure runs significantly on hardware and
software pre-2004 (unlikely), so does the cost of adding IPv6 to
their content servers. Especially if they’re using a CDN such as
Akamai.


I wasn't talking about switches and routers.


But, on routers, IPv6 costs four times more than IPv4 to
look up routing table with TCAM or Patricia tree.

It is not a problem yet, merely because full routing table of
IPv6 is a lot smaller than that of IPv4, which means most
small ISPs and multihomed sites do not support IPv6.
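The 4x figure quoted above can be illustrated with a toy, uncompressed one-bit-per-step trie. Real routers use TCAM or path-compressed (Patricia) tries, so the real constant varies; this is only a sketch of how worst-case lookup depth scales with address length, not of router internals:

```python
# Worst-case lookup depth in a one-bit-per-step trie equals the address
# length, so a 128-bit IPv6 lookup walks up to 4x as many nodes as a
# 32-bit IPv4 lookup.

class TrieNode:
    __slots__ = ("children", "nexthop")
    def __init__(self):
        self.children = [None, None]
        self.nexthop = None

def insert(root, prefix_bits, nexthop):
    node = root
    for b in prefix_bits:
        if node.children[b] is None:
            node.children[b] = TrieNode()
        node = node.children[b]
    node.nexthop = nexthop

def lookup(root, addr_bits):
    """Longest-prefix match; returns (nexthop, nodes_visited)."""
    node, best, visited = root, None, 0
    for b in addr_bits:
        if node.children[b] is None:
            break
        node = node.children[b]
        visited += 1
        if node.nexthop is not None:
            best = node.nexthop
    return best, visited

def bits(value, width):
    return [(value >> (width - 1 - i)) & 1 for i in range(width)]

v4, v6 = TrieNode(), TrieNode()
insert(v4, bits(0xC0A80000, 32), "v4-hop")           # 192.168.0.0/32 host route
insert(v6, bits(0x20010DB8 << 96, 128), "v6-hop")    # 2001:db8::/128 host route
_, v4_steps = lookup(v4, bits(0xC0A80000, 32))
_, v6_steps = lookup(v6, bits(0x20010DB8 << 96, 128))
print(v4_steps, v6_steps, v6_steps // v4_steps)      # 32 128 4
```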


Mark Andrews wrote:


There is nothing at the protocol level stopping AT&T offering a
similar level of service.


Setting up reverse DNS lookup for a 16-byte address is annoying,
which may stop AT&T offering it.
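The annoyance is visible from the ip6.arpa format itself: a PTR name for a 128-bit address is 32 single-nibble labels, one per hex digit of the fully expanded address. Python's stdlib ipaddress module can generate it:

```python
# Reverse-DNS names: 32 nibble labels for IPv6 vs 4 labels for IPv4.
import ipaddress

addr = ipaddress.ip_address("2001:db8::1")
print(addr.reverse_pointer)
# 1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.8.b.d.0.1.0.0.2.ip6.arpa

print(ipaddress.ip_address("192.0.2.1").reverse_pointer)
# 1.2.0.192.in-addr.arpa
```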


Don’t equate poor implementation with the protocol being broken.


IPv6 is broken in several ways. One of the worst thing is its
address length.

Masataka Ohta

+1
A different scope of problem: inexpensive software BRAS solutions 
(PPPoE/IPoE). Enabling IPv6 noticeably increased neighbour-table usage 
and lookup cost in benchmark profiling, because the BRAS now has to keep 
an IPv6 /64 + MAC entry for every user.
Another hit is neighbor discovery on a device with 10k IPoE termination 
VLANs and privacy extensions.
Also, I wonder whether this has changed: 
https://blog.bimajority.org/2014/09/05/the-network-nightmare-that-ate-my-week/
Another problem is privacy extensions and IoT: they are not supported in 
the lwIP stack shipped with most IoT SoCs. As far as I can see in git, 
support has not been added yet either.
And SLAAC vs DHCPv6: again, the first lacks some critical features, and 
the second is often not implemented properly.


As many say, each of these is tiny, just drops of mess and complexity, 
but the ocean is made up of tiny drops. All these little things add up to 
the point where very few want to deal with v6.


Re: Does anybody here have a problem

2021-08-12 Thread Denys Fedoryshchenko

List-Id: North American Network Operators Group 
IMO good enough for mail filters.

On 2021-08-10 19:20, Mike Hammett wrote:

Are you referring to mailing lists that lack some kind of added prefix
to the subject?

-
Mike Hammett
Intelligent Computing Solutions [1]
 [2] [3] [4] [5]
Midwest Internet Exchange [6]
 [7] [8] [9]
The Brothers WISP [10]
 [11] [12]

-

From: "C. A. Fillekes" 
To: "NANOG mailing list" 
Sent: Monday, August 9, 2021 6:43:50 PM
Subject: Does anybody here have a problem

telling the difference between their NANOG and SCA mail?

since I stopped getting both in digest form, maybe it's easier to mix
the two up by mistake.



Links:
--
[1] http://www.ics-il.com/
[2] https://www.facebook.com/ICSIL
[3] https://plus.google.com/+IntelligentComputingSolutionsDeKalb
[4] https://www.linkedin.com/company/intelligent-computing-solutions
[5] https://twitter.com/ICSIL
[6] http://www.midwest-ix.com/
[7] https://www.facebook.com/mdwestix
[8] https://www.linkedin.com/company/midwest-internet-exchange
[9] https://twitter.com/mdwestix
[10] http://www.thebrotherswisp.com/
[11] https://www.facebook.com/thebrotherswisp
[12] https://www.youtube.com/channel/UCXSdfxQv7SpoRQYNyLwntZg


Re: russian prefixes

2021-07-30 Thread Denys Fedoryshchenko

On 2021-07-30 18:45, Christopher Morrow wrote:

On Fri, Jul 30, 2021 at 10:57 AM Christopher Morrow
 wrote:


On Thu, Jul 29, 2021 at 9:07 PM Denys Fedoryshchenko
 wrote:


On 2021-07-29 20:46, Randy Bush wrote:

Looks like it did shown on news only.


:)

i wondered

They have installed devices called "TSPU" on major operators.
Isolation of specific networks is done without changing BGP
announcements, obviously.


Denys, can you say anything about how these TSPU operate?


Denys is, I'm sure, 'lmgtfy'ing me right now but:

https://therecord.media/academics-russia-deployed-new-technology-to-throttle-twitters-traffic/

https://en.wikipedia.org/wiki/Internet_censorship_in_Russia#Deep_packet_inspection

seems to be the system/device in question.
There is nothing magical or special about these devices: the usual inline 
DPI with IDS/IPS functionality, installed between the BRAS and the CGNAT.
Here are the specs/description for one of them: 
https://www.rdp.ru/en/products/service-gateway-engine/
They also sell them abroad. Anybody want to install one? (Insert an 
emoticon that laughs and weeps at the same time.)





I believe they at least swallow/stop TCP SYN packets toward some
destinations
(or across a link generally), but I'm curious as to what steps the
devices take,
to be able to judge impact seen as either: "broken gear" or "funky
TPSU doing it's thing"
They are fully inline, so they can do anything they want without 
informing the ISP. For example, making a network engineer lose the rest 
of his mind searching for a network fault, while it's really the "TSPU 
doing its thing".



thanks!
-chris


And the drills do not at all mean "we will turn off the Internet for all
the clients and see what happens"; journalists trivialized it.
Most likely, they checked the autonomous functioning of specific
infrastructurally important networks connected to the Internet,
isolating only them.
It's not such a bad idea in general: if someone finds another
significant bug in common software, being able to isolate important
networks from the Internet at the click of a button buys time for
patching systems.


Re: russian prefixes

2021-07-29 Thread Denys Fedoryshchenko

On 2021-07-29 20:46, Randy Bush wrote:

Looks like it did shown on news only.


:)

i wondered

They have installed devices called "TSPU" on major operators.
Isolation of specific networks is done without changing BGP 
announcements, obviously.
And the drills do not at all mean "we will turn off the Internet for all 
the clients and see what happens"; journalists trivialized it.
Most likely, they checked the autonomous functioning of specific 
infrastructurally important networks connected to the Internet, 
isolating only them.
It's not such a bad idea in general: if someone finds another significant 
bug in common software, being able to isolate important networks from the 
Internet at the click of a button buys time for patching systems.


Re: Call for academic researchers (Re: New minimum speed for US broadband connections)

2021-05-31 Thread Denys Fedoryshchenko

It can't be zero.
The 1000BASE-T spec considers a BER of 1 error in 10^10 bits acceptable 
on each link.
So packet loss should be defined the same way, as an acceptable BER.
And measured up to which point? How?
The same goes for bandwidth: the port rate can be 1 Gbit/s, and the ISP 
speedtest may agree, but most websites deliver 100 Kbit/s.
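A back-of-envelope sketch of what that BER means at line rate (ignoring framing, idle symbols, and FEC, so order-of-magnitude only):

```python
# What a 1000BASE-T-acceptable BER of 1e-10 means at full line rate.
line_rate_bps = 1_000_000_000   # 1 Gbit/s
ber = 1e-10

errored_bits_per_sec = line_rate_bps * ber
print(errored_bits_per_sec)     # 0.1 -> roughly one bit error every 10 s

# A single flipped bit usually kills the whole frame, so for 1500-byte frames:
frame_bits = 1500 * 8
frames_per_sec = line_rate_bps / frame_bits
frame_loss_rate = errored_bits_per_sec / frames_per_sec
print(frame_loss_rate)          # ~1.2e-06 frame loss at full load
```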


On 2021-05-31 21:28, Fred Baker wrote:

I would add packet loss rate. Should be zero, and if it isn’t, it
points to an underlying problem.

Sent from my iPad


On May 31, 2021, at 11:01 AM, Josh Luthman
 wrote:




I think the latency and bps is going to be the best way to measure
broadband everyone can agree on.  Is there a better way, sure, but
how can you quantify it?

Josh Luthman
24/7 Help Desk: 937-552-2340
Direct: 937-552-2343
1100 Wayne St
Suite 1337
Troy, OH 45373

On Sun, May 30, 2021 at 7:16 AM Mike Hammett 
wrote:


I think that just underscores that the bps of a connection isn't
the end-all, be-all of connection quality. Yes, I'm sure most of
us here knew that. However, many of us here still get distracted
by the bps.

If we can't get it right, how can we expect policy wonks to get it
right?

-
Mike Hammett
Intelligent Computing Solutions
http://www.ics-il.com

Midwest-IX
http://www.midwest-ix.com

-

From: "Sean Donelan" 
To: "NANOG" 
Sent: Saturday, May 29, 2021 6:25:12 PM
Subject: Call for academic researchers (Re: New minimum speed for
US broadband connections)

I thought in the 1990s, we had moved beyond using average bps
measurements
for IP congestion collapse.  During the peering battles, some ISPs
used to
claim average bps measurements showed no problems.  But in reality
there
were massive packet drops, re-transmits and congestive collapse
which
didn't show up in simple average bps graphs.

Have any academic researchers done work on what are the real-world
minimum
connection requirements for home-schooling, video teams
applications, job
interview video calls, and network background application noise?

During the last year, I've been providing volunteer pandemic home
schooling support for a few primary school teachers in a couple of

different states.  Its been tough for pupils on lifeline service
(fixed
or mobile), and some pupils were never reached. I found lifeline
students
on mobile (i.e. 3G speeds) had trouble using even audio-only group
calls,
and the exam proctoring apps often didn't work at all forcing
those
students to fail exams unnecessarily.

In my experience, anecdotal data need some academic researchers,
pupils
with at least 5 mbps (real-world measurement) upstream connections
at
home didn't seem to have those problems, even though the average
bps graph
was less than 1 mbps.


Re: 10 years from now...

2021-03-28 Thread Denys Fedoryshchenko

No need for all those fancy RF tools.
Moreover, detecting a >10 GHz transmission is not such an easy task; the 
beam is most likely narrow enough to be difficult to detect.

But it is enough, for example, for a device to visit some local website 
from a foreign IP and have a cookie set: SATELLITE_USER=xyz
Then, when the person uses a local connection and visits the same 
website, this cookie will hand law enforcement a hint.
And there are many more automated, software-based ways to detect that a
device has been connected via satellite in the past.
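The cookie trick can be sketched from the website's side. The address ranges and cookie value here are hypothetical placeholders; the point is only that two visits from the same browser, one via a foreign (satellite) egress IP and one via a local ISP, are trivially linkable server-side:

```python
# Sketch of server-side visit linking via a marker cookie.
import ipaddress

LOCAL_NETS = [ipaddress.ip_network("198.51.100.0/24")]  # stand-in for in-country ISP space

def is_local(ip: str) -> bool:
    a = ipaddress.ip_address(ip)
    return any(a in net for net in LOCAL_NETS)

def on_request(client_ip: str, cookies: dict) -> dict:
    """Returns cookies to set; flags a visitor linkable across egress paths."""
    if not is_local(client_ip):
        return {"SATELLITE_USER": "xyz"}   # mark the foreign-egress visit
    if "SATELLITE_USER" in cookies:
        print("hint: local visitor previously seen on foreign egress")
    return {}

set1 = on_request("203.0.113.7", {})       # visit via satellite egress
set2 = on_request("198.51.100.5", set1)    # later visit via local ISP
```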

Not to mention the fact that any attempt to provide services illegally 
opens a Pandora's box.
At the least, it may end with the country jamming the uplink 
frequencies, which will affect the service in the whole region.
And in the worst case, it will give a reason to use anti-satellite 
weapons.



On 2021-03-29 03:23, Eric Kuhnke wrote:

I would also concur that the likelihood of Starlink (or a Oneweb, or
Kuiper) terminal being used successfully to bypass the GFW or similar
serious Internet censorship, in an authoritarian environment, is
probably low. This is because:

a) It has to transmit in known bands.

b) It has to be located in a location with a very good, clear view of
the sky in all directions (even a single tree obstruction in one
section of the sky, relative to where the antenna is mounted will
cause packet loss/periodic issues on a starlink beta terminal right
now). Visually identifying the terminal would not be hard.

c) Portable spectrum analyzers capable of up to 30 GHz are not nearly
as expensive as they used to be. They also have much better GUIs and
visualization tools than what was available 6-10 years ago.

d) You could successfully train local law enforcement to use these
sort of portable spectrum analyzers in a one-day, 8-hour training
course.

e) The equipment would have to be smuggled into the country

f) Many people such as in a location like Iran may lack access to a
standard payment system for the services (the percentage of Iranians
with access to buy things online with visa/mastercard/american express
or similar is quite low).

There are already plenty of places in the world where if you set up a
1.2, 1.8 or 2.4 meter C, Ku or Ka band VSAT terminal using some sort
of geostationary based services, without appropriate government
"licenses", men with guns will come to dismantle it and arrest you.

I am not saying it is an impossible problem to solve, but any system
intended for that sort of purpose would have to be designed for
circumvention, and not a consumer/COTS adaptation of an off the shelf
starlink terminal.

On Sat, Mar 27, 2021 at 8:31 PM na...@jima.us  wrote:


Please don't forget that RF sources can be tracked down by even
minimally-well-equipped adversaries.

- Jima

-Original Message-
From: NANOG  On Behalf Of
scott
Sent: Saturday, March 27, 2021 19:36
To: nanog@nanog.org
Subject: Re: 10 years from now... (was: internet futures)

On 3/26/2021 9:42 AM, Michael Thomas wrote:

LEO internet providers will be coming online which might make a
difference in the corners of the world where it's hard to get

access,

but will it allow internet access to parachute in behind the Great



Firewall?



How do the Chinas of the world intend to deal with the Great

Firewall

implications?


This is what I hope will change in the next 10 years.  "Turning off
the
internet" will be harder and harder for folks suppressing others,
many
times violently, and hiding it from everyone else.  A small-ish
antenna
easily hidden would be necessary.

scott


Re: Uganda Communications Commission shutdown order

2021-01-19 Thread Denys Fedoryshchenko

On 2021-01-19 15:45, Mark Tinka wrote:

On 1/19/21 11:49, adamv0...@netconsultings.com wrote:

Hopefully starlink and other similar projects will help bring these 
numbers

down a bit.
But I think starlink has been already outlawed in some countries?


Moonshine satellite links abound in many places they shouldn't be.
It's cops & robbers stuff...

Mark.
Starlink needs an expensive modem, which is not only too costly for such 
countries and hard to import, but can also be grounds for a very long 
prison sentence.
Some nanosatellite with an amplified BLE-compatible frontend might work 
miracles. It is impossible to block the ISM band countrywide; anybody can 
climb a hill, point a mobile phone at the sky, and receive regional news. 
No custom software is even needed, just any software that can receive BLE 
data (development/debugging tools).
As a kind of PoC, the Norby cubesat's LoRa telemetry is being received 
around the world at 1000+ km distances on DIY antennas.


Re: Apple moved from CDN, and ARIN whois

2020-09-24 Thread Denys Fedoryshchenko

Mine is default whois on latest stable ubuntu.

x@x:~$ whois --version
Version 5.5.6.

Report bugs to .
x@x:~$ whois 17.0.0.0/8

#
# ARIN WHOIS data and services are subject to the Terms of Use
# available at: https://www.arin.net/resources/registry/whois/tou/
#
# If you see inaccuracies in the results, please report at
# https://www.arin.net/resources/registry/whois/inaccuracy_reporting/
#
# Copyright 1997-2020, American Registry for Internet Numbers, Ltd.
#


No match found for n + 17.0.0.0/8.


#
# ARIN WHOIS data and services are subject to the Terms of Use
# available at: https://www.arin.net/resources/registry/whois/tou/
#
# If you see inaccuracies in the results, please report at
# https://www.arin.net/resources/registry/whois/inaccuracy_reporting/
#
# Copyright 1997-2020, American Registry for Internet Numbers, Ltd.
#


On 2020-09-24 18:57, Tom Beecher wrote:

Apple started moving traffic off 3rd party CDNs to their own CDN six
years ago. This is not a new development.

Also, I see no issues with an ARIN whois lookup for that prefix.

~ % whois 17.0.0.0/8 [13]
% IANA WHOIS server
% for more information on IANA, visit http://www.iana.org
% This query returned 1 object

inetnum:  17.0.0.0 - 17.255.255.255
organisation: Apple Computer Inc.
status:   LEGACY

whois:whois.arin.net [14]

changed:  1992-07
source:   IANA

# whois.arin.net [14]

NetRange:   17.0.0.0 - 17.255.255.255
CIDR:   17.0.0.0/8 [13]
NetName:APPLE-WWNET
NetHandle:  NET-17-0-0-0-1
Parent:  ()
NetType:Direct Assignment
OriginAS:
Organization:   Apple Inc. (APPLEC-1-Z)
RegDate:1990-04-16
Updated:2017-07-08
Ref:https://rdap.arin.net/registry/ip/17.0.0.0

...
...

On Thu, Sep 24, 2020 at 11:41 AM Mike Hammett 
wrote:


Breaking from current CDN infrastructure without reasonable
accessibility to the new CDN is a problem.

-
Mike Hammett
Intelligent Computing Solutions [1]
[2] [3] [4] [5]
Midwest Internet Exchange [6]
[7] [8] [9]
The Brothers WISP [10]
[11] [12]

-

From: "Denys Fedoryshchenko" 
To: nanog@nanog.org
Sent: Thursday, September 24, 2020 9:27:07 AM
Subject: Apple moved from CDN, and ARIN whois

Hi,

Interesting, it seems AS6185 moved traffic from all CDN to their own

content network.
I noticed big spikes in traffic and complaints about slowness,
figured
out, Apple content (especially updates) are not coming from a
numerous
co-hosted CDN, but became "live",
congesting upstreams.
So much efforts on collocating endless CDN in premises to keep
things
closer to users and handle traffic surges, and yet again, some
companies
keep inventing their own.

P.S. I dont know if it is bug, but whois at ARIN return "No match
found
for n + 17.0.0.0/8 [13]" for 17.0.0.0/8 [13],
but works fine for single ip from this range, like 17.0.0.0, and
returns
info about 17.0.0.0/8 [13]



Links:
--
[1] http://www.ics-il.com/
[2] https://www.facebook.com/ICSIL
[3] https://plus.google.com/+IntelligentComputingSolutionsDeKalb
[4] https://www.linkedin.com/company/intelligent-computing-solutions
[5] https://twitter.com/ICSIL
[6] http://www.midwest-ix.com/
[7] https://www.facebook.com/mdwestix
[8] https://www.linkedin.com/company/midwest-internet-exchange
[9] https://twitter.com/mdwestix
[10] http://www.thebrotherswisp.com/
[11] https://www.facebook.com/thebrotherswisp
[12] https://www.youtube.com/channel/UCXSdfxQv7SpoRQYNyLwntZg
[13] http://17.0.0.0/8
[14] http://whois.arin.net


Apple moved from CDN, and ARIN whois

2020-09-24 Thread Denys Fedoryshchenko

Hi,

Interesting: it seems AS6185 has moved its traffic from all CDNs to its 
own content network.
I noticed big spikes in traffic and complaints about slowness, and 
figured out that Apple content (especially updates) is no longer coming 
from the numerous co-hosted CDNs but has gone "live", congesting 
upstreams.
So much effort goes into collocating endless CDN nodes on premises to 
keep things closer to users and handle traffic surges, and yet again some 
companies keep inventing their own.


P.S. I don't know if it is a bug, but whois at ARIN returns "No match 
found for n + 17.0.0.0/8" for 17.0.0.0/8, yet works fine for a single IP 
from this range, like 17.0.0.0, and then returns info about 17.0.0.0/8.
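The "n + 17.0.0.0/8" in the error suggests the stock whois client prepends ARIN-specific query flags ("n" for network, "+" for full detail) before sending, and ARIN's server fails to parse the flagged CIDR form; this is my reading of the behaviour from the echoed error text, not confirmed against the client's source. Raw whois (RFC 3912) is just the query plus CRLF on TCP port 43, so the sketch below only builds the two candidate query strings:

```python
# The two query lines a whois client might send to whois.arin.net.
def build_query(target: str, arin_flags: bool) -> bytes:
    """Whois protocol (RFC 3912): one query line terminated by CRLF."""
    prefix = "n + " if arin_flags else ""
    return (prefix + target + "\r\n").encode("ascii")

flagged = build_query("17.0.0.0/8", arin_flags=True)   # what the error echoes back
bare = build_query("17.0.0.0/8", arin_flags=False)     # the plain form
print(flagged, bare)
```

Sending the bare form by hand (e.g. piping it to port 43 with netcat) or using RDAP sidesteps the client's flag handling entirely.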


Re: 60ms cross continent

2020-07-09 Thread Denys Fedoryshchenko
Proprietary startups for M2M are in most cases a bad idea, especially if 
they require custom hardware (these operate in the VHF band).
And with such a history:

https://www.satellitetoday.com/government-military/2019/10/18/swarm-receives-fcc-approval-to-launch-150-satellites/

Here is an example: Sigfox in the UK seems to have been run by a startup, 
and that startup went defunct:

https://twitter.com/cybergibbons/status/1280892048787243008

And from my own experience: if you embed a proprietary modem in your 
design, it will be very pricey to replace if the startup fails to reach 
its profit margin.
I would rather trust technologies based on open standards, like FossaSat 
or Lacuna; they are often built with a terrestrial fallback, and you can 
in fact build your own gateways if required.
More than that, some modules, like Murata's, support both Sigfox and 
LoRaWAN, and it is technically possible to support LoRa satellites at the 
same time as well, without significant hardware mods.


On 2020-07-09 05:56, Mike Lyon wrote:

For the IoT/M2M stuff that doesn’t require huge amounts of data,
there is  a Silicon Valley startup that is deploying cube sats for
just that.

Swarm Technologies

https://www.swarm.space/

-Mike



Re: 60ms cross continent

2020-07-08 Thread Denys Fedoryshchenko

On 2020-07-08 10:05, Mark Tinka wrote:

On 7/Jul/20 21:58, Eric Kuhnke wrote:

Watching the growth of terrestrial fiber (and PTP microwave) networks
going inland from the west and east African coasts has been
interesting. There's a big old C-band earth station on the hill above
Freetown, Sierra Leone that was previously the capital's only link to
the outside world. Obsoleted for some years now thanks to the
submarine cable and landing station. I imagine they might keep things
live as a backup path with a small C-band transponder MHz commit and
SCPC modems linked to an earth station somewhere in Europe, but not
with very much capacity or monthly cost.

The landing station in Mogadishu had a similar effect.


The early years of submarine fibre in Africa always had satellite as a
backup. In fact, many satellite companies that served Africa with
Internet prior to submarine fibre were banking on subsea and 
terrestrial

failures to remain relevant. It worked between 2009 - 2013, when
terrestrial builds and operation had plenty of teething problems. Those
companies have since either disappeared or moved their services over to
fibre as well.

In that time, it has simply become impossible to have any backup
capacity on satellite anymore. There is too much active fibre bandwidth
being carried around and out of/into Africa for any satellite system to
make sense. Rather, diversifying terrestrial and submarine capacity is
the answer, and that is growing quite well.

Plenty of new cable systems that are launching this year, next year and
the next 3 years. At the moment, one would say there is sufficient
submarine capacity to keep the continent going in case of a major 
subsea

cut (like we saw in January when both the WACS and SAT-3 cables got cut
at the same time, and were out for over a month).

Satellite earth stations are not irrelevant, however. They still do get
used to provide satellite-based TV services, and can also be used for
media houses who need to hook up to their network to broadcast video
when reporting in the region (even though uploading a raw file back 
home

over the Internet is where the tech. has now gone).

Mark.


I don't think traditional satellites have much of a future as backbone, 
only as broadcast media.
Most still act as dumb RF converters, and we can't expect much more from 
them. In geostationary orbit, not only is it expensive to lift each 
additional kg, but the design also needs to be kept as simple as 
possible: it is all above the Van Allen belt, and it has to run there 
without any maintenance for 7+ years.
So if SpaceX managed to squeeze at least basic processing into their 
satellites (and it seems they did), it will improve satellite 
capabilities (and competitiveness) greatly.
The only thing I hope is that they found room for some M2M IoT 
capability, similar to ORBCOMM.




Re: 60ms cross continent

2020-07-07 Thread Denys Fedoryshchenko

On 2020-07-07 08:32, Eric Kuhnke wrote:

"no clouds" is overstating the effect somewhat. I've operated a number
of mission critical Ku band based systems that met four nines of
overall link uptime. The operational effect of a cloud that isn't an
active downpour of rain is negligible. Continual overcast of clouds is
not much of a problem at all, it's active rain rate in mm/hour and its
statistical likelihood, climate parameters of the location.

Yes, during rain fade events, current generation VSAT modems will drop
all the way down to BPSK 1/2 code rate to maintain a link, with
corresponding effect on real world throughput in kbps each direction,
but entirely dropping a link is rare.

BPSK 1/2 is quite extreme. In my case it was 32APSK 8/9 on a 36 MHz 
transponder (yes, it was quite a large antenna), ~140 Mbit/s, so 
switching to BPSK 1/2 would make it ~16 Mbit/s, which is pretty useless 
for telco purposes.
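Those numbers follow from symbol rate times bits per symbol times code rate. Assuming roughly 31.6 Mbaud in the 36 MHz transponder (a simplification that folds roll-off and framing overhead into the symbol rate; not a link-budget calculation):

```python
# Rough reconstruction of the ~140 vs ~16 Mbit/s figures for DVB-S2 modcods.
def info_rate_mbps(symbol_rate_mbaud, bits_per_symbol, code_rate):
    return symbol_rate_mbaud * bits_per_symbol * code_rate

sym = 31.6                                 # Mbaud, assumed
high = info_rate_mbps(sym, 5, 8 / 9)       # 32APSK 8/9 -> ~140 Mbit/s
low = info_rate_mbps(sym, 1, 1 / 2)        # BPSK 1/2   -> ~16 Mbit/s
print(round(high), round(low), round(high / low, 1))   # 140 16 8.9
```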
For corporate and end users, with QoS, it can be OK, but it still depends 
on the climatic zone.
Remember, it is not a downlink-only issue but an uplink one too, and it 
depends on the antenna elevation angle as well.
Even for an end user it is no fun to have 1/10 of the capacity; it most 
likely means no more video conferencing for a few days, just because 
those are rainy days.
And as Ku band often covers specific regions, it often means rainy days 
for most of the transponder's customers at once.
This is why in zones closer to the equator, with their long-lasting 
monsoons, C band was the only option; no idea about now.


Re: 60ms cross continent

2020-07-06 Thread Denys Fedoryshchenko

On 2020-07-07 06:48, Eric Kuhnke wrote:

This is why adaptive coding and modulation systems exist. Also dynamic
channel size changes and advanced computationally intensive FECs.

You don't think people working on microwave band projects above 10GHz
with dollar figures in the hundreds of millions are unaware of basic
rain fade and link budget methodology, do you?

On Mon, Jul 6, 2020, 8:44 PM Denys Fedoryshchenko
 wrote:


On 2020-07-07 05:04, joe mcguckin wrote:

Theoretically, Starlink should be faster cross country than

terrestrial

fiber.


Joe McGuckin
ViaNet Communications

j...@via.net
650-207-0372 cell
650-213-1302 office
650-969-2124 fax


When there are no clouds.


In my experience, all that ACM has achieved is that when the link becomes 
"slow" and it is raining outside, you know it will be down completely a 
few seconds later. Previously, with CCM or DVB-S without the 2, it simply 
disappeared without warning.

And yes, I have both cheap and expensive microwave links >10 GHz too.
ACM/VCM really helps if you want to live on the edge, milking each dB 
(the edge of the link budget, e.g. small antenna size, interference), and 
this is actually very important for increasing profitability, especially 
in the case of multipoint VSAT, but it is nearly useless against rain 
fade.


Re: 60ms cross continent

2020-07-06 Thread Denys Fedoryshchenko

On 2020-07-07 05:04, joe mcguckin wrote:
Theoretically, Starlink should be faster cross country than terrestrial 
fiber.



Joe McGuckin
ViaNet Communications

j...@via.net
650-207-0372 cell
650-213-1302 office
650-969-2124 fax


When there are no clouds.


Re: netflix proxy/unblocker false detection

2020-06-25 Thread Denys Fedoryshchenko

On 2020-06-26 01:32, Mike Hammett wrote:

IPv6?

-


For some reason my smart TV doesn't use IPv6 for Netflix, even though 
everything else on the same network uses it properly (even apps I 
developed for ESP8266/ESP32 are IPv6-enabled).


And what is worse:
"Netflix Kimberly
The Network settings is to check if it is in Automatic not specifically 
to search for VPN and Proxy in that area, but that is okay. Then please 
remember that IPv6 is not allowed and should be disabled. With all these 
done, please contact your Internet Service provider to get further 
clarification on this matter. I will send you an email with some other 
information to consult with . Please give me a moment to send it to 
you"


Honestly, this is a very confusing suggestion from Netflix support (I 
have native IPv6!).
Looking at 
https://www.reddit.com/r/ipv6/comments/evv7r8/ipv6_and_netflix/ there are 
definitely similar issues for other users too.


And the final nail: the local providers with OCAs who do peering don't 
provide IPv6 peering at all, and the ISP I am using is too small to 
qualify for an OCA. Since bandwidth is very expensive here, it is a no-go 
to push IPv6 and cut yourself off from OCA peering, which is cheaper than 
"international capacity".
Still, I tried; in the browser it seemed to work, but I'm not going to 
watch movies on my desktop while I have a 4K screen, and there are also 
tons of users whose routers are not IPv6-enabled (they just buy the 
cheapest brand).


Re: netflix proxy/unblocker false detection

2020-06-25 Thread Denys Fedoryshchenko

On 2020-06-25 19:20, Dave Temkin via NANOG wrote:

If you or others are not receiving a satisfactory reply from us
(Netflix) on this issue, please feel free to reach out directly and
I'll make sure it gets handled.

So far as we know, we handle CGNAT (and IPv6) appropriately. Sometimes
ranges get reassigned and the data that we have gets stale - this
happens quite often since formal runout, and so sometimes we're behind
the ball on it, but be assured that we take this seriously.

Thanks,
-Dave

This problem has been bothering operators in Lebanon for more than a 
month, and frankly they have not received any reasonable answers yet. 
The IPs have been the same for several years, no changes, but all of a 
sudden users started to get a reduced list of titles (only Netflix 
originals) and popup messages.
Maybe some of the clients are doing something bad, but it is not right 
to block legitimate clients along with them just because they are behind 
the same CGNAT IP. I know for sure that I am using an absolutely normal 
account on the highest plan, on my absolutely ordinary smart TV, for the 
last year without any changes; I am in the same IP pool, yet I have the 
problem.
And if someone is doing something bad, we (the ISP) can assist: if there 
is enough info, we move such people to a different IP pool, and if there 
is clear proof of wrongdoing we can even disconnect such clients. But we 
are getting nothing at all from support except the template "we are 
working hard on your problem", which is disrespectful, and enough is 
enough.


Today I tried it myself as a client, and the result was a four-hour 
standoff in live chat, as support tried to feed me the usual "we are 
working hard on your problem". When I stopped accepting the usual 
scripts/templates, it turned into outright mockery, with literally the 
same message template sent to me again and again, until I realised I was 
wasting my time with reasoning.
In the end, thanks to my polite persistence, I received an answer that 
is temporarily OK for me, but I hope the problem will be resolved 
properly soon, if it has reached the right person.*
At least today we got a new contact, an email address for geosupport, 
and I have some hope that it will be more helpful; at least 3 ISP 
representatives have mailed them already.
And I know for sure that I'm not going to give up until I find a proper 
solution.


*Which cost me and my cat a lot of stress today.
(I couldn't feed the cat because of the live chat timeouts, and he just 
kept meowing under the table demanding food.)


netflix proxy/unblocker false detection

2020-06-25 Thread Denys Fedoryshchenko
Has anybody noticed that Netflix just became useless due to tons of 
proxy/unblocker false detections on CGNAT ranges?
Even my home network is dual stack, and I am absolutely sure there is no 
proxy/VPN/whatsoever (though the IPv4 part is behind CGNAT) - and I got 
the "proxy/unblocker" message on my personal TV.
Many other ISP sysadmins have told me that recently this is a massive 
problem, and Netflix support is frankly inadequate and does not want to 
solve it.
I will not be surprised if they begin to actively lose users due to such 
a shamefully silly, screwed-up algorithm.
Who in their sober mind blocks all legit users because of probably one 
or two suspicious users behind the same IP range?


Re: understanding IPv6

2020-06-07 Thread Denys Fedoryshchenko

On 2020-06-07 19:02, Brandon Martin wrote:

On 6/7/20 6:01 AM, Denys Fedoryshchenko wrote:
There are very interesting and unobvious differences between IPv4 and 
IPv6, for example related to battery lifetime in embedded electronics. 
With IPv4, many devices are forced to send keepalives so that the NAT 
entry does not expire; with IPv6 this is not required, and 
bidirectional communication is possible at any time. In fact, this has 
a huge impact on the cost and battery life of IoT devices.
When I developed some IoT devices for clients, it turned out that if 
IPv6-only operation is possible, it significantly reduces the cost of 
the solution and simplifies setup.


This is difficult to understate.  "People" are continually amazed when
I show them that I can leave TCP sessions up for days at a time (with
properly configured endpoints) with absolutely zero keepalive traffic
being exchanged.

As amusingly useful as this may be, it pales in comparison to the
ability to do the same on deeply embedded devices running off small
primary cell batteries.  I've got an industrial sensor network product
where the device poll interval is upwards of 10 minutes, and even then
it only turns on its receiver.  The transmitter only gets lit up about
once a day for a "yes I'm still here" notification unless it has
something else to say.

In the end, we made it work across IPv4 by inserting an application
level gateway.  We just couldn't get reliable, transparent IPv6
full-prefix connectivity from any of the cellular telematics providers
at the time.  I don't know if this has changed.  For our application,
this was fine, but for mixed vendor "IoT" devices, it would probably
not work out well.

"Cellular telematics" is bad in general.
I had a problem during development of OTA updates, because the operator 
decided to put a TCP session data limit of about 1 MB, so people don't 
abuse the unlimited data plan.

As their DPI was not very precise, I was getting random hangs during 
data transfer. I worked around it by splitting the OTA image into 
several 256 KB chunks.
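The chunked-OTA workaround described above can be sketched like this: fetch the image in 256 KB pieces via HTTP Range requests, so no single transfer trips the operator's per-session limit. This is a minimal illustration, not the actual firmware code; the URL and sizes are placeholders.

```python
# Sketch: download a firmware image in 256 KB pieces using HTTP Range
# requests, so each piece stays under a per-TCP-session data cap.
import urllib.request

CHUNK = 256 * 1024  # 256 KB, the chunk size mentioned above

def byte_ranges(total_size, chunk=CHUNK):
    # Inclusive (start, end) byte pairs covering the whole image.
    return [(start, min(start + chunk, total_size) - 1)
            for start in range(0, total_size, chunk)]

def fetch_chunked(url, total_size):
    image = bytearray()
    for start, end in byte_ranges(total_size):
        req = urllib.request.Request(
            url, headers={"Range": f"bytes={start}-{end}"})
        # Each chunk is fetched in its own request/session.
        with urllib.request.urlopen(req) as resp:
            image += resp.read()
    return bytes(image)
```

On a real embedded target the same idea applies with whatever HTTP client the firmware uses; the key point is that each range request is small enough to complete before the DPI cuts the session.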
With IPv6 the major problems are that most operators open only some 
ports, block inbound connections, and often run a stateful firewall.
Usually customers prefer to pay extra for custom firmware that works 
properly on a particular cellular operator. Still, IPv6 works better 
than IPv4.


Re: understanding IPv6

2020-06-07 Thread Denys Fedoryshchenko

On 2020-06-07 12:35, Daniel Sterling wrote:
On Sun, Jun 7, 2020 at 2:00 AM Fred Baker  
wrote:
I'm sorry you have chosen to ignore documents like RFC 3315, which is 
where DHCP PD was first described (in 2003). It's not like anyone's 
hiding it.

So while it may be true that no one is hiding this information, in my
experience no one is shining a spot light on it either, and until I
was told about it, I was simply unable to understand IPv6.

I can easily give you reasons to understand it:

1 - We can avoid virtual hosts; when you can identify things at L3, 
things become much clearer in software. Making virtual hosts work in 
some protocols is a living hell.

2 - P2P communications are possible again.
2.1 As soon as you need to access ANYTHING at your home, your only 
choice is begging the ISP for one real IP (most often dynamic), then 
struggling with port forwarding. With IPv6 it gets really simple.
2.2 Direct P2P file transfers from friend to friend; you don't need 
cloud services anymore, or the headache of NAT pinning, etc.
2.3 Gaming.
2.4 Some industrial equipment really loves P2P VPNs; with IPv4 it is 
forced to use some "middle point" in the "cloud", which decreases 
reliability, increases latency and, most importantly, jacks up 
operational costs and requires continuous support of this "middle 
point".

3 - As the user can be easily identified, no more "captcha" stuff or 
struggling with NAT pool IP bans (very painful with gaming services, 
Twitter, Google).

4 - Dealing with LEA requests is much easier and cheaper.

There are very interesting and unobvious differences between IPv4 and 
IPv6, for example related to battery lifetime in embedded electronics. 
With IPv4, many devices are forced to send keepalives so that the NAT 
entry does not expire; with IPv6 this is not required, and bidirectional 
communication is possible at any time. In fact, this has a huge impact 
on the cost and battery life of IoT devices.
When I developed some IoT devices for clients, it turned out that if 
IPv6-only operation is possible, it significantly reduces the cost of 
the solution and simplifies setup.
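The battery argument above can be made concrete with a back-of-the-envelope count of radio wakeups: a device behind IPv4 NAT must wake periodically to refresh the NAT binding, while an IPv6 device with end-to-end reachability does not. All numbers here are illustrative assumptions, not measurements.

```python
# Back-of-the-envelope: radio wakeups per day spent purely on holding
# a NAT binding open, versus native IPv6 (no binding to refresh).
SECONDS_PER_DAY = 86400

def keepalive_wakeups_per_day(interval_s):
    """Wakeups needed per day just to keep the NAT entry alive."""
    if interval_s is None:          # IPv6: no NAT entry to refresh
        return 0
    return SECONDS_PER_DAY // interval_s

# Assumption: a typical CGNAT UDP timeout forces a keepalive every ~60 s.
nat_wakeups = keepalive_wakeups_per_day(60)    # IPv4 behind NAT
ipv6_wakeups = keepalive_wakeups_per_day(None) # native IPv6
```

With a 60-second NAT timeout that is 1440 extra radio wakeups per day doing no useful work, which is exactly the cost a sleepy sensor node cannot afford.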


But there is one huge minus. We cannot switch to IPv6 completely and are 
forced to bear the costs of IPv4 too.
In addition, many services (like Sony PlayStation) continue to ban IPv4 
addresses and don't bother to implement IPv6 (which is supreme stupidity 
and technical idiocy).


Re: RIPE NCC Executive Board election

2020-05-13 Thread Denys Fedoryshchenko

On 2020-05-13 22:53, Töma Gavrichenkov wrote:

Peace,

On Wed, May 13, 2020 at 10:43 PM Elad Cohen  wrote:

For you nothing will work.


Is it a personal attack?

IPv6 is working good for me so far ;-)

--
Töma

It works for Elad as well.
He is pushing others toward IPv4 suffering, while he is happily using 
IPv6 himself.

(Anybody can check his x-originating-ip header.)
ROTFL


Re: An appeal for more bandwidth to the Internet Archive

2020-05-13 Thread Denys Fedoryshchenko

On 2020-05-13 13:10, Bill Woodcock wrote:

On 2020-05-13 11:00, Mark Delany wrote:

On 13May20, Denys Fedoryshchenko allegedly wrote:
What about introducing some cache offloading, like CDN doing? 
(Google,

Facebook, Netflix, Akamai, etc)
Maybe some opensource communities can help as well

Surely someone has already thought thru the idea of a community CDN?
Perhaps along the lines of pool.ntp.org? What became of that
discussion?


Yes, Jeff Ubois and I have been discussing it with Brewster.

There was significant effort put into this some eighteen or twenty
years ago, backed mostly by the New Zealand government…  Called the
“Internet Capacity Development Group.”  It had a NOC and racks full of
servers in a bunch of datacenters, mostly around the Pacific Rim, but
in Amsterdam and Frankfurt as well, I think.  PCH put quite a lot of
effort into supporting it, because it’s a win for ISPs and IXPs to
have community caches with local or valuable content that they can
peer with.  There’s also a much higher hit-rate (and thus efficiency)
to caching things the community actually cares about, rather than
whatever random thing a startup is paying Akamai or Cloudflare or
whatever to push, which may never get viewed at all.  It ran well
enough for about ten years, but over the long term it was just too
complex a project to survive at scale on community support alone.  It
was trending toward more and more of the hard costs being met by PCH’s
donors, and less and less by the donors who were supporting the
content publishers, which was the goal.

The newer conversation is centered around using DAFs to support it on
behalf of non-profit content like the Archive, Wikipedia, etc., and
that conversation seems to be gaining some traction.  Unfortunately
because there are now a smaller number of really wealthy people who
need places to shove all their extra money.  Not how I’d have liked to
get here.

I think this is a simple equation:

1) The minimum cost of implementation and technical support effort.
I think this was the main problem earlier; 10 years ago there was no 
such level of software automation as is available today.
2) A win for operators.
Before, it was trivial: run squid and a simple cache. Now, with HTTPS, 
that is not possible.
3) A proud badge of supporting non-profit projects and charity work.
(Whether donations can be written off against tax etc. depends on the 
laws of your country.)


Re: An appeal for more bandwidth to the Internet Archive

2020-05-13 Thread Denys Fedoryshchenko

On 2020-05-13 11:00, Mark Delany wrote:

On 13May20, Denys Fedoryshchenko allegedly wrote:

What about introducing some cache offloading, like CDN doing? (Google,
Facebook, Netflix, Akamai, etc)



Maybe some opensource communities can help as well


Surely someone has already thought thru the idea of a community CDN?
Perhaps along the lines of pool.ntp.org? What became of that
discussion?

Maybe a TOR network could be repurposed to cover the same ground.


Mark.
I believe Tor is not efficient at all for these purposes. Privacy has 
very high overhead.


Several schemes exist:
1) The ISP announces, in some way, the subnets it wants served from its 
cache.
1.A) The Apple cache way - a plain HTTP(S) request turns a specific IP 
into the ISP cache. Not secure at all.
1.B) BGP + DNS, the most common way. The ISP peers with the CDN, and 
the CDN returns the ISP cache nodes' IPs in DNS responses.
It means, for example, that content.archive.org will have the local 
node's A/ records (btw, where is IPv6 for archive?) for customers of 
the ISP with this node, or anybody peering with it.
Huge drawback: archive.org would need to provide TLS certificates for 
web.archive.org to each local node, which is bad and probably a no-go.
Yes, I know schemes exist where the certificate is not present on the 
local node and some "precalculated" result is used instead, but that is 
too complex.
1.C) BGP + HTTP redirect. If the ISP peers with archive.org, users in 
all announced subnets get a 302 or some other HTTP redirect.
1.D) BGP + HTTP rewrite. Almost the same, and much better, but requires 
small modifications to the content engine or frontend balancers: the 
URL is rewritten within the content,
e.g. 
http://web.archive.org/web/20200511193226/https://git.kernel.org/torvalds/t/linux-5.7-rc5.tar.gz 
will appear as
http://emu.st.node.archive.org/web/20200511193226/https://git.kernel.org/torvalds/t/linux-5.7-rc5.tar.gz
or
http://archive-org.proxy.emu.st/web/20200511193226/https://git.kernel.org/torvalds/t/linux-5.7-rc5.tar.gz
In the second option the ISP can handle the SSL certificate itself.
2) BGP announcement of archive.org subnets locally. Prone to leaks, 
requires TLS certificates, etc. - no-go.


You can still modify some of these schemes and build other options that 
no one has implemented yet.
For example, do everything through JavaScript (CDNs cannot afford this, 
because of the way they work): the website generates content links 
dynamically, the client requests some /config.json file (dynamically 
generated and cached for a while), and we return the local node's URL 
to IPs that have a local node, and the default URL to everyone else.
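The /config.json idea above amounts to a longest-prefix lookup on the client's source address. A minimal sketch, where all prefixes and hostnames are made-up placeholders, not real archive.org infrastructure:

```python
# Sketch: choose the content base URL for /config.json based on whether
# the client's IP falls in a prefix that has a local cache node.
import ipaddress

# Assumption: prefixes the ISP announced, mapped to its cache node URL.
LOCAL_NODES = {
    ipaddress.ip_network("198.51.100.0/24"): "https://cache1.example-isp.net",
}
DEFAULT_URL = "https://web.archive.org"

def content_base_url(client_ip):
    addr = ipaddress.ip_address(client_ip)
    for prefix, node_url in LOCAL_NODES.items():
        if addr in prefix:
            return node_url      # client is behind an ISP with a node
    return DEFAULT_URL           # everyone else gets the origin
```

The site's JavaScript would then prepend the returned base URL when generating content links, so no TLS certificate ever has to leave the origin or the ISP's own domain.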




Re: An appeal for more bandwidth to the Internet Archive

2020-05-13 Thread Denys Fedoryshchenko
What about introducing some cache offloading, like CDNs do? (Google, 
Facebook, Netflix, Akamai, etc.)
I think it can be rolled out pretty quickly, with minimal labor, at 
least for heavy content.
Maybe some open-source communities can help as well, and the same 
scheme can then be applied to other non-profits.
But definitely something smoother like nginx caching, not a bunch of 
rsync/ssh scripts like many Linux mirrors have.


On 2020-05-13 08:25, Tim Požár wrote:

Internet Archive primary office is located at 300 Funston in San
Francisco.  This was a Christian Science church so it has the roman
columns you would expect for a church / library.  You can see it on
Google Street Views at:

https://www.google.com/maps/place/300+Funston+Ave,+San+Francisco,+CA+94118

Although they serve content out of this site, their primary site for
bandwidth is at 2512 Florida Ave, Richmond, CA.

IA does have satellite offices around the world for scanning, etc., but
the public-facing servers are located in these two locations.

Tim

On 5/12/20 9:24 PM, Terrence Koeman wrote:
Aren't they in a former church or something? I vaguely remember their 
location to be significant for some reason or another. So location may 
weigh heavily.



-- Regards,
    Terrence Koeman, PhD/MTh/BPsy
      Darkness Reigns (Holding) B.V.

Please quote relevant replies.
Spelling errors courtesy of my 'smart'phone.

*From:* David Hubbard 
*Sent:* Wednesday, 13 May 2020 06:02
*To:* nanog@nanog.org
*Subject:* Re: An appeal for more bandwidth to the Internet Archive

Could the operation be moved out of California to achieve
dramatically reduced operating costs and perhaps solve some 
problems
via cost savings vs increased donation?  I have to imagine with 
the

storage and processing requirements that the footprint and power
usage in SFO is quite costly.  I have equipment in a few 
California

colo's and it's easily 3x what I pay for similar in Nevada, before
even getting into tax abatement advantages.



On 5/12/20, 1:33 PM, "NANOG on behalf of colin johnston"
 
wrote:


     Is the increased usage due to more users or more existing 
users

having higher bandwidth at home to request faster ?
     Would be interested if IPS configured firewall used to block
out invalid traffic/spam traffic and if such traffic increased 
when

back end network capacity increased ?
     What countries are requesting the most data and does this
analysis throw up questions as to why ?
     Are there high network usage hitters which raise question as 
to

why asking for so much data time and time again and is this valid
traffic use ?

     Colin


     > On 12 May 2020, at 17:33, Tim Požár  wrote:
     >
     > Jared...
     >
     > Thanks for sharing this.  I was the first Director of
Operations from '96 to '98, at what was then Internet Archive/Alex. 
I was the network architect back then and got them their ASN and
original address space. Folks may also know, I helped start SFMIX with
Matt Peterson.
     >
     > A bit more detail in this...  Some of this I got from Jonah
Edwards who is the current Network Architect at IA.  Yes, the 
bottle
neck was the line cards.  They have upgraded and that has 
certainly

helped the bandwidth of late.
     >
     > Peering would be a big help for IA. At this point they have
two 10Gb LAG interfaces that show up on SFMIX that was turned up
last February. Looking at the last couple of weeks the 95th
percentile on this 20Gb LAG is 3 Gb.  As they just turned up on
SFMIX, they are just starting to get peers turned up there. 
Eyeball

networks that show up on SFMIX are highly encouraged to start
peering with them.  Alas, they are v4 only at this point.
     >
     > Additionally, if folks do have some fat pipes that can 
donate

bandwidth at 200 Paul, I am sure Jonah won't turn it down.
     >
     > Tim
     >
     > On 5/12/20 4:45 AM, Jared Brown wrote:
     >> Hello all!
     >> Last week the Internet Archive upgraded their bandwidth 
30%
from 47 Gbps to 62 Gbps. It was all gobbled up immediately. 
There's

a lovely solid green graph showing how usage grows vertically as
each interface comes online until it too is 100% saturated. 
Looking

at the graph legend you can see that their usage for the past 24
hours averages 49.76G on their 50G of transport.
     >> To see the pretty pictures follow the below link:
     >>

https://blog.archive.org/2020/05/11/thank-you-for-helping-us-increase-our-bandwidth/


     >> Relevant parts from the blog post:
     >> "A year ago, usage was 30Gbits/sec. At the beginning of 
this

year, we were at 40Gbits/sec, and we were handling it. ...
     >> Then Covid-19 hit and demand rocketed to 

Paid abuse desks idea? Was: Urgently need contact at Facebook of Instagram and also Omegle

2020-05-02 Thread Denys Fedoryshchenko

On 2020-05-03 01:10, Anne P. Mitchell, Esq. wrote:

There is a woman torturing animals on Omegle, she is advertising it on
her Instagram account.  Need to get this in front of the right people
to have her traced and shut down.

Please let me know if you can provide a contact for either org.

Anne

---
Anne P. Mitchell, Attorney at Law
Dean of Cyberlaw & Cybersecurity, Lincoln Law School
Advisor, Colorado Innovation Response Team Task Force
CEO/President, SuretyMail Email Reputation Certification
Policy Drafting and Review for Businesses
Author: Section 6 of the CAN-SPAM Act of 2003 (the Federal anti-spam 
law)
Legislative Consultant, GDPR, CCPA (CA) & CCDPA (CO) Compliance 
Consultant

Board of Directors, Denver Internet Exchange
Chair Emeritus, Asilomar Microcomputer Workshop
Legal Counsel: The CyberGreen Institute
Former Counsel: Mail Abuse Prevention System (MAPS)
Location: Boulder, Colorado


Many times I have come across the fact that large content operators do 
not have adequate support.
I understand that increasing support staff is not reasonable; it's too 
expensive.
So why don't they create a separate paid "route" with a list of urgent 
complaint criteria, where the complainant pays a deposit of, say, 
$100-$200/hr? If he writes nonsense, or a non-urgent case that 
qualifies for free support, he simply loses the money and pays the 
operator for the time spent.

If the case qualifies for urgent support, the money is returned to the 
person seeking support.

But the way things are going now is totally wrong.
Large content networks are turning into a trash bin and a hotbed of 
crime due to the inability to quickly control illegal content.

I wouldn't refuse such a $100 option for an ISP either: if someone 
wants to send me complaints about torrents for $100, I don't mind 
looking at them carefully for compliance with local law.
And at the same time I could respond more quickly to complaints about 
DDoS and spam, and hire dedicated staff, which is for the good too. 
That staff could also handle the regular abuse complaint channels in 
addition.



Re: mail admins? Proposal of solution

2020-05-01 Thread Denys Fedoryshchenko

On 2020-04-30 02:43, Mark Andrews wrote:

And it is still on going.  Just got 4 of these.

Mark


A technical proposal for how to solve that.

On the 1st of each month, send the monthly reminder to each subscriber 
individually, but encode the recipient address in Reply-To: in a 
slightly special way.

First, you need a catch-all alias on a special domain.
Then you calculate the Reply-To; for example, if the destination 
recipient is:
hash("secret-keyword" + u...@domain.com) = aabbccdd
it becomes:
Reply-To: aabbc...@monthly.nanog.org

Then just check all bounces received at *@monthly.nanog.org and 
unsubscribe the relevant users.


Re: Abuse Desks

2020-04-29 Thread Denys Fedoryshchenko

On 2020-04-28 18:57, Mike Hammett wrote:

I noticed over the weekend that a Fail2Ban instance's complain
function wasn't working. I fixed it. I've noticed a few things:

1) Abusix likes to return RIR abuse contact information. The vast
majority are LACNIC, but it also has kicked back a couple for APNIC
and ARIN. When I look up the compromised IP address in Abusix via the
CLI, the APNIC and ARIN ones return both ISP contact information and
RIR information. When I look them up on the RIR's whois, it just shows
the ISP abuse information. Weird, but so rare it's probably just an
anomaly. However, almost everything I see in LACNIC's region is
returned with only the LACNIC abuse information when the ones I've
checked on LACNIC's whois list valid abuse information for that
prefix. Can anyone confirm they've seen similar behavior out of
Abusix? I reached out to them, but haven't heard back.
2) Digital Ocean hits my radar far more than any other entity.
3) Azure shows up a lot less than GCP or AWS, which are about similar
to each other.
4) Around 5% respond saying it's been addressed (or why it's not in
the event of security researchers) within a couple hours. The rest I
don't know. I've had a mix of small and large entities in that
response.
5) HostGator seems to have an autoresponder (due to a 1 minute
response) that just indicates that you sent nothing actionable,
despite the report including the relevant log file entries.
6) Charter seems to have someone actually looking at it as it took
them 16 - 17 hours to respond, but they say they don't have enough
information to act on, requesting relevant log file entries...  which
were provided in the initial report and are even included in their
response. They request relevant log file entries with the date, time,
timezone, etc. all in the body in plain text, which was delivered.
7) The LACNIC region has about 1/3 of my reports.

Do these mirror others' observations with security issues and how
abuse desks respond?


Although many people here write that there is no need to worry about 
such minor things, I strongly disagree.

If someone is littering a server's ssh logs for an hour, most likely on 
the other side is:
1) A botnet-infected computer that needs to be fixed. Today ssh 
bruteforce, tomorrow spam and hosting scams, and very real financial 
losses for some people.
2) A hacker who is looking for an easy target. If he succeeds, law 
enforcement will come to you tomorrow and might waste a lot of your 
time. And sometimes it's some kid who, given an early warning, might 
not ruin his life by getting a criminal term.

And how do you deal with lazy operators who start to differentiate 
which abuse is worthy of their majestic attention?
I send proper abuse reports, and if there is no reaction to them, I 
null-route incoming SYN requests on all my servers, and sometimes I 
share the IP list with other operators who want to live on a "clean" 
internet, not in a garbage dump.
I have several resources hosted, so in the end the techies of those 
"majestic ISPs" come with tears, when their customers start to torture 
their support and sales, and beg to be unblocked - and most start to 
read the abuse mailbox.
Or they just lose customers.


Re: FlowSpec

2020-04-23 Thread Denys Fedoryshchenko

On 2020-04-23 19:12, Roland Dobbins wrote:

On 23 Apr 2020, at 22:57, Denys Fedoryshchenko wrote:


In general operators don't like flowspec


Its increasing popularity tends to belie this assertion.

Yes, you're right that avoiding overflowing the TCAM is very
important.  But as Rich notes, a growing number of operators are in
fact using flowspec within their own networks, when it's appropriate.

One operator told me why they don't provide flowspec anymore:
customers were installing rules via scripts, stacking them up,
and not removing them when they no longer needed them.
RETN solved that by limiting the number of rules a customer can install.



Smart network operators tend to do quite a bit of lab testing,
prototyping, PoCs, et. al. against the very specific combinations of
platforms/linecards/ASICs/OSes/trains/revisions before generally
deploying new features and functionality; this helps ameliorate many
concerns.
Definitely, and I know some hosting operators who provide FlowSpec 
functionality a different way - through their own web interface/API. 
This way they can do unit tests and additional verification.

But with direct BGP, things like 
https://dyn.com/blog/longer-is-not-better/
might happen if the customer is using some exotic, "nightly-build" BGP 
implementation.




Re: FlowSpec

2020-04-23 Thread Denys Fedoryshchenko

On 2020-04-23 18:13, Colton Conor wrote:

Do any of the large transit providers support FlowSpec to transit
customers / other carriers, or is that not a thing since they want to
sell DDoS protection services? FlowSpec sounds much better than RTBH
(remotely triggered blackhole), but I am not sure if  FlowSpec is
widely implemented. I see the large router manufacturers support it.


RETN.

They have extended blackholing and FlowSpec; sure, it all has costs.
I'm using both services from them and am quite satisfied.

In general operators don't like flowspec, because it is not easy to 
implement right, there are bugs, and most importantly it "eats" TCAM.
For example: 
https://blog.cloudflare.com/todays-outage-post-mortem-82515/


Re: FlowSpec

2020-04-23 Thread Denys Fedoryshchenko

On 2020-04-23 18:13, Colton Conor wrote:

Do any of the large transit providers support FlowSpec to transit
customers / other carriers, or is that not a thing since they want to
sell DDoS protection services? FlowSpec sounds much better than RTBH
(remotely triggered blackhole), but I am not sure if  FlowSpec is
widely implemented. I see the large router manufacturers support it.

Is RETN considered Tier-2?
They offer it, but it is more expensive than


Re: "Is BGP safe yet?" test

2020-04-20 Thread Denys Fedoryshchenko

On 2020-04-20 22:01, Rubens Kuhl wrote:

On Mon, Apr 20, 2020 at 3:37 PM Denys Fedoryshchenko
 wrote:


There is simple use case that will prove this page is giving false
positive
for their "name" strategy.
Any AS owner with default route only (yes it happens a lot) users
will
get:
"YOUR ISP TERRIBLE, HIS BGP NOT SAFE!".
But he have nothing to validate! His BGP is implemented safely,
its just his upstream is not validating routes.


So, that same ISP who is not validating because it has a default route
could push its providers to do validation and then be as safe as other
validating themselves ?

Rubens
Typically, those who have "default route only" are too small to be 
heard, and their "wishes" don't go beyond the first line of support.
Not to mention that it does not work at all if the upstream is a 
monopoly, especially a state monopoly, which won't move a finger for 
"optional features".

And most important, the most common answer:
Have all Tier-1s implemented it? No.
Major hosting operators, such as AWS, GCloud, etc.? No.
So...


Re: "Is BGP safe yet?" test

2020-04-20 Thread Denys Fedoryshchenko

On 2020-04-20 19:24, Tom Beecher wrote:

Technical people need to make the business case to management for RKPI
by laying out what it would cost to implement (equipment, resources,
ongoing opex), and what the savings are to the company from protecting
themselves against hijacks. By taking this step, I believe RPKI will
become viewed by non-technical decision makers as a 'Cloudflare
initiative' instead of a 'good of the internet' initiative, especially
by some companies who compete with Cloudflare in the CDN space.

I believe that will change the calculus and make it a more difficult
sell for technical people to get resources approved to make it happen.

If I am not wrong, for most routers implementing RPKI means spinning up 
a VM with an RPKI cache that needs significant tinkering?
I guess this is a blocker for many, unless some "ready made" solutions 
are offered by vendors.
Also, if an ISP configures its router and it crashes because they 
installed some "no warranty whatsoever" software from the Cloudflare 
GitHub, what happens next?
I guess this might not be welcome under support contracts.

P.S. Sorry that the previous post was top-posted. I just hit "Send" by 
mistake before I finished it.


Re: "Is BGP safe yet?" test

2020-04-20 Thread Denys Fedoryshchenko
There is a simple use case that proves this page gives a false positive 
with their "name" strategy.
Users of any AS owner with only a default route (yes, it happens a lot) 
will get:
"YOUR ISP IS TERRIBLE, ITS BGP IS NOT SAFE!".
But he has nothing to validate! His BGP is implemented safely;
it's just that his upstream is not validating routes.

On 2020-04-20 21:21, Andrey Kostin wrote:

Mark Tinka wrote on 2020-04-20 12:57:

On 20/Apr/20 18:50, Tom Beecher wrote:


I (and Ben, and a few others) are all too familiar with the ARIN 
madness

around their TAL.

Simple - we just don't accept it, which means our networks will be
unsafe against North American resources. Highly doubtful my 
organization

is that interested in how the ARIN region may or may not impact our
interest in deploying RPKI on this side of the planet, when the rest 
of

the world are less mad about it :-).


So this means that there is no single source of truth for PRKI
implementation all around the world and there are different shades,
right? As a logical conclusion, the information provided on that page
may be considered incorrect in terms of proclaiming particular network
safe or not safe, but when it's claimed (sometimes blatantly) we now
have to prove to our clients that we are not bad guys.

Kind regards,
Andrey


Re: Constant Abuse Reports / Borderline Spamming from RiskIQ

2020-04-13 Thread Denys Fedoryshchenko

On 2020-04-13 17:25, Kushal R. wrote:

From the past few months we have been receiving a constant stream of
abuse reports from a company that calls themselves RiskIQ
(RiskIQ.com).

The problem isn’t the abuse reports themselves but the way they send
them. We receive copies of the report on our sales, billing,
TECH-POC and almost every other email address of ours that is
available publicly. It doesn’t end there: they even come onto our
website and start using our support live chat, and just recently I
see that they have now started using Twitter (@riskiq_irt) to do the
same.

We understand these reports and deal with them as per our policies and
timelines but this constant spamming by them from various channels is
not appreciated.

Does anyone have a similar experience with them?


If the abuse is legitimate and arises with enviable constancy, maybe 
it is time to take fundamental measures to combat it.
I had to block port 25 by default on some operators and create a 
self-care web page for removing the block, with a requirement to read 
a legal agreement stating the consequences if the client starts 
spamming.
For those who brute-force other people's servers and credentials, a 
soft-throttling ACL had to be implemented.
And as others wrote earlier, it's better to kick out exceptionally bad 
customers than to destroy your reputation.
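The soft-throttling idea can be sketched as a per-source token bucket (a toy model; the rate and burst numbers are illustrative assumptions, not what was actually deployed):

```python
class SoftThrottle:
    """Per-source token bucket: allow a short burst of connection
    attempts, then throttle suspected brute-forcers instead of
    hard-blocking them."""

    def __init__(self, rate=1.0, burst=5):
        self.rate = rate          # tokens replenished per second
        self.burst = burst        # maximum bucket size
        self.state = {}           # src_ip -> (tokens, last_seen)

    def allow(self, src_ip, now):
        # Refill the bucket based on elapsed time, capped at burst.
        tokens, last = self.state.get(src_ip, (self.burst, now))
        tokens = min(self.burst, tokens + (now - last) * self.rate)
        if tokens >= 1.0:
            self.state[src_ip] = (tokens - 1.0, now)
            return True           # connection attempt permitted
        self.state[src_ip] = (tokens, now)
        return False              # throttled, try again later

throttle = SoftThrottle(rate=1.0, burst=5)
# First five attempts from one source pass; the sixth is throttled.
results = [throttle.allow("203.0.113.7", now=0.0) for _ in range(6)]
print(results)  # [True, True, True, True, True, False]
```

In production the caller would feed `time.monotonic()` as `now`; passing timestamps explicitly just keeps the sketch testable.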


Re: South Africa On Lockdown - Coronavirus - Update!

2020-03-24 Thread Denys Fedoryshchenko

On 2020-03-24 18:59, Randy Bush wrote:
He's a network operator. From North America, on the North American 
Network Operators mailing list. Something you are not, so please stop 
spouting your drivel on a list that has nothing to do with you.


this is not how we should act when under pressure

+1

NANOG has very often inspired me and my friends as the best example of 
discussion and of solving various problems related to the internet in 
general. And now everybody around the world has the same major problem. 
And someone says: go away, isn't this a mailing list for North 
American operators? If that is really a position supported by the 
majority (which I doubt), I would like to know.


For example, the reminder that VPN traffic balances very poorly over a 
LAG was very useful.


And likewise, this short message about South Africa was highly useful 
for my country of residence, Lebanon. We are under a full, quite severe 
lockdown, with fines, checkpoints, etc.
Several ISPs here that kept NOC support (a single person) near the 
server room, in the office, got fined too, like ordinary businesses, 
because there is no regulation that allows them to operate. This is 
terrible, because in such circumstances an ISP cannot continue to 
provide reliable service, and their customers, left without internet, 
might not keep isolation.
Therefore, I carefully read this group to see how everyone solves 
similar problems.
And this is why how things are solved in North America or other 
countries and regions can be an example for others.


Re: TCP-AMP DDoS Attack - Fake abuse reports problem

2020-02-21 Thread Denys Fedoryshchenko
Good luck responding to such SYN/ACKs when you get 10+ Gbps of them (a 
real case that happened a while ago to a colleague).
And those SYN/ACKs are not from a single location; attackers might use 
a whole /24 for SYN spoofing.


On 2020-02-21 03:34, Amir Herzberg wrote:

If I read your description correctly:

- Attacker sends spoofed TCP SYN from your IP address(es) and
different src ports, to some TCP servers (e.g. port 80)
- TCP servers respond with SYN/ACK; many servers resend the SYN/ACK,
hence amplification.
- *** your system does not respond ***
- Servers may think you're doing a SYN flood against them, since the
connection remains in SYN_RCVD, and hence complain. In fact, we don't
really know the goal of the attackers; they may actually be trying to
SYN-flood these servers, and you're just a secondary victim and not
even the target. That's also possible.

Anyway, is this the case?

If it is... may I ask, do you (or why don't you) respond to the
unsolicited SYN/ACK with RST as per the RFC?

I suspect you don't, maybe due to these packets being dropped by
FW/NAT, that's quite common. But as you should understand by now from
my text, this (non-standard) behavior is NOT recommended. The problem
may disappear if you reconfigure your FW/NAT (or host) to respond with
RST to unsolicited SYN/ACK.

As I explained above, if my conjectures are true, then OVH as well as
the remote servers may have a valid reason to consider you either as
the attacker or as an (unknowning, perhaps) accomplice.

I may be wrong - sorry if so - and would appreciate, in any case, if
you can confirm or clarify, thanks.

--
Amir Herzberg

Comcast professor of Security Innovations, University of Connecticut

Homepage: https://sites.google.com/site/amirherzberg/home

Foundations of Cyber-Security (part I: applied crypto, part II:
network-security):
https://www.researchgate.net/project/Foundations-of-Cyber-Security

On Thu, Feb 20, 2020 at 5:23 PM Octolus Development
 wrote:


A very old attack method called TCP-AMP (
https://pastebin.com/jYhWdgHn ) has been getting really popular
recently.

I've been a victim of it multiple times on many of my IPs, and every
time it happens my IPs end up blacklisted in major databases. We
also receive tons of abuse reports for "port scanning".

Example of the reports we're getting:

tcp: 51.81.XX.XX:19342 -> 209.208.XX.XX:80 (SYN_RECV)
tcp: 51.81.XX.XX:14066 -> 209.208.XX.XX:80 (SYN_RECV)

OVH is threatening to kick us off their network because we are
victims of this attack, and is requesting that we do something about
it, despite the fact that there is nothing you can do when you are
the victim of a DDoS attack.

Anyone else had any problems with these kind of attacks?

The attack basically works like this:
- The attacker scans the internet for TCP services, e.g. port 80.
- The attacker then sends spoofed requests from our IP to these TCP
services, which makes the remote service attempt to connect back to
us to complete the handshake. This clearly fails.
... which ends up with hundreds of requests to these services, and us
being reported for "port flooding".
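The "unsolicited SYN/ACK" distinction discussed in this thread can be sketched as a small state table (a toy model, not real mitigation code; in practice a stateful firewall's connection tracking does exactly this):

```python
class SynAckClassifier:
    """Track locally initiated SYNs; flag SYN/ACKs that match none of
    them. An unsolicited SYN/ACK is reflection backscatter and, per
    RFC 793, should be answered with RST rather than silently dropped."""

    def __init__(self):
        self.pending = set()   # (src_ip, src_port, dst_ip, dst_port)

    def sent_syn(self, src_ip, src_port, dst_ip, dst_port):
        self.pending.add((src_ip, src_port, dst_ip, dst_port))

    def classify_synack(self, src_ip, src_port, dst_ip, dst_port):
        # A genuine SYN/ACK arrives with the endpoints reversed
        # relative to the SYN we sent.
        key = (dst_ip, dst_port, src_ip, src_port)
        if key in self.pending:
            self.pending.discard(key)
            return "expected"
        return "unsolicited: answer with RST"

c = SynAckClassifier()
c.sent_syn("192.0.2.10", 40000, "198.51.100.5", 443)
print(c.classify_synack("198.51.100.5", 443, "192.0.2.10", 40000))
# expected
print(c.classify_synack("203.0.113.1", 80, "192.0.2.10", 19342))
# unsolicited: answer with RST
```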


Re: akamai yesterday - what in the world was that

2020-02-12 Thread Denys Fedoryshchenko

It would be really nice if the major CDNs had virtual machines small
network operators with very expensive regional transport costs could
spin up.  Hit rate would be very low, of course, but the ability to
grab some of these mass-market huge updates and serve them on the
other end of the regional transport at essentially no extra cost would
be great. I'm sure legal arrangements make that difficult, though.

+1

I think the primary reason is that many major CDN offload nodes are 
implemented in such a way that they require a significant amount of 
maintenance and support. And it doesn't matter whether the ISP is small 
or big - they will have problems, and when the company that installed 
the CDN node is huge, like Facebook or Google, cranking all the 
bureaucratic wheels to replace a silly power supply or HDD comes at a 
huge cost for them. Add to that that small ISPs often don't have 24/7 
support shifts, are less qualified for complex issues, and are more 
likely to have poor infrastructure (temperature/power/reliability) - 
which means more support expenses.
And they don't give a damn that, because of their "behemothness", they 
increase the digital inequality gap. When a large ISP or ISP-cartel 
member enters some regional market, local providers will not be able to 
compete, since they cannot afford CDN nodes due to traffic volume.


Many CDNs also run questionable "BGP as signalling only" setups with 
proprietary TCP probing/loss measurement, which often doesn't work 
reliably. Each of them is trying to reinvent the wheel, "this time not 
round, but dodecahedral". And when it fails, the ISP wastes support 
time until the issue reaches someone who understands it. In most cases 
this is a blackbox setup, and when a problem happens the ISP endlessly 
tries to explain it to outsourced support, who have very limited access 
as well and respond like robots according to their "support workflow", 
with zero feedback on common problems.


Honestly, it's time to develop an open standard for caching content on 
open CDN nodes, one that is easy to use for both content providers 
and ISPs.
For example, at one time there was a special hardcoded 
"retracker.local" server in many torrent clients, which was optionally 
used (if resolved by the ISP via a static entry in the recursor) to 
discover the nearest seeders inside the local provider's network.

http://retracker.local/
Maybe a similar scheme is possible: if the content provider wants the 
"open" CDN to work, it sets some alternative scheme, 
cdn://content.provider.com/path/file, or another kind of hint, with a 
content validity/authenticity mechanism. After that, the browser 
attempts CDN discovery, for example "content.provider.com.reservedtld", 
and pushes the request through it.

I'm sure someone will have a better idea how to do that.

As a result, installing such an "offloading node" would be just a 
matter of installing a container/VM and, if the load increases, 
increasing the number of server/VM instances.
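The discovery scheme proposed above could look roughly like this (entirely hypothetical: the reserved TLD, the hostnames and the fallback behaviour are the post's thought experiment; the resolver is injected so the sketch runs anywhere):

```python
def pick_cdn_host(hostname, resolve, reserved_tld="reservedtld"):
    """Try an ISP-local CDN discovery name first (as retracker.local
    once did for torrents); fall back to the origin if the recursor
    has no static entry for the hint name."""
    candidate = f"{hostname}.{reserved_tld}"
    try:
        addr = resolve(candidate)
        return candidate, addr             # ISP runs a local offload node
    except OSError:
        return hostname, resolve(hostname)  # no local node: use origin

# Simulated recursor: only an ISP with an offload node answers the hint.
def isp_resolver(name):
    table = {"content.provider.com": "203.0.113.80",
             "content.provider.com.reservedtld": "10.11.12.13"}
    if name not in table:
        raise OSError("NXDOMAIN")
    return table[name]

print(pick_cdn_host("content.provider.com", isp_resolver))
# ('content.provider.com.reservedtld', '10.11.12.13')
```

A real design would of course need the content validity/authenticity mechanism the post mentions before trusting the locally resolved node.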




Re: Recommended DDoS mitigation appliance?

2019-11-18 Thread Denys Fedoryshchenko

On 2019-11-18 04:23, Richard wrote:

I would say you are making some assumptions that are not fact based.
The OP is very knowledgeable and would not mince words or waste
bandwidth. Let us see what he has to say in regards to your remarks.
He will be able to make this more clear once he has read what people
have stated in other responses.

Respectfully, of course, Richard Golodner
On 11/17/19 8:12 PM, Töma Gavrichenkov wrote:


Peace,

On Mon, Nov 18, 2019, 1:49 AM Rabbi Rob Thomas 
wrote:


I am going to assume you want it to spit out 10G clean; what size
dirty traffic are you expecting it to handle?


Great question!  Let's say between 6Gbps and 8Gbps dirty.


As someone making a living as a DDoS mitigation engineer for the
last 10 years (minus 1 month) I should say your threat model is sort
of unusual.  Potential miscreants today should be assumed to have
much more to show you even on a daily basis.

Is it like you also have something filtering upstream for you, e.g.
flowspec-enabled peers?

--
Töma





AFAIK new threats (SYN+ACK amplification) can't be mitigated via 
flowspec, and they can reach 40+ Gbps easily.


Re: SFP oraganizers / storage recommendations

2019-10-30 Thread Denys Fedoryshchenko

On 2019-10-30 15:35, Matthew Huff wrote:

Any recommendations to keep track of different SFP and keep them
organized? Any storage boxes / trays designed for SFPs?

I 3D-printed some, but I only have small quantities.
Like this one: https://www.thingiverse.com/thing:2855165
There are many more designs, for example a whole tray for 50 pieces.


Re: Mx204 alternative

2019-09-02 Thread Denys Fedoryshchenko

On 2019-09-02 17:16, Saku Ytti wrote:

On Mon, 2 Sep 2019 at 16:26, Denys Fedoryshchenko
 wrote:


or some QFX, for example, Broadcom Tomahawk 32x100G switches only do
line-rate with >= 250B packets according to datasheets.


Only is peculiar term here. 100Gbps is 148Mpps, give or take 100PPM,
at 250B it's still some 50Mpps. Times 32 that's 1600Mpps, or 1.6Gpps.
Only implies it's modest compared to some other solution, what is that
solution? XEON doing ~nothing (not proper lookup even) is some couple
hundred Mpps, far cry from 1.6Gpps with ACL, QoS and L3 lookup.
I don't care about wire rate on chip with lot of ports, because
statistics. 250B average size on 32x100GE on a chip is fine to me.
250B average size on 32x100GE with 32 chips, would be horrifying.

I'm not saying XEON does not have application, I'm just saying XEON is
bps and pps expensive chip compared to almost anything out there,
however there are some application with very deep touch where it is
marketable.
Btw. technically Tomahawk and Trio are very different, Trio has tens
or hundreds of cores executing software, cores happen to have domain
specific instruction set, but still software box with lot of cores.
Tomahawk is pipeline box, having domain specific hardware and largely
not running a software (but all pipelines today are somewhat
programmable anyhow). On Trio you are mostly just time limited on what
you can do, on Tomahawk you have physical hardware restrictions on
what you can do.
Of course, they are much stronger (and cheaper in $/bps or $/pps) when 
it comes to L2/L3 lookup, basic stateless filters, and simple QoS.
But can Trio perform stateful firewall filtering for millions of flows 
plus the many Mpps that a Xeon easily handles? That's the case with 
recent DDoS attacks.
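The packet rates quoted above check out with back-of-envelope arithmetic (each Ethernet frame carries 20 extra bytes on the wire for preamble, start-of-frame delimiter and inter-frame gap):

```python
def line_rate_pps(link_bps, frame_bytes):
    """Packets per second at line rate for a given frame size.
    7B preamble + 1B SFD + 12B inter-frame gap = 20B overhead/frame."""
    wire_bits = (frame_bytes + 20) * 8
    return link_bps / wire_bits

pps_64 = line_rate_pps(100e9, 64)
pps_250 = line_rate_pps(100e9, 250)
print(f"{pps_64 / 1e6:.1f} Mpps at 64B")            # 148.8 Mpps at 64B
print(f"{pps_250 / 1e6:.1f} Mpps at 250B")          # 46.3 Mpps at 250B
print(f"{32 * pps_250 / 1e9:.2f} Gpps for 32x100GE")  # 1.48 Gpps for 32x100GE
```

This matches the "148Mpps ... some 50Mpps ... 1.6Gpps" figures to within rounding.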




Re: Mx204 alternative

2019-09-02 Thread Denys Fedoryshchenko

On 2019-09-02 15:52, Baldur Norddahl wrote:


Maturity is such a subjective word. But yes there are plenty of
options for routing protocols on a Linux. Every internet exchange is
running BGP on Linux for the route server after all.

I am not recommending a server over MX204. I think MX204 is brilliant.
It is one of the cheapest options and if that is not cheap enough,
THEN the server solution is probably what you may be looking for.

You can move a lot of traffic even with an old leftover server.
Especially if you are not concerned with moving 64 bytes DDoS at line
speed, because likely you would be down anyway in that case.

As to the OPEX I would claim there are small shops that would have an
easier time with a server, because they know how to do that. They
would have only one or two routers and learning how to run JUNOS just
for that might never happen. It all depends on what workforce you
have. Network people or server guys?

Regards

Baldur





I think these types of DDoS are much easier to solve on a server with 
XDP/eBPF than on an MX.
And much cheaper, if we are talking about the new SYN+ACK DDoS - which 
is exactly the 64-byte DDoS case. I used multiple 82599s.


From a snabbco discussion (issue #1013): "If you read Intel datasheets 
then the minimum packet rate they are guaranteeing is 64B for 10G 
(82599), 128B for 40G (XL710), and 256B for 100G (FM10K)."


But "hardware", ASIC-based routers such as the MX may be no better, and 
may even need some tuning.

https://kb.juniper.net/InfoCenter/index?page=content&id=KB33477&actp=METADATA
"On summit MX204 and MX10003 platforms, the line rate frame size is 119 
byte for 10/40GbE port and 95 byte for 100GbE port."
Or some QFX: for example, Broadcom Tomahawk 32x100G switches only do 
line rate with >= 250B packets, according to datasheets.


Re: Reflection DDoS last week

2019-08-28 Thread Denys Fedoryshchenko

On 2019-08-28 02:23, Damian Menscher via NANOG wrote:

On Wed, Aug 21, 2019 at 3:21 PM Töma Gavrichenkov 
wrote:


On Thu, Aug 22, 2019 at 12:17 AM Damian Menscher 
wrote:

Some additional questions, if you're able to answer them (off-list

is fine if there are things that can't be shared broadly):

- Was the attack referred to law enforcement?


It is being referred to now.  This would most probably get going under
the jurisdiction of the Netherlands.


Deeper analysis and discussion indicates there were several victims:
we saw brief attacks targeting some of our cloud customers with
syn-ack peaks above 125 Mpps; another provider reported seeing 275Mpps
sustained.  So presumably there are a few law enforcement
investigations under way, in various jurisdictions.


- Were any transit providers asked to trace the
source of the spoofing to either stop the attack
or facilitate the law enforcement investigation?


No, tracing the source was not deemed a high-priority task.


Fair enough.  I just didn't want to duplicate effort.

The source of the spoofing has been traced.  The responsible hosting
provider has kicked off their problem customer, and is exploring the
necessary filtering to prevent a recurrence.

If anyone sees more of this style of attack please send up a flare so
the community knows to track down the new source.

Damian


One of my clients suffered from such attacks.
And you know what the secondary harm is? A typical false-flag problem.
Even if you have a decent DDoS protection setup, it is highly likely 
that the administrators of the involuntary reflectors will not puzzle 
over what to do with this - they will simply block your subnet/ASN.
For example, an attacker spoofs a hosting operator's subnets and 
SYN-floods all the credit card processing gateways, so of course the 
legit hosting gets the SYN+ACKs.
And after this hosting suffers through blocking the SYN+ACK reflection, 
it will discover an unpleasant thing: not a single credit card 
processing gateway is reachable from its subnets.
EA Games, Rockstar and fs.com are good examples of those who just set 
static ACLs.


Re: Reflection DDoS last week

2019-08-24 Thread Denys Fedoryshchenko

Hi,

The same happened in Lebanon (the country). Similar pattern: carpet 
bombing of multiple prefixes of a specific ASN.
I suspect it is a new trend in DDoS-for-hire, and ISPs who did not 
install data-scrubbing appliances will feel severe pain from such 
attacks, since they use SYN+ACKs from legit servers.



On 2019-08-21 22:44, Töma Gavrichenkov wrote:

Peace,

Here's to confirm that the pattern reported before in NANOG was indeed
a reflection DDoS attack. On Sunday, it also hit our customer, here's
the report:

https://www.prnewswire.com/news-releases/root-cause-analysis-and-incident-report-on-the-august-ddos-attack-300905405.html

tl;dr: basically that was a rather massive reflected SYN/ACK carpet
bombing against several datacenter prefixes (no particular target was
identified).

--
Töma

On Sat, Aug 17, 2019, 1:06 AM Jim Shankland 
wrote:


Greetings,

I'm seeing slow-motion (a few per second, per IP/port pair) syn flood
attacks ostensibly originating from 3 NL-based IP blocks:
88.208.0.0/18 [1], 5.11.80.0/21 [2], and 78.140.128.0/18 [3]
("ostensibly" because ... syn flood, and BCP 38 not yet fully adopted).

Why is this syn flood different from all other syn floods? Well ...

1. Rate seems too slow to do any actual damage (is anybody really
bothered by a few bad SYN packets per second per service, at this
point?); but

2. IPs/port combinations with actual open services are being targeted
(I'm seeing ports 22, 443, and 53, just at a glance, to specific IPs
with those services running), implying somebody checked for open
services first;

3. I'm seeing this in at least 2 locations, to addresses in
different,
completely unrelated ASes, implying it may be pretty widespread.

Is anybody else seeing the same thing? Any thoughts on what's going
on?
Or should I just be ignoring this and getting on with the weekend?

Jim



Links:
--
[1] http://88.208.0.0/18
[2] http://5.11.80.0/21
[3] http://78.140.128.0/18


Re: really amazon?

2019-07-31 Thread Denys Fedoryshchenko

On 2019-07-31 23:13, Scott Christopher wrote:

Valdis Klētnieks wrote:


On Wed, 31 Jul 2019 16:36:08 -, Richard Williams via NANOG said:

>  To contact AWS SES about spam or abuse the correct email address is 
ab...@amazonaws.com

You know that, and I know that, but why doesn't the person at AWS
whose job it is to keep the ARIN info correct and up to date know that?


Because it will get spammed if publicly listed in WHOIS.


They can send an autoreply with the correct address (even as a picture, 
though From: can be spoofed, so that might be a bad idea), return an 
error message with a link to a captcha, or put a custom error in the 
reject (e.g. a web URL for submitting reports), etc.
There are so many ways to be more helpful in such critical matters.

But at least not "User not found".


Re: Colo in Africa

2019-07-18 Thread Denys Fedoryshchenko

Africa, Russia...

You can take Lebanon as an example.
The capital and another major city in this tiny country are ~40 km 
apart, and the only way to connect two points is over microwaves 
(several hops, due to mountains), through "licensed" providers (DSPs) 
who hook the points up for $10-$30/Mbps/month. Many of them have no 
support in the evenings or on weekends. And thanks to the crappy 
electricity and the economic situation, discharged batteries and 
outages in the evening/night at "licensed" DSP sites are a common case.
The laws of the country are so cool that it is even forbidden to lay 
fiber from one building to the building standing next to it, unless you 
are the government monopoly (and they don't sell fiber connectivity).


In Africa, many people have no electricity at all and cook over open 
fires; I can imagine what difficulties they have with connectivity.
The last time I worked with a team on a study about investing in 
telecom in Africa, the results discouraged us from even trying to 
engage in the telecom sector there.
I think the only ones interested in decent connectivity there are the 
mobile operators. Maybe it's worth finding connections and talking to 
them.



On 2019-07-17 20:16, Mark Tinka wrote:

On 17/Jul/19 17:04, Rod Beck wrote:


The cross continent connectivity is not going to be particularly
reliable. Prone to cuts due to wars and regional turmoil. And
imagine how it takes to repair problems at the physical layer.


I think that view is too myopic... you make it sound like Namibia,
Botswana, Zimbabwe and Zambia are at war. Just like all other
continents, unrest exists in some states, not all of them.

For the regions the OP is interested in, there isn't any conflict
there that would prevent him from deploying network.

Terrestrial connectivity is not a viable solution because:

* It costs too much.
* Different countries (even direct neighbors) do not share social,
economic or political values.
* Most of the available network is in the hands of incumbents,
typically controlled by the gubbermint.
* It costs too much.
* There isn't sufficient capacity to drive prices down when crossing
2 or more countries.
* It costs too much.
* Many markets are closed off and it's impossible to obtain licenses
to compete.
* It costs too much.
* Much of the network is old and has barely been upgraded.
* It costs too much.
* For those bold enough to build, the terrain in some parts is not a
walkover.

* It costs too much.

Mark.


Re: Cost effective time servers

2019-06-21 Thread Denys Fedoryshchenko

On 2019-06-21 14:19, Niels Bakker wrote:

* j...@west.net (Jay Hennigan) [Fri 21 Jun 2019, 05:19 CEST]:

On 6/20/19 07:39, David Bass wrote:
What are folks using these days for smaller organizations, that need 
to dole out time from an internal source?


If you want to go really cheap and don't value your time, but do value 
knowing the correct time, a GPS receiver with a USB interface and a 
Raspberry Pi would do the trick.


Have you tried this?  Because I have, and it's absolutely terrible.
GPS doesn't give you the correct time, it's supposed to give you a
good 1pps clock discipline against which you can measure your device's
internal clock and adjust accordingly for drift due to it not being
Cesium-based, influenced by room temperature etc.

You're unlikely to get the 1pps signal across USB, and even then
there'll likely be significant latencies in the USB stack compared to
the serial interface that these setups traditionally use.


I think it depends on the recipe you are using.
The Raspberry Pi has low-latency GPIO, and some receivers have a 1PPS 
output.
https://www.satsignal.eu/ntp/Raspberry-Pi-NTP.html
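For reference, a commonly cited recipe along these lines (the GPIO pin number, device paths and the time1 calibration offset are board- and receiver-specific assumptions, not universal values) feeds the NMEA stream to ntpd via gpsd's shared-memory driver and the 1PPS line via the kernel PPS GPIO driver:

```
# /boot/config.txt - enable kernel PPS on GPIO 18 (pin varies per wiring)
dtoverlay=pps-gpio,gpiopin=18

# /etc/ntp.conf - type 22 (kernel PPS) + type 28 (gpsd SHM) refclocks
server 127.127.22.0 minpoll 4 maxpoll 4    # /dev/pps0, the PPS edge
fudge  127.127.22.0 flag3 1 refid PPS      # flag3 1: kernel PPS discipline
server 127.127.28.0 minpoll 4 maxpoll 4    # NMEA time-of-day via gpsd SHM
fudge  127.127.28.0 time1 0.130 refid GPS  # offset must be calibrated locally
```

The PPS edge disciplines the clock; the NMEA sentence only numbers the seconds, which is why its time1 offset is coarse and needs local calibration.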


Re: Free Program to take netflow

2019-05-17 Thread Denys Fedoryshchenko
FastNetMon has that: 
https://fastnetmon.com/fastnetmon-advanced-traffic-persistency/

I have used it for such purposes.

On 2019-05-17 17:26, Dennis Burgess via NANOG wrote:

I am looking for a free program to take netflow and output what the
top traffic ASes to and from my AS are.   Something that we can look
at every once in a while, and/or spin up and get data then shutdown..
Just have two ports need netflow from currently.

Thanks in advance.

DENNIS BURGESS, MIKROTIK CERTIFIED TRAINER

Author of "Learn RouterOS- Second Edition"

LINK TECHNOLOGIES, INC -- Mikrotik & WISP Support Services

OFFICE: 314-735-0270  Website: http://www.linktechs.net [1]

Create Wireless Coverage's with www.towercoverage.com [2]



Links:
--
[1] http://www.linktechs.net/
[2] http://www.towercoverage.com


Re: BGP prefix filter list / BGP hijacks, different type

2019-05-17 Thread Denys Fedoryshchenko
I want to mention one additional important point in this whole 
monitoring discussion.

Right now, Google services have stopped working for one of my subnets.
Why? Because it seems someone in Russia did a BGP hijack, BUT 
exclusively for Google services (most likely via some kind of peering).
Quite by chance, I noticed that the traceroute from Google Cloud to 
this subnet goes through Russia, although my country has nothing to do 
with Russia at all - not even transit traffic through them.
Sure, I mailed noc@google, but reaching someone in big companies is not 
the easiest job; you need to search for a contact that answers. And 
good luck with realtime communications.
Also, all large CDNs have their own "internet": although they run BGP, 
they often interpret it in their own way, which no one but them can 
monitor or keep a history of. No looking glass either.
If your network is announced by a malicious party in another country, 
you will not even know about it, but your requests (actually, the 
answers from the service) will go through that party.


Re: QFX5k question

2019-03-24 Thread Denys Fedoryshchenko

On 2019-03-24 00:32, Thomas Bellman wrote:

They do have limited feature set, though.  E.g, they only look at
the first 64 octets of each packet (and that includes L2 and L2.5
headers) when deciding what to do with a packet, and can't chase
the IPv6 header chain; thus, if there is an extension header before
the TCP/UDP header, they won't know what TCP/UDP ports are used,
or even if it is TCP, UDP or something else.  Dealing with packets
exiting tunnels (MPLS, VXLAN, et.c) is also limited.

Some declared features do not work.
For example, IPIP termination through filters is claimed, but does not 
work.

https://www.juniper.net/documentation/en_US/junos/topics/task/configuration/ipip-tunnel-services-filter-qfx-series.html
Perhaps it is "not implemented yet", possibly errata; nevertheless it 
is very unpleasant when you buy equipment and this is a key function 
you need.
Therefore, if any more or less complex (uncommon) features are to be 
used, it is better to test them first.


Facebook dropping MSS on congestion

2019-03-20 Thread Denys Fedoryshchenko

Good day,

I am writing here because in a technical support ticket I would most 
likely end up with the outsourcing guys, who will write some formal 
reply and close the ticket quickly to keep their KPI high :)
I have a faint hope that someone will read and listen. It may also be 
useful to colleagues.
I have noticed over the last few months that if congestion occurs on 
the network (on a specific subnet), Facebook reduces the maximum 
segment size (MSS), even down to 256 bytes. Purely academically, on 
paper, this reduces latency.

In reality, it causes an avalanche effect.
If the ISP has CGNAT or other appliances, they will quite probably find 
that pps increases 4-5 times and hits the pps limit of their hardware. 
Additionally, the overhead of IP headers increases significantly, 
especially on IPv6, and this further aggravates the congestion.
Facebook, don't do that, please. And thank you, if you listen to 
suggestions.


Denys
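The rough arithmetic behind the 4-5x pps claim above (the transfer size and the no-options header assumption are illustrative, not measurements):

```python
import math

def packets_and_overhead(payload_bytes, mss, hdr_bytes):
    """Packets needed to move a payload at a given MSS, and the share
    of wire bytes consumed by per-packet headers."""
    pkts = math.ceil(payload_bytes / mss)
    wire = payload_bytes + pkts * hdr_bytes
    return pkts, pkts * hdr_bytes / wire

# 1 MB transfer; IPv6 (40B) + TCP (20B, no options) = 60B per packet
for mss in (1440, 256):
    pkts, tax = packets_and_overhead(1_000_000, mss, 60)
    print(f"MSS {mss}: {pkts} packets, {tax:.0%} of wire bytes are headers")
# MSS 1440: 695 packets, 4% of wire bytes are headers
# MSS 256: 3907 packets, 19% of wire bytes are headers
```

3907/695 is about 5.6x the packet count for the same payload, which is exactly what hits per-flow state and pps limits on CGNAT boxes.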


Re: Webzilla

2019-03-19 Thread Denys Fedoryshchenko

On 2019-03-18 23:24, Ronald F. Guilmette wrote:
In message 
,

Eric Kuhnke  wrote:

Looking at the AS adjacencies for Webzilla, what would prevent them 
from
disconnecting all of their US/Western Euro based peers and transits, 
and
remaining online behind a mixed selection of the largest Russian ASes? 
I do
not think that any amount of well-researched papers and appeals to 
ethical

ISPs on the NANOG mailing list will bring down those relationships.


In the early years of the 20th century, Vladimir Lenin, leader of the
Bolshevik revolution, famously quipped to his communist colleagues that
"The capitalists will sell us the rope to hang them with."  His 
prescient words have endured even the fall of the empire he founded, 
because they clarify a simple and fundamental truth -- in capitalist 
systems, short-term greed often overrides both rationality and simple 
common sense.
My hope is that it will not be so on this occasion, and that 
enlightened long-term self-interest will prevail, at least among those 
companies that are peering with any of Webzilla's ASNs.

Your speech is very reminiscent of that very same Lenin, who climbed 
onto an armored car and broadcast a speech to the "working class" about 
how bad the rich are and how to restore justice.
Only instead of rich people you have "those pesky Russians", and 
instead of the working class, "Western democracies". But let's not get 
too deep into politics.
What prevents those who consider this hosting's activities harmful 
enough to be worth blocking from filtering, and adding to their ACLs, 
the networks where a Webzilla AS is the origin?
Or make some easy-to-use lists, an API, a BGP feed, and those who 
decide to participate will null-route the offenders; then you will see 
how many people support you.
If such a list is compiled carefully, then I am sure it will interest 
many (including me). If it turns into a political tool or a tool for 
extortion... then of course not.


And generally speaking, all these speeches from armored cars end in a 
witch hunt, and almost always entire nations or categories of people 
are appointed as the witches, depending on the trends.
Who will be next? Cloudflare? Their attempt to maintain neutrality 
annoys many.
Amazon? They react very slowly to abuse.
OVH? It seems they do not care about abuse at all.
Or maybe it will become fashionable to blame legal arms sellers? Or 
online stores that sell alcohol?
Just create a cause for depeering, and a lot of people with their 
special views will demand a depeering at every opportunity.


P.S. North Korea, as far as I know, is very limited in its connectivity 
choices, and this does not prevent them from creating a bunch of 
problems.
As Max Tulyev said, they are a good example - just sprayed through 
countless proxies.


Re: bloomberg on supermicro: sky is falling

2018-10-04 Thread Denys Fedoryshchenko

On 2018-10-04 23:37, Naslund, Steve wrote:

I was wondering about where this chip tapped into all of the data and
timing lines it would need to have access to.  It would seem that
being really small creates even more problems making those
connections.  I am a little doubtful about the article.  It would seem
to me better to create a corrupted copy of something like a front side
bus chipset, memory controller or some other component that handles
data lines than create a new component that would then require a
motherboard redesign to integrate correctly.  It would seem that as
soon as the motherboard design was changed someone would wonder "hey,
where are all those data lines going?"  It would also require less
people in on the plan to corrupt or replace a device already in the
design.  All you need is a way to intercept the original chip supply
and insert your rogue devices.

On the opposite side of the argument, does anyone think it is strange
that all of the companies mentioned in the article along with the PRC
managed to get a simultaneous response back to Bloomberg.  Seems
pretty pre-calculated to me.  Or did some agency somewhere tell
everyone they better shut up about the whole thing?

Steven Naslund
Chicago IL


Just a theory: tap the same lines as the SPI flash (let's assume it is 
not QSPI), so we sit "in parallel" as a "snooper" chip.

First, it can easily snoop by listening to MISO/MOSI/CS/CLK.
When the required data pattern and block are detected during snooping, 
it remembers the offset(s) of the required data.
When the BMC later sends a request for this offset over MOSI, we 
override the BMC and force CS high (inactive), so the main flash chip 
will not answer, and we answer instead of it with different data from 
the "snooper".

Voila... instead of root:password we get root:nihao
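The sequence described above, as a toy simulation (illustrative only; the offsets, strings and single-offset override are simplifications of what real SPI traffic would look like):

```python
class SpiSnooper:
    """Toy model of the described implant: passively learn which flash
    offset holds a target pattern, then hijack later reads of that
    offset by deasserting the real chip's CS and answering itself."""

    def __init__(self, target, replacement):
        self.target = target
        self.replacement = replacement
        self.learned_offset = None

    def observe_read(self, offset, data):
        # Passive phase: watch MISO traffic for the target pattern.
        if self.target in data:
            self.learned_offset = offset

    def intercept_read(self, offset, flash):
        # Active phase: override CS for the learned offset only.
        if offset == self.learned_offset:
            return self.replacement        # snooper answers
        return flash[offset]               # real flash answers

flash = {0x1000: b"root:password", 0x2000: b"boot code"}
snoop = SpiSnooper(b"root:password", b"root:nihao")
snoop.observe_read(0x1000, flash[0x1000])   # BMC boots; snooper learns
print(snoop.intercept_read(0x1000, flash))  # b'root:nihao'
print(snoop.intercept_read(0x2000, flash))  # b'boot code'
```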


Re: bloomberg on supermicro: sky is falling

2018-10-04 Thread Denys Fedoryshchenko

On 2018-10-04 21:52, Scott Weeks wrote:

--- matlock...@gmail.com wrote:
From: Ken Matlock 

Would be remiss in our duties if we didn't also link
AWS' blog, in response to the Bloomberg article.
--


Every company and the Chinese gov't is saying "no,
Bloomberg is wrong":

https://www.bloomberg.com/news/articles/2018-10-04/the-big-hack-amazon-apple-supermicro-and-beijing-respond

Can't wait to see how this evolves...

scott
It would be better for them (AMZN, SMCI, AAPL) to prove that these 
events did not take place - in court.
Otherwise, even if this article is full of inaccuracies, judging by the 
discussions among security specialists, the scenario indicated in the 
article is quite possible.
An unpopulated SOIC-8 next to the populated SOIC-16 flash holding the 
BMC firmware is a sweet spot for a custom MCU: snooping on the flash 
SPI bus (most likely) and probably altering some data.
At the same time there would be a good precedent: if this article is 
fabricated, such journalists need to be taught a lesson.

And if they won't go to court, there is something to think about.


Re: OpenDNS CGNAT Issues

2018-09-12 Thread Denys Fedoryshchenko



On 2018-09-12 19:40, Lee Howard wrote:

On 09/11/2018 09:31 AM, Matt Hoppes wrote:

So don't CGNat?  Buy IPv4 addresses at auction?


Buy IPv4 addresses until CGN is cheaper. If a customer has to call,
and you have to assign an IPv4 address, you have to recover the cost
of that call and address.
While ((CostOfCall + CostOfAddress)*NumberOfCalls) >
(CostOfAddress*NumberOfNewCustomers):
 BuyAddresses(NumberOfNewCustomers)

Meanwhile, deploy IPv6, and move toward IPv4aaS, probably 464xlat or
MAP, but your religion may vary. That way your "CGN" is an IPv6-IPv4
translator, and that's easier than managing dual-stack.

At the very least, dual-stack your web sites now, so the rest of us
can get to it without translation.



Just regarding the IPv4 issue: this process could be somewhat automated by detecting who uses OpenDNS (via NetFlow, for example), to avoid the "CostOfCall" part.
Also, to avoid false claiming of the NAT pool, he can NAT DNS requests destined for OpenDNS to a different IP pool that cannot be claimed.


Re: Linux BNG

2018-07-15 Thread Denys Fedoryshchenko

On 2018-07-15 19:00, Raymond Burkholder wrote:

On 07/15/2018 09:03 AM, Denys Fedoryshchenko wrote:

On 2018-07-14 22:05, Baldur Norddahl wrote:
I have considered OpenFlow and might do that. We have OpenFlow 
capable
switches and I may be able to offload the work to the switch 
hardware.
But I also consider this solution harder to get right than the idea 
of

using Linux with tap devices. Also it appears the Openvswitch
implements a different flavour of OpenFlow than the hardware switch
(the hardware is limited to some fixed tables that Broadcom made up),
so I might not be able to start with the software and then move on to
hardware.


AFAIK OpenFlow is suitable for datacenters, but doesn't scale well for subscriber termination purposes.

You will run out of TCAM much sooner than you expect.


Denys, could you expand on this?  In a linux based solution (say with
OVS), TCAM is memory/software based, and in following their dev
threads, they have been optimizing flow caches continuously for
various types of flows: megaflow, tiny flows, flow quantity and
variety, caching, ...

When you mate OVS with something like a Mellanox Spectrum switch (via
SwitchDev) for hardware based forwarding, I could see certain hardware
limitations applying, but don't have first hand experience with that.

But I suppose you will see these TCAM issues on hardware only
specialized openflow switches.
Yes, definitely only on hardware switches, and the biggest issue is that it is vendor- and hardware-dependent. This means if I find the "right" switch and make your solution depend on it, and the vendor then decides to issue a new revision, or even new firmware, there is no guarantee the "unusual" setup will keep working.
That is what makes many people afraid to use it.

OpenFlow IMO is by nature built to do complex matching; for example, for a typical 12-tuple it is 750-4000 entries max in switches. But if you go to L2-only matching - which, at the moment I tested, was possible in my experience only on the PF5820 - then it can go to 80k flows.
But again, sticking to a specific vendor is not recommended.

About OVS, I didn't look at it much, as I thought it is not suitable for BNG purposes like terminating tens of thousands of users; I thought it is more about high-speed switching for tens of VMs.




On edge based translations, is hardware based forwarding actually
necessary, since there are so many software functions being performed
anyway?
IMO at the current moment 20-40G on a single box is the boundary point where packet forwarding is preferable (but still not necessary) to do in hardware, as passing packets through the whole Linux stack is really not the best option. But it works.
I'm trying to find an alternative solution, bypassing the full stack using XDP, so I can go beyond 40G.



But then, it may be conceivable that by buying a number of servers,
and load spreading across the servers will provide some resiliency and
will come in at a lower cost than putting in 'big iron' anyway.

Because then there are some additional benefits:  you can run Network
Function Virtualization at the edge and provide additional services to
customers.

+1

For IPoE/PPPoE, servers scale very well, while with "hardware" you will eventually hit a limit on how many line cards you can put in a chassis, and then you need to buy a new chassis. And I am not even talking about the countless unobvious limitations you might hit inside a chassis (in the pretty old Cisco 6500/7600, which is not EOL, it is a nightmare).

If an ISP has a big enough chassis, it must remember that it needs a second one at the same place, preferably with the same number of line cards, while with servers you are more resilient even with N+M redundancy (where M is, for example, N/4).

Also, when premium customers ask me for unusual things, it is much easier to move them to separate nodes with extended termination options, where I can implement their demands with a custom vCPE.


Re: Linux BNG

2018-07-15 Thread Denys Fedoryshchenko

On 2018-07-14 22:05, Baldur Norddahl wrote:

I have considered OpenFlow and might do that. We have OpenFlow capable
switches and I may be able to offload the work to the switch hardware.
But I also consider this solution harder to get right than the idea of
using Linux with tap devices. Also it appears the Openvswitch
implements a different flavour of OpenFlow than the hardware switch
(the hardware is limited to some fixed tables that Broadcom made up),
so I might not be able to start with the software and then move on to
hardware.
AFAIK OpenFlow is suitable for datacenters, but doesn't scale well for subscriber termination purposes.

You will run out of TCAM much sooner than you expect.
A Linux tap device has very high overhead; it suits nothing more demanding than a hotspot gateway for hundreds of users.


Regards,

Baldur


Re: Linux BNG

2018-07-15 Thread Denys Fedoryshchenko

On 2018-07-15 06:09, Jérôme Nicolle wrote:

Hi Baldur,

Le 14/07/2018 à 14:13, Baldur Norddahl a écrit :

I am investigating Linux as a BNG


As we say in France, it's like you're trying to buttfuck flies (a local saying standing for "reinventing the wheel for no practical reason").

You can say that about the whole open-source ecosystem: why bother, if *proprietary solution name* exists? It is an endless flamewar topic.



Linux' kernel networking stack is not made for this kind of job. 6WIND
or fd.io may be right on the spot, but it's still a lot of dark magic
for something that has been done over and over for the past 20 years by
most vendors.

And it just works.

Linux developers are working continuously to improve this. For example, the latest feature, XDP, can process several Mpps on a <$1000 server.
Ask yourself why Cloudflare "buttfucks flies" instead of buying from some proprietary vendor who has been filtering in hardware for 20 years:
https://blog.cloudflare.com/how-to-drop-10-million-packets/
I am experimenting with XDP as well, to terminate PPPoE, and it handles that quite well.



DHCP (implying straight L2 from the CPE to the BNG) may be an option,
but most codebases are still young. PPP, on the other hand, is
field-tested for extremely large scale deployments with most vendors.

DHCP has been here at least since RFC 2131 in March 1997.
Quite old, isn't it?
When you stick to PPPoE, you tie yourself to extra layers of encapsulation/decapsulation, and this seriously degrades performance, at the _user_ level at least. With some experience developing router firmware, I can tell you that hardware offloading of IPv4 routing (DHCP) is obviously much easier and cheaper than offloading PPPoE encap/decap plus IPv4 routing.
Also, vendors keep screwing up PPP in their routers; for example, one of them failed to process PADO properly in the newest firmware revision.
Another problem: with PPPoE you subscribe to the headache called reduced MTU, which will also give ISP support a lot of unpleasant hours.



If I were in your shoes - and I don't say I'd want to be (my BNGs are
scaled to fewer than a few thousand subscribers, with 1-4 concurrent
sessions each) - I'd stick to the plain old bitstream (PPP) model, with
a decent subscriber framework on my BNGs (I mostly use Juniper MXs, but
I also like Nokia's and Cisco's for some features).

I consult for operators from a few hundred subscribers to hundreds of thousands.
It is very rare that a Linux BNG doesn't suit them.



But let's say we would want to go forward and ditch legacy / 
proprietary

code to surf on the NFV bullshit-wave. What would you actually need ?

Linux does soft-recirculation at every encapsulation level by memory
copy. You can't scale anything with that. You need to streamline
decapsulation with 6wind's turborouter or fd.io frameworks. It'll cost
you a few thousand of man-hours to implement your first prototype.

6WIND and fd.io are great solutions, but not suitable for the mentioned task. They are mostly created for very tailor-made jobs, or even as the core of some vendor's product. Implementing your BNG on such frameworks, or on DPDK, really is reinventing the wheel, unless you will sell it or can save millions of US$ by doing so.



Let's say you got a working framework to treat subsequent headers on the
fly (because decapsulation is not really needed; what you want is just
to forward the payload, right?)... Well, you'd need to address
provisioning protocols on the same layers. Who would want to rebase a
DHCP server with alien packet forms incoming? I guess no one.
accel-ppp does all that, exactly for IPoE termination, and there is no black magic there.



Well, I could go on about this topic for hours, because I've already
spent months addressing such design issues in scalable ISP networks,
and the conclusion is:

- PPPoE is simple and proven. Its rigid structure alleviates most of
the dual-stack issues. It is well supported and largely deployed.

PPPoE has VERY serious flaws:
1) The security of PPPoE sucks big time. Anybody who runs a rogue PPPoE server in your network will create a significant headache for you, while with DHCP you at least have "DHCP snooping". DHCP snooping is supported in switches from very many vendors, while for PPPoE most of them have nothing - unless you stick each user in his own VLAN. And why PPPoX them then?
2) DHCP can send circuit information in Option 82; this is very useful for billing and very cost-efficient at the last stage of access switches.
3) Modern FTTx (GPON) solutions are built with QinQ in mind, so IPoE fits there flawlessly.


- DHCP requires hacks (in the form of undocumented options from several
vendors) to seemingly work on IPv4, but the multicast boundaries for NDP
are a PITA to handle, so no one has implemented that properly yet. So it
is to be avoided for now.
While you can do multicast (mostly for IPTV; yes, it is not easy and needs some vendor magic) on the "native" layer (DHCP), with PPP you can forget about multicast entirely.


Re: Linux BNG

2018-07-14 Thread Denys Fedoryshchenko

On 2018-07-14 15:13, Baldur Norddahl wrote:

Hello

I am investigating Linux as a BNG. The BNG (Broadband Network Gateway)
being the thing that acts as default gateway for our customers.

The setup is one VLAN per customer. Because 4095 VLANs is not enough,
we have QinQ with double VLAN tagging on the customers. The customers
can use DHCP or static configuration. DHCP packets need to be option82
tagged and forwarded to a DHCP server. Every customer has one or more
static IP addresses.

IPv4 subnets need to be shared among multiple customers to conserve
address space. We are currently using /26 IPv4 subnets with 60
customers sharing the same default gateway and netmask. In Linux terms
this means 60 VLAN interfaces per bridge interface.

However Linux is not quite ready for the task. The primary problem
being that the system does not scale to thousands of VLAN interfaces.

We do not want customers to be able to send non routed packets
directly to each other (needs proxy arp). Also customers should not be
able to steal another customers IP address. We want to hard code the
relation between IP address and VLAN tagging. This can be implemented
using ebtables, but we are unsure that it could scale to thousands of
customers.

I am considering writing a small program or kernel module. This would
create two TAP devices (tap0 and tap1). Traffic received on tap0 with
VLAN tagging, will be stripped of VLAN tagging and delivered on tap1.
Traffic received on tap1 without VLAN tagging, will be tagged
according to a lookup table using the destination IP address and then
delivered on tap0. ARP and DHCP would need some special handling.

This would be completely stateless for the IPv4 implementation. The
IPv6 implementation would be harder, because Link Local addressing
needs to be supported and that can not be stateless. The customer CPE
will make up its own Link Local address based on its MAC address and
we do not know what that is in advance.

The goal is to support traffic of minimum of 10 Gbit/s per server.
Ideally I would have a server with 4x 10 Gbit/s interfaces combined
into two 20 Gbit/s channels using bonding (LACP). One channel each for
upstream and downstream (customer facing). The upstream would be layer
3 untagged and routed traffic to our transit routers.

I am looking for comments, ideas or alternatives. Right now I am
considering what kind of CPU would be best for this. Unless I take
steps to mitigate, the workload would probably go to one CPU core only
and be limited to things like CPU cache and PCI bus bandwidth.

accel-ppp supports IPoE termination for both IPv4 and IPv6, with RADIUS and everything.
It is also built to utilize a multicore server efficiently (it might need some tuning, depending on hardware).
It should handle 2x10G easily on a decent server; at 4x10G it depends on your hardware and how well the tuning is done.




Re: New Active Exploit: memcached on port 11211 UDP & TCP being exploited for reflection attacks

2018-02-28 Thread Denys Fedoryshchenko
I want to add one software vendor who is a major contributor to DDoS attacks.
Mikrotik is still shipping its quite popular routers with a wide-open DNS recursor that doesn't even have a mechanism for ACLs. A significant part of DNS amplification attacks comes from such Mikrotik recursors.
They don't care, even now.
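A hedged sketch of how such an open recursor can be detected: send a minimal DNS query with the RD (recursion desired) bit set to UDP/53 and check whether the reply has RA (recursion available) set. Function names are illustrative, and calling is_open_recursor performs real network IO against the given address:

```python
import socket
import struct

def build_query(name, qtype=1):
    """Minimal DNS query (type A) with the RD bit set in the flags."""
    header = struct.pack(">HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)
    qname = b"".join(bytes([len(p)]) + p.encode() for p in name.split("."))
    return header + qname + b"\x00" + struct.pack(">HH", qtype, 1)

def is_open_recursor(ip, timeout=2.0):
    """True if ip answers our recursive query on UDP/53 with RA set."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    try:
        sock.sendto(build_query("example.com"), (ip, 53))
        reply, _ = sock.recvfrom(512)
        return bool(reply[3] & 0x80)    # RA bit, second flags byte
    except OSError:                     # timeout / ICMP unreachable
        return False
    finally:
        sock.close()
```

Only probe addresses you are authorized to test; this is the same mechanism the open-resolver scanning projects use at scale.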

On 2018-02-28 14:31, Job Snijders wrote:

Dear all,

Before the group takes on the pitchforks and torches and travels down 
to
the hosting providers' headquarters - let's take a step back and look 
at

the root of this issue: the memcached software has failed both the
Internet community and its own memcached users.

It is INSANE that memcached is/was[1] shipping with default settings
that make the daemon listen and respond on UDP on INADDR_ANY. Did 
nobody

take notes during the protocol wars where we were fodder for all the
CHARGEN & NTP ordnance?

The memcached software shipped with a crazy default that required no
authentication - allowing everyone to interact with the daemon. This is
an incredibly risky proposition for memcached users from a
confidentiality perspective; and on top of that the amplification 
factor

is up to 15,000x. WHAT?!

And this isn't even new information, open key/value stores have been a
security research topic for a number of years, these folks reported 
that

in the 2015/2016 time frame they observed more than 100,000 open
memcached instances: https://aperture-labs.org/pdf/safeconf16.pdf

Vendors need to ensure that a default installation of their software
does not pose an immediate liability to the user itself and those 
around

them. No software is deployed in a vacuum.

A great example of how to approach things is the behavior of the
PowerDNS DNS recursor: this recursor - out of the box - binds to only
127.0.0.1, and blocks queries from RFC 1918 space. An operator has to
consciously perform multiple steps to make it into the danger zone.
This is how things should be.

Kind regards,

Job

[1]:
https://github.com/memcached/memcached/commit/dbb7a8af90054bf4ef51f5814ef7ceb17d83d974

ps. promiscuous defaults are bad, mmkay?
Ask your BGP vendor for RFC 8212 support today! :-)


Re: Blockchain and Networking

2018-01-08 Thread Denys Fedoryshchenko

Each offsite copy of the git repository will raise an alert then, as all
hashes in the chain change from some point on.
Same principle as a blockchain.

On 2018-01-08 09:54, tglas...@earthlink.net wrote:

Uh since MITM Bill perk of custody is key.

//tsg

Sent from my HTC
- Reply message -
From: "Denys Fedoryshchenko" <de...@visp.net.lb>
To: <nanog@nanog.org>
Subject: Blockchain and Networking
Date: Mon, Jan 8, 2018 10:03

On 2018-01-08 08:59, Peter Kristolaitis wrote:

On 2018-01-08 12:52 AM, William Herrin wrote:

I'm having trouble envisioning a scenario where blockchain does that any
better than plain old PKI.
Blockchain is great at proving chain of custody, but when do you need
to do that in computer networking?

Regards,
Bill Herrin

There's probably some potential in using a blockchain for things like
configuration management.  You can authenticate who made what change
and when (granted, we can kinda-sorta do this already with the various
authentication and logging mechanisms, but the blockchain is an
immutable, permanent record inherently required for the system to work
at all).

That immutable, sequenced chain of events would let you do things like
"make my test environment look like production did last Thursday at
9AM" trivially by reading the blockchain up until that timestamp, then
running a fork of the chain for the new test environment to track its
own changes during testing.

Or when you know you did something 2 months ago for client A, and you
need your new NOC guy to now do it for client B -- the blockchain
becomes the documentation of what was done.

We can build all of the above in other ways today, of course.  But
there's certainly something to be said for a vendor-supported solution
that is inherent in the platform and requires no additional
infrastructure.  Whether or not that's worth the complexities of
managing a blockchain on networking devices is, perhaps, a whole other
discussion.   :)

- Peter

Why reinvent git? :)
Lots of tools are also available to see the diff of a git commit, who
made the commit, and what exactly was changed.
(It is possible to cryptographically sign commits as well, and yes,
they are chained by hash, like a "blockchain".)


Re: Blockchain and Networking

2018-01-07 Thread Denys Fedoryshchenko

On 2018-01-08 08:59, Peter Kristolaitis wrote:

On 2018-01-08 12:52 AM, William Herrin wrote:
I'm having trouble envisioning a scenario where blockchain does that 
any

better than plain old PKI.

Blockchain is great at proving chain of custody, but when do you need 
to do

that in computer networking?

Regards,
Bill Herrin


There's probably some potential in using a blockchain for things like
configuration management.  You can authenticate who made what change
and when (granted, we can kinda-sorta do this already with the various
authentication and logging mechanisms, but the blockchain is an
immutable, permanent record inherently required for the system to work
at all).

That immutable, sequenced chain of events would let you do things like
"make my test environment look like production did last Thursday at
9AM" trivially by reading the blockchain up until that timestamp, then
running a fork of the chain for the new test environment to track its
own changes during testing.

Or when you know you did something 2 months ago for client A, and you
need your new NOC guy to now do it for client B -- the blockchain
becomes the documentation of what was done.

We can build all of the above in other ways today, of course.  But
there's certainly something to be said for a vendor-supported solution
that is inherent in the platform and requires no additional
infrastructure.  Whether or not that's worth the complexities of
managing a blockchain on networking devices is, perhaps, a whole other
discussion.   :)

- Peter

Why reinvent git? :)
Lots of tools are also available to see the diff of a git commit, who made the commit, and what exactly was changed.
(It is possible to cryptographically sign commits as well, and yes, they are chained by hash, like a "blockchain".)
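The chaining property can be sketched with a toy, git-like hash chain: each commit id hashes over its parent's id plus the change, so rewriting any historical change alters every later id, which is exactly the divergence an offsite clone would notice. This is a simplified model, not git's actual object format:

```python
# Toy hash chain in the spirit of git commit ids: each id covers the
# parent id plus the change, so tampering with history is detectable.
import hashlib

def commit_id(parent_id, change):
    return hashlib.sha1(parent_id + change).hexdigest()

def history(changes):
    ids, parent = [], ""
    for change in changes:
        parent = commit_id(parent.encode(), change)
        ids.append(parent)
    return ids
```

Changing one early "commit" leaves everything before it identical and everything from that point on different, so any clone comparing ids spots the rewrite immediately.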


Re: Spectre/Meltdown impact on network devices

2018-01-07 Thread Denys Fedoryshchenko
AFAIK, Meltdown/Spectre require access to a proper programming language and the ability to run the attacker's own code.
If an underprivileged user can't spawn a shell on the device or run some Python code, I guess you are safe.

I guess people need to push vendors of equipment that has programming languages/shells to release statements about the possibility of vulnerability.
As a fix requires significant changes to the "memory" operation model, I doubt they will do such a thing; I guess in the best case they will restrict nonprivileged users from running their own code (if it is allowed now).
For example, even old Cisco IOS has Tcl, but logically under privilege level 15, so I assume it is safe.


On 2018-01-07 21:02, Jean | ddostest.me via NANOG wrote:

Hello,

I'm curious to hear the impact on network devices of these new hardware
flaws that everybody talks about. Yes, the Meltdown/Spectre flaws.

I know that some Arista devices seem to use AMD chips, and some say that
they might be immune to one of these vulnerabilities. Still, it's
possible to spawn a bash shell in these, and one with limited privileges
could maybe find some BGP/OSPF/SNMP passwords. Maybe it's also possible
to leak a full config.

I understand that one needs access, but still, it could be possible for
one to socially engineer a NOC user, hijack the account with limited
access, and maybe run the "exploit".

I know it's a lot of "if" and "maybe", but still I'm curious: what is
the status of big networking systems? Are they vulnerable?

Thanks

Jean


Re: Bandwidth distribution per ip

2017-12-20 Thread Denys Fedoryshchenko



Are you claiming that your bandwidth is being equally divided 1024
ways (you mentioned a /22) or just that each host (IP) is not
receiving the full bandwidth? What is the bandwidth ordered and what
is the bandwidth you're seeing per host(IP)?

Some facts from today:
Ordered capacity: 3.3Gbit
Received capacity: ~2.1Gbit when they apply the bandwidth limit
In this example they removed the limit, but you can get an approximate picture of how bandwidth is distributed (top IPs):


[x.x.x.14   ] 22433902b 36435p avg 615b 0.81%b 1.10%p 26596 Kbit/s
[x.x.x.13   ] 22715108b 34887p avg 651b 0.82%b 1.06%p 26929 Kbit/s
[x.x.x.10   ] 22741911b 31719p avg 716b 0.83%b 0.96%p 26961 Kbit/s
[x.x.x.11   ] 23874482b 34157p avg 698b 0.87%b 1.04%p 28304 Kbit/s
[x.x.x.15   ] 24393258b 29622p avg 823b 0.89%b 0.90%p 28919 Kbit/s
[x.x.x.12   ] 24715746b 33880p avg 729b 0.90%b 1.03%p 29301 Kbit/s
[x.x.x.9] 25720774b 36000p avg 714b 0.93%b 1.09%p 30492 Kbit/s
[x.x.x.8] 29599218b 40647p avg 728b 1.07%b 1.23%p 35090 Kbit/s
[y.y.y.122  ] 52015361b 52743p avg 986b 1.89%b 1.60%p 61666 Kbit/s
[y.y.y.116  ] 52161788b 55435p avg 940b 1.89%b 1.68%p 61839 Kbit/s
[y.y.y.114  ] 55409677b 56945p avg 973b 2.01%b 1.73%p 65690 Kbit/s
[y.y.y.120  ] 59971853b 59782p avg 1003b 2.18%b 1.81%p 71098 Kbit/s
[y.y.y.126  ] 60821991b 65184p avg 933b 2.21%b 1.98%p 72106 Kbit/s
[y.y.y.117  ] 61811624b 58374p avg 1058b 2.24%b 1.77%p 73279 Kbit/s
[y.y.y.113  ] 62492070b 63001p avg 991b 2.27%b 1.91%p 74086 Kbit/s
[y.y.y.119  ] 63128246b 63545p avg 993b 2.29%b 1.93%p 74840 Kbit/s
[y.y.y.121  ] 64392950b 66418p avg 969b 2.34%b 2.01%p 76340 Kbit/s
[y.y.y.115  ] 65723751b 64100p avg 1025b 2.39%b 1.94%p 77917 Kbit/s
[y.y.y.124  ] 66646572b 62637p avg 1064b 2.42%b 1.90%p 79011 Kbit/s
[y.y.y.123  ] 70332553b 68284p avg 1030b 2.55%b 2.07%p 83381 Kbit/s
[y.y.y.125  ] 70545386b 67441p avg 1046b 2.56%b 2.04%p 83634 Kbit/s
[y.y.y.118  ] 71393238b 69490p avg 1027b 2.59%b 2.11%p 84639 Kbit/s
[x.x.x.6] 123028709b 137530p avg 894b 4.47%b 4.17%p 145855 Kbit/s
[x.x.x.4] 124816100b 137221p avg 909b 4.53%b 4.16%p 147974 Kbit/s
[x.x.x.7] 126130939b 143443p avg 879b 4.58%b 4.35%p 149532 Kbit/s
[x.x.x.3] 128316371b 139360p avg 920b 4.66%b 4.22%p 152123 Kbit/s
[x.x.x.0] 132445418b 143143p avg 925b 4.81%b 4.34%p 157018 Kbit/s
[x.x.x.1] 133197094b 143713p avg 926b 4.84%b 4.35%p 157910 Kbit/s
[x.x.x.2] 135346483b 146510p avg 923b 4.91%b 4.44%p 160458 Kbit/s
[x.x.x.5] 135366769b 147766p avg 916b 4.92%b 4.48%p 160482 Kbit/s
Average packet size 834 (with ethernet header, max avg sz 1514)
Time 6748, total bytes 2753819139, total speed 3188235 Kbit/s

As you can see, the most a single IP takes is 4.48% of the bandwidth.
Also, I cannot waste IPv4 on larger pools just because of some fatally
flawed equipment/configuration.



Re: Bandwidth distribution per ip

2017-12-20 Thread Denys Fedoryshchenko

On 2017-12-20 19:12, Saku Ytti wrote:
On 20 December 2017 at 19:04, Denys Fedoryshchenko <de...@visp.net.lb> 
wrote:


As a person who is in love with embedded systems development, I just watched today a beautiful, tens-of-meters-long 199x machine, where multi-kW VFDs manage huge motors (not steppers), dragging paper synchronously and printing on it at crazy speed, and all they have is a long ~9600 baud link between a bunch of encoders and the PLC dinosaur managing all this beauty. If any of them applied a slightly wrong torque, the stretched paper would rip apart.
In fact nothing there is complex, and the technology is ancient these days.
Engineers who cannot synchronize and update a few virtual "subinstances" policing ratios based on feedback, in one tiny, expensive box, with a reasonable update rate, having modern technologies in hand - maybe they are incompetent?


As appealing it is to say everyone, present company excluded, is
incompetent, I think explanation is more complex than that. Solution
has to be economic and marketable. I think elephant flow detection and
unequal mapping of hash result to physical interface is economic and
marketable solution, but it needs that extra level of abstraction,
i.e. you cannot just backport it via software if hardware is missing
that sort of capability.

Even a person highly incompetent in such matters, like me, knows that one of the challenges of modern architectures is that an NPU consists of a large number of "processing cores", each having its own counters, and additionally multiple NPUs might handle the same customer's traffic. Under such conditions, updating _precise_ counters (for bitrate measurement, for example) is no longer as trivial as sum = a(1) + .. + a(n), due to synchronization, shared-resource access, etc.
But it is still solvable in most cases; even the dead-wrong way of running a script that changes the policer value on each "unit" once per second mostly solves the problem.
And if some NPU architecturally cannot do such a job, it means it is flawed and should be avoided for specific tasks - same as some BCM-chipset switches with a claimed 32k MACs that choke at 20k MACs because of an 8192-entry TCAM and probably an imperfect hash plus linear probing on collisions. Such a switch is surely not suitable for aggregation and termination.
Still, I run some dedicated servers in colos in the EU/US, some over 10G (bonding), with a _single_ IP per server, and I have never faced such balancing issues. That's why I am asking whether anyone has had such a carrier, which requires balancing bandwidth between many IPs with quite high precision, so as not to lose expensive bandwidth.


Re: Bandwidth distribution per ip

2017-12-20 Thread Denys Fedoryshchenko

On 2017-12-20 19:16, Blake Hudson wrote:

Denys Fedoryshchenko wrote on 12/20/2017 8:55 AM:
National operator here asks customers to distribute bandwidth between all IPs equally. E.g., I have a /22, and in it a CDN from one of the big content providers; this CDN uses only 3 IPs for ingress bandwidth, so the bandwidth distribution is not equal between IPs and I am not able to use all my bandwidth.

To me it sounds like a faulty aggregation + shaping setup. For example, I heard once that if I do policing on some models of Cisco switch on an aggregated interface with 4 member links, it will install a 25% policer on each link; if hashing is done by dst IP only, I will face exactly this issue - but that is an old and cheap model, as I recall.

Did anybody in the world face such requirements?
Can such requirements be considered legit?


Not being able to use all of your bandwidth is a common issue if you
are provided a bonded connection (aka Link Aggregation Group). For
example, you are provided a 4Gbps service over 4x1Gbps ethernet links.
Ethernet traffic is not typically balanced across links per frame,
because this could lead to out of order delivery or jitter, especially
in cases where the links have different physical characteristics.
Instead, a hashing algorithm is typically used to distribute traffic
based on flows. This results in each flow having consistent packet
order and latency characteristics, but does force a flow over a single
link, resulting in the flow being limited to the performance of that
link. In this context, flows can be based on src/dst MAC address, IP
address, or TCP/UDP port information, depending on the traffic type
(some IP traffic is not TCP/UDP and won't have a port) and equipment
type (layer 3 devices typically hash by layer 3 or 4 info).

Your operator may be able to choose an alternative hashing algorithm
that could work better for you (hashing based on layer 4 information
instead of layer 3 or 2, for example). This is highly dependent on
your provider's equipment and configuration - it may be a global
option on the equipment or may not be an option at all. Bottom line,
if you expected 4Gbps performance for each host on your network,
you're unlikely to get it on service delivered through 4x 1Gbps links.
10Gbps+ links between you and your ISP's peers would better serve
those needs (any 1Gbps bonds in the path between you and your
provider's edge are likely to exhibit the same characteristics).

--Blake


No bonding on my side; usually it is a dedicated 1G/10G/etc. link.
Also, I simulated this bandwidth for "hashability", and any layer-4-aware hashing on Cisco/Juniper provided perfectly balanced bandwidth distribution.
From my tests I can see that they clearly balance by dst IP only.




Re: Bandwidth distribution per ip

2017-12-20 Thread Denys Fedoryshchenko

On 2017-12-20 17:52, Saku Ytti wrote:
On 20 December 2017 at 16:55, Denys Fedoryshchenko <de...@visp.net.lb> 
wrote:


To me it sounds like a faulty aggregation + shaping setup. For example,
I heard once that if I do policing on some models of Cisco switch on an
aggregated interface with 4 member links, it will install a 25% policer
on each link; if hashing is done by dst IP only, I will face exactly
this issue - but that is an old and cheap model, as I recall.


One such old and cheap model is ASR9k trident, typhoon and tomahawk.

It's actually pretty demanding problem, as technically two linecards
or even just ports sitting on two different NPU might as well be
different routers, they don't have good way to communicate to each
other on BW use. So N policer being installed as N/member_count per
link is very typical.

ECMP is a fact of life, and even though few if any providers document
that they have per-flow limitations lower than the nominal rate of the
connection you purchase, these exist almost universally everywhere. The
people most likely to hit these limits are people who tunnel
everything, so that everything from their, say, 10Gbps is a single
flow from the POV of the network.
In IPv6 world at least tunnel encap end could write hash to IPv6 flow
label, allowing core potentially to balance tunneled traffic, unless
tunnel itself guarantees order.

I don't think it's fair for an operator to demand equal bandwidth per IP, but you will expose yourself to more problems if you do not have sufficient entropy. We are slowly getting solutions to this: Juniper Trio and BRCM Tomahawk3 can detect elephant flows and dynamically, unequally map hash results to physical ports to alleviate the problem.
As a person who is in love with embedded systems development, I just watched today a beautiful, tens-of-meters-long 199x machine, where multi-kW VFDs manage huge motors (not steppers), dragging paper synchronously and printing on it at crazy speed, and all they have is a long ~9600 baud link between a bunch of encoders and the PLC dinosaur managing all this beauty. If any of them applies even slightly the wrong torque, the stretched paper will rip apart.
In fact there is nothing complex there, and the technology is ancient these days.
Engineers who cannot synchronize and update the policing ratio of a few virtual "subinstances" based on feedback, in one tiny, expensive box, with a reasonable update rate, with modern technologies in hand: are they maybe incompetent?

The national operator doesn't provide IPv6; that's one of the problems.
In most cases there are no tunnels, but the imperfection still exists.
When an ISP pays ~$150k/month (bandwidth is very expensive here), and the CDN has 3 units and 3 ingress IPs while the carrier has bonding somewhere over 4 links, it simply means ~$37.5k is lost, by rough estimation; no sane person will accept that.
Sometimes one CDN unit is in maintenance and the 2 remaining could perfectly serve the demand, but because of this "balancing" issue it just goes down, as half the capacity is missing.
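The loss estimate above is simple arithmetic; a quick sketch (the assumption, per this thread, is that dst-IP-only hashing of 3 ingress IPs can fill at most 3 of the 4 bonded links):

```python
# Rough estimate of stranded capacity on a 4-link bond fed by 3 CDN
# ingress IPs, when member selection hashes on destination IP only.
monthly_cost = 150_000      # USD/month, figure from the post
members = 4                 # links in the carrier's bond
ingress_ips = 3             # CDN ingress addresses

# Best case: each distinct dst IP lands on a different member.
usable_fraction = min(ingress_ips, members) / members
stranded = monthly_cost * (1 - usable_fraction)
print(f"usable: {usable_fraction:.0%}, stranded: ${stranded:,.0f}/month")
# -> usable: 75%, stranded: $37,500/month
```

With an unlucky hash the 3 IPs can even collide onto fewer members, so 25% stranded is the optimistic bound.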

But tunnels are the truth in rare cases too; yet what can we do, when they don't have the reasonable DDoS protection tools the whole world has (not even blackholing)? Many DDoS-protection operators charge extra for more tunnel endpoints with balancing, and that balancing is not so equal either (same src+dst IP at best).

And when I did round-robin in my own solution, I noticed that besides this "bandwidth distribution" issue, the latency towards each IP is unequal, so RR created "out of order" issues for me.
Another problem: the most popular services in the region (in terms of bandwidth) are Facebook, WhatsApp and YouTube. Most of that is big, fat flows running over a few IPs. I doubt I can convince them to balance over more.


Bandwidth distribution per ip

2017-12-20 Thread Denys Fedoryshchenko
The national operator here asks customers to distribute bandwidth equally between all IPs. E.g., I have a /22, and in it a CDN from one of the big content providers; this CDN uses only 3 IPs for ingress bandwidth, so the bandwidth distribution between IPs is not equal and I am not able to use all my bandwidth.


And to me, it sounds like a faulty aggregation + shaping setup. For example, I heard once that if I do policing on some models of Cisco switch on an aggregated interface, and it has 4 member interfaces, it will install a 25% policer on each interface; if hashing is then done by dst IP only, I will face exactly such an issue. But that is an old and cheap model, as I recall.
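The interaction described here can be sketched in a few lines: split an aggregate policer 25% per member, hash by destination IP only, and a small set of destination IPs can never reach the contracted rate. The hash function and addresses below are illustrative stand-ins, not any switch's actual internals:

```python
import hashlib

members = 4                 # LAG member links
per_member_cap = 0.25       # aggregate policer installed as 25% per member

def member_for(dst_ip: str) -> int:
    # Stand-in hash for member selection; real ASIC hashing differs.
    return int(hashlib.sha256(dst_ip.encode()).hexdigest(), 16) % members

cdn_ips = ["203.0.113.1", "203.0.113.2", "203.0.113.3"]  # 3 ingress IPs
used = {member_for(ip) for ip in cdn_ips}

# Each member carrying traffic can deliver at most its 25% share, so
# 3 dst IPs cap the aggregate at 75% of the contracted rate, or less
# if two of the IPs hash onto the same member.
achievable = len(used) * per_member_cap
print(f"members in use: {len(used)}/{members}, "
      f"achievable: {achievable:.0%} of contract")
```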


Did anybody in the world face such requirements?
Can such requirements be considered legit?


Broadcom chipset limitations Was: Switch/Router

2017-12-12 Thread Denys Fedoryshchenko

What are those limitations?

I have started to be afraid of those, because I just recently hit a nasty hash collision issue with an EX4550: with its declared 32k MACs it badly choked on 28k MACs, and even the magic "mac-lookup-length" didn't help.


I'm considering the EX4600, but I am afraid of it, and that possibly other Juniper models have hash collision issues too.
(As stated in KB32325, the affected models EX3200, EX3300, EX4200, EX4500, EX4550, and EX6210 have 8192 FDB hash table entries, but that might be outdated.)
And to date the vendor hasn't made troubleshooting this problem any easier: when you upgrade from 12.x to 15.x, you get a dead RE hitting a load average of 30+, and you can't even SSH to the switch.
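Why a switch with a "declared" 32k-MAC table can choke at 28k follows from hash-bucket geometry: once a bucket's ways are full, further MACs that hash there cannot be learned, long before the table is nominally full. The geometry below (4096 buckets x 8 ways) is a hypothetical example for illustration, not the EX4550's documented layout:

```python
import random

random.seed(1)
BUCKETS, WAYS = 4096, 8     # hypothetical: 32768 total FDB entries
fill = [0] * BUCKETS

learned = failed = 0
for _ in range(28_000):                 # try to learn 28k random MACs
    mac = random.getrandbits(48)
    b = mac % BUCKETS                   # stand-in for the ASIC's hash
    if fill[b] < WAYS:
        fill[b] += 1
        learned += 1
    else:
        failed += 1                     # bucket full: MAC flooded instead

free = BUCKETS * WAYS - learned
print(f"learned {learned}, failed {failed} despite {free} free entries")
```

Vendors mitigate this with better hash functions or more ways per bucket, but a learning cliff below nominal capacity is inherent to any fixed-bucket FDB.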

On 2017-12-12 17:37, Chuck Anderson wrote:

Juniper MX150, except only single PS.  But they are cheap enough you
could buy two.  Upside: most of the MX feature set is available
because it is vMX (software) inside.

QFX5110 is more expensive but has more ports and dual PS.  Downside:
Broadcom chipset limitations.

On Tue, Dec 12, 2017 at 09:47:17AM -0500, K MEKKAOUI wrote:

Hi



I am looking for a router preferably (or switch) with the following 
specs:


1-  Carrier grade

2-  Dual power supply

3-  1RU

4-  Gig and 10Gig interfaces.

5-  Does support protocols like BGP, etc.



Any recommendation please? Your help will be appreciated.


Re: PCIe adapters supporting long distance 10GB fiber?

2017-06-20 Thread Denys Fedoryshchenko

On 2017-06-20 23:44, Baldur Norddahl wrote:
But what foundation do you have for asserting that switch hardware is any different in this regard? I can say that we are using 80 km modules in various hardware without any issues. I admittedly do not use any high power modules in servers, but I will need better evidence than this to assume that it would not work just fine.
For switches I guess it is the same story as with PoE on them: the total power budget matters.
So if you pack a whole EX4500 with 10G 80km SFP+ it might have problems as well, but for normal use, with only a few "long distance/high power" modules, it is fine; in any case the 3.3V supply rail in a switch should by design handle many SFPs, so with 48 ports it should handle by spec at least 72W of peak load.
There might be multiple power rails for groups of ports, but still, much better than just 750mA on a network card.
But that's just guessing; I have never seen circuit diagrams of good switches, or even a reference design, as it is all NDA material.




On 20 Jun 2017 at 22:24, "Denys Fedoryshchenko" <de...@visp.net.lb> wrote:


On 2017-06-20 22:07, Baldur Norddahl wrote:

I would expect anything mounted in a computer to have all the power you could want. It is not like the ATX power supply cares about an extra watt or two.

As I understand the issue, it is more about cooling than power, and is primarily a concern in high density switches where you could have 48 or more to power and cool.

SFP needs 3.3V; it might be supplied from a regulator on the card or directly from PCI Express. I can't be absolutely sure; in the reference design it is just 3.3V_NIA and then a filter. Also, the reference-design SFP power circuit defines max 750mA at 3.3V to the SFP, and that's only 2.475W.

FTLX1471D3BCV (10km SM) - up to 285mA
FTLX1671D3BCL (40km SM) - up to 400mA, and a rule of thumb in electronics is that it is better not to exceed 50% of the designed max current, as for many parts it is stated for 25C etc. operating conditions.

I expect it might work, but no one knows for how long, and how reliably, if it is not cooled very well. And the 82599 is sensitive to cooling (it is a very old card after all); as soon as cooling is insufficient, it starts to glitch.





On 20 Jun 2017 at 18:09, "Denys Fedoryshchenko" <de...@visp.net.lb> wrote:


I guess it depends on the NIC; there are many spinoffs of the Intel X520 with much weaker power supply circuitry. It might work with a good NIC, but you can't rely on it long term, IMHO.
Even a 40km Finisar SFP+ has a Pdiss of 1.5W. Also they mention: "The typical power consumption of the FTLX1672D3BTL may exceed the limit of 1.5W specified for the Power Level II transceivers"
If we talk about 80km, Pdiss is 1.8W.
While 10GBASE-LR is <1W

On 2017-06-20 16:30, Max Tulyev wrote:

We use Intel NICs with SFP+ holes. They work well with long and short range SFP+ modules, including CWDM/DWDM.

On 15.06.17 12:10, chiel wrote:

Hello,


We are deploying more and more server-based routers (based on BSD). We have now come to the point where we need 10GB uplinks on these devices, and I prefer to plug a long range 10GB fiber straight into the server without it first going into a router/switch from vendor X. It seems to me that all the 10GB PCIe cards only support either copper 10GBASE-T, short range 10GBASE-SR, or the 10 km 10GBASE-LR (and only very few). Are there any PCIe cards that support 10GBASE-ER and 10GBASE-ZR? I can't seem to find any.

Chiel





Re: PCIe adapters supporting long distance 10GB fiber?

2017-06-20 Thread Denys Fedoryshchenko

On 2017-06-20 22:07, Baldur Norddahl wrote:

I would expect anything mounted in a computer to have all the power you could want. It is not like the ATX power supply cares about an extra watt or two.

As I understand the issue, it is more about cooling than power, and is primarily a concern in high density switches where you could have 48 or more to power and cool.
SFP needs 3.3V; it might be supplied from a regulator on the card or directly from PCI Express. I can't be absolutely sure; in the reference design it is just 3.3V_NIA and then a filter. Also, the reference-design SFP power circuit defines max 750mA at 3.3V to the SFP, and that's only 2.475W.

FTLX1471D3BCV (10km SM) - up to 285mA
FTLX1671D3BCL (40km SM) - up to 400mA, and a rule of thumb in electronics is that it is better not to exceed 50% of the designed max current, as for many parts it is stated for 25C etc. operating conditions.

I expect it might work, but no one knows for how long, and how reliably, if it is not cooled very well. And the 82599 is sensitive to cooling (it is a very old card after all); as soon as cooling is insufficient, it starts to glitch.
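The margin being argued about is easy to put in numbers; a sketch using the figures quoted in this message (the 50% figure is the stated rule of thumb, not a datasheet limit):

```python
RAIL_V = 3.3            # SFP supply rail voltage
CARD_MAX_A = 0.750      # reference-design max current to the SFP cage

card_budget_w = RAIL_V * CARD_MAX_A     # 2.475 W available per cage
for name, amps in [("FTLX1471D3BCV (10km SM)", 0.285),
                   ("FTLX1671D3BCL (40km SM)", 0.400)]:
    frac = amps / CARD_MAX_A            # fraction of the design max drawn
    verdict = "within" if frac <= 0.5 else "exceeds"
    print(f"{name}: {amps * RAIL_V:.2f} W, {frac:.0%} of max "
          f"({verdict} the 50% rule of thumb)")
print(f"per-cage budget: {card_budget_w:.3f} W")
```

The 40km module already exceeds the 50% derating guideline on a 750mA reference-design cage, which is the whole reliability concern here.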





On 20 Jun 2017 at 18:09, "Denys Fedoryshchenko" <de...@visp.net.lb> wrote:


I guess it depends on the NIC; there are many spinoffs of the Intel X520 with much weaker power supply circuitry. It might work with a good NIC, but you can't rely on it long term, IMHO.
Even a 40km Finisar SFP+ has a Pdiss of 1.5W. Also they mention: "The typical power consumption of the FTLX1672D3BTL may exceed the limit of 1.5W specified for the Power Level II transceivers"
If we talk about 80km, Pdiss is 1.8W.
While 10GBASE-LR is <1W

On 2017-06-20 16:30, Max Tulyev wrote:


We use Intel NICs with SFP+ holes. They work well with long and short range SFP+ modules, including CWDM/DWDM.

On 15.06.17 12:10, chiel wrote:


Hello,

We are deploying more and more server-based routers (based on BSD). We have now come to the point where we need 10GB uplinks on these devices, and I prefer to plug a long range 10GB fiber straight into the server without it first going into a router/switch from vendor X. It seems to me that all the 10GB PCIe cards only support either copper 10GBASE-T, short range 10GBASE-SR, or the 10 km 10GBASE-LR (and only very few). Are there any PCIe cards that support 10GBASE-ER and 10GBASE-ZR? I can't seem to find any.

Chiel




Re: PCIe adapters supporting long distance 10GB fiber?

2017-06-20 Thread Denys Fedoryshchenko

On 2017-06-20 18:59, Hunter Fuller wrote:

On Tue, Jun 20, 2017 at 10:29 AM Chris Adams  wrote:


For Linux at least, the standard driver includes a load-time option to disable the vendor check.  Just add "options ixgbe allow_unsupported_sfp=1" to your module config and it works just fine.



For anyone who may be going down this road: if you have a two-port Intel NIC, I discovered you have to pass "allow_unsupported_sfp=1,1" or it will only apply to the first port. Hope that helps someone.

Also, it won't work with the X710; you need an NVRAM hack for it, as the SFPs are checked in firmware.
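Putting the two hints together, the persistent form of this workaround might look like the snippet below (the file path and the per-port value list follow common modprobe conventions; treat it as an illustrative sketch, not vendor guidance):

```
# /etc/modprobe.d/ixgbe.conf -- one value per port on a dual-port card;
# reload the ixgbe module (or reboot) for the option to take effect.
options ixgbe allow_unsupported_sfp=1,1
```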


Re: PCIe adapters supporting long distance 10GB fiber?

2017-06-20 Thread Denys Fedoryshchenko
I guess it depends on the NIC; there are many spinoffs of the Intel X520 with much weaker power supply circuitry. It might work with a good NIC, but you can't rely on it long term, IMHO. Even a 40km Finisar SFP+ has a Pdiss of 1.5W. Also they mention: "The typical power consumption of the FTLX1672D3BTL may exceed the limit of 1.5W specified for the Power Level II transceivers"

If we talk about 80km, Pdiss is 1.8W.
While 10GBASE-LR is <1W

On 2017-06-20 16:30, Max Tulyev wrote:

We use Intel NICs with SFP+ holes. They work well with long and short range SFP+ modules, including CWDM/DWDM.

On 15.06.17 12:10, chiel wrote:

Hello,

We are deploying more and more server-based routers (based on BSD). We have now come to the point where we need 10GB uplinks on these devices, and I prefer to plug a long range 10GB fiber straight into the server without it first going into a router/switch from vendor X. It seems to me that all the 10GB PCIe cards only support either copper 10GBASE-T, short range 10GBASE-SR, or the 10 km 10GBASE-LR (and only very few). Are there any PCIe cards that support 10GBASE-ER and 10GBASE-ZR? I can't seem to find any.

Chiel



Re: Russian diplomats lingering near fiber optic cables

2017-06-02 Thread Denys Fedoryshchenko

On 2017-06-02 12:19, Ben McGinnes wrote:

On Fri, Jun 02, 2017 at 10:28:38AM +0300, Denys Fedoryshchenko wrote:


American diplomats are also doing all sorts of nasty stuff in Russia (and not only there),


Yes they have and for a very long time.


but that's a concern for the equivalents of the FBI/NSA/etc., not for operators' public discussion venues, unless it really affects operators somehow.
Just amazing how NANOG slipped into pure politics.


The network(s) have been political for a very long time and will only
become more so as time passes.  Remember, the engineers wishing for
the purity of technical discussion are usually the same ones crying
that, "information wants to be free."

https://www.nanog.org/list
6. Postings of political, philosophical, and legal nature are 
prohibited.

It is quite clear.

I do not deny that networks are sometimes deeply affected by political factors, but the current discussion is pure FUD, based on a very questionable MSM source.
IMHO no sane person wants to receive this trash in his mailbox on a list that is supposed to be politics-free, as there is enough of this garbage on the internet.
I discuss such things too, when I'm in the mood for it, but in designated places only.




Well, no matter.  You want purely technical, okay, let's start with
authorised mail hosts.

You need to add 144.76.183.226/32 to the SPF record for visp.net.lb,
which is currently triggering softfails everywhere.  It might be wise
to explicitly state whether or not it is just 144.76.183.226/32 in the
SPF record for nuclearcat.com given the deny all instruction for that
domain.
Thanks for the hint; fixed. I use this domain only for old mailing list subscriptions, so I missed that after I migrated SMTP to my private server.
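For reference, the shape of the records under discussion (illustrative zone data only; apart from the 144.76.183.226 host mentioned above, the mechanisms shown are assumptions, not the domains' real published policies):

```
; visp.net.lb: authorize the new outbound SMTP host in SPF
visp.net.lb.     IN TXT "v=spf1 mx ip4:144.76.183.226 ~all"
; nuclearcat.com: explicitly list the host ahead of its deny-all policy
nuclearcat.com.  IN TXT "v=spf1 ip4:144.76.183.226 -all"
```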


Re: Russian diplomats lingering near fiber optic cables

2017-06-02 Thread Denys Fedoryshchenko

On 2017-06-02 05:42, Ben McGinnes wrote:

On Thu, Jun 01, 2017 at 07:15:12PM -0700, Joe Hamelin wrote:


The Seattle Russian Embassy is in the Westin Building just 4 floors
above the fiber meet-me-room and five floors above the NRO tap room.
They use to come ask us (an ISP) for IT help back in '96 when they
would drag an icon too far off the screen in Windows 3.11. We were
on the same floor.


So when Flynn & Friends in the Trump Transition Team were trying to
establish that back channel link to Vladimir Putin they should've just
wandered into the nearest colo facility ... okay, then.  I guess they
did it the other way because they wanted the trench coats.


Regards,
Ben
American diplomats are also doing all sorts of nasty stuff in Russia (and not only there), but that's a concern for the equivalents of the FBI/NSA/etc., not for operators' public discussion venues, unless it really affects operators somehow.
Just amazing how NANOG slipped into pure politics.


Re: difference with caching when connected to telia internet

2017-03-17 Thread Denys Fedoryshchenko

On 2017-03-17 18:04, Aaron Gould wrote:
Thanks, but James, you would not believe how rapidly the traffic to my local caches dropped off, *and* on the same day I brought up my new Telia internet connection.  ...and furthermore, my internet inbound traffic went *through the roof*

-Aaron
Most probably they silently re-advertise customers' BGP announcements to their caches.
Maybe just announce less-specific prefixes to Telia, and more-specific ones to the caches?

Also, GGC at least has BGP communities for priorities.


Re: Recent NTP pool traffic increase (update)

2016-12-21 Thread Denys Fedoryshchenko

Hello,

I'm not sure I should continue to CC nanog; if anyone is interested in being CC'd on further updates to this story, please let me know.

TP-Link is not involved; that was a misunderstanding or a wrong customer report.

Tenda routers: I believe most of the cheap models are affected by this problem. On the ISPs I have access to, I see too many of them sending requests to 133.100.9.2 (and, if it is unreachable, repeating every 10 seconds); this particular IP seems to be hardcoded there. I am sure that as soon as your server goes down, the way they are coded, all these routers will DDoS this particular IP, repeating NTP queries very frequently without any backoff/protection mechanism.
The particular model I tested is the W308R / Wireless N300 Home Router, but I believe many models are affected.

Firmware: System Version: 5.0.7.53_en hw version : v1.0

Another possible vendor is LB-Link, but I don't yet have any info from customers who own them.


On 2016-12-21 11:00, FUJIMURA Sho wrote:

Hello.

I'm Sho FUJIMURA.
Thank you for your information.
I operate the public NTP Services as 133.100.9.2 and 133.100.11.8.
I'd like to reduce the traffic because I have trouble with too much
traffic recently.
So, I'm interested in the root of the the problem.
If possible, would you please tell me the model numbers of Tenda and
TP-Link??

--
Sho FUJIMURA
Information Technology Center, Fukuoka University.
8-19-1, Nanakuma, Jyonan-ku, Fukuoka, 8140180, Japan

fujim...@fukuoka-u.ac.jp

2016-12-20 5:33 GMT+09:00 Denys Fedoryshchenko <de...@visp.net.lb>:


I'm not sure if this issue is relevant to the discussed topic. Tenda routers have been on the market here for a while, and I think I only noticed this issue now because the NTP servers they use, supposedly for a healthcheck, went down (or the NTP owners blocked the ISPs I support, because of such routers).

At least after checking numerous users, I believe Tenda hardcoded those NTP IPs. What worsens the issue is that in Lebanon, several times per day (for example at 18:00), there is a short electricity cutoff, and the majority of users' routers will reboot and then reconnect, so it will look like a countrywide spike in NTP traffic.

I also checked these NTP IPs in DNS responses for 10 minutes; none of the thousands of users tried to resolve any name to them over any DNS server, so I conclude they are hardcoded somewhere in the firmware.
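The 48-byte "NTPv3, Client" packets seen in the dumps below are trivial to reproduce; a minimal sketch of such a mode-3 request (field layout per the standard NTP packet format; the zeroed timestamps mirror the stratum-0/poll-0 requests captured, though a well-behaved client would fill the transmit timestamp):

```python
import struct

def ntpv3_client_packet() -> bytes:
    # First byte packs LI=0 (2 bits), VN=3 (3 bits), Mode=3 = client (3 bits).
    first = (0 << 6) | (3 << 3) | 3             # 0x1b
    # Then stratum, poll, precision, and 11 zeroed 32-bit words: root delay,
    # root dispersion, reference ID, and four 64-bit timestamps.
    return struct.pack("!BBBb11I", first, 0, 0, 0, *([0] * 11))

pkt = ntpv3_client_packet()
assert len(pkt) == 48          # matches "NTPv3, Client, length 48"
assert pkt[0] == 0x1b
```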

Here is the traffic of a Tenda router after reconnecting (but not a full powercycle; I don't have one in my hands). As you can see, no DNS resolution attempts:

20:15:59.305739 PPPoE  [ses 0x1483] CHAP, Success (0x03), id 1, Msg
S=XX M=Authentication succeeded
20:15:59.306100 PPPoE  [ses 0x1483] IPCP, Conf-Request (0x01), id 1,
length 12
20:15:59.317840 PPPoE  [ses 0x1483] IPCP, Conf-Request (0x01), id 1,
length 24
20:15:59.317841 PPPoE  [ses 0x1483] IPCP, Conf-Ack (0x02), id 1,
length 12
20:15:59.317867 PPPoE  [ses 0x1483] IPCP, Conf-Nack (0x03), id 1,
length 18
20:15:59.325253 PPPoE  [ses 0x1483] IPCP, Conf-Request (0x01), id 2,
length 24
20:15:59.325273 PPPoE  [ses 0x1483] IPCP, Conf-Ack (0x02), id 2,
length 24
20:15:59.335589 PPPoE  [ses 0x1483] IP 172.17.49.245.123 >
133.100.9.2.123: NTPv3, Client, length 48
20:15:59.335588 PPPoE  [ses 0x1483] IP 172.17.49.245.123 >
192.5.41.41.123: NTPv3, Client, length 48
20:15:59.335588 PPPoE  [ses 0x1483] IP 172.17.49.245.123 >
192.5.41.40.123: NTPv3, Client, length 48

Here is an example of Tenda traffic when it is unable to reach the destination: it repeats the request every 10 seconds, endlessly. My guess is they are using NTP to show the status of the internet connection.
So now those NTP servers are getting quite a significant DDoS this way.

19:57:52.162863 IP (tos 0x0, ttl 64, id 38515, offset 0, flags
[none], proto UDP (17), length 76)
172.16.31.67.123 > 192.5.41.40.123: [udp sum ok] NTPv3, length
48
Client, Leap indicator:  (0), Stratum 0 (unspecified), poll
0 (1s), precision 0
Root Delay: 0.00, Root dispersion: 0.00,
Reference-ID: (unspec)
Reference Timestamp:  0.0
Originator Timestamp: 0.0
Receive Timestamp:0.0
Transmit Timestamp:   3691177063.0 (2016/12/19
22:57:43)
Originator - Receive Timestamp:  0.0
Originator - Transmit Timestamp: 3691177063.0
(2016/12/19 22:57:43)
19:57:52.163277 IP (tos 0x0, ttl 64, id 38516, offset 0, flags
[none], proto UDP (17), length 76)
172.16.31.67.123 > 192.5.41.41.123: [udp sum ok] NTPv3, length
48
Client, Leap indicator:  (0), Stratum 0 (unspecified), poll
0 (1s), precision 0
Root Delay: 0.00, Root dispersion: 0.00,
Reference-ID: (unspec)
Reference Timestamp:  0.0
Originator Timestamp: 0.0
Receive Timestamp:0.0
Transmit Timestamp:   3691177063.0 (2016/12/19
22:57:43)
Originator - Receive Timestamp:  0.0
Originator - Transmit Timestamp: 3691177063.0
(2016/12/19 22:57:43)
19:57:52.164435 IP (tos 0x0, ttl 64, id 38517, offset 0, flags
[none], proto UDP (17), length 76)
172.16.31.67.123 > 133.100.9.2.123: [udp sum ok] NTPv3, length
48
Client, Leap indicator:  (0)

Re: Recent NTP pool traffic increase (update)

2016-12-19 Thread Denys Fedoryshchenko
	Client, Leap indicator:  (0), Stratum 0 (unspecified), poll 0 (1s), 
precision 0

Root Delay: 0.00, Root dispersion: 0.00, Reference-ID: (unspec)
  Reference Timestamp:  0.0
  Originator Timestamp: 0.0
  Receive Timestamp:0.0
  Transmit Timestamp:   3691177073.0 (2016/12/19 22:57:53)
Originator - Receive Timestamp:  0.0
	Originator - Transmit Timestamp: 3691177073.0 (2016/12/19 
22:57:53)
19:58:02.165061 IP (tos 0x0, ttl 64, id 38520, offset 0, flags [none], 
proto UDP (17), length 76)

172.16.31.67.123 > 133.100.9.2.123: [udp sum ok] NTPv3, length 48
	Client, Leap indicator:  (0), Stratum 0 (unspecified), poll 0 (1s), 
precision 0

Root Delay: 0.00, Root dispersion: 0.00, Reference-ID: (unspec)
  Reference Timestamp:  0.0
  Originator Timestamp: 0.0
  Receive Timestamp:0.0
  Transmit Timestamp:   3691177073.0 (2016/12/19 22:57:53)
Originator - Receive Timestamp:  0.0
	Originator - Transmit Timestamp: 3691177073.0 (2016/12/19 
22:57:53)



On 2016-12-19 21:40, Roland Dobbins wrote:

On 20 Dec 2016, at 2:22, Denys Fedoryshchenko wrote:


If it is necessary i can investigate further.


Yes, please!

---
Roland Dobbins <rdobb...@arbor.net>


Re: Recent NTP pool traffic increase (update)

2016-12-19 Thread Denys Fedoryshchenko
Many apologies! Update: it seems a customer illiterate in English (worse than me, hehe) was not precise about the router model when he reported the issue.

I have now noticed that many customers using specific models of routers reported issues with their internet connection.
Analyzing the internet traffic, I noticed that these routers seem to excessively request NTP from these IP addresses, without trying any others:


 > 192.5.41.40.123: NTPv3, Client, length 48
 > 192.5.41.41.123: NTPv3, Client, length 48
 > 133.100.9.2.123: NTPv3, Client, length 48

I'm asking the customer to take a photo of the device, to retrieve the model and revision, and checking other customers as well to see whether they are abusing the same servers.
There is definitely a pattern: all of them are using just these 3 hardcoded servers. The problem is that many customers change the MAC of the router, so I cannot reliably identify the vendor by the first MAC nibbles.
He sent me 2 photos; one of them is an LB-Link (MAC vendor lookup for 20:f4:1b says Shenzhen Bilian Electronic Co., Ltd), the other is a Tenda (c8:3a:35 is Tenda).

If it is necessary i can investigate further.


On 2016-12-19 20:33, Ca By wrote:

My WAG is that the one plus updated firmware on that day and they baked in the pool.

Complete WAG, but time and distributed sources including wireless networks

On Mon, Dec 19, 2016 at 10:30 AM Laurent Dumont wrote:

I also have a similar experience with an increased load.

I'm running a pretty basic Linode VPS and I had to fine tune a few things in order to deal with the increased traffic. I can clearly see a date around the 14-15 where my traffic increases to 3-4 times the usual amounts.

I did a quick dump and in 60 seconds I was hit by slightly over 190K IPs

http://i.imgur.com/mygYINk.png

Weird stuff

Laurent

On 12/17/2016 10:25 PM, Gary E. Miller wrote:

> Yo All!
>
> On Sat, 17 Dec 2016 17:54:55 -0800
> "Gary E. Miller"  wrote:
>
>> # tcpdump -nvvi eth0 port 123 |grep "Originator - Transmit Timestamp:"
>>
>> And I do indeed get odd results.  Some on my local network...
> To follow up on my own post, so this can be promptly laid to rest.
>
> After some discussion at NTPsec.  It seems that chronyd takes a lot
> of 'creative license' with RFC 5905 (NTPv4).  But it is not malicious,
> just 'odd', and not new.
>
> So, nothing to see here, back to the hunt for the real cause of the new
> NTP traffic.
>
> RGDS
> GARY
> ---
> Gary E. Miller Rellim 109 NW Wilmington Ave., Suite E, Bend, OR 97703
>   g...@rellim.com  Tel:+1 541 382 8588






Re: Recent NTP pool traffic increase

2016-12-19 Thread Denys Fedoryshchenko
I have now noticed that many customers using TP-Links reported issues with their internet connection.
Analyzing the internet traffic, I noticed that the TP-Link seems to excessively request NTP from these IP addresses, without trying any others:


 > 192.5.41.40.123: NTPv3, Client, length 48
 > 192.5.41.41.123: NTPv3, Client, length 48
 > 133.100.9.2.123: NTPv3, Client, length 48

I'm asking the customer to take a photo of the device, to retrieve the model and revision, and checking other customers as well to see whether they are abusing the same servers.


On 2016-12-19 20:33, Ca By wrote:

My WAG is that the one plus updated firmware on that day and they baked in the pool.

Complete WAG, but time and distributed sources including wireless networks

On Mon, Dec 19, 2016 at 10:30 AM Laurent Dumont wrote:

I also have a similar experience with an increased load.

I'm running a pretty basic Linode VPS and I had to fine tune a few things in order to deal with the increased traffic. I can clearly see a date around the 14-15 where my traffic increases to 3-4 times the usual amounts.

I did a quick dump and in 60 seconds I was hit by slightly over 190K IPs

http://i.imgur.com/mygYINk.png

Weird stuff

Laurent

On 12/17/2016 10:25 PM, Gary E. Miller wrote:

> Yo All!
>
> On Sat, 17 Dec 2016 17:54:55 -0800
> "Gary E. Miller"  wrote:
>
>> # tcpdump -nvvi eth0 port 123 |grep "Originator - Transmit Timestamp:"
>>
>> And I do indeed get odd results.  Some on my local network...
> To follow up on my own post, so this can be promptly laid to rest.
>
> After some discussion at NTPsec.  It seems that chronyd takes a lot
> of 'creative license' with RFC 5905 (NTPv4).  But it is not malicious,
> just 'odd', and not new.
>
> So, nothing to see here, back to the hunt for the real cause of the new
> NTP traffic.
>
> RGDS
> GARY
> ---
> Gary E. Miller Rellim 109 NW Wilmington Ave., Suite E, Bend, OR 97703
>   g...@rellim.com  Tel:+1 541 382 8588






Re: Arista unqualified SFP

2016-08-18 Thread Denys Fedoryshchenko

No, this driver patch (or a similar one) won't work on the new model.
But honestly, in my experience the X520 still performs better than the 710 series on 10G links.


https://sourceforge.net/p/e1000/mailman/message/34991760/
From: Wesley W. Terpstra <wesley@te...> - 2016-04-03 14:03:52
He did unlock it by modifying the NVM in the card (and I guess losing the warranty, for sure).
In a way that is even better, because the X520 needed a driver modification, and that is not possible on "blackbox" software solutions using them.


On 2016-08-18 15:55, Mike Hammett wrote:

https://sourceforge.net/p/e1000/mailman/message/28698959/

That or similar doesn't work for that model?




-
Mike Hammett
Intelligent Computing Solutions

Midwest Internet Exchange

The Brothers WISP

- Original Message -----

From: "Denys Fedoryshchenko" <de...@visp.net.lb>
To: "Mike Hammett" <na...@ics-il.net>
Cc: "NANOG Mailing List" <nanog@nanog.org>
Sent: Thursday, August 18, 2016 7:51:13 AM
Subject: Re: Arista unqualified SFP

Not the case with the new Intel X710 chipset; the check is in firmware.
Someone hacked it, but ...

On 2016-08-18 15:41, Mike Hammett wrote:

Intel does allow DACs of any vendor (assuming they properly identify as DACs). You can also disable Intel's check in the Linux drivers.




-
Mike Hammett
Intelligent Computing Solutions

Midwest Internet Exchange

The Brothers WISP

- Original Message -

From: "Mikael Abrahamsson" <swm...@swm.pp.se>
To: "Mark Tinka" <mark.ti...@seacom.mu>
Cc: "nanog list" <nanog@nanog.org>
Sent: Thursday, August 18, 2016 7:32:55 AM
Subject: Re: Arista unqualified SFP

On Thu, 18 Aug 2016, Mark Tinka wrote:


All other vendors, explicitly or silently, adopt the same approach.


I've heard from people running Intel NICs and HP switches that this can't be turned off there. You run into very interesting problems when you're trying to use DAC cables between multiple vendors.

Any pointers on how to turn this off on Intel NICs and HP switches?


Re: Arista unqualified SFP

2016-08-18 Thread Denys Fedoryshchenko

Not the case with the new Intel X710 chipset; the check is in firmware.
Someone hacked it, but ...

On 2016-08-18 15:41, Mike Hammett wrote:

Intel does allow DACs of any vendor (assuming they properly identify as DACs). You can also disable Intel's check in the Linux drivers.




-
Mike Hammett
Intelligent Computing Solutions

Midwest Internet Exchange

The Brothers WISP

- Original Message -

From: "Mikael Abrahamsson" 
To: "Mark Tinka" 
Cc: "nanog list" 
Sent: Thursday, August 18, 2016 7:32:55 AM
Subject: Re: Arista unqualified SFP

On Thu, 18 Aug 2016, Mark Tinka wrote:


All other vendors, explicitly or silently, adopt the same approach.


I've heard from people running Intel NICs and HP switches that this can't be turned off there. You run into very interesting problems when you're trying to use DAC cables between multiple vendors.

Any pointers on how to turn this off on Intel NICs and HP switches?


Re: Arista unqualified SFP

2016-08-18 Thread Denys Fedoryshchenko
Same here, I was considering Arista, because they are quite cost-effective, feature-rich, interesting hardware for developing some custom solutions. But no more, after reading about this unreasonable vendor lock-in.
This "openness" looks like marketing only; under the hood it seems worse than other solutions on the market. Also, when support shows such inflexibility, it is a very bad sign. And very sad.



On 2016-08-18 14:29, Dovid Bender wrote:

And I was about to jump on to the Arista train.

Regards,

Dovid

-Original Message-
From: Stanislaw
Sender: "NANOG"
Date: Thu, 18 Aug 2016 13:24:05

To: nanog list
Subject: Re: Arista unqualified SFP

Hi all,
If somebody is following my epic adventure of getting unqualified SFPs to work on Aristas, here is the unhappy end of it.

I've written to Arista support and got the following dialogue:
Support guy:
Hi,
Thank you for contacting Arista Support. My name is  and I'll be
assisting you on this case.
Could you please provide the "show version" output from this switch?

Me:
Hi,
Here it is:


Support guy:
Hi,
Thank you for the information.
Unfortunately, we are unable to activate your 3rd party components. To
ensure ongoing quality, Arista devices are designed to support only
properly qualified transceivers.
Please let me know if you have any other questions.

Me:
I do not understand,
But there is a command which allows using non-Arista transceivers. Why
have you implemented it but don't provide an access key to your
customers when they ask for it?
If it is required to sign some papers which declare that I am aware of
all the risks and losing my warranty - I agree with that, lets do it.
Any way what are the conditions to receive that access key?

Support guy:
I'm afraid that there is nothing I'm able to do regarding this
situation. If you have any other questions regarding enabling 3rd party
options in Arista switches, I suggest to contact your local account 
team

(or sales) for further discussion on this matter.


Next, I've tried inserting the various QSFP+ DAC cables I have - none of them was even detected on the switch; it acted like nothing had been inserted. I guess that even if I got the key, most of my transceivers/DAC cables (which work like a champ in Juniper or Extreme switches) wouldn't work.

I'm writing this post so that anybody considering buying their switches
is aware of what they'd get. Just buy Juniper instead.


Stanislaw wrote at 2016-08-17 23:25:

Hi Tim,

Thanks for your expressive answer. Will try it :)

Tim Jackson wrote at 2016-08-17 22:57:


I'd suggest bitching and moaning at your account team & support until
they give you the key to unlock them..

--
Tim

On Wed, Aug 17, 2016 at 2:50 PM, Stanislaw  wrote:


Hi all,
Is there a way for unlocking off-brand transceivers usage on Arista
switches?

I've got an Arista 7050QX switch with EOS version 4.14. It then turned
out that Arista switches seem to have no way to unlock the use of
off-brand transceivers (via some service command or similar).

I've patched /usr/lib/python2.7/site-packages/XcvrAgent.py, made the
checking function bypass the actual check, and it helped: the ports are
not in errdisable state anymore. But although the transceivers are
detected correctly, the links aren't coming up (they stay in notconnect
state).

If anyone does have the sacred knowledge of bringing off-brand
transceivers to life on Arista switches, your help would be very
appreciated. Thanks.
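The patch described above amounts to monkey-patching a validation
routine so it always succeeds. Arista's actual XcvrAgent internals are
not public, so the names below (`XcvrChecker`, `is_qualified`) are
hypothetical stand-ins; this only sketches the general technique, not
the real EOS code:

```python
# Hypothetical stand-in for a vendor qualification check. The real
# Arista XcvrAgent.py differs; only the monkey-patch idea is shown.
class XcvrChecker:
    def is_qualified(self, vendor_id):
        # Imagine this rejects anything not on a vendor whitelist.
        return vendor_id == "ARISTA"

def bypass_qualification(checker_cls):
    """Monkey-patch the check so every transceiver passes."""
    checker_cls.is_qualified = lambda self, vendor_id: True

checker = XcvrChecker()
assert not checker.is_qualified("GENERIC")   # normally rejected
bypass_qualification(XcvrChecker)
assert checker.is_qualified("GENERIC")       # check now always passes
```

As the thread shows, passing the software check is not sufficient on its
own: the ports left errdisable state, but links still stayed in
notconnect.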


Re: OT - Small DNS appliances for remote offices.

2015-02-19 Thread Denys Fedoryshchenko

On 2015-02-19 18:26, valdis.kletni...@vt.edu wrote:

On Thu, 19 Feb 2015 14:52:42 +, David Reader said:


I'm using several to connect sensors, actuators, and such to a private
network, which it's great for - but I'd think at least twice before
deploying one as a public-serving host in a user-experience-critical
role in a remote location.


I have a Pi that's found a purpose in life as a remote smokeping sensor
and related network monitoring, a task it does quite nicely.

Note that they just released the Pi 2, which goes from the original
single-core ARM v6 to a quad-core ARM v7, and increases memory from 256M
to 1G. All at the same price point.  That may change the calculus. I
admit not having gotten one in hand to play with yet.

Weird thing - it still has Ethernet over ugly USB 2.0.
That kills any interest in running it for any serious networking
applications.


---
Best regards,
Denys


Re: OT - Small DNS appliances for remote offices.

2015-02-19 Thread Denys Fedoryshchenko
The Beaglebone has a gigabit MAC, but due to some errata it is not used
in gigabit mode; it runs at 100M (which may be enough for a small
office). But it is a hardware MAC.

Another inexpensive board with a hardware MAC is the Odroid-C1.
But the stability of all these boards under heavy networking use is an
open question; I haven't yet tested them intensively for this purpose.


On 2015-02-19 02:27, Geoff Mulligan wrote:

The BeagleBone Black uses flash memory to hold the system image which
allows it to boot quickly.  I'm running Ubuntu Trusty 14.04 and it
seems stable.

Geoff

*--
Presidential Innovation Fellow | The White House*

On 02/18/2015 05:20 PM, Bacon Zombie wrote:

You also have to watch out for issues with the Pi corrupting SD cards.
On 19 Feb 2015 01:04, Geoff Mulligan nano...@mulligan.org wrote:

I have used the BeagleBone to run a few simple servers.  I don't know
if the ethernet port on the Bone is on the USB bus. It is slightly more
expensive than a Pi, but they have worked well for me.

 Geoff

On 02/18/2015 04:44 PM, Peter Loron wrote:

For any site where you would use a Pi as the DNS cache, it won't be an
issue. DNS isn't that heavy at those query rates.

Yeah, it would be awesome if they'd been able to get a SoC that included
ethernet.

-Pete
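The point above, that DNS isn't heavy at small-office query rates,
checks out with back-of-envelope arithmetic (the query rate and packet
size below are assumed illustrative values, not figures from the
thread):

```python
def dns_bandwidth_kbps(queries_per_sec, avg_packet_bytes=150):
    """Rough DNS bandwidth estimate: each query implies one query and
    one response packet; 150 bytes is an assumed typical UDP DNS size."""
    return queries_per_sec * 2 * avg_packet_bytes * 8 / 1000

# An assumed busy small office at 200 qps stays well under 1 Mbit/s,
# so even USB-attached 100M Ethernet is nowhere near the bottleneck.
print(dns_bandwidth_kbps(200))  # 480.0 kbit/s
```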

On 2015-02-18 15:08, Robert Webb wrote:

What I do not like about the Pi is that the network port is on the USB
bus and thus limited to USB speeds.

Original message - From: Maxwell Cole mcole.mailingli...@gmail.com,
Date: 02/18/2015 4:30 PM (GMT-05:00), To: nanog@nanog.org 'NANOG list',
Subject: Re: OT - Small DNS appliances for remote offices.



---
Best regards,
Denys


Re: OT - Small DNS appliances for remote offices.

2015-02-19 Thread Denys Fedoryshchenko

On 2015-02-19 15:13, Rob Seastrom wrote:

Denys Fedoryshchenko de...@visp.net.lb writes:


Beaglebone has gigabit mac, but due some errata it is not used in
gigabit mode, it is 100M (which is maybe enough for small office). But
it is hardware mac.


The Beaglebone Black rev C BOM calls out the ethernet phy chip as
LAN8710A-EZC-TR which is 10/100 so there's your constraint.  The MAC
is built into the SoC and according to the datasheet the AM3358B is
10/100/1000.


Another hardware MAC on inexpensive board it is Odroid-C1.


Difficulty: hardware MAC tells you nothing about how it's connected,
either on the board or internally in the SoC.  Ethernet on Multibus
and Ethernet on PCIe (neither likely on an embedded ARM ;-) are both
hardware MAC yet the bus-constrained bandwidths will differ by
several orders of magnitude.

-r
Well, I guess for DNS it won't matter much (400Mbit or full capacity).
But the stability of the driver and the achievable pps rate on it, due
to poor code, can be a question. Mostly these products are network
enabled, but the networking is very lightly used - not the way it is
used on appliances, with 24/7 traffic, sometimes malicious.

About the Beaglebone, the reason is probably this errata:
"While the AM335x GP EVM has a Gb Ethernet PHY, AR8031A, on the base
board, the PCB was designed to use internal clock delay mode of the
RGMII interface and the AM335x does not support the internal clock delay
mode. Therefore, if operating the Ethernet in Gb mode, there may be
problems with the performance/function due to this. The AR8031A PHY
supports internal delay mode. This can be enabled by software to
guarantee Gb operation. However, this cannot be done to enable internal
delay mode for Ethernet booting of course."
Or maybe they just put in a 100Mbit PHY to make the BOM cost less.

As far as I know, Raspberry Pi ethernet over USB might be fine for DNS
too, but it used to have issues with large data transfers (the ethernet
driver hangs). No idea about now.


---
Best regards,
Denys


Re: DDOS, IDS, RTBH, and Rate limiting

2014-11-22 Thread Denys Fedoryshchenko

On 2014-11-22 18:00, freed...@freedman.net wrote:

 Cisco ASRs and MXs with inline jflow can do hundreds of K flows/second
 without affecting packet forwarding.

Yes, I agree, those are good for netflow - but only when they already
exist in the network.

Is it worth buying an ASR if an L3 switch is already doing the job
(BGP/ACL/rate-limit/routing)?


Not suggesting that anyone should change out their gear though per my
other message, I've seen SPAN make things go wonky on almost every
vendor that ISPs use for switching.

Well, I always try to stay on the safe side. Additionally, sure, I
mirror RX only; RX+TX can often exceed the interface rate too :)




Well, if it is available, apart from hardware limitations there is a
second obstacle: software licensing cost. On latest JunOS, for example
on the EX2200, you need to purchase a license (EFL), and if I am not
wrong it is $3000 for 48-port units.

So if only the sFlow feature is at stake, it is worth considering
whether to purchase the license or to purchase a server. Prices for the
JFlow license on MX, just for 5/10G, are way above the cost of a very
decent server.


I believe that smaller MXs can run it for free.  Larger providers we've
worked with often have magic cookies they can call in to get it enabled,
but I understand you're talking about the smaller-provider (or at least
~10gig per POP across multiple POPs) case.

We see a lot of Brocade for switching in hosting providers, which makes
sFlow easy, of course.
Oh, Brocade - recent experience with a ServerIron taught me a new
lesson: I can't do bonding on ports the way I want; it has limitations
about even/odd port numbers and so on.
The most amazing part: I had just forgotten that I have this ServerIron,
and it is the place where I run DDoS protection (but it works perfectly
in tap mode). Thanks for reminding me about this vendor :)




 And with the right setup you can run FastNetMon or other tools in
 addition to generating flow that can be of use for other purposes
 as well...

Technically there is ipt_NETFLOW, which can generate netflow on the same
box for statistical/telemetry purposes. But I am not sure it is possible
to run them together.


At frac 10gig you can just open pcap on a 10gig interface on a Linux
box getting a tap, of course.

What we did was use myricom cards and the myri_snf drivers and take from
the single-consumer ring buffers into large in-RAM ring buffers, and
make those ring buffers available via LD_PRELOAD or cli tools to allow
flow, snort, p0f, tcpdump, etc to all be run at the same time at 10gig.

The key for that is not going through the kernel IP stack, though.
Ntop's pf_ring is basically the same idea, but can run on Intel cards.
Maybe just because I never had a Myricom in hand - they are difficult to
obtain here.
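Whatever the capture path (myri_snf, pf_ring, plain libpcap), the
per-packet work in such analyzers is mostly pulling a flow key out of
the raw frame. A minimal sketch for an untagged Ethernet/IPv4 packet
(no VLAN tags, no IP options - an illustrative simplification, not any
particular tool's code):

```python
import socket
import struct

def ipv4_flow_key(frame):
    """Extract (src, dst, proto) from a raw Ethernet frame carrying
    IPv4. Assumes no VLAN tag and a 20-byte IPv4 header (sketch only)."""
    ethertype = struct.unpack("!H", frame[12:14])[0]
    if ethertype != 0x0800:        # not IPv4
        return None
    ip = frame[14:34]              # fixed 20-byte IPv4 header
    proto = ip[9]                  # protocol field (17 = UDP)
    src = socket.inet_ntoa(ip[12:16])
    dst = socket.inet_ntoa(ip[16:20])
    return (src, dst, proto)

# Build a toy frame: 12 bytes of MACs + EtherType 0x0800 + minimal
# IPv4 header (version/IHL, TOS, len, ID, frag, TTL, proto, cksum).
ipv4 = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 20, 0, 0, 64, 17, 0,
                   socket.inet_aton("192.0.2.1"),
                   socket.inet_aton("198.51.100.2"))
frame = b"\x00" * 12 + b"\x08\x00" + ipv4
print(ipv4_flow_key(frame))  # ('192.0.2.1', '198.51.100.2', 17)
```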




 But taps can be difficult or at least time consuming for people to
 put in at scale.  Even, we've seen, for folks with 10G networks.
 Often because they can get 90% of what they need for 4 different
 business purposes from just flow :)

About scaling, I guess it depends on a proper deployment strategy and
the sysadmins'/developers' capabilities. For example, deploying a new
ruleset for my pcap-based homemade analyser to 150 probes across the
country is just one click.


Sounds cool.  You should write up that use case.  Hopefully you've
secured the metadata/command push channel well enough :)
For the servers it is SSH with key authentication, and the push system
doesn't contain the private key; it is forwarded via ssh-agent from the
developer's PC. Sure, it would be better to also sign each update with
asymmetric crypto and keep the keys on a smartcard, but in this case it
is not necessary.


---
Best regards,
Denys


Re: DDOS, IDS, RTBH, and Rate limiting

2014-11-21 Thread Denys Fedoryshchenko

On 2014-11-21 03:12, Roland Dobbins wrote:

On 21 Nov 2014, at 6:22, Denys Fedoryshchenko wrote:


Netflow is stateful stuff,


This is factually incorrect; NetFlow flows are unidirectional in
nature, and in any event have no effect on processing of data-plane
traffic.
The word stateful has nothing in common with a stateful firewall.
"Stateful protocol: a protocol which requires keeping of internal state
on the server is known as a stateful protocol." And sure,
unidirectional/bidirectional is totally unrelated.




and just to run it on wirespeed, on hardware, you need to utilise 
significant part of TCAM,


Again, this is factually incorrect.

http://en.wikipedia.org/wiki/NetFlow#NetFlow_support
Proof that the majority of solutions run *flow not in software.

Cisco 65xx (yes, they are obsolete, but they run stuff wirespeed)
Aug 24 12:30:53: %EARL_NETFLOW-SP-4-TCAM_THRLD: Netflow TCAM threshold 
exceeded, TCAM Utilization [97%]
This is the best example. Also, on many Ciscos, if you use UBRL then you
cannot use NetFlow, because they use the same part of the TCAM
resources. Others, for example Juniper, use sampling (read: missing
data) just to not overflow resources, and have various limitations, such
as the RE-DPC communication pps limit and licensing limits.
For example, the MS-DPC is a pretty good one - a few million flows in
hardware, 7-8Gbps of traffic, and... cost $12.


i am not talking that on some hardware it is just impossible to run 
it.


This is also factually incorrect.  Some platforms/linecards do not in
fact support NetFlow (or other varieties of flow telemetry) due to
hardware limitations.

But they can still do mirroring fine, and fastnetmon will do its job.



And last thing, from one of public papers, netflow delaying factors:
1. Flow record expiration


This is tunable.
Within certain limits. You can't set flow-active-timeout to less than 60
seconds in Junos 14, for example.
On some platforms, even if you can, you just run into the limits of the
platform again (forwarding - management communications).



• Typical delay: 15-60 sec.


This is an entirely subjective assessment, and does not reflect
operational realities.  These are typically *maximum values* - and
they are well within operationally-useful timeframes.  Also, the
effect of NetFlow cache size and resultant FIFOing of flow records is
not taken into account, nor is the effect on flow termination and
flow-record export of TCP FIN or RST flags denoting TCP traffic taken
into account.

So for a small hosting operation (up to 10G), I believe FastNetMon is
the best solution.


This is a gross over-generalization unsupported by facts.  Many years
of operational experience with NetFlow and other forms of flow
telemetry by large numbers of network operators of all sizes and
varieties contract this over-generalization.

The popularity of Fastnetmon and similar tools speaks for itself.


It is generally unwise to make sweeping statements regarding
operational impact which are not borne out by significant operational
experience in production networks.
What can be asserted without evidence can be dismissed without
evidence.





Faster, and no significant investments to equipment.


This statement indicates a lack of understanding of opex costs,
irrespective of capex costs.
Sweet marketing buzzwords, used together with some unclear calculations
to sell suffering hosting providers various expensive tools that are not
necessary for them.
The OPEX of fastnetmon is a small fee for a qualified sysadmin, and
often not even that, because a hosting operator should already have one.



Bigger hosting providers might reuse their existing servers, segment 
the network, and implement inexpensive monitoring on aggregation 
switches without any additional cost again.


This statement indicates a lack of operational experience in networks
of even minimal scale.

Ah, and there is one more huge problem with netflow vs FastNetMon:
netflow just by design cannot be adapted to run pattern matching, while
it is trivial to patch FastNetMon for that, turning it into a mini-IDS
for free.


This statement betrays a lack of understanding of NetFlow-based (and
other flow telemetry-based) detection and classification, as well as
the undesirability and negative operational impact of stateful
IDS/'IPS' deployments in production networks.

You should also note that FastNetMon is far from unique; there are
multiple other open-source tools which provide the same type of
functionality, and none of them have replaced flow telemetry, either.
That's the power of open source. Since FastNetMon is not the only such
tool, it is worth mentioning the others; people here will benefit from
using them, for free. And I'm sure the author of FastNetMon will not
feel offended at all.



Tools such as FastNetMon supplement flow telemetry, in situations in
which such tools can be deployed.  They do not begin to replace flow
telemetry, and they are not inherently superior to flow telemetry.

Again, I'm sure FastNetMon is a useful tool in many circumstances

Re: DDOS, IDS, RTBH, and Rate limiting

2014-11-21 Thread Denys Fedoryshchenko

On 2014-11-21 06:45, freed...@freedman.net wrote:
Netflow is stateful stuff, and just to run it at wirespeed, in
hardware, you need to utilise a significant part of the TCAM,


Cisco ASRs and MXs with inline jflow can do hundreds of K flows/second
without affecting packet forwarding.
Yes, I agree, those are good for netflow - but only when they already
exist in the network.
Is it worth buying an ASR if an L3 switch is already doing the job
(BGP/ACL/rate-limit/routing)?


I am not even talking about hardware where it is simply impossible to
run it. So everything about netflow is built on the assumption that the
hosting company or ISP can run it. And based on some observations, the
majority of small/middle hosting providers are using a minimal,
just-BGP-capable L3 switch as core and the cheapest-but-reliable L2/L3
on aggregation, and both are at best capable of running sampled sFlow.


Actually, sFlow from many vendors is pretty good (per your points about
flow burstiness and delays), and is good enough for dDoS detection.  Not
for security forensics, or billing at 99.99% accuracy, but good enough
for traffic visibility, peering analytics, and (d)DoS detection.
Well, if it is available, apart from hardware limitations there is a
second obstacle: software licensing cost. On latest JunOS, for example
on the EX2200, you need to purchase a license (EFL), and if I am not
wrong it is $3000 for 48-port units.
So if only the sFlow feature is at stake, it is worth considering
whether to purchase the license or to purchase a server. Prices for the
JFlow license on MX, just for 5/10G, are way above the cost of a very
decent server.



snip


So for a small hosting operation (up to 10G), I believe FastNetMon is
the best solution. Faster, and no significant investments in equipment.
Bigger hosting providers might reuse their existing servers, segment the
network, and implement inexpensive monitoring on aggregation switches,
again without any additional cost.


It can be useful to have a 10G network monitoring box of course...

And with the right setup you can run FastNetMon or other tools in
addition to generating flow that can be of use for other purposes
as well...
Technically there is ipt_NETFLOW, which can generate netflow on the same
box for statistical/telemetry purposes. But I am not sure it is possible
to run them together.




Ah, and there is one more huge problem with netflow vs FastNetMon:
netflow just by design cannot be adapted to run pattern matching, while
it is trivial to patch FastNetMon for that, turning it into a mini-IDS
for free.


It's true, having a network tap can be useful for doing PCAP-y stuff.

But taps can be difficult or at least time consuming for people to
put in at scale.  Even, we've seen, for folks with 10G networks.
Often because they can get 90% of what they need for 4 different
business purposes from just flow :)
About scaling, I guess it depends on a proper deployment strategy and
the sysadmins'/developers' capabilities. For example, deploying a new
ruleset for my pcap-based homemade analyser to 150 probes across the
country is just one click.


---
Best regards,
Denys


Re: DDOS, IDS, RTBH, and Rate limiting

2014-11-21 Thread Denys Fedoryshchenko

On 2014-11-21 14:50, Roland Dobbins wrote:

On 21 Nov 2014, at 15:17, Denys Fedoryshchenko wrote:

The word stateful has nothing in common with a stateful firewall.
"Stateful protocol: a protocol which requires keeping of internal state
on the server is known as a stateful protocol."


Correct - and NetFlow is not stateful, by this definition.

Not stateful only if you pick on the word server.
To do bytes/packets accounting for a flow, you need to keep that
specific flow's previous state. To differentiate between flows with the
same src/dst IPs and ports (if one ends and the next starts with the
same data) you need to track its state, again. And just to keep track of
_flows_ in a packet-switched network you need state. A surprising lack
of knowledge.
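The per-flow accounting being argued about here boils down to keeping a
counter table keyed by the flow tuple; a minimal sketch of the state a
flow exporter must hold between exports:

```python
from collections import defaultdict

# Per-flow state: byte/packet counters keyed by the 5-tuple. This
# table *is* the "state" at issue - it must survive between packets
# and between export cycles.
flows = defaultdict(lambda: {"packets": 0, "bytes": 0})

def account(src, dst, sport, dport, proto, length):
    """Update the flow entry for one observed packet."""
    key = (src, dst, sport, dport, proto)
    flows[key]["packets"] += 1
    flows[key]["bytes"] += length

account("10.0.0.1", "10.0.0.2", 12345, 53, "udp", 80)
account("10.0.0.1", "10.0.0.2", 12345, 53, "udp", 120)
key = ("10.0.0.1", "10.0.0.2", 12345, 53, "udp")
print(flows[key])  # {'packets': 2, 'bytes': 200}
```

Distinguishing a new flow that reuses the same 5-tuple (after FIN/RST,
or after a timeout) requires yet more state per entry, which is the
point being made above.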





And sure unidirectional/bidirectional is totally unrelated.


On the contrary, it is quite relevant.


Cisco 65xx (yes, they are obsolete, but they run stuff wirespeed)


They are not obsolete - they perform very well with Sup2T and
EARL8-based linecards.
It seems yes, I'm wrong on that point; I was not able to run netflow
reliably, but that was before CSCul90377 and CSCui17732 were fixed.





Others, for example Juniper, are using sampling (read - missing data),


The largest networks in the world use sampled NetFlow every hour of
every day for many purposes, including DDoS
detection/classification/traceback.  It works quite well for all those
purposes.
The use case of fastnetmon is not the largest networks. Sampled netflow
is useless for per-traffic billing purposes, for example.




just to not overflow resources, and have various limitations, such as
the RE-DPC communication pps limit and licensing limits.
For example, the MS-DPC is a pretty good one - a few million flows in
hardware, 7-8Gbps of traffic, and... cost $12.


You get what you pay for.
While I can pay $1500 for a server and get netflow and ~3-second BGP
blackholing with fastnetmon.




But they can still do mirroring fine, and fastnetmon will do its job.


On the contrary - SPAN nee port mirroring cuts into the
frames-per-second budget of linecards, as the traffic is in essence
being duplicated.  It is not 'free', and it has a profound impact on
the the switch's data-plane traffic forwarding capacity.

Unlike NetFlow.
In the hosting case mirroring is usually done on the uplink port, but I
have to agree, it might be a problem.



Yes, it does - they are far less popular than NetFlow, because they do
not scale on networks of any size, nor do they provide traceback (given
your lack of comments on traceback elsewhere in this thread, it appears
that you aren't familiar with this concept).
You make my point very well, thank you.  There is overwhelming evidence
that NetFlow and similar forms of flow telemetry scale well and provide
real, measurable, actionable operational value on networks of all types
and sizes.  The reason for the popularity of flow telemetry is that it
is low-opex (no probes to deploy); low-capex (no probes to deploy);
scales to tb/sec speeds; is practicable for large networks (no probes to
deploy); provides instantaneous traceback (probes can't do this); and
provides statistics on dropped traffic (probes can't do this, either).
And again and again we come back to tb/s. I don't need Tb/s; I don't
need traceback - neither at a relatively small ISP nor at a VDS provider
do I need any of the above. I just need an inexpensive way to block an
attacked IP and/or announce it from a different location within a
minimal timeframe, to minimize the impact on other customers.
You might be highly professional with large-scale operators, but the
small guys' needs and capabilities are very different.
I had developed a tool similar to fastnetmon for almost the same
purpose: detecting attacks and switching the affected network by BGP to
a protected backbone. After calculating OPEX/CAPEX, a capable server
turned out to be a much cheaper alternative, in the short and long term,
than buying netflow-capable hardware (and support for it) just for
netflow purposes, plus hardware for a netflow collector.

Let's talk numbers.
My case is a small hosting operation: 4G of traffic, a C4948-10G, one
10G uplink, one 10G port free. The switch is not capable of running
sFlow or Netflow.
A decent server is already available, since it is a hosting company, so
the only expense is a 10G 82599 card, which is around $500. Even if a
server is not available, based on data from the fastnetmon author the
total cost is still within $1500. Deployment time: hours from installing
the hardware, without disrupting existing traffic.
The major remaining expense: tuning the server according to the author's
recommendations, and writing a shell script that will send the 4948 a
command to blackhole the IP. For a qualified sysadmin it is 2 hours of
work, and $500 max as a labor cost. That's it. What can be cheaper than
$2000 in this case? I guess I won't get an answer.
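The glue script described above only has to turn a detected victim
address into a null route on the switch. The command text below is
standard IOS-style RTBH syntax; everything else (how the command is
delivered to the 4948 - expect, SSH, etc.) is a hypothetical sketch and
is deliberately left out:

```python
import ipaddress

def blackhole_command(victim_ip):
    """Generate an IOS-style static null route to drop all traffic to
    an attacked host (classic destination-based RTBH at the edge)."""
    addr = ipaddress.ip_address(victim_ip)  # validate before pushing
    return f"ip route {addr} 255.255.255.255 Null0"

print(blackhole_command("203.0.113.7"))
# ip route 203.0.113.7 255.255.255.255 Null0
```

Reversing the mitigation is just pushing the same line prefixed with
"no" once the attack subsides.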



I'm uninterested in selling anyone anything.  What I'm interested in
doing is correcting the misinformation you are promulgating regarding
the utility of flow telemetry coupled with open-source flow

Re: DDOS, IDS, RTBH, and Rate limiting

2014-11-21 Thread Denys Fedoryshchenko

On 2014-11-21 18:41, Peter Phaal wrote:
Actually, sFlow from many vendors is pretty good (per your points about
flow burstiness and delays), and is good enough for dDoS detection.  Not
for security forensics, or billing at 99.99% accuracy, but good enough
for traffic visibility, peering analytics, and (d)DoS detection.


Well, if it is available, apart from hardware limitations there is a
second obstacle: software licensing cost. On latest JunOS, for example
on the EX2200, you need to purchase a license (EFL), and if I am not
wrong it is $3000 for 48-port units.

So if only the sFlow feature is at stake, it is worth considering
whether to purchase the license or to purchase a server.


Juniper no longer charges for sFlow on the EX2200 (as of Junos 11.2):

http://www.juniper.net/techpubs/en_US/junos11.2/information-products/topic-collections/release-notes/11.2/junos-release-notes-11.2.pdf

I am not aware of any vendor requiring an additional license to enable 
sFlow.


sFlow (packet sampling) works extremely well for the DDoS flood
detection / mitigation use case. The measurements are build into low
cost commodity switch hardware and can be enabled operationally
without adversely impacting switch performance.  A flood attack
generates high packet rates and sampling a 10G port at 1-in-10,000
will reliably detect flood attacks within seconds.
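The 1-in-10,000 claim above is easy to check with back-of-envelope
arithmetic (the flood packet rate below is an assumed illustrative
value):

```python
def expected_samples_per_sec(packets_per_sec, sampling_rate_n):
    """With 1-in-N packet sampling, a flood at the given pps yields
    pps/N samples per second on average."""
    return packets_per_sec / sampling_rate_n

# An assumed 1 Mpps flood sampled at 1-in-10,000 still produces
# ~100 samples/second - plenty for threshold-based detection
# within a few seconds.
print(expected_samples_per_sec(1_000_000, 10_000))  # 100.0
```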

For most use cases, it is much less expensive to use switches to
perform measurement than to attach taps / mirror port probes. If your
switches don't already support sFlow, you can buy a 10G capable white
box switch for a few thousand dollars that will let you monitor 1.2
Terabits/sec. If you go with an open platform such as Cumulus Linux,
you could even run your DDoS mitigation software on the switch and
dispense with the external server. Embedded instrumentation is simple
to deploy and reduces operational complexity and cost when compared to
add on probe solutions.

Peter Phaal
InMon Corp.
Wow, that's great news then. I'm using mostly Cisco gear now, but it
seems I'll have to take a look at Juniper - thanks for the information.
If it is free, then where an EX2200 is available it is much easier to
run sFlow and write a custom collector for it than to install a custom
probe (in most common cases).


---
Best regards,
Denys


Re: DDOS, IDS, RTBH, and Rate limiting

2014-11-21 Thread Denys Fedoryshchenko
Thanks! Most importantly, there is a plugin API, so it is easy to write
custom code to do some analysis and, on events, take actions.


On 2014-11-21 20:32, Tim Jackson wrote:

pmacct includes sfacctd which is an sflow collector.. Accessible via
the same methods as it's nfacctd collector or pcap based collector..

--
Tim

On Fri, Nov 21, 2014 at 9:06 AM, Denys Fedoryshchenko 
de...@visp.net.lb wrote:

On 2014-11-21 18:41, Peter Phaal wrote:


Actually, sFlow from many vendors is pretty good (per your points about
flow burstiness and delays), and is good enough for dDoS detection.  Not
for security forensics, or billing at 99.99% accuracy, but good enough
for traffic visibility, peering analytics, and (d)DoS detection.



Well, if it is available, apart from hardware limitations there is a
second obstacle: software licensing cost. On latest JunOS, for example
on the EX2200, you need to purchase a license (EFL), and if I am not
wrong it is $3000 for 48-port units.
So if only the sFlow feature is at stake, it is worth considering
whether to purchase the license or to purchase a server.



Juniper no longer charges for sFlow on the EX2200 (as of Junos 11.2):


http://www.juniper.net/techpubs/en_US/junos11.2/information-products/topic-collections/release-notes/11.2/junos-release-notes-11.2.pdf

I am not aware of any vendor requiring an additional license to enable
sFlow.

sFlow (packet sampling) works extremely well for the DDoS flood
detection / mitigation use case. The measurements are build into low
cost commodity switch hardware and can be enabled operationally
without adversely impacting switch performance.  A flood attack
generates high packet rates and sampling a 10G port at 1-in-10,000
will reliably detect flood attacks within seconds.

For most use cases, it is much less expensive to use switches to
perform measurement than to attach taps / mirror port probes. If your
switches don't already support sFlow, you can buy a 10G capable white
box switch for a few thousand dollars that will let you monitor 1.2
Terabits/sec. If you go with an open platform such as Cumulus Linux,
you could even run your DDoS mitigation software on the switch and
dispense with the external server. Embedded instrumentation is simple
to deploy and reduces operational complexity and cost when compared to
add-on probe solutions.

Peter Phaal
InMon Corp.


Wow, that's great news then. I'm using mostly Cisco gear now, but it
seems I'll have to take a look at Juniper - thanks for the information.
If it is free, then where an EX2200 is available it is much easier to
run sFlow and write a custom collector for it than to install a custom
probe (in most common cases).

---
Best regards,
Denys


---
Best regards,
Denys


Re: DDOS, IDS, RTBH, and Rate limiting

2014-11-20 Thread Denys Fedoryshchenko

On 2014-11-20 23:59, Roland Dobbins wrote:

On 21 Nov 2014, at 4:36, Pavel Odintsov wrote:


I tried to use netflow many years ago but it's not accurate enough and
not so fast enough and produce big overhead on middle class network
routers.


These statements are not supported by the facts.  NetFlow (and other
varieties of flow telemetry) has been used for many years for traffic
engineering-related analysis, capacity planning, and security
purposes.  I've never seen the CPU utilization on even a modest
mid-range router rise above single-digits, except once due to a bug
(which was fixed quickly).

Flow telemetry scales and provides invaluable edge-to-edge traceback
information.  NetFlow telemetry is accurate enough to be used for all
the purposes noted above by network operators across the world, from
the smallest to the largest networks in the world.

There are several excellent open-source NetFlow analysis tools which
allow folks to benefit from NetFlow analysis without spending a lot of
money. Some of these projects have been maintained and enhanced for
many years; their authors would not do that if NetFlow analytics
weren't sufficient to needs.

Packet-based analysis is certainly useful, but does not scale and does
not provide traceback information.

FastNetMon can handle 2-3 million packets per second and ~20Gbps on a
standard i7 2600 Linux box with an Intel 82599 NIC.


See the comments above with regards to scale.  This is inadequate for
a network of any size, it does not provide traceback information, and
it does not lend itself to broad deployment across a network of any
size.

I'm sure FastNetMon is a fine tool, and it's very good of you to spend
the time and effort to develop it and to make it available.  However,
making demonstrably-inaccurate statements about other technologies
which are in wide use by network operators and which have a proven
track record in the field is probably not the best way to encourage
folks to try FastNetMon.


Netflow is stateful stuff, and just to run it at wirespeed, in hardware,
you need to utilise a significant part of the TCAM - and I am not even
talking about hardware where it is simply impossible to run it. So
everything about netflow is built on the assumption that the hosting
company or ISP can run it. And based on some observations, the majority
of small/middle hosting providers are using a minimal, just-BGP-capable
L3 switch as core and the cheapest-but-reliable L2/L3 on aggregation,
and both are at best capable of running sampled sFlow.

And a last thing, from one of the public papers, netflow delaying
factors:
1. Flow record expiration
2. Exporting process
• Typical delay: 15-60 sec.

So for a small hosting operation (up to 10G), I believe FastNetMon is
the best solution. Faster, and no significant investments in equipment.
Bigger hosting providers might reuse their existing servers, segment the
network, and implement inexpensive monitoring on aggregation switches,
again without any additional cost.
Ah, and there is one more huge problem with netflow vs FastNetMon:
netflow just by design cannot be adapted to run pattern matching, while
it is trivial to patch FastNetMon for that, turning it into a mini-IDS
for free.


---
Best regards,
Denys


Re: 30% packet loss between cox.net and hetzner.de, possibly at tinet.net

2013-04-06 Thread Denys Fedoryshchenko

On 2013-04-07 02:20, Constantine A. Murenin wrote:



Although hetzner.de claims that this whole loss is outside of their own
network, I'm inclined to deduce that the loss might actually be
concentrated on their own KPN / eurorings.net router --
kpn-gw.hetzner.de (134.222.107.21), and perhaps occurs only in one
direction.
I think so too. BTW, as I said, I have a host on tinet, and it is 100% clean from EC2.

So it seems tinet is fine for sure.

HOST: ip-10-203-61-X                              Snt  Rcv  Loss%   Best Gmean   Avg  Wrst StDev
  1.|-- ip-10-203-60-2.ec2.internal                60   60   0.0%    0.3   0.6   1.1  19.1   2.7
  2.|-- ip-10-1-36-21.ec2.internal                 60   60   0.0%    0.4   0.6   0.8   9.3   1.3
  3.|-- ip-10-1-34-0.ec2.internal                  60   60   0.0%    0.4   0.7   1.0  14.5   2.0
  4.|-- 100.64.20.43                               60   60   0.0%    0.4   0.6   0.6   2.0   0.2
  5.|-- ???                                        60    0 100.0%    0.0   0.0   0.0   0.0   0.0
  6.|-- ???                                        60    0 100.0%    0.0   0.0   0.0   0.0   0.0
  7.|-- ???                                        60    0 100.0%    0.0   0.0   0.0   0.0   0.0
  8.|-- 100.64.16.157                              60   60   0.0%    0.5   2.4   9.7  68.8  16.1
  9.|-- 72.21.222.154                              60   60   0.0%    1.5   1.9   2.6  36.4   4.8
 10.|-- 72.21.220.46                               60   60   0.0%    1.5   2.1   3.3  59.2   7.6
 11.|-- xe-7-2-0.was10.ip4.tinet.net               60   60   0.0%    1.6   2.1   2.6  17.2   3.1
 12.|-- xe-0-1-0.fra23.ip4.tinet.net               60   60   0.0%   92.2  92.8  92.8 104.3   1.9
 13.|-- ge-1-1-0.pr1.g310.fra.de.eurotransit.net   60   60   0.0%   92.2  93.1  93.2 112.8   3.3
 14.|-- ???                                        60    0 100.0%    0.0   0.0   0.0   0.0   0.0

The last hop has ICMP blocked; it is my host.




Although there is no traffic loss from he.net if you try to traceroute
the router itself (I'm not sure what that means, though, other than a
potential attack vector from exposing a router globally like that):
I don't think there is an attack vector; a proper control-plane ACL will make them safe.
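For illustration, a control-plane policing sketch in Cisco IOS style (class and policy names, the ICMP-only class, and the police rate are all hypothetical; a real deployment needs classes for BGP, IGP, management traffic, and a default class):

```
ip access-list extended CoPP-ICMP
 permit icmp any any
!
class-map match-all CoPP-ICMP-CLASS
 match access-group name CoPP-ICMP
!
policy-map CoPP-POLICY
 class CoPP-ICMP-CLASS
  ! rate-limit ICMP toward the route processor instead of dropping it outright,
  ! so traceroute to the router keeps working while protecting the control plane
  police 64000 conform-action transmit exceed-action drop
!
control-plane
 service-policy input CoPP-POLICY
```

This is why a router can answer traceroute globally and still be safe: the policy only limits what reaches its CPU, not transit traffic.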



I've been a fan of hetzner.de, but I think it's staggering that
they won't do anything about this huge and persistent packet loss.
Indeed, I noticed that transfers from EC2 to Hetzner have been terrible over the last few days.

Maybe it is worth opening a topic at www.webhostingtalk.com?




Best regards,
Constantine.


---
Denys Fedoryshchenko, Network Engineer, Virtual ISP S.A.L.



Re: 30% packet loss between cox.net and hetzner.de, possibly at tinet.net

2013-04-05 Thread Denys Fedoryshchenko

On 2013-04-06 04:32, Constantine A. Murenin wrote:

Hello,

There has been at least a 25% packet loss between hetzner.de and 
cox.net

in the last couple of hours.

Tried contacting hetzner.de, but they said it's not on their network.
This has already happened a couple of days ago, too (strangely, on 
April 1),

but then was good for the rest of the week -- no problems whatsoever.

I wouldn't really care about this, if not for ssh:
it just doesn't work on such huge loss.

No other routes or networks seem affected.

Any advice?


Doesn't look like tinet to me.
I have a colo in Europe, connected over tinet, and mtr to static.33.203.4.46.clients.your-server.de is clean:
visp-probe ~ # mtr --report{,-wide,-cycles=60} --order SRL BGAWV static.33.203.4.46.clients.your-server.de
HOST: visp-probe                                  Snt  Rcv Loss%   Best Gmean   Avg  Wrst StDev
  1.|-- X.X.X.X                                    60   60  0.0%    0.1   0.9   6.1  50.7  11.7
  2.|-- r1fra1.core.init7.net                      60   60  0.0%    1.0   2.6   3.9  12.8   3.6
  3.|-- r1fra3.core.init7.net                      60   60  0.0%    1.0   2.2   3.2  12.0   3.2
  4.|-- r1nue2.core.init7.net                      60   60  0.0%    3.7   5.3   6.0  15.5   3.7
  5.|-- r1nue1.core.init7.net                      60   60  0.0%    3.9   5.8   6.5  15.3   3.6
  6.|-- gw-hetzner.init7.net                       60   60  0.0%    3.9   5.1   8.1  89.6  15.2
  7.|-- hos-bb2.juniper2.rz13.hetzner.de           60   60  0.0%    6.1   7.7  10.4  74.9  14.8
  8.|-- static.33.203.4.46.clients.your-server.de  60   60  0.0%    6.5   7.7   7.8  13.1   1.4




---
Denys Fedoryshchenko, Network Engineer, Virtual ISP S.A.L.



Re: Constant low-level attack

2012-06-28 Thread Denys Fedoryshchenko

On 2012-06-28 23:31, Lou Katz wrote:

The other day, I looked carefully at my auth.log (Xubuntu 11.04) and
discovered many lines
of the form:

  Jun 28 13:13:54 localhost sshd[12654]: Bad protocol version
identification '\200F\001\003\001' from 94.252.177.159

In the past day, I have recorded about 20,000 unique IP addresses
used for this type of probe.
I doubt if this is a surprise to anyone - my question is twofold:

1. Does anyone want this ever-growing list of, I assume, compromised machines?

2. Is there anything useful to do with this info other than put the
IP addresses into a firewall reject table? I have done
   that and do see a certain amount of repeat hits.

-=[L]=-
You can use fail2ban to automatically block brute-forcing hosts and even have it mail you their whois info:

http://www.fail2ban.org/
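As a minimal sketch, a jail.local along these lines enables the SSH jail with the stock "mail-whois-lines" action, which bans the host and then emails the whois output and matching log lines. Paths and values here are assumptions for a Debian-style box, and jail/action names can differ between fail2ban versions:

```ini
# /etc/fail2ban/jail.local -- minimal sketch; adjust for your distro/version
[DEFAULT]
destemail = you@example.com     # hypothetical address: where reports are sent
bantime   = 3600                # seconds a host stays banned
findtime  = 600                 # window for counting failed attempts
maxretry  = 5
# action_mwl = ban + send mail with whois info and the matching log lines
action    = %(action_mwl)s

[sshd]
enabled = true
port    = ssh
logpath = /var/log/auth.log
```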

---
Denys Fedoryshchenko, Network Engineer, Virtual ISP S.A.L.



RE: VPN over satellite

2012-04-30 Thread Denys Fedoryshchenko
I developed my own accelerator (Globax) in 2006 and still have customers for it, but only one-way ISPs in the CIS region and partially in Europe (Germany). I have worked with satellite internet all these years.
But since I am not interested in advertising it here (it is available only to ISPs), I will mention possible alternatives:
There were a few solutions, most of them from Tellinet and Mentat. Tellinet now belongs to Newtec, and Mentat to Packeteer (and Packeteer to Blue Coat). The last time I looked, there was an optimization option in PacketShaper from Blue Coat. It is probably worth visiting Newtec: I see your domain is .be, and their HQ is in Belgium.
Riverbed I have heard about but never tried. Most TDMA VSAT modems also have embedded accelerators.
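The reason all these products exist is simple arithmetic: un-accelerated TCP throughput is capped at window / RTT, and a GEO satellite round trip is huge. A quick sketch (the ~600 ms RTT and window sizes are illustrative assumptions):

```python
# Why plain TCP crawls over a GEO satellite link: throughput is bounded by
# window / RTT. Figures below are illustrative assumptions (~600 ms round
# trip; a classic 64 KiB window without window scaling vs. a scaled 1 MiB).

def max_tcp_throughput_bps(window_bytes, rtt_s):
    """Upper bound on single-flow TCP throughput in bits per second."""
    return window_bytes * 8 / rtt_s

rtt = 0.6  # assumed GEO round-trip time, seconds
for window in (64 * 1024, 1024 * 1024):
    mbps = max_tcp_throughput_bps(window, rtt) / 1e6
    print(f"{window // 1024:5d} KiB window -> {mbps:.2f} Mbit/s max")
```

This is what PEP/acceleration boxes work around: they terminate TCP locally on each side of the satellite hop so the end hosts never see the long RTT, which is also why end-to-end IPsec (hiding the TCP header) defeats them.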


Please let me know if you want to know anything else.

On 2012-04-30 15:06, Rens wrote:

IPSec does not run well over satellite since the TCP headers are also
encrypted

-Original Message-
From: Gmail [mailto:jason.tre...@gmail.com]
Sent: maandag 30 april 2012 13:30
To: Rens
Cc: nanog@nanog.org
Subject: Re: VPN over satellite

Why not use a standard Cisco router or Asa for the routing and VPN 
and put a
riverbed steelhead on both ends to do Tcp optimization and 
compression.


On Apr 30, 2012, at 5:42 AM, Rens r...@autempspourmoi.be wrote:


Dear,



Could anybody recommend any hardware that can build a VPN that works well over satellite connections? (TCP enhancements)

I want to set up an L3 VPN between 2 satellite connections.

Even additionally, if that hardware would also support WAN bonding, even better, because I also have a scenario to connect 2 times 2 satellites to have more capacity for my L3 VPN.


Regards,



Rens







---
Network engineer
Denys Fedoryshchenko

Dora Highway - Center Cebaco - 2nd Floor
Beirut, Lebanon
Tel:+961 1 247373
E-Mail: de...@visp.net.lb



Re: antisocial security

2012-02-02 Thread Denys Fedoryshchenko

On Thu, 02 Feb 2012 09:10:05 +0100, Jaap Akkerhuis wrote:

and, i noticed the problem because i can not get to the web site at
http://www.ssa.gov/ from tokyo.

Lots of .gov web sites are not available outside (at least what
somebody thinks is outside) the US.

jaap

Just tested:
Lebanon, Greece, Saudi Arabia, Netherlands, Germany - all fine.

---
System administrator
Denys Fedoryshchenko
Virtual ISP S.A.L.



DNS 8.8.8.8 was down

2011-11-29 Thread Denys Fedoryshchenko

Google DNS was down for a few minutes, at least from a few European locations.

Here is a sample traceroute:

 4  213.242.116.25 (213.242.116.25)  39.692 ms  39.776 ms  39.774 ms
 5  ae-7-7.ebr1.Paris1.Level3.net (4.69.143.238)  50.933 ms  50.792 ms  50.793 ms
 6  ae-47-47.ebr1.London1.Level3.net (4.69.143.109)  57.353 ms ae-45-45.ebr1.London1.Level3.net (4.69.143.101)  57.580 ms ae-48-48.ebr1.London1.Level3.net (4.69.143.113)  57.582 ms
 7  ae-57-112.csw1.London1.Level3.net (4.69.153.118)  57.577 ms ae-58-113.csw1.London1.Level3.net (4.69.153.122)  57.693 ms ae-57-112.csw1.London1.Level3.net (4.69.153.118)  57.817 ms
 8  ae-1-51.edge3.London1.Level3.net (4.69.139.73)  57.654 ms  57.631 ms  57.768 ms
 9  unknown.Level3.net (212.113.15.186)  57.958 ms  57.531 ms  57.643 ms
10  209.85.255.78 (209.85.255.78)  57.732 ms  57.598 ms  57.573 ms
11  209.85.253.92 (209.85.253.92)  58.300 ms 209.85.253.196 (209.85.253.196)  57.941 ms 209.85.253.94 (209.85.253.94)  58.002 ms
12  72.14.232.134 (72.14.232.134)  63.726 ms 66.249.95.173 (66.249.95.173)  63.614 ms 72.14.232.134 (72.14.232.134)  63.699 ms


After this hop it was dead. Now it is working, but it seems there are still minor problems:

14  216.239.46.117 (216.239.46.117)  64.171 ms * *
15  google-public-dns-a.google.com (8.8.8.8)  63.749 ms  63.729 ms  63.680 ms


---
System administrator
Denys Fedoryshchenko
Virtual ISP S.A.L.


