Re: SNMP "bridging"/proxy?

2016-05-20 Thread Robert Drake



On 5/20/2016 7:43 PM, Nathan Anderson wrote:

'lo all,

Is anybody out there aware of a piece of software that can take data from an 
arbitrary source and then present it, using a MIB or set of OIDs of your 
choosing, as an SNMP-interrogatable device?

We have some CPE that supports SNMP, but considers it to be a mutually-exclusive 
"remote management" protocol such that if you use another supported method for 
deployment and provisioning (e.g., TR-069), you cannot have both that AND SNMP enabled 
simultaneously.  It's one or the other.

We currently monitor and graph some device stats for these CPE with Cacti, but we 
want to be able to provision using a TR-069 ACS.  The ACS can collect some of the 
same data we are graphing right now, but cannot present it in a fashion that is 
nearly as useful as the way Cacti/RRDtool does (not to mention the staff is 
already used to navigating Cacti).  We know what SQL database table the stats are 
being stored in by the ACS, though, so my thought was that there must be some way 
that we can have a host respond to SNMP gets and then have it turn around and 
collect the value to be returned from a database.  Basically, an ODBC -> SNMP 
proxy.  We'd then point Cacti at that IP instead of the individual CPEs.  But I 
can't seem to find anything like this.
I would move away from this CPE vendor.  Your solution has merit in the 
short term, but monitoring through the ACS pointlessly puts more 
load on a server that already has its own responsibilities.  You can't 
scale out with this -- well, not without deploying more ACS servers, 
which are a bit more heavyweight than SNMP pollers.


As mentioned already, net-snmp can do this easily enough.  The biggest 
problem you'll face is figuring out how you want to name OIDs to match 
up to each CPE and the elements you're graphing.  You might be 
better off pulling the data out of the database via SQL queries to a 
remote host and proxying the data there.  Or possibly have Cacti run the 
SQL query directly.  It looks like they have many general (non-SNMP) 
templates that you could base it on.


http://docs.cacti.net/templates
http://forums.cacti.net/viewtopic.php?f=12=15067
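As a rough sketch of the net-snmp route (using its `pass_persist` hook; the OID subtree, table and column names below are all invented, and sqlite3 stands in for whatever database the ACS actually writes to), the ODBC -> SNMP shim could look something like:

```python
#!/usr/bin/env python3
"""net-snmp pass_persist shim: answer SNMP GETs out of a SQL table.

A sketch only -- the OID subtree, table, and column names are invented,
and sqlite3 stands in for whatever database the ACS writes to.
Wired up in snmpd.conf with something like:
    pass_persist .1.3.6.1.4.1.99999 /usr/local/bin/acs_snmp_shim.py
"""
import sqlite3
import sys

BASE = ".1.3.6.1.4.1.99999"           # hypothetical private subtree
db = sqlite3.connect("acs_stats.db")  # stand-in for the ACS database

# stat index -> column name; the whitelist keeps the substitution safe
COLUMNS = {"1": "rx_power", "2": "snr"}

def lookup(oid):
    """Map <BASE>.<stat>.<cpe_id> to a value from the stats table."""
    if not oid.startswith(BASE + "."):
        return None
    parts = oid[len(BASE) + 1:].split(".", 1)
    if len(parts) != 2 or parts[0] not in COLUMNS:
        return None
    row = db.execute("SELECT %s FROM cpe_stats WHERE cpe_id = ?"
                     % COLUMNS[parts[0]], (parts[1],)).fetchone()
    return row[0] if row else None

def main():
    # Speak net-snmp's line-oriented pass_persist protocol on stdin/stdout.
    for line in sys.stdin:
        cmd = line.strip()
        if cmd == "PING":
            print("PONG")
        elif cmd == "get":
            oid = sys.stdin.readline().strip()
            value = lookup(oid)
            if value is None:
                print("NONE")
            else:
                print(oid)
                print("gauge")   # net-snmp type tag for the reply
                print(value)
        elif cmd == "getnext":
            sys.stdin.readline()
            print("NONE")        # walking isn't implemented in this sketch
        sys.stdout.flush()

if __name__ == "__main__":
    main()
```

Cacti would then poll the host running snmpd with this hook, instead of the CPEs themselves.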

Thanks,

-- Nathan


Thanks,
Robert


Re: Best practices for sending network maintenance notifications

2016-04-06 Thread Robert Drake
I've been on hold a few times with some companies that had great 80's 
music.  I almost asked them to put me back on hold when they finally 
took me off.  Sometimes it's a party when one of the people on the call 
hits the hold button; it depends on how bad the outage is :)



On 4/6/2016 4:56 PM, Ray Orsini wrote:

"The other "don't do that" is never configure Music on Hold for any NOC/SOC
lines.  Few things are more annoying than a eight hour trouble shooting
conference bridge, and one of the dozen NOC/SOCs on the bridge hits the Hold
button."


Now that you've said it it seems so obvious. But, honestly I'd never thought
it until right now. Thanks!

Regards,
Ray Orsini – CEO
Orsini IT, LLC – Technology Consultants
VOICE DATA  BANDWIDTH  SECURITY  SUPPORT
P: 305.967.6756 x1009   E: r...@orsiniit.com   TF: 844.OIT.VOIP
7900 NW 155th Street, Suite 103, Miami Lakes, FL 33016
http://www.orsiniit.com | View My Calendar | View/Pay Your Invoices | View
Your Tickets






tel script

2016-03-28 Thread Robert Drake

This is a program for logging into devices.  You can find it here:

https://github.com/rfdrake/tel

I don't like to self promote things, but I'm interested in feedback.  
I'm also interested in alternatives if someone wrote something better.


I started it a long time ago as a lighter clogin which didn't hang as 
much.  Moving from expect to perl cut a bunch of the cruft out because 
of how verbose the TCL language is.  My script is now larger than 
clogin and no longer consists of a single file.  It's also arguably much 
cruftier in places, but it's gained some features I think people may like:


- Color syntax highlighting for "show interface" and some other commands
- An attempt to make backspace work on platforms with differing opinions 
  of ^H vs ^?
- Run multiple script files and multiple devices in the same command 
  (tel -x script1.txt -x script2.txt router router2 router3)
- Unified logout command: ctrl-d
- Send slow (tel -s .5 -x script1.txt router) for sending scripts to 
  devices with buffering issues
- Send slow interactively (tel -S .5 router) for cut and paste to devices 
  with buffering issues (some terminal emulators will do this if it's the 
  only thing you need)
- Change your device profile after login (tel -A zhone oob-test-lab:2004), 
  helpful if you log in through an out-of-band console made by one 
  vendor into a device made by another vendor
- Support for KeePass, Pass, PWSafe3, and Gnome/KDE/MacOS Keyring, as well 
  as a combination of these for storing passwords in encrypted files


The entire thing is very customizable for people who know perl or 
scripting languages.  It's designed for NOC use on bounce servers, where 
the administrator might set up the global profile in /etc/telrc and the 
individual users would make their own profiles in their home directories 
to override individual settings.


Limitations:

I'm pretty sure I tried to build this on Cygwin once and it failed for 
reasons that probably can never be fixed.  I've also never tested it on 
Mac OS X.


I've run it successfully on various Linux distributions as well as 
FreeBSD and OpenBSD.


Re: About inetnum "ownership"

2016-02-22 Thread Robert Drake



On 2/22/2016 5:03 AM, Jérôme Nicolle wrote:

I'm wondering how we made "temporary and conditional liability 
transfer" a synonym of "perpetual and unconditional usufruct transfer".

Can you please enlighten me?
There are always ways around the system.  I suspect what has happened is 
that RIRs require that a company hold addresses, but they have 
provisions for companies to be sold or change names.  So the IP address 
brokers probably sell a "business" along with the IP address block.


It might be simpler than that.  I don't know what the loopholes are but 
I'm sure some lawyers have read through all the documents and found a 
way.  I imagine it's never been tested in court because proving the 
system is being exploited would be hard, and the parties with a vested 
interest probably don't have the resources to make a fight of it.


Thanks !

-- Jérôme Nicolle +33 6 19 31 27 14

Thanks,
Robert


Re: Automated alarm notification

2016-02-15 Thread Robert Drake
OpenNMS has direct support for SNMP traps and multistage alerting.  It's 
a pain in the ass to set up (depending on what you're doing*) but it's 
free and very high performance.


* If all your MIBs are already supported then 90% of the work is done 
and it's not so bad.  Just set up multistage alerts for 5 and 10 minute 
intervals depending on whether something clears or someone responds to the 
alert.  They support lots of alert types: SMTP, SMS, voice call, a few 
ticketing systems, XMPP, Twitter and probably more.



On 2/11/2016 4:51 PM, Frank Bulk wrote:

Is anyone aware of software, or perhaps a service, that will take SNMP
traps, properly parse them, and perform the appropriate call outs based on
certain content, after waiting 5 or 10 minutes for any alarms that don't
clear?

I looked at PagerDuty, but they don't do any SNMP trap parsing, and nothing
with set/clear.

Frank





Re: Devices with only USB console port - Need a Console Server Solution

2016-02-02 Thread Robert Drake


On 2/2/2016 5:02 AM, Bjørn Mork wrote:


No inside pictures :)

Assuming that this is really an USB device, and that the console port is
really an USB host port, it would be useful to know the USB decriptors
of the device.  You wouldn't be willing to connect it to a Linux PC and
run "lsusb -vd", would you?
I'm inconveniently consoled into one via a combination of remote desktop 
into Windows -> Linux console on a virtual machine -> screen 
/dev/ttyACM0.  Because of this, posting lsusb -vd output is taxing.


Linux has full support for the device.  It sees it as cdc_acm.

The vendor ID is 0x04e2 (Exar Corp).  Product ID is 0x1410.  I've got 
two connected right now.  This is in our lab and the Windows box is 
temporary.  Our intention is to use a Raspberry Pi as the terminal server.


I'm obviously not in front of it, but I'm wondering if they can be 
enumerated by something other than the order they were plugged in.  That's 
my biggest hurdle for making a console server for them: how to figure out 
which router is connected to which USB port after a reboot, or after 
someone gets unpluggy with cables.
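For what it's worth (not from the thread): on Linux the usual way to pin USB serial adapters to physical ports, rather than enumeration order, is a udev rule keyed on the port path.  A sketch, with made-up port paths:

```shell
# /etc/udev/rules.d/99-console-ports.rules  (sketch; the KERNELS port
# paths below are made up -- find the real one for each jack with:
#   udevadm info -a -n /dev/ttyACM0 | grep KERNELS
# The path names the physical port, so it survives reboots and re-plugs.
SUBSYSTEM=="tty", KERNELS=="1-1.2", SYMLINK+="console-router1"
SUBSYSTEM=="tty", KERNELS=="1-1.3", SYMLINK+="console-router2"
```

The console server software then opens /dev/console-router1 instead of /dev/ttyACM0, and it no longer matters which device came up first.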




Bjørn



Robert


Re: Cisco CMTS SNMP OID's

2016-01-25 Thread Robert Drake
This is from some internal PHP thing that isn't very good (well, it's 
lovely actually.. the problem is that it uses a forking method to query 
everything and isn't that fast.  I'm trying to rewrite it)


Throw any of these into Google if you're confused about them.  It should 
return the correct MIB (except for the Casa ones; I'm not sure how I 
found those, but you can ignore them if you don't have any Casa CMTS)


$oids = array(
    '.1.3.6.1.2.1.10.127.1.3.3.1.2'  => 'macs',
    '.1.3.6.1.2.1.10.127.1.3.3.1.3'  => 'ips',
    '.1.3.6.1.2.1.10.127.1.3.3.1.6'  => 'rxpwr',
    '.1.3.6.1.2.1.10.127.1.3.3.1.9'  => 'status',  // generic status 0-7
    '.1.3.6.1.2.1.10.127.1.3.3.1.13' => 'snr',
    '.1.3.6.1.2.1.10.127.1.3.3.1.5'  => 'dwnchnl', // actually the upchannel ifIndex
    '.1.3.6.1.2.1.31.1.1.1.1'        => 'ifname',
);

// this is probably true for any Cisco DOCSIS 3 CMTS
if ($cmts['DeviceModel']['name'] == 'UBR7225VXR') {
    unset($oids['.1.3.6.1.2.1.10.127.1.3.3.1.5']);  // remove dwnchnl, we'll get that from SNR
    unset($oids['.1.3.6.1.2.1.10.127.1.3.3.1.13']);
    $oids['.1.3.6.1.4.1.4491.2.1.20.1.4.1.4'] = 'snr';
}

switch ($cmts['DeviceType']['name']) {
case 'cisco':
    $oids['.1.3.6.1.4.1.9.9.116.1.3.2.1.1']  = 'status2';   // cisco-specific status, cdxCmtsCmStatusValue
    $oids['.1.3.6.1.4.1.9.9.114.1.1.5.1.18'] = 'flapcount';
    $oids['.1.3.6.1.4.1.9.9.114.1.1.5.1.10'] = 'flaptime';
    break;
case 'Casa':
    $oids['.1.3.6.1.4.1.20858.10.22.2.1.1.1']  = 'status3';   // casa-specific status (totally different values from cisco)
    $oids['.1.3.6.1.4.1.20858.10.11.1.2.1.10'] = 'flaptime';
    $oids['.1.3.6.1.4.1.20858.10.11.1.2.1.9']  = 'flapcount';
    break;
}

--

things you need to pull from each cable modem:

system.sysUpTime.0
transmission.127.1.1.1.1.6.3  down_pwr
transmission.127.1.2.2.1.3.2  up_pwr
transmission.127.1.1.4.1.5.3  down_snr


You can also pull the modem's log via an OID but I don't have that one handy.



On 1/25/2016 6:45 PM, Lorell Hathcock wrote:

Thanks all for your suggestions.  I am now successfully graphing SNR for each 
upstream channel.



-Original Message-
From: Yang Yu [mailto:yang.yu.l...@gmail.com]
Sent: Sunday, January 24, 2016 5:11 PM
To: Lorell Hathcock 
Cc: NANOG list 
Subject: Re: Cisco CMTS SNMP OID's

On Sun, Jan 24, 2016 at 1:06 PM, Lorell Hathcock  wrote:


 Signal to Noise per upstream channel

CISCO-CABLE-SPECTRUM-MIB::ccsUpSpecMgmtSNR
http://tools.cisco.com/Support/SNMP/do/BrowseOID.do?local=en=Translate=ccsUpSpecMgmtSNR


 Cable Modem counts of all kinds
 connected / online
 ranging
 offline

Not sure if there are OIDs for `show cable modem docsis version summary`





Re: SNMP - monitoring large number of devices

2015-09-29 Thread Robert Drake
OpenNMS has a poller that will do what you want.  The problem is 
figuring out what you wish to collect and how to use it.  Most of the 
time it's not as simple as pointing at the modem and saying go.


I've added a few oids for some of the modems we support, just so I can 
get SNR on them.  I don't usually add customer modems directly to 
monitoring unless I'm tracking a long term problem and want to watch the 
SNR for that customer for weeks.


I monitor our CMTSes with a threshold system that alerts if the number 
of active modems decreases by around 20.  This can cause false 
positives when modems migrate between cards, but if you tweak the 
numbers right it works okay.
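The delta check itself is trivial to express; a sketch (the threshold of 20 is the number from the paragraph above, everything else is invented):

```python
def modem_count_alarm(previous, current, drop_threshold=20):
    """True when the active-modem count fell by at least the threshold
    between two polls.  Slow churn won't trip it; a card or node
    dropping off will (modems migrating between cards aside)."""
    return (previous - current) >= drop_threshold
```

The tuning work is all in picking the threshold, not the check.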

We also have graphs for signal and other things on each CMTS.

Now that I'm thinking about it, I believe I could get away with adding 
all our modems for SNR, then try to write something to add/remove them 
and keep it in sync with our provisioning system.  I would need to make 
sure everything was in order so I don't get 400 emails when a site goes 
down, but it all should be possible.  I'm not sure if the I/O would be 
worth it, but being able to aggregate some of the data and look at SNR 
across an entire plant would be nice.  At one point I had a project to 
put modems at the tail end of each leg of a plant then monitor them.  
This is because we don't have monitor-able amplifiers.  It never 
happened though.


The truth is that balancing a plant is easy enough once you're used to 
it, and the extra metrics you might get from doing some of these things 
isn't worth the long term I/O.  We do have other (non-NMS) systems that 
will poll and get instantaneous results like this for entire plants.  
That has been very useful.



My guess is no matter what system you pick, you will either need to 
spend a couple of weeks hacking on it or pay someone to implement it.  
There isn't a turnkey system that does exactly what you want because 99% 
of network monitoring companies target systems rather than networks (the 
market is much larger..).


If you want to roll your own:

https://github.com/tobez/snmp-query-engine

I recently discovered this and wanted it years ago.  I had actually 
considered stripping the poller out of OpenNMS so there would be a 
bare-bones poller you could send OIDs to and get back results.  The 
reason is that almost everyone who does SNMP does a bad job of it and 
is slow.  So don't start at the library layer and don't write your own 
thing (unless you have to).  You need asynchronous communication, getbulk 
and table-walking support, and you don't want to worry about max PDU size.  
That's what snmp-query-engine does (maybe; I've just looked at the tin, 
I haven't used it)


Second note about rolling your own: skip Whisper, RRDtool, MRTG, and 
any other single-system data collection.  You want 1 million OIDs or more 
in 5 minutes?  You need SSDs for hardware and will probably want to 
distribute data writes eventually.  Research things that make this 
easier: Cassandra-based storage, for example, though nothing good is fully 
formed.  You should still probably begin with OpenTSDB, InfluxDB or another 
established time series database rather than rolling your own.  They 
have warts, but fixing the warts is better than creating new one-use 
TSDBs with their own flaws.  See 
https://github.com/OpenNMS/newts/wiki/Comparison-of-TSDB




On 9/29/2015 4:20 PM, Pavel Dimow wrote:

Hi all,

recently I have been tasked with a NMS project.  The idea is to poll about
20 OIDs from 50k cable modems in less than 5 minutes (yes, I know, that's a
million OIDs).  Before you say check out some very professional and
expensive solutions, I would like to know whether there are any alternatives
like an open source "snmp framework".  To be more descriptive, many of you
know how big a mess SNMP on cable modems is.  You always first perform an
snmpwalk to discover interfaces and then read the values for those
interfaces.  As a cable modem can bundle more DS channels, one time you can
have one and another time you can have N+1 DS channels = interfaces.  All in
all I don't believe there is something perfect out there when it comes
to tracking huge numbers of cable modems, so I would like to know: is there
any "snmp framework" that can be extended, and how did you (or would you)
solve this problem?

Thank you.





Re: Extraneous "legal" babble--and my reaction to it.

2015-09-06 Thread Robert Drake



On 9/4/2015 6:31 PM, Stephen Satchell wrote:


I, for one, feel your pain in this matter.  When I was a consultant in 
The Bad Ol' Days, I had so many telephone numbers where I *could* be 
that my .sig would be a run-on one as well.  As a compromise, I had my 
cell number and a hyperlink to a Web site page with the full monte.


That was before I joined NANOG, so I never tested the tolerance of the 
people here with that solution.


When I was employed as a full-timer (including now) my "work" mail has 
the same sort of crap.  One option you might want to consider is to 
use a personal e-mail account for places like NANOG with the 
single-line disclaimer "Views expressed herein may not be my 
employer's view"


Maybe people could adopt an unofficial-official end-of-signature flag.  
Then you could have procmail strip everything after the flag:

--
This is my signature
My phone number goes here
I like dogs
-- end of signature --
Everything below here and to the right of here was inserted by my 
mailserver, which is run by lawyers who don't understand you can't 
enforce contracts through emails to public mailing lists. Please delete 
if you're not the intended recipient.
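For the procmail side, a sketch of the stripping recipe (the marker string is hypothetical; use whatever the convention would be):

```shell
# ~/.procmailrc sketch -- the marker string is made up.
# Filter (f) the body (b) of any message containing the marker,
# deleting the marker line and everything after it.
:0 fbw
* B ?? -- end of signature --
| sed -e '/^-- end of signature --$/,$d'
```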



Of course, when you route around something like this it usually comes 
back tenfold, but maybe if it became worthless they might do things the 
right way and put stuff like this in email headers.


X-Optional-Flags:  Delete-if-not-intended-recipient, 
might-contain-secret-company-information-we-didn't-bother-to-encrypt


Then let the email clients try to work out what that means.


Re: A simple perl script to convert Cisco IOS configuration to HTML with internal links for easier comprehension

2015-08-07 Thread Robert Drake
I was going to look at this because it sounded interesting.  Maybe some 
extra things it could do would be to set div/classes in some parts of 
the config to denote what it is so that the user could apply css to 
style it.  That would allow user-defined color syntax highlighting of a 
sort.


Another nice thing would be collapsible sections so if you're only 
interested in BGP you can skip interfaces, or if you want to look at 
route-maps, access-lists, etc.


The project looks a bit disorganized, but I only took a quick glance at 
it so perhaps it does everything exactly as you intend.  Are you 
thinking of making any of it into modules, or defining tests?  I like 
the idea of running this as part of a post-rancid process, but it might 
also be nice if it was a module that could be run in real-time on a 
config.  Then I could have a mojo wrapper daemon that called it when 
users accessed /configs/*-confg, or whatever and returned the parsed 
version.


Anyway, I don't want to create any more work for you, I just wanted to 
kick out some ideas.  If I have time I will contribute what I can, but 
I'm already neck deep in some random projects.  I don't mind starting 
another one, but I don't want to say I'm doing something and then never 
deliver.  :)





Re: Bright House IMAP highwater warning real?

2015-08-05 Thread Robert Drake



On 8/2/2015 3:53 PM, Jay Ashworth wrote:

I think the body text of the message should identify it as coming from the 
Bright House email system? I think it should be written in standard USAdian 
English, which that is decidedly not.

Or perhaps the problem is that that subject line was supposed to be 
parameterized, and the number of bytes is missing for some reason. But in any 
event that is a common message to spoof, and the more bits of identity that are 
in it the harder it is to do so. That message format has almost zero bit of 
provider-identifiable data.
That's not even mentioning that the terms "high water" and even "bytes" 
are just confusing to end users who probably don't know computer 
terminology.  At best, they can expect calls to support over these emails.


OTOH, 99% of their users probably have an inbox full of spam and don't 
use their ISP-provided mailbox, having switched to a third-party email 
provider years ago.  So the "Please" in the message might be 
desperation.  Please come back and read me, then delete this message and 
the 5,000 other spam messages. :)


"High Water Mark Notification: bytes in the mailbox!"

A new action thriller series coming to you this fall on TV.  Please, 
please turn on the TV and don't watch it on Netflix...




Re: SEC webpages inaccessible due to Firefox blocking servers with weak DH ciphers

2015-07-17 Thread Robert Drake



On 7/17/2015 4:26 AM, Alexander Maassen wrote:

Well, this block also affects people who have old management hardware
around using such ciphers that are for example no longer supported. In my
case for example the old Dell DRAC's. And it seems there is no way to
disable this block.

Ok, it is good to think about security, but not giving you any chance to
make exceptions is simply forcing users to use another browser in order to
manage those devices, or to keep an old machine around that not gets
updated.

Or just fall back to no SSL in some cases :(  We have some old vendor 
things that were chugging along until everyone upgraded Firefox and then 
suddenly they stopped working.  The fix was to use the alternate 
non-SSL web port rather than upgrade, because even though the software is 
old, it's too critical to upgrade in-line.


The long term fix is to get new hardware and run it all in virtual 
machines with new software on top, but that may be in next year's 
budget.  I've also got a jetty server (opennms) that broke due to this, 
so I upgraded and fixed the SSL options, and it's still broken in some 
way that won't log errors.  I have no time to track that down, so the 
workaround is to use the unencrypted version until I can figure it out.


Having said that, it seems there is a workaround in Firefox if 
people need it: about:config and re-enabling the weak ciphers. 
Hopefully turning them on leaves you with an even bigger warning than 
normal saying it's a bad cert, but you could get back in.  This doesn't 
help my coworkers; I'm not going to advise a bunch of people with 
varying levels of technical competency to turn on weak ciphers, but it 
does help with a situation like yours where you absolutely can't update 
old DRAC stuff.


https://support.mozilla.org/en-US/questions/1042061


Re: Fwd: [ PRIVACY Forum ] Windows 10 will share your Wi-Fi key with

2015-07-08 Thread Robert Drake



On 7/7/2015 5:39 PM, Joe Greco wrote:
Unclear at best. The way it is implemented, the user has the potential 
to go either way. A network might not want the user to have the 
choice, clearly, but there is certainly a subset of users who will opt 
out of the feature and I cannot see how those would be in violation of 
any sane network usage policy. It's certainly a mess in any case.
Now that Windows mobile and desktop versions are converging, I doubt 
there is a way to really tell if a device is a PC or a phone or a 
tablet.  Some network administrators banned mobile phones from wifi 
connections because Google's password storage violated their 
security policy.


Now administrators don't even get that knob.

We could fix it in a couple of ways (or they could fix it, depending 
on who pushes around money and if anyone cares enough to bother):


1.  Wifi sends a password policy during handshaking: "If you save 
passwords you aren't allowed to connect here" (or "you aren't allowed to 
backup/share this password"), but we will allow the user to connect.  This 
can be transparent to the user and handled by the OS.*
2.  The client device sends "I am configured to backup/share passwords" 
to the wifi.  This allows the AP to either deny the user outright, or 
redirect them to a page explaining what is wrong or whatever.  This 
might be accomplished via a DHCP option if we want to keep it all in software.


* The fact that we need an IEEE level fix for a security problem created 
by Google and then propagated by Microsoft is just pathetic.  These are 
two companies that should know better than to do this.




... JG




Re: Looking for information on IGP choices in dual-stack networks

2015-06-10 Thread Robert Drake



On 6/9/2015 11:14 AM, Victor Kuarsingh wrote:
We are looking particularly at combinations of the following IGPs: 
IS-IS, OSPFv2, OSPFv3, EIGRP.
If you run something else (RIP?) then we would also like to hear about 
this, though we will likely document these differently. [We suspect 
you run RIP/RIPng only at the edge for special situations, but feel 
free to correct us]. 


When we were first moving to IPv6 in the core network, we evaluated IS-IS 
because it was what we were using for IPv4 and we would have preferred 
to run a single protocol for both.  We had problems with running a mix 
of routers where some supported IPv6 and others did not.  From what I 
recall, if any router did not support IPv6 then it wouldn't connect to a 
router running both v6 and v4.


It's possible these were bugs that were worked out later, or just a 
messed-up design in the lab, but we also like the idea of keeping IPv4 
and IPv6 away from each other, so that if one is broken the other might 
still work.


So we use OSPFv3 for IPv6 routing and IS-IS for IPv4 routing.



Is anyone working on an RFC for standardized maintenance notifications

2015-05-13 Thread Robert Drake
Like the Automated Copyright Notice System 
(http://www.acns.net/spec.html) except I don't think they went through 
any official standards body besides their own MPAA, or whatever.


I get circuits from several vendors and get maintenance notifications 
from them all the time.  Each has a different format and each supplies 
different details for their maintenance.  Most of the time there are 
core things that everyone wants, and it would be nice if it were 
automatically readable so automation could be performed (i.e., our NOC 
gets the email into our ticketing system, it is recognized as part 
of an existing maintenance by its maintenance id# (or as new, whatever), 
and fields are automatically populated or updated accordingly).


If you're uncomfortable with the phrase "automatically populated 
accordingly" for security reasons, then you can replace that with "NOC 
technician verifies all fields are correct and hits update ticket," or 
whatever.


The main fields I think you would need:

1.  Company Name
2.  Maintenance ID
3.  Start Date
4.  Expected length
5.  Circuits impacted (if known or applicable)
6.  Description/Scope of Work (free form)
7.  Ticket Number
8.  Contact
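To make that concrete, such a notification might flatten to something like this (the field names and every value here are invented for illustration):

```python
import json

# Hypothetical machine-readable rendering of the eight fields above.
notice = {
    "company": "Example Carrier, LLC",
    "maintenance_id": "MAINT-2015-0042",  # stable across updates to the same work
    "start": "2015-05-20T04:00:00Z",      # ISO 8601 keeps parsing trivial
    "expected_length_minutes": 120,
    "circuits_impacted": ["EXAMPLE/CIRCUIT/0001"],
    "description": "Line card swap; two ~5 minute hits expected.",
    "ticket": "T-998877",
    "contact": "noc@example.net",
}

print(json.dumps(notice, indent=2))
```

The ticketing hook would then key on maintenance_id to decide update-vs-create, which is exactly the automation described above.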



PDU for high amp 48Vdc

2015-01-28 Thread Robert Drake
For larger DC devices with ~50 amps per side, does anyone have a 
software-accessible way to turn off power?


I've looked into PDUs, but the ones I find max out at 10 amps.

I've considered building something with solenoids or a rotary actuator 
that would turn the switches on or off, but that's a complete one-off 
and would need to be done for each device we manage (not to mention it 
involves janky wiring all over the place that I'd have to explain to the colo)


My use case is pretty infrequent so it needs to be remote-hands cheap; 
it's for emergencies when you need to completely power cycle a 
redundantly powered DC device.  The last time I needed this, a router 
was stuck in a boot loop due to a bad IOS upgrade and wouldn't break to 
ROMMON since it had been up more than 60 seconds.  It came up again 
tonight because we wanted to disable one power supply to troubleshoot 
something.


FWIW, I believe I've seen newer Cisco gear with high-end power supplies 
that have a console or ethernet port which would possibly let you shut 
them down remotely.  That solves the problem nicely if you're dealing 
with only one bit of hardware, but I'd like a general solution that 
works with any vendor.  Possibly a fuse panel with solenoids that could 
add/remove fuses when needed... or would that be considered dangerous in 
code ways or in telco fire regulation ways?







Re: The state of TACACS+

2014-12-29 Thread Robert Drake


On 12/29/2014 10:32 AM, Colton Conor wrote:
My fear would be we would hire an outsourced tech.  After a certain 
amount of time we would have to let this part-timer go, and would 
disable his or her username and password in TACACS.  However, if that 
tech still knows the root password they could still remotely log in to 
our network and cause havoc.  The thought of having to change the root 
password on hundreds of devices every time an employee is let go 
doesn't sound appealing either.  To make matters worse, we are using an 
outsourced firm for some network management, so hiring and 
firing is fairly constant.


You can set up your AAA in most devices so TACACS+ is tried first and 
the local password is only usable if TACACS+ is unreachable.  In that 
case, even if you fire someone you can just remove them from TACACS+ and 
they can't get in.

At that point you will want to do a global password change of the local 
password since it's compromised, but it's not an immediate concern.
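On IOS-style boxes the relevant knobs look roughly like this (a sketch; the server address, key, and local username are placeholders, and exact syntax varies by platform and version):

```
! TACACS+ first, local database only as a fallback when no server answers.
aaa new-model
tacacs-server host 192.0.2.10 key ExamplePresharedKey
aaa authentication login default group tacacs+ local
aaa authorization exec default group tacacs+ local
! The break-glass local account that the fallback uses:
username breakglass privilege 15 secret SomeLocalOnlyPassword
```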


You should also have access lists or firewall rules on all your devices 
which only allow logins from specific locations.  If you fire someone, 
you remove their access to that location (their VPN credentials, 
username and password for UNIX login, etc.), which also makes it harder 
for them to log back into your network even if they know the local 
device password.




Re: The state of TACACS+

2014-12-29 Thread Robert Drake


On 12/28/2014 10:21 PM, Christopher Morrow wrote:

and I wonder what percentage of 'users' a vendor has actually USE tac+
(or even radius). I bet it's shockingly low...
True... even in large-ish environments, centralized authentication 
presents problems and can have limited merit.  Up to some arbitrary 
size, nobody can really be bothered unless some business case comes up, 
like splitting responsibilities between groups.  Accounting is probably 
the best early reason to turn it on in small networks; being able to 
see who made a change makes it easier to figure out why.



Maybe there is a simpler solution that keeps you happy about redundancy but
doesn't increase complexity that much (possibly anycast tacacs, but the
session basis of the protocol has always made that not feasible).  It's

does it really? :)
Well, the chance of two geographically close servers getting 
load-balanced made it not feasible for us to do.  Not to mention the 
fact that we had only two tacacs servers and the use-case for anycasting 
wasn't worth the hassle of implementation.



juniper, cisco, arista, sun, linux, freebsd still can't get TCP-AO working...
they don't all have ssl libraries in their os either...
With it being a TCP extension, my guess is that it's harder to find 
someone at those companies willing to change things inside the kernel 
because it's used by too many people, and if nobody is asking for it 
then they don't want to build it just to advertise they're first to market.


Even the ISPs who probably asked for it ultimately don't put money toward 
getting it done, because the engineer who says they need it still doesn't 
turn down the new chassis that lacks support.  The money is all flowing 
through the hardware guys now, and if it's not directly related to moving 
packets quickly then they don't care.




Getting to some answer other than "F-it, put it in clear text" for new
protocols on routers really is a bit painful... not to mention the ITAR
sorts of problems that arise.

Now you're making me depressed.   :)

The question is: should we be trying to move things along, or just leave 
it as it is?  There are certainly more important things on everyone's 
TODO list right now, but I'd rather the vendors have an open ticket in 
their queue saying "secure TACACS+ RFC unimplemented" than let them off 
the hook.




-chris


Robert


Re: The state of TACACS+

2014-12-28 Thread Robert Drake
Picking back up where this left off last year, because I apparently only 
work on TACACS during the holidays :)



On 12/30/2013 7:28 PM, Jimmy Hess wrote:

Even 5 seconds extra for each command may hinder operators, to the extent
it would be intolerable; shell commands should run almost
instantaneously; this is not a GUI, with an hourglass.  Real-time
responsiveness in a shell is crucial --- which remote auth should not
change.  Sometimes operators paste a buffer with a fair number of
commands, not expecting a second delay between each command --- a
repeated delay may also break a pasted sequence.

It is very possible for two of three auth servers to be unreachable,  in
case of a network break, but that isn't necessary.  The response
timeout  might be 5 seconds,  but in reality, there are cases where you
would wait  longer,  and that is tragic,   since there are some obvious
alternative approaches that would have had results  that would be more
'friendly'  to the interactive user.

(Like remembering which server is working for a while,   or remembering
that all servers are down -- for a while,  and having a  50ms  timeout,
  with all servers queried in parallel,  instead of a 5 seconds timeout)

I think this needs to be part of the specification.

I'm sure the reason they didn't do parallel queries was because of both 
network and CPU load back when the protocol was drafted.  But it might 
be good to have local caching of authentication so that can happen even 
when servers are down or slow.  Authorization could be updated to send 
the permissions to the router for local handling.  Then if the server 
dies while a session is open, only accounting would be affected.
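The "query all servers in parallel, take the first responder" idea from the quoted text can be sketched in a few lines of shell.  This is a toy: the server names are invented and each probe is simulated with a sleep standing in for network RTT; a real client would attempt a TACACS+ authentication against each server with a short per-server timeout.

```shell
# Toy sketch of the quoted suggestion: probe every auth server in
# parallel and take the first responder, instead of walking the list
# serially with 5-second timeouts.  Server names are invented and each
# "probe" is simulated with a sleep standing in for network RTT.
probe() {  # probe <server> <simulated-rtt-seconds>
  sleep "$2" && echo "$1"
}
results=$(mktemp)
probe tac1.example.net 2 >>"$results" &
probe tac2.example.net 0 >>"$results" &
probe tac3.example.net 2 >>"$results" &
sleep 1                      # one-second budget instead of 3 x 5s serial
first=$(head -n 1 "$results")
echo "using: ${first:-local fallback}"
wait
rm -f "$results"
```

With serial 5-second timeouts the worst case here would be 10+ seconds before falling back; in parallel the answer arrives inside the one-second budget.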


That does increase the vendors/implementors work but it might be doable 
in phases and with partial support with the clients and servers 
negotiating what is possible.  The biggest drawback to making things 
like this better is you don't gain much except during outages and if you 
increase complexity too much you make it wide open for bugs.


Maybe there is a simpler solution that keeps you happy about redundancy 
but doesn't increase complexity that much (possibly anycast tacacs, but 
the session basis of the protocol has always made that not feasible).  
It's possible that one of the L4 protocols Saku Ytti mentioned, QUIC or 
MinimaLT would address these problems too.  It's possible that if we did 
the transport with BEEP it would also provide this, but I'm reading the 
docs and I don't think it goes that far in terms of connection assurance.

--
-JH



So, here is my TACACS RFC christmas list:

1.  underlying crypto
2.  ssh host key authentication - having the router ask tacacs for an 
authorized_keys list for rdrake.  I'm willing to let this go because 
many vendors are finding ways to do key distribution, but I'd still like 
to have a standard (https://code.google.com/p/openssh-lpk/ for how to do 
this over LDAP in UNIX)

3.  authentication and authorization caching and/or something else
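For item 2, OpenSSH's AuthorizedKeysCommand (added in 6.2) already gives a standard hook; a hedged sketch of the openssh-lpk-style LDAP lookup such a helper could wrap.  The DN and key below are invented, and we parse captured ldapsearch output rather than querying a live directory.

```shell
# Hedged sketch of key distribution over LDAP, openssh-lpk style.
# sshd_config (OpenSSH 6.2+) would point at a helper like this:
#   AuthorizedKeysCommand /usr/local/bin/ldap-keys
#   AuthorizedKeysCommandUser nobody
# A live helper's first line would be something like:
#   ldif=$(ldapsearch -x -LLL "(uid=$1)" sshPublicKey)
ldif='dn: uid=rdrake,ou=people,dc=example,dc=com
sshPublicKey: ssh-rsa AAAAB3Nza...invented... rdrake@laptop'
keys=$(printf '%s\n' "$ldif" | sed -n 's/^sshPublicKey: //p')
echo "$keys"
```

A production helper would also need to handle LDIF line-wrapping and base64-encoded attribute values, which this sketch ignores.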



Re: abuse reporting tools

2014-11-18 Thread Robert Drake


On 11/18/2014 8:11 PM, Michael Brown wrote:

We need to come up with some sort of international Abuse Reduction and 
Reporting Engagement Suite of Tools as a Service.

M.

I've been considering a post for a couple of weeks but decided most of 
my complaints were petty.  I've been getting lots of "ssh attacks 
against my network" emails from various people on the internet.  None of 
them have a standard for what logs they show, what format they show 
them in, or what format the whole email is in, so frequently I'm being 
told "Trust me, based on this one connection attempt to this 
non-qualified hostname that occurred on this non-TZ timestamp, you need 
to stop your user's abuse."


Immediately thereafter they tell me the IP address has already been 
blocked in their firewall for an unspecified length of time, and give no 
route to amelioration.  So I'm left with the very unsatisfactory choice 
of either shutting down a possibly innocent customer based on a machine's 
word, or attempting to start a dialog with 
random_script_user...@hotmail.com.


I suspect someone is going to pipe up in a second and say that there is 
a suite of tools, but the real problem is that nobody is using it.


Robert


Re: Greenfield Access Network

2014-08-01 Thread Robert Drake


On 7/31/2014 12:07 PM, Colton Conor wrote:

1. The article mentioned DHCP doesn't do the other part of what PPPoE or
PPPoA does, which is generate RADIUS accounting records that give us the
bandwidth information. So that’s one of the main challenges in switching to
a DHCP based system. So, how do you handle bandwidth tracking in an all
DHCP environment then? If I want to track how many GB a customer used last
month, or the average Mbps used how do you do so?
A medium-sized NMS could do 95th percentile usage on 10k ports.  Normally 
I wouldn't want to use an NMS for billing, but the capability is there.

2. I liked your option 82 example, and that works well for DSL networks
where one port is tied to one customer. But how does option 82 work when
you have multiple customers hanging off a GPON port? What does GPON use, a
subport identifier?
The ONT can put an option-82 header on the packet and tag whichever port 
the DHCP request came from.
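As a rough illustration, an ISC dhcpd configuration can key off that inserted circuit-id to treat each ONT port as a distinct subscriber.  This is a hypothetical snippet: the circuit-id string format is vendor-specific, and the values and addresses here are invented.

```text
# Hypothetical ISC dhcpd snippet: classify clients by the relay-agent
# circuit-id the ONT inserts (option 82), so each subscriber port can be
# given its own address/options.  Circuit-id format and values invented.
class "ont1-eth1" {
    match if option agent.circuit-id = "olt1/pon3/ont1:eth1";
}

subnet 100.64.10.0 netmask 255.255.255.0 {
    pool {
        allow members of "ont1-eth1";
        range 100.64.10.10 100.64.10.10;   # pin this port to one address
    }
}
```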



3. You mentioned, DHCP is again, not a authentication protocol. So what
handles authentication then if only DHCP is used, and there are no
usernames and passwords? I guess for DSL networks you can enable or disable
the port to allow or disallow access, and Option 82 for identification? I
assume you wouldn't want to shut off the GPON OLT port if one customer
wasn't paying their bill as it would affect the other customers on that
port. I assume access vendors allow you to shut down the sub port or ONT in
this situation for GPON? Still that seems messy having to login to a shelf
or EMS system or API to an EMS system especially if you have multiple
access vendors in a network. Is there a way to do authentication with DHCP?
What about open networks like wifi where anyone can connect, so you don't
have the ability to turn off the port or disable the end device?
Most GPON vendors either support TR-69 or some other means to remote 
provision the ONTs.  You can use the DHCP option-82 to identify who a 
customer is and then send their ONT a specific config.  Like DOCSIS you 
could make a disable profile, or you could make them hop on a different 
VLAN that redirects all traffic to a billing page or something.  There 
is also DPoE/DPoG (DOCSIS Provisioning of EPON/GPON) that converts 
DOCSIS provisioning into something PON can use.



4. I don't think anyone is buying a BRAS anymore, but looks like Cisco,
Juniper, and ALU have what they call BGN, Broadband Subscriber Management,
and other similar software. How are these different from BRAS functionality?
I've got no experience with BRAS so I'm not sure.  I think the ASR1k can 
do pppoe termination if you want a Cisco solution.

So it looks like there are open source and commercial solutions for DHCP
and DNS. Some providers like Infoblox seem to integrate all these into
one.


Infoblox, Bluecat, 6connect, Incognito, Promptlink, VitalQIP, Cisco BAC

There are a bunch of vendors and they all have their ups and downs. A 
DHCP system can be an expensive part of your network and it's a very 
critical one, so you might want to look at multiple offerings before 
deciding.



So if we have a core router that speaks BGP, a 10G aggregation switch to
aggregate the chassis, and a device like Infoblox or the other
commercial solutions you mentioned that do DHCP/DNS, is there anything else
that is needed besides the access gear already mentioned in the
assumptions?  Are these large and expensive commercial BGN/Broadband
Subscriber management products a thing of the past or still very relevant
in todays environment?


Make sure you've got your provisioning system planned out and working 
before you run with it.  Your DHCP systems will tie heavily into your 
OSS, so you'll need to work that piece out.  If you use an NMS for 
billing then it will need to tie into the OSS as well.  It's 
always possible to roll out a network that just works, turn up a bunch 
of devices, and then realize a critical piece is broken or badly 
designed.  You don't want to be in a position where everything works 
except that one piece, and you can't take it down because everyone is 
using it.





Re: Carrier Grade NAT

2014-07-29 Thread Robert Drake


On 7/29/2014 12:42 PM, Chris Boyd wrote:


There's probably going to be some interesting legal fallout from that practice. 
 As an ISP customer, I'd be furious to find out that my communications had been 
intercepted due to the bad behavior of another user.

--Chris

Usually, unless the judge is being super generous, they'll provide a 
timestamp and a destination IP.  That should be pretty unique unless 
they're looking for fraud against a large website or something.  In the 
unlikely event that two people hit the same IP in the same time window, 
they would probably just throw that information out as unusable for 
their case.


Usually the window they give is ~ 3-5 seconds so they're pretty specific.


Re: Carrier Grade NAT

2014-07-29 Thread Robert Drake


On 7/29/2014 6:42 PM, Matt Palmer wrote:

Of course, getting anything back*out*  of that again in any sort of
reasonable timeframe would be... optimistic.  I suppose if you're storing it
all in hadoop you can map/reduce your way out of trouble, but that's going
to mean a lot of equipment sitting around doing nothing for 99.99% of the
time.  Perhaps mine litecoin between searches?
The timestamp is a natural index.  You shouldn't need to run a 
distributed query to find information about a specific incident.  
A custom database would mean writing your own tools to access and manage 
it, which is impractical.  The timestamp, as well as most of the 
other fields, should be easily compressible since most of the bits 
repeat.  You might as well use a regular plaintext logfile and 
gzip it.
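A toy sketch of that approach: gzipped plaintext, one epoch-timestamped line per translation, queried with a plain filter.  The field layout and addresses are invented for illustration.

```shell
# Toy CGN log: one epoch-timestamped translation record per line,
# stored as gzipped plain text.  Field layout/addresses are invented.
log=$(mktemp)
printf '%s\n' \
  '1406650000 inside=10.0.0.5 outside=203.0.113.7:40001 dst=198.51.100.9:443' \
  '1406650003 inside=10.0.0.9 outside=203.0.113.7:40002 dst=198.51.100.9:443' \
  '1406650100 inside=10.0.0.5 outside=203.0.113.7:40003 dst=192.0.2.20:80' \
  | gzip >"$log"

# "Who was using 203.0.113.7 to reach 198.51.100.9 around t=1406650003, +/- 5s?"
hits=$(gzip -dc "$log" | awk '$1 >= 1406649998 && $1 <= 1406650008 && /dst=198\.51\.100\.9/')
echo "$hits"
rm -f "$log"
```

A narrow timestamp window plus destination IP answers the subpoena-style query without any distributed machinery.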





Re: Verizon Public Policy on Netflix

2014-07-12 Thread Robert Drake


On 7/11/2014 11:38 AM, Miles Fidelman wrote:


Well... if you make a phone call to a rural area, or a 3rd world 
country, with a horrible system, is it your telco's responsibility to 
go out there and fix it?


One might answer, of course not.  It's a legitimate position, and by 
this argument, Netflix should be paying for bigger pipes.


Then again, I've often argued that the universal service fund used 
to subsidize rural carriers - which the large telcos always scream 
about - is legitimate, because when we pick up the phone and dial, 
we're paying for the ability to reach people, not just empty 
dial-tone.  This is also legitimate, and by this argument, Verizon 
should be paying to improve service out to Netflix.


If you're a competitor to the monopoly then you don't get access to 
those funds.  It sucks for you, but that's just how it works.  The 
county/state government has determined that they need to pay someone to 
make their network better in that region.  They chose to pay the 
monopoly (whoever that is) and it wasn't you.


It's the monopoly's job to ensure good connectivity to Netflix.  Oh, the 
monopoly is Comcast and they have a Netflix caching box but you don't?


That is the cost of doing business in a rural market.  You've got a few 
choices.  Build out a fiber backbone to larger or more diverse markets, 
buy more transit, or go out of business.


I service customers in small markets.  Frequently they've got 
underpowered circuits because the incumbent won't sell MetroE or charges 
astronomical amounts for everything.  If those were my only customers 
I'm not sure what I would do, because I don't like their networks.  I 
want to upgrade them but I'm being held back by various things.  I've 
had situations where the monopoly entered a building at my expense to 
provide me fiber service so I could upgrade the users' speed, then used 
that new fiber to undercut me on price and take all the customers.


People say the exclusive agreements for multi-dwelling units were bad 
for the little guy, but the truth is that the little guy could use 
exclusive agreements to let the community collectively bargain for 
better internet.


Now that those are gone, the competition is who can bribe the property 
manager more in pay-per-home connect fees.


Either way, if one is a customer of both, one will end up paying for 
the infrastructure - it's more about gorillas fighting, which bill it 
shows up on, who ends up pocketing more of the profits, and how many 
negative side-effects result.


No, it isn't.  It's about monopolies telling a large company that isn't 
a monopoly that they need to pay them money to stay in business.


Methinks all of the arguments and finger-pointing need to be 
recognized as being mostly posturing for position.


Miles Fidelman





Re: Question on Cisco EEM Policies

2014-07-07 Thread Robert Drake


On 7/6/2014 5:07 PM, Daniel van der Steeg wrote:

Hello all,

I have implemented two EEM Policies using TCL on a Cisco Catalyst 6500,
both of them running every X seconds. Now I am trying to find a way to
monitor the CPU and memory usage of these policies, to determine their
footprint. Does anyone have a good idea how I can do this?


It looks like cpmProcExtUtil5SecRev is what you need.   This should be 
available but it might depend on your IOS. CISCO-PROCESS-MIB shows all 
the different incarnations of it.  You can also use 
cpmProcExtMemAllocatedRev and cpmProcExtMemFreedRev to track memory usage.


Use cpmProcessName to find the process you want to monitor (in this case 
grepping for PID but you can look for name):


[rdrake@machine ~]$ snmpwalk -v2c -c community routername 
1.3.6.1.4.1.9.9.109.1.2.1.1.2 | grep 318

SNMPv2-SMI::enterprises.9.9.109.1.2.1.1.2.1.318 = STRING: "ISIS Upd PUR"

The 1.318 is the important bit.

[rdrake@machine ~]$ snmpwalk -v2c -c community routername 
1.3.6.1.4.1.9.9.109.1.2.3.1.5 | grep 318

SNMPv2-SMI::enterprises.9.9.109.1.2.3.1.5.1.318 = Gauge32: 0

One problem is that this is a percentage with a minimum resolution of 
1% (integer based), so even though this was the busiest process on the box 
I tested on, I always got zero percent.  It should still be good for 
thresholding if you want to make sure your process doesn't spike the CPU, 
though.  Also, the PID can change on every reboot, so long-term monitoring 
is problematic unless you re-resolve the index from the process name 
each time.
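One way around the PID problem is to re-resolve the index from cpmProcessName at poll time and only then fetch the gauge.  A sketch of that lookup; the community and hostname are placeholders, so here we parse a captured snmpwalk line instead of doing a live query.

```shell
# Sketch: re-resolve the index from cpmProcessName at poll time so the
# graph survives reboots.  walk_output is a captured line; live it would
# be (community/hostname are placeholders):
#   walk_output=$(snmpwalk -v2c -c community routername 1.3.6.1.4.1.9.9.109.1.2.1.1.2)
walk_output='SNMPv2-SMI::enterprises.9.9.109.1.2.1.1.2.1.318 = STRING: "ISIS Upd PUR"'
idx=$(printf '%s\n' "$walk_output" \
  | sed -n 's/^.*1\.2\.1\.1\.2\.\([0-9.]*\) =.*ISIS Upd PUR.*/\1/p')
echo "poll 1.3.6.1.4.1.9.9.109.1.2.3.1.5.$idx"   # CPU gauge for this process
```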


Reference:
http://tools.cisco.com/Support/SNMP/do/BrowseMIB.do?local=en&mibName=CISCO-PROCESS-MIB

Look at this for oids:
ftp://ftp.cisco.com/pub/mibs/oid/CISCO-PROCESS-MIB.oid




Thanks,
Daniel



Hats,
Robert


Re: Cheap LSN/CGN/NAT444 Solution

2014-06-30 Thread Robert Drake


On 6/30/2014 1:59 AM, Skeeve Stevens wrote:

Hi all,

I am sure this is something that a reasonable number of people would have
done on this list.

I am after a LSN/CGN/NAT444 solution to put about 1000 Residential profile
NBN speeds (fastest 100/40) services behind.

I am looking at a Cisco ASR1001/2, pfSense and am willing to consider other
options, including open source.  Obviously the cheaper the better.


Total PPS or bandwidth is the number you need rather than number of 
customers.  Assuming 1Gbps aggregation then almost anything will work 
for your requirements and support NAT.  Obviously if you have a large 
number of 100Mbps customers then 1Gbps wouldn't cut it for aggregation.


Based on your looking at the ASR I would guess you're somewhere around 
1Gbps, maybe 2Gbps.  If you're closer to 1Gbps and want to stay with a 
1RU solution then I would advise checking out the ASA5512 which is much 
cheaper than an ASR.


If you want to go ultra cheap but scalable to 4Gbps you could use a 
Cisco 6500/sup2/FWSM (all used.. probably totals less than $1000USD, but 
I don't know how much it is in Australia).  That would let you replace 
parts later to move to SUP720/ASASM for around 16Gbps throughput.


FWIW, I doubt you'll find a NAT platform with no IPv6 support, so you 
can start your IPv6 work now if need be.  Older stuff like the FWSM 
won't support things like DS-Lite though, so if you plan to go v6-only 
in your backbone then that's something to think about.




This solution is for v4 only, and needs to consider the profile of the
typical residential users.  Any pitfalls would be helpful to know - as in
what will and and more importantly wont work - or any work-arounds which
may work.

This solution is not designed to be long lasting (maybe 6-9 months)... it
is to get the solution going for up to 1000 users, and once it reaches that
point then funds will be freed up to roll out a more robust, carrier-grade
and long term solution (which will include v6). So no criticism on not
doing v6 straight up please.
Be wary if someone thinks this is going to last 6-9 months.  That's less 
than a funding cycle for a company and longer than an outage. That means 
the boss is pulling the number out of his ass and it could last anywhere 
from 30 days to 10 years depending on any number of factors.





Happy for feedback off-list of any solutions that people have found work
well...

Note, I am in Australia so any vendors which aren't easily accessible down
here, won't be useful.


...Skeeve

*Skeeve Stevens - *eintellego Networks Pty Ltd
ske...@eintellegonetworks.com ; www.eintellegonetworks.com

Phone: 1300 239 038; Cell +61 (0)414 753 383 ; skype://skeeve

facebook.com/eintellegonetworks ;  http://twitter.com/networkceoau
linkedin.com/in/skeeve

experts360: https://expert360.com/profile/d54a9

twitter.com/theispguy ; blog: www.theispguy.com


The Experts Who The Experts Call
Juniper - Cisco - Cloud - Consulting - IPv4 Brokering





Re: question about bogon prefix

2014-06-09 Thread Robert Drake


On 6/9/2014 11:00 PM, Song Li wrote:

Hi everyone,

I found many ISP announced bogon prefix, for example:

OriginAS Announcement Description
AS7018  172.116.0.0/24unallocated
AS209   209.193.112.0/20 unallocated

my question is why the tier1 and other ISP announce these unallocated 
bogon prefixes, and another interesting question is:


You could also ask why are other providers accepting the route, since I 
could announce 209.193.112.0/20 from my router and my upstream would 
reject it.


Of course, those two ASNs have a huge number of routes so they probably 
aren't filtered as closely by their peers.


But.. even if you're hyper diligent and blocking bogon routes, you'll 
need to ask yourself why it's not in the bogon list:


curl -s http://www.team-cymru.org/Services/Bogons/fullbogons-ipv4.txt | 
grep 209.193


It's also not on http://www.cymru.com/BGP/bogons.html but 172.116.0.0/24 is.
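Checking your own announcements against the feed is only a few lines of shell.  A sketch against a saved sample of the list (the sample and loop are illustrative; the live list comes from the curl above, and this is an exact string match only — a real check needs longest-prefix matching):

```shell
# Sketch: check announced prefixes against a saved copy of the
# fullbogons feed.  Exact full-line match only; a real check needs
# longest-prefix matching.  Sample feed contents below are abbreviated.
bogons=$(mktemp)
cat >"$bogons" <<'EOF'
172.116.0.0/24
198.51.100.0/24
EOF
report=$(for pfx in 172.116.0.0/24 209.193.112.0/20; do
  if grep -qxF "$pfx" "$bogons"; then
    echo "$pfx: in fullbogons"
  else
    echo "$pfx: not listed"
  fi
done)
echo "$report"
rm -f "$bogons"
```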

Now, according to this:
http://myip.ms/view/ip_addresses/3519115264/209.193.112.0_209.193.112.255

It belongs to the Franciscan Health System.  The one IP that is in DNS 
seems to back this up (it's called mercyvpn.org)


Whois Record created: 04 Jan 2005
Whois Record updated: 24 Feb 2012

My guess is one of two things.  Maybe they renumbered out of the /20 but 
left a VPN server up and haven't managed to migrate off it yet, but have 
asked to return the block... or they forgot to pay their bill to 
ARIN and the block has been removed from whois, but Qwest isn't as 
diligent because they're still being paid.


I've CC'd the technical contact listed in the old whois information so 
maybe he can get things corrected.


If I am ISP, can I announce the same bogon prefix(172.116.0.0/24) with 
AS7018 announced? Will this result in prefix hijacking?


Thanks!

I can find nothing on Google that offers any legitimacy for 
172.116.0.0/24, but it has been announced for 2 years so maybe there 
are some squatter's rights at least.  It doesn't appear to be a spam 
source and I don't think any hosts are up on it right now. Maybe it's a 
test route that never got removed.  It makes me sad that nobody at AT&T 
reads the CIDR report.  They've only got a couple of bogon announcements, 
so it would be trivial for them to either acknowledge them and claim 
legitimacy or clean them up.


Re: ipmi access

2014-06-04 Thread Robert Drake


On 6/2/2014 1:42 PM, Brian Rak wrote:
They do publish it.  The problem is, it's not documented, and it takes 
a bunch of work to get into a usable state.  See 
ftp://ftp.supermicro.com/GPL/SMT/SDK_SMT_X9_317.tar.gz


Plus, the firmware environment is pretty hostile.  If you flash some 
bad firmware, your only option is to desolder the IPMI flash chip and 
program it externally.  It cannot be reprogrammed in circuit, and 
there's no recovery method.


There is a market here for first or third parties to make money, or for 
open source people to hack a new firmware into existence.  Since HP 
charges a yearly license fee for their iLO, they have an incentive to keep 
it secure until they stop supporting that platform.


People would probably revolt if Supermicro started charging for 
something that has been free, though.  The ideal situation would be if 
they continued to provide what they do for free and upsold some extra 
features, maybe the ability to group-manage thousands of boxes; but you 
can already pretty much do that with the CLI IPMI tools.


It's unfortunate that "free" means "complete security nightmare".



Re: US patent 5473599

2014-05-07 Thread Robert Drake


On 5/7/2014 9:47 PM, Rob Seastrom wrote:

The bar for an informational RFC is pretty darned low.  I don't see
anything in the datagram nature of i'm alive, don't pull the trigger
yet that would preclude a UDP packet rather than naked IP.  Hell,
since it's not supposed to leave the LAN, one could even get a
different ethertype and run entirely outside of IP.  Of course, the
organization that has trouble coming up with the bucks for an OUI
might have trouble coming up with the (2014 dollars) $2915 for a
publicly registered ethertype too.


Meh.. it's open source.  If I design a toaster that spits flames when 
you put bagels in it, then I put the design on github and forget about 
it, I shouldn't be held responsible for someone adding it to their 
network and setting fire to a router or two.


A problem that the developer doesn't have isn't a problem.  Oh, the user 
community noticed an interoperability issue?  What user community?  I 
was building this toaster for myself.  I released the plans in case it 
inspires or helps others.  If fire isn't what you need then maybe you 
can modify it to do what's needed.*


Now, the bar for an informational RFC is pretty low.  Especially for 
people who have written them before.  Those people seem to think one is 
needed in this case so they might want to get started writing it.  Then 
patches to the man pages covering the past issues can be added to 
document things, and a patch can be issued with the new OUI, ethertype, 
or port number, whichever the RFC decides to go for.



Must be a pretty horrible existence (I pity the fool?) to live on
donated resources but lack the creativity to figure out a way to run a
special fund raiser for an amount worthy of a Scout troop bake sale.
Makes you wonder what the OpenBSD project could accomplish if they had
smart people who could get along with others to the point of shaking
them down for tax-deductible donations, doesn't it?

-r


The money could also be donated by parties interested in solutions.

Open source is about people finding a problem and fixing it for their 
own benefit then giving the fix away to the community for everyone's 
benefit.  I know in the past the OpenBSD community has been harsh with 
outsiders who submit patches.  I honestly expect the same response in 
this case, especially because of the underlying drama associated with 
it, but without trying first it just seems like the network community is 
whining without being helpful at all.


To be fair, we're used to dealing with vendors where we can't change 
things, so we bitch about them until they fix code for us.  In this case 
there is no "them" to bitch about.  "We" (the community) wrote the code 
and it's up to us to fix it.  If you don't consider yourself part of the
OpenBSD community then you shouldn't be using their products and 
encountering problems, right?


* yeah, that's a very insular view and not really acceptable in the 
grown up world, but everyone's been beating them down over this and 
sometimes you end up taking your ball and going home because you're 
tired of people criticizing your plays.


Re: We hit half-million: The Cidr Report

2014-05-01 Thread Robert Drake


On 4/29/2014 10:54 PM, Jeff Kell wrote:

Yeah, just when we thought Slammer / Blaster / Nachi / Welchia / etc /
etc  had been eliminated by process of can't get there from here... we
expose millions more endpoints...

/me ducks too (but you know *I* had to say it)

Slammer actually caused many firewalls to fall over due to high pps and 
having to track state.  I thought about posting in the super-large 
anti-NAT/stateful-firewall thread a few weeks ago but decided it wasn't 
worth it to stir up trouble.


Here is some trivia though:

Back when Slammer hit I was working for a major NSP.  I had gotten late 
dinner with a friend and was at his work chatting with him since he 
worked the night shift by himself.  It became apparent that something 
was badly wrong with the Internet.  I decided to drive to my office and 
attempt to do what I could to fix the issues.


This was a mistake.  Because of corporate reasons, my office was in a 
different city from the POP I connected to.  I was 3 hops away from our 
corporate firewall, one of which was a T1.


We had access lists on all the routers preventing people from getting to 
them from the Internet, so I thought my office was the only place I 
could fix the issue.  Well, someone had put a SQL server in front of or 
behind the firewall, somewhere where it would cause fun.  That DOS'd the 
firewall.  It took 3-4 hours of hacking things to get to the inside and 
outside routers and put an access-list blocking SQL.  Once that was done 
the firewall instantly got better and I was able to push changes to 
every 7500 in the network blocking SQL on the uplink ports.


This didn't stop it everywhere because we had 12000's in the core and 
they didn't support ACLs on most of the interfaces we had.  The access 
lists had to stick around for at least 6 months while the Internet 
patched and cleaned things up.


Fun fact:  the office network I was using pre-dated RFC 1918, so we were 
using public IPs.  The software firewall that fell over either did so 
because stateful rules were included for SQL even when they weren't 
needed, or it died from pure packets/sec.  Regardless, all of the 
switching and routing hardware around it was fine.


This isn't an argument against firewalls; I'm just saying that people 
tend to put stock in them even when they're just adding complexity.  If 
you have access lists blocking everything the firewall would block, then 
you might think having both gives you defense in depth, but it also 
gives a second place where typos or human error can cause 
problems.  It also gives a second point of failure and (if state 
synchronization and load-balance/failover are added) compounded 
complexity.  It depends on the goals you're trying to achieve.  
Sometimes redundant duties performed by two different groups give you 
peace of mind; sometimes it's just added frustration.




Re: We hit half-million: The Cidr Report

2014-05-01 Thread Robert Drake


On 5/1/2014 7:10 PM, Jean-Francois Mezei wrote:


Pardon my ignorance here. But in a carrier-grade NAT implementation that
serves say 5000 users, when happens when someone from the outside tries
to connect to port 80 of the shared routable IP ?  you still need to
have explicit port forwarding to specific LAN side hosts (like the web
server) right ?

Trying to be devil's advocate here: (and discussing only incoming calls)
That's the problem with your devil.  The first outgoing connection 
negates every protection you've assumed with one-to-many NAT.  What you 
really need is a policy that says explicitly what traffic is permitted 
in each direction.  "Established/new outbound" is the problem in this 
scenario, not which internal addresses you use.


On a secure server LAN, the ideal configuration for outbound would only 
allow connections to update servers you control, and acknowledgement 
traffic for protocols you are allowing inbound.



In a NAT setup for a company, wouldn't the concept be that you
explicitely have to open a few ports to specific hosts ? (for instance
80 points to the web server LAN IP address) All the rest of the
gazillion ports are blocked by default since the router doesn't know to
which LAN host they should go.

On the other hand, for a LAN with routable IPs, by default, all ports
are routed to all computers, and security then depends on ACLs or other
mechanisms to implement a firewall.

Auditors probably prefer architecture where everything is blocked by
default and you open specific ports compared to one where everything is
open by default and you then add ACLs to implement security.
"Blocked by default" and "allowed by default" are just concepts on a 
firewall.  People make the mistake of thinking that allowed-by-default 
is the default, but that's only true of the underlying host OS, and only 
if that host OS isn't hardened.


Specifically, ip forwarding shouldn't be turned on until needed. Linux 
doesn't turn this on by default, so by default you don't permit routing 
no matter the source or destination IP addresses.
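For example, on a stock Linux box:

```shell
# On a stock Linux host, routing between interfaces is off until you
# turn it on, i.e. "allowed by default" is not actually the default.
state=$(cat /proc/sys/net/ipv4/ip_forward)
echo "ip_forward=$state"
# Only when the box is really meant to route:
#   sysctl -w net.ipv4.ip_forward=1
```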


The mistake everyone makes is thinking that with NAT, secure rules 
are easier to write, so getting the firewall online for the first time is 
easier, and so is maintaining it.  The problem with this statement 
is that it's only true to a certain extent.  If you care about whatever 
you're securing at a PCI/SOX level, then NAT bought you nothing.  You 
still need to write secure inbound and outbound rules.


If whatever you're securing doesn't have to be that tightly controlled, 
then NAT buys you a little, but it comes with a glaring false sense of 
security.  I know at my office the IT department has dealt with several 
worm outbreaks that spread through email and then use the local LAN to 
send outbound port 25.  I had to block port 25 outbound for the 
corporate network when it became apparent that IT was using "NAT is a 
firewall" as their security methodology.




(Not judging whether one is better, just trying to figure out why
auditors might prefer NAT).

Also, home routers have NAT which is really a combo of NAT with basic
firewall, so if you don't have NAT, they may equate this to not having
a firewall.



Auditors prefer NAT because everyone in the world understands security 
and computers at a different level.  You don't know if you're getting an 
auditor who writes their own pen-test suites or one who just runs Nessus 
and prints the results.  They may have been trained by someone who developed 
the intuitive logical understanding that NAT systems fail-closed so 
they're better for defense in depth.  The problem with this is, as 
stated above, it's not buying enough to be worth it and it causes a 
false sense of security.  They may have even reached this conclusion 
themselves and it's hard to convince someone their ideas are wrong.  
Honestly they aren't even wrong, they're just picking a battle that 
shouldn't mean as much as they think it does.


Ideally, your security group should have unit-tests or just a network 
monitoring process that nmaps from both directions.   On inbound there 
are PCI compliance auditors that will do this for you regularly so that 
you can be assured the protection is still there.
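A sketch of what the outbound half of such a check might look like: scan, then diff the open ports against an allowlist.  The scan line here is captured sample output (host and ports invented); live it would come from an nmap grepable-output run as shown in the comment.

```shell
# Sketch: flag anything open that isn't on the allowlist.  The scan line
# is captured sample output; live it would come from something like:
#   scan=$(nmap -n -Pn -p- -oG - fw.example.net)
scan='Host: 192.0.2.10 ()  Ports: 22/open/tcp//ssh///, 25/open/tcp//smtp///, 443/open/tcp//https///'
allowed='22 443'
unexpected=$(printf '%s\n' "$scan" | tr ',' '\n' \
  | sed -n 's|.*[ :]\([0-9]*\)/open/tcp.*|\1|p' \
  | while read -r port; do
      case " $allowed " in
        *" $port "*) ;;                 # expected open
        *) echo "$port" ;;              # alert on this one
      esac
    done)
echo "unexpected open ports: ${unexpected:-none}"
```

Run from both sides of the firewall on a schedule, a diff like this catches rules that silently rotted.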


Outbound you need to be just as vigilant to make sure the rules are just 
as strict.  Ultimately, things like CGNAT are completely ineffective for 
security because the outbound rules have to be wide open so people can 
use it.


Re: DNSSEC?

2014-04-11 Thread Robert Drake


On 4/11/2014 5:47 PM, Matt Palmer wrote:
That's not DNSSEC that's broken, then. - Matt 


You're correct about that, but everything depends on your level of 
paranoia.


The bug has the potential to expose 64k of memory that may or may not be 
part of the TLS/SSL connection*.  In that 64k there may be ssh keys, 
dnssec keys, pictures of cats, or anything else that needs to be safely 
protected.  If something is very important to keep secure and it was on 
a box that has a TLS/SSL connection then you should regenerate keys for 
it, but largely this effort would be just in case and not because it's 
known to be compromised.


* technically it is part of the connection, it's just malloc() and not 
zeroed so whatever data was in it before was not cleared.  If you can be 
sure all your cat picture applications zero memory on exit and none of 
them exited uncleanly then this isn't a problem. At high levels of 
paranoia this isn't really something that you can be sure of though.  
I'm not even sure it's done in most crypto apps aside from gpg.  
OpenSSL is doubly at fault here for both not checking the length and not 
zeroing the memory on malloc**.


** probably making this all up since I haven't done a real look at the 
library, I'm just going by what I've read on the internet.


I expect we may see more bugs revealed in openssl soon.  It's getting 
lots of scrutiny from this, so I expect the code is being audited by 
everyone, and that's good.







Re: Cisco warranty

2014-04-06 Thread Robert Drake


On 4/3/2014 12:44 PM, Laurent CARON wrote:

Hi,

I bought a C3750G-12S which is now end of sale on cisco website. This 
device is now defective.


Since I bought it from a reseller and not directly from cisco, cisco 
is refusing to take it under warranty and tells me to have the 
reseller take care of it.


The reseller doesnt wan't to hear about this device since it is end of 
sale.


According to cisco website, end of sale means the device is still 
covered for 5 years.


These have reached a price point where a used one will cost less than a 
smartnet contract for one, and you get better turnaround time too.


My question is: Is it normal for my supplier to refuse to take it 
under warranty ?


Probably depends on the supplier.  Most of them have warranty 
terms of their own, and if it's past that time period they won't 
take it back.


Is there (from your experience) a chance I might get cisco to deal 
with it ?


If you're a huge customer of Cisco and have multi-year contracts with 
them then sure.  You could get them to RMA a toaster if they think they 
could make money in the long run on it.




Thanks

Laurent






Re: Just wondering

2014-03-31 Thread Robert Drake


On 3/31/2014 10:51 PM, Joe wrote:


I received several reports today regarding some scans for udp items from
shadowservers hosted out of H.E. Seems to claim to be checking for issues
regarding udp issues, amp issues, which I am all fine for, but my issue is
this. It trips several IDP/IPS traps pretty much causing issues that I have
to resolve. I have one user that is a home user (outside one of my /16)
that has seen this as well. Now with that said are these folks that do this
going to pay for one of my users that pay per bit for this? Does garbage in
to this really provide a garbage clean? I see they are planning on a bunch
of other protocols too, so that's nice.
If I was paying per bit I would probably want my ISP to rate limit and 
firewall lots of traffic before it ever reached my pay-per-bit line.  
Otherwise I would be paying for huge amounts of unsolicited traffic from 
everywhere.



I'm not sure where to go with this other than to advise my other folks to
drop this traffic from their 184.105.139.64/26 networks and hope for the
best regarding my FAP folks.

Regards,
-Joe

If you're comfortable that your internal audits are accurate and what 
these people are doing won't provide you any value, I don't see what 
harm it would do to block them.  Since they also have to worry about 
botnet authors blocking their traffic, I imagine they might change IP 
ranges after a while.  You might complain to them directly and see if 
they can add you to a do not poll list.  It looks like they have a 
couple of emails for issues listed here: 
https://www.shadowserver.org/wiki/pmwiki.php/Involve/GetReportsOnYourNetwork






Re: why IPv6 isn't ready for prime time, SMTP edition

2014-03-30 Thread Robert Drake


On 3/30/2014 12:11 AM, Barry Shein wrote:

I don't know what WKBI means and google turns up nothing. I'll guess
Well Known Bad Idea?

Since I said that I found the idea described above uninteresting I
wonder what is a WKBI from 1997? The idea I rejected?

Also, I remember ideas being shot down on the ASRG (Anti-Spam Research
Group) list primarily because they would take ten years to gain
acceptance.

Over ten years ago.

Maybe they were bad ideas for other reasons. Some certainly were.

But there's this tone of off-the-cuff dismissal, oh that would take
TEN YEARS to gain traction, or that's a WKBI, which I don't find
convincing.

I read your paper, for example, and said it's a nice paper.

But I don't find it compelling to the degree you seem to want it to be
because it mostly makes a bunch of assumptions about how an e-postage
system would work and proceeds to argue that the particular model you
describe (and some variants) creates impossible or impractical
hurdles.

But what if it worked differently?

At some point you're just reacting to the term e-postage and
whatever it happens to mean to you, right?
Imagine living in a world where this system is implemented.  Then 
imagine ways to break it.   The first thing I can think of is money 
laundering through hundreds of source and destination email accounts.  
The second is stolen identities or credit cards where the money doesn't 
exist to begin with (Who pays when this happens?)


Third is administrative overhead.  Banks/paypal/exchanges/someone is 
going to want a cut for each transaction, and they deserve one since 
they're going to end up tracking all of them and need to be able to 
reverse charges when something goes wrong.  But then you have a central 
point of failure and central monitoring point so you want to involve 
multiple exchanges, banks, etc.


Then you've got a dictatorship somewhere who says they want an extra 
$0.03 tacked on to each transaction, only it's not $0.03, it's [insert 
famously unstable currency here], so any mail that goes to that country 
has to have custom rules that fluctuate multiple times a day.


Then there is my mom, who knows just enough about computers to send cat 
pictures and forward me chain letters.  She'll not understand that email 
costs something now, or how to re-up her email account when it runs 
out.  The administrative burden will either fall to me or her ISP, and 
each phone call to the ISP probably costs them $$ because they must pay 
a live human to walk someone through email.



You can't really say you've exhaustively worked out every possibility
which might be labelled e-postage. Only a particular interpretation,
a fairly specific model, or a few.

When people talked of virtual currency over the years, often arguing
that it's too hard a problem, how many described bitcoin with its
cryptographic mining etc?

Bitcoin might well be a lousy solution. But there it is nonetheless,
and despite the pile of papers which argued that this sort of thing
was impossible or nearly so.

Note: Yes, I can also argue that Bitcoin is not truly a virtual
currency.

Sometimes a problem is like the Gordian Knot of ancient lore which no
one could untie. And then Alexander The Great swung his sword and the
crowds cried cheat! but he then became King of Asia just as
prophesized.

  
   Regards,
   John Levine, jo...@iecc.com, Primary Perpetrator of The Internet for 
Dummies,
   Please consider the environment before reading this e-mail. http://jl.ly

The answer is that you can't do this to SMTP.  Nobody will ever have the 
answers to all the questions involved with adding cost transactions to 
the protocol.  The only way to do this is to reboot with a new protocol 
that people start to adopt, and the only way they'll do that is if it's 
markedly better than the old way.  You have to remember that some 
people, when given the choice of paying for email or accepting 10 
spams/day, will opt for accepting a little spam.


The good news is, with email consolidated into 5 or so large providers 
and most people using webmail or exchange, you've got an opportunity to 
change the backend.  Not much software has to be modified, but you do 
need those large providers to buy-in to the idea.




Re: Cisco Security Advisory

2014-03-28 Thread Robert Drake


On 3/28/2014 4:11 PM, Scott Weeks wrote:

If a person is on multiple of *NOG mailing lists a lot of these're
received.  For example, I got well over 30 of them this round.  It'd be
nice to get something brief like this:


--
The Semiannual Cisco IOS Software Security Advisory has been released.

For information please goto this URL:
http://www.cisco.com/web/about/security/intelligence/Cisco_ERP_mar14.html

Advisory titles:
- Session Initiation Protocol Denial of Service Vulnerability
- Cisco 7600 Series Route Switch Processor 720 with 10 Gigabit Ethernet Uplinks 
Denial of Service Vulnerability
- Internet Key Exchange Version 2 Denial of Service Vulnerability
- Network Address Translation Vulnerabilities
- SSL VPN Denial of Service Vulnerability
- Crafted IPv6 Packet Denial of Service Vulnerability
---

Not everyone uses cisco and not everyone needs to see every vulnerability
detail email multiple times.  Imagine if all vendors started doing what
cisco is doing.
I hate that it's spam for some and relevant for others, but in the NSP 
world you can almost be certain that someone is going to have at least 
some Cisco equipment (even companies who are known to dislike Cisco 
enough to avoid them religiously have bought other companies who might 
have Cisco gear).


Having the vulnerability in the subject draws attention to the problems 
and makes people less likely to ignore it.   When I see keywords of 
technologies I'm using, like IPv6 or 6500 I tend to read through 
carefully to see if I'm vulnerable.  Because it can be difficult and 
time consuming to check whether all your gear is vulnerable, if it's a 
bug in an obscure card I didn't buy or a weird technology I haven't had 
a chance to run then I'm not as diligent.  I guess I might be selfish 
because seeing 5 advisories at once is like a giant line break in NANOG 
discussions, so it's harder to tune it out and skip the emails :)


They could Bcc: all the lists they are sending to in one set of emails 
so the message-id is the same, then you could filter duplicates at 
least.  Or they could do the summary email like you guys want, whichever 
makes people happy.  :)
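For what it's worth, if they did reuse one Message-ID across lists, the receiver-side filtering is trivial; a sketch with Python's stdlib email parser (hypothetical message set):

```python
import email

def dedup_by_message_id(raw_messages):
    """Keep the first copy of each Message-ID; drop later duplicates,
    e.g. the same advisory cross-posted to several *NOG lists."""
    seen = set()
    kept = []
    for raw in raw_messages:
        msg = email.message_from_string(raw)
        mid = msg.get("Message-ID")
        if mid is not None and mid in seen:
            continue  # duplicate cross-post, skip it
        if mid is not None:
            seen.add(mid)
        kept.append(msg)
    return kept
```

Procmail and most mail filters can do the same thing with far less ceremony, but the principle is identical: identical Message-IDs make duplicates cheap to detect.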




:-(

scott



:-(
Robert



Re: IPv6 isn't SMTP

2014-03-26 Thread Robert Drake


On 3/26/2014 10:16 PM, Franck Martin wrote:


and user@2001:db8::1.25 with user@192.0.2.1:25. Who had the good idea to use : 
for IPv6 addresses while this is the separator for the port in IPv4? A few MTA 
are confused by it.
At the network level the IPv6 address is just a big number.  No 
confusion there.  At the plaintext level the naked IPv6 address should 
be wrapped in square brackets.


From:
http://tools.ietf.org/html/rfc3986#section-3.2.2




Re: IPv6 address literals probably aren't SMTP either

2014-03-26 Thread Robert Drake


On 3/26/2014 11:28 PM, John Levine wrote:


It's messier than that.  See RFC 5321 section 4.1.3.  I have no idea
whether anyone has actually implemented IPv6 address literals and if
so, how closely they followed the somewhat peculiar spec.

R's,
John

I'm not sure why the SMTP RFC defines IPv6-addr so thoroughly and in an 
incompatible way with the other RFCs.  It would make more sense to 
refer back to another RFC with authoritative definitions.  They're 
completely missing the fun that's happening with Zone Identifiers in 
RFC 6874 and the hacks some have been doing to support them via the 
IPvFuture definition.


I'm not saying John Klensin shouldn't have a say in how the IPv6 address 
is defined, but I do think it would be best for everyone to work it out 
in an official place somewhere so that email software isn't doing the 
complete opposite of everyone else.




Re: Managing IOS Configuration Snippets

2014-03-02 Thread Robert Drake


On 2/28/2014 9:19 PM, Dale W. Carder wrote:

If I'm understanding what you're trying to do, you could script around
our rather unsophisticated 'sgrep' (stanza grep) tool combined with
scripting around rancid  rcs to do what I think you are looking for.

http://net.doit.wisc.edu/~dwcarder/scripts/sgrep

sgrep can dump out a stanza of ios-like config, then you can rcsdiff
that to your master, per 'chunk' of config.
Dale




I'm digging the idea of your command.   Along the same lines I've got 
this awk snippet that I made and then forgot about.  It functions like 
the cisco pipe begin/end commands:


#!/bin/sh

if [ "x${2}" = "x" ]; then
    awk "/${1}/{temp=1}; temp==1{print}"
else
    awk "/${1}/{temp=1}; /${2}/{temp=0}; temp==1{print}"
fi


Usage:
cat router-confg | begin_regex 'line vty' '!'

If you omit the second argument it just shows you from your match until 
the end of file.





Re: Managing IOS Configuration Snippets

2014-02-26 Thread Robert Drake


On 2/26/2014 4:22 PM, Ryan Shea wrote:

Howdy network operator cognoscenti,

I'd love to hear your creative and workable solutions for a way to track
in-line the configuration revisions you have on your cisco-like devices.
Let me clarify/frame:

You have a set of tested/approved configurations for your routers which use
IOS style configuration. These configurations of course are always refined
and updated. You break these pieces of configuration into logical sections,
for example a configuration file for NTP configuration, a file for control
plane filter and store these in some revision control system. Put aside for
the moment whether this is a reasonable way to comprehend deployed
configurations. What methods do some of you use to know which version of a
configuration you have deployed to a given router for auditing and update
purposes? Remarks are a convenient way to do this for ACLs - but I don't
have similar mechanics for top level configurations. About a decade ago I
thought I'd be super clever and encode versioning information into the snmp
location - but that is just awful and there is a much better way everyone
is using, right? Flexible commenting on other vendors/platforms make this a
bit easier.

Assume that this version encoding perfectly captures what is on the router
and that no person is monkeying with the config... version 77 of the
control plane filter is the same everywhere.

I started a long email that really should just be a blog post.  I need 
to get a blog or something.


Short story is this:

NETCONF is probably the future of change management on all types of 
routers and switches.  It's not supported everywhere yet and is missing 
lots of features but they're working on it.  Look at the talk given at 
NANOG60 for more information.


There is a puppet module that is also incomplete.  I'm not sure this is 
the right way to go 
(http://puppetlabs.com/blog/puppet-network-device-management)


Most people roll their own solution.  If you're looking to do that 
consider using augeas for parsing the configuration files.  It can be 
really useful for documenting changes, and probably to diff parts of the 
config.  You might also consider rabbitmq or another message queue to 
handle scheduling and deploying the changes.  It can retry failed 
updates.  You should work towards all or nothing commits (not all cisco 
gear supports this, but you can fake it in a couple of ways.  Ultimately 
you want to rollback to a known good configuration if things go wrong)


If you have money and want this right now:

Consider looking at Tail-F's NCS, which according to marketing 
presentations appears to do everything I want right now.  I'd like to 
believe them but I don't have any money so I can't test it out. :)


Cheers,
Robert



Re: Filter NTP traffic by packet size?

2014-02-26 Thread Robert Drake


On 2/26/2014 5:33 PM, valdis.kletni...@vt.edu wrote:

On Wed, 26 Feb 2014 11:44:55 -0600, Brandon Galbraith said:


Blocking chargen at the edge doesn't seem to be outside of the realm of
possibilities.

What systems are (a) still have chargen enabled and (b) common enough to make
it a viable DDoS vector?  Just wondering if I need to go around and find
users of mine that need to be smacked around with a large trout
I would do it.  I scanned all my public and private networks and found 
a few.  I've added it to our customer ACLs to stop it.  There were also 
a couple of internal routers where someone had turned it on, or left it 
on, and those were missed.  They are now fixed.


nmap -T4 -oG chargen_scan.txt -sS -sU -p 19 <your netblocks here>




Re: Managing IOS Configuration Snippets

2014-02-26 Thread Robert Drake


On 2/26/2014 5:37 PM, Robert Drake wrote:


Most people roll their own solution.  If you're looking to do that 
consider using augeas for parsing the configuration files.  It can be 
really useful for documenting changes, and probably to diff parts of 
the config.  You might also consider rabbitmq or another message queue 
to handle scheduling and deploying the changes.  It can retry failed 
updates.  You should work towards all or nothing commits (not all 
cisco gear supports this, but you can fake it in a couple of ways.  
Ultimately you want to rollback to a known good configuration if 
things go wrong) 


I should amend that even though I recommend all this I haven't used any 
of it for networking.  I guess those are more shiny ball ideas than 
actual things I've used.  We have perl scripts that wrap an in-house API 
to access our IPAM which generates initial configuration.  The template 
files are a mix of m4 and Template::Toolkit.


We use basically one-off perl scripts for auditing sections of the 
configs to find discrepancies.  We use rancid to collect configs. We 
just started using netdot which is nice for topology discovery. TACACS 
and DHCP logs are parsed and stored in logstash.  All of those tools 
provide the who, what, where and when but not the why. The why would 
require a bit more custom stuff and forcing people to use a frontend 
interface instead of directly touching the routers. We aren't ready for 
that yet.




Re: Filter NTP traffic by packet size?

2014-02-26 Thread Robert Drake


On 2/26/2014 11:03 PM, Jimmy Hess wrote:


The well known port assignments are advisory or recommended,  for use by
other unknown processes.  the purpose of well known port
assignments is for service location;  the port number is not a sequence of
application identification bits.


The QUIC protocol using port 80/udp, was a great example of a different
application using a well-known port address, besides
the one that would appear as the well-known port registration.


Sometimes bypassing IANA for port registration works in your favor, 
sometimes it doesn't.  Of course there should be a way to set up 
connections that aren't listed in IANA, but using well-known low ports 
isn't safe.  It's biting us and we've got to counter it.  UDP doesn't do 
enough setup on a connection for you to really figure out if it's 
chargen or some new traffic type.  Even if you have the luxury of 
putting a stateful firewall in place and filtering based on what 
traffic is there, the only valid choice for an ISP would be to permit 
only the registered service (chargen) on port 19, and then block it 
anyway because nobody should be using chargen.
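For chargen specifically that works out to a couple of ACL lines (IOS-style sketch; the ACL number, direction, and placement are placeholders for whatever your customer-facing filters already use):

```
! drop chargen in both protocols on customer-facing interfaces
access-list 110 deny udp any any eq 19
access-list 110 deny tcp any any eq 19
access-list 110 permit ip any any
```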


Taking the high road about blocking services was an option 10 years 
ago.  The gear couldn't do it and most internet users were still 
somewhat tech savvy.  The landscape has changed.  I can't convince my 
cousin not to click on ransomware.  I think my only viable option is to 
filter residential customers for their own good, and if someone actually 
wants/needs one of these ports opened then we can work with them.*


* ISPs have also reduced their abuse staffing by blocking port 25. It's 
either that or just acknowledge that you won't be able to process all 
your abuse emails because there are too many people spamming/too many 
compromised machines.  So in some ways it's a financial need for us to 
block even more aggressively than big ISPs because we can't afford to 
staff abuse for things that are automatically fixable.




Re: Atlanta - Patch Cables

2014-02-24 Thread Robert Drake
Cables and Kits is local to Atlanta and is great for last minute 
orders.  You can pickup there if needed.


http://www.cablesandkits.com/


On 2/21/2014 5:06 AM, Bobby Lacey wrote:

In Atlanta doing an install for a client this weekend and it appears that
the fiber/ethernet patch cables won't be delivered in time from supplier.
Would anyone know of a good resource for patch cables (both fiber and
ethernet) in the metro area? Just wondering if there are any other
resources for these? Frys?

Offlist please. Thank you!

Bobby






Re: GEO location issue with google

2014-02-19 Thread Robert Drake
For future reference, the last time this issue came up someone said 
doing this was a good way to get their geo stuff fixed automatically:


http://tools.ietf.org/html/draft-google-self-published-geofeeds-02

I haven't messed with it yet, but it seems like a good idea.  I want to 
write something that lets me export this from our IPAM but I've been 
busy and it isn't a problem for us at the moment.


Thanks,
Robert

On 2/19/2014 8:02 AM, Praveen Unnikrishnan wrote:

Hi Heather,

Thanks you very much for sorting out this issue.

Praveen Unnikrishnan
Network Engineer
PMGC Technology Group Ltd
T:  020 3542 6401
M: 07827921390
F:  087 1813 1467
E: p...@pmgroupuk.com




PMGC Technology Group Limited is a company registered in England and Wales. 
Registered number: 7974624 (3/F Sutherland House, 5-6 Argyll Street, London. 
W1F 7TE). This message contains confidential (and potentially legally 
privileged) information solely for its intended recipients. Others may not 
distribute copy or use it. If you have received this communication in error 
please contact the sender as soon as possible and delete the email and any 
attachments without keeping copies. Any views or opinions presented are solely 
those of the author and do not necessarily represent those of the company or 
its associated companies unless otherwise specifically stated. All incoming and 
outgoing e-mails may be monitored in line with current legislation. It is the 
responsibility of the recipient to ensure that emails are virus free before 
opening.

PMGC® is a registered trademark of PMGC Technology Group Ltd.


From: Heather Schiller [mailto:h...@google.com]
Sent: 13 February 2014 05:43
To: Praveen Unnikrishnan
Cc: nanog@nanog.org
Subject: Re: GEO location issue with google

Reported to the appropriate folks.

Going to www.google.co.uk directly should return you English language 
results.  Appending /en to the end of a google url should also return 
you English language results.  You should also be able to set your 
language preference in your search settings.

  --Heather

On Fri, Feb 7, 2014 at 10:20 AM, Praveen Unnikrishnan 
p...@pmgroupuk.com wrote:
Hi,

We are an ISP based in the UK.  We have got an IP block from RIPE, which is 
5.250.176.0/20.  All the main search engines like Yahoo show we are based 
in the UK, but Google thinks we are from Saudi Arabia and we get redirected 
to www.google.com.sa instead of google.co.uk.  I have sent a lot of emails 
to Google but no luck.  All the information from Google is in Arabic and 
YouTube shows some weird videos as well.

Could anyone please help me to sort this out?

Would be much appreciated for your time.



Re: Everyone should be deploying BCP 38! Wait, they are ....

2014-02-18 Thread Robert Drake


On 2/18/2014 2:19 PM, James Milko wrote:

Is using data from a self-selected group even meaningful when
extrapolated?  It's been a while since Stats in college, and it's very
likely the guys from MIT know more than I do, but one of the big things
they pushed was random sampling.

JM


Isn't it probable that people who know enough to download the spoofer 
project's program and run it might also be in a position to fix things 
when it's broken?  Or they may just be testing their own networks, 
which they've already secured, to verify they got it right.


I may put it on my laptop and start testing random places like 
Starbucks, my mom's house, and conventions, but if I'm running it from 
my home machine it's just to get the gold "I did this" star.


So yeah, data from the project is probably meaningless unless someone 
uses it as a worm payload and checks 50,000 computers randomly (of 
course I don't advise this.  I just wish there was a way to really push 
this to be run by everyone in the world for a week)


Maybe with enough hype we could get CNN to advise people to download 
it.  Actually, it would be nice if someone who writes security software 
like NOD32 or Malwarebytes, or spybot, adaware, etc, would integrate it 
into their test suite.  Then you get the thousands of users from them 
added to the results.




Re: BCP38 [Was: Re: TWC (AS11351) blocking all NTP?]

2014-02-05 Thread Robert Drake


On 2/5/2014 1:20 PM, Christopher Morrow wrote:

I hear tell the spoofer project people are looking to improve their data
and stats... And reporting.
I know it's not possible due to the limitations of javascript 
sandboxing, but this really needs to be browser based so it can be like 
DNSSEC or MX or IPv6 testing.  Users (and reddit) can be coerced into 
clicking a link if it shows a happy green sign when they pass the test.  
Asking them to download an executable is too much for most of them.


I'd also love a way as a network administrator that I could audit my own 
network.  Even with all the correct knobs tweaked I occasionally find a 
site where someone turned up an interface and forgot some template 
commands, or in the case of gear that doesn't support it there might not 
be a filter on an upstream device even though there should be.


It'd be nice to have a CM profile that would attempt to spoof something 
to a control server then alert if it works.




The state of TACACS+

2013-12-30 Thread Robert Drake
Ever since first using it I've always liked tacacs+.  Having said that 
I've grown to dislike some things about it recently.  I guess, there 
have always been problems but I've been willing to leave them alone.


I don't have time to give the code a real deep inspection, so I'm 
interested in others thoughts about it.  I suspect people have just left 
it alone because it works.  Also I apologize if this is too verbose or 
technical, or not technical enough, or just hard to read.


History:

TACACS+ was proposed as a standard to the IETF.  They never adopted it 
and let the standards draft expire in 1998.  Since then there have been 
no official changes to the code.  Much has happened between now and 
then.  I specifically was interested in parsing tac_plus logs 
correctly.  After finding idiosyncrasies I decided to look at the source 
and the RFC to see what was really happening.


Logging, or why I got into this mess:

In the accounting log, fields are sometimes logged in a different 
order.  It appears tac_plus is logging whatever it receives without 
parsing or modifying it.  That means the remote system is sending them 
in different orders, so technically the fault lies with them.  However, it 
seems too trusting to take in data and log it without looking at it.  
This can also cause issues when you send a command like (Cisco) "dir 
/all nvram:" on a box with many files.  The device expands the command 
to include everything on the nvram (important because you might want to 
deny access to that command based on something it expanded), but it 
gets truncated somewhere (not sure if it's the device buffer that is 
full, tac_plus, or the logging part; I might tcpdump for a while to see 
what it looks like on the wire).  I'm not sure if there are security 
implications there.


Encryption:

The existing security consists of the content XORed with an md5 
pseudo-pad, the pad being composed of a running series of 16-byte 
hashes, each taking the previous hash as part of the seed of the next.  
A sequence number is used, so simple replay shouldn't be a factor.  
Depending on how vulnerable iterative md5 is to this, and how much time 
you had to sniff the traffic, I would think this would be highly 
vulnerable to chosen plaintext if you already have a user-level login, 
or at least to partial known plaintext (with the assumption that they 
make backups, you can guess that at least some of the packets will 
contain "show running-config" and other common commands).  They also 
don't pad the encrypted string, so you can guess the command (or 
password) based on the length of the encrypted data.


For a better description of the encryption you can read the draft: 
http://tools.ietf.org/html/draft-grant-tacacs-02
I found an article from May, 2000 which shows that the encryption scheme 
chosen was insufficient even then.

http://www.openwall.com/articles/TACACS+-Protocol-Security

For new crypto I would advise multiple cipher support with negotiation 
so you know what each client and server is capable of. If the client and 
server supported multiple keys (with a keyid) it would be  easier to 
roll keys frequently, or if it isn't too much overhead they could use 
public key.
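The pseudo-pad construction is simple enough to sketch, which is part of the problem; assuming I'm reading draft-grant-tacacs-02 correctly, the body obfuscation looks like this:

```python
import hashlib

def pseudo_pad(session_id: bytes, key: bytes, version: int,
               seq_no: int, length: int) -> bytes:
    """Per the draft: MD5_1 = MD5(session_id, key, version, seq_no),
    MD5_n = MD5(session_id, key, version, seq_no, MD5_{n-1}); the pad
    is the concatenation, truncated to the body length."""
    seed = session_id + key + bytes([version, seq_no])
    pad, prev = b"", b""
    while len(pad) < length:
        prev = hashlib.md5(seed + prev).digest()
        pad += prev
    return pad[:length]

def obfuscate(body: bytes, session_id: bytes, key: bytes,
              version: int, seq_no: int) -> bytes:
    """XOR the body with the pad; applying it twice recovers plaintext."""
    pad = pseudo_pad(session_id, key, version, seq_no, len(body))
    return bytes(b ^ p for b, p in zip(body, pad))
```

Note that every per-packet input to the pad except the shared key travels on the wire in the clear, so the key is the only thing standing between a sniffer and the plaintext, and the unpadded length leaks for free.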



Clients:

As for clients, Wikipedia lists several that seem to be based on the 
original open-source tac_plus from Cisco.  shrubbery.net has the 
official version that Debian and FreeBSD use.  I looked at some of the 
others and they all seemed to derive either directly from Cisco's code 
or from the shrubbery.net code, but they retained the name and started 
doing their own versioning.  All the webpages look like they're from 
1995.  In some cases I think that's intentional, but in some ways it 
shows a lack of care for the code, like it's been abandoned since 2000.


Documentation is old:

This only applies to shrubbery.net's version; I didn't look at the 
other ones that closely.  While all of it appears valid, one Q&A in the 
FAQ was about IOS 10.3/11.0, and the performance questions use a 
SPARCstation 2 as the target machine.  There isn't an INSTALL or README, 
just the FAQ/CHANGES/COPYING (and a tac_plus.conf manpage), so the 
learning curve for new users is probably pretty steep.  Also, there 
isn't a clear maintainer.  The best email address I found was listed in 
the tacacs+.spec file, for packaging on RPM systems.


If you hit the website they give some hints, with some outdated though 
still functional links, and they list the official email as 
tac_p...@shrubbery.net



Conclusion:

Did everyone already know this but me?  If so, have you moved to 
Kerberos?  Can Kerberos do everything TACACS+ was doing for router 
authorization?  I've got gear that only supports RADIUS and TACACS+, 
so in some cases I have no choice but to use one of those, neither of 
which I would trust over an unencrypted wire.  If TACACS+ isn't a dead 
end then it needs a push to bring the protocol to a new version.  There 
are big-name vendors involved in making supported clients and servers.  
There should 

Re: apt-mirror near ashburn

2013-10-07 Thread Robert Drake
My suggestion is to use http://http.debian.net/debian as your source.  
It uses GeoIP to figure out the closest mirror to you.



Code is on github if you're interested in how it works.
https://github.com/rgeissert/http-redirector


On 10/5/2013 11:11 PM, Christopher Morrow wrote:

On Sat, Oct 5, 2013 at 11:06 PM, Randy Bush ra...@psg.com wrote:

{{{
$apt-get install libsm6
...
After this operation, 717 kB of additional disk space will be used.
Do you want to continue [Y/n]?
WARNING: The following packages cannot be authenticated!
   x11-common libice6 libsm6
Install these packages without verification [y/N]?
}}}

hm.  ft meade can't even hack the checksums?

possibly the mirror is just misbehaving :( it's been a while since i
ran it, so I'm honestly not sure what's going on with it these days.

-chris





NANOG58 bad helo for hotel reservation emails

2013-05-07 Thread Robert Drake
Sorry for the noise, but I thought this might be of interest to anyone 
waiting for their hotel confirmation:


NOQUEUE: reject: RCPT from feport01.hiltonhhonors.net[63.122.201.171]: 
450 4.7.1 <ironport.hhonorscrm.net>: Helo command rejected: Host not 
found; from=<waldorfastoriahotelsreso...@res.hilton.com> to=<excised> 
proto=ESMTP helo=<ironport.hhonorscrm.net>


So if you run your own server and/or are capable of compensating for a 
misconfigured MTA, you should add an exception for 
feport01.hiltonhhonors.net.
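If the server in question is Postfix (an assumption; other MTAs have equivalent mechanisms), the exception could look something like this, with the lookup table compiled via postmap:

```
# /etc/postfix/main.cf (sketch)
smtpd_helo_restrictions =
    check_helo_access hash:/etc/postfix/helo_access,
    reject_unknown_helo_hostname

# /etc/postfix/helo_access -- run "postmap /etc/postfix/helo_access" after editing
ironport.hhonorscrm.net    OK
```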







Re: Programmers can't get IPv6 thus that is why they do not have IPv6 in their applications....

2013-01-31 Thread Robert Drake


On 1/30/2013 9:10 PM, David Barak wrote:


IPv6 has been launched on all Arris DOCSIS 3.0 C4 CMTSes, covering
over 50% our network.


The update you sent is lovely, except I can tell you that the one (also an 
Arris, running DOCSIS 3.0) which was installed in late October in my house in 
Washington simply does not run v6 with the pre-installed load.
In this particular case, "C4 CMTSes" is the important bit of that 
update.  The CMTS is what your modem connects to on the other end.  You 
might be connected to a different type of CMTS which doesn't support, or 
isn't configured for, IPv6.  You wouldn't be able to know that without 
contacting someone with good knowledge of the network at Comcast, though.


It could be, as you say, that the modem only supports it when wireless 
is disabled and that is the only thing stopping it from working for you.  
If that were the case I would ask for a different modem, or go buy one 
that you think will work.






Re: DNS resolver addresses for Sprint PCS/3G/4G

2013-01-22 Thread Robert Drake


On 1/16/2013 7:13 PM, Jay Ashworth wrote:

I've noticed, for quite some time, that there seems to be a specific category
of slow that I see in using apps on my HTC Supersonic/Sprint EVO, on both
their 3G and 4G networks, and I wonder if it isn't because the defined
resolvers are 8.8.4.4 and 8.8.8.8, which aren't *on* Sprint's networks.

Does anyone have, in their magic bag of tricks, IP addresses for resolvers
that *are* native to that network, that they wouldn't mind whispering in my
ear?
Not directly PCS, but ns[1-3].sprintlink.net are used for their 
customers' recursive DNS.  They're anycast, so you should be able to 
just use the first IP (204.117.214.10).  I don't know what these are, 
because I haven't been a customer for years, but it appears they may 
have servers specific to PCS as well:


PING ns1.sprintpcs.net (69.43.160.200) 56(84) bytes of data.
rdrake@terminal:~$ host ns2.sprintpcs.net
ns2.sprintpcs.net has address 69.43.160.200
rdrake@terminal:~$ host ns3.sprintpcs.net
ns3.sprintpcs.net has address 69.43.160.200

well, at least one server.

Except:

69.43.160.200 traces through to an XO customer in California:

Castle Access Inc ARIN-CASTLE-ALLOC (NET-69-43-128-0-1) 69.43.128.0 - 
69.43.207.255


Perhaps they do some special things on the PCS side to make this DNS 
server work.


I would use 204.117.214.10.


Offlist is fine.  Yes, I owe the list summaries on a couple earlier
questions; I still have the details to write from.  :-}

Cheers,
-- jra





Re: Question about DOCSIS DHCP vs ARP

2013-01-12 Thread Robert Drake



On Friday, January 11, 2013 8:29:23 PM, Jean-Francois Mezei wrote:

Many thanks. In particular, you need cable-source-verify dhcp to
prevent self assigned IPs that are unused by neighbours.

Is this something that is now basically a default for all cable
operators ? Or does this command add sufficient load to the CMTS that
some cable operators choose to not use it for performance purposes ?



Nobody would turn it off for that reason.  They might fail to turn it 
on if they haven't read best practices in the last 10 years.  It's 
pretty much part of the fundamental set of commands turned on to prevent 
cable modem theft (along with requiring BPI+ and other things).
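On a Cisco CMTS, that fundamental set looks roughly like the following (a sketch only; exact command names and availability vary by platform and IOS release):

```
interface Cable5/0/0
 ! check claimed source IPs against the DHCP server's lease database
 cable source-verify dhcp
 ! refuse to forward traffic for modems that haven't completed BPI/BPI+
 cable privacy mandatory
```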


Here's an article I just found searching for docsis bpi+

http://volpefirm.com/blog/security/hacking-docsis-cable-modems/



What happens when a CMTS reboots and has an empty database of DHCP
leases ? Does it then query the DHCP server for every IP/MAC it sees
that it doesn't yet know about ?



Most of the time when a CMTS reboots, the modems don't even get to the 
point of failing due to DHCP issues.  In any case the CMTS would ask the 
DHCP server and be happy with its reply, since it's the equivalent of a 
new modem coming online.


Most of the time the modems would fall into reject(pk) due to the 
public key negotiation no longer being valid now that the CMTS has 
rebooted.  To fix that you could either wait for the modems to try 
again or run "clear cable modem reject delete" if it's a Cisco CMTS.






Re: the little ssh that (sometimes) couldn't

2012-10-29 Thread Robert Drake



On 10/29/2012 02:54 PM, Jon Lewis wrote:


Bush league.  I debugged a similar issue on Sprint's network about 15
years ago, also nailing it down to which router/router hop had the problem


When I was working for Sprint about 12 years ago, we had a circuit where 
the customer complained that we were blocking executable downloads.


We essentially dismissed his complaints because they sounded ridiculous; 
we would test his T1 and it would show everything fine.  I was willing 
to entertain his concern because it sounded weird and he had a UNIX box 
I could log in to.


Running wget I saw the same issues.  If I zipped a file I could download 
it without issue; anything that was an exe would not come through.


We narrowed it down to 2-4 bytes of the exe header that the circuit just 
wouldn't pass.  Called the local telco and had them test the circuit 
from the customer prem, they found errors on the reverse.


We fixed it and he could download executables again.  I got an award for 
persistence and the customer canceled his account.





Re: Update from the NANOG Communications Committee regarding recent off-topic posts

2012-08-02 Thread Robert Drake

On 7/30/2012 1:42 PM, Patrick W. Gilmore wrote:

I'm sorry Panashe is upset by this rule.  Interestingly, "Your search - Panashe 
Flack nanog - did not match any documents."  So my guess is that a post from that 
account has not happened before, meaning the post was moderated yet still made it through.

Has anyone done a data mining experiment to see how many posts a month are from 
new members?  My guess is it is a trivial percentage.



Ignoring many harder to determine things like who has changed their 
email address and reducing it to simple shell commands, I got this:


for i in `cat ../nanog_archive_index.html | grep txt | cut -f2 -d\"`; 
do wget "http://mailman.nanog.org/pipermail/nanog/$i"; done
du -sh = 41M (uncompressed = 100M).  That seems small for all the mail 
since 2007, but I'd rather use an official archive so people can 
duplicate results and refine things.

grep -h "^From: " * | sort | uniq -c | sort -nr

First of all I will say Owen is winning by a fair margin:

   1562 From: owen at delong.com (Owen DeLong)
929 From: randy at psg.com (Randy Bush)
775 From: Valdis.Kletnieks at vt.edu (Valdis.Kletnieks at vt.edu)
688 From: morrowc.lists at gmail.com (Christopher Morrow)
621 From: jbates at brightok.net (Jack Bates)
558 From: jra at baylink.com (Jay Ashworth)
480 From: gbonser at seven.com (George Bonser)
450 From: patrick at ianai.net (Patrick W. Gilmore)
446 From: cidr-report at potaroo.net (cidr-report at potaroo.net)

Total count:
grep -h "^From: " * | wc -l
54166

# Totals for < 10 contributors
for i in 1 2 3 4 5 6 7 8 9; do grep -h "^From: " * | sort | uniq -c | 
sort -nr | grep "  $i " | wc -l; done

3129

552
319
208
157
131
103
94

Total for contributors with fewer than 10 posts:  5804

Percentages:  5804/54166 ≈ 10.7% of posts from low contributors.

# shows the number of people who've contributed that number of times.
grep -h "^From: " * | sort | uniq -c | sort -nr | awk '{print $1}' | 
uniq -c | sort -nr


# another interesting thing to look at is posts by month per user 
(dropping the -h from grep):

grep "^From: " * | sort | uniq -c | sort -nr

# not the most efficient, but tells you who posted the most in a month:
for i in *; do grep "^From: " * | sort | uniq -c | sort -nr | grep "$i" | 
head -n 1; done


# Per month, how many single-post contributions happen/total.  The 
numbers can be higher here since people who posted in a different month 
may still be counted as a new contributor
for i in *; do echo -n "$i "; grep "^From: " $i | sort | uniq -c | 
sort -nr | grep "  1 " | wc -l | tr '\n' '/'; grep "^From: " $i | 
wc -l; done
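The caveat in that last loop (a poster counts as "new" relative to one monthly file only) can be fixed by tracking senders across the whole archive.  A small Python sketch, assuming you've already reduced each monthly file to its list of From: senders:

```python
def first_timers_by_month(months):
    """months: chronologically ordered list of (month, [sender, ...]).
    Returns {month: (new_posters, total_posts)}, counting each sender
    as new only in the first month they appear anywhere in the archive."""
    seen = set()
    stats = {}
    for month, senders in months:
        new = 0
        for sender in senders:
            if sender not in seen:
                seen.add(sender)
                new += 1
        stats[month] = (new, len(senders))
    return stats
```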






Re: Outgoing SMTP Servers

2011-10-25 Thread Robert Drake

On 10/25/2011 11:17 AM, Owen DeLong wrote:

But that applies to port 25 also, so, I'm not understanding the difference.


Other people running open port 587s tends to be quite self-correcting.


At this point, so do open port 25s.


The difference is in the intentions of the user.  All SMTP servers are 
supposed to accept incoming email to their domain on port 25; if they 
get a connection from a random IP they can check SPF, DKIM, and DNS 
blacklists, but that's all they can do to judge the reputation of the 
sender.  Blocking port 25 amounts to an ISP-maintained list of who is 
allowed to send SMTP.


Port 587 is supposed to be used only for MUA-to-MTA communication.  If 
mx.hello.com gets a 587 connection from anyone and they say MAIL FROM: 
anything other than hello.com, the server can drop that as wrong.
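In Postfix terms (an assumption; the thread doesn't name an MTA), the submission service enforces exactly that by requiring TLS and SASL authentication before accepting mail:

```
# master.cf (sketch)
submission inet n - y - - smtpd
  -o smtpd_tls_security_level=encrypt
  -o smtpd_sasl_auth_enable=yes
  -o smtpd_client_restrictions=permit_sasl_authenticated,reject
```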


Yes, it's nasty and dumb, but it works better than SPF, DKIM, and other 
technology right now.  Maybe SPF could be extended into reverse zones 
and who they're permitted to send mail for (too many ISPs don't let 
even business users update reverse records); maybe SPF or a protocol 
like it will become required in the future so you know who can be 
trusted when they connect; or reputation or greylisting will take off, 
except that having to store reputation for every IP and every /64 makes 
the database hard to maintain.  I think SPF with DKIM (with the caveats 
worked out) would be the best solution, but anything that requires a 
flag day for SMTP basically isn't gonna happen.
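For reference, a published SPF record of the kind being discussed looks like this (names and addresses are placeholders):

```
example.com.   IN TXT   "v=spf1 mx ip4:192.0.2.0/24 -all"
```

It authorizes the domain's MX hosts and one netblock to send mail for example.com, and tells receivers to hard-fail everything else.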




Owen


Robert




Re: Outgoing SMTP Servers

2011-10-25 Thread Robert Drake

On 10/25/2011 10:19 PM, Blake Hudson wrote:

I didn't see anyone address this from the service provider abuse
department perspective. I think larger ISP's got sick and tired of
dealing with abuse reports or having their IP space blocked because of
their own (infected) residential users sending out spam. The solution
for them was to block the spam. The cheapest/easiest way to do this was
to block TCP 25 between subs and the internet, thus starting a trend. If
587 becomes popular, spammers will move on and the same ISPs that
blocked 25 will follow suit.
Actually, it doesn't work that way, because of what submission is 
designed to do.  I just posted another email about it so I won't repeat 
it, but basically you should think of blocking port 25 as a list of 
who's authorized to send email, not as a port we just killed for fun 
while we wait for the spammers' next move.




A better solution would have been to prevent infection or remove
infected machines from the network(strong abuse policies, monitoring,
give out free antivirus software, etc). Unfortunately, several major
players (ATT, for example) went down the road of limiting internet
access. Now that they've had a taste, some of them feel they can block
other ports or applications like p2p (Comcast), Netflix (usage based
billing on Bell, ATT, others).


As an ISP, I liked seeing abuse complaints drop to near zero when we did 
this.  We spent about a month fixing things for some people who don't 
use webmail (most regular customers don't use an MUA anymore), and had 
our share of third-party MTAs that refused to turn on submission (no 
idea why; these were usually business-class comp accounts, so we moved 
them to a business pool and dropped their ACLs), but overall we probably 
had fewer than 100 calls from doing this and it made our lives easier.


Now I know you said you wanted us to be preventative and to treat the 
problem, but that's just impractical.  We got 5000 abuse emails a month 
for (at the time) ~20k customers.  Were 1/4 of them spamming?  No, but 
the ones that were spamming generated automated reports from everyone.


None of them were ever legitimate spammers.  They were all users who 
clicked on a funny puppy picture their mom sent, or some other thing 
that set their computer on fire and had it spitting out gobs of porno 
links to everyone it could find.  So it wasn't a set of problem users, 
it was just a random sampling of everyone's not-so-PC-savvy relatives.


So, let's say we wrote software to collate those reports and got it down 
to 30 legitimate people (if we're lucky).  Do we block their IPs and 
wait for them to call in, then send them to Geek Squad?  Do we try to 
fix their infected PC over the phone?  At this point, no matter what we 
do, they're going to get sent to a tier-2 tech, which means at least 2 
phone calls, and whatever revenue we might have gotten from them is gone 
for quite a while.  We can have one guy tied up all day every day trying 
to process abuse issues, or we can just shut down port 25 and the 
problem magically disappears.


Is their laptop disinfected?  No, but they can no longer infect any 
other customer in our network or anyone else's network, thus reducing 
global infections.  We've made the world a better place and saved 
ourselves some money.  Unfortunately, the first coffee shop they go to 
that doesn't block port 25 is going to see a new spam source, but we 
can't save them all.


It may be possible in the future we'll have a more convenient method to 
police PC's but the network access controls that exist right now aren't 
flexible enough to allow different networks to set different policies, 
so if it's a work laptop and they have a domain administrator then 
802.1x might not be possible, and mandating they have firewall or 
anti-virus turned on (or a specific version/that it's updated, etc) 
might not be possible.


Most customers rail against controls anyway.  You don't want port 25 
blocked, so how would you feel if we mandated you install our ad-ware 
McAfee client and scanned your computer every 15 minutes?  And when you 
think about it, if the big boys gave up, blocked port 25, and stopped 
offering free anti-virus and a backrub when you call in, how could we 
afford to compete with that?




Unfortunately, I don't see the trend reversing. I'm afraid that Internet
freedoms are likely to continue to decline and an Unlimited Internet
experience won't exist at the residential level in 5+ years.


I hope that you're exaggerating for effect, but you might be right.  
Small providers have trouble competing right now because of all the 
advantages the carriers have in the market.  Some of the ways small 
providers can distinguish themselves is through support, or offering 
things a big player won't.  So in some cases it's better to find a 
regional ISP and go with them because they may work with you, and they 
may be a little more lenient with some things.


I don't think port 25 is worth making a stand on 

Re: Yahoo and IPv6

2011-05-14 Thread Robert Drake

On 5/10/2011 12:57 AM, Jeff Wheeler wrote:

Your suggestion has two main disadvantages:
1) it doesn't work on some platforms, because input ACL won't stop ND
learn/solicit -- obviously this is bad
2) it requires you to configure a potentially large input ACL on every
single interface on the box, and adjust that ACL whenever you
provision more IPv6 addresses for end-hosts -- kinda like not having a
control-plane filter, only worse



Might need to rewrite some portion of ND to do this, but couldn't a 
cookie be encoded in the ND packet so that no state is kept?  That 
should reduce the problem to a packet flood, which everyone already 
deals with now.


Sorry if this has been suggested and shot down before.  The ND problems 
keep being mentioned, and I never see this proposed, even though it 
seems like an obvious solution.
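A hypothetical sketch of the stateless-cookie idea (names and field choices are illustrative, not from any RFC): the router derives a short verifier from a local secret, the solicited target, and a coarse time interval, and on the response it just recomputes and compares, keeping no per-neighbor state:

```python
import hashlib
import hmac

SECRET = b"router-local-secret"  # per-router secret, rotated periodically

def nd_cookie(target: bytes, interval: int) -> bytes:
    # 8-byte verifier bound to the solicited target and a time interval
    msg = target + interval.to_bytes(4, "big")
    return hmac.new(SECRET, msg, hashlib.sha256).digest()[:8]

def nd_verify(target: bytes, interval: int, cookie: bytes) -> bool:
    # accept the current or previous interval to tolerate boundary crossings
    return any(hmac.compare_digest(nd_cookie(target, i), cookie)
               for i in (interval, interval - 1))
```

An attacker flooding solicitations then costs the router only hash computations, not neighbor-cache entries, which is the packet-flood reduction described above.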


Robert