RE: Broadcast television in an IP world

2017-11-17 Thread shawn wilson
Besides Netflix, does anyone else offer CDN boxes for their services?

I'm also guessing that most content won't benefit much from multicast to
homes?

I can see where multicast benefits sports and news (and probably catching
commercials for people). But in a world where I'm more than happy to pay
Amazon $25-40 a show/season to avoid commercials, I'm guessing
live/broadcast TV will get even less popular (I get news via YouTube - so
that's not even live for me anymore).

On Nov 17, 2017 18:03, "Luke Guillory"  wrote:

> This used to be the case.
>
> While it might lower OPEX, that surely won't result in lower retrans; it
> will just be more profit for them.
>
> We're down as well on video subs; this is 99% due to rising prices.
>
> This is where it's heading for sure. In the end it will cost more as well,
> since each will be charging more than the per-sub rates we're getting
> charged. They'll have to in order to keep revenue the same.
>
> When ESPN offers an OTT product I have no doubt it will be near the $20
> per month, for 5 channels or so?
>
>
>
> Luke Guillory
> Vice President – Technology and Innovation
>
> Tel:985.536.1212
> Fax:985.536.0300
> Email:  lguill...@reservetele.com
>
> Reserve Telecommunications
> 100 RTC Dr
> Reserve, LA 70084
>
> 
>
>


Re: A perl script to convert Cisco IOS/Nexus/ASA configurations to HTML for easier comprehension

2016-10-12 Thread shawn wilson
CPAN? cpanminus? Or just download it [1] - there's probably an
ExtUtils::MakeMaker Makefile.PL or a similar Build.PL to build a makefile or
just install it for you. There's a #perl channel on freenode if you need
more and Google doesn't get you set.

1.
http://search.cpan.org/~chromatic/Modern-Perl-1.20161005/lib/Modern/Perl.pm
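
In other words (assuming you already have cpan or cpanminus on the box),
any of these should do it:

cpan Modern::Perl          # CPAN shell
cpanm Modern::Perl         # cpanminus
# or by hand from the unpacked tarball, depending on whether it ships a
# Makefile.PL or a Build.PL:
perl Makefile.PL && make && make test && make install
perl Build.PL && ./Build && ./Build test && ./Build install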

On Oct 12, 2016 8:02 PM, "Lee"  wrote:

> On 10/12/16, Jason Hellenthal  wrote:
> > Give these a shot. https://github.com/jlmcgraw/networkUtilities
> >
> > I know J could use a little feedback on those as well but all in all they
> > are pretty solid.
>
> Where does one get Modern/Perl.pm ?
>
> Can't locate Modern/Perl.pm in @INC (you may need to install the
> Modern::Perl module) (@INC contains: /tmp/local/lib/perl5
> /usr/lib/perl5/site_perl/5.22/i686-cygwin-threads-64int
> /usr/lib/perl5/site_perl/5.22
> /usr/lib/perl5/vendor_perl/5.22/i686-cygwin-threads-64int
> /usr/lib/perl5/vendor_perl/5.22
> /usr/lib/perl5/5.22/i686-cygwin-threads-64int /usr/lib/perl5/5.22 .)
> at /tmp/iosToHtml.pl line 87.
> BEGIN failed--compilation aborted at /tmp/iosToHtml.pl line 87.
>
> Lee
>
>
>
> >
> >> On Oct 11, 2016, at 08:48, Lee  wrote:
> >>
> >> On 10/10/16, Jay Hennigan  wrote:
> >>> On 10/6/16 1:26 PM, Jesse McGraw wrote:
>  Nanog,
> 
> (This is me scratching an itch of my own and hoping that sharing it
>  might be useful to others on this list.  Apologies if it isn't)
> 
>   When I'm trying to comprehend a new or complicated Cisco router,
>  switch or firewall configuration an old pet-peeve of mine is how
>  needlessly difficult it is to follow deeply nested logic in
> route-maps,
>  ACLs, QoS policy-maps etc etc
> 
>  To make this a bit simpler I’ve been working on a perl script to
>  convert
>  these text-based configuration files into HTML with links between the
>  different elements (e.g. To an access-list from the interface where
>  it’s
>  applied, from policy-maps to class-maps etc), hopefully making it
>  easier
>  to follow the chain of logic via clicking links and using the
>  forward
>  and back buttons in your browser to go back and forth between command
>  and referenced list.
> >>>
> >>> Way cool. Now to hook it into RANCID
> >>
> >> It looks like what I did in 2.3.8 should still work - control_rancid
> >> puts the diff output into $TMP.diff so add this bit:
> >> grep "^Index: " $TMP.diff | awk '/^Index: configs/{
> >> if ( ! got1 ) { printf("/usr/local/bin/myscript.sh "); got1=1; }
> >> printf("%s ", $2)
> >> }
> >> END{ printf("\n") }
> >> ' >$TMP.doit
> >> /bin/sh $TMP.doit >$TMP.out
> >> if [ -s $TMP.out ] ; then
> >>   .. send mail / whatever
> >> rm $TMP.doit $TMP.out
> >> fi
> >>
> >> Regards,
> >> Lee
> >
> >
> > --
> >  Jason Hellenthal
> >  JJH48-ARIN
>


Re: CALEA

2016-05-09 Thread shawn wilson
The OP is also asking someone to register a throwaway email, subscribe, and
respond "yes" so that the owner can't be tracked to their employer. That's
kind of a steep ask for something that's almost moot.
On May 9, 2016 23:16, "Greg Sowell"  wrote:

I haven't had a request in ages...back then all of the links worked.
On May 9, 2016 3:02 PM, "Jeremy Austin"  wrote:

> On Thu, May 5, 2016 at 4:43 PM, Justin Wilson  wrote:
>
> > What is the community hearing about CALEA?
> >
>
> Crickets?
>
>
> --
> Jeremy Austin
>
> (907) 895-2311
> (907) 803-5422
> jhaus...@gmail.com
>
> Heritage NetWorks
> Whitestone Power & Communications
> Vertical Broadband, LLC
>
> Schedule a meeting: http://doodle.com/jermudgeon
>


Re: improved NANOG filtering

2015-10-27 Thread shawn wilson
AFAIK (IDK how either) this hasn't been a big issue in the past few years.
Is it really worth worrying about? I notified the MARC admin and it was
removed there within a few hours too - a dozen easily tracked messages in a
few hours and a few hours after that, it's done (or more like, filtered).

Not sure how much actually happens on the backend to keep this list as
clean as it appears. But if everyone on that end of things decided to grab
a beer at the same time and we have to suffer a little for a badly timed
cold one every few years, I'm good with the status quo.
On Oct 26, 2015 10:58 PM, "Barry Shein"  wrote:

>
> What's needed is 20 (pick a number) trusted volunteer admins with the
> mailman password whose only capacity is to (make a list: put the list
> into moderation mode, disable an acct).
>
> Obviously it would be nice if the software could help with this
> (limited privileges, logging) but it could be done just on trust with
> a small group.
>
> Another list to announce between them ("got it!") would be useful
> also.
>
> --
> -Barry Shein
>
> The World  | b...@theworld.com   |
> http://www.TheWorld.com
> Purveyors to the Trade | Voice: 800-THE-WRLD| Dial-Up: US, PR,
> Canada
> Software Tool & Die| Public Access Internet | SINCE 1989 *oo*
>


Fw: new message

2015-10-26 Thread shawn wilson
Hey!



New message, please read <http://kovvali.org/matter.php?sj44>



shawn wilson



---
This email has been checked for viruses by Avast.
https://www.avast.com/antivirus


Fw: new message

2015-10-26 Thread shawn wilson
Hey!



New message, please read <http://funezy.com/outside.php?rl5>



shawn wilson



---
This email has been checked for viruses by Avast.
https://www.avast.com/antivirus


Re: inexpensive url-filtering db

2015-10-16 Thread shawn wilson
On Oct 16, 2015 6:52 AM, "MKS"  wrote:

>
> Now I'm looking for an inexpensive url-filtering database, for integration
> into a squid like solution.

> Perhaps there is another mailing-list more relevant for this kind of
issues?

Squid-like or Squid? I'd ask on the Squid list if there's nothing here.


Re: Residential VSAT experiences?

2015-06-26 Thread shawn wilson
On Jun 22, 2015 6:14 PM, William Herrin b...@herrin.us wrote:



 Two-way satellite systems based on SV's in geostationary orbit (like
 the two you're considering) have high latency. 22,000 miles out,
 another 22,000 miles back and do it again for the return packet.

Just a minor nitpick - that's 22,300 miles above the equator at sea level.
You're probably closer to 22,500 miles away from the bird (as could your
uplink). That's just rough math adding the tangent of 1500 miles from the
equator in my head (plus the tangent of the curve distance from that base
line and angle of the bird :) ).
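
For the latency itself, the rough numbers (taking 22,300 mi and c at about
186,000 mi/s, and ignoring any processing on either end) come out to:

  22,300 / 186,000 ≈ 0.12 s  ground to bird, one way
  up + down for the outbound packet ≈ 0.24 s
  same again for the reply ≈ 0.48 s minimum round trip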


Re: REMINDER: LEAP SECOND

2015-06-23 Thread shawn wilson
On Jun 23, 2015 6:26 AM, Nick Hilliard n...@foobar.org wrote:



 Blocking NTP at the NTP edge will probably work fine for most situations.
 Bear in mind that your NTP edge is not necessarily the same as your
network
 edge.  E.g. you might have internal GPS / radio sources which could
 unexpectedly inject the leap second.  The larger the network, the more
 likely this is to happen.  Most organisations have network fossils and ntp
 is an excellent source of these.  I.e. systems which work away for years
 without any problems before one day accidentally triggering meltdown
 because some developer didn't understand the subtleties of clock
monotonicity.


NTP causes jumps - not skews, right?


Re: REMINDER: LEAP SECOND

2015-06-22 Thread shawn wilson
On Mon, Jun 22, 2015, 08:29 Stephane Bortzmeyer bortzme...@nic.fr wrote:

 On Mon, Jun 22, 2015 at 01:15:41PM +0100,
  Tony Finch d...@dotat.at wrote
  a message of 15 lines which said:

  The problems are that UTC is unpredictable,

 That's because the earth rotation is unpredictable. Any time based on
 this buggy planet's movements will be unpredictable. Let's patch it
 now!

 So, what we should do is make clocks run slower for half of the year
(and then speed back up) so that we're really in line with earth's
rotational time. I mean, we've got the computers to do it (I think most RTCs
only go down to thousandths, so it'll still need a little skewing, but I'm
sure we'll manage).

PS - if anyone actually does this, I'm going postal.


Re: REMINDER: LEAP SECOND

2015-06-20 Thread shawn wilson
On Jun 19, 2015 2:05 PM, Saku Ytti s...@ytti.fi wrote:

 On (2015-06-19 13:06 -0400), Jay Ashworth wrote:

 Hey,

  The IERS will be adding a second to time again on my birthday;
 
  2015-06-30T23:59:60

 Hopefully this is last leap second we'll ever see. Non-monotonic time is
an
 abomination and very very few programs measuring passage of time are
correct.
 Even those which are, usually are not portable, most languages do not even
 offer monotonic time in standard libraries.
 Canada, China, England and Germany, shame on you for opposing
leapsecondless
 UTC.

 Next year hopefully GPSTIME. TAI and UTC are the same thing, with
different
 static offset.


Unlikely but here's hoping. I mean letting computers figure out slower
earth rotation on the fly would seem more accurate than leap seconds
anyway. And then all of us who do earthly things and would like simpler
libraries could live in peace.


Re: REMINDER: LEAP SECOND

2015-06-20 Thread shawn wilson
On Sat, Jun 20, 2015, 14:16 Harlan Stenn st...@ntp.org wrote:

 shawn wilson writes:
  ... I mean letting computers figure out slower earth rotation on the
  fly would seem more accurate than leap seconds anyway. And then all of
  us who do earthly things and would like simpler libraries could live
  in peace.

 Really?  Have you looked in to those calculations, and I'm only talking
 about the allegedly predictable parts of those calculations, not things
 like the jetstream, the circumpolar currents, or earthquakes.


Ok, forget that point - AFAIK, the only things that matter wrt time are
agreement on interval/counter and epoch, and stability. Right now we only
have agreement on interval.

So while I'd prefer a consistent epoch and counter, I'll live with whatever
as long as we have broad agreement and stability (so this doesn't hit
NANOG with an "uh oh" every time).


Re: OPM Data Breach - Whitehouse Petition - Help Wanted

2015-06-18 Thread shawn wilson
On Jun 17, 2015 8:56 PM, Ronald F. Guilmette r...@tristatelogic.com
wrote:



 *)  The Director of the Office of Personnel Management, Ms. Katherine
 Archuleta, was warned, repeatedly, and over several years, by her
 own department's Inspector General (IG) that many of OPM's systems
 were insecure and should be taken out of service.  Nonetheless, as
 revealed during congressional testimony yesterday, she overruled
 and ignored this advice and kept the systems online.

 Given the above facts, I've just started a new Whitehouse Petition, asking
 that the director of OPM, Ms. Archuleta, be fired for gross incompetence.
 I _do_ understand that the likelihood of anyone ever getting fired for
 incompetence anywhere within the Washington D.C. Beltway is very much of
 a long shot, based on history, but I nonetheless feel that as a U.S.
 citizen and taxpayer, I at least want to make my opinion of this matter
 known to The Powers That Be.


Idk whether she was wrong or not. They were running COBOL systems - I'm
guessing AS/400 (maybe even newer zSeries) which are probably supporting
some db2 apps. They also mention this is on a flat network. So stopping the
hack once it was found was probably real interesting (I'm kinda impressed
they minimized downtime as much as they did really).

I'm ok saying they were incompetent but not too sure you can do *this* much
to mess up a network in 2 years (her tenure). I'd actually be interested
in a discussion of how much you can possibly improve / degrade on a network
that big from a management position.

If the argument is that she should've shut down the network or parts of it
- I wonder if anyone of you who run Internet providers would even shut down
your email or web servers when, say, heartbleed came out - those services
aren't even a main part of your business. One could argue that it would've
been illegal for her to shut some of that stuff down without an act of
Congress.

I'm not saying you're dead wrong. Just that I don't have enough information
to say you're right (and if you are, she's probably not the only head you
should call for).


Re: OPM Data Breach - Whitehouse Petition - Help Wanted

2015-06-18 Thread shawn wilson
On Thu, Jun 18, 2015 at 1:15 PM, Nick B n...@pelagiris.org wrote:
 Having worked for several departments like this, I can assure you her
 flustsration was not about her inability to hire competent people or the
 lack of her superiors to prioritize the modernization project.  Unless you
 have worked for the Federal Government it's almost impossible to understand
 the mindset - Politics is job #1, Office Politics is job #2, doing your
 job is not a priority.  The issue here was 100% looking bad - the worst
 possible offense a political appointee can commit.  Firing this one person
 is pointless, she's one of 1,000,000 clones, not a one should be employed.
 I wish I had some simple solution, but I don't, it's going to require years,
 probably decades, of hard work by a motivated and skilled team.  Also, a
 stable of unicorns.


Mmmm, most people (gov or private) do their jobs - the problem seems
to be policy makers and getting money for things that no one is going
to see (security). This has been a well-documented issue in the
private sector, but idk that anyone has really said how bad gov is (I'd
suspect gov is worse at this point).

My point was that idk you can blame someone for not implementing
security in a place that big w/in 2 years. I'd've liked to have seen a
roadmap, but I don't suppose you want your attackers to know that,
so...


Re: eBay is looking for network heavies...

2015-06-11 Thread shawn wilson
On Jun 11, 2015 7:07 AM, jim deleskie deles...@gmail.com wrote:

 There is a good reason there aren't LOTS of good neteng in the 30-35 or
 under 30 range with lots of experience.  It's called the hell we went through
 for a while after 2000 working in this industry.  Many of us lost jobs and
 couldn't find new ones.  I know talented folks that had to go to delivering
 pizzas (not to slag pizza delivery folks) to support themselves and their
 families. Some folks ended up leaving the industry because of it and I'm
 sure lots of people chose to not get into the field seeing no jobs.  This
 type of event causes a hole that takes a long time to correct.


So I'm at your early 30s mark too. I've read all y'all on getting in by
helping grow the internet and not thinking these things still exist. Two
thoughts:
1. Heard of IPv6? Wasn't made just to keep us employed.
2. I'd give anything to have replaced my Encarta CD with Wikipedia in
middle school. I'd have killed to replace my Motorola with an android or
iPhone in high school. To not have a heavy ass bag of books hurting my hand
and just grip my kindle. And to have had the ability to hook up a phone
line to the 8088 or apple // in elementary school would've been awesome.

I'm sure if you look you'll find similar conversations years earlier about
"I got in by helping lay the groundwork for Unix/C/DARPANet." IDK what
future generations will do to get a job at my level. You aren't the
smartest person on the net and not the only person with luck to be in the
right place.

I hear about teachers using Wikipedia and podcasts as teaching aids and I
think they wouldn't even let me cite Wikipedia in college. Feel sorry for
people if you want - I'll help people if I can but never do I think I had
it better.


Re: eBay is looking for network heavies...

2015-06-08 Thread shawn wilson
On Jun 8, 2015 10:11 PM, Shane Ronan sh...@ronan-online.com wrote:


 Certs have ruined the industry.

Certs have made the industry more interesting. After all, without certs,
we'd have less stupid to point at and laugh (or scream). And HR screeners
would need to know something about the position they're screening.


RE: eBay is looking for network heavies...

2015-06-07 Thread shawn wilson
On Jun 7, 2015 4:12 AM, Joshua Riesenweber joshua.riesenwe...@outlook.com
wrote:


 (In my experience it takes more time to study a certification track than
to learn just what you need to get a job done.)


Stated differently: no job is going to teach you how to pass a cert, and no
cert is going to teach you a job. One can help with the other, but different
skills are involved.


Re: eBay is looking for network heavies...

2015-06-07 Thread shawn wilson
On Jun 7, 2015 10:59 PM, Jay Ashworth j...@baylink.com wrote:


 I don't
 RTFM, I google.  It's often faster, so many of TFMs are online now.


Until Google supports regex and some of the duckduckgo module features,
I'll be faster getting to a reference than you will on Google. Notice I said
reference, not an answer - sometimes you care more about the background
than the answer (like if you're filing a bug).

man / perldoc / rdoc / :help / etc. is where it's at (and allows me to answer
lots of questions with 'man foo | grep bar' - which is still bad but doesn't
have such a negative feeling that lmgtfy or a Google link does). Also
notice I intentionally left out the failed 'info' pages :)

Point here is that Google is probably the wrong answer here.


Re: eBay is looking for network heavies...

2015-06-07 Thread shawn wilson
On Jun 8, 2015 1:42 AM, shawn wilson ag4ve...@gmail.com wrote:


 On Jun 7, 2015 10:59 PM, Jay Ashworth j...@baylink.com wrote:
 

  I don't
  RTFM, I google.  It's often faster, so many of TFMs are online now.
 

 Until Google supports regex and some of the duckduckgo module features,
I'll be faster getting to a reference than you will on Google. Notice I said
reference, not an answer - sometimes you care more about the background
than the answer (like if you're filing a bug).

 man / perldoc / rdoc / :help / etc. is where it's at (and allows me to answer
lots of questions with 'man foo | grep bar' - which is still bad but doesn't
have such a negative feeling that lmgtfy or a Google link does). Also
notice I intentionally left out the failed 'info' pages :)

 Point here is that Google is probably the wrong answer here.

Oh, this is NANOG, and manufacturers have different levels of documentation,
so I guess s/wrong/incomplete/ is more apt.


Re: eBay is looking for network heavies...

2015-06-06 Thread shawn wilson
On Fri, Jun 5, 2015 at 9:57 PM, James Laszko jam...@mythostech.com wrote:
 I asked one of my guys to tracert in windows for something and he executed 
 pathping.  I have never seen that in 25 years  Go figure!


Yep, I learned something new (though IDK I'll ever use it - I'm
guessing it's useless trivia, esp since I haven't done much with
Windows in ~6 years now). My default traceroute is:

nmap -Pn -p0 --traceroute host


Re: eBay is looking for network heavies...

2015-06-06 Thread shawn wilson
My first thought on reading that was who the hell cares if a person
knows about internet culture. But then I had to reconsider - it's a
very apt way of telling if someone read the right books :)

I would also add Ritchie, Thompson, and Diffie to that list (since you
ask about Larry, it's only appropriate).

On Sat, Jun 6, 2015 at 6:32 AM, jim deleskie deles...@gmail.com wrote:
 I remember you asking me who Jon was :)  I have since added to my list of
 interview questions... sad but the number of people with clue is declining
 not increasing.


 On Sat, Jun 6, 2015 at 3:13 AM, Joe Hamelin j...@nethead.com wrote:

 Back in 2000 at Amazon, HR somehow decided to have me do the phone
 interviews for neteng.  I'd go through questions on routing and what not,
 then at the end I would ask questions like, Who was Jon Postel?  Who is
 Larry Wall?  Who is Paul Vixie? What are layers 8 & 9? Explain the RTFM
 protocol.  What is NANOG?  Those answers (or long silences) told me more
 about the candidate than most of the technical questions.

 --
 Joe Hamelin, W7COM, Tulalip, WA, 360-474-7474



Re: eBay is looking for network heavies...

2015-06-06 Thread shawn wilson
On Sat, Jun 6, 2015 at 8:33 AM, tvest tv...@eyeconomics.com wrote:
 You are such an optimist ;-)

 Sometimes those who can remember the past get to repeat it anyway.


I remember seeing a slide deck for devs saying all new web apps are
recreating mail, write, wall, and finger (the person posted it on FB,
so of course I can't find it for ref)


Re: eBay is looking for network heavies...

2015-06-06 Thread shawn wilson
On Sat, Jun 6, 2015 at 12:27 PM, Dave Taht dave.t...@gmail.com wrote:
 On Sat, Jun 6, 2015 at 6:53 AM, Brandon Ross br...@pobox.com wrote:
 I also concur.  There is most certainly a negative correlation between certs
 and clue in my experience, having met 10s of certificate holders.

 Oh good. Maybe my total lack of ever pursuing one of these things is actually
 a qualification of sorts?


Meh, certs can be fun. I've never taken one and not learned something.
I don't think someone should put me in charge of designing a SOC
because I have a Security+ or that BestBuy should trust people with
(or w/o) an A+ to fix computers. But I'll bet the journey people took
to get that cert taught them something. Having gained the cert, does
that mean it doesn't belong on a resume? No. If you hire someone with
just a cert to manage your network, does that put you among the
biggest dumbasses to ever hire someone? Absolutely. Further, HR who
look for certs are probably doing themselves a disservice but if it
works for them, who am I to tell them otherwise. If you want to work
for the company, get the cert or don't.


Re: stacking pdu

2015-06-04 Thread shawn wilson
Well, I was kinda thinking this would turn out to be a dumb question / have
an obvious answer. Apparently not. But it seems I can't go buy a solution
either. I guess there isn't much of a market (though I am just talking
software - maybe someone could make an update :) ).


stacking pdu

2015-05-29 Thread shawn wilson
Is there a way to stack PDUs? Like, with 30A 220V, we need more plugs
than power, but I'd like them to communicate to make sure we don't
overload the circuit. Do any APC or Tripp Lite systems support this?


Re: Password storage (was Re: gmail security is a joke)

2015-05-28 Thread shawn wilson
On May 28, 2015 10:11 AM, Christopher Morrow morrowc.li...@gmail.com
wrote:

 On Thu, May 28, 2015 at 5:29 AM, Robert Kisteleki rob...@ripe.net wrote:
 
  Bcrypt or PBKDF2 with random salts per password is really what anyone
  storing passwords should be using today.
 

One thing to remember is that the hardware determines the number of rounds.
So while my LUKS (PBKDF2) passphrase on my laptop or servers gets a few tens
of thousands of rounds, that same passphrase on a Pi or so would only get 1k
rounds (the recommended minimum).
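
You can see this directly with cryptsetup (the device path below is just an
example):

# how many PBKDF2 iterations per second this box can manage
cryptsetup benchmark
# what iteration count an existing LUKS header was actually created with
cryptsetup luksDump /dev/sda3 | grep -i iterations
# re-derive a key slot targeting ~2 seconds of KDF time on this hardware
cryptsetup luksChangeKey --iter-time 2000 /dev/sda3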


 I get the feeling that, along with things like 'email address
 verification' in javascript form things, passwd storage and management
 is something done via a few (or a bunch of crappy home-grown) code
 bases.

Not generally passwords per se but session tokens and the like, sure
(almost as bad).


 Seems like 'find the common/most-used' ones and fix them would get
 some mileage? I don't imagine that 'dlink' (for example) is big on
 following rfc stuff for their web-interface programming? (well, at
 least for things like 'how should we store passwds?')

Heh, I started on a fuzzer that'd take a few strings and run them through
recipes (base 32/64, rot, xor 1 or 0, etc) and try to find human strings
along the way. If multiple strings match a recipe, you can generate your
own sessions.
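
A toy version of one such recipe (the token is made up):

echo 'dXNlcj0xMDA3O3JvbGU9YWRtaW4=' | base64 -d
# -> user=1007;role=admin

If a few captured tokens all decode that cleanly under the same recipe,
forging your own is trivial.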


Re: rack cable length

2015-04-19 Thread shawn wilson
Ok I've got a few comments offlist too and they all seem to draw the same
conclusion - crimp your own length. Thanks all for the input.
On Apr 17, 2015 4:11 PM, William Herrin b...@herrin.us wrote:

 On Fri, Apr 17, 2015 at 3:17 PM, Joe McLeod jmcl...@musfiber.net wrote:
  Or you build the cable to fit the span.  I must be getting old.

 There's a best of both worlds version of this: buy lots of the
 short-length cables (1 to 6 feet) and cut down longer cables where
 the distance exceeds the short cables I can buy.

 I typically buy 25' cables each of which turns in to a pair of shorter
 cables with one manufactured and one field-terminated end. I end up
 with cables that are just right and well organized.

 Harder to do with power cables but still somewhat functional.

 -Bill



 --
 William Herrin  her...@dirtside.com  b...@herrin.us
 Owner, Dirtside Systems . Web: http://www.dirtside.com/



Re: Historical records of POCs

2015-04-18 Thread shawn wilson
Asked archive.org?
On Apr 18, 2015 12:03 PM, Roy r.engehau...@gmail.com wrote:


 Is there an archive of POCs for some of the early netblocks (1985 or so)?
 We are trying to figure out some corporate history.



rack cable length

2015-04-17 Thread shawn wilson
This is probably a stupid question, but

We've got a few racks in a colo. The racks don't have any decent cable
management (square metal holes to attach velcro to). We either order
cable too long and end up with lots of loops which get in the way (no
place to loop lots of excess really) or too short to run along the
side (which is worse). It appears others using the same racks have
figured this out, but...

Do y'all just order 10 of each size per rack in every color you need
or is there a better way to figure this out? I'm guessing something
like 24 inches + 1.75 inchex x Us) + 24 inches and round up to
standard length...?
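
Just as an example of what I mean (stock lengths vary by vendor): a run
spanning 12U works out to 24 + (1.75 x 12) + 24 = 69 inches, so I'd round
up to whatever the next stocked size is - probably a 6 or 7 ft cable.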


Re: rack cable length

2015-04-17 Thread shawn wilson
On Fri, Apr 17, 2015 at 3:22 PM, Bob Evans b...@fiberinternetcenter.com wrote:
 You must build them if you want the professional look. No way around that
 - unless you want to take up rack space with some sort of cable management
 wrapping system and that becomes a pain to make future changes or replace
 cables.


 Or you build the cable to fit the span.  I must be getting old.


I've found that the pre-crimped cables tend to hold up better than
those you do yourself...?

I can go fairly quick once I'm on a roll but I wonder if this is the
right way to go here.


Re: rack cable length

2015-04-17 Thread shawn wilson
On Fri, Apr 17, 2015 at 3:23 PM, Justin Wilson - MTIN li...@mtin.net wrote:
 Copper and fiber patch panels are key.  This way you can control the length 
 from the patch to the device (router, switch,server).


Yeah, I am talking about just the runs in the rack - I don't see
a(nother) patch panel helping here.


Re: Fixing Google geolocation screwups

2015-04-08 Thread shawn wilson
On Apr 8, 2015 7:19 AM, Rob Seastrom r...@seastrom.com wrote:


 Blair Trosper blair.tros...@gmail.com writes:

  MaxMind (a great product)

 I've heard anecdotal accounts of MaxMind intentionally marking all
 address blocks assigned to a VPN vendor as open proxy even when
 advised repeatedly that the disputed addresses (a) had no VPN services
 running on them either inbound or outbound, and (b) in fact were web
 servers for the company's payment system, or mail servers for their
 corporate email.


I would wonder if these apps didn't have issues that allowed web proxy to
the world. Maybe MaxMind is doing something wrong or maybe they're seeing
the result of malicious activities and classifying from that.


Re: FCC releases Open Internet document

2015-03-12 Thread shawn wilson
On Mar 12, 2015 11:01 AM, Ca By cb.li...@gmail.com wrote:

 For the first time to the public

http://transition.fcc.gov/Daily_Releases/Daily_Business/2015/db0312/FCC-15-24A1.pdf

 Enjoy.

Uh yeah, I'll wait for the reviews when y'all get done trudging through
that...


Re: whois server features

2015-01-08 Thread shawn wilson
On Jan 8, 2015 4:23 AM, Franck Martin fmar...@linkedin.com wrote:


 On Jan 7, 2015, at 10:38 AM, shawn wilson ag4ve...@gmail.com wrote:

  Is there a list of NIC (and other popular whois server) features (what
  can be searched on) and what data they provide (and what title they
  give it)?
 
 Your best bet today is http://sourceforge.net/projects/phpwhois/

 and from http://phpwhois.cvs.sourceforge.net/viewvc/phpwhois/phpwhois/


Awesome thanks. That answers half of my original question (though this
route was much more insightful than I thought). I can run with that (php
isn't my language but the etl is pretty clear).


Re: whois server features

2015-01-07 Thread shawn wilson
On Wed, Jan 7, 2015 at 10:22 PM, John Levine jo...@iecc.com wrote:

 ARIN, APNIC, and RIPE have prototypes already that are a lot easier to
 script than the text WHOIS.


Meaning the data structure is in place, or they have an RDAP service up?
If so, is it publicly accessible?
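
(For anyone else looking: the pilots do seem to be plain JSON over HTTPS -
something like the following, though the paths may well change:

curl -s https://rdap.arin.net/registry/ip/8.8.8.8
curl -s https://rdap.db.ripe.net/ip/193.0.6.139

where 193.0.6.139 is just an example address in RIPE-managed space.)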


Re: whois server features

2015-01-07 Thread shawn wilson
On Wed, Jan 7, 2015 at 11:23 PM, John R. Levine jo...@iecc.com wrote:

 Google is your friend.


Woops, you're right


Re: whois server features

2015-01-07 Thread shawn wilson
On Wed, Jan 7, 2015 at 3:32 PM, anthony kasza anthony.ka...@gmail.com wrote:
 Scripting languages have modules that can parse many registrar whois
 formats. However, most are incomplete due to the plurality of output formats
 as stated above. I, and I suspect many others, would *love* to see a more
 concrete key value format drafted and enforced by ICANN.


Yes, that's what I was looking at. And that REST API looks nice...
Though from what I read (admittedly not the whole doc yet), I didn't
see a definition of what type of data is returned, nor of what data should
be expected, which would leave me in the same place. If I'm only getting
a blob back (that would be, I guess, internationalized at this point)
I've still got to loop through with a regex expecting some type of
key/value thing and concatenate data after a line break to the last
value (probably removing spaces since they try to format it pretty).
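
Something like this is the loop I mean - very rough, and it assumes the
blob at least looks like key/value pairs (192.0.2.1 is just a documentation
address):

whois 192.0.2.1 | awk '
    # anything that looks like "Key: value" starts a new pair
    /^[A-Za-z][A-Za-z0-9 _\/-]*: */ {
        key = $0; sub(/:.*/, "", key)
        val = $0; sub(/^[^:]*: */, "", val)
        kv[key] = val; next
    }
    # indented continuation lines get folded into the previous value
    /^[ \t]+[^ \t]/ && key {
        v = $0; sub(/^[ \t]+/, "", v)
        kv[key] = kv[key] " " v
    }
    END { for (k in kv) printf "%s => %s\n", k, kv[k] }
'

(Repeated keys clobber each other here, and real whois output is messier
than this - which is sort of the whole complaint.)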


Re: whois server features

2015-01-07 Thread shawn wilson
On Wed, Jan 7, 2015 at 3:07 PM, Bill Woodcock wo...@pch.net wrote:
 So, you’re not running into a poorly-documented mystery, you’ve run afoul 
 of one of the rotten armpits of the shub-Internet.

 So there's no consensus between NICs for the information they should
 have in whois and what search mechanisms they should provide? I guess
 what you're saying is that whois is just a protocol definition and
 nothing else?

 Correct.  It gets you a blob of text.  Sometimes, a blob is just a blob.  
 Other times, it contains what _appear_ to be key-value pairs, but are instead 
 loosely-formatted text.  Other times, it contains textually-represented 
 key-value pairs that are programmatically generated from an actual database, 
 and can thus be re-imported into another database.  Depends what’s on the 
 back end.


This is not the response I was looking for (and reading the RFC makes
me feel even worse).

Is there a better mechanism for querying NICs for host/owner information?


Re: whois server features

2015-01-07 Thread shawn wilson
On Wed, Jan 7, 2015 at 1:53 PM, Bill Woodcock wo...@pch.net wrote:

 On Jan 7, 2015, at 10:38 AM, shawn wilson ag4ve...@gmail.com wrote:

 Is there a list of NIC (and other popular whois server) features (what
 can be searched on) and what data they provide (and what title they
 give it)?

 Heh, heh, heh.  There are just about as many whois output formats as there 
 are back-end data-stores.  Note that I say “data-stores” rather than 
 databases.  Some of them aren’t.  So when you say “title” I assume you’re 
 referring to half of a key-value pair.  A concept some large whois sources 
 don’t have.


Yes, I'm referring to mapping between key names.

 So, you’re not running into a poorly-documented mystery, you’ve run afoul of 
 one of the rotten armpits of the shub-Internet.


So there's no consensus between NICs for the information they should
have in whois and what search mechanisms they should provide? I guess
what you're saying is that whois is just a protocol definition and
nothing else?


Fwd: whois server features

2015-01-07 Thread shawn wilson
Is there a list of NIC (and other popular whois server) features (what
can be searched on) and what data they provide (and what title they
give it)?

A quick search yields:
http://www.ripe.net/ripe/docs/ripe-358
https://www.arin.net/resources/whoisrws/whois_diff.html
https://www.apnic.net/apnic-info/whois_search/using-whois/searching/query-options

(In declining order of information - and I couldn't find the equivalent
doc for AFRINIC queries.) I also couldn't find information on what fields
they have (and, obviously, how they map to one another). There are also a
few other whois servers around that I have no idea about:
https://www.opensource.apple.com/source/adv_cmds/adv_cmds-149/whois/whois.c


Re: Fibre Channel Network

2015-01-04 Thread shawn wilson
On Jan 4, 2015 8:04 AM, Rob Seastrom r...@seastrom.com wrote:


 symack sym...@gmail.com writes:

  Hello Everyone,
 
  Have a few FC cards and a switch that I would like to use for backplane
  related packets (ie, local network). I am totally new to FC and would
like
  to know will I need a router to be able to communicate between the
nodes?
  What I plan on doing is connecting the network card to the FC switch.
 
  Thanks in Advance,
 
  Nick.

 Classic FC is not routed in the sense that you're used to from IP,
 although there is a component in the control plane of every FC switch
 called a router, which is perhaps where the confusion comes from
 (the other three, FWIW, are address manager, fabric controller, and
 path selector).

 To answer the implied question, yes you can just plug them into the
 switch (some configuration will almost certainly be required).  You
 can also do a point to point connection between two FC devices (back
 to back as it were).  The way we used to do it back in the old days
 before switches was an arbitrated loop; in fact I still can't think FC
 without thinking FC-AL.


You'll need an IP-over-FC driver for your OS to do that (I think Linux and
BSD have one - not sure about OSX or Windows). And I'm pretty sure you can
make your switch look essentially like an ethernet hub, but I don't know
that you're going to be able to get it to separate domains - node 1 sending
to 5 while node 4 sends to 3 will not all run at 8 gig or whatever the
fabric speed is; it's degraded because everyone is seeing each other's WWN
and data. All nodes will see all traffic unless you configure a static
path - 1 to 5 and 3 to 4 - you could also do an NDMP-type config of 1 to 3,
but idk how many of these you can have and I'm pretty sure 5 still sees 3's
data.

Also note that just because you have the hardware doesn't mean you have the
license to use it. In most cases the licenses are pretty easy to hack and
you can 'pirate' to make this work (and no one will care since you're an
individual). But just pointing out another issue you might see.

Ps - it's been years since I touched one of these things so I might be
mis-remembering some points but FWIW.


Fwd: malware.watch rdns

2014-12-17 Thread shawn wilson
I asked on this on another list I'm on and didn't get any reply, so I
figured I might have better luck here

Anyone know what malware.watch. is doing? Below is basically
everything I could find:

http://www.robtex.net/en/advisory/dns/watch/malware/ssl-scanning-015/

They've got a web page, but nothing there:
 % curl -I malware.watch
HTTP/1.1 200 OK
Date: Thu, 13 Nov 2014 19:17:29 GMT
Content-Type: text/html
Connection: keep-alive
Set-Cookie: __cfduid=
da37b063f68032dfe5adc07ae35fe27031415906249;
expires=Fri, 13-Nov-15 19:17:29 GMT; path=/; domain=.malware.watch;
HttpOnly
X-Frame-Options: sameorigin
Server: cloudflare-nginx
CF-RAY: 188d4f4cd3cb0eeb-EWR

What I saw was ssl-scanning-###.malware.watch, so after that curl I
figured I'd start by blowing up their dns :)
 % printf '%03d\n' {0..999} | while read f; do dig=$(dig
ssl-scanning-${f}.malware.watch +short); if [ -n "$dig" ]; then echo
"$f: $dig"; fi; done
015: 85.17.239.155
016: 104.200.21.140
017: 195.154.114.206

(It was pointed out to me this could be more easily written as: dig
+noall +ans ssl-scanning-{000..999}.malware.watch)

So they only have three in that block: one is in the Netherlands, the
other is Linode (US), and the last is French:
8   21.28 ms as4436-1-c.111eighthave.ny.ibone.comcast.net (173.167.57.162)
9   17.01 ms vlan-75.ar2.ewr1.us.as4436.gtt.net (69.31.34.129)
10  15.73 ms as13335.xe-7-0-3.ar2.ewr1.us.as4436.gtt.net (69.31.95.70)
11  15.85 ms 104.28.19.47

7   10.07 ms he-1-15-0-0-cr01.350ecermak.il.ibone.comcast.net (68.86.85.70)
8   9.58 ms  ae15.bbr02.eq01.wdc02.networklayer.com (75.149.228.94)
9   10.98 ms ae7.bbr01.eq01.wdc02.networklayer.com (173.192.18.194)
10  23.08 ms ae0.bbr01.tl01.atl01.networklayer.com (173.192.18.153)
11  43.01 ms ae13.bbr02.eq01.dal03.networklayer.com (173.192.18.134)
12  43.02 ms po32.dsr02.dllstx3.networklayer.com (173.192.18.231)
13  44.33 ms po32.dsr02.dllstx2.networklayer.com (70.87.255.70)
14  50.71 ms po2.car01.dllstx2.networklayer.com (70.87.254.78)
15  41.94 ms router1-dal.linode.com (67.18.7.90)
16  42.63 ms li799-140.members.linode.com (104.200.21.140)

7   11.36 ms he-0-13-0-1-pe04.ashburn.va.ibone.comcast.net (68.86.87.142)
8   10.95 ms xe-7-0-2.was10.ip4.gtt.net (77.67.71.193)
9   87.79 ms xe-4-2-0.par22.ip4.gtt.net (89.149.182.98)
10  87.80 ms online-gw.ip4.gtt.net (46.33.93.90)
11  91.82 ms 49e-s46-1-a9k1.dc3.poneytelecom.eu (195.154.1.77)
12  88.27 ms ssl-scanning-017.malware.watch (195.154.114.206)


Trying to identify hosts

2014-10-27 Thread shawn wilson
We get lots of probes from subdomains of southwestdoor.com and
secureserver.net 's SOA and I'm curious who these guys are?

The only web page I could find was southwestdoor redirects to
http://www.arcadiacustoms.com and then to http://arcadia-custom.com/
(a hardware company is causing unwanted network traffic - not unless
they're owned)

Traceroute for southwestdoor.com goes through secureserver.net and
they have lots of references (in dns) to themselves, jomax.net and
domaincontrol.com.

Can someone give me a better picture of how this all fits together on
a company level - as in how do these guys make money and why are they
probing our network? I understand scans from ISPs and colos, but I
can't directly identify these guys as either.


Re: Trying to identify hosts

2014-10-27 Thread shawn wilson
Ok, got a few off list replies that secureserver.net is godaddy which
is fine - makes sense. I just wish this would link back to them easier
(some backup ns being something.godaddy.com or some SOA of an IP
listed in the spf being something.godaddy.com or whatever).

Thank y'all for the info.

On Mon, Oct 27, 2014 at 11:57 AM, shawn wilson ag4ve...@gmail.com wrote:
 We get lots of probes from subdomains of southwestdoor.com and
 secureserver.net 's SOA and I'm curious who these guys are?

 The only web page I could find was southwestdoor redirects to
 http://www.arcadiacustoms.com and then to http://arcadia-custom.com/
 (a hardware company is causing unwanted network traffic - not unless
 they're owned)

 Traceroute for southwestdoor.com goes through secureserver.net and
 they have lots of references (in dns) to themselves, jomax.net and
 domaincontrol.com.

 Can someone give me a better picture of how this all fits together on
 a company level - as in how do these guys make money and why are they
 probing our network? I understand scans from ISPs and colos, but I
 can't directly identify these guys as either.


Re: Trying to identify hosts

2014-10-27 Thread shawn wilson
Oh and along that line of trying to find the source - nothing
indicates godaddy here (kinda annoying):

 % curl -I secureserver.net

HTTP/1.1 301 Moved Permanently
Cache-Control: no-cache, no-store, must-revalidate
Pragma: no-cache
Content-Length: 145
Expires: 0
Location: http://www.secureserver.net/
Server: Microsoft-IIS/7.0
P3P: policyref=/w3c/p3p.xml, CP=COM CNT DEM FIN GOV INT NAV ONL PHY
PRE PUR STA UNI IDC CAO OTI DSP COR CUR OUR IND
Date: Mon, 27 Oct 2014 16:02:33 GMT

 % curl -I www.secureserver.net

HTTP/1.1 302 Found
Cache-Control: no-cache
Pragma: no-cache
Content-Length: 160
Content-Type: text/html; charset=utf-8
Expires: -1
Location: http://www.secureserver.net/default404.aspx
Server: Microsoft-IIS/7.0
Set-Cookie: language0=en-US; domain=secureserver.net; expires=Tue,
27-Oct-2015 16:02:35 GMT; path=/
Set-Cookie: market=en-US; domain=secureserver.net; expires=Tue,
27-Oct-2015 16:02:35 GMT; path=/
Set-Cookie: language0=en-US; domain=secureserver.net; expires=Tue,
27-Oct-2015 16:02:35 GMT; path=/
Set-Cookie: market=en-US; domain=secureserver.net; expires=Tue,
27-Oct-2015 16:02:35 GMT; path=/
Set-Cookie: ATL.SID.SALES=
iMxiGMyW7sDBszdtMEyatYk7buGydr4hjvissnKiLec%3d;
path=/; HttpOnly
Set-Cookie: gdCassCluster.sePQKXdv2U=2; path=/
Set-Cookie: language0=en-US; domain=secureserver.net; expires=Tue,
27-Oct-2015 16:02:35 GMT; path=/
Set-Cookie: market=en-US; domain=secureserver.net; expires=Tue,
27-Oct-2015 16:02:35 GMT; path=/
Set-Cookie: ATL.SID.SALES=iMxiGMyW7sDBszdtMEyatYk7buGydr4hjvissnKiLec%3d;
path=/; HttpOnly
Set-Cookie: gdCassCluster.sePQKXdv2U=2; path=/
Set-Cookie: mobile.redirect.browser=0; path=/
P3P: policyref=/w3c/p3p.xml, CP=COM CNT DEM FIN GOV INT NAV ONL PHY
PRE PUR STA UNI IDC CAO OTI DSP COR CUR OUR IND
Date: Mon, 27 Oct 2014 16:02:34 GMT

 % echo QUIT | openssl s_client -connect www.secureserver.net:443 |
head -10
depth=2 C = US, ST = Arizona, L = Scottsdale, O = Starfield
Technologies, Inc., CN = Starfield Root Certificate Authority - G2
verify error:num=20:unable to get local issuer certificate
DONE
CONNECTED(0003)
---
Certificate chain
 0 s:/C=US/ST=Arizona/L=Scottsdale/O=Special Domain Services,
LLC/CN=*.secureserver.net
   i:/C=US/ST=Arizona/L=Scottsdale/O=Starfield Technologies,
Inc./OU=http://certs.starfieldtech.com/repository//CN=Starfield Secure
Certificate Authority - G2
 1 s:/C=US/ST=Arizona/L=Scottsdale/O=Starfield Technologies,
Inc./OU=http://certs.starfieldtech.com/repository//CN=Starfield Secure
Certificate Authority - G2
   i:/C=US/ST=Arizona/L=Scottsdale/O=Starfield Technologies,
Inc./CN=Starfield Root Certificate Authority - G2
 2 s:/C=US/ST=Arizona/L=Scottsdale/O=Starfield Technologies,
Inc./CN=Starfield Root Certificate Authority - G2
   i:/C=US/O=Starfield Technologies, Inc./OU=Starfield Class 2
Certification Authority
---

On Mon, Oct 27, 2014 at 1:21 PM, shawn wilson ag4ve...@gmail.com wrote:
 Ok, got a few off list replies that secureserver.net is godaddy which
 is fine - makes sense. I just wish this would link back to them easier
 (some backup ns being something.godaddy.com or some SOA of an IP
 listed in the spf being something.godaddy.com or whatever).

 Thank y'all for the info.

 On Mon, Oct 27, 2014 at 11:57 AM, shawn wilson ag4ve...@gmail.com wrote:
 We get lots of probes from subdomains of southwestdoor.com and
 secureserver.net 's SOA and I'm curious who these guys are?

 The only web page I could find was southwestdoor redirects to
 http://www.arcadiacustoms.com and then to http://arcadia-custom.com/
 (a hardware company is causing unwanted network traffic - not unless
 they're owned)

 Traceroute for southwestdoor.com goes through secureserver.net and
 they have lots of references (in dns) to themselves, jomax.net and
 domaincontrol.com.

 Can someone give me a better picture of how this all fits together on
 a company level - as in how do these guys make money and why are they
 probing our network? I understand scans from ISPs and colos, but I
 can't directly identify these guys as either.


Re: Why is .gov only for US government agencies?

2014-10-20 Thread shawn wilson
On Oct 19, 2014 9:53 AM, Mike. the.li...@mgm51.com wrote:



 I'd rather see .gov (and by implication, .edu) usage phased out and
 replaced by country-specific domain names (e.g. fed.us).

 imo, the better way to fix an anachronism is not to bend the rules so
 the offenders are not so offensive, but to bring the offenders into
 compliance with the current rules.


Bad idea. I'm betting we'd find half of gov web sites down due to not being
able to reboot and issues in old coldfusion and IIS and the like (and
needing to fix static links and testing etc). No, if it ain't broke don't
fix it.


Re: Why is .gov only for US government agencies?

2014-10-20 Thread shawn wilson
On Mon, Oct 20, 2014 at 10:20 AM,  valdis.kletni...@vt.edu wrote:
 On Mon, 20 Oct 2014 05:58:01 -0400, shawn wilson said:

 Bad idea. I'm betting we'd find half of gov web sites down due to not being
 able to reboot and issues in old coldfusion and IIS and the like (and
 needing to fix static links and testing etc).

 You say that like it's a bad thing

Well yeah, there's tons of possible bad here.
1. Some contractor would get millions over a few years for doing this
2. Spending time to maintain old code that no one cares about just to
make stuff work is kinda annoying (both for those maintaining the code
and #1)
3. I don't want to see the report on how many Allaire ColdFusion with
NT 3.5 .gov sites are out there

 any other reasons not to do this? Maybe, but here's the real
question - why in the hell would we want to do this?


Re: Why is .gov only for US government agencies?

2014-10-20 Thread shawn wilson
On Mon, Oct 20, 2014 at 10:52 AM, Stephen Satchell l...@satchell.net wrote:
 On 10/20/2014 07:20 AM, valdis.kletni...@vt.edu wrote:
 On Mon, 20 Oct 2014 05:58:01 -0400, shawn wilson said:

 Bad idea. I'm betting we'd find half of gov web sites down due to not being
 able to reboot and issues in old coldfusion and IIS and the like (and
 needing to fix static links and testing etc).

 You say that like it's a bad thing

 It's a dollar thing -- show me a substantial return on the investment

Indeed


 Adobe and Microsoft would *love* the increased revenue from updates that
 would have to be applied to all those old servers.  And what about those
 sites that were made using Front Page?  Talk about a nightmare.  A
 costly one.


Oh yeah, I totally forgot about old FrontPage. I was thinking Homesite
or Dreamweaver, but idk that FrontPage from ~10 years back would port very
cleanly into anything modern. So, if anything there needed changing,
you'd have to do a manual cleanup of that code.


Re: Why is .gov only for US government agencies?

2014-10-20 Thread shawn wilson
On Mon, Oct 20, 2014 at 11:44 AM,  valdis.kletni...@vt.edu wrote:
 On Mon, 20 Oct 2014 10:45:44 -0400, shawn wilson said:

 3. I don't want to see the report on how many Allaire ColdFusion with
 NT 3.5 .gov sites are out there

  any other reasons not to do this? Maybe, but here's the real
 question - why in the hell would we want to do this?

 See your point 3.

I think you're assuming that people go back and fix stuff when they do
massive changes that are out of scope - they don't. First, they aren't
being paid to do so; gov contractors always run over budget and work
is never delivered on time, so why would they want to make it worse;
etc. No, if a massive domain move started, stuff would be fixed just
enough to make it work with a new domain, and everything else would stay
at (or possibly end up worse than) its current working state. I can handle
stuff staying at the current state as long as China/Russia doesn't use it
to get more of a foothold into our infrastructure, but making this stuff
worse might be a really bad thing.

Just something to consider - let's say the web stuff is ok, and email ports,
old SOAP (and whatever was/is used on mainframes) stuff doesn't break.
I'm betting something accesses
relay-4.building-10.not-yet-offline.missile-defense-system.mil, someone
fails to point to building-10's DNS in a DNS migration, that something
may be a cooling system that gets changed by some computer, and shit hits
the fan because we wanted to normalize our gov TLD with the rest of the
world. No, I think I'll pass on finding out what breaks here.

Again - give me a real reason we should do this. And if not, if it
ain't broke, don't fix it.

PS - MDS is only 10 years old so any part of that still online is
likely to have audits (and any installs would be in east-EU and
hopefully on classified internet - one hopes - so who knows). It was
just an example I pulled. It's more possible that some Blackberry
system can't get updated after we stop holding them up and we budget
for this and gov email goes down :) Just saying I don't want to find
out what gets left behind and breaks here.


Re: Why is .gov only for US government agencies?

2014-10-20 Thread shawn wilson
On Mon, Oct 20, 2014 at 6:26 PM, Doug Barton do...@dougbarton.us wrote:

 3. Set a target date for the removal of those TLDs for 10 years in the
 future


Because this worked for IPv6?

 Obviously there are various implementation details for effecting the move,
 but application-layer stuff will be as obvious to most readers as it is
 off-topic for this list.


In this case, it's all about the application-layer stuff - that'd be
the stuff to fail hard - mainframe IP gateways, control systems,
Lotus, Domino, etc. BIND is fine. Even most of the PHP apps would
(should, maybe) be fine. But that's not what runs most of the gov.

 Regarding the time period in #3, decommissioning a TLD is harder than you
 might think, and we have plenty of extant examples of others that have taken
 longer, and/or haven't finished yet *cough*su*cough*.


Do we really have any prior examples that are even a tenth the size of the
US gov public system? Again, I'm not just referring to BIND and Windows
DNS (and probably some Netware 4 etc stuff) - this would be web, soap
parsers, email systems, vpn, and all of their clients (public,
contractor, and gov). Anything close to what y'all are talking about?


Re: Why is .gov only for US government agencies?

2014-10-20 Thread shawn wilson
On Oct 20, 2014 9:33 PM, Bill Woodcock wo...@pch.net wrote:


 On Oct 21, 2014, at 9:23 AM, Jared Mauch ja...@puck.nether.net wrote:

  Breaking tons of things is an interesting opinion of why not”.

 Eh.  Off the top of my head, I see two categories of breakage:

1) things that hard-code a list of “real” TLDs, and break when their
expectations aren’t met, and

2) things that went ahead and trumped up their own non-canonical TLDs
for their own purposes.

 Neither of those seem like practices worth defending, to me.  Not worth
going out of one’s way to break, either, but…


I'm not defending any practice. Let's just say everything else goes smoothly.
How many fed employees are there and what's their average salary? Let's
assume it takes them 5 minutes to change their email sig. How much would
that cost?

There's probably also a legal issue here. You can't make it so that
someone can't communicate with their elected official. No term limits in
the House so you'd start this and 50 years later, you'd be able to complete
the project (due to the last congressman being replaced).


Re: Why is .gov only for US government agencies?

2014-10-20 Thread shawn wilson
On Oct 20, 2014 11:54 PM, Doug Barton do...@dougbarton.us wrote:

 On 10/20/14 4:07 PM, shawn wilson wrote:



 Do we really have any prior examples that are even .1 the size of the
 usgov public system? Again, I'm not just referring to BIND and Windows
 DNS (and probably some Netware 4 etc stuff) - this would be web, soap
 parsers, email systems, vpn, and all of their clients (public,
 contractor, and gov). Anything close to what y'all are talking about?


 Actually I think I could make a very convincing argument that GOV would
not be the most challenging problem of the 3 I mentioned, but I won't. :)


You're right. But edu and gov might be a tie, given some obsolete tech they
maintain that won't conform. But maybe not. As for mil, I hold no
clearance, and if I did, I couldn't discuss even their public infrastructure
(which AFAIK requires at least a public trust to work on). So I think we
should limit this discussion to just the issues with gov (and maybe edu -
though for some strange reason I have faith in them here) rather than mil
as well...?

 The question here is not, Is it easy? The questions are, Is it the
right thing to do? and Will it get easier to do tomorrow than it would
have been to do today?


No, the first question should be "Is it possible?" - we all seem to think
it's possible in some timeframe (though I wonder about the legality of
changing an active congressman's email). Next, is it the right thing? I'm
going to go with yes, it probably is. But the latter question is basically
the cost-benefit analysis - I'm just not sure if it's worth it. And finally,
your question about time:

 I can tell you beyond a shadow of a doubt that it would have been easier
to do a decade ago, and 10 years from now it will be harder still.


Will it get easier/harder if we wait - I agree, it would've been easier 10
years ago and with the cheap IoT crap starting to come out (none that uses
DNS yet, but) it's not going to get any easier. If y'all disagree with me
and feel there'd be a real benefit to doing this, the process should be
started now.


Re: ipmi access

2014-06-02 Thread shawn wilson
On Mon, Jun 2, 2014 at 8:26 AM, Randy Bush ra...@psg.com wrote:
 I use OpenVPN to access an Admin/sandboxed network with insecure portals,
 wiki, and ipmi.

 h.  'cept when it is the openvpn server's ipmi.  but good hack.  i
 may use it, as i already do openvpn.  thanks.


So, kinda the same idea - just put IPMI on another network and use ssh
forwards to it. You can have multiple boxes connected in this fashion
but the point is to keep it simple and as secure as possible (and IPMI
security doesn't really count here :) ).
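
Something like this is all I mean (the addresses and names are made up):

ssh -L 8443:10.10.10.21:443 admin@jumphost
# then browse to https://localhost:8443/ for that BMC's web UI

Or, since the KVM / virtual-media applets like to open extra TCP ports, a
dynamic forward plus a SOCKS proxy setting in the browser is often less
hassle:

ssh -D 1080 admin@jumphost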

Kinda funny though - all of the findings I've seen have been for newer
IPMI. So, I had (have) an HP DL380 G5 and didn't feel like resetting
the iLO2 password manually. Well, everything I could find for dumping
info from iLO was for iLO3... go figure. (I still wouldn't put it on
the net.)


Re: ipmi access

2014-06-02 Thread shawn wilson
On Mon, Jun 2, 2014 at 10:14 AM, Jared Mauch ja...@puck.nether.net wrote:
 My IPMI (super micro) you can put v6 and v4 filters into for protecting the 
 ip space from trusted sources. Has my home static ip ranges and a few 
 intermediary ranges that I also have access to.


Mmmm, and an IP has never been spoofed and ARP never poisoned. And I
wonder how good these filters are in their TCP stack implementation -
not something I'd trust :)


Re: ipmi access

2014-06-02 Thread shawn wilson
iLo is a value add to HP. DRAC sucks (so I'd replace it and then Dell
would have hardware under support with some unknown IPMI). Supermicro,
Tyan, etc - idk. Really, it would be nice to have an open card that
does this. Even if the card were limited to what you could do with DMA
and some serial (i2c and whatnot) cables. I'd use that instead of
something else (in this case, mainly because I'd replace the Java
console for some VNC solution - but also because of trust).

On Mon, Jun 2, 2014 at 1:32 PM, Nikolay Shopik sho...@inblock.ru wrote:

 On 02/06/14 20:56, Christopher Morrow wrote:

 so... as per usual:
1) embedded devices suck rocks
2) no updates or sanity expected anytime soon in same
3) protect yourself, or suffer the consequences

 seems normal.


 So I wonder why vendors don't publish the source code of these ipmi
 firmwares in the first place? Like supermicro - from what we know, 99% of
 it is open source stuff.


Re: ipmi access

2014-06-02 Thread shawn wilson
On Mon, Jun 2, 2014 at 3:19 PM, Nikolay Shopik sho...@inblock.ru wrote:


 Java only used for mounting images. KVM is transferred via VNC protocol iirc.

They're not re-inventing the wheel, but I think KVM is generally some
VNC stream embedded in http(s) which VNC clients can't seem to
understand (at least, at a glance, I haven't been able to connect to
iLo, DRAC, Spider, or Tyan IPMI from outside the Java app).


Re: ipmi access

2014-06-02 Thread shawn wilson
On Mon, Jun 2, 2014 at 7:42 PM, Jimmy Hess mysi...@gmail.com wrote:
 On Mon, Jun 2, 2014 at 8:21 AM, shawn wilson ag4ve...@gmail.com wrote:  
 [snip]
 So, kinda the same idea - just put IPMI on another network and use ssh
 forwards to it. You can have multiple boxes connected in this fashion
 but the point is to keep it simple and as secure as possible (and IPMI
 security doesn't really count here :) ).

 About that as secure as possible bit.If just one server gets
 compromised that happens to have its IPMI port plugged into this
 private network;  the attacker may  be able to pivot  into the IPMI
 network  and start unloading IPMI exploits.


Generally, I worry about workstations with access being compromised
more than I do about a server running sshd and routing traffic. But
obviously, if someone gets access, they can play foosball with
your stuff.

 So caution is definitely advised,  about security boundaries: in case
 a shared IPMI network is used,  and this  is a case where a Private
 VLAN   (PVLAN-Isolated)   could be considered,   to ensure devices on
 the IPMI  LAN cannot communicate with one another ---  and only
 devices on a separate dedicated IPMI Management station subnet  can
 interact with the IPMI LAN.


I can't really argue against the proper use of vlans (and that surely
wasn't my point). I was merely saying that you can use ssh as a
simpler solution (and possibly a more secure one since there's not a
conduit to broadcast to/from) than a vpn. That's it.


Re: DNSSEC?

2014-04-12 Thread shawn wilson
But it doesn't really matter if you zero out freed memory. Maybe it'll
prevent an attacker from gaining some stale session info and the like. But even if
that were the case, this would still be a serious bug - you're not going to
reread your private key before encrypting each bit of data after all -
that'd just be wasteful.

In other words, this is kind of moot.
On Apr 12, 2014 2:24 AM, Mark Andrews ma...@isc.org wrote:


 Don't think for one second that using malloc directly would have
 saved OpenSSL here.  By default malloc does not zero freed memory
 it returns.  It is a feature that needs to be enabled.  If OpenSSL
 wanted to zero memory it was returning could have done that itself.

 The only difference is that *some* malloc implementations examine
 the envionment and change their behaviour based on that.

 That OpenSSL used its own memory allocator was a problem does not
 stand up to rigourous analysis.

 Mark
 --
 Mark Andrews, ISC
 1 Seymour St., Dundas Valley, NSW 2117, Australia
 PHONE: +61 2 9871 4742 INTERNET: ma...@isc.org




Re: CVE-2014-0160 mitigation using iptables

2014-04-10 Thread shawn wilson
On Thu, Apr 10, 2014 at 9:52 AM,  valdis.kletni...@vt.edu wrote:
 On Wed, 09 Apr 2014 11:07:36 +0100, Fabien Bourdaire said:

 # Log rules
 iptables -t filter -A INPUT  -p tcp --dport 443  -m u32 --u32 \
 52=0x1803:0x1803 -j LOG --log-prefix "BLOCKED: HEARTBEAT"

 That 52= isn't going to work if it's an IPv4 packet with an unexpected
 number IP or TCP options, or an IPv6 connection

IPv6 wasn't mentioned here (that'd be ip6tables). But yeah, there
might be some other shortcomings with the match. I think it's the
right way to go - it just needs a bit of work (maybe a bm string
match?). You're also going to have to deal with different TLS versions
(see ssl-heartbleed.nse for the breakdown). Though, I wonder if there
are any other variations you might miss...
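
For what the bm string match idea would look like, a rough sketch (not a
drop-in rule - it matches the heartbeat content-type byte 0x18 followed by
a 0x03 version byte anywhere in the packet, so expect false positives and
per-TLS-version tuning):

  iptables -A INPUT -p tcp --dport 443 -m string --algo bm \
    --hex-string '|1803|' -j LOG --log-prefix "HB-SUSPECT: "

And for checking whether a box is vulnerable in the first place, the nmap
script mentioned above is the easy route: nmap -p 443 --script
ssl-heartbleed <host>.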



Re: How to catch a cracker in the US?

2014-03-17 Thread shawn wilson
On Mon, Mar 17, 2014 at 10:21 AM, Sholes, Joshua
joshua_sho...@cable.comcast.com wrote:
 On 3/13/14, 7:35 PM, Larry Sheldon larryshel...@cox.net wrote:

Not sure I can agree with that.  I have been in this game for a very
long time, but for most of it in places where the world's population
cleaved neatly into two parts: Authorized Users who could be
identified by the facts that they had ID cards, Badges, and knew the
door code; and trespassers who were all others.

Then you new kids came along and (pointlessly, in my opinion) divided
the latter group into the two described above.

 See, the way *I* learned it was that part of the creed of the hacker
 involved why would I want to play with your systems, mine are much
 cooler.;  that is, by definition a hacker is in the first group.


The point is that 'computer security' involves innovation just as much as
what's done at hacker spaces (which can be geared toward hardware or
computer security or whatever). I think the difference you're trying to
argue is the legality and not the task or process. I think calling the
illegal form of the study of computer security 'cracking', the legal
form 'hacking', and people who crack without knowing what they're
doing 'script kiddies' is irrelevant, useless, and causes
useless debates (that I started) like this.



Re: How to catch a cracker in the US?

2014-03-13 Thread shawn wilson
On Mar 13, 2014 7:37 PM, Larry Sheldon larryshel...@cox.net wrote:

 On 3/13/2014 8:22 AM, Sholes, Joshua wrote:

 On 3/13/14, 12:35 AM, shawn wilson ag4ve...@gmail.com wrote:

 A note on terminology - whether you know what you're doing, actually break
 into a system, or obtain a thumb drive with data that you weren't supposed
 to have - it has the same end so I'd refer to it by the same term -
 hacking. Trying to differentiate terms based on skill, target, or data
 type is kinda dumb.


 If one came up in this field with a mentor who was old school, or if one
 is old school oneself, one tends use the original (as I understand it)
 definitions--a cracker breaks security or obtains data unlawfully, a
 hacker is someone who likes ethically playing (in the joyful
 exploration sense) with complicated systems.

 People who are culturally younger tend to use hacker, as you are doing, for
 the former and, as far as I can tell, no specific term for the latter.

 If you ask me, this is something of a cultural loss.


 Not sure I can agree with that.  I have been in this game for a very long
time, but for most of it in places where the world's population cleaved
neatly into two parts: Authorized Users who could be identified by the
facts that they had ID cards, Badges, and knew the door code; and
trespassers who were all others.

 Then you new kids came along and (pointlessly, in my opinion) divided the
latter group into the two described above.


Sorry for my note. Didn't mean for it to sidetrack the question (I probably
should've).

/me o_O


Re: How to catch a cracker in the US?

2014-03-12 Thread shawn wilson
On Mar 11, 2014 3:09 AM, Dobbins, Roland rdobb...@arbor.net wrote:


 On Mar 11, 2014, at 2:00 PM, Markus unive...@truemetal.org wrote:

  Any advice?

 Start with CERT-BUND, maybe?


That is the correct answer. If you want something less subtle (and possibly
illegal), there were discussions on 'hacking back' - that is, basically
seeding malicious documents with fake (or not) bank/personal information. You
can then find who is using the info (some Comcast business IPs have the
address in whois) and go OSINT from there (though if you go this route, try
to contact LE before you post something and burn bridges).

A note on terminology - whether you know what you're doing, actually break
into a system, or obtain a thumb drive with data that you weren't supposed
to have - it has the same end so I'd refer to it by the same term -
hacking. Trying to differentiate terms based on skill, target, or data type
is kinda dumb.


comcast business service

2014-02-20 Thread shawn wilson
A while ago I got Comcast's business service. Semi-idle connections
get dropped (I haven't really diagnosed this - I just know that it
isn't the client or server but some network in between). However, the
second and most obvious issue is that intermittently, the service will
grind to a halt:
--- 8.8.8.8 ping statistics ---
37 packets transmitted, 34 received, 8% packet loss, time 36263ms
rtt min/avg/max/mdev = 398.821/5989.160/14407.055/3808.068 ms, pipe 15

After a modem reboot, it goes normal:
--- 8.8.8.8 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3003ms
rtt min/avg/max/mdev = 23.181/23.920/24.298/0.474 ms

This seems to happen about once or twice a day. I can't attribute it
to any type of traffic or number of connections. All of the rest of
the network equipment is the same and the behavior persists when a
computer is plugged directly into the modem. I called Comcast and they
said they didn't see anything even when I was experiencing ridiculous
ping times. I tend to think it's an issue with the 'modem', but I'm not
sure what the issue might be or how to reproduce it on demand if I
ask them to look at it again.
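
For next time, a throwaway baseline logger I'd leave running on a box behind
the modem (interval, target, and log path are arbitrary):

  while sleep 300; do
    date
    ping -q -c 20 8.8.8.8 | tail -n 2
  done >> ~/comcast-latency.log

That at least gives a timestamped record of when the latency goes sideways
to hand to the tech.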



Re: comcast business service

2014-02-20 Thread shawn wilson
Thanks. The tech said they looked at signal levels when I called and didn't
see anything. I didn't have a baseline at the time (I do now) and assumed
they'd see something there if there was something.

I do have the Netgear. So I'll keep this in mind when I call them again
(assuming it's really not a noise issue) though I'm not sure exactly what
happened here or how I can get them to try the same thing?
On Feb 20, 2014 3:11 PM, Aaron C. de Bruyn aa...@heyaaron.com wrote:

 If it's one of their new Netgear-branded modems, see if you can get your
 tech to dig up an SMC.

 We had the same issue.  They swapped out one Netgear modem for another
 Netgear and the problem continued. The phone techs couldn't see the problem
 and kept blaming our equipment.  They finally sent out one of the 'senior'
 engineers I had worked with before on other jobs.  He managed to get a hold
 of one of the SMCs from their warehouse.  No more issues.

 -A


 On Thu, Feb 20, 2014 at 1:08 AM, shawn wilson ag4ve...@gmail.com wrote:

 A while ago I got Comcast's business service. Semi-idle connections
 get dropped (I haven't really diagnosed this - I just know that it
 isn't the client or server but some network in between). However, the
 second and most obvious issue is that intermittently, the service will
 grind to a halt:
 --- 8.8.8.8 ping statistics ---
 37 packets transmitted, 34 received, 8% packet loss, time 36263ms
 rtt min/avg/max/mdev = 398.821/5989.160/14407.055/3808.068 ms, pipe 15

 After a modem reboot, it goes normal:
 --- 8.8.8.8 ping statistics ---
 4 packets transmitted, 4 received, 0% packet loss, time 3003ms
 rtt min/avg/max/mdev = 23.181/23.920/24.298/0.474 ms

 This seems to happen about once or twice a day. I can't attribute it
 to any type of traffic or number of connections. All of the rest of
 the network equipment is the same and the behavior persists when a
 computer is plugged directly into the modem. I called Comcast and they
 said they didn't see anything even when I was experiencing ridiculous
 ping times. I tend to think it's an issue with the 'modem', but I'm not
 sure what the issue might be or how to reproduce it on demand if I
 ask them to look at it again.





Windows Update subnets

2014-01-16 Thread shawn wilson
Does anyone have a list of all of the ranges Microsoft uses for
Windows Update? I've found domains but not a full list of subnets.
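
The closest I've gotten is resolving the update hostnames and mapping the
answers back to originating prefixes with Team Cymru's whois. It's only a
partial answer (the hostname list below is just a sample, and CDN answers
vary by resolver and over time):

  for h in download.windowsupdate.com update.microsoft.com; do
    for ip in $(dig +short A "$h" | grep -E '^[0-9.]+$'); do
      whois -h whois.cymru.com " -v $ip"
    done
  done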



Re: verify currently running software on ram

2014-01-13 Thread shawn wilson
dd kmem and see if it's what you'd expect (size of RAM+swap). If so, you
should be able to look at it.

Also see Volatility.
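
For a commodity Linux host (not IOS/JunOS itself), a rough sketch - it
assumes you've already captured an image with something like LiME and built
a matching profile (the profile name here is made up):

  volatility -f host.lime --profile=LinuxDebian7x64 linux_pslist
  volatility -f host.lime --profile=LinuxDebian7x64 linux_check_syscall

linux_check_syscall is the sort of plugin that flags a modified syscall
table, which is closer to the runtime-tampering question than comparing
against the on-disk image.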
On Jan 13, 2014 7:21 AM, Tassos Chatzithomaoglou ach...@forthnet.gr
wrote:

 Saku Ytti wrote on 13/1/2014 12:51:
  On (2014-01-13 12:46 +0200), Saku Ytti wrote:
  On (2014-01-13 12:26 +0200), Tassos Chatzithomaoglou wrote:
 
  I'm looking for ways to verify that the currently running software on
 our Cisco/Juniper boxes is the one that is also in the flash/hd/storage/etc.
  IOS: verify /md5 flash:file
  JunOS: file checksum md5|sha-256|sha1 file
 
  But if your system is owned, maybe the verification reads filename and
 outputs
  expected hash instead of correct hash.
  mea culpa, you were looking to check running to image, I don't think
 this is
  practical.
  In IOS its compressed and decompressed upon boot, so no practical way to
 map
  the two together.
  Same is true in JunOS, even without compression it wouldn't be possible
 to
  reasonably map the *.tgz to RAM.
 
  I think vendors could take page from XBOX360 etc, and embed public keys
 inside
  their NPU in modern lithography then sign images, it would be impractical
  attack vector.

 I was assuming the vendors could take a snapshot of the memory and somehow
 compare it to a snapshot of the original software.
 Or (i don't know how easy it is) do an auditing of the memory snapshot on
 specific pointers...well, i don't know...just thinking loudly...
  But changing memory runtime is probably going to very complicated to
 verify,
  easier to create infrastructure/HW where program memory cannot be changed
  runtime.
 
 I agree, and we already do that, but a regulatory authority has brought
 into surface something trickier.

 --
 Tassos





Re: verify currently running software on ram

2014-01-13 Thread shawn wilson
Doh, tired and not reading - the util should help after you get a dump
though.
On Jan 13, 2014 7:29 AM, shawn wilson ag4ve...@gmail.com wrote:

 dd kmem and see if it's what you'd expect (size of ram+swap). If so you
 should be able to look at it

 Also see Volatility
 On Jan 13, 2014 7:21 AM, Tassos Chatzithomaoglou ach...@forthnet.gr
 wrote:

 Saku Ytti wrote on 13/1/2014 12:51:
  On (2014-01-13 12:46 +0200), Saku Ytti wrote:
  On (2014-01-13 12:26 +0200), Tassos Chatzithomaoglou wrote:
 
  I'm looking for ways to verify that the currently running software on
 our Cisco/Juniper boxes is the one that is also in the flash/hd/storage/etc.
  IOS: verify /md5 flash:file
  JunOS: file checksum md5|sha-256|sha1 file
 
  But if your system is owned, maybe the verification reads filename and
 outputs
  expected hash instead of correct hash.
  mea culpa, you were looking to check running to image, I don't think
 this is
  practical.
  In IOS its compressed and decompressed upon boot, so no practical way
 to map
  the two together.
  Same is true in JunOS, even without compression it wouldn't be possible
 to
  reasonably map the *.tgz to RAM.
 
  I think vendors could take page from XBOX360 etc, and embed public keys
 inside
  their NPU in modern lithography then sign images, it would be
 impractical
  attack vector.

 I was assuming the vendors could take a snapshot of the memory and
 somehow compare it to a snapshot of the original software.
 Or (i don't know how easy it is) do an auditing of the memory snapshot on
 specific pointers...well, i don't know...just thinking loudly...
  But changing memory runtime is probably going to very complicated to
 verify,
  easier to create infrastructure/HW where program memory cannot be
 changed
  runtime.
 
 I agree, and we already do that, but a regulatory authority has brought
 into surface something trickier.

 --
 Tassos





Re: NSA able to compromise Cisco, Juniper, Huawei switches

2013-12-31 Thread shawn wilson
On Tue, Dec 31, 2013 at 8:05 AM, Ray Soucy r...@maine.edu wrote:

 This whole backdoor business is a very, very, dangerous game.

While I agree with this (and with the issues brought up about the NSA's
NIST-approved PRNG that RSA used), if I were in their shoes, I would have
been collecting every bit of data I could (i.e., I can't fault them on
PRISM, and I have some serious issues with most of these disclosures). I
don't believe that anyone has said this isn't a big deal. I think
even the NSA has said the exact opposite (for different reasons).

I have no opinion at this point on whether they put a back door in
routers - I think it's possible. Maybe even with multiple moving parts
(submit some HDL to a manufacturer for their own project and allow
them to use it for others under an NDA, knowing that the chip could be
used in hardware and knowing that something would hit that part of the
chip) and no one on either end has to know a back door has been
inserted.

It's also possible that ANT stuff is propaganda (though the ideas in
there are pretty cool and should be implemented under open source).



Re: NSA able to compromise Cisco, Juniper, Huawei switches

2013-12-30 Thread Shawn Wilson


Saku Ytti s...@ytti.fi wrote:
On (2013-12-30 20:30 +1100), sten rulz wrote:


I really think we're doing disservice to an issue which might be at
scale of
human-rights issue, by spamming media with 0 data news. Where is this
backdoor? How does it work? How can I recreate on my devices?

I don't really want you to know how to recreate it until the companies have had 
a chance to fix said issue. I'd hope, if such issues were disclosed, those news 
outlets would go through proper channels of disclosure before going to press 
with it. 


Large audience already seems to largely be in ignore mode about NSA
revelations, since revelations are very noisy but little signal.

I think the NSA is hoping that's the case. But the fact that 60 Minutes did a 
story on the NSA, and that the NSA, POTUS, Congress, and half my Twitter, 
Facebook, and mailing lists are still talking about it (though my networks are 
probably biased), shows that people are still interested. Also, I think 
there's a fair chance SCOTUS will take this up due to differing rulings. 
Before this goes the way of Crypto AG or Clapper, it's got quite a fair distance 
left in it. 



Re: NSA able to compromise Cisco, Juniper, Huawei switches

2013-12-30 Thread shawn wilson
On Mon, Dec 30, 2013 at 8:07 AM, Ray Soucy r...@maine.edu wrote:


 I hope Cisco, Juniper, and others respond quickly with updated images for
 all platforms affected before the details leak.

So, if this plays out nicely (and if it's true, it won't), the fix will come
months before the disclosure. Think: if you're leasing a router from
your ISP, you might not have the ability to update it (or might
violate your contract). So, you need to wait for [manufacturer] to
update, test, and release an update, then you need to work with your
provider to make sure the update gets pushed correctly.

Also, even open hardware isn't completely open - see the Pi - probably
the most open of hardware stacks. The CPU isn't completely open. Also,
see FreeBSD not using hardware PRNG for this reason.



Re: NSA able to compromise Cisco, Juniper, Huawei switches

2013-12-30 Thread shawn wilson
On Mon, Dec 30, 2013 at 1:17 PM, Lorell Hathcock lor...@hathcock.org wrote:
 NANOG:

 Here's the really scary question for me.

 Would it be possible for NSA-payload traffic that originates on our private
 networks that is destined for the NSA to go undetected by our IDS systems?


Yup. Absolutely. Without a doubt.

 For example tcpdump-based IDS systems like Snort has been rooted to ignore
 or not report packets going back to the NSA?  Or netflow on Cisco devices
 not reporting NSA traffic?  Or interface traffic counters discarding
 NSA-packets to report that there is no usage on the interface when in fact
 there is?


Do you detect 100% of malware in your IDS? Why would anyone need to do
anything with your IDS? Craft a PDF, DOC, Java, Flash, or anything
else that can run code that people download all the time, with a payload
of unknown signature. This isn't really a network discussion. This is
just to say - I seriously doubt there's anything wrong with your IDS -
don't skin a cat with a flame thrower, it just doesn't need to be that
hard.

 Here's another question.  What traffic do we look for on our networks that
 would be going to the NSA?


Standard https on port 443 maybe? That's how I'd send it. If you need
to send something bigger than normal, maybe compromise the email
server and have a few people send off some 5 - 10 meg messages?
Depends on your normal user base. If you've got a big, complex user
base, it's not hard to stay under the radar. Google 'Mandiant APT1'
for some real good reading.



Re: The Making of a Router

2013-12-28 Thread Shawn Wilson


Chris Adams c...@cmadams.net wrote:
Once upon a time, Shawn Wilson ag4ve...@gmail.com said:
 I was hoping someone could give technical insight into why this is
good or not and not just buy a box branded as a router because I said
so or your business will fail. I'm all for hearing about the business
theory of running an ISP (not my background or day job) but didn't
think that's what the OP was asking about (and it didn't seem they were
taking business suggestions very well anyway).

There's been some technical insight here I would say.  I'm a big Linux,
Open Source, and Free Software advocate, and I'll use Linux-based
systems for routing/firewalling small stuff, but for high speed/PPS,
get
a router with a hardware forwarding system (I like Juniper myself).

You can build a decently-fast Linux (or *BSD) system, but you'll need
to
spend a good bit of time carefully choosing motherboards, cards, etc.
to
maximize packet handling, possibly buying multiple of each to find the
best working combination.  Make sure you buy a full set of spares once
you find a working combination (because in the PC industry, six months
is a lifetime).  Then you have to build your OS install, tweaking the
setup, network stack, etc.

After that, you have to stay on top of updates and such (so plan for
more reboots); while on a hardware-forwarding router you can mostly
partition off the control plane, on a Linux/*BSD system, the base OS is
the forwarding plane.  Also, if something breaks, falls over under an
attack, etc., you're generally going to be on your own to figure it
out.
Maybe you can Google the answer (and hope it isn't "that'll be fixed in
kernel 3.today's version+2").  Not saying that doesn't happen with
router vendors (quoting RFCs at router engineers is fun), but it is
IMHO less often.

The question becomes: what is your time worth?  You could spend
hundreds
of hours going from the start to your satisfactory in-service router,
and have a potentially higher upkeep cost.  Can you hire somebody with
all the same Linux/*BSD knowlege as yourself, so you are not on-call
for
your home-built router around the clock?

I've used Linux on all my computers for almost 20 years, I develop on
Linux, and contribute to a Linux distribution.  However, when I want to
record TV to watch later, I plug in a TiVo, not build a MythTV box.
There is a significant value in just plug it in and it works, and if
you don't figure your time investment (both up-front and on-going) into
the cost, you are greatly fooling yourself.

I agree with all of this to some degree. IDK whether cost of ownership on a 
hardware router or a desktop is more or less - I just haven't done the research. 
We use them at work, and at home I have Cisco and Linksys gear (plus Linux doing 
some things the router could do, like DHCP) - go figure.

I agree that some network cards and boards work better than others (and am 
partial to the Intel Pro cards - though I'm unsure if they're still the best). 
I would also hesitate to route that much traffic with a PC. Though, I have no 
technical reason for this bias. 

If you have hardware in production, you really should have a spare - whether 
we're talking servers, HDDs, batteries, or routers. Ie, that comment is not 
unique to servers. I also don't think warranty has any bearing on this - I've 
seen servers stay down for over a day because the vendor (both HP and Dell, for 
their respective hardware) screwed up and the company didn't budget for a spare 
board, and I've seen a third of a network taken out because multiple switch 
ports just died. How much would a spare switch have cost compared to 50 people 
not being online?

At any rate, I'm interested in this because I've worked in both environments 
and haven't seen a large difference between the two approaches (never worked at 
an ISP or high-bandwidth web environment though). I do like the PC router 
approach because it allows more versatility wrt dumping packets (no need to dig 
out that 10mbit dumb hub and throttle the whole network), and I can run snort or 
do simple packet inspection with iptables (some routers can do this, but most 
can't or require a license). So I'm sorta leaning toward the PC router as being 
better - maybe not cheaper, but better. 



Re: The Making of a Router

2013-12-27 Thread shawn wilson
On Fri, Dec 27, 2013 at 1:33 AM,  valdis.kletni...@vt.edu wrote:
 On Thu, 26 Dec 2013 11:16:53 -0800, Seth Mattinen said:
 On 12/26/13, 9:24, Andrew D Kirch wrote:
 
  If he can afford a 10G link... he should be buying real gear...  I mean,
  look, I've got plenty of infrastructure horror stories, but lets not
  cobble together our own 10gbit solutions, please?  At least get one of
  the new microtik CCR's with a 10gig sfp+?  They're only a kilobuck... If
  you can't afford that I suggest you can't afford to be an ISP.


 Unless all the money is going into the 10 gig link.

 If you've sunk so much into the 10G link (or anything else, for that matter)
 that you don't have a kilobuck to spare, you're probably undercapitalized to 
 be
 an ISP.


I take issue with this line of thought. Granted, a router is built
with custom ASICs and most network people understand IOS. However,
this is where the benefit of a multi-thousand-buck router ends. Most
have limited RAM, so this limits the size of your policies, how
many routes can be stored, and the like. On a computer with tens
or hundreds of gigs of RAM, this really isn't an issue. Routers also
have slow-ish processors (which is fine for pure routing since they
are custom chips), but if you want to do packet inspection, this can
slow things down quite a bit. You could argue that this is the same
with iptables or pf. However, if you just offload the packets and
analyze the generally boring packets with snort or bro or whatever,
packets flow as fast as they would without analysis. If you have
multiple VPNs, this can start to slow down a router whereas a computer
can generally keep up.

... And then there's the money issue. Sure, if you're buying a gig+
link, you should be able to afford a fully spec'd out router. However,
(in my experience) people don't order equipment with all features
enabled and when you find you need a feature, you have to put in a
request to buy it and then it takes a month (if you're lucky) for it
to be approved. This isn't the case if you use ipt/pf - if the feature
is there, it's there - use it.

And if a security flaw is found in a router, it might be fixed in the
next month... or not. With Linux/BSD, it'll be fixed within a few days
(at the most). And, if your support has expired on a router or the
router is EOL, you're screwed.

I think in the near future, processing packets with GPUs will become
practical, which will make massive real-time deep packet inspection
at 10G+ a real thing.

Granted, your network people knowing IOS when they're hired is a big
win for just ordering Cisco. But, I don't see that as a show stopper.
Stating the scope of what a box is supposed to be used for and not
putting endless crap on it might be another win for an actual router.
However, this is a people/business thing and not a technical issue.

Also, I'm approaching this as more of a question of the best tool for
the job vs pure economics - a server is generally going to be cheaper,
but I generally find a server nicer/easier to configure than a router.



Re: The Making of a Router

2013-12-27 Thread Shawn Wilson
This has gotten a bit ridiculous. 

I was hoping someone could give technical insight into why this is good or not 
and not just buy a box branded as a router because I said so or your business 
will fail. I'm all for hearing about the business theory of running an ISP 
(not my background or day job) but didn't think that's what the OP was asking 
about (and it didn't seem they were taking business suggestions very well 
anyway).

This thread started cool and about 10 posts in, started sucking. 

Warren Bailey wbai...@satelliteintelligencegroup.com wrote:
I propose cage fighting at the next NANOG summit.


Sent from my Mobile Device.


 Original message 
From: Randy Bush ra...@psg.com
Date: 12/27/2013 7:07 PM (GMT-09:00)
To: valdis.kletni...@vt.edu
Cc: North American Network Operators' Group nanog@nanog.org
Subject: Re: The Making of a Router


 Right.  And the point that others are trying to make clear is that if
 that $100K is half your capitalization, you have $200K - and that's
 nowhere near enought to cover all the stuff you're going to hit
 starting an ISP.  (Hint - what's your projected burn rate for the
 first two months of business?)

not to worry.  growth is not going to be an issue doing openflow due to
today's tcam limits.




Re: The Making of a Router

2013-12-26 Thread Shawn Wilson
Totally agree that a routing box should be standalone for tons of reasons. Even 
separating network routing and call routing.

It used to be that BSD's network stack was much better than Linux's under load. 
I'm not sure if this is still the case - I've never been put in the situation 
where the Linux kernel was at its limits. FWIW

Jared Mauch ja...@puck.nether.net wrote:
Have to agree on the below. I've seen too many devices be so integrated
they do no task well, and can't be rebooted to troubleshoot due to
everyone using them. 

Jared Mauch

 On Dec 26, 2013, at 10:55 AM, Andrew D Kirch trel...@trelane.net
wrote:
 
 Don't put all this in one box.




Re: Bandwidth for a weekend @ Gaylord National Harbor, DC metro area

2013-09-17 Thread shawn wilson
I'm not sure of the topology around there, but you can get these 2.4GHz
dishes for *cheap* (I got one at a hamfest for $20 - the RP-SMA converter
cost almost as much). If someone (or a colo) is near there, you might
convince them to put up the same thing and work with that. I think you'd
be ok with an amateur radio operator since you're non-profit (and should
only need one, since they'd be the control for the remote station).

If you set up logistics and aren't the same weekend as Shmoo (I think you
are), I can help with that setup, check the regs, and use my callsign.

On Tue, Sep 17, 2013 at 7:48 AM, Dustin Jurman dus...@rseng.net wrote:

 WISPA.ORG would be a good resource.

 Dustin Jurman C.E.O
 Rapid Systems Corporation
 1211 North Westshore BLVD suite 711
 Tampa, Fl 33607
 Building Better Infrastructure

 On Sep 17, 2013, at 12:38 AM, Christopher Morrow 
 morrowc.li...@gmail.com wrote:

  the katsucon folks do this hotel yearly... maybe finding them would be
 useful?
   http://www.katsucon.com/
 
  On Mon, Sep 16, 2013 at 11:00 PM,  telmn...@757.org wrote:
  I help with an event at the Gaylord at National Harbor in Maryland.
 It's a
  music and video gaming festival. The network team for the event is
 looking
  for options on getting bandwidth for ~5 days across a weekend at the
 hotel.
  RF or handoff in building.
 
  The large array of clearwire modems isn't cutting it any more.
 
  Looking for pointers I can give them. Please email off-list.
 
  Thanks!
 
 - Ethan O'Toole
 
 




Re: Parsing Syslog and Acting on it, using other input too

2013-08-30 Thread Shawn Wilson


Christopher Morrow morrowc.li...@gmail.com wrote:
On Thu, Aug 29, 2013 at 10:50 AM, Don Wilder don.wil...@gmail.com
wrote:
 I wrote a script in Linux that watches for unauthorized login
attempts and
 adds the ip address to the blocked list in my firewall. You might
want to
 search sourceforge for a DYN Firewall and modify it from there.


because fail2ban was too hard to install? or because you just wanted
to test yourself?

Actually I did the same. I use ipset lists (generally with a timeout), take a 
regex or two and a black/white list from a YAML file, and just take input 
(possibly from multiple sources) piped from tail -F. I also store addresses 
for future reference (by the script or otherwise). 

This is quite maintainable as I can look at a list of people who have attacked 
the mail server and compare it to web attacks. Each process is a different type 
of service (different config file) and probably a different ipset. Due to ipset 
not actually doing anything until I make an iptables rule for it, I can run my 
script in a test mode (by default) and just see what happens (check its logs 
and the ipset list it generates). I haven't found the need for this yet but I 
can use cymru to look up how big their net is (see geocidr for an example of 
how to do this in perl) and use a hash:net ipset type and cover a whole net.

Basically, what I'm saying is that doing it this way is quite expandable, isn't 
very hard, and lets me do tons of stuff that fail2ban can't (I don't think - it's 
been a while since I looked). 
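
A stripped-down version of that test-mode flow (set name, timeout, and the
log-parsing itself are omitted or made up here):

  # the log-watching script only populates a set; nothing is blocked yet
  ipset create mail_abusers hash:ip timeout 86400
  ipset add mail_abusers 192.0.2.15
  # review with 'ipset list mail_abusers', then flip on enforcement:
  iptables -I INPUT -p tcp --dport 25 -m set --match-set mail_abusers src -j DROP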



Re: Parsing Syslog and Acting on it, using other input too

2013-08-30 Thread shawn wilson
Ah it seems they do:
https://github.com/fail2ban/fail2ban/blob/master/config/action.d/iptables-ipset-proto6.conf

IDK enough about fail2ban to know whether I can assign a per-proto or
per-log-type config (I assume I can). In which case this does what my script
does and then some. I would probably dump out an ipset save on exit and try
to 'restore' on resume (which /I/ do), and I'm sure there's a way fail2ban
can check a store of addresses and check what network a host belongs to
(instead of just a host).

So, fail2ban is probably the way to go.
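
The dump/restore bit I mean is just (file path arbitrary):

  ipset save > /etc/ipset.rules              # on exit, or from cron
  ipset -exist restore < /etc/ipset.rules    # on resume / at boot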


On Fri, Aug 30, 2013 at 10:00 AM, Christopher Morrow 
morrowc.li...@gmail.com wrote:

 On Fri, Aug 30, 2013 at 8:55 AM, Shawn Wilson ag4ve...@gmail.com wrote:
 
 
  Christopher Morrow morrowc.li...@gmail.com wrote:
 On Thu, Aug 29, 2013 at 10:50 AM, Don Wilder don.wil...@gmail.com
 wrote:
  I wrote a script in Linux that watches for unauthorized login
 attempts and
  adds the ip address to the blocked list in my firewall. You might
 want to
  search sourceforge for a DYN Firewall and modify it from there.
 
 
 because fail2ban was too hard to install? or because you just wanted
 to test yourself?
 
  Actually I did the same. I use ipset lists (generally with a timeout)
 and take a regex or two and black / white list from a YAML file and just
 take (possibly multiple inputs) from piping tail -F. I also store addresses
 for future reference (by the script or otherwise).
 
  This is quite maintainable as I can look at a list of people who have
 attacked the mail server and compare it to web attacks. Each process is a
 different type of service (different config file) and probably a different
 ipset. Due to ipset not actually doing anything until I make an iptables
 rule for it, I can run my script in a test mode (by default) and just see
 what happens (check it's logs and the ipset list it generates). I haven't
 found the need for this yet but I can use cymru to look up how big their
 net is (see geocidr for an example of how to do this in perl) and use a
 hash:net ipset type and cover a whole net.
 
  Basically what I'm saying in doing it this way is quite expandable and
 isn't very hard and I can do tons of stuff that fail2ban can't (I don't
 think - it's been a while since I looked).

 you seem to be describing what fail2ban does... that and some grep of
 syslog for fail2ban messages. If your solution works then great! :)



Re: CableWiFi SSID in Washington DC?

2013-08-26 Thread Shawn Wilson
There are indeed FreePublicWiFi nodes in some areas like Dupont Circle, but 
they're not very convenient most of the time (signal strength or speed issues). 
IIRC there's a Commotion mesh around Columbia Heights which should be much 
faster. Personally, I just use a Mifi and never have any issues. 

 



-Original Message-
From: Alex Buie alex.b...@frozenfeline.net
To: Drew Linsalata drew.linsal...@gmail.com
Cc: NANOG list nanog@nanog.org
Sent: Sun, 25 Aug 2013 20:12
Subject: Re: CableWiFi SSID in Washington DC?

I haven't tried it in DC, but I can confirm that my parents' XFINITY and
grandparents' OO logins both work on the CableWiFi SSIDs in San Francisco,
and friends in DC with XFINITY say theirs work there. I assume it will also
for you.

(cf
http://www.techspot.com/news/48684-five-us-cable-providers-join-forces-to-offer-5-wireless-hotspots.html
)

-alex


On Sun, Aug 25, 2013 at 6:50 PM, Drew Linsalata drew.linsal...@gmail.com wrote:

 What?  Free?  Public?  How can I NOT connect to that?;-)


 On Sun, Aug 25, 2013 at 6:25 PM, chris tknch...@gmail.com wrote:

  Why don't you try a rogue ad hoc FreePublicWifi ? :)
 
 
 



Re: One of our own in the Guardian.

2013-07-14 Thread shawn wilson
Well, I think Google has the right idea with providing Internet by floating
balloons. And the way that cell phone tech has been improving, we might all
have 10G in... 10 years or so?

If Google is providing it, it'll be monitored by our government but hey,
we'll have enough bandwidth to hang ourselves with :)

I really wish more places would just start Internet co-ops.
On Jul 14, 2013 1:10 AM, Mike Lyon mike.l...@gmail.com wrote:

 There are a few wireless providers that serve the Mountain View area..

 -Mike

 Founder
 Ridge Wireless
 www.ridgewireless.net

 Sent from my iPhone

 On Jul 13, 2013, at 21:56, Grant Ridder shortdudey...@gmail.com wrote:

  In Mountain View (the middle of Silicon Valley) the only choice i have is
  overpriced Comcast w/ a 300 gig limit.  I used to chew through 300 gig in a
  week when i was in school.
 
  -Grant
 
  On Sat, Jul 13, 2013 at 9:44 PM, Alex Rubenstein a...@corp.nac.net
 wrote:
 
  Yet, here, where I live, only 47 road miles from New York City, I have a
  cable company who sells me metered (yes, METERED) DOCSIS, for nearly
  $100/month, 35/3. The limitation is like 100 GB/month or something (the
  equivalent of the amount of Netflix or AppleTV my kids watch in a
 weekend)
  No alternatives, no FiOS, no nothing. Well, I can get 3/.768 DSL if I
  please.
 
  Someone, please help me.
 
  Please.
 
 
 
 
 
  Jima said: Really, who has 100/100 at home?
 
  Oddly, those living in Grand Coulee, WA.
 
  I went there once to setup corporate connectivity for a regional tire
  store.
  They ordered the minimal drop, 50/50Mbs. One of the tire changers there
  told me that he had 100/100 at home for $50/month.
 
  This was a town without T-Mobile service. I had to haul out the butt
 set
  and
  clip on to the business POTS lines to turn up the VPN.
 
  Most of rural Central Washington has very good fiber connectivity.
  Forward
  looking Public Utility Districts FTW!
 
  --
  Joe Hamelin, W7COM, Tulalip, WA, 360-474-7474
 
 




Re: One of our own in the Guardian.

2013-07-14 Thread shawn wilson
You're on a continent with the second least amount of light pollution
of all of the continents on earth (iirc) and are somehow surprised
about bad net access? I would question the wisdom of planning a tech
conference there, but not the facility itself.

On Sun, Jul 14, 2013 at 4:16 AM, David Conrad d...@virtualized.org wrote:
 On Jul 14, 2013, at 6:50 AM, Mark Seiden m...@seiden.com wrote:
 and here i am in the icann-selected hotel for the icann conference, and they 
 gave us a total of 500MB of metered usage.

 Trust me, the 500MB limit (per day, and resettable if you go down to the 
 front desk and request more) is the least of your worries:

 % ping trantor.virtualized.org
 
 Request timeout for icmp_seq 179
 Request timeout for icmp_seq 180
 Request timeout for icmp_seq 181
 64 bytes from 199.48.134.42: icmp_seq=104 ttl=40 time=78594.936 ms
 64 bytes from 199.48.134.42: icmp_seq=64 ttl=40 time=119037.553 ms
 64 bytes from 199.48.134.42: icmp_seq=80 ttl=40 time=103268.363 ms (DUP!)
 64 bytes from 199.48.134.42: icmp_seq=80 ttl=40 time=103690.981 ms (DUP!)
 64 bytes from 199.48.134.42: icmp_seq=64 ttl=40 time=120196.719 ms (DUP!)
 64 bytes from 199.48.134.42: icmp_seq=64 ttl=40 time=120333.246 ms (DUP!)
 64 bytes from 199.48.134.42: icmp_seq=85 ttl=40 time=99395.502 ms
 64 bytes from 199.48.134.42: icmp_seq=105 ttl=40 time=79406.728 ms
 Request timeout for icmp_seq 186
 64 bytes from 199.48.134.42: icmp_seq=93 ttl=40 time=94822.040 ms
 Request timeout for icmp_seq 188
 Request timeout for icmp_seq 189
 ...

 Regards,
 -drc





Re: One of our own in the Guardian.

2013-07-14 Thread shawn wilson
On Jul 14, 2013 5:36 AM, Bill Woodcock wo...@pch.net wrote:


 On Jul 14, 2013, at 2:12 AM, shawn wilson ag4ve...@gmail.com wrote:

 You're on a continent with the second least amount of light pollution
 of all of the continents on earth (iirc) and are somehow surprised
 about bad net access? I would question the wisdom of planning a tech
 conference there, but not the facility itself.


 Nope.


Heh nice pic :)

Ok I've been wrong before.


Re: Google's QUIC

2013-06-28 Thread shawn wilson
On Jun 29, 2013 12:23 AM, Christopher Morrow morrowc.li...@gmail.com
wrote:

 On Fri, Jun 28, 2013 at 10:12 PM, Octavio Alvarez
 alvar...@alvarezp.ods.org wrote:
  On Fri, 28 Jun 2013 17:20:21 -0700, Christopher Morrow
  morrowc.li...@gmail.com wrote:
 
 
  Runs in top of UDP... Is not UDP...
 
  If it has protocol set to 17 it is UDP.
 
 
  So QUIC is an algorithm instead of a protocol?

 it's as much a protocol as http is.. I suppose my point is that it's
 some protocol which uses udp as the transport.

 Because of this I don't see any (really) kernel/stack changes
 required, right? it's just something the application needs to work out
 with it's peer(s). No different from http vs smtp...


SCTP was layer 4; if QUIC is the same, then it will need stack changes too. If
QUIC is layer 5 and up, it won't. That might be the difference (I haven't looked into QUIC).

 -chris



Re: PDU recommendations

2013-06-24 Thread shawn wilson
Heh, I wouldn't dream of putting this type of device on the net - nothing
good can come from that.
On Jun 24, 2013 3:04 PM, Alain Hebert aheb...@pubnix.net wrote:

 Hi,

 Yes.

 They are good.

 Nothing I would deploy in a large data center but for a few racks
 they are perfect.

 Beware that they are not built to be connected straight to the
 internet =D.

 The management module can reset depending on packet payload and
 overall traffic.  They should always be behind some sort of firewall
 with rules limiting its access.

 PS: Ours are a few years old, I'm sure APC added some sort of
 security since then, you may want to look 'em up.

 Happy 24th to all.

 -
 Alain Hebertaheb...@pubnix.net
 PubNIX Inc.
 50 boul. St-Charles
 P.O. Box 26770 Beaconsfield, Quebec H9W 6G7
 Tel: 514-990-5911  http://www.pubnix.netFax: 514-990-9443

 On 06/24/13 14:41, Ryan - Lists wrote:
  Does anyone on list have experience with the APC AP7920 switched rack
 PDU, or any of the horizontal rack mountables with management? We're
 looking at these for our remote sites.
 
  Sent from my iPhone
 
  On Jun 24, 2013, at 6:10 AM, Måns Nilsson mansa...@besserwisser.org
 wrote:
 
  Subject: Re: PDU recommendations Date: Sun, Jun 23, 2013 at 09:32:00PM
 -0400 Quoting shawn wilson (ag4ve...@gmail.com):
  So, that's not a very good endorsement :)
 
  Idk why you'd use a fuse in a PDU.
  MCB units age.  Especially with vibration.  A 10A MCB becomes a 9A MCB
 after some miles.
 
  Fuses don't.
 
  MCB units are good at protecting people since they trip quickly and
 aggressively.
 
  Fuses tend to linger before blowing, and thus are comparatively bad at
 protecting
  people (longer shock) but better at protecting infrastructure (surge
  and switch-on-transient resistance).
 
  --
  Måns Nilsson primary/secondary/besserwisser/machina
  MN-1334-RIPE +46 705 989668
  There's a little picture of ED MCMAHON doing BAD THINGS to JOAN RIVERS
  in a $200,000 MALIBU BEACH HOUSE!!
 
 





PDU recommendations

2013-06-23 Thread shawn wilson
We currently use TrippLite stuff, but they've got an issue where after a few
minutes they stop accepting new tcp connections. We're adding a new 30A
circuit and I'm thinking of going with APC (ran them in the past and never
had any issues). However, I figured I'd see if there are better brand /
specific model recommendations for quality or bang / buck?

Specs: 30A, 24+ port, 0U, managed (with ssh), LCD usage display.


RE: PDU recommendations

2013-06-23 Thread shawn wilson
Thanks, I think the amount of love for the APC stuff confirmed my
experience. I'll look at the Raritan PDUs as I like their KVMs but if I
have to use their software or a WebUI to manage it (I use AddleLink for
remote to the KVM because I don't like proprietary management) I won't use
their PDU.

AFAIK (I'll obviously confirm), the circuit is 208VAC and the plug is the
dual-phase NEMA type.
On Jun 23, 2013 3:41 PM, Petter Bruland petter.brul...@allegiantair.com
wrote:

 We're replacing TrippLite with APC. Had two TrippLite SNMP/Web cards stop
 working at random times, and need to be reset. Pain when the datacenter is
 far away. On a different note TrippLite support has been super awesome.

 The APC line we went with, model # escaping me at the moment, only had
 overall unit AMP load indicator. But their SNMP access is nice, for custom
 graphs in Cacti etc, and being able to remote bounce an outlet.

 We did have a few WTI, the really old ones, which did not store the config
 in flash, thus needed to be reconfigured via serial after power outage.
 Even with that annoyance, they had much faster response time from telnet
 (I know, old) turning on/off outlets than either of APC or TrippLite.

 -Petter

 
 From: trit...@cox.net [trit...@cox.net]
 Sent: Sunday, June 23, 2013 12:05 PM
 To: shawn wilson; North American Network Operators Group
 Subject: Re: PDU recommendations

 APC is solid. Their newer line can provide outlet metering. WTI is also
 good...they support redundant circuits. I've seen Baytech died after just
 unplugging a server.
 --Original Message--
 From: shawn wilson
 To: North American Network Operators Group
 Subject: PDU recommendations
 Sent: Jun 23, 2013 8:37 AM

 We currently use Triplite stuff but they've got an issue where after a few
 minutes, they stop accepting new tcp connections. We're adding a new 30A
 circuit and I'm thinking of going with APC (ran them in the past and never
 had any issues). However, I figured I'd see if there was a better brand /
 specific model recommendations for quality or bang / buck?

 Specs: 30A 24+ port 0U, managed (with ssh), lcd use display.





Re: PDU recommendations

2013-06-23 Thread shawn wilson
So, that's not a very good endorsement :)

Idk why you'd use a fuse in a PDU.

The management interface can be rebooted without taking anything down on
the TrippLite but it's at a colo and it *shouldn't* time out like it does.
I think of this like a vehicle computer - if it goes down, you might still
drive for a little but get ready for a crash.
On Jun 23, 2013 9:00 PM, Luke S. Crawford l...@prgmr.com wrote:

 I also have had good experience with (used) servertech/century/power tower
 (I think all the same brand)  -  very inexpensive;  if you are in santa
 clara I have some spare 2u 16 port 208v (20a/c19) units.

 Here is something a buddy wrote up when we were wiring them to the
 user-accessable power on/off menu:

  http://blog.prgmr.com/xenophobia/2012/02/notes-on-setting-up-a-sentry-p.html


 My new rack is all avocent PM3001-401 units.Used, of course;  but the
 feature I was after was per-port power monitoring.

 I haven't quite gotten 'em all the way figured out yet.   One thing I see
 as a negative (but might be positive?) is that they have fuses, not
 breakers.I don't know if this provides better protection;  I do know
 that when my buddy overloaded one of them in testing, I had to replace
 fuses, rather than just switching a breaker back.   (also, when a different
 buddy plugged a ancient desktop (so old the PSU wasn't auto-switch)  with
 the power input switch set to 110 in, it blew some of the fuses in the PDU
 (and took out the rack)  - it didn't damage any of the other servers on the
 pdu (other than taking out the PDU;  but everything came back up when I
 swapped it with my spare.)

 Also note, uh, the servertech and the avocent and I think all the other
 PDUs I've seen can reboot the management interface without flipping the
 outlets.  I did it a bunch when I was getting familiar with the avocent.

 Yeah.  I think I need to give fewer buddies access to production.


 Nobody takes hardware seriously enough.  I can find people I trust with
 root, and that trust doesn't seem misplaced.But I let them touch the
 hardware?  and they fuck it up.So I end up doing almost all the
 hardware stuff myself.



 On 06/23/2013 04:48 PM, Trey Valenta wrote:

 I'll also throw out recommendations for ServTech PDUs. They have an
 affordable line of PDUs with static transfer switches that are particularly
 attractive for all your single-power-supply devices.






Re: /25's prefixes announced into global routing table?

2013-06-22 Thread shawn wilson
RFC 3587 - IPv6 Global Unicast Address Format
On Jun 22, 2013 6:50 AM, John Curran jcur...@istaff.org wrote:

 On Jun 22, 2013, at 1:45 AM, Owen DeLong o...@delong.com wrote:

  Yes… It will probably settle out somewhere around 100-125K routes.

 Owen -

   Can you elaborate some on this estimate?  (i.e. what approximations
   and/or assumptions are you using to reach this number?)

 Thanks!
 /John





Re: This is a coordinated hacking. (Was Re: Need help in flushing DNS)

2013-06-20 Thread shawn wilson
I think ICANN would have to add a delay where a request is sent out to
make sure everyone is on the same page - and then what happens the couple
thousand (more) times a day that someone isn't updated or is
misconfigured?

I think Netsol should be fined. Maybe even a class action suit filed
against them for lost business. And that's it.
On Jun 20, 2013 11:28 PM, Hal Murray hmur...@megapathdsl.net wrote:


  at what point is the Internet a piece of infrastructure whereby we
  actually need a way to watch this thing holistically as it is one system
 and
  not just a bunch of inter-jointed systems? Who's job is it to do nothing
 but
  ensure that the state of DNS and other services is running as it
  shouldwho's the clearing house here.

  The Internet:  Discovering new SPOF since 1969!
 :)  Thanks.

 Perhaps we should setup a distributed system for checking things rather
 than
 another SPOF.  That's distributed both geographically and administratively
 and using several code-bases.

 In this context, I'd expect lots of false alarms due to people changing
 their
 DNS servers but forgetting to inform their monitoring setup (either
 internal
 or outsourced).

 How would you check/verify that the communication path from the monitoring
 agency to the right people in your NOC was working correctly?


 --
 These are my opinions.  I hate spam.







Re: Blocking TCP flows?

2013-06-13 Thread shawn wilson
Jonathan is correct about not using perl for this. There are some perl iptables
modules, but they're all out of date or incomplete (I mention this because
if you get around to making them work decently, I'll love you for it).
Otherwise, perl -> IPC::Run -> iptables isn't going to gain you anything. And
I'd be amazed if you could even keep up with a gbit.

As for signature detection, see Bro. Though, it seems the ipt state module
might fit the bill just fine. And you could log that and then have an ETL
that scraped your log file and created a new ACL based on that (so that
hardware could do the majority of the work). I'm sure an ipt -> ACL pipeline
isn't a new idea and you can probably find something that handles most edge cases.
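
If the hardware can't take the ACL, the ipt side of that pipeline looks
roughly like this (set and chain names made up; it keys on src IP,
protocol/dst port, and dst IP rather than the full 5-tuple):

  ipset create blocked_flows hash:ip,port,ip timeout 600
  iptables -I FORWARD -m set --match-set blocked_flows src,dst,dst -j DROP
  # the log-scraping ETL then just adds entries per offending flow:
  ipset add blocked_flows 198.51.100.7,tcp:443,192.0.2.10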
On Jun 13, 2013 7:12 PM, Phil Fagan philfa...@gmail.com wrote:

 Yeah, I only thought of perl cause I'm used to running through 'while true'
 loops and someone showed me Perl was about 400x faster... good thing I'm
 not running through 10gb/s worth of data :-D

 Figured getting closer to hardware was the way to go. I'll have to check
 out PF_RING.




 On Thu, Jun 13, 2013 at 4:49 PM, Jonathan Lassoff j...@thejof.com wrote:

  On Thu, Jun 13, 2013 at 3:38 PM, Phil Fagan philfa...@gmail.com wrote:
   I would assume something FreeBSD based might be best
 
  Meh... personal choice. I prefer Linux, mostly because I know it best
  and most network application development is taking place there.
 
   On Thu, Jun 13, 2013 at 4:37 PM, Phil Fagan philfa...@gmail.com
 wrote:
  
   I really like the idea of a stripe of linux boxes doing the heavy
  lifting.
   Any suggestions on platforms, card types, and chip types that might be
   better purposed at processing this type of data?
 
  Personally, I'd use modern-ish Intel Ethernet NICs. They seem to have
  the best support in the kernel.
 
   I assume you could write some fast Perl to ingest and manage the
 tables?
   What would the package of choice be for something like this?
 
  Heh...  fast Perl.
  As for programming the processing, I would do as much as possible in
  the kernel, as passing packets off to userland really slows everything
  down.
  If you really need to, I'd do something with Go and/or C these days.
 
  Using iptables and the string module to match patterns, you can chew
  through packets pretty efficiently. This comes with the caveat that
  this can only match against strings contained within a single packet;
  this doesn't do L4 stream reconstruction.
 
  You can do some incredibly-parallel stuff with ntop's PF_RING code, if
  you blow more traffic through a single core than it can chew through.
 
  It all depends on what you're trying to do.
 
  --j
  
  
   On Thu, Jun 13, 2013 at 3:11 PM, Jonathan Lassoff j...@thejof.com
  wrote:
  
   Are you trying to block flows from becoming established, knowing what
   you're looking for ahead of time, or are you looking to examine a
   stream of flow establishments, and will snipe off some flows once
   you've determined that they should be blocked?
  
   If you know a 5-tuple (src/dst IP, IP protocol, src/dst L4 ports) you
   want to block ahead of time, just place an ACL. It depends on the
   platform, but those that implement them in hardware can filter a lot
   of traffic very quickly.
   However, they're not a great tool when you want to dynamically
   reconfigure the rules.
  
   For high-touch inspection, I'd recommend a stripe of Linux boxes,
 with
   traffic being ECMP-balanced across all of them, sitting in-line on
 the
   traffic path. It adds a tiny bit of latency, but can scale up to
   process large traffic paths and apply complex inspections on the
   traffic.
  
   Cheers,
   jof
  
   On Thu, Jun 13, 2013 at 12:32 PM, Eric Wustrow ew...@umich.edu
  wrote:
Hi all,
   
I'm looking for a way to block individual TCP flows (5-tuple) on a
  1-10
gbps
link, with new blocked flows being dropped within a millisecond or
 so
of
being
added. I've been looking into using OpenFlow on an HP Procurve,
 but I
don't
know much in this area, so I'm looking for better alternatives.
   
Ideally, such a device would add minimal latency (many/expandable
 CAM
entries?), can handle many programatically added flows (hundreds
 per
second),
and would be deployable in a production network (fails in bypass
  mode).
Are
there any
COTS devices I should be looking at? Or is the market for this all
under
the table to
pro-censorship governments?
   
Thanks,
   
-Eric
  
  
  
  
   --
   Phil Fagan
   Denver, CO
   970-480-7618
  
  
  
  
   --
   Phil Fagan
   Denver, CO
   970-480-7618
 



 --
 Phil Fagan
 Denver, CO
 970-480-7618



Re: chargen is the new DDoS tool?

2013-06-12 Thread shawn wilson
This is basically untrue. I can deal with a good rant as long as there's
some value in it. As it is (I'm sorta sorry) I picked this apart.

On Jun 12, 2013 12:04 AM, Ricky Beam jfb...@gmail.com wrote:

 On Tue, 11 Jun 2013 22:55:12 -0400, valdis.kletni...@vt.edu wrote:



 But seriously, how do you measure one's security?

Banks and insurance companies supposedly have some interesting actuarial
data on this.

 The scope is constantly changing.

Not really. The old tricks are the best tricks. And when a default install
of Windows still allows you to request old NTLM authentication and most
people don't think twice about this, there's a problem.

 While there are companies one can pay to do this, those reports are
*very* rarely published.

It seems you are referring to two things - exploit writing vs pen testing.
While I hate saying this, there are automated tools that could clean up
most networks for a few K (they can also take down things if you aren't
careful so I'm not saying spend 2k and forget about it). Basically, not
everyone needs to pay for a professional test out of the gate - fix the
easily found stuff and then consider next steps.

As for exploit writing, you can pay for this and have an 0day for between
$10k and $50k (AFAIK - not what I do with my time / money), but while you've
got stuff with known issues on the net that any scanner can find, thinking
someone is going to use an 0day to break into your stuff is a comical wet
dream.

 And I've not heard of a single edu performing such an audit.

And you won't. I'm not going to tell you about past problems with my stuff
because, even after I think I've fixed everything, maybe I missed something
that you can now easily find with the information I've disclosed. There are
information sharing agreements between entities generally in the same
industry (maybe even some group like this for edu?). But while that will
help with sources and signatures, if your network is like a sieve, fix that
first :)

 The only statistics we have to run with are of *known* breaches.

As I indicated above, 0days are expensive and no one is going to waste one
on you. Put another way, if someone does, go home proud - you're in with
the big boys (military, power plants, spy agencies); someone paid top
dollar for your stuff because you had everything else closed.

 And that's a very bad metric as a company with no security at all that's
had no (reported) intrusions appears to have very good security, while a
company with extensive security looks very bad after a few breaches.

I'll take that metric any day :) Most companies only disclose a break-in if
they leak customer data. The only recent example I can think of where this
wasn't true was the Canadian company that develops SCADA software
disclosing that China stole their stuff. Second, if you look at the stocks
of public companies a year after they were hacked, they're always up. The
exception to this is HBGary, who pissed off Anonymous and are no longer in
business (they had shady practices that were disclosed by the hack - don't
do this).

 One has noone sniffing around at all, while the other has teams going at
it with pick-axes.

If you have no one sniffing around, you've got issues.

 One likely has noone in charge of security, while the other has an entire
security department.

Whether you have a CSO in name or not might not matter. Depending on the
size of the organization (and politics), a CTO that understands security
can do just as much.


Re: chargen is the new DDoS tool?

2013-06-12 Thread shawn wilson
On Wed, Jun 12, 2013 at 4:51 AM, Jimmy Hess mysi...@gmail.com wrote:
 On 6/12/13, shawn wilson ag4ve...@gmail.com wrote:

 The scope is constantly changing.
 Not really. The old tricks are the best tricks. And when a default install
 By best, you must mean effective against the greatest number of targets.


By best, I mean effective - end of story.

 of Windows still allows you to request old NTLM authentication and most
 people don't think twice about this, there's a problem.

 Backwards compatibility and protocol downgrade-ability is a PITA.


Yes, telling people that NT/2k can't be on your network might be a
PITA, but not using software or hardware that has gone EOL is
sometimes just a sensible business practice.

 It seems you are referring to two things - exploit writing vs pen testing.
 While I hate saying this, there are automated tools that could clean up
 most networks for a few K (they can also take down things if you aren't
 careful so I'm not saying spend 2k and forget about it). Basically, not

 For the orgs that the 2K tool is likely to be most useful for,  $2k is
 a lot of cash.
 The scan tools that are really worth the trouble start around 5K,  and
 people don't like making much investment in security products,  until
 they know they have a known breach on their hands. Many are likely
 to forego both,  purchase the cheapest firewall appliance they can
 find, that claims to have antivirus functionality,  maybe some
 stateful TCP filtering, and Web policy enforcement to restrict surfing
 activity; and feel safe,  the firewall protects us, no other
 security planning or products or services  req'd.


I don't really care to price stuff so I might be a little off here
(most of this stuff has free components). Nessus starts at around $1k,
Armitage is about the same (but no auto-pwn, darn), Metasploit Pro is
a few grand. My point being, you can have a decent scanner (Nessus)
catching the really bad stuff for not much money (I dislike this line
of thought, but if you aren't knowledgeable enough to use the tools and
just want a report for a grand, there you go).

 As I indicated above, 0days are expensive and no one is going to waste one
 on you. Put another way, if someone does, go home proud - you're in with
 [snip]

 I would call this wishful thinking;  0days are expensive,  so the
 people who want to use them, will want to get the most value they can
 get out of the 0day, before the bug gets fixed.


0days are expensive, so when you see them, someone (Google, Firefox,
Adobe, etc.) has generally paid for them. Once you see them, they are
not 0days (despite what people like to call recently disclosed public
vulns - it ain't an 0day).

 That means both small numbers of high value targets, and,  then...
 large numbers of lesser value targets. If you have a computer
 connected to the internet, some bandwidth, and a web browser or e-mail
 address, you are a probable target.


No, this means Stuxnet, Duqu, Flame. This means: I spent a million on
people pounding on stuff for a year, and I'm going to take out a nuclear
facility or go after Google or RSA. I want things more valuable than
your students' social security numbers.

 If a 0day is used against you,  it's most likely to be used against
 your web browser  visiting a trusted  site you normally visit.


I don't have anything to back this up offhand, but my gut tells me
that most drive-by web site malware isn't that well thought out.

 The baddies can help protect their investment in 0day exploit code,
 by making sure that by the time you detect it,  the exploit code is
 long gone,  so  the infection vector will be unknown.


If the US government can't prevent companies from analyzing their
work, do you really think random baddies can? Seriously?... No
really, seriously?

Here's the point: once you use an 0day, it is not an 0day. It's burnt.
It might still work on some people, but chances are all your high
value targets know about it and it won't work on them.



Re: chargen is the new DDoS tool?

2013-06-12 Thread shawn wilson
On Wed, Jun 12, 2013 at 7:14 AM, Aaron Glenn aaron.gl...@gmail.com wrote:
 On Wed, Jun 12, 2013 at 11:17 AM, shawn wilson ag4ve...@gmail.com wrote:


 Banks and insurance companies supposedly have some interesting actuarial
 data on this.


 Do you know of any publicly available sources?


I don't. There's a US entity that represents credit card companies
that has its own version of the Verizon Data Breach Investigations
Report where you might find some info of this type. You might also look
at how/if AlienVault and others rank threats, which should give you a
"how hard is this hack" and "how hard is this to fix" figure.

The theory behind generating this type of actuarial data should be
more available than it is. I have a feeling that companies that have
this information look at entities in the same type of business and
make educated guesses on how breaches affected their bottom line based
on stock value and the like. There is probably some private data
sharing here as well.



Re: chargen is the new DDoS tool?

2013-06-12 Thread shawn wilson
Getting back to the topic: I just saw quite a few of our hosts scanned
for this by 192.111.155.106, which doesn't say much on its own, as
http://dacentec.com/ is a hosting company.

On Tue, Jun 11, 2013 at 11:27 PM, Ricky Beam jfb...@gmail.com wrote:
 On Tue, 11 Jun 2013 22:52:52 -0400, Jimmy Hess mysi...@gmail.com wrote:

 Who really has a solid motive to make them stop working (other than a
 printer manufacturer who wants to sell them more) ?


 Duh, so people cannot print to them. (amongst various other creative pranks)

 From a cybercriminal pov, to swipe the things you're printing... like that
 CC authorization form you just printed, or a confidential contract, etc.
 (also, in many offices, the printer is also the scanner and fax)

 --Ricky




Re: PRISM: NSA/FBI Internet data mining project

2013-06-06 Thread shawn wilson
On Jun 6, 2013 9:30 PM, Jeff Kell jeff-k...@utc.edu wrote:


 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1

 On 6/6/2013 9:22 PM, valdis.kletni...@vt.edu wrote:
  On Thu, 06 Jun 2013 21:12:35 -0400, Robert Mathews (OSIA) said:
  On 6/6/2013 7:35 PM, Jay Ashworth wrote:
  [ . ]   Happily, none of the companies listed are transport
 networks:
 
  Could you be certain that TWC, Comcast, Qwest/CenturyLink could not be
  involved?
 
  Pay attention.  None of the ones *listed* are transport networks.
  Doesn't mean they're not involved but unlisted (as of yet).
 

 Umm... CALEA.  They've *already* had access for quite some time.


AFAIK, CALEA doesn't mean data is collected by default for everyone on
their network. You use the word 'access', which doesn't convey anything to
me - a network switch might have access to all the data on the network but
you might only see some of it.

Should law enforcement have easy access to some data? Absolutely. If my
phone is ever stolen, I want the next cop car driving by the thief's
location to retrieve my phone and pick up the thief. But I'd prefer some
fat dude in an office not see pictures my grandmother emails to me.

Is there a way to do both? Sure. The way I'd like it done is to make all
requests for data open and respond to FOIA requests within a month. Or,
easier option - any data LE requests goes online. This way, if you have a
reason to request data for a whole state and your family happens to live
there, you know that conversations between your family members will also be
publicly available, so you'll be more likely to limit the scope.


Re: Geoip lookup

2013-05-25 Thread shawn wilson
If anyone is interested, here's a little Perl CLI util to look up which
countries registered the networks within a block. There's no documentation
yet, it's a .pl where it should probably be a command with a makefile
installer, and Net::CIDR overlaps Net::IP. At any rate, hopefully it
is useful to someone.

https://github.com/ag4ve/geocidr

PS - do note the -mask option (where you can define, say, a /20, /21, or
/22) so that you're not sitting there banging on Team Cymru's DNS looking
up tons of /32s for blocks they don't have any information on.
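If the repo is overkill for what you need, the lookup itself is just a TXT
query against Team Cymru's origin zone. A minimal Perl sketch (placeholder
address, next to no error handling) looks something like:

  #!/usr/bin/perl
  use strict;
  use warnings;
  use Net::DNS;

  # Reverse the octets and ask Team Cymru's origin zone for the
  # covering prefix, origin ASN, and registered country code.
  my $ip   = shift // '192.0.2.1';    # placeholder address
  my $name = join('.', reverse split /\./, $ip) . '.origin.asn.cymru.com';

  my $res   = Net::DNS::Resolver->new;
  my $reply = $res->query($name, 'TXT') or die "no answer for $name\n";

  for my $rr (grep { $_->type eq 'TXT' } $reply->answer) {
      # TXT data looks like: "ASN | prefix | CC | registry | date"
      my ($asn, $prefix, $cc) = split /\s*\|\s*/, $rr->txtdata;
      print "$prefix is registered in $cc (AS$asn)\n";
  }

Which is also why -mask matters: one query per /20 or /22 instead of one
per /32.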

On Sat, May 25, 2013 at 6:44 AM, John Curran jcur...@arin.net wrote:
 On May 24, 2013, at 10:47 AM, David Conrad d...@virtualized.org wrote:

 I replied privately to Owen, but might as well share:

 On May 23, 2013, at 11:57 PM, Owen DeLong o...@delong.com wrote:
 True, according to (at least some of) the RIRs they reside in regions...
 Really? Which ones? I thought they were only issued to organizations that 
 had operations in regions.
 That was exactly my point, Bill... If you have operations in RIPE and ARIN 
 regions, it is entirely possible for you to obtain addresses from RIPE or 
 ARIN and use them in both locations, or, obtain addresses from both RIPE 
 and ARIN and use them in their respective regions, or mix and match in just 
 about any imaginable way. Thus, IP addresses don't reside in regions, 
 either. They are merely issued somewhat regionally.

 A direct quote from a recent interaction with ARIN (this was requested by 
 ARIN staff as part of the back and forth for requesting address space):

 Please reply and verify that you will be using the requested number 
 resources within the ARIN region and announcing all routing prefixes of the 
 requested space from within the ARIN region. In accordance with section 2.2 
 of the NRPM, ARIN issues number resources only for use within its region. 
 ARIN is therefore only able to provide for your in-region numbering needs.

 I believe AfriNIC and LACNIC have similar limitations on use but am too lazy 
 to look it up (and I don't really care all that much: just thought it was 
 amusing).

 Indeed.  This was covered in more detail in the Policy Experience Report
 given at the ARIN 31, in which it was noted that we are seeing an increase
 in requests for IPv4 address space from parties who have infrastructure in
 the region, but for customers entirely from outside the region.  This has
 resulted in a significant change in the issuance rate and therefore any
 estimates for regional free pool depletion.  ARIN has sought guidance from
 the community regarding what constitutes appropriate in-region use, should
 this be based on infrastructure or served customers, and whether incidental
 use outside the region is appropriate.  (This topic was also on this list on
 26 April 2012 - see attached email from that thread)   Policy proposals in
 this area to bring further clarity in address management are encouraged.

 FYI,
 /John

 John Curran
 President and CEO
 ARIN

 ===
 Begin forwarded message:

 From: John Curran jcur...@arin.net
 Subject: Re: It's the end of the world as we know it -- REM
 Date: April 26, 2013 10:43:51 AM EDT
 To: nanog@nanog.org Group nanog@nanog.org

 On Apr 26, 2013, at 10:23 AM, Chris Grundemann cgrundem...@gmail.com wrote:

 One interesting twist in all of this is that several of these new
 slow-start players in the ARIN region seem to be servicing customers
 outside of the region with equipment and services hosted here inside
 the ARIN region (see slide 12 on the ARIN 31 Policy Implementation
 and Experience Report
 https://www.arin.net/participate/meetings/reports/ARIN_31/PDF/monday/nobile_policy.pdf).

 NANOG Folks -

 Please read this slide deck, section noted by Chris.  It explains the
 situation...  (I would not call the sudden acceleration in IP address
 issuance a problem, per se, as that is a judgement for the community
 either way.)

 FYI,
 /John

 John Curran
 President and CEO
 ARIN







Re: Geoip lookup

2013-05-24 Thread shawn wilson
I knew this would come up. Actually I'm surprised and glad it waited until
I got a solution first.

I'll address a few points:
- this is mainly to stop stupid things from sending packets from countries
we will probably never want to do business with (I'm looking mainly at that
big country under APNIC).
- I'd prefer a solution that blocks all traffic that is routed through
those countries so that they could never see data from us (and when
Jin-rong messes up a configuration and reroutes ~10% of traffic through
them for half an hour, I don't see any of that traffic). Since I have no
idea how one would go about doing this, just blocking traffic from IP
addresses registered in certain countries is good enough (one way to
consume such a list is sketched after these points).
- it is well known (by everyone on this list, at least, I think) that you
can evade geographic placement of your origin by tunneling. Given this, I
fail to see the point in bringing up that GeoIP doesn't work. Also, if it
doesn't work, why do content providers, CDNs, Google, and streaming
services rely on it as part of their business model? The sad truth of the
matter is that it does work, and surprisingly well. We just don't like it
because it's brittle and a user can fool us (I know Akamai and the like
also look at round-trip time because they know there are issues). Given all
of this, how often is the country an IP address appears to originate from,
based on what's listed for the particular ASN, actually fiction?
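For the record, one way to consume a list like this without thousands of
individual firewall rules is an ipset (set name and input file are
placeholders):

  ipset create geoblock hash:net
  while read cidr; do ipset add geoblock "$cidr"; done < blocks.txt
  iptables -I INPUT -m set --match-set geoblock src -j DROP

Rebuilding the set when the registry data changes is much cheaper than
reloading a long list of individual iptables rules.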

The input was invaluable for getting me where I wanted to be, so thanks
again.
On May 24, 2013 2:59 AM, Owen DeLong o...@delong.com wrote:


 On May 23, 2013, at 23:49 , bmann...@vacation.karoshi.com wrote:

  On Thu, May 23, 2013 at 11:39:12PM -0700, Owen DeLong wrote:
 
  On May 23, 2013, at 23:17 , David Conrad d...@virtualized.org wrote:
 
  On May 23, 2013, at 10:53 PM, Andreas Larsen 
 andreas.lar...@ip-only.se wrote:
  The whole idea of Geoip is flawed.
 
  Sure, but pragmatically, it's an 80% solution.
 
  IP doesn't reside in countries,
 
  True, according to (at least some of) the RIRs they reside in
 regions...
 
 
  Really? Which ones? I thought they were only issued to organizations
 that had operations in regions.
 
  Owen
 
Just because I have operations in one region does not preclude me
 from having operations
in other regions.  YMMV of course.
 
  /bill

 That was exactly my point, Bill... If you have operations in RIPE and ARIN
 regions, it is entirely possible for you to obtain addresses from RIPE or
 ARIN and use them in both locations, or, obtain addresses from both RIPE
 and ARIN and use them in their respective regions, or mix and match in just
 about any imaginable way. Thus, IP addresses don't reside in regions,
 either. They are merely issued somewhat regionally.

 Owen





Geoip lookup

2013-05-23 Thread shawn wilson
What's the best way to find the networks in a country? I was thinking of
writing some Perl with Net::Whois::ARIN or some such module and looping
through the block. But I think I'll have to be smarter than a simple loop
to avoid getting blocked, and I figure I'm not the first to want to do this.

I've noticed some paid databases out there. They don't cost much, but are
they even worth what they charge? E.g., countryipblocks.net doesn't list
quite a few addresses from a country I've looked at blocking. Isn't this
information free from the different *NICs anyway?

This is probably two questions: is there a program that smartly looks for a
country's blocks within a larger block, and are GeoIP services worth
anything?


Re: Geoip lookup

2013-05-23 Thread shawn wilson
On Thu, May 23, 2013 at 4:32 PM, Joe Abley jab...@hopcount.ca wrote:

 On 2013-05-23, at 15:47, shawn wilson ag4ve...@gmail.com wrote:

 What's the best way to find the networks in a country? I was thinking of
 writing some perl with Net::Whois::ARIN or some such module and loop
 through the block. But I think I'll have to be smarter than just a simple
 loop not to get blocked and I figure I'm not the first to want to do this.

 If you are looking for registration data, try looking in one or more of

   ftp://ftp.apnic.net/public/apnic/stats/apnic/
   ftp://ftp.ripe.net/ripe/dbase/
   ftp://ftp.lacnic.net/pub/stats/lacnic/
   ftp://ftp.afrinic.net/stats/afrinic/
   ftp://ftp.arin.net/pub/stats/arin/

 (poke around and see what you can find; I didn't spend much time trying, but 
 several/all of the RIRs seem to mirror data from all the others)

Thanks


 Note that networks in a country is a funny phrase. The sets

  - address space assigned to all organisations located in country X
  - routes visible in country X (from some viewpoint)
  - all addresses assigned to devices physically located within country X
  - routes that are considered in-country in places where billing is aligned 
 with the necessity to traverse a long bit of wet glass

 are frequently incongruent. If this matters, you might want to consider a 
 more detailed specification of networks in a country.


I had somewhat considered the second and the fourth point. I assumed
by using whois data, I am getting the second of those options and that
was good enough. If there's a way to (somewhat easily) implement the
third option, I'm all ears.



Re: Geoip lookup

2013-05-23 Thread shawn wilson
On Thu, May 23, 2013 at 4:40 PM, shawn wilson ag4ve...@gmail.com wrote:
 On Thu, May 23, 2013 at 4:32 PM, Joe Abley jab...@hopcount.ca wrote:

 On 2013-05-23, at 15:47, shawn wilson ag4ve...@gmail.com wrote:



   ftp://ftp.apnic.net/public/apnic/stats/apnic/
   ftp://ftp.ripe.net/ripe/dbase/
   ftp://ftp.lacnic.net/pub/stats/lacnic/
   ftp://ftp.afrinic.net/stats/afrinic/
   ftp://ftp.arin.net/pub/stats/arin/


It looks like you're right and everyone does have the same data in
historical format. It looks like RIPE has everything compiled into what is
current. So if a block hasn't changed for 10 years, it'll be in the RIPE
dataset, whereas with the others I'd have to write something to overlay the
data throughout time to get to current?
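For anyone doing the same dance, here's a rough sketch of pulling one
country's IPv4 blocks out of one of those delegated stats files (field
layout assumed to be registry|cc|type|start|value|date|status; the country
code below is a placeholder):

  #!/usr/bin/perl
  use strict;
  use warnings;
  use Socket;      # inet_aton / inet_ntoa
  use Net::CIDR;

  # Read a delegated-* stats file on stdin and print the IPv4 CIDR
  # blocks registered under one country code.
  my $want = uc(shift // 'US');    # placeholder country code

  while (<>) {
      next if /^#/;
      my ($rir, $cc, $type, $start, $value) = split /\|/;
      next unless defined $value and $type eq 'ipv4' and $cc eq $want;

      # 'value' is a count of addresses, so turn start+count into a range
      # and let Net::CIDR split it into proper prefixes.
      my $first = unpack 'N', inet_aton($start);
      my $last  = inet_ntoa(pack 'N', $first + $value - 1);
      print "$_\n" for Net::CIDR::range2cidr("$start-$last");
  }

(And the overlay-through-time question should mostly go away if there's a
current snapshot in those stats directories - the delegated-*-latest files,
if memory serves - rather than only the day-by-day archive.)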


