Re: Zayo opinions

2014-11-12 Thread Jonathan Lassoff
Zayo owns what used to be Abovenet.

In my experience, service quality varies from market to market,
depending on which underlying network you're riding on.

As of late, we've had repeated capacity issues and packet loss in the San
Francisco Bay Area; other metros, however, have been perfectly stable.

On Wed, Nov 12, 2014 at 1:16 PM, james jones ja...@freedomnet.co.nz wrote:

 I am currently going through vendor selection for tier 1 providers. I
 was trying to get some opinions on Zayo. I have personally never heard of
 them. Thoughts?



Re: Keeping Track of Data Usage in GB Per Port

2014-10-15 Thread Jonathan Lassoff
On Wed, Oct 15, 2014 at 12:38 PM, Colton Conor colton.co...@gmail.com wrote:
 So based on the response I have received so far it seems cable was a
 complicated example with service flows involved. What if we are talking
 about something simpler like keeping track of how much data flows in and
 out of a port on a switch in a given month? I know you can use SNMP, but I
 believe that polls at intervals and takes samples, which isn't really
 accurate, right?

It depends on what you're talking about.

Network devices implementing the SNMP IF-MIB keep counters for each
interface that, when polled, report the number of bytes transmitted
and received.
Conventionally, network operators poll these counter values, compute
the difference from the previous poll, and extrapolate a rate (bit
volume per unit of time) from that. Often, this is done over a
5-minute interval, which introduces some averaging error.

However, if an operator is just computing cumulative transfer, it's pretty easy:
just keep summing the counter-value deltas from poll to poll.
It's easy to get this wrong if the counter size is too small, or if
the counter rolls over more than once between polls.
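The delta arithmetic can be sketched like so (a minimal illustration assuming 32-bit ifInOctets-style counters and at most one wrap per polling interval; the SNMP polling itself is out of scope):

```python
# Sketch: accumulate transfer from periodic counter polls, handling a
# single 32-bit counter wrap between polls. If the counter can wrap
# more than once per interval, this silently undercounts; poll faster
# or use the 64-bit ifHC* counters instead.
COUNTER32_MAX = 2**32

def counter_delta(prev, curr, modulus=COUNTER32_MAX):
    """Bytes transferred between two polls of a wrapping counter."""
    if curr >= prev:
        return curr - prev
    # Counter wrapped (at most once, by assumption).
    return (modulus - prev) + curr

def cumulative_transfer(samples):
    """Sum deltas across a sequence of raw counter readings."""
    total = 0
    for prev, curr in zip(samples, samples[1:]):
        total += counter_delta(prev, curr)
    return total
```

The same delta logic also feeds the 5-minute rate computation: divide each delta by the polling interval to get bytes per second.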


If a large telecom can't get billing correct, they shouldn't be
allowed to do business.
Easier solution: stop metering customers, and sink more money into
expanded infrastructure.


Re: BGP Session

2014-07-19 Thread Jonathan Lassoff
An Anycasting node. For example, as part of a reliable DNS service.
A /24 is usually the smallest prefix length that is portably accepted.

Also, applications where connections need to appear to be coming from many
source IPs.

On Saturday, July 19, 2014, Suresh Ramasubramanian ops.li...@gmail.com
wrote:

 A single linux box with a whole /24 on it? What sort of use case is that,
 BTW?
  On 19-Jul-2014 10:26 pm, Abuse Contact stopabuseandrep...@gmail.com
 wrote:

  I know, the DC is going to be giving me a BGP session on their router so
 I
  can set it up, I'm not using a Linux server as a router.
 
 
  On Sat, Jul 19, 2014 at 9:04 AM, William Herrin b...@herrin.us wrote:
 
   On Wed, Jul 16, 2014 at 4:05 AM, Abuse Contact
   stopabuseandrep...@gmail.com wrote:
So I just purchased a Dedicated server from this one company and I
  have a
/24 IPv4 block that I bought from a company on WebHostingTalk, but I
 am
clueless on how to setup the /24 IPv4 block using the BGP Session. I
  want
to set it up to run through their network as if it was one of their
  IPs,
etc. I keep seeing things like iBGP (which I think means like a inner
routing BGP) and eBGP (what I'm talking about??) but I have no idea
 how
   to
set those up or which one I would need.
  
   Howdy,
  
   Unless you have (1) a real router available, not just a server, and
   (2) an expert available to help you with your first BGP configuration
   I strongly recommend you simply ask your service provider to announce
   the /24 to the Internet on your behalf.
  
   Server-based BGP software like Quagga for Linux is reasonably good but
   it should absolutely not be involved in your _first_ attempt to
   connect with the Internet's default-free zone. Simple mistakes with
   eBGP can cause tremendous damage to other folks on the Internet. Trial
   and error is simply not OK. If it isn't worth it to you to buy a
   BGP-capable router then you also aren't prepared to make the
   investment in learning it takes to use BGP without causing harm.
  
   Regards,
   Bill Herrin
  
  
   --
   William Herrin  her...@dirtside.com  b...@herrin.us
   Owner, Dirtside Systems . Web: http://www.dirtside.com/
   Can I solve your unusual networking challenges?
  
 



Re: BGP Session

2014-07-19 Thread Jonathan Lassoff
On Sat, Jul 19, 2014 at 10:12 AM, Abuse Contact
stopabuseandrep...@gmail.com wrote:
 Yeah, we're using it for an anycasted node but like, I'm confused on certain
 parts like, just a really basic question.
 When doing things like

 conf t
 router bgp AS1337

 neighbor 208.54.128.0 remote-as AS13335
 neighbor 208.54.128.0 description BGP with Upstream
 neighbor 208.54.128.0 password lolpass

 address-family ipv4
 no synchronization
 neighbor 208.54.128.0 activate
 neighbor 208.54.128.0 soft-reconfiguration inbound

 I'm confused on when doing this, would I need to state like

 First go to AS13335 then go to TATA then go to my server or would it just
 automatically do that or would my provider do that? I'm confused on that.
 how would I state multiple peers.?

AS13335 is Cloudflare.
How does TATA relate? You have a dedicated server connected to TATA and
Cloudflare? I'm skeptical.

You really ought to do some more reading, learning, and practicing
before running public BGP.

I would recommend reading this book cover-to-cover:
http://www.bgpexpert.com/'BGP'-by-Iljitsch-van-Beijnum/
It's only ~250 small pages.
To practice and experiment, emulate some example configurations with
GNS3 and Dynamips, or some Linux VMs with Quagga or BIRD.




 On Sat, Jul 19, 2014 at 10:06 AM, Jonathan Lassoff j...@thejof.com wrote:

 An Anycasting node. For example, as part of a reliable DNS service.
 A /24 is usually the smallest prefix length that is portably accepted.

 Also, applications where connections need to appear to be coming from many
 source IPs.


 On Saturday, July 19, 2014, Suresh Ramasubramanian ops.li...@gmail.com
 wrote:

 A single linux box with a whole /24 on it? What sort of use case is that,
 BTW?
  On 19-Jul-2014 10:26 pm, Abuse Contact stopabuseandrep...@gmail.com
 wrote:

  I know, the DC is going to be giving me a BGP session on their router
  so I
  can set it up, I'm not using a Linux server as a router.
 
 
  On Sat, Jul 19, 2014 at 9:04 AM, William Herrin b...@herrin.us wrote:
 
   On Wed, Jul 16, 2014 at 4:05 AM, Abuse Contact
   stopabuseandrep...@gmail.com wrote:
So I just purchased a Dedicated server from this one company and I
  have a
/24 IPv4 block that I bought from a company on WebHostingTalk, but
I am
clueless on how to setup the /24 IPv4 block using the BGP Session.
I
  want
to set it up to run through their network as if it was one of their
  IPs,
etc. I keep seeing things like iBGP (which I think means like a
inner
routing BGP) and eBGP (what I'm talking about??) but I have no idea
how
   to
set those up or which one I would need.
  
   Howdy,
  
   Unless you have (1) a real router available, not just a server, and
   (2) an expert available to help you with your first BGP configuration
   I strongly recommend you simply ask your service provider to announce
   the /24 to the Internet on your behalf.
  
   Server-based BGP software like Quagga for Linux is reasonably good
   but
   it should absolutely not be involved in your _first_ attempt to
   connect with the Internet's default-free zone. Simple mistakes with
   eBGP can cause tremendous damage to other folks on the Internet.
   Trial
   and error is simply not OK. If it isn't worth it to you to buy a
   BGP-capable router then you also aren't prepared to make the
   investment in learning it takes to use BGP without causing harm.
  
   Regards,
   Bill Herrin
  
  
   --
   William Herrin  her...@dirtside.com  b...@herrin.us
   Owner, Dirtside Systems . Web: http://www.dirtside.com/
   Can I solve your unusual networking challenges?
  
 




Re: BGP Session

2014-07-16 Thread Jonathan Lassoff
Wow -- be careful playing with public eBGP sessions unless you know
what you're doing. It can affect the entire Internet.

Since you're just connecting to a single upstream ISP, you won't
qualify for a public AS number, so you'll have to work with your
upstream ISP to agree on a private AS number you can use.
You will be setting up an eBGP session (a session between two
different AS numbers, as opposed to iBGP, where the AS numbers are
the same).

As for running BGP on a dedicated server, it'll depend on the OS in
use. Assuming Linux, take a look at Quagga, BIRD, and ExaBGP.
http://www.nongnu.org/quagga/
http://bird.network.cz/
https://code.google.com/p/exabgp/
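As a rough illustration of the single-upstream case, a minimal BIRD 1.x sketch might look like the below. Every value here is a hypothetical placeholder (private AS numbers from RFC 6996, documentation addresses from RFC 5737); the real values come from your upstream ISP, and this is a sketch rather than a tested config.

```
# Hypothetical values throughout; agree on the real ones with your ISP.
router id 192.0.2.10;

protocol static originate {
    # The /24 to announce; traffic for it arrives on this box.
    route 198.51.100.0/24 reject;
}

protocol bgp upstream {
    local as 64512;                       # private AS assigned by the upstream
    neighbor 192.0.2.1 as 65000;          # upstream's router and AS
    export where net = 198.51.100.0/24;   # announce only your own /24
    import none;                          # use a static default route instead
}
```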


It may be a *lot* easier for you to just have your upstream ISP
announce your IP space and route it to your dedicated server, unless
you need the ability to turn the announcement off and on over time.

Cheers,
jof

On Wed, Jul 16, 2014 at 1:05 AM, Abuse Contact
stopabuseandrep...@gmail.com wrote:
 Hi,
 So I just purchased a Dedicated server from this one company and I have a
 /24 IPv4 block that I bought from a company on WebHostingTalk, but I am
 clueless on how to setup the /24 IPv4 block using the BGP Session. I want
 to set it up to run through their network as if it was one of their IPs,
 etc. I keep seeing things like iBGP (which I think means like a inner
 routing BGP) and eBGP (what I'm talking about??) but I have no idea how to
 set those up or which one I would need.

 Any help would be appreciated.


 Thanks!


Re: MACsec SFP

2014-06-24 Thread Jonathan Lassoff
On Tue, Jun 24, 2014 at 12:59 AM, Pieter Hulshoff phuls...@aimvalley.nl wrote:
 On 24-6-2014 8:37, Saku Ytti wrote:

 On (2014-06-23 11:13 +0200), Pieter Hulshoff wrote:

 feature and market information for such a device, and I would welcome
 some
 feedback from interested people. Discussion about other types of smart
 SFPs
 would also be welcome. Feel free to contact me directly using the contact
 information below.

 I'd do questionable things for a subrate SFP: an SFP I could put in a 1GE
 port and have 10M and 100M rates available, or in a 10GE port and get 1GE,
 100M, and 10M.

 Use case is network generation upgrade where you still have one or two
 100M
 ports for MGMT ports etc.


 I've seen this request from others as well. Do you have any
 proposal/preference to limit the data rate from the switch?

Seems like it would be just like emulating a media converter. Drop any
frames in excess of 100 Mbit/s? Perhaps buffer a little bit?

If the interface runs any routing protocols, configuration might need
adjusting for link costs.


Re: Odd syslog-ng problem

2014-05-11 Thread Jonathan Lassoff
Peter, it's a bit difficult to tell what's going on without seeing the
rest of the syslog-ng configuration and your script's source code.

However, a couple possibilities come to mind:
- Your script is only reading one line at a time. syslog-ng starts a
program() output persistently and expects that it can send multiple
messages into its pipe to your script's stdin.
- Messages are being buffered inside of syslog-ng. Check out the
flush_lines() and flush_timeout() options to syslog-ng's program()
output. Find the right page for your version, but here's v3.3:
http://www.balabit.com/sites/default/files/documents/syslog-ng-ose-3.3-guides/en/syslog-ng-ose-v3.3-guide-admin-en/html/reference_destination_program.html
- Messages are being buffered in your shell or script. Maybe try some
non-blocking IO with a smallish buffer, so you see data as it comes in
rather than waiting for a whole line or block to fill and flush.
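The first possibility, the persistent-reader shape that a program() destination expects, can be sketched like this (the split into a separately testable handle_line() is my own illustration, not from the thread):

```python
import sys

def handle_line(line):
    """Process one syslog message; stub that just tags it."""
    return "seen: " + line.rstrip("\n")

def main():
    # syslog-ng starts this program once at startup and keeps the pipe
    # open, so loop over stdin forever rather than reading a single line.
    for line in sys.stdin:
        print(handle_line(line), flush=True)  # flush: don't buffer our side either

# syslog-ng would invoke main() at startup; a one-shot script that reads
# a single line and exits is what causes the lost-messages symptom.
```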


To Anurag's question about open-source log management with a WebUI, I
agree with Blake: Logstash ingesting syslog and feeding it into
Elasticsearch makes for a great backend for Kibana.
The Logstash grok filter is great for pulling apart and indexing weird
vendor-specific logging formats:
http://logstash.net/docs/1.4.1/filters/grok

Cheers,
jof

On Sat, May 10, 2014 at 2:24 AM, Peter Persson web...@webbax.se wrote:
 Hey,

 I've got a weird problem with my syslog-ng setup. I'm logging from a lot of
 Cisco machines, and that works great.
 The problem is that when I pass this on to a shell program, some
 lines disappear.

 My destination looks like this
 destination hosts {
file(/var/log/ciscorouters/$HOST.log
owner(root) group(root) perm(0600) dir_perm(0700) create_dirs(yes));
program(/scripts/irc/syslog_wrapper_new.sh template(t_irctempl));
 };
 The /var/log/ciscorouters/$HOST.log file writes correctly, but the script at
 /scripts/irc/syslog_wrapper_new.sh only gets the first
 line if it gets flooded (like 5 rows per second).

 Does anyone have any idea what might be the problem?

 Regards,
 Peter


Re: Fwd: Serious bug in ubiquitous OpenSSL library: Heartbleed

2014-04-08 Thread Jonathan Lassoff
For testing, I've had good luck with
https://github.com/titanous/heartbleeder and
https://gist.github.com/takeshixx/10107280

Both are mostly platform-independent, so they should be able to work even
if you don't have a modern OpenSSL to test with.

Cheers and good luck (you're going to need it),
jof

On Tue, Apr 8, 2014 at 5:03 PM, Michael Thomas m...@mtcc.com wrote:

 Just as a data point, I checked the servers I run and it's a good thing I
 didn't reflexively update them first.
 On Centos 6.0, the default openssl is 1.0.0 which supposedly doesn't have
 the vulnerability, but the
 ones queued up for update do. I assume that redhat will get the patched
 version soon but be careful!

 Mike


 On 04/07/2014 10:06 PM, Paul Ferguson wrote:

 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA256

 I'm really surprised no one has mentioned this here yet...

 FYI,

 - - ferg



 Begin forwarded message:

  From: Rich Kulawiec r...@gsp.org Subject: Serious bug in
 ubiquitous OpenSSL library: Heartbleed Date: April 7, 2014 at
 9:27:40 PM EDT

 This reaches across many versions of Linux and BSD and, I'd
 presume, into some versions of operating systems based on them.
 OpenSSL is used in web servers, mail servers, VPNs, and many other
 places.

 Writeup: Heartbleed: Serious OpenSSL zero day vulnerability
 revealed
 http://www.zdnet.com/heartbleed-serious-openssl-zero-day-vulnerability-
 revealed-728166/

   Technical details: Heartbleed Bug http://heartbleed.com/

 OpenSSL versions affected (from link just above):  OpenSSL 1.0.1
 through 1.0.1f (inclusive) are vulnerable OpenSSL 1.0.1g is NOT
 vulnerable (released today, April 7, 2014) OpenSSL 1.0.0 branch is
 NOT vulnerable OpenSSL 0.9.8 branch is NOT vulnerable


 - -- Paul Ferguson
 VP Threat Intelligence, IID
 PGP Public Key ID: 0x54DC85B2
 -BEGIN PGP SIGNATURE-
 Version: GnuPG v2.0.22 (MingW32)
 Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/

 iF4EAREIAAYFAlNDg9gACgkQKJasdVTchbIrAAD9HzKaElH1Tk0oIomAOoSOvfJf
 3Dvt4QB54os4/yewQQ8A/0dhFZ/YuEdA81dkNfR9KIf1ZF72CyslSPxPvkDcTz5e
 =aAzE
 -END PGP SIGNATURE-






Re: Blocking of domain strings in iptables

2014-02-08 Thread Jonathan Lassoff
This is going to be tricky to do, as DNS packets don't necessarily contain
entire query names or FQDNs as contiguous strings, due to DNS name
compression (remember, the original DNS message format only has 512 bytes
to work with).

You can use those u32 module matches to find some known-bad packets if
they're sufficiently unique, but iptables simply lacks the logic needed
to fully parse DNS queries.
Here's an interesting example to visualize what's happening:
http://dnsamplificationattacks.blogspot.com/p/iptables-block-list.html

One quick thing that would work is matching a single label (e.g.
google, but not google.com), though this will end up blocking any frame
containing that substring (e.g. you want to block evil.com, but this also
blocks evil.example.com).

If you find yourself needing to parse and block DNS packets based on their
content in a more flexible way, I would look into either making an iptables
module that does the DNS parsing (
http://inai.de/documents/Netfilter_Modules.pdf), or using a userspace
library with NFQUEUE (e.g. https://pypi.python.org/pypi/NetfilterQueue)
or l7-filter (http://l7-filter.sourceforge.net/).
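For context on why matching the dotted string fails: on the wire a query name is a sequence of length-prefixed labels with no '.' bytes at all. A small sketch of the encoding (assuming plain ASCII names and no compression pointers):

```python
def qname_to_wire(name):
    """Encode a name like 'evil.example.com' as DNS wire-format labels:
    b'\x04evil\x07example\x03com\x00'. Each label is prefixed by its
    length, and the name ends with a zero-length root label."""
    out = b""
    for label in name.rstrip(".").split("."):
        out += bytes([len(label)]) + label.encode("ascii")
    return out + b"\x00"

# These length-prefixed bytes are what a --string or u32 match must
# target; note the 0x17 (23) prefix below lines up with the length of
# "dnsamplificationattacks" in the u32 rule quoted in this thread.
```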

Best of luck and happy hacking!

Cheers,
jof



On Sat, Feb 8, 2014 at 12:08 AM, Anurag Bhatia m...@anuragbhatia.com wrote:

 Hello everyone


 I am trying to figure out the way to drop a domain name DNS resolution
 before it hits application server. I do not want to do domain to IP mapping
 and block destination IP (and source IP blocking is also not an option).

 I can see that a string like this:

 iptables -A INPUT -p udp -m udp --dport 53 -m string --string domain
 --algo kmp --to 65535 -j DROP


 this can block a domain, including domain.com/domain.net and everything
 in that pattern. I tried using a hexadecimal string for the value, like
 domaincom (hex equivalent), and the firewall doesn't pick that up at all.

 The only other option which I found to be working nicely is u32 based
 string as something suggested on DNS amplification blog post here -

 http://dnsamplificationattacks.blogspot.in/2013/12/domain-dnsamplificationattackscc.html


 A string like this as suggested on above link works exactly for that domain

 iptables --insert INPUT -p udp --dport 53 -m u32 --u32
 0x280xFFDFDFDF=0x17444e53  0x2c0xDFDFDFDF=0x414d504c 
 0x300xDFDFDFDF=0x49464943  0x340xDFDFDFDF=0x4154494f 
 0x380xDFDFDFDF=0x4e415454  0x3c0xDFDFDFDF=0x41434b53 
 0x400xFFDFDFFF=0x02434300 -j DROP -m comment --comment DROP DNS Q
 dnsamplificationattacks.cc


 but here I am not sure how to create such strings and script them for
 automation.



 Can someone suggest a way to do this within iptables, or maybe some other
 open-source firewall?


 Thanks.

 --


 Anurag Bhatia
 anuragbhatia.com

 Linkedin http://in.linkedin.com/in/anuragbhatia21 |
 Twitterhttps://twitter.com/anurag_bhatia
 Skype: anuragbhatia.com

 PGP Key Fingerprint: 3115 677D 2E94 B696 651B 870C C06D D524 245E 58E2



Re: GEO location issue with google

2014-02-07 Thread Jonathan Lassoff
Here's the FAQ on this topic:
https://support.google.com/websearch/answer/873?hl=en

It links to a contact form where you can ask for some redress.

Cheers,
jof


On Fri, Feb 7, 2014 at 7:20 AM, Praveen Unnikrishnan p...@pmgroupuk.comwrote:

 Hi,

 We are an ISP based in UK. We have got an ip block from RIPE which is
 5.250.176.0/20. All the main search engines like Yahoo show we are based
 in the UK, but Google thinks we are from Saudi Arabia, and we are
 redirected to www.google.com.sa instead of google.co.uk. I
 have sent a lot of emails to Google but no luck. All the information from
 Google is in Arabic, and YouTube shows some weird videos as well.

 Could anyone please help me to sort this out?

 Would be much appreciated for your time.

 Praveen Unnikrishnan
 Network Engineer
 PMGC Technology Group Ltd
 T:  020 3542 6401
 M: 07827921390
 F:  087 1813 1467
 E: p...@pmgroupuk.com






Re: The state of TACACS+

2013-12-30 Thread Jonathan Lassoff
I don't understand why vendors and operators keep turning to TACACS. It
seems like they're often looking to Cisco as some paragon of best security
practices. It's a vulnerable protocol, but some times the only thing to
choose from.

One approach to securing devices that support only TACACS+ or RADIUS:
deploy a small embedded *nix machine (Soekris, Raspberry Pi, etc.) that
runs a RadSec (for RADIUS) or stunnel (for TACACS+) proxy. Attach it over a
short copper link with 802.1q tagging: take the weakly XOR-obfuscated
requests in on one tag, wrap them in TLS, and forward them out another tag
toward your central AAA box.

Kerberos, or wider certificate-based SSH support on routers, would be super.
SSH with certificates is nice in that it allows authenticators out in the
field to verify clients offline, without needing a central AAA server.
However, the tradeoff is that you must then keep all the clocks correct
and in sync, and verify the root certificates.




On Mon, Dec 30, 2013 at 2:06 AM, Robert Drake rdr...@direcpath.com wrote:

 Ever since first using it I've always liked tacacs+.  Having said that
 I've grown to dislike some things about it recently.  I guess, there have
 always been problems but I've been willing to leave them alone.

 I don't have time to give the code a real deep inspection, so I'm
 interested in others thoughts about it.  I suspect people have just left it
 alone because it works.  Also I apologize if this is too verbose or
 technical, or not technical enough, or just hard to read.

 History:

 TACACS+ was proposed as a standard to the IETF.  They never adopted it and
 let the standards draft expire in 1998.  Since then there have been no
 official changes to the code.  Much has happened between now and then.  I
 specifically was interested in parsing tac_plus logs correctly.  After
 finding idiosyncrasies I decided to look at the source and the RFC to see
 what was really happening.

 Logging, or why I got into this mess:

 In the accounting log, fields are sometimes logged in different order.  It
 appears the client is logging whatever it receives without parsing it or
 modifying it.  That means the remote system is sending them in different
 orders, so technically the fault lies with them.  However, it seems too
 trusting to take in data and log it without looking at it.  This can also
 cause issues when you send a command like (Cisco) dir /all nvram: on a
 box with many files. The device expands the command to include everything
 on the nvram (important because you might want to deny access to that
 command based on something it expanded), but it gets truncated somewhere
 (not sure if it's the device buffer that is  full, tac_plus, or the logging
 part.  I might tcpdump for a while to see if I can figure out what it looks
 like on the wire) I'm not sure if there are security implications there.

 Encryption:

 The existing security consists of md5 XOR content with the md5 being
 composed of a running series of 16 byte hashes, taking the previous hash as
 part of the seed of the next hash.  A sequence number is used so simple
 replay shouldn't be a factor.  Depending on how vulnerable iterative md5 is
 to it, and how much time you had to sniff the traffic, I would think this
 would be highly vulnerable to chosen plaintext if you already have a
 user-level login, or at least partial known plaintext (with the assumption
 they make backups, you can guess that at least some of the packets will
 have show running-config and other common commands).  They also don't pad
 the encrypted string so you can guess the command (or password) based on
 the length of the encrypted data.

 For a better description of the encryption you can read the draft:
 http://tools.ietf.org/html/draft-grant-tacacs-02
 I found an article from May, 2000 which shows that the encryption scheme
 chosen was insufficient even then.
 http://www.openwall.com/articles/TACACS+-Protocol-Security
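The running-hash keystream described above can be sketched as follows. This is my own reading of draft-grant-tacacs-02, offered as an illustration rather than a verified interoperable implementation, though the XOR construction demonstrably round-trips:

```python
import hashlib

def tacacs_pad(session_id, key, version, seq_no, length):
    """Generate the MD5 keystream per draft-grant-tacacs-02: each
    16-byte chunk hashes the previous chunk back into the seed."""
    seed = session_id + key + bytes([version, seq_no])
    pad = b""
    chunk = b""
    while len(pad) < length:
        chunk = hashlib.md5(seed + chunk).digest()
        pad += chunk
    return pad[:length]

def obfuscate(body, session_id, key, version, seq_no):
    """XOR the packet body with the pad; applying it twice round-trips.
    Note the pad length exactly matches the body, so ciphertext length
    leaks plaintext length, as the parent message points out."""
    pad = tacacs_pad(session_id, key, version, seq_no, len(body))
    return bytes(b ^ p for b, p in zip(body, pad))
```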

 For new crypto I would advise multiple cipher support with negotiation so
 you know what each client and server is capable of. If the client and
 server supported multiple keys (with a keyid) it would be  easier to roll
 keys frequently, or if it isn't too much overhead they could use public key.


 Clients:

 As for clients, Wikipedia lists several that seem to be based on the
 original open-source tac_plus from Cisco.  shrubbery.net has the
 official version that debian and freebsd use.  I looked at some of the
 others and they all seemed to derive from Cisco's code directly or
 shrubbery.net code, but they retained the name and started doing their
 own versioning.  All the webpages look like they're from 1995.  In some
 cases I think it's intentional but in some ways it shows a lack of care for
 the code, like it's been dropped since 2000.

 Documentation is old:

 This only applies to shrubbery.net's version.  I didn't look at the other
 ones that closely.  While all of it appears valid, one QA in the FAQ was
 about IOS 10.3/11.0.   Performance 

Re: WaPo writes about vulnerabilities in Supermicro IPMIs

2013-08-15 Thread Jonathan Lassoff
The primary point of IPMI for most users is to be able to administer and
control the box when it's not running.
Using the host itself as a firewall is the quickest way to get that BMC
online, but it kinda defeats the purpose.


On Thu, Aug 15, 2013 at 7:46 PM, Jay Ashworth j...@baylink.com wrote:

 - Original Message -
  From: Brandon Martin lists.na...@monmotha.net

  As to why people wouldn't put them behind dedicated firewalls, imagine
  something like a single-server colo scenario. Most such providers don't
  offer any form of lights-out management aside from maybe remote reboot
  (power-cycle) nor do they offer any form of protected/secondary network
  to their customers. So, if you want to save yourself from a trip, you
  chuck the thing raw on a public IP and hope you configured it right.

 Well, *I* would firewall eth1 from eth0 and cross-over eth1 to the ILO
 jack;
 let the box be the firewall.  Sure, it's still as breakable as the box
 proper, but security-by-obscurity isn't *bad*, it's just *not good enough*.

 It's another layer of tape.

 Whether it's teflon or Gorilla is up to you.

 Cheers,
 -- jra
 --
 Jay R. Ashworth  Baylink
 j...@baylink.com
 Designer The Things I Think   RFC
 2100
 Ashworth  Associates http://baylink.pitas.com 2000 Land
 Rover DII
 St Petersburg FL USA   #natog  +1 727 647
 1274




Re: Blocking TCP flows?

2013-06-13 Thread Jonathan Lassoff
Are you trying to block flows from becoming established, knowing what
you're looking for ahead of time? Or are you looking to examine a
stream of flow establishments and snipe off some flows once you've
determined that they should be blocked?

If you know a 5-tuple (src/dst IP, IP protocol, src/dst L4 ports) you
want to block ahead of time, just place an ACL. It depends on the
platform, but those that implement them in hardware can filter a lot
of traffic very quickly.
However, they're not a great tool when you want to dynamically
reconfigure the rules.

For high-touch inspection, I'd recommend a stripe of Linux boxes, with
traffic being ECMP-balanced across all of them, sitting in-line on the
traffic path. It adds a tiny bit of latency, but can scale up to
process large traffic paths and apply complex inspections on the
traffic.

Cheers,
jof

On Thu, Jun 13, 2013 at 12:32 PM, Eric Wustrow ew...@umich.edu wrote:
 Hi all,

 I'm looking for a way to block individual TCP flows (5-tuple) on a 1-10 gbps
 link, with new blocked flows being dropped within a millisecond or so of
 being
 added. I've been looking into using OpenFlow on an HP Procurve, but I don't
 know much in this area, so I'm looking for better alternatives.

 Ideally, such a device would add minimal latency (many/expandable CAM
 entries?), can handle many programatically added flows (hundreds per
 second),
 and would be deployable in a production network (fails in bypass mode). Are
 there any
 COTS devices I should be looking at? Or is the market for this all under
 the table to
 pro-censorship governments?

 Thanks,

 -Eric



Re: Blocking TCP flows?

2013-06-13 Thread Jonathan Lassoff
On Thu, Jun 13, 2013 at 3:38 PM, Phil Fagan philfa...@gmail.com wrote:
 I would assume something FreeBSD based might be best

Meh... personal choice. I prefer Linux, mostly because I know it best
and most network application development is taking place there.

 On Thu, Jun 13, 2013 at 4:37 PM, Phil Fagan philfa...@gmail.com wrote:

 I really like the idea of a stripe of linux boxes doing the heavy lifting.
 Any suggestions on platforms, card types, and chip types that might be
 better purposed at processing this type of data?

Personally, I'd use modern-ish Intel Ethernet NICs. They seem to have
the best support in the kernel.

 I assume you could write some fast Perl to ingest and manage the tables?
 What would the package of choice be for something like this?

Heh...  fast Perl.
As for programming the processing, I would do as much as possible in
the kernel, as passing packets off to userland really slows everything
down.
If you really need to, I'd do something with Go and/or C these days.

Using iptables and the string module to match patterns, you can chew
through packets pretty efficiently. This comes with the caveat that
this can only match against strings contained within a single packet;
this doesn't do L4 stream reconstruction.

You can do some incredibly parallel stuff with ntop's PF_RING code if
you need to push more traffic than a single core can chew through.

It all depends on what you're trying to do.

--j


 On Thu, Jun 13, 2013 at 3:11 PM, Jonathan Lassoff j...@thejof.com wrote:

 Are you trying to block flows from becoming established, knowing what
 you're looking for ahead of time, or are you looking to examine a
 stream of flow establishments, and will snipe off some flows once
 you've determined that they should be blocked?

 If you know a 5-tuple (src/dst IP, IP protocol, src/dst L4 ports) you
 want to block ahead of time, just place an ACL. It depends on the
 platform, but those that implement them in hardware can filter a lot
 of traffic very quickly.
 However, they're not a great tool when you want to dynamically
 reconfigure the rules.

 For high-touch inspection, I'd recommend a stripe of Linux boxes, with
 traffic being ECMP-balanced across all of them, sitting in-line on the
 traffic path. It adds a tiny bit of latency, but can scale up to
 process large traffic paths and apply complex inspections on the
 traffic.

 Cheers,
 jof

 On Thu, Jun 13, 2013 at 12:32 PM, Eric Wustrow ew...@umich.edu wrote:
  Hi all,
 
  I'm looking for a way to block individual TCP flows (5-tuple) on a 1-10
  gbps
  link, with new blocked flows being dropped within a millisecond or so
  of
  being
  added. I've been looking into using OpenFlow on an HP Procurve, but I
  don't
  know much in this area, so I'm looking for better alternatives.
 
  Ideally, such a device would add minimal latency (many/expandable CAM
  entries?), can handle many programatically added flows (hundreds per
  second),
  and would be deployable in a production network (fails in bypass mode).
  Are
  there any
  COTS devices I should be looking at? Or is the market for this all
  under
  the table to
  pro-censorship governments?
 
  Thanks,
 
  -Eric




 --
 Phil Fagan
 Denver, CO
 970-480-7618







Re: Prism continued

2013-06-12 Thread Jonathan Lassoff
Logstash and Splunk are both wonderful, in my experience.

What sets them apart from just a plain grep(1) is that they build an
index that points keywords to logging events (lines).

What if you're looking for events related to a specific interface or LSP?
Not a problem with a modest log volume, as grep can tear through text
nearly as quickly as your disk can pass it up.
However, once you have a ton of historical logs, or just a large
volume, grep becomes way too slow, as you have to retrieve tons of
unrelated log messages to check whether they're what you're looking for.

Having an index gives you a way to search for that interface or LSP
name, and get a listing of all the locations that contain log events
matching what you're looking for.
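As a toy illustration of that keyword-to-event mapping (not how Splunk or logstash actually store anything), an inverted index over log lines is just a dict from token to the set of line numbers containing it:

```python
import re
from collections import defaultdict

def build_index(lines):
    # Map each token to the set of line numbers it appears on.
    index = defaultdict(set)
    for lineno, line in enumerate(lines):
        for token in re.findall(r"[\w/.:-]+", line.lower()):
            index[token].add(lineno)
    return index

logs = [
    "SNMP_TRAP_LINK_DOWN ifIndex 531 ifName xe-0/1/3",
    "RPD_MPLS_LSP_DOWN lsp to-sfo1 down",
    "SNMP_TRAP_LINK_UP ifIndex 531 ifName xe-0/1/3",
]
index = build_index(logs)

# Instead of grepping everything, jump straight to matching events:
for lineno in sorted(index["xe-0/1/3"]):
    print(logs[lineno])
```

Searching for an interface or LSP name is then a dictionary lookup, independent of total log volume, which is the whole win over a linear grep.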


In the PRISM context, I highly doubt they're using Splunk for any kind
of analysis beyond systems and network management. It's not good at
indexing non-texty things.
What if you need to search for events that were geographically
proximate to one another? That takes a special kind of index.

On Wed, Jun 12, 2013 at 6:13 PM, Chip Marshall c...@2bithacker.net wrote:
 On 2013-06-12, Phil Fagan philfa...@gmail.com sent:
 Speaking of Splunk; is that really the tool of choice?

 I've been hearing a lot of good things about logstash these days
 too, if you prefer the open source route.

 http://logstash.net/

 --
 Chip Marshall c...@2bithacker.net
 http://2bithacker.net/



Re: PRISM: NSA/FBI Internet data mining project

2013-06-06 Thread Jonathan Lassoff
Agreed. I can already pretty much just assume this widespread
surveillance is going on.
The Bluffdale, Utah facility isn't being built to store nothing.
It's happening whether we like it or not.

When I care about my privacy, I know that I have to take matters into
my own hands.
GnuPG and TLS are your friends and mine. Use them together. Use them in peace.

Cheers,
jof (0x8F8CAD3D)

On Thu, Jun 6, 2013 at 5:07 PM, Alex Rubenstein a...@corp.nac.net wrote:
  Has fingers directly in servers of top Internet content companies,
  dates to 2007.  Happily, none of the companies listed are transport
  networks:

 I've always just assumed that if it's in electronic form, someone else is 
 either
 reading it now, has already read it, or will read it as soon as I walk away 
 from
 the screen.


 So, you are comfortable just giving up your right to privacy? It's just the 
 way it is?

 I'm sorry, I am not as accepting of that fact as you are. I am disappointed 
 and disgusted that this is, and has been, going on. Our government is failing 
 us.








Re: Cat-5 cables near 200 Paul, SF

2013-05-31 Thread Jonathan Lassoff
I could suggest a few places. Might want to call ahead to make sure
they'll have what you need:
- Central Computer. Has locations in San Francisco and San Mateo. SF
may be closer, but will take longer with traffic and parking.
-- http://www.centralcomputers.com/commerce/misc/sanfrancisco.jsp
-- http://www.centralcomputers.com/commerce/misc/sanmateo.jsp
- Fry's. Much further. Closest shop would be Palo Alto
-- 
http://www.frys.com/template/isp/index/Frys/isp/Middle_Topics/H1%20Store%20Maps/palo%20alto/
- Jameco. Has some limited Ethernet cable selection. Has a will-call
pick up at their warehouse in Belmont.
-- http://www.jameco.com/

Cheers,
jof



Re: Headscratcher of the week

2013-05-31 Thread Jonathan Lassoff
Those are some truly perplexing graphs. Quite strange that it appears
linear, as if something is slightly changing over time or
growing/shrinking at a constant-ish rate.

Do you have throughput or PPS graphs for the intermediate links as
well? Any similar correlations in the derivative slope?

My only hunch would be some intermediate buffer being increasingly
full over time, as some other application riding the path linearly
grows in packets/second or bits/second.
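One quick way to quantify a creep like that is to fit a line to (time, RTT) samples and look at the slope. A pure-Python least-squares sketch; the sample numbers below are made up for illustration, not from Mike's plots:

```python
def linear_fit(xs, ys):
    # Ordinary least squares for y = slope * x + intercept.
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Hypothetical hourly RTT samples (ms), creeping up ~0.5 ms/hour:
hours = list(range(10))
rtt_ms = [20.0 + 0.5 * h for h in hours]
slope, intercept = linear_fit(hours, rtt_ms)
print(f"latency growing ~{slope:.2f} ms/hour from a base of {intercept:.1f} ms")
```

Comparing that slope across the intermediate hops would show which link the growth correlates with.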

Cheers,
jof

On Fri, May 31, 2013 at 3:25 PM, Mike mike-na...@tiedyenetworks.com wrote:
 Gang,

 In the interest of sharing 'the weird stuff' which makes the job of
 being an operator ... uh, fun? is that the right word?..., I would like to
 present the following two smokeping latency/packetloss plots, which are by
 far the weirdest I have ever seen.

 These plots are from our smokeping host out to a customer location.
 The customer is connected via DSL and they run PPPoE over it to connect with
 our access concentrator. There are about 5 physical infrastructure hops
 between the host and customer: the switch, the BRAS, the switch again, and
 then directly to the DSLAM, and then the customer on the end.


 The 10 day plot:
 http://picpaste.com/10_Day_graph-YV3IdvRV.png

 The 30 hour plot:
 http://picpaste.com/30_hour_graph-DrwzfhYJ.png


 How can you possibly have consistent increase in latency like that?
 I'd love to hear theories (or offers of beer, your choice!).

 Happy friday all!


 Mike-




Re: need help about free bandwidth graph program

2013-04-08 Thread Jonathan Lassoff
I'm not sure of your specific application, but it sounds to me like
netflow/sflow exports would be the most scalable way to do this.

For small applications, ntop or bandwidthd can do this.
http://www.ntop.org/products/ntop/
http://bandwidthd.sourceforge.net/
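The core of what any of these tools does with flow exports is just per-address byte accounting. A minimal sketch, with made-up (src, dst, bytes) records standing in for parsed netflow/sflow data:

```python
from collections import Counter

# Hypothetical flow records: (source IP, destination IP, byte count).
flows = [
    ("192.0.2.10", "198.51.100.1", 1_500_000),
    ("192.0.2.11", "198.51.100.1", 300_000),
    ("192.0.2.10", "203.0.113.9", 700_000),
]

bytes_by_src = Counter()
for src, dst, nbytes in flows:
    bytes_by_src[src] += nbytes

# Top talkers, ready to feed into a graphing frontend:
for ip, total in bytes_by_src.most_common():
    print(f"{ip}\t{total / 1e6:.1f} MB")
```

Bucket those totals by time interval and you have the data series a per-IP bandwidth graph is drawn from.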

Cheers,
jof

On Mon, Apr 8, 2013 at 12:51 PM, Deric Kwok deric.kwok2...@gmail.com wrote:
 Hi all

 Do you know of any open-source program to graph bandwidth by IP address?

 Thank you



Re: BGP RIB Collection

2013-02-26 Thread Jonathan Lassoff
Personally, I would just use BGP on a PC to collect this information.

Place some import/input policy on your eBGP sessions on your edge
routers to add communities to the routes such that you can recognize
which peers gave you the route.
Then, use an iBGP session to a BIRD or Quagga instance from which you
can dump the routes and filter based on the communities.
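As a hedged sketch of the collector side in BIRD 1.x syntax -- the ASN, community value, and neighbor address here are all invented for illustration, assuming the edge routers tag routes from "peer A" with 65000:100 on import:

```
# Select only routes carrying the community the edge tagged on import.
filter routes_from_peer_a {
    if (65000,100) ~ bgp_community then accept;
    reject;
}

protocol bgp collector_ibgp {
    local as 65000;
    neighbor 10.0.0.1 as 65000;    # iBGP session to an edge router
    import filter routes_from_peer_a;
    export none;                   # collector only listens
}
```

You can then dump the resulting table with `birdc show route` and get the per-provider view without screen scraping or SNMP walks.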

Cheers,
jof

On Tue, Feb 26, 2013 at 6:24 PM, chip chip.g...@gmail.com wrote:
 Hello all,

   I have an application that needs to gather BGP RIB data from the routers
 that connect to all of our upstream providers.  Basically I need to know
 all the routes available from a particular provider.  Currently I'm
 gathering this data via SNMP.  While this works it has its draw backs, it
 takes approximately 20 minutes per view, its nowhere near real-time, and
 I'm unable to gather information for IPv6.  SNMP, however, is faster than
 screen scraping.  All of the XML based access methods seem to take about
 the same time as well.

   I've been watching, with keen interest, the i2rs ietf workings, but the
 project is still in its infancy.  BMP seems to be a good solution but I've
 not found a working client implementation yet.  I see that you can actually
 configure this on some Juniper gear but I can't seem to locate a client to
 ingest the data the router produces.  The BGP Add Paths implementation
 seems to be the best choice at the moment and exabgp has a working
 implementation.

 Are there any other technologies or methods of accessing this data that
 I've missed or that you've found useful?

 Thanks!

 --chip

 --
 Just my $.02, your mileage may vary,  batteries not included, etc



Re: Micro Trenching for Fiber Optic Deployment

2013-02-11 Thread Jonathan Lassoff
I would think that in such a deployment scenario, microtrenching might
not be the best bet.
Part of the appeal (IMO) of microtrenching in existing pavement is
that once filled, the pavement slab provides for some protection and
rigidity.
If making a small trench into packed dirt, you're much more
susceptible to accidental cuts and erosion.

I would suggest that a Ditch Witch or similar trencher and laying some
ABS/PVC conduit could give some protection from small nicks and cuts,
and allow for future strands to be pulled along your path.

Cheers,
jof

On Mon, Feb 11, 2013 at 10:34 AM, david peahi davidpe...@gmail.com wrote:
 Does anyone have experience in running fiber optic cable with
 micro-trenching techniques in areas where there is no existing asphalt or
 concrete roadway, just packed earth and rock? Environmental limitations do
 not allow for constructing an aerial power pole alignment, or underground
 ductbank. The distance is about 10 kM.

 David



Re: L3 East cost maint / fiber 05FEB2012 maintenance

2013-02-05 Thread Jonathan Lassoff
My hunch is that this is fallout and repairs from Juniper PR839412.
Only fix is an upgrade. Not sure why they're not able to do a hitless
upgrade though; that's unfortunate.

Specially-crafted TCP packets that can get past RE/loopback filters
can crash the box.

--j

On Tue, Feb 5, 2013 at 7:39 AM, Josh Reynolds ess...@gmail.com wrote:
 I know a lot of you are out of the office right now, but does anybody have
 any info on what happened with L3 this morning? They went into a 5 hour
 maintenance window with expected downtime of about 30 minutes while they
 upgraded something like *40* of their core routers (their words), but
 also did this during some fiber work and completely cut off several of
 their east coast peers for the entirety of the 5 hour window.

 If anybody has any more info on this, on a NOC contact for them on the East
 Coast for future issues, you can hit me off off-list if you don't feel
 comfortable replying with that info here.

 Thanks, and I hope hope you guys are enjoying Orlando.

 --
 *Josh Reynolds*
 ess...@gmail.com - (270) 302-3552



Re: L3 East cost maint / fiber 05FEB2012 maintenance

2013-02-05 Thread Jonathan Lassoff
On Tue, Feb 5, 2013 at 9:33 AM, Jason Biel ja...@biel-tech.com wrote:
 Workaround is proper filtering and other techniques on the RE/Loopback to
 prevent the issue from happening.

Agreed. However, if it only takes one packet, what if an attacker
sources the traffic from your management address space?

Guarding against this requires either a separate VRF/table for
management and transit traffic, RPF checking, or TTL security.
If these weren't setup ahead of time, maybe it would be easier to
upgrade than lab, test, and deploy a new configuration.
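For reference, the baseline RE/loopback filtering being discussed might look like the following Junos sketch -- this is not Level3's actual config, and the filter, term, and prefix-list names are invented. Note this is exactly the layer that spoofed management-space sources can slip past on its own:

```
firewall {
    family inet {
        filter protect-re {
            term bgp-peers {
                from {
                    source-prefix-list bgp-neighbors;   /* known peers only */
                    protocol tcp;
                    port bgp;
                }
                then accept;
            }
            term bgp-other {
                from {
                    protocol tcp;
                    port bgp;
                }
                then discard;    /* all other TCP/179 toward the RE */
            }
            term everything-else {
                then accept;
            }
        }
    }
}
```

Applied as an input filter on lo0.0, this screens traffic punted to the routing engine; the VRF separation or TTL security mentioned above is what closes the spoofed-source gap.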

This is all speculation about Level3 on my part; I don't know their
network from an internal perspective.

--j

 Should an upgrade be performed? Yes, but certainly doesn't have to have
 right away or without notice to customers.

 On Tue, Feb 5, 2013 at 11:23 AM, Jonathan Lassoff j...@thejof.com wrote:

 My hunch is that this is fallout and repairs from Juniper PR839412.
 Only fix is an upgrade. Not sure why they're not able to do a hitless
 upgrade though; that's unfortunate.

 Specially-crafted TCP packets that can get past RE/loopback filters
 can crash the box.

 --j

 On Tue, Feb 5, 2013 at 7:39 AM, Josh Reynolds ess...@gmail.com wrote:
  I know a lot of you are out of the office right now, but does anybody
 have
  any info on what happened with L3 this morning? They went into a 5 hour
  maintenance window with expected downtime of about 30 minutes while they
  upgraded something like *40* of their core routers (their words), but
  also did this during some fiber work and completely cut off several of
  their east coast peers for the entirety of the 5 hour window.
 
  If anybody has any more info on this, on a NOC contact for them on the
 East
  Coast for future issues, you can hit me off off-list if you don't feel
  comfortable replying with that info here.
 
  Thanks, and I hope hope you guys are enjoying Orlando.
 
  --
  *Josh Reynolds*
  ess...@gmail.com - (270) 302-3552




 --
 Jason



Re: ATT Uverse/DSL Network Engineer DNS question

2013-02-05 Thread Jonathan Lassoff
This appears to be an anycasted service, as I reach different destinations
based on my source address.

Hopefully each deployment has unique origin IPs for their recursive queries.

I would recommend against looking at RIR registration data to determine IP
location. There's often little to no correlation, there.

--j

On Tue, Feb 5, 2013 at 1:01 PM, Tim Haak thaiti...@hotmail.com wrote:










 Hi,




 Can a ATT Uverse/DSL Network Engineer answer a question about the DNS
 server IPs that are handed out to customers please? I am currently testing
 from
 a Florida IP. Can you please let me know if all Uverse and DSL customers
 across the United States only use these 2 IPs as their primary and
 secondary
 DNS servers?



 68.94.156.1

 68.94.157.1



 We
 provide services based on IP GEO-location. Since the 2 recursive resolvers
 below are registered in Texas every DNS query for any of our records return
 results that are intended for IPs in that region. In other words, users on
 the
 east coast would actually resolve to a central part of the US or west
 coast IP.



 Thanks
 in advance,Tim






Re: ATT Uverse/DSL Network Engineer DNS question

2013-02-05 Thread Jonathan Lassoff
On Tue, Feb 5, 2013 at 1:10 PM, Jonathan Lassoff j...@thejof.com wrote:

 These appear to be an anycasted service, as I reach different destinations
 based on my source address.

 Hopefully each deployment has unique origin IPs for their recursive
 queries.


Just confirmed this. As these resolvers query your servers, they'll
have different source IPs, depending on the regional resolver.

Return differentiated DNS responses, based on that.
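A minimal sketch of that resolver-source-to-region mapping, using the stdlib ipaddress module. The prefixes and answer IPs below are documentation ranges invented for illustration -- not AT&T's real resolver address space:

```python
import ipaddress

# Hypothetical: which recursive-resolver prefixes map to which region.
REGION_PREFIXES = [
    (ipaddress.ip_network("198.51.100.0/24"), "us-east"),
    (ipaddress.ip_network("203.0.113.0/24"), "us-west"),
]

ANSWERS = {
    "us-east": "192.0.2.10",
    "us-west": "192.0.2.20",
}
DEFAULT_ANSWER = "192.0.2.1"

def answer_for(resolver_src: str) -> str:
    # Pick the A record to return based on the query's source address.
    src = ipaddress.ip_address(resolver_src)
    for net, region in REGION_PREFIXES:
        if src in net:
            return ANSWERS[region]
    return DEFAULT_ANSWER

print(answer_for("198.51.100.53"))  # 192.0.2.10 (routed to the east-coast site)
```

An authoritative server doing geo-targeting runs essentially this lookup per query, keyed on the resolver's source address rather than the end user's.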

--j


 I would recommend against looking at RIR registration data to determine IP
 location. There's often little to no correlation, there.

 --j


 On Tue, Feb 5, 2013 at 1:01 PM, Tim Haak thaiti...@hotmail.com wrote:










 Hi,




 Can a ATT Uverse/DSL Network Engineer answer a question about the DNS
 server IPs that are handed out to customers please? I am currently
 testing from
 a Florida IP. Can you please let me know if all Uverse and DSL customers
 across the United States only use these 2 IPs as their primary and
 secondary
 DNS servers?



 68.94.156.1

 68.94.157.1



 We
 provide services based on IP GEO-location. Since the 2 recursive resolvers
 below are registered in Texas every DNS query for any of our records
 return
 results that are intended for IPs in that region. In other words, users
 on the
 east coast would actually resolve to a central part of the US or west
 coast IP.



 Thanks
 in advance,Tim









Re: Whats so difficult about ISSU

2012-11-08 Thread Jonathan Lassoff
On Thu, Nov 8, 2012 at 8:13 PM, Mikael Abrahamsson swm...@swm.pp.se wrote:
 On Thu, 8 Nov 2012, Phil wrote:

 The major vendors have figured it out for the most part by moving to
 stateful synchronization between control plane modules and implementing
 non-stop routing.


 NSR isn't ISSU.

 ISSU contains the wording in service. 6 seconds of outage isn't in
 service. 0.5 seconds of outage isn't in service. I could accept a few
 microseconds of outage as being ISSU, but tenths of seconds isn't in
 service.


 The main remaining hurdle is updating microcode on linecards, they still
 need to be rebooted after an upgrade.


 ... and as long as this is the case, there is no ISSU. There is only
 shorter outages during upgrade compared to a complete reboot.

This.
There are some wonderfully reconfigurable router hardwares out in the
world, and platforms that can dynamically program their forwarding
hardware make this seem possible.

It's possible to build things such that portions of a single box can
be upgraded at a time. With multiple links or forwarding paths out to
a remote destination, it seems to me that the upgrade process could
just coordinate things and update each piece of forwarding hardware,
letting traffic cut over and waiting for it to come back before
moving on.

I could envision a Juniper M/TX box, where MPLS FRR or an ae
interface across FPCs could take backup traffic while a PFE is
upgraded.
Of course, every possible path would need to be able to survive an FPC
being down, and the process would have to have hooks into protocols to
know when everything is switched back.



Re: Detection of Rogue Access Points

2012-10-14 Thread Jonathan Lassoff
On Sun, Oct 14, 2012 at 1:59 PM, Jonathan Rogers quantumf...@gmail.com wrote:
 Gentlemen,

 An issue has come up in my organization recently with rogue access points.
 So far it has manifested itself two ways:

 1. A WAP that was set up specifically to be transparent and provided
 unprotected wireless access to our network.

This is actually a really tough problem to solve without either total
dictatorial control of your switchports or lots of telemetry and
monitoring.

At $DAYJOB, we detect the transparent bridge case by having a subset
of AP hardware set up as monitors that listen to 802.11 frames on the
various channels, keeping a log of the client MAC addresses and the
BSSID that they're associated with.
Then, by selecting out only those client MAC addresses that are not
associated to a known BSSID that we control, we compare that set of
unknown client MAC addresses to the Ethernet L2 FIBs on our switches
and look for matches.

If we see entries, then there is some 802.11 device bridging clients
onto our network and we hunt it down from there.
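The detection logic above boils down to two set operations. A sketch with made-up MAC addresses, not real telemetry:

```python
# BSSIDs of APs we control:
known_bssids = {"aa:bb:cc:00:00:01", "aa:bb:cc:00:00:02"}

# (client MAC, associated BSSID) pairs observed by the monitor APs:
air_observations = [
    ("de:ad:be:ef:00:01", "aa:bb:cc:00:00:01"),   # on our AP, fine
    ("de:ad:be:ef:00:02", "ba:dd:1e:00:00:99"),   # unknown BSSID
    ("de:ad:be:ef:00:03", "ba:dd:1e:00:00:99"),   # unknown BSSID
]

# MAC addresses learned in the switches' L2 forwarding tables:
switch_fib_macs = {"de:ad:be:ef:00:02", "c0:ff:ee:00:00:10"}

# Clients associated to BSSIDs we don't control:
unknown_clients = {mac for mac, bssid in air_observations
                   if bssid not in known_bssids}

# Any of those also showing up on the wire => bridged via a rogue AP.
bridged = unknown_clients & switch_fib_macs
print(bridged)
```

Each MAC in `bridged` is a wireless client of a foreign BSSID whose frames are also appearing on your switched network, which is the signature of a transparent rogue bridge.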


I've yet to see a solid methodology for detecting NATing devices,
short of requiring 802.1x authentication using expiring keys and
one-time passwords. :p

Cheers,
jof



Re: best way to create entropy?

2012-10-11 Thread Jonathan Lassoff
On Thu, Oct 11, 2012 at 5:01 PM, shawn wilson ag4ve...@gmail.com wrote:
 in the past, i've done many different things to create entropy -
 encode videos, watch youtube, tcpdump -vvv  /dev/null, compiled a
 kernel. but, what is best? just whatever gets your cpu to peak or are
 some tasks better than others?

Personally, I've used and recommend this USB stick: http://www.entropykey.co.uk/

Internally, it uses diodes that are reverse-biased just ever so close
to the breakdown voltage such that they randomly flip state back and
forth.

Cheers,
jof



Re: best way to create entropy?

2012-10-11 Thread Jonathan Lassoff
On Thu, Oct 11, 2012 at 5:20 PM, Jimmy Hess mysi...@gmail.com wrote:
 On 10/11/12, shawn wilson ag4ve...@gmail.com wrote:
 in the past, i've done many different things to create entropy -
 encode videos, watch youtube, tcpdump -vvv  /dev/null, compiled a
 kernel. but, what is best? just whatever gets your cpu to peak or are

 You are referring to  the entropy pool used for  /dev/random  and
 crypto operations ?


 You could  setup a  video capture card  or radio tuner card,  tune it into
 a good noise source,  and arrange for   the bit stream to get  written
  to  /dev/random

Yes, but then you're also introducing a way for an external attacker
to transmit data that can be mixed into your entropy pool.

While certainly a cool hack, I don't think anything like this would be
safe for cryptographic use.

/two cents

Cheers,
jof



Re: dot1q encapsulation overhead?

2012-09-06 Thread Jonathan Lassoff
On Thu, Sep 6, 2012 at 7:55 AM,  u...@3.am wrote:
 A while back we had a customer colocated vpn router (2911) come in and we put 
 it
 on our main vlan for initial set up and testing.  Once that was done, I 
 created a
 separate VLAN for them and a dot1q subinterface on an older, somewhat 
 overloaded
 2811.  I set up the IPSec Tunnel, a /30 for each end to have an IP and all the
 static routes needed to make this work and it did.

 However, a few days later they were complaining of slow speeds...I don't 
 recall,
 but maybe something like 5mbs when they needed 20 or so.  We had no policing 
 on
 that port.  After a lot of testing, we tried putting them back on the main, 
 native
 vlan and it worked fine...they got the throughput they needed.

 So my question is: could the dot1q encapsulation be causing throughput issues 
 on a
 2811 that's already doing a lot?  I regret that I don't recall what sh proc 
 cpu
 output was, or if I even ran it at all.  It was kind of hectic just to get it
 fixed at the time.

 Well, a few months later (last week), the chicken came home to roost when 
 their
 IPSec tunnel started proxy ARP puking stuff to our side that temporarily took 
 out
 parts of our internal LAN.  I have requested a 2911 replacement for the 2811
 because I have seen the 2811 cpu load max out a few times when passing lots of
 traffic.  I am hoping it will allow us to go back to this VLAN setup again, 
 but
 I've never heard whether dot1q adds any overhead.

It's small, but plain 802.1Q inserts a 4-byte (32-bit) tag into the
Ethernet header, between the source MAC and the EtherType, ahead of
the actual Ethernet payload.

That tag has to go somewhere. :p

Also, I'm not privy to the details of your IPSec Tunnel, but that
can introduce additional overhead as well.

I'm not sure about your specific 2811/2911 and IOS combination (to
know if this feature is there or not), but you might also consider
setting ip tcp adjust-mss and ip mtu values on your tunnel
interfaces to signal the true maximum-transportable-size of the
various traffic types over the tunnel.
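A hedged sketch of those knobs in IOS; 1400/1360 are common conservative values for IPsec tunnels, not numbers measured from this particular network:

```
! Hypothetical tunnel interface; tune values to your actual overhead.
interface Tunnel0
 ip mtu 1400
 ip tcp adjust-mss 1360
```

The adjust-mss value is typically the ip mtu minus 40 bytes (20-byte IP header plus 20-byte TCP header), so TCP flows size their segments to fit without fragmentation.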

I've also been bit by this bug before
(http://networknerd.wordpress.com/2010/06/11/cisco-ipsec-mtu-bug/)
that affects the MTU calculations of tunnels, for which the source
address is specified by an interface, after a reboot. Worth knowing
about.

Cheers,
jof



Re: Why use PeeringDB?

2012-07-18 Thread Jonathan Lassoff
On Wed, Jul 18, 2012 at 8:43 AM, Chris Grundemann cgrundem...@gmail.com wrote:
 I am currently working on a BCOP for IPv6 Peering and Transit and
 would very much appreciate some expert information on why using
 PeeringDB is a best practice (or why its not). All opinions are
 welcome, but be aware that I plan on using the responses to enhance
 the document, which will be made publicly available as one of several
 (and hopefully many more) BCOPs published at http://www.ipbcop.org/.

It's a nice resource for finding out which networks are in which facilities.

As someone seeking out and setting up peering sessions, it's useful to
be able to search out networks that also have a couple common POPs, so
that one can call or email them and ask about potential
interconnection.

It's certainly cut down on emails that are just requests for information
(Where do you have sites? We're in these metros..., Looks like we'd
be good potential peers, what's your policy like?).

Overall -- I really like it!

Cheers,
jof



Re: Why use PeeringDB?

2012-07-18 Thread Jonathan Lassoff
On Wed, Jul 18, 2012 at 9:59 AM, Zaid Ali z...@zaidali.com wrote:
 The goal is Source of truth for any peer to know information at the
 Exchange points as well as peering coordinator information. I think it is
 a great tool for the peering community and definitely useful. Cons: Will
 it be the next RADB? There needs to be a sustainable community to keep it
 running since it is a volunteer effort.

Good point. I suspect that enough large users (with money, developers,
hosting, etc.) are enjoying it that it has reached a critical mass of
a semi-core service that won't have a hard time getting some support
going forward.

--j



Re: technical contact at ATT Wireless

2012-06-28 Thread Jonathan Lassoff
On Thu, Jun 28, 2012 at 1:50 PM, Christopher Morrow
morrowc.li...@gmail.com wrote:
 of course, but you aren't supposed to be doing that on their network
 anyway... so says the nice man from sprint 4 nanogs ago.

That, and if you are tunneling in, it's good practice to forward over
any DNS traffic as well (or all, depending on the application).

That way, if you have internal names or special resolvers setup,
you'll hit that as well.

Cheers,
jof



Re: Peer1/Server Beach support for BGP on dedicated servers

2012-05-19 Thread Jonathan Lassoff
On Sat, May 19, 2012 at 3:23 AM, Anurag Bhatia m...@anuragbhatia.com wrote:
 Was wondering if there's anyone from Server Beach/Peer1 here. We have a
 dedicated server with them which we primarily use for DNS. I am adding
 support for anycasting on that one but seems like Peer1 is not supporting
 BGP at all. NOC support told me that they can announce our block
 and statically pass us but cannot hear BGP announcement from our router.
 Was wondering if someone else had similar issue?

Generally, most dedicated hosting (renting/leasing the exclusive use
of a computer in their facility) outfits aren't setup to speak BGP to
individual servers/customers. Such a request is usually infrequent
enough that it doesn't warrant setting up the added hardware.

While you could have your provider announce your space for you, you'll
loose the fine-grained control over how that route gets announced once
the routing is out of your hands.

 This is important and if doesn't works then I would have to find a new
 place for dedicated server somewhere in California.

I would instead recommend looking for a colocation provider that will
host small installations (1 - 5 U), but is also savvy enough to speak
eBGP with their customers.

Cheers,
jof



Re: Squeezing IPs out of ARIN

2012-04-25 Thread Jonathan Lassoff
On Wed, Apr 25, 2012 at 8:46 AM, Kenneth McRae
kenneth.mc...@dreamhost.comwrote:

 I have never provided the names of end users..  How the address space
 would be utilized?  Definitely..  But not the names of end users...


Probably because you are an end user.
If you're talking about AS26347, I don't think there is any re-assigned
space in there.

Do you ever assign users CIDR blocks of IP space for their own use? If
it's just the transitory use of IPs in an operational network you control,
then that sounds like end user use to me, even though you may sell the
use of those IPs.

If you have questions about this stuff, the ARIN NRPM is a great resource:
https://www.arin.net/policy/nrpm.html

Cheers,
jof


Re: Squeezing IPs out of ARIN

2012-04-24 Thread Jonathan Lassoff
On Tue, Apr 24, 2012 at 10:32 AM,  ad...@thecpaneladmin.com wrote:
 Anyone have any tips for getting IPs from ARIN? For an end-user allocation
 they are requesting that we provide customer names for existing allocations,
 which is information that will take a while to obtain. They are insisting
 that this is standard process and something that everyone does when
 requesting IPs.  Has anyone actually had to do this?

Indeed. It's worked this way for a long time.

When starting a new organization, there's a bit of a chicken and egg
problem with IP space. If anyone could get IP space just for asking
for it, it would have been consumed too quickly. So, organizations
must first get some space assigned to them from an upstream provider
and begin using it.
At some point the current usage and growth rate of the assigned space
will justify a direct allocation.

Then, you can renumber into your new space and be totally independent.

Cheers,
jof



Re: Squeezing IPs out of ARIN

2012-04-24 Thread Jonathan Lassoff
On Tue, Apr 24, 2012 at 11:14 AM, Owen DeLong o...@delong.com wrote:
 That's not entirely true. What you say applies to one possible way for an
 ISP to get an allocation. It does not apply at all to end-users.

Even for end-user allocations, they would still need to fulfill the
requirements of 4.3.3 in the ARIN NRPM
(https://www.arin.net/policy/nrpm.html#four33), no?

I suppose for immediate need assignments, this can be short
circuited, but from what I know those are pretty rare.

Am I missing something?

Cheers,
jof



Re: About Juniper MX10 router performance

2012-04-22 Thread Jonathan Lassoff
On Sun, Apr 22, 2012 at 9:05 PM, Md.Jahangir Hossain
jrjahan...@gmail.com wrote:
 Dear valued member:


 Wishes all are fine.


 i need   suggestion from you about Juniper MX10 router performance. i want
 to buy  this router for IP Transit provider where i received  all global
 routes .

Do you have some specific questions about it? You should be able to
comfortably take in one to two full BGP feeds as RIBs, and hold an
Internet-sized FIB.

It's basically an MX80 (internally), but with some software
modifications to limit its performance. You probably need to make sure
that you're also purchasing the S-MX80-ADV-R software license in your
base bundle (to support the full-scale L3 routing), but the licensing
is based on the honor system.

Cheers,
jof



Re: About Juniper MX10 router performance

2012-04-22 Thread Jonathan Lassoff
On Sun, Apr 22, 2012 at 9:48 PM, Md.Jahangir Hossain
jrjahan...@gmail.com wrote:
 Thanks jonathan for your reply .

 Actually i have not specific question , i need suggestion about this product
 if i purchase this  as IP Transit provider.

Only someone with the knowledge of your business and requirements can
answer this for you.

If you'd like to take this up off-list, I'm happy to share what I know.
Generally, we try to keep the discussion to the list (that is sent to
the many subscribers) on topics that are of interest to all network
operators.

For Juniper-specific questions, the j-nsp mailing list is pretty great
(http://puck.nether.net/mailman/listinfo/juniper-nsp).

Cheers,
jof



Re: airFiber (text of the 8 minute video)

2012-03-29 Thread Jonathan Lassoff
On Thu, Mar 29, 2012 at 12:33 PM, Oliver Garraux oli...@g.garraux.net wrote:
 I was at Ubiquiti's conference.  I don't disagree with what you're
 saying.  Ubiquiti's take on it seemed to be that 24 Ghz would likely
 never be used to the extent that 2.4 / 5.8 is.  They are seeing 24 Ghz
 as only for backhaul - no connections to end users.

I suspect this is just due to cost and practicality. Neither ISPs nor
users will want to pay 3k USD for, nor widely utilize, a service that
requires near-direct LOS.
I could see this working well in rural or sparse areas that might not
I could see this working well in rural or sparse areas that might not
mind the transceiver.

 I guess
 point-to-multipoint connections aren't permitted by the FCC for 24
 Ghz.

The whole point of these unlicensed bands is that their usage is not
tightly controlled. I imagine the hardware still has to comply
with the FCC's Part 15 rules, though.

 AirFiber appears to be fairly highly directional.  It needs to
 be though, as each link uses 100 Mhz, and there's only 250 Mhz
 available @ 24 Ghz.

Being so directional, I'm not sure that cross-talk will be as much of
an issue, except for dense hub-like sites. It sounds like there's some
novel application of using GPS timing to make the radios spectrally
orthogonal -- that's pretty cool. If they can somehow coordinate
timing across point-to-point links, that would be great for sites that
co-locate multiple link terminations.

Overall, this looks like a pretty cool product!

--j



Re: airFiber (text of the 8 minute video)

2012-03-29 Thread Jonathan Lassoff
On Thu, Mar 29, 2012 at 2:37 PM, Joel jaeggli joe...@bogus.com wrote:
 Cost will continue to drop, fact of the matter is the beam width is
 rather narrow and they attenuate rather well so you can have a fair
 number of them deployed without co-channel interference. if you pack a
 tower full of them you're going to have issues.

This is exactly the kind of case that I'm thinking about (central towers).

The novel thing Ubiquiti seems to do is TDMA-like channelization (as
with AirMax), or changing the coding scheme over the air to
maintain orthogonality (which sounds like what this new product may be
doing).

--j



Re: Concern about gTLD servers in India

2012-03-10 Thread Jonathan Lassoff
On Sat, Mar 10, 2012 at 10:45 AM, Bill Woodcock wo...@pch.net wrote:
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA256


 On Mar 10, 2012, at 8:05 AM, Suresh Ramasubramanian wrote:
 Sure, if you can find a datacenter that's capable of handling all the
 traffic, and has staff who are able to provide efficient remote hands for
 huge racks of extremely powerful servers .

 Honestly, we haven't even gotten that far when we've offered to deploy 
 servers (for instance for domains like .IN) inside India.  The bribes that 
 were requested in exchange for giving us permission to deploy a free service 
 were, uh, both prohibitive and ludicrous in their enormity.

This.

This and the import duties on hardware and the requirement for
licensing to operate as an ISP makes placing even a modest
deployment a lot more work compared to deploying in other neighboring
countries.

I would presume that Verisign decided it just wasn't worth the effort
to deploy into India.
India obviously has a gigantic user base, for which getting into local
ISPs and IXPs would probably save on transit costs.

Perhaps if some local root operators could donate some
space/power/connectivity, Verisign-grs could colocate a gTLD cluster
there?

Cheers,
jof



Re: WW: Colo Vending Machine

2012-02-17 Thread Jonathan Lassoff
On Fri, Feb 17, 2012 at 10:35 AM, Jay Ashworth j...@baylink.com wrote:
 Please post your top 3 favorite components/parts you'd like to see in a
 vending machine at your colo; please be as specific as possible; don't
 let vendor specificity scare you off.

This is a riot! I'd love to have something like this at facilities I'm in.
Some useful stuff that comes to mind:
 - Rack screws of various common sizes and threadings
 - SFPs, GBICs, etc.
 - Rollover cables / DE-9-to-8P8C adapters
 - Screwdrivers
 - Cross-over Ethernet, patch cables
 - zip ties, velcro tape, etc.
 - Label tape

Cheers,
jof



Re: WW: Colo Vending Machine

2012-02-17 Thread Jonathan Lassoff
On Fri, Feb 17, 2012 at 10:55 AM, Leo Bicknell bickn...@ufp.org wrote:
 In a message written on Fri, Feb 17, 2012 at 01:35:15PM -0500, Jay Ashworth 
 wrote:
 Please post your top 3 favorite components/parts you'd like to see in a
 vending machine at your colo; please be as specific as possible; don't
 let vendor specificity scare you off.

 USB-Serial adapters.  Preferably selected so they are driverless on
 both OSX and Windows. :)

Does such a device exist? I've yet to run across one.

Personally, I would recommend those based on FTDI chips or the Prolific
PL2303 -- both have driver support for Linux, Windows, and OS X.

--j



Re: 802.11 MAC Point Coordination Function

2012-02-16 Thread Jonathan Lassoff
On Wed, Feb 15, 2012 at 8:13 PM, Jeremy jba...@gmail.com wrote:
 I'm doing some research on 802.11 quality of service, congestion control,
 etc. I'm trying to find some information on the Point Coordination
 Function, a polling based access control method, but I'm having a hard time
 finding much in the way of vendor support. I have access to some cisco
 1242's, 1140's and 1252's and I've been searching the Cisco's site and
 can't find a real answer on whether or not it's supported let alone how to
 configure it.

 Does anyone have any experience with this? Does Cisco have some special
 name for it aside from PCF? Any help would be appreciated!

I know of no such feature in classic Aironet APs. Everything I've
run across only does classic 802.11-style DCF along with all the
stations.
You might check out the Aironet 1550, which supports forming dynamic
mesh-like topologies, though I can't speak to it as I've never laid
hands on the hardware or software.

Other vendors and hackers are implementing alternative media-access
control and coordination functions, though.
You might check out Ubiquiti's AirMax-speaking hardware or
ieee80211_tdma in FreeBSD for software MAC-capable drivers.

Cheers,
jof



Re: Wireless Recommendations

2012-02-15 Thread Jonathan Lassoff
On Wed, Feb 15, 2012 at 7:50 PM, Faisal Imtiaz fai...@snappydsl.net wrote:
 Is that because of Channel Spacing ? or some other reason ?

I would presume channel spacing. In FCC-land, there are only three
non-overlapping 20 MHz channels available in the 2.4 GHz band.
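
For the curious, the arithmetic behind the "only three" claim is easy to
check. A quick sketch, assuming 20 MHz-wide transmissions and the standard
5 MHz spacing between 2.4 GHz channel centers:

```python
# Why only channels 1, 6, and 11 are non-overlapping in the 2.4 GHz band:
# channel centers sit 5 MHz apart starting at 2412 MHz, but each
# transmission is ~20 MHz wide, so adjacent channels bleed into each other.
def center_mhz(channel):
    return 2412 + 5 * (channel - 1)

def overlaps(a, b, width=20):
    # Two channels overlap when their centers are closer than the width.
    return abs(center_mhz(a) - center_mhz(b)) < width

assert not overlaps(1, 6)   # 2412 vs 2437 MHz: 25 MHz apart -> clear
assert not overlaps(6, 11)  # 2437 vs 2462 MHz: 25 MHz apart -> clear
assert overlaps(1, 4)       # 2412 vs 2427 MHz: 15 MHz apart -> overlap
```

That leaves 1, 6, and 11 as the only mutually clear set.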

--j



Re: Wireless Recommendations

2012-02-15 Thread Jonathan Lassoff
On Wed, Feb 15, 2012 at 8:41 PM, Joel jaeggli joe...@bogus.com wrote:
 On 2/15/12 20:14 , Mario Eirea wrote:
 This is my guess too, i guess there is some bleed over from their antenna 
 arrays.

 Even the most directional sector antenna in the world has a back lobe...
 and there there's the clients...

Agreed. There is rarely such a thing as a perfectly directional antenna
(not without a lot of shielding, I would presume).

Since I would presume that all the radios are controlled by the same
host, perhaps it could coordinate the 802.11 DCF and sequence CTS
frames so that the various client and AP radios remain as spectrally
orthogonal as possible. There's not much you can do about the clients
transmitting RTSes, but it can be predicted to a certain extent.

 there's no magic bullet you simply can't do it all in one ap with the
 space available.

Agreed. More, lower-power APs mean better spectral efficiency and
overall resilience.


--j



Re: Wireless Recommendations

2012-02-07 Thread Jonathan Lassoff
On Tue, Feb 7, 2012 at 11:19 AM, Arzhel Younsi xio...@gmail.com wrote:
 Xirrus say that they can support 640 clients with this device:
 http://www.xirrus.com/Products/Wireless-Arrays/XR-Series/XR-4000-Series
 I heard about it a couple weeks ago, didn't try it yet.

That's a pretty neat product -- it seems like it takes care of
spectrally isolating clients by utilizing 4 - 8 radios per AP-box and
8 - 24 directional sector antennas.

I feel like this addresses the suggestions that I and others gave to
utilize more APs rather than a big central one, but it just packages
it all into one box with many antennas.

Cheers,
jof



Re: Hijacked Network Ranges

2012-01-31 Thread Jonathan Lassoff
On Tue, Jan 31, 2012 at 10:19 AM, Grant Ridder shortdudey...@gmail.com wrote:
 Hi,

 What is keeping you from advertising a more specific route (i.e /25's)?

Most large transits and NSPs filter out prefixes more specific than a /24.

Conventionally, at least in my experience, /24's are the most-specific
prefix you can use and expect that it will end up in most places.
Some shops with limited router processing or table storage capacity
will filter even more restrictively, so a bigger aggregate is worth
announcing as well.
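
As a rough illustration of this kind of import filter, here's a sketch
using Python's standard ipaddress module (the prefixes are documentation
examples, not real announcements):

```python
# Sketch of the conventional "nothing longer than a /24" import policy:
# an announced prefix is accepted only if it is a /24 or shorter.
import ipaddress

def accept(prefix, max_prefixlen=24):
    return ipaddress.ip_network(prefix).prefixlen <= max_prefixlen

assert accept("203.0.113.0/24")      # most-specific that propagates widely
assert not accept("203.0.113.0/25")  # typically dropped by transit filters
```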

Cheers,
jof



Re: Hijacked Network Ranges

2012-01-31 Thread Jonathan Lassoff
On Tue, Jan 31, 2012 at 10:00 AM, Kelvin Williams
kwilli...@altuscgi.com wrote:
 We've been in a 12+ hour ordeal requesting that AS19181 (Cavecreek Internet
 Exchange) immediately filter out network blocks that are being advertised
 by ASAS33611 (SBJ Media, LLC) who provided to them a forged LOA.

 [ ...snip...]

Ugh, what a hassle. I've been there, and it's really no fun.

 Our customers and services have been impaired.  Does anyone have any
 contacts for anyone at Cavecreek that would actually take a look at ARINs
 WHOIS, and IRRs so the networks can be restored and our services back in
 operation?

Have you tried the contacts listed at PeeringDB for AS19181? Check
out: as19181.peeringdb.com

 Additionally, does anyone have any suggestion for mitigating in the
 interim?  Since we can't announce as /25s and IRRs are apparently a pipe
 dream.

If you fail to get AS19181 to respond, you might consider contacting
*their* upstreams and explaining the situation.

Cheers,
jof



Re: Wireless Recommendations

2012-01-30 Thread Jonathan Lassoff
On Mon, Jan 30, 2012 at 12:46 PM, Jim Gonzalez j...@impactbusiness.com wrote:
 Hi,

                I am looking for a Wireless bridge or Router that will
 support 600 wireless clients concurrently (mostly cell phones).  I need it
 for a proof of concept.

I've had some great luck with a variety of vendors, though never with
this many clients on one AP.
For a stable 802.11 stack, I've found Cisco AP1142N's to be great.

That said, I'm not sure what you're trying to do here, but I think
you'll be disappointed with any AP with 600 *active* stations
associated to it. No AP can work around the congestive collapse of
hundreds of stations all transmitting RTS frames at once.

If you can split up your many stations across a swath of APs, bridging
down to a couple L2 Ethernet LANs, I think you'll get something much
more scalable.

Cheers,
jof



Re: Populating BGP from Connected or IGP routes

2012-01-23 Thread Jonathan Lassoff
On Mon, Jan 23, 2012 at 12:46 PM, Eric C. Miller e...@ericheather.com wrote:
 Hi all,

 I'm looking for a best practice sort of answer, plus maybe comments on why 
 your network may or may not follow this.

 First, when running a small ISP with about the equivilent of a /18 or /19 in 
 different blocks, how should you decide what should be in the IGP and what 
 should be in BGP? I assume that it's somewhere between all and none, and one 
 site that I found made some good sense saying something to the following, 
 Use a link-state protocol to track interconnections and loopbacks only, and 
 place all of the networks including customer networks into BGP.

 Secondly, when is it ok, or preferable to utilize redistribute connected 
 for gathering networks for BGP over using a network statement? I know that 
 this influences the origin code, but past that, why else? Would it ever be 
 permissible to redistribute from the IGP into BGP?

This is one of those questions where the answer will depend heavily on
who you ask. In my opinion, I would
 - Keep externally-learned eBGP routes in one table. The Internet table.
 - Keep internal links (loopbacks, single-homed (to me) customers,
networks containing next-hops outside your AS) in an IGP (like OSPF or
IS-IS). These routes should very rarely get exchanged outside the AS.
 - Where possible, have multi-homed customers speak BGP to your AS and
just treat those routes as those you'll provide transit for
(re-announcing them to other external peers)
 -- In cases where customers multi- or single-home with their own
address space that they'd like you to announce, put very specific
filters and tagging on the routes. This way, you can perform careful
filtering on allowing those routes to cross the boundary from IGP to
EGP (and on to your external peers).

Cheers,
jof



Re: bgp question

2012-01-18 Thread Jonathan Lassoff
On Wed, Jan 18, 2012 at 5:58 AM, Deric Kwok deric.kwok2...@gmail.com wrote:
 ls it supporting equally multipath in different bgp connections?

Most routing protocol implementations support this in their RIBs, but
the actual forwarding ability of the underlying kernel will determine
whether it works in practice.
What platform do you route on?

Cheers,
jof



Re: enterprise 802.11

2012-01-15 Thread Jonathan Lassoff
On Sun, Jan 15, 2012 at 3:36 PM, Greg Ihnen os10ru...@gmail.com wrote:
 Since we're already top-posting…

 I've heard a lot of talk on the WISPA (wireless ISP) forum that 802.11g/n 
 starts to fall apart with more than 30 clients associated if they're all 
 reasonably active. I believe this is a limitation of 802.11g/n's media access 
 control (MAC) mechanism, regardless of who's brand is on the box. This is 
 most important if you're doing VoIP or anything else where latency and jitter 
 is an issue.

 To get around that limitation, folks are using proprietary protocols with 
 polling media access control. Ubiquiti calls theirs AirMax. Cisco uses 
 something different in the Canopy line. But of course then you've gone to 
 something proprietary and only their gear can connect. So it's meant more for 
 back-hauls and distribution networks, not for end users unless they use a 
 proprietary CPE.

 Since you need consumer gear to be able to connect, you need to stick with 
 802.11g/n. You should limit to 30 clients per AP. You should stagger your 
 2.4GHZ APs on channels 1, 6 and 11, and turn the TX power down and have them 
 spaced close enough that no more than 30 will end up connecting to a single 
 AP. 5.8GHz APs would be better, and you'll want to stagger their channels too 
 and turn the TX power down so each one has a small footprint to only serve 
 those clients that are nearby.

 Stay away from mesh solutions and WDS where one AP repeats another, that 
 kills throughput because it hogs airtime. You'll want to feed all the APs 
 with Ethernet.

After working in some WISP-like and access environments, I can
corroborate that this is pretty much true. It becomes worse the lower
the SNR is and the more the clients are spread out, as that makes the
'hidden node' problem worse.

Making APs as low-power and local as possible is good advice. Where
possible, feed everything with hardlines back to your Ethernet
switching environment. If client roaming and client-to-client traffic
are important, a central controller that can tunnel 802.11 frames over
whatever wired L2 network you like is a good win: clients can associate
and/or authenticate to one AP and roam from place to place while
keeping the same session to the controller.


As far as vendor gear goes, if roaming and client-client stuff isn't
as important, Ubiquiti UniFi is great stuff for the price. Next rung up
in my book would be Meraki, followed by Cisco or Aruba.

Good luck!

Cheers,
jof



Re: Linux Centralized Administration

2012-01-12 Thread Jonathan Lassoff
On Thu, Jan 12, 2012 at 1:02 PM, Paul Stewart p...@paulstewart.org wrote:
 Hey folks. just curious what people are using for automating updates to
 Linux boxes?



 Today, we manually do YUM updates to all the CentOS servers . just an
 example but a good one.  I have heard there are some open source solutions
 similar to that of Red Hat Network?

There's no tool I could recommend that would be very close to RHN.
However, for solving the problem of keeping packages up to date and
systems in a known-state, I would recommend checking out some
configuration management tools.

There are several popular ones nowadays, though I personally prefer
Puppet or Chef.
Both are tools that allow administrators to declare what a system
should look like, and abstract away the hard work of making that
happen on a variety of platforms. In both cases, it's possible to
monitor how well those tools are working and what they're doing in the
background so that you can get an idea of what's up to date and what's
not.

Are you just trying to solve for making sure that packages are up to
date? Making sure that running daemons are also up to date?

Cheers,
jof



Re: bgp question

2012-01-10 Thread Jonathan Lassoff
On Tue, Jan 10, 2012 at 2:43 PM, Deric Kwok deric.kwok2...@gmail.comwrote:

 Hi all

 When we get  newip, we should let the upstream know to expor it as
 there should have rule in their side.

 how about upstream provider, does they need to let their all bgp
 interconnect to know those our newip?

 If no, Can I know how it works?

 If they don't have rules each other, ls it any problems?


It depends on your upstream ISPs.

Conventionally, some place exact filters on BGP announcements, matching
only IP space that is registered with an RIR or LIR; some build those
filters from IRR sources; and others just filter on the number of
prefixes you're sending (to avoid accepting a whole table leaked by
accident). I'm sure there are other filtering schemes in place around
the world.

In the case of exact filters, you'll need to contact your upstream ISPs and
ask them to update their filters.
In the case of IRR-sourced filtering information, update the prefixes that
you originate with your IRR provider.
And in the case of max-prefix filtering, ask your ISP what they have their
equipment set to.
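
A hedged sketch of the exact-match style of filter described above: an
announcement passes only if it exactly matches a prefix on file (the
registered prefixes here are made up for illustration):

```python
# Exact-match import filter: unlike prefix-length filtering, even a
# more-specific of a registered route is rejected until the upstream
# updates its filter list.
registered = {"203.0.113.0/24", "198.51.100.0/23"}

def accepted(announcement):
    return announcement in registered

assert accepted("203.0.113.0/24")
assert not accepted("203.0.113.0/25")  # more-specific of a registered route
assert not accepted("192.0.2.0/24")    # new space: filter must be updated
```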


Cheers,
jof


Re: subnet prefix length 64 breaks IPv6?

2011-12-24 Thread Jonathan Lassoff
On Sat, Dec 24, 2011 at 6:48 AM, Glen Kent glen.k...@gmail.com wrote:

 
  SLAAC only works with /64 - yes - but only if it runs on Ethernet-like
  Interface ID's of 64bit length (RFC2464).

 Ok, the last 64 bits of the 128 bit address identifies an Interface ID
 which is uniquely derived from the 48bit MAC address (which exists
 only in ethernet).

  SLAAC could work ok with /65 on non-Ethernet media, like a
  point-to-point link whose Interface ID's length be negotiated during the
  setup phase.

 If we can do this for a p2p link, then why cant the same be done for
 an ethernet link?


I think by point-to-point, Alexandru was referring to PPP-signalled
links. In the case of Ethernet and SLAAC, the standards define a way to
turn a globally unique 48-bit 802.3 MAC-48 address into an EUI-64
identifier by flipping and adding some bits.

This uniquely maps conventional MAC-48 addresses into EUI-64 addresses. I
imagine this was chosen because the IEEE is encouraging new standards and
numbering schemes to use the 64-bit schemes over the older 48-bit ones.
Presumably to avoid exhaustion in the future (like we're seeing with IPv4).

The result of which is that with the standards we've got today, we can
easily map a piece of hardware's globally unique MAC address into a
globally unique 64-bit identifier -- which happens to cleanly fit into the
second half of the v6 address space.
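
The flip-and-insert mapping described above can be sketched in a few
lines. This follows the RFC 2464 modified EUI-64 procedure; the example
MAC address is arbitrary:

```python
# RFC 2464 mapping from a 48-bit MAC address to the modified EUI-64
# interface identifier used by SLAAC: insert ff:fe between the OUI and
# device halves, and flip the universal/local bit of the first octet.
def mac_to_eui64(mac):
    octets = [int(x, 16) for x in mac.split(":")]
    octets[0] ^= 0x02  # flip the universal/local (U/L) bit
    eui = octets[:3] + [0xFF, 0xFE] + octets[3:]
    return ":".join("%02x%02x" % (eui[i], eui[i + 1]) for i in range(0, 8, 2))

# A host with this MAC would autoconfigure <prefix>:0225:96ff:fe12:3456
print(mac_to_eui64("00:25:96:12:34:56"))  # 0225:96ff:fe12:3456
```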

I suppose one could make an argument to use /80 networks and just use the
MAC-48 identifier for the address portion, but given the vastness of v6
space I don't think it's really worth the extra savings of bit space.


So, to address your original question: in v6 networks with netmask lengths
greater than 64 bits, nothing breaks per se, but some of the conventional
standards and assumptions about what a network looks like in that context
no longer hold. While hosts can no longer uniquely pick addresses for
themselves via SLAAC, one can use other addressing mechanisms like DHCPv6
or static addresses.

--j


Re: Multiple ISP Load Balancing

2011-12-14 Thread Jonathan Lassoff
The best applications for analyzing paths, that I've seen, have been
in-house development projects. So, admittedly, I don't have much experience
with commercial products for route optimization.

Projects I've seen that analyze best paths to Internet destinations via
multiple ISPs add instrumentation to content-serving applications to log
stream performance details to a database or log collection system along
with a timestamp. Another database keeps a periodic log of RIB data that
lists the specific next-hops out of the AS. Another log keeps a running log
of UPDATEs.
From joining up all of this information, you can figure out the ISP you're
taking to a destination (at a given time) and how the stream performed.
Then, add some logic to inject routes to try out different next-hop ISPs
for some destinations.

Then, compare the newer ISP-path to the older one and see which performs
best. Where best means something specific to your application
(optimizing for latency, cost, etc.)
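
A minimal sketch of that join: per-stream performance samples attributed to
whichever upstream the RIB snapshot showed at the time. All names and
numbers here are invented for illustration:

```python
# Join stream-performance samples against periodic RIB snapshots to
# attribute each sample to the upstream ISP in use at that moment.
rib_snapshots = [  # (taken_at, destination prefix -> upstream ISP)
    (1000, {"198.51.100.0/24": "ISP-A"}),
    (2000, {"198.51.100.0/24": "ISP-B"}),
]
samples = [  # (timestamp, destination prefix, latency in ms)
    (1500, "198.51.100.0/24", 80.0),
    (2500, "198.51.100.0/24", 45.0),
]

def upstream_at(ts, prefix):
    # Use the most recent snapshot taken at or before the sample time.
    current = None
    for taken_at, table in rib_snapshots:
        if taken_at <= ts:
            current = table.get(prefix)
    return current

perf = {}
for ts, prefix, latency in samples:
    perf.setdefault(upstream_at(ts, prefix), []).append(latency)

print(perf)  # {'ISP-A': [80.0], 'ISP-B': [45.0]}
```

From here, "best" is whatever aggregate of those per-ISP lists matters to
your application.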

Cheers,
jof


Re: Inaccessible network from Verizon, accessible elsewhere.

2011-12-11 Thread Jonathan Lassoff
On Sat, Dec 10, 2011 at 11:49 AM, NetSecGuy netsec...@gmail.com wrote:

 I have a Linode VPS in Japan that I can't access from Verizon FIOS,
 but can access from other locations.  I'm not sure who to blame.

 The host, 106.187.34.33, is behind the gateway 106.187.34.1:

 From FIOS to 106.187.34.1  (this works).

 traceroute to 106.187.34.1 (106.187.34.1), 64 hops max, 52 byte packets

  4  so-6-1-0-0.phil-bb-rtr2.verizon-gni.net (130.81.199.4)  9.960 ms
 9.957 ms  6.666 ms
  5  so-8-0-0-0.lcc1-res-bb-rtr1-re1.verizon-gni.net (130.81.17.3)
 12.298 ms  13.463 ms  13.706 ms
  6  0.ae2.br1.iad8.alter.net (152.63.32.158)  14.571 ms  14.372 ms
  14.003 ms
  7  204.255.169.218 (204.255.169.218)  14.692 ms  14.759 ms  13.670 ms
  8  sl-crs1-dc-0-1-0-0.sprintlink.net (144.232.19.229)  13.077 ms
 12.577 ms  14.954 ms
  9  sl-crs1-nsh-0-5-5-0.sprintlink.net (144.232.18.200)  31.443 ms
sl-crs1-dc-0-5-3-0.sprintlink.net (144.232.24.37)  33.005 ms
sl-crs1-nsh-0-5-5-0.sprintlink.net (144.232.18.200)  31.507 ms
 10  sl-crs1-kc-0-0-0-2.sprintlink.net (144.232.18.112)  57.610 ms
 58.322 ms  59.098 ms
 11  otejbb204.kddnet.ad.jp (203.181.100.45)  196.063 ms
otejbb203.kddnet.ad.jp (203.181.100.13)  188.846 ms
otejbb204.kddnet.ad.jp (203.181.100.21)  195.277 ms
 12  cm-fcu203.kddnet.ad.jp (124.215.194.180)  214.760 ms
cm-fcu203.kddnet.ad.jp (124.215.194.164)  198.925 ms
cm-fcu203.kddnet.ad.jp (124.215.194.180)  200.583 ms
 13  124.215.199.122 (124.215.199.122)  193.086 ms *  194.967 ms

 This does not work from FIOS:

 traceroute to 106.187.34.33 (106.187.34.33), 64 hops max, 52 byte packets

  4  so-6-1-0-0.phil-bb-rtr2.verizon-gni.net (130.81.199.4)  34.229 ms
 8.743 ms  8.878 ms
  5  so-8-0-0-0.lcc1-res-bb-rtr1-re1.verizon-gni.net (130.81.17.3)
 15.402 ms  13.008 ms  14.932 ms
  6  0.ae2.br1.iad8.alter.net (152.63.32.158)  13.325 ms  13.245 ms
  13.802 ms
  7  204.255.169.218 (204.255.169.218)  14.820 ms  14.232 ms  13.491 ms
  8  lap-brdr-03.inet.qwest.net (67.14.22.78)  90.170 ms  92.273 ms
  145.887 ms
  9  63.146.26.70 (63.146.26.70)  92.482 ms  92.287 ms  94.000 ms
 10  sl-crs1-kc-0-0-0-2.sprintlink.net (144.232.18.112)  58.135 ms
 58.520 ms  58.055 ms
 11  otejbb203.kddnet.ad.jp (203.181.100.17)  205.844 ms
otejbb204.kddnet.ad.jp (203.181.100.25)  189.929 ms
otejbb203.kddnet.ad.jp (203.181.100.17)  204.846 ms
 12  sl-crs1-oro-0-1-5-0.sprintlink.net (144.232.25.77)  87.229 ms
sl-crs1-oro-0-3-3-0.sprintlink.net (144.232.25.207)  88.796 ms  88.717
 ms
 13  124.215.199.122 (124.215.199.122)  193.584 ms  202.208 ms  192.989 ms
 14  * * *

 Same IP from different network:

 traceroute to 106.187.34.33 (106.187.34.33), 30 hops max, 60 byte packets

  6  ae-8-8.ebr2.Washington1.Level3.net (4.69.134.105)  2.230 ms  1.847
 ms  1.938 ms
  7  ae-92-92.csw4.Washington1.Level3.net (4.69.134.158)  2.010 ms
 1.985 ms ae-62-62.csw1.Washington1.Level3.net (4.69.134.146)  1.942 ms
  8  ae-94-94.ebr4.Washington1.Level3.net (4.69.134.189)  12.515 ms
 ae-74-74.ebr4.Washington1.Level3.net (4.69.134.181)  12.519 ms  12.507
 ms
  9  ae-4-4.ebr3.LosAngeles1.Level3.net (4.69.132.81)  65.957 ms
 65.958 ms  66.056 ms
 10  ae-83-83.csw3.LosAngeles1.Level3.net (4.69.137.42)  66.063 ms
 ae-93-93.csw4.LosAngeles1.Level3.net (4.69.137.46)  65.985 ms
 ae-63-63.csw1.LosAngeles1.Level3.net (4.69.137.34)  66.026 ms
 11  ae-3-80.edge2.LosAngeles9.Level3.net (4.69.144.143)  66.162 ms
 66.160 ms  66.238 ms
 12  KDDI-AMERIC.edge2.LosAngeles9.Level3.net (4.53.228.14)  193.317 ms
  193.447 ms  193.305 ms
 13  lajbb001.kddnet.ad.jp (59.128.2.101)  101.544 ms  101.543 ms
 lajbb002.kddnet.ad.jp (59.128.2.185)  66.563 ms
 14  otejbb203.kddnet.ad.jp (203.181.100.13)  164.217 ms  164.221 ms
  164.330 ms
 15  cm-fcu203.kddnet.ad.jp (124.215.194.164)  180.350 ms
 cm-fcu203.kddnet.ad.jp (124.215.194.180)  172.779 ms
 cm-fcu203.kddnet.ad.jp (124.215.194.164)  185.824 ms
 16  124.215.199.122 (124.215.199.122)  175.703 ms  175.700 ms  168.268 ms
 17  li377-33.members.linode.com (106.187.34.33)  174.381 ms  174.383
 ms  174.368 ms


In doing a little probing right now, from various source addresses, I'm
unable to reproduce the problem.

I've seen failures similar to this one (where the source address matters;
some work, some don't) when multi-port LAGs or ECMP paths have a single
link in them fail, but it is still detected and forwarded over as if it
were up. This can happen, for example, if you run a LAG with no channeling
protocol (like LACP or PAgP) that hashes source and destination IPs (and
possibly ports) to pick a member link, so that each flow takes a
consistent path. If one of those links fails in the underlying media or
physical path, but the link is still detected as up, packets to some IPs
(but not others) will just drop on the floor.
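
To make the failure mode concrete, here's a toy sketch of hash-based
member selection (the hash function and interface names are arbitrary
stand-ins; real LAG hashing is vendor-specific):

```python
# Each src/dst IP pair deterministically picks one member link, so a
# single dead member that still shows "up" blackholes some IP pairs
# while others keep working fine.
import hashlib

MEMBERS = ["xe-0/0/0", "xe-0/0/1", "xe-0/0/2", "xe-0/0/3"]  # hypothetical

def pick_member(src, dst):
    digest = hashlib.md5(("%s-%s" % (src, dst)).encode()).digest()
    return MEMBERS[digest[0] % len(MEMBERS)]

# The same pair always hashes to the same link; if that link is the
# broken one, that pair of hosts can't talk while their neighbors can.
assert pick_member("192.0.2.1", "198.51.100.1") == \
       pick_member("192.0.2.1", "198.51.100.1")
```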

Now, in this particular case, it doesn't seem like the two destinations
even take the same path (so that previous hypothesis is pure
conjecture). Perhaps routes were actively 

Re: Internet Edge and Defense in Depth

2011-12-06 Thread Jonathan Lassoff
I would argue that collapsing all of your policy evaluation and routing for
a size/zone/area/whatever into one box is actually somewhat detrimental to
stability (and consequently, security to a certain extent).

Cramming every little feature under the sun into one appliance makes for
great glossy brochures and Powerpoint decks, but I just don't think it's
practical.

Take a LAMP hosting operation for example. Which will scale the furthest to
handle the most traffic and stateful sessions: iptables and snort on each
multi-core server, or one massive central box with some interface hardware
and Cavium Octeons?
If built properly, my money's on the distributed setup.

Cheers,
jof


Re: IPv6 prefixes longer then /64: are they possible in DOCSIS networks?

2011-11-28 Thread Jonathan Lassoff
On Mon, Nov 28, 2011 at 10:43 PM, valdis.kletni...@vt.edu wrote:

 On Tue, 29 Nov 2011 00:15:02 EST, Jeff Wheeler said:

  Owen and I have discussed this in great detail off-list.  Nearly every
  time this topic comes up, he posts in public that neighbor table
  exhaustion is a non-issue.  I thought I'd mention that his plan for
  handling neighbor table attacks against his networks is whack-a-mole.
  That's right, wait for customer services to break, then have NOC guys
  attempt to clear tables, filter traffic, or disable services; and
  repeat that if the attacker is determined or going after his network
  rather than one of his downstream customers.

 It's worked for us since 1997.  We've had bigger problems with IPv4 worms
 that
 decided to probe in multicast address space for their next target, causing
 CPU
 exhaustion on routers as they try to set up zillions of multicast groups.

 Sure, it's a consideration.  But how many sites are *actually* getting hit
 with this, compared to all the *other* DDOS stuff that's going on?  I'm
 willing
 to bet a large pizza with everything but anchovies that out in the *real*
 world, 50-75 times as many (if not more) sites are getting hit with IPv4
 DDoS attacks that they weren't prepared for than are seeing this one
 particular neighbor table exhaustion attack.

 Any of the guys with actual DDoS numbers want to weigh in?


Agreed. While I don't have any good numbers that I can publicly offer up,
it also intuitively makes sense that there's a greater proportion of IPv4
DDOS and resource exhaustion attacks vs IPv6 ones.
Especially on the distributed part; there's a heck of a lot more v4-only
hosts to be 0wned and botnet'ed than dual-stacked ones.

That said, I think the risk of putting a /64 on a point-to-point link is
much greater than that of an (incredibly wasteful) v4 /24 on a
point-to-point. Instead of ~254 IPs one can send traffic towards (each
triggering an ARP resolution), there are now ~18446744073709551614
addresses for which one can cause a router to start sending ICMPv6
neighbor discovery messages.
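
The difference in exposure is easy to see with Python's stdlib ipaddress
module (documentation prefixes used as examples):

```python
# Address counts an attacker can sweep to churn the ARP/ND cache:
# a v4 /24 vs a v6 /64 vs a v6 /127 point-to-point.
import ipaddress

v4_p2p = ipaddress.ip_network("192.0.2.0/24")
v6_64  = ipaddress.ip_network("2001:db8::/64")
v6_127 = ipaddress.ip_network("2001:db8::/127")

print(v4_p2p.num_addresses)  # 256
print(v6_64.num_addresses)   # 18446744073709551616 -- i.e. 2**64
print(v6_127.num_addresses)  # 2
```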

For links that will only ever be point-to-point, there's no good reason
(that I know of) to not use a /127. For peering LANs or places that have a
handful of routers, /112's are cute, but I would think the risk is then
comparable to a /64 (which has the added benefit of SLAAC).

I imagine the mitigation strategies are similar for both cases though: just
rate-limit how often your router will attempt neighbor discovery. Are there
other methods?

Cheers,
jof


Re: ASA log viewer

2011-11-19 Thread Jonathan Lassoff
On Sat, Nov 19, 2011 at 4:51 PM, Duane Toler deto...@gmail.com wrote:

 Hey NANOG!

 My employer is deploying CIsco ASA firewalls to our clients
 (specifically the 5505, 5510 for our smaller clients).  We are having
 problems finding a decent log viewer.  Several products seem to mean
 well, but they all fall short for various reasons.  We primarily use
 Check Point firewalls, and for those of you with that experience, you
 know the SmartViewer Tracker is quite powerful.  Is there anything
 close to the flexibility and filtering capabilities of Check Point's
 SmartView Tracker?

 For now, I've been dumping the logs via syslog with TLS using
 syslog-ng to our server, but that is mediocre at best with varying
 degrees of reliability.  The syslog-ng server then sends that to a
 perl script to put that into a database.  That allows us to run our
 monthly reports, but that doesn't help us with live or historical log
 parsing and filtering (see above, re: SmartView Tracker).


It sounds like you've already got a pretty good aggregation setup going,
here. I've had great luck with UDP Syslog from devices to a site-local log
aggregator that then ships off log streams to a central place over TCP (for
the WAN paths) and/or TLS/SSL.

It sounds like you may have something similar going here, though I'd be
curious to know where you've had this fall down reliability-wise.

If a customer called to help us troubleshoot connection issues over
 the past few days, there's no way to review the logs and figure out
 what happened back then.  Every CCIE we've talked to, and Cisco
 themselves, seem to not care about firewall traffic logs or the
 ability to parse and review them.  We know about Cisco Security
 Center, but that seems incapable of handling logs, etc.  CS-MARS
 would've been great, but that's overpriced and now discontinued
 anyway.  We'd hate to spend the time writing our own app if there's a
 viable product already available (we're willing to pay a reasonable
 price for one, too).


I don't know of any great commercial products, as I've only built homegrown
tools for various organizations. I'm curious though, what kinds of features
are you looking for? Searching log data? Alerting on events based on log
data?

Cheers,
jof


Re: ASA log viewer

2011-11-19 Thread Jonathan Lassoff
On Sat, Nov 19, 2011 at 5:32 PM, Duane Toler deto...@gmail.com wrote:

 On Sat, Nov 19, 2011 at 20:04, Jay Ashworth j...@baylink.com wrote:
  - Original Message -
  From: Duane Toler deto...@gmail.com
 
  My employer is deploying CIsco ASA firewalls to our clients
  (specifically the 5505, 5510 for our smaller clients). We are having
  problems finding a decent log viewer. Several products seem to mean
  well, but they all fall short for various reasons. We primarily use
  Check Point firewalls, and for those of you with that experience, you
  know the SmartViewer Tracker is quite powerful. Is there anything
  close to the flexibility and filtering capabilities of Check Point's
  SmartView Tracker?
 
  Is your problem the aggregation proper, or the mining?
 
  Do the ASA's log to syslog?
 
  Cheers,
  -- jra
  --

 Yep, we log to syslog, and the issue is the mining.  Not that I/we
 *can't* grep/regex/sed/awk/perl our way thru the log files.  It's just
 that it's overly tedious.  Especially when compared to Check Point's
 product (given that they are aiming to compete...).


I'd second Mike's suggestion then -- check out Splunk. They make a
commercial log viewing, searching, and reporting product that's pretty
awesome. They license based on log volume, and the pricing scales somewhat
logarithmically. So, I would consider your log volume and budget before
sinking too much time into it.

There's a free trial installation and license that's available if you want
to try it out.

Cheers,
jof


Re: ASA log viewer

2011-11-19 Thread Jonathan Lassoff
On Sat, Nov 19, 2011 at 5:46 PM, Duane Toler deto...@gmail.com wrote:

 On Sat, Nov 19, 2011 at 20:30, Jonathan Lassoff j...@thejof.com wrote:
  On Sat, Nov 19, 2011 at 4:51 PM, Duane Toler deto...@gmail.com wrote:
 
  Hey NANOG!
 
  My employer is deploying CIsco ASA firewalls to our clients
  (specifically the 5505, 5510 for our smaller clients).  We are having
  problems finding a decent log viewer.  Several products seem to mean
  well, but they all fall short for various reasons.  We primarily use
  Check Point firewalls, and for those of you with that experience, you
  know the SmartViewer Tracker is quite powerful.  Is there anything
  close to the flexibility and filtering capabilities of Check Point's
  SmartView Tracker?
 
  For now, I've been dumping the logs via syslog with TLS using
  syslog-ng to our server, but that is mediocre at best with varying
  degrees of reliability.  The syslog-ng server then sends that to a
  perl script to put that into a database.  That allows us to run our
  monthly reports, but that doesn't help us with live or historical log
  parsing and filtering (see above, re: SmartView Tracker).
 
  It sounds like you've already got a pretty good aggregation setup going,
  here. I've had great luck with UDP Syslog from devices to a site-local
 log
  aggregator that then ships off log streams to a central place over TCP
 (for
  the WAN paths) and/or TLS/SSL.
  It sounds like you may have something similar going here, though I'd be
  curious to know where you've had this fall down reliability-wise.

 We considered that, but didn't want to burden small customers with a
 classic scenario of ok well you have to have our other box in your
 room and have to deal with procurement, maintenance, upkeep,
 monitoring, blah blah.  Recent ASA code (8.3-ish, 8.4? i forget) had
 syslog-tls built in and finally able to ship logs out across the
 lowest security zone, which was quite a nice addition.


Ah, this totally makes sense now. I can see why you'd want to use features
that are already on your ASAs. Sounds like a bug to me, though.
I wonder what Cisco calls syslog-tls though. Syslog-like packet bodies,
over a TLS-wrapped TCP socket?

Sorry to hear it's been so unreliable -- I guess that's why I'm biased
towards just running generic PCs and open source software for this kind of
stuff; when bugs happen, you're actually empowered to debug and fix
problems.
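
That relay pattern (UDP syslog from devices to a site-local aggregator, then TCP/TLS across the WAN to a central collector) can be sketched in syslog-ng along these lines; the hostname, port, and CA directory are placeholders, and exact option names vary a bit between syslog-ng versions, so treat this as a starting point rather than a drop-in config:

```
# site-local aggregator: accept plain UDP syslog from nearby devices
source s_devices { udp(ip(0.0.0.0) port(514)); };

# relay everything to the central collector over a TLS-wrapped TCP socket
destination d_central {
    tcp("logs.example.net" port(6514)
        tls(ca-dir("/etc/syslog-ng/ca.d") peer-verify(required-trusted)));
};

log { source(s_devices); destination(d_central); };
```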

 I'd like to fully search on a 'column', a la 'ladder logic' style,
 as well as have the data presented in an orderly well-defined fashion.
  I know that sounded like the beginnings of use XML! but oh dear,
 not XML, please. :)  Poor syslog is just too flat and in a state of
 general disarray.  The bizarre arrangement of connection setup, NAT,
 non-NAT, traffic destined to the device, originating from the device,
 traffic routing across the to another zone, etc. ... it's very
 nonsensical, verbose, and frankly maddening.


This does indeed sound like a good application for splunk. They have ways
of defining custom logging formats that will parse out simple column and
message types so that you can construct queries based on that information.

There's some more information here in Splunk's docs on custom field
extraction:
http://docs.splunk.com/Documentation/Splunk/latest/Knowledge/Managesearch-timefieldextractions

Cheers,
jof


Re: Cable standards question

2011-11-14 Thread Jonathan Lassoff
On Mon, Nov 14, 2011 at 7:12 AM, Jon Lewis jle...@lewis.org wrote:

 On Mon, 14 Nov 2011, Sam (Walter) Gailey wrote:

  My question is this; Is there an appropriate standard to specify for
 fiber-optic cabling that if it is followed the fiber will be installed
 correctly? Would specifying TIA/EIA 568-C.3, for example, be correct?

 I'm envisioning something like;

 The vendor will provide fiber connectivity between (building A) and
 (building B). Vendor will be responsible for all building penetrations and
 terminations. When installing the fiber-optic cable the vendor will follow
 the appropriate TIA/EIA 568 standards for fiber-optic cabling.


 At minimum, I think you should probably specify the type and number of
 fibers you want.  i.e. Based on the distance and gear you'll be using, do
 you need single-mode, or will multi-mode do (as well as the core/cladding
 diameter)?  Generally, but not always, fiber uses one strand for transmit
 and another for receive, so a typical fiber run is done using duplex fiber.
  Some optics can transmit and receive over one strand using different
 wavelengths.  You might even specify how you want the fiber terminated (SC,
 LC, cables hanging from the wall, fiber patch panel, etc.).


I'd agree with this. I wouldn't worry about the standard so much as the
practical aspects of a run. Once you have an idea of the approximate
distance of the run, you can figure out which optics you plan on using.
This will determine what physical connectors you'll need and what your
approximate link budget will be.

Based on that information, you can figure out which type to ask for
(9um/125um single-mode, most likely), a range of path loss that you're
comfortable with, and the physical termination you'd like at either end.
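
To make the budget arithmetic concrete, here's a back-of-the-envelope sketch. The optic power levels and per-km/per-connector loss figures below are illustrative assumptions (loosely in the range of 1000BASE-LX-class optics over single-mode), not vendor specs; plug in the numbers from your own optics' datasheets.

```python
def link_margin_db(tx_dbm, rx_sens_dbm, km, loss_per_km_db=0.35,
                   connectors=2, loss_per_connector_db=0.5,
                   splices=0, loss_per_splice_db=0.1):
    """Return the remaining margin (dB) for a fiber run.

    The power budget is TX launch power minus RX sensitivity; the path
    loss is the sum of fiber attenuation, connector loss, and splice loss.
    Whatever is left over is your margin.
    """
    budget = tx_dbm - rx_sens_dbm                 # total allowable loss
    path_loss = (km * loss_per_km_db
                 + connectors * loss_per_connector_db
                 + splices * loss_per_splice_db)
    return budget - path_loss

# Hypothetical example: -3 dBm TX, -19 dBm RX sensitivity, 2 km run
print(round(link_margin_db(-3.0, -19.0, 2.0), 2))  # prints 14.3
```

If the margin comes out near zero (or negative), you either need longer-reach optics or a cleaner path; a few dB of headroom is the usual comfort zone.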

Cheers,
jof


Re: Firewalls - Ease of Use and Maintenance?

2011-11-10 Thread Jonathan Lassoff
On Wed, Nov 9, 2011 at 12:44 PM, Nick Hilliard n...@foobar.org wrote:
 On 09/11/2011 19:07, C. Jon Larsen wrote:

 put the main portion of the conf in subversion as an include file and
 factor out local differences in the configs with macros that are defined
 in
 pf.conf

 Easy.

 As I said, it's not a pf problem.  Commercial firewalls will do all this
 sort of thing off the shelf.  It's a pain to have to write scripts to do
 this manually.

Agreed. This is rather a pain to have to do manually each time (either
scp'ing or scripting). It's unfortunate that there's not a
conventional script or mechanism for doing this.

I have plenty of scripts from past commercial work that do this, but
they're sadly tied up license-wise.

I've had good luck, pf-wise, with creating a ruleset that is just
identical between hosts. By keeping the interface naming/numbering
scheme consistent across two hosts, the same configuration can just
work on both.
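
As a sketch of that macros-plus-shared-ruleset idea (the interface names and addresses below are made up), a single pf.conf can keep per-host differences confined to a few macros at the top, with everything below them identical across the pair:

```
# per-host macros -- the only lines that differ between firewalls
ext_if    = "em0"
int_if    = "em1"
mgmt_host = "192.0.2.10"

# shared ruleset, identical on both hosts
set skip on lo
block in log all
pass out on $ext_if keep state
pass in on $int_if from $int_if:network keep state
pass in on $ext_if proto tcp from $mgmt_host to ($ext_if) port 22 keep state
```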

Cheers,
jof



Re: Firewalls - Ease of Use and Maintenance?

2011-11-09 Thread Jonathan Lassoff
On Wed, Nov 9, 2011 at 5:24 AM, Nick Hilliard n...@foobar.org wrote:
 On 09/11/2011 12:22, Richard Kulawiec wrote:
 You will find it very difficult to beat pf on OpenBSD for efficiency,
 features, flexibility, robustness, and security.  Maintenance is very
 easy: edit a configuration file, reload, done.

 There are several areas where pf falls down.  One is auto-synchronisation
 from primary to backup firewall (not really a pf problem, but it's
 important for production firewall systems).

I've found that this works decently well, via pfsync. It sends out
multicast IP packets with multi-valued elements describing the state
of the flows it has in its table.

If you're having pf inspect TCP sequence numbers, there's a bit of a
race condition on failover with fast-moving TCP streams.
As the window of acceptable sequence numbers moves on the active
firewall, they're slightly delayed in getting replicated to the
backup(s) and installed in their state tables.
Consequently, on failover, it's possible for some flows to get blocked
and have to be re-created.

I've hit this and dug into it recently, so if you're having a problem,
I'd be happy to chat offlist.
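
For reference, a minimal pfsync setup on OpenBSD looks roughly like this; the interface name is an assumption (ideally a dedicated back-to-back sync link between the firewalls), so substitute your own:

```
# /etc/hostname.pfsync0 -- replicate pf state over a dedicated link (em1 here)
syncdev em1
up

# equivalent at runtime:
#   ifconfig pfsync0 syncdev em1 up
```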

Cheers,
jof



Re: Firewalls - Ease of Use and Maintenance?

2011-11-08 Thread Jonathan Lassoff
It really depends on what constraints you have. Do you care about:
cost? performance? support?

Personally, for cost-constrained applications of 1 Gbit/s or less
(assuming modestly-sized packets, not all-DNS for example), I like
OpenBSD/pf or Linux/netfilter and generic x86 64-bit servers.
It's cheap, deeply customizable and since everything touches a CPU, it
allows for deep traffic inspection.

The tradeoff is that there's no support from major vendors, but there
are many smaller but very experienced consulting shops that can
integrate any patches and fix any issues that may arise.


What kinds of things are you looking for?

Cheers,
jof

On Tue, Nov 8, 2011 at 3:06 PM, Jones, Barry
bejo...@semprautilities.com wrote:
 Hello all.
 I am potentially looking at firewall products and wanted suggestions as to 
 the easiest firewalls to install, configure and maintain? I have a few small 
 networks (50 nodes at one site, 50-odd at another, and maybe 20 at another). 
 I have worked with Cisco Pix, ASA, Netscreen, and Checkpoint (Nokia), and 
 each have strong and not as strong features for ease of use. Like everyone, 
 I'm resource challenged and need an easy solution to stand up and operate.

 Feel free to ping me offline - and thank you for the assistance.

 
 Barry Jones - CISSP GSNA
 Project Manager II
 Sempra Energy Utilities
 (760) 271-6822

 P please don't print this e-mail unless you really need to.
 





Re: IPv6 Availability on XO

2011-05-28 Thread Jonathan Lassoff
On Mon, May 23, 2011 at 4:39 PM, Ryan Rawdon r...@u13.net wrote:
 I've heard some mixed reports of XO's IPv6 availability - some that they have 
 full deployment/availability, but others like the answer back from our XO 
 reseller that XO does not offer IPv6 on circuits under 45mbit/s.

 What is the experience of NANOG on this matter, particularly with XO 
 connectivity under 45mbit/s?

Interesting. Perhaps they haven't plumbed native v6 throughout their network?

For comparison, I'm currently running some native IPv6 over XO in the
San Francisco Bay Area (homed off of an XO router in Fremont, CA).
The circuit is GigE.

Cheers,
jof



Re: what about 48 bits?

2010-04-04 Thread Jonathan Lassoff
Excerpts from John Peach's message of Sun Apr 04 08:17:28 -0700 2010:
 On Sun, 4 Apr 2010 11:10:56 -0400
 David Andersen d...@cs.cmu.edu wrote:
 
  There are some classical cases of assigning the same MAC address to every 
  machine in a batch, resetting the counter used to number them, etc.;  
  unless shown otherwise, these are likely to be errors, not accidental 
  collisions.
  
-Dave
  
  On Apr 4, 2010, at 10:57 AM, jim deleskie wrote:
  
   I've seen duplicate addresses in the wild in the past, I assume there
   is some amount of reuse, even though they are suppose to be unique.
   
   -jim
   
   On Sun, Apr 4, 2010 at 11:53 AM, A.B. Jr. skan...@gmail.com wrote:
   Hi,
   
   Lots of traffic recently about 64 bits being too short or too long.
   
   What about mac addresses? Aren't they close to exhaustion? Should be. Or 
   it
   is assumed that mac addresses are being widely reused throughout the 
   world?
   All those low cost switches and wifi adapters DO use unique mac 
   addresses?
   
 Sun, for one, used to assign the same MAC address to every NIC in the
 same box.

I could see how that *could* work as long as each interface connected to
a different LAN.

Maybe the NICs shared a single MII/MAC sublayer somehow? I've never
borne witness to this though.


Re: MAC address exhaustion: if the second-to-least significant bit of
the first byte is 0 (the Universally/Locally administered bit; the
least-significant bit is the separate Individual/Group bit), then the
first three bytes of the MAC should correspond to the manufacturer's
Organizationally Unique Identifier (OUI). These are maintained by the
IEEE, and they have a list of who's who here:
http://standards.ieee.org/regauth/oui/index.shtml


I haven't ever programmatically gone through the list, but it looks like
a lot of the space is assigned.
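
As a quick illustration of those bits (pure stdlib; the example address is arbitrary), you can pull the OUI and the two flag bits out of a MAC like so:

```python
def parse_mac(mac):
    """Split a colon-separated MAC into its OUI and flag bits.

    In the first octet, the least-significant bit is the Individual/Group
    (unicast/multicast) bit, and the second-to-least significant bit is
    the Universally/Locally administered bit (0 = globally unique, so the
    first three octets are an IEEE-assigned OUI).
    """
    octets = [int(b, 16) for b in mac.split(":")]
    first = octets[0]
    return {
        "oui": "-".join(f"{b:02X}" for b in octets[:3]),
        "globally_unique": (first & 0b10) == 0,   # U/L bit clear
        "unicast": (first & 0b01) == 0,           # I/G bit clear
    }

print(parse_mac("00:1b:21:aa:bb:cc"))
```

For a globally-unique address like the one above, the OUI field is what you'd look up in the IEEE registry.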

Cheers,
jof



Re: Using private APNIC range in US

2010-03-18 Thread Jonathan Lassoff
Excerpts from Jaren Angerbauer's message of Thu Mar 18 09:22:40 -0700 2010:
 Thanks all for the on / off list responses on this.  I acknowledge I'm
 playing in territory I'm not familiar with, and was a bad idea to jump
 to the conclusion that this range was private.  I made that assumption
 originally because the entire /8 was owned by APNIC, and just figured
 since the registrar owned them, it must have been a private range. :S
 
 It sounds like this range was just recently assigned -- is there any
 document (RFC?) or source I could look through to learn more about
 this, and/or provide evidence to my client?

There's a couple of relevant documents you could refer them to:

IANA's IPv4 Address Space Registry ( 
http://www.iana.org/assignments/ipv4-address-space/ ),
which will show you a listing of which registries and various entities
are assigned /8 chunks of IPv4 space.
There's some interesting names and historical registrations in there
(including 1.0.0.0/8's recent allocation to APNIC)

There's also an RFC, RFC1918 that sets aside some IPv4 space for
private, ad-hoc use.
http://www.faqs.org/rfcs/rfc1918.html

This is also a good lay reference:
http://en.wikipedia.org/wiki/Private_network

Have fun,
jof



Re: news from Google

2009-12-03 Thread Jonathan Lassoff
Excerpts from Charles Wyble's message of Thu Dec 03 10:44:49 -0800 2009:
 8.8.8.8 and 6.6.6.6 would have been really really funny. :) 

Nice IPs from Level 3, huh?

6.6.6.6 belongs to the US Army.

--j



Re: Layer 2 vs. Layer 3 to TOR

2009-11-12 Thread Jonathan Lassoff
Excerpts from David Coulson's message of Thu Nov 12 13:07:35 -0800 2009:
 You could route /32s within your L3 environment, or maybe even leverage 
 something like VPLS - Not sure of any TOR-level switches that MPLS 
 pseudowire a port into a VPLS cloud though.

I was recently looking into this (top-of-rack VPLS PE box). Doesn't seem
to be any obvious options, though the new Juniper MX80 sounds like it
can do this.  It's 2 RU, and looks like it can take a DPC card or comes
in a fixed 48-port GigE variety.

I like the idea of doing IP routing to a top-of-rack or edge device, but
have found others to be skeptical.

Are there any applications that absolutely *have* to sit on the same
LAN/broadcast domain and can't be configured to use unicast or multicast
IP?

--j



Re: San Francisco Power Outage

2007-07-24 Thread Jonathan Lassoff


Well, the fact still remains that operating a datacenter smack-dab in
the center of some of the most inflated real estate in recent history
is quite a costly endeavor.
I really wouldn't be all that surprised if 365 Main cut some corners
here and there behind the scenes to save costs while saving face.

As it is, they don't have remotely enough power to fill that facility
to capacity, and they've suffered some pretty nasty outages in the
recent past. I'm strongly considering the possibility of completely
moving out of there.

--j

On 7/24/07, Patrick Giagnocavo [EMAIL PROTECTED] wrote:



On Jul 24, 2007, at 6:54 PM, Seth Mattinen wrote:

 I have a question: does anyone seriously accept oh, power trouble
 as a reason your servers went offline? Where's the generators? UPS?
 Testing said combination of UPS and generators? What if it was
 important? I honestly find it hard to believe anyone runs a
 facility like that and people actually *pay* for it.


Sad that the little Telcove DC here in Lancaster, PA, that Level3
bought a few months ago, has weekly full-on generator tests where
100% of the load is transferred to the generator, while apparently
large DCs that are charging premium rates, do not.

Cordially

Patrick Giagnocavo
[EMAIL PROTECTED]







--
Jonathan Lassoff
echo thejof | sed 's/^/jof@/;s/$/.com/'
http://thejof.com
GPG: 0xC8579EE5