Re: Network end users to pull down 2 gigabytes a day, continuously?

2007-01-10 Thread Michael . Dillon

 How many channels can you get on your (terrestrial) broadcast receiver?

There are about 30 channels broadcast free-to-air
on digital freeview in the UK. I only have so many
hours in the day so I never have a problem in finding
something. Some people are TV junkies or they only
want some specific content so they get satellite dishes.
Any Internet TV service has a limited market because
it competes head-on with free-to-air and satellite
services. And it is difficult to plug Internet TV into
your existing TV setup.

--Michael Dillon



Re: Network end users to pull down 2 gigabytes a day, continuously?

2007-01-10 Thread Michael . Dillon

   Then why can't they plug in Power, TV & phone line? That's
  where IPTV STBs are going...

OK, I can see that you could use such a set-top box to
sell broadband to households which would not otherwise 
buy Internet services. But that is a niche market.

 Especially as more and more ISPs/telcos hand out WLAN boxen of various
 kinds - after all, once you have some sort of Linux (usually)
 networked appliance in the user's premises, it's quite simple to
 deploy more services (hosted VoIP, IPTV, media centre, connected
 storage, maybe SIP/Asterisk..) on top of that.

He didn't say that his STB had an Ethernet port.
And I'm not aware of any generic Linux box that can
be used to deploy additional services other than
do-it-yourself. And that too is a niche market.

Also, note that the proliferation of boxes, each
needing its own power connection and some place 
to sit, is causing its own problems in the household.
Stacking boxes is not straightforward because some have
air vents on top and others are not flat on top.
The TV people have not learned the lessons
that the hi-fi component people learned back in
the 1960s.

--Michael Dillon



Re: RFQ: IP service in U.K. for U.S. hosting company

2007-01-10 Thread Michael . Dillon

We've determined that the best option for our problem is not 
 peering, but the purchase of service from a provider with a good network 
 in the U.K. and on the continent. We'd like to receive proposals based 
 on this requirement.

Then you should write up an RFP and send it to
companies that meet your requirement. Otherwise you
risk only getting quotes from people who troll the
mailing lists, desperate for business.

--Michael Dillon



Re: Network end users to pull down 2 gigabytes a day, continuously?

2007-01-09 Thread Michael . Dillon

 I remember the times when I could watch mexican tv transmitted from a
 studio in florida.

If it comes from a studio in Florida then it
is AMERICAN TV, not Mexican TV. I believe there
are three national TV networks in the USA, 
which are headquartered in Miami and which 
broadcast in Spanish.

--Michael Dillon



What comes AFTER YouTube?

2007-01-09 Thread Michael . Dillon

 Not only does this type of programming require real-time 
 distribution, as these shows are quite often cheaper to produce than 
 pre-recorded entertainment or documentaries they tend to fill a large 
 portion of the schedule. 

And since there are so many of these reality shows in
existence and the existing broadcast technology seems to
perfectly meet the needs of the show producers,
what is the point of trying to shift these shows
to the Internet?

If it ain't broke, don't fix it!

I do believe that the amount of video content on
the Internet will increase dramatically over the next
few years, just as it has in the past. But I don't
believe that existing video businesses, such as 
TV channels, are going to shift to Internet distribution
other than through specialized services. 

The real driver behind the future increase in
video on the Internet is the falling cost of video
production and the widespread knowledge of how to
create watchable video. Five years ago in high school,
my son was taking a video production course. Where do
you think YouTube gets their content?

In the past, it was broadband to the home, webcams
and P2P that drove the increase in video content, but
the future is not just more of the same. YouTube has
leveraged the increased level of video production skills
in the population but only in a crude way.

Let's put it this way. How much traffic on the net was
a result of dead-tree newspapers converting to Internet
delivery, and how much was due to the brand-new concept 
of blogging?

--Michael Dillon






Re: Network end users to pull down 2 gigabytes a day, continuously?

2007-01-08 Thread Michael . Dillon

 But what happens when 5% of the paying subscribers use 95% of the 
 existing capacity, and then the other 95% of the subscribers complain 
 about poor performance?

Capacity is too vague a word here. If we assume that the P2P 
software can be made to recognize the ISP's architecture and prefer
peers that are topologically nearby, then the issue focuses on the 
ISP's own internal capacity. It should not have a major impact on
the ISP's upstream capacity which involves stuff that is rented 
from others (transit, peering). Also, because P2P traffic has its
sources evenly distributed, it makes a case for cheap local
BGP peering connections, again, to offload traffic from more
expensive upstream transit/peering.
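The peer-preference idea above can be sketched in a few lines. This is a hypothetical illustration, not any real client's algorithm: the AS numbers and the two-tier "peered vs. transit" model are invented for the example.

```python
import random

# Hypothetical view of one ISP's topology: its own AS number and the set
# of ASes reachable over cheap local BGP peering (all values invented).
LOCAL_AS = 64512
PEERED_ASES = {64513, 64514}

def rank_peer(peer_as):
    """Lower rank = cheaper path: on-net first, then local peering, then transit."""
    if peer_as == LOCAL_AS:
        return 0          # traffic stays inside the ISP's own network
    if peer_as in PEERED_ASES:
        return 1          # cheap local BGP peering connection
    return 2              # expensive upstream transit

def choose_peers(candidates, want=4):
    """Pick up to `want` peers, preferring topologically nearby ones."""
    random.shuffle(candidates)                       # break ties randomly
    return sorted(candidates, key=lambda p: rank_peer(p["as"]))[:want]

peers = [{"ip": "10.0.0.1", "as": 64512},
         {"ip": "10.0.0.2", "as": 3356},
         {"ip": "10.0.0.3", "as": 64513},
         {"ip": "10.0.0.4", "as": 64512}]
print([p["as"] for p in choose_peers(peers, want=3)])
```

The ranking function is where an ISP-supplied topology hint would plug in; everything else is ordinary peer selection.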

  What is the real cost to the ISP needing to upgrade the
 network to handle the additional traffic being generated by 5% of the
 subscribers when there isn't spare capacity?

In the case of DSL/Cable providers, I suspect it is mostly in
the Ethernet switches that tie the subscriber lines into the
network.

 The reason why many universities buy rate-shaping devices is dorm users 
 don't restrain their application usage to only off-peak hours, which may 
 or may not be related to sleeping hours.  If peer-to-peer applications 
 restrained their network usage during periods of peak network usage so 
 it didn't result in complaints from other users, it would probably 
 have a better reputation.

I am suggesting that ISP folks should be cooperating with
P2P software developers. Typically, the developers have a very
vague understanding of how the network is structured and are
essentially trying to reverse engineer network capabilities. 
It should not be too difficult to develop P2P clients that
receive topology hints from their local ISPs. If this results
in faster or more reliable/predictable downloads, then users
will choose to use such a client. 

 The Internet is good for narrowcasting, but its
 still working on mass audience events.

Then, perhaps we should not even try to use the Internet
for mass audience events. Is there something wrong with
the current broadcast model? Did TV replace radio? Did
radio replace newspapers?

--Michael Dillon



RE: Network end users to pull down 2 gigabytes a day, continuously?

2007-01-08 Thread Michael . Dillon

 Bring that box to the living room in an attractive package and
 the stats will be very different.

This kind of box is very popular in England. It 
is called a digital TV receiver and it receives
MPEG-2 streams broadcast freely over the airwaves.
Some people, myself included, have a receiver
with a hard disk that allows pausing live TV and 
scheduling recording from the electronic program
guide which is part of the broadcast stream.

Given that the broadcast model for streaming content
is so successful, why would you want to use the
Internet for it? What is the benefit?

--Michael Dillon



Re: Network end users to pull down 2 gigabytes a day, continuously?

2007-01-07 Thread Michael . Dillon

 2.  The question I don't understand is, why stream?  In these days, when 
 a terabyte disk for consumer PCs is about to be introduced, why bother 
 with streaming?  It is so much simpler to download (at faster than 
 real-time rates, if possible), and play it back.

Very good question. The fact is that people have
been doing Internet TV without streaming for years
now. That's why P2P networks use so much bandwidth.
I've used it myself to download Russian TV shows
that are not otherwise available here in England.
Of course the P2P folks aren't just dumping raw DVB
MPEG-2 streams onto the network. They are recompressing
them using more advanced codecs so that they do not
consume unreasonable amounts of bandwidth.

Don't focus on the Venice project. They are just one
of many groups trying to figure out how to make TV
work on the Internet. Consumer ISPs need to do a better
job of communicating to their customers the existence
of GB/month bandwidth caps, the reason for the caps,
how video over IP creates problems, and how to avoid
those problems by using Video services which support
high-compression codecs. If it is DVB, MPEG-2 or MPEG-1
then it is BAD. Stay away.

Look for DIVX, MP4 etc.

Note that video caching systems like P2P networks can
potentially serve video to extremely large numbers of
users while consuming reasonably low levels of upstream
bandwidth. The key is in the caching. One copy of BBC's
Jan 8th evening news is downloaded to your local P2P
network consuming upstream bandwidth. Then local users
use local bandwidth to get copies of that broadcast over
the next few days. 
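The upstream saving from that kind of local caching is easy to put numbers on. The figures below are purely illustrative assumptions (a 350 MB programme, 2000 interested local subscribers), not measurements:

```python
# Back-of-envelope sketch: compare every subscriber fetching a programme
# over the upstream link against one cached copy redistributed locally.
file_mb = 350          # assumed size of one recompressed news programme
viewers = 2000         # assumed number of local subscribers who want it

naive_upstream_mb = file_mb * viewers     # everyone fetches it over transit
cached_upstream_mb = file_mb              # one copy crosses the upstream link
local_mb = file_mb * (viewers - 1)        # the rest moves peer-to-peer locally

savings = 1 - cached_upstream_mb / naive_upstream_mb
print(f"upstream without caching: {naive_upstream_mb} MB")
print(f"upstream with caching:    {cached_upstream_mb} MB")
print(f"upstream saving:          {savings:.2%}")
```

The point of the sketch is only that upstream consumption stops scaling with the audience size once the copy is local.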

For this to work, you need P2P software whose algorithms
are geared to conserving upstream bandwidth. To date, the
software developers do not work in cooperation with ISPs 
and therefore the P2P software is not as ISP-friendly as
it could be. ISPs could change this by contacting P2P
developers. One group that is experimenting with better
algorithms is http://bittyrant.cs.washington.edu/

--Michael Dillon



Re: Network end users to pull down 2 gigabytes a day, continuously?

2007-01-07 Thread Michael . Dillon

  That might be worse for download operators, because people may 
  download an hour of video, and only watch 5 minutes :/
 
  So, from that standpoint, making a video file available for download 
  is wasting order of 90% of the bandwidth used to download it.

Considering that this is supposed to be a technically
oriented list, I am shocked at the level of ignorance
of networking technology displayed here.

Have folks never heard of content-delivery networks,
Akamai, P2P, BitTorrent, EMule?

--Michael Dillon



Re: Network end users to pull down 2 gigabytes a day, continuously?

2007-01-07 Thread Michael . Dillon

  Note that video caching systems like P2P networks can
  potentially serve video to extremely large numbers of
  users while consuming reasonably low levels of upstream
  bandwidth.
 
 The total bandwidth used is the same though, no escaping
 that, someone pays.

This is not true. Increased bandwidth consumption does 
not necessarily cost money on most ISP infrastructure. 
At my home I have a fairly typical ISP service using 
BT's DSL. If I use a P2P network to download files from
other BT DSL users, then it doesn't cost me a penny
more than the basic DSL service. It also doesn't cost
BT any more and it doesn't cost those users any more.
The only time that costs increase is when I download
data from outside of BT's network because the increased
traffic requires larger circuits or more circuits, etc.

The real problem with P2P networks is that they don't 
generally make download decisions based on network
architecture. This is not inherent in the concept of
P2P which means that it can be changed. It is perfectly
possible to use existing P2P protocols in a way that is
kind to an ISP's costs.

 If it was only redistributed locally. Even in that case it's not
 helping much as it still consumes the most expensive bandwidth (for UK
 ADSL). Transit is way cheaper than BT ADSL wholesale, you're saving
 something that's cheap.

I have to admit that I have no idea how BT charges
ISPs for wholesale ADSL. If there is indeed some kind
of metered charging then Internet video will be a big
problem for the business model. 

 Or the caches that are being sold to fudge the protocols to
 keep it local but if you're buying them we could have just
 as easily done http download and let it be cached by existing
 appliances.

The difference with P2P is that caching is built into
the model, therefore 100% of users participate in 
caching. With HTTP, caches are far from universal, 
especially to non-business users.

--Michael Dillon



Re: Network end users to pull down 2 gigabytes a day, continuously?

2007-01-07 Thread Michael . Dillon

 Why would you want to stream in unicast when there are already 
 models for effective multicast content delivery (see Michael's 
 list)? *See point above!*

The word multicast in the above quote does not refer
to the set of protocols called IP multicast. Content
delivery networks (CDNs) like Akamai are also, inherently,
a form of multicasting. So are P2P networks like BitTorrent
and EMule. If this sounds odd to you, perhaps you don't really
understand the basics of either multicast or P2P. Check out
Wikipedia to see what I mean:
http://en.wikipedia.org/wiki/Peer-to-peer
http://en.wikipedia.org/wiki/Multicast

If your data absolutely, positively, must be delivered
simultaneously to multiple destinations, i.e. time is
of the essence, then I agree that P2P and IP multicast
are not comparable. But the context of this discussion
is not NYSE market data feeds, but entertainment video.
The use-cases for entertainment mean that timing is
of little importance. More important are things like
consistency and control.

--Michael Dillon




Re: NATting a whole country?

2007-01-04 Thread Michael . Dillon

  all of Qatar appears on the net as a single IP address.
 
 I wonder what they use the other 241663 addresses for.

Same as you.
To address the many machines and networks in Qatar.
The existence of a NAT gateway to one portion of the 
Internet does not remove the need for registered IP
addresses. They are still needed to avoid addressing
conflicts in the portion of the Internet which is
not behind the gateway.

--Michael Dillon



Re: Phishing and BGP Blackholing

2007-01-04 Thread Michael . Dillon

 For those of us who read nanog from a mobile device, it's incredibly
 annoying to have no content in the first few bytes - a lot of mobile
 e-mail clients (all MS Windows Mobile 5 devices and every Blackberry
 I've seen) pull the first 0.5KB of each message, i.e. the header,
 subject line and the first few lines of text, so the user can decide
 which ones are worth reading in full.

Why should all 1 billion Internet users change
their behavior just because your minority mail-reading
system is broken?

Hint: Procmail is your friend. Set up your own mail 
server and run procmail against all incoming email
with newline-greaterthan in the first 500 bytes. You
can preprocess these messages to do something like
strip headers that you don't read and copy the first
few reply lines to be first in the message. That way
your mobile device will get more bang for the buck
than most other people's.
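The matching half of that procmail rule ("newline-greaterthan in the first 500 bytes") can be sketched in Python; the reordering and header-stripping steps are left out, and the function name is invented for the example:

```python
def needs_rework(raw_body: bytes) -> bool:
    """Rough equivalent of the rule described above: is there a quoted
    line ('>' at the start of the body, or after any newline) somewhere
    in the first 500 bytes?"""
    head = raw_body[:500]
    return head.startswith(b">") or b"\n>" in head

quoted_first = b"> long quoted preamble\n> more quote\nActual reply down here.\n"
plain_first  = b"Answer right at the top, no quoting until much later.\n"
print(needs_rework(quoted_first), needs_rework(plain_first))
```

Messages that match would then be rewritten so the reply text lands in the first half-kilobyte the mobile client fetches.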

Paul Vixie's colo registry may be of help if you need
to find a place to stick your own mail server
http://www.vix.com/personalcolo/

--Michael Dillon



Re: Phishing and BGP Blackholing

2007-01-04 Thread Michael . Dillon

 (All right then, scroll down for content :-))

It is not necessary to quote an entire message
when you are only replying to one specific 
part of it.

 Minority? A mail client has been standard-ish for the last three to
 four years of upgrade iterations. There are a LOT of mobiles out
 there. Granted not many of them are used for e-mail, but that is a
 

One could say that not many is a reasonable
definition of a minority. So, yes, a MINORITY
of users have need for special message formatting.
Why should the other 999 million of us need
to change the way we do things?

 Anyway, I wouldn't write a letter with nothing worth reading on the
 first page. I don't write articles with nothing in the first
 paragraph. 

Nor do I, but there is a well-established tradition
in written English of the preamble. One could say that
a brief quote to set the context of a statement
is perfectly good practice. Of course some people
take it to excess like the ones who wrote this declaration
a couple of hundred or so years ago:

We, therefore, the Representatives of the United States of America, in 
General Congress, Assembled, appealing to the Supreme Judge of the world 
for the rectitude of our intentions, do, in the Name, and by Authority of 
the good People of these Colonies, solemnly publish and declare, That 
these United Colonies are, and of Right ought to be Free and Independent 
States, that they are Absolved from all Allegiance to the British Crown, 
and that all political connection between them and the State of Great 
Britain, is and ought to be totally dissolved; and that as Free and 
Independent States, they have full Power to levy War, conclude Peace, 
contract Alliances, establish Commerce, and to do all other Acts and 
Things which Independent States may of right do.

--Michael Dillon



Re: DNS - connection limit (without any extra hardware)

2007-01-02 Thread Michael . Dillon

 What is this group's name?  Oh yeah.  So that means you have one of 
 two choices ;-)

Smart NANOGers have taken the time to read the NANOG
charter here: http://www.nanog.org/charter.html
which says...

   The purpose of NANOG is to provide forums in the 
   North American region for education and the sharing 
   of knowledge for the Internet operations community. 

--Michael Dillon



Re: Security of National Infrastructure

2007-01-02 Thread Michael . Dillon

 Why is it that every company out there allows connections through their
 firewalls to their web and mail infrastructure from countries that they
 don't even do business in. Shouldn't it be our default to only allow US
 based IP addresses and then allow others as needed? The only case I can
 think of would be traveling folks that need to VPN or something, which
 could be permitted in the Firewall, but WHY WIDE OPEN ACCESS? We still
 seem to be in the wild west, but no-one has the [EMAIL PROTECTED] to be 
 braven and
 block the unnecessary access.
 
 Please don't feed the troll...

All those meandering replies full of jokes,
puns, political comments and smart remarks
do feed the trolls. But a straightforward 
answer is not troll feeding.

The fact is that all those companies out
there are PUBLISHING information on their
web servers. In order to PUBLISH you must 
open access to arbitrary members of the 
PUBLIC. These companies also publish email
addresses and invite people to send them 
email. In order for this email to get through
they have to open their incoming mail servers
to anyone.

This does not mean that their mail infrastructure 
or web infrastructure is wide open. In most cases
only an HTTP load balancer and an incoming-only
SMTP server will be accessible directly.

If anyone knows of a significant number of companies
where this is not the case then I think you have 
found a potential market for some consultancy
services. Rather than whining on NANOG, it would be 
more productive to find a salesperson to help you 
get your foot in the door and fix the problems.

--Michael Dillon


Re: Clueful Comcast.net Contact Needed

2006-12-14 Thread Michael . Dillon

 Can someone clueful from comcast.net contact me offlist please? 
 Getting through the outer defenses is proving difficult. :-(

Do we need some kind of tutorial on how to get
through outer defenses and make contact with
clueful NOC personnel?

The Rockford Files is available on DVD now
if you want some general tips...

--Michael Dillon




Re: Curious question on hop identity...

2006-12-14 Thread Michael . Dillon

 Besides, why do you believe the text in an in-addr.arpa record?  Or why 
 do you think the absence of an in-addr.arpa record is meaningful?

Back in the old days, say 10 years ago, you
could run a network by the seat of your pants
using rules of thumb about interpretation of
in-addr.arpa records. And you could be quite
successful at running a network using such techniques
because everybody else was doing pretty much the
same thing. Because of this uniformity, you could make
a lot of intelligent guesses and resolve problems.

However, I think times have changed, there is no
longer uniformity among the people making technical
decisions about Internet networks and many rules 
of thumb don't work any more even though they are
still out there in network operator folklore.

In fact, most people making network architectural
decisions about Internet networks don't participate
in NANOG any more. Most people making network operational
decisions also do not participate in NANOG anymore.
It's not just that many people have left NANOG behind,
but a lot of newcomers to the industry over the past
few years have not joined NANOG because they don't 
get why it is relevant to them.

Not that I'm complaining about the message quoted above.
It is a great example of the useful information that one
can find in this mailing list. I wish there were more
messages like this one, i.e. people sharing info rather
than complaints and pleas for help.

--Michael Dillon



RE: Bogon Filter - Please check for 77/8 78/8 79/8

2006-12-13 Thread Michael . Dillon

 B) Threaten the bogon list operator with a lawsuit for falsely claiming 
 your addresses are bogons and hope they take the simplest path and fix 
 their list.
 
 This is a pretty classic case of someone inducing other people to rely 
 on the accuracy of their data and then offering incorrect data (not 
 arguably incorrect, manifestly incorrect and most likely negligently so) 
 which those other people then rely on.

It's not just incorrect data. The design of the
system used by completewhois is flawed at the core.
They only know that certain address ranges are
bogons at a certain point in time. If their system
only reported this fact along with the date for
which it is known to be valid, then they would
likely win any lawsuits for incorrect data.

The fact is, that you can only know that an address
range is a bogon at the point in time which you check
it and that it WAS a bogon for some past period. For
most bogons, it is not possible to predict the future
time period during which it will remain a bogon.

Any protocol which does not allow the address range
to be presented along with the LAST TIME IT WAS CHECKED
is simply not suitable for presenting a bogon list.
BGP simply is not suitable for this. HTTP/REST, XML-RPC
or LDAP could be used to make a suitable protocol.
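A feed entry carrying its last-checked time might look like the sketch below. The field names, 24-hour freshness window, and staleness policy are all invented for illustration; the point is only that a consumer can refuse to act on a bogon claim whose verification has gone stale:

```python
from datetime import datetime, timedelta, timezone

# Assumed freshness window: ignore any bogon claim not re-verified
# within the last 24 hours (an arbitrary choice for this sketch).
MAX_AGE = timedelta(hours=24)

def still_trustworthy(entry, now=None):
    """Only honour a bogon claim that was checked recently enough."""
    now = now or datetime.now(timezone.utc)
    return now - entry["last_checked"] <= MAX_AGE

entry = {"prefix": "192.0.2.0/24",
         "status": "bogon",
         "last_checked": datetime.now(timezone.utc) - timedelta(hours=2)}
print(still_trustworthy(entry))   # checked 2 hours ago: honour it
```

A BGP feed has no natural place for the `last_checked` field, which is the argument above for carrying the list over a request/response protocol instead.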

But even better would be to not have any bogons at all.
If IANA and the RIRs would step up to the plate and 
provide an authoritative data source identifying which
address ranges have been issued for use on the Internet
then bogon lists would not be needed at all. And if people
plug their systems into the RIR data feed, then there would
be fewer issues when the RIRs start issuing addresses from
a new block. IANA would be the authoritative source for
stuff like RFC 1918 address ranges and other non-RIR ranges.

One wonders whether it might not be more effective in the
long run to sue ICANN/IANA rather than suing completewhois.com.

--Michael Dillon

P.S. As any lawyer will tell you, it is a good idea to make
some attempt at solving your issue outside of the courts. 
Anyone contemplating a lawsuit against ICANN should probably
try emailing them and writing a few letters first. Since they
are a somewhat democratic structure, it may be possible to
get this fixed without lawsuits.




Re: U.S./Europe connectivity

2006-12-06 Thread Michael . Dillon

  I am doing some work on a network in central Illinois that is 
 currently peering with Sprint and McLeod. They have a number of 
 customers in the U.K. and they want to reduce latency to that part of 
 the world.

Make sure they're not trying to reduce latency below
the speed of light in fibre. Make sure that your client
understands that they will never achieve the same latencies
trans-Atlantic as they achieve within the state. We recently
had to haul back one of our over-eager account managers
who was trying to sell a low-latency solution that was
about 3 times faster than the speed of light in fibre.

BTW, the speed of light in fibre is roughly equal to
the speed of electrons in copper and roughly equal to
two-thirds the speed of light in a vacuum. You just
can't move information faster than about 200,000 km/second.
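That figure makes it easy to sanity-check a vendor's latency quote. The distance below is an approximate great-circle figure for Chicago to London, used only for illustration:

```python
# Light in fibre travels at roughly two-thirds of c, i.e. about
# 200,000 km/s, which is 200 km per millisecond.
C_FIBRE_KM_PER_MS = 200.0

def min_rtt_ms(distance_km: float) -> float:
    """Theoretical round-trip-time floor over a direct fibre path."""
    return 2 * distance_km / C_FIBRE_KM_PER_MS

# Chicago to London is roughly 6,350 km great-circle (approximate).
print(f"{min_rtt_ms(6350):.1f} ms")   # any quote below this is impossible
```

Real paths are longer than great-circle and add equipment delay, so measured RTTs should sit comfortably above this floor; a quote below it is the over-eager account manager at work.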

--Michael Dillon



Re: U.S./Europe connectivity

2006-12-06 Thread Michael . Dillon

 You cannae break the laws of physics, Captain!
 
 Seriously, LINX is the obvious first step.

To find a low latency connection from Chicago to Europe?

Somehow I think that he should be shopping locally but
it might be useful to use the LINX looking-glass
to validate what his local vendors tell him about
round trip times. Or he could use a looking-glass
in Chicago to measure traffic to various European
destinations.

LINX, London
http://www.linx.net/www_public/our_network/network_tools

Equinix, Chicago
http://lg.broadwing.net/looking/

If I were in his position I would make the rounds of
all vendors in Chicago, ask for prices and latency data,
then check their latency numbers using various
looking-glass sites. If a vendor gives out numbers that
vary significantly from what you can measure then I 
would want a detailed explanation of why that is.

--Michael Dillon





Exotic meeting locations in North America

2006-12-05 Thread Michael . Dillon

There really is no need for all NANOG meetings to have the same format.
In fact, if we accept the idea of varying formats, then some of the cost 
issues can be tamed. For instance, one full meeting, one regional 
meeting, and one special-focus meeting per year. The full meeting could 
be the one that is done in conjunction with ARIN in a major center with 
full free networking, beer and gear etc. 

The regional meeting would be in a smaller city with the expectation 
that the majority of attendees are from the local area and don't have 
access to big travel budgets. And the special focus meetings would 
target some specific topic and pick a location to match. Some of the 
regional and special focus meetings would not supply comprehensive free 
Internet access. If Internet access is available people would pay for it 
and expect bandwidth limitations and higher than normal latency. Depends 
on the location.

Here are some exotic locations that could work with a special focus.

Iqaluit, the capital of Nunavut is rather exotic. The native language is 
neither English nor French nor Spanish. It has the issues of remoteness 
and reliance on satellite telecommunications. 

New Orleans has dealt dramatically with disaster recovery and rebuilding 
infrastructure. It is exotic because it is still in the process of 
rebuilding unlike most American cities.

St. John's, Newfoundland - a British colony until 1949 when it joined 
Canada, this is located on a large island, has a history in trans-atlantic 
telecommunications and still has a certain amount of undersea fiber 
connectivity.

Montpelier, Vermont is the smallest state capital in the USA, located in 
the Vermont, New Hampshire, Maine area which is rather more rural than the 
average in the USA as well as being somewhat mountainous terrain.

If you don't count New Orleans before Katrina, I'd guess that well over 
90% of NANOGers have never been to any of these four cities.

Other special focus areas might be:

Government and the Internet, Government and IPv6 - Washington DC.
The Research Community and the Internet - Ann Arbor MI
Network Security from a Military Viewpoint - Sierra Vista AZ near US 
Army's CECOM-ISEC headquarters
Strategic Aspects of Network Security - Harrisburg PA not far from US Army 
War College Strategic Studies Institute in nearby Carlisle

The idea of regional meetings is mainly to have a scaled down NANOG to 
reach a much wider audience that does not have a large conference travel 
budget. This is rather similar to RIPE's meetings in Qatar, Moscow, 
Bahrain, Nairobi and Tallinn.

The idea of special focus meetings is to do something entirely new, 
perhaps redefining the NANOG role and audience in the process. It is clear 
that the traditional NANOG audience is shrinking because the traditional 
Internet provider has been mostly replaced by larger general 
telecommunications providers. The same old topics and same old restricted 
set of participants doesn't have enough future potential to keep NANOG 
running in the long term. Special focus meetings can help bring in new 
blood.

---
Michael Dillon
Capacity Management, 66 Prescot St., London, E1 8HG, UK
Mobile: +44 7900 823 672    Internet: [EMAIL PROTECTED]
Phone: +44 20 7650 9493    Fax: +44 20 7650 9030

http://www.btradianz.com
One Community   One Connection   One Focus



Re: Fwd: The IESG Approved the Expansion of the AS Number Registry

2006-12-05 Thread Michael . Dillon

 Thanks very much for this link (and the summary). I see an interesting 
 (if not surprising) trend in Advertised AS Count. Up until 2001 it was 
 accelerating... and after 2001 its stayed linear. However, unadvertised 
 AS count which was basically stagnant has increased markedly before then.

Those are not unadvertised ASNs. Those are only ASNs
which have been issued but are undetectable by his monitoring
tools. That doesn't mean that they are not advertised, just
that his tool cannot detect them. Given the fact that the
Internet is now thoroughly global with rich interconnectivity
in most regions of the globe, it is hardly surprising that lots
of ASNs do not get advertised globally.

The trend you see is likely caused by rich local interconnectivity
becoming the norm rather than a few circuits from the capital 
city to some big U.S. city.

--Michael Dillon



Re: IP adresss management verification

2006-11-13 Thread Michael . Dillon

 I'm curious on how regional RIR which allocates ip address, verifies 
 the usage pattern info provided by their members in their application 
 process. 

It's quite simple, really.

They ask for it.

If the information that you provided with your application
does not answer their questions, they ask you for more
information. I assume that all the RIRs will sign an
NDA with you, certainly ARIN does this. ARIN may also
ask for corporate confidential information in order to
verify your application so they have strict internal
security policies to keep it confidential. 

Some people send detailed network diagrams, purchase
orders for routers/switches/circuits, sales history
data with projected trends, customer lists, etc.

If you need specific details, just ask your RIR.

--Michael Dillon
 


RE: IP adresss management verification

2006-11-13 Thread Michael . Dillon

  They ask for it.

 Is the policy still that dedicated IP addresses for web
 hosts *should* only be used when technical justification
 exists?  I really wish it would change to a requirement
 as we very frequently get new hosting customers who get
 angry when they find their site that doesn't have SSL
 or any other technical reason for a dedicated IP ends up
 on a shared IP when their old host didn't do that or
 would sell them a dedicated IP for $5/month, etc.

SSL is a technical justification for separate IP
addresses for web hosts. Virtual servers is another
technical justification for assigning multiple IP
addresses to a single physical server.

ISPs should really make it clear to their customers
which features are included in a product and which
ones are not. There is no reason why you could not
have offered a web hosting product with no SSL
capability at one price point and a secure web
hosting product with SSL at another pricepoint.
What is the point in leaving things vague?

And if you look at this template
http://www.arin.net/registration/templates/net-isp.txt
updated in September, it no longer mentions web
hosting.

--Michael Dillon



Re: [c-nsp] [Re: huge amount of weird traffic on poin-to-point ethernet link]

2006-11-10 Thread Michael . Dillon

  The craziest stuff that gets announced isnt in the
  reserved/unallocated realm anyway so the effort seems to be
  disproportional to the benefits... and most issues I read about with
  reserved space is packets coming FROM them not TO them
 
 Steve's 100% spot-on here.  I don't have bogon filters at all and it
 hasn't hurt me in the least.  I think the notion that this is somehow
 a good practice needs to be quashed.

I think there is a terminology problem here. People think
that bogons means bogus routes. From that they infer
that bogus routes should be filtered and use the Cymru feed
because it seems to be a no-brainer.

The problem arises because the Cymru feed only contains 
the low-hanging fruit. It only refers to address ranges
that *might* be bogus and which are easy to identify. 
The problem is that if you pick this fruit, it soon goes
rotten and you end up filtering address ranges which are
in use and almost certainly not bogus.

If there were some way to have a feed of real bogons,
i.e. address prefixes that are *KNOWN* to be bogus at
the point in time they are in the feed, that would be
useful for filtering. And it would likely be a best practice
to use such a feed.

But at the present time, such a feed does not exist.

Also, I think that anyone contemplating creating a new
feed should give some thought to what they are doing.
It would be very useful to have a feed or database which
can assign various attributes to address ranges. When there
is only one possible attribute, bogon, then the meaning 
of the attribute gets stretched and the feed becomes useless.
But if there are many attributes such as
UNALLOCATED, UNASSIGNED, DOS-SOURCE, SPAM-SOURCE,
RIR-REGISTERED then it starts to look interesting.
Some networks might like to filter based on several
attributes, others will just filter those with the 
DOS-SOURCE attribute.

Obviously, it would require lots of cooperation for
some of these such as UNASSIGNED, but perhaps the Internet
needs to move towards more cooperation between network
operators.
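
As a sketch of the proposal, a multi-attribute database and a per-operator filter could look something like this. All prefixes and the exact attribute handling below are invented for illustration; only the attribute names come from the text above:

```python
# Hypothetical multi-attribute prefix database, per the proposal above.
FEED = {
    "192.0.2.0/24":    {"UNALLOCATED"},
    "198.51.100.0/24": {"DOS-SOURCE", "SPAM-SOURCE"},
    "203.0.113.0/24":  {"RIR-REGISTERED"},
}

def build_filter(feed, wanted):
    """Return the prefixes carrying any of the attributes this
    operator has chosen to filter on."""
    return sorted(p for p, attrs in feed.items() if attrs & wanted)

# One network filters only DoS sources; another filters more broadly.
print(build_filter(FEED, {"DOS-SOURCE"}))
print(build_filter(FEED, {"UNALLOCATED", "SPAM-SOURCE"}))
```

An operator subscribing to such a feed would pick its own attribute set, rather than taking a one-size-fits-all bogon list.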

--Michael Dillon




Re: odd hijack

2006-11-10 Thread Michael . Dillon

  My question to the community is,
 what kind of misconfiguration could cause this set of prefixes to be
 announced? 

 11.0.0.0/8
 12.0.0.0/7
 121.0.0.0/8
 122.0.0.0/7
 124.0.0.0/7
 126.0.0.0/8
 128.0.0.0/3
etc ...

This looks to me like some large multinational leaked
their internal announcements to an ISP. It is not unusual
for large companies to use random unregistered /8 blocks
in their internal networks. There are all kinds of 
applications that need to talk across networks which do
not need any Internet connectivity or any direct
connectivity to general use workstations. This network
traffic would normally be hidden inside some kind of
VPN on the same infrastructure as other corporate 
traffic.

So to answer your question, first look for all the ways
that a misconfiguration could allow routing information
to leak out of some flavor of VPN.

--Michael Dillon



Re: [c-nsp] [Re: huge amount of weird traffic on poin-to-point ethernet link]

2006-11-10 Thread Michael . Dillon

 WRT acls, I would suggest any acl is a bad idea and only a dynamic 
 system such as rpf should be used, this is because manual filters 
 that deny bogons has the same issue as BGP filtering in that it can 
 go stale and you drop newly allocated space. 

Your comment implies that ACLs are static and must
be configured manually. In this day and age of automated
systems, that is no longer true. Anyone who wants to can
easily implement dynamic ACLs. They will be slightly less
dynamic than a routing protocol, but ACLs do not have to
be manually configured and do not have to be static.

Of course, on some hardware ACLs have a significant CPU
impact, but that is less of a factor than it used to be.
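
As a sketch of what "dynamic ACLs" can mean in practice: regenerate the filter text from the current feed on every change and push it out, instead of hand-editing router configs. The IOS-style syntax and the feed contents here are illustrative assumptions:

```python
# Regenerate an ACL from a prefix feed; rerun whenever the feed changes.
import ipaddress

def acl_lines(name, prefixes):
    lines = [f"ip access-list extended {name}"]
    for p in prefixes:
        net = ipaddress.ip_network(p)
        # IOS-style ACLs take wildcard (inverse) masks
        lines.append(f" deny ip {net.network_address} {net.hostmask} any")
    lines.append(" permit ip any any")
    return lines

feed = ["192.0.2.0/24", "198.51.100.0/25"]   # hypothetical feed snapshot
print("\n".join(acl_lines("BOGON-IN", feed)))
```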

--Michael Dillon



Re: [c-nsp] [Re: huge amount of weird traffic on poin-to-point ethernet link]

2006-11-10 Thread Michael . Dillon

  If there were some way to have a feed of real bogons,
  i.e. address prefixes that are *KNOWN* to be bogus at
  the point in time they are in the feed, that would be
  useful for filtering. And it would likely be a best practice
  to use such a feed.
 
  But at the present time, such a feed does not exist.
 
 http://www.cymru.com/BGP/bogon-rs.html

That is not a feed of routes that are known to be bogus.
That is a feed of routes that use addresses which have 
not been allocated by IANA to an RIR. There are many 
bogus routes that are not included in the Cymru feed.

For instance,
RIR address ranges that have not yet been allocated
ISP address ranges that have not yet been assigned
Assigned address ranges that are not announced by
the assignee. Address ranges from which a high
percentage of the traffic is SPAM, i.e. a network
owned by spammers.

I am arguing that it is better to start with a database
that allows several attributes, both negative and positive,
to be associated with address ranges. Then build a feed
from that, in fact, allow the user to specify which attributes
they want in their feed. One size fits all just doesn't work.

--Michael Dillon




Re: [c-nsp] [Re: huge amount of weird traffic on poin-to-point ethernet link]

2006-11-10 Thread Michael . Dillon

 how about PORN-SOURCE, COMMUNIST-SOURCE, DEMOCRACY-SOURCE, 
 TERRORIST-SOURCE, RIGHT-WING-CHRISTIAN-SOURCE, 
COURT-ISSUED-LIBEL-CASE-SOURCE
 
 be careful before you open such a pandoras box...

The box was opened a long time ago. In an Internet
context, there are many email blacklists which 
apply various different criteria for inclusion, 
therefore, they are essentially publishing different
attributes. In a social context, freedom of religion
is a long-accepted principle and various religions
publish lists of literature that is either acceptable
or unacceptable. 

If a network operator finds a business case for
supplying service only to right wing organizations
and blocking network traffic from communist sources
then what is wrong with that? The principle of the
Internet is that network operators run private networks
and set their own policies independent of regulators
and governments.

 will this scale?

The fact that the database has multiple attributes
to assign to address ranges makes it more likely
to scale. 

 who will want to use it?

People who find some value in dynamically filtering
Internet traffic based on a trusted source for filters.

 can it be exploited?

Virtually anything can be exploited. Smart network operators
do not hardwire their routers to a 3rd-party BGP feed. Instead
they pull that feed into their operational support systems
where it can raise alarms so that a human being can decide
whether to stop or start filtering a particular range. Or else
they make some kind of 2-party binding contract with SLAs and
penalties such as a transit contract or a peering agreement.

 what sort of liability do you take on by becoming responsible for 
 policing the Internet?

Who said anything about policing the Internet? This is all
about identifying address ranges that source various kinds
of traffic that some network operators do not wish to
transit their networks. Every network operator has an AUP
for their own customers and peers. This merely extends that
to 3rd parties who wish to transit the network.

--Michael Dillon



Re: Urgent need for bandwidth in Chiswick/London

2006-11-03 Thread Michael . Dillon

 Does anyone know of any MAN (or anything else for that matter) options
 at this location?

Yes, somebody does know but they are unlikely to 
be on this list. In general, for any commercial
building in any city, there is a building manager
who takes care of utilities, air conditioning, heating
systems, etc. This person or persons will be aware of
any and all telecommunications circuits into the building
and which companies send technicians to monkey around 
with the comms closet. That should be your first port
of call.

If that person is too hard to get answers from, next
best is to talk to neighbouring commercial buildings
because MANs are built in rings so chances are that 
one of the companies connected to neighbouring buildings
will also be in yours.

Third port of call is to contact the sales departments
of all the companies offering MAN services in your city.
Since they have a chance to win some business, they will
be happy to research their internal records to find out
whether or not they have infrastructure in your building.

Fourth port of call is your other technical contacts
in the city/country in question.

And maybe in the fifth position, or lower, is a general
mailing list like NANOG. 

My question is, did you actually go through all the above
BEFORE posting to NANOG?

--Michael Dillon



Re: register.com down sev0?

2006-10-27 Thread Michael . Dillon

 but i am not foolish enough to believe
 that religious ranting on mailing lists is gonna change anyone from
 doing what makes business sense for their network. 

Indeed!

And it is not going to change the minds of the 
majority of network operations folks who are not
on the NANOG list nor the majority of telecoms
executives who are also not on the NANOG list.

Back in the old days, the NANOG list did hold the
majority of Internet operations folks so new ideas
like flap dampening were able to spread quickly.
But those days are long gone. NANOG still has an
important educational role but it is no longer based
on being part of the old boys club and knowing the
secret handshake. In other words, there is no cohesive
society of network operators which can be swayed
by attempts at social engineering like shaming or
cajoling.

BCP 38 has had its day. Nowadays, it is more important
to look at how to mitigate current DDoS techniques and
to describe the larger problem and look for larger
solutions. However, any attempt at larger solutions 
require a large amount of humility because nobody
can say for sure, what will work and what won't.

The fact remains that there is not a good technical 
method for mitigating large scale distributed DDoS 
that results in LARGE TRAFFIC FLOWS ENTERING A NETWORK
FROM ALL PEERED ASES SIMULTANEOUSLY.

Perhaps if we could find a way to allow the attacked
AS to set ACLs automatically in all the source AS
networks, that would help mitigate these attacks.
For instance, consider a set of ASes which all install
an ACL-setter box. These boxes all trust each other to
send-receive ACL setting requests through a trusted channel.
The owner of a box sets some limits on the ACLs that can 
be set, for instance n ACLs per AS, max ACL lifetime, etc.
And the box owner also decides the subset of their routers
which will accept an ACL for a given address range.
Then when an attack comes in, the victim AS uses some tool
to identify large sources, i.e. a CIDR block that covers 
some significant percentage of the source addresses in 
one AS. They then issue an ACL request to that AS to block
the flow and the ACL takes effect almost instantaneously with
no human intervention.
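
A minimal sketch of the policy check such an ACL-setter box might run before honouring a peer's request. The limit values and data shapes are invented for illustration:

```python
# Policy gate an ACL-setter box could apply to incoming requests.
MAX_ACLS_PER_AS = 10          # "n ACLs per AS"
MAX_LIFETIME_SECONDS = 3600   # "max ACL lifetime"

active = {}  # requesting AS number -> prefixes currently filtered for it

def accept_request(requesting_as, prefix, lifetime):
    """Accept the request only if it stays within locally set limits."""
    if lifetime > MAX_LIFETIME_SECONDS:
        return False
    if len(active.get(requesting_as, [])) >= MAX_ACLS_PER_AS:
        return False
    active.setdefault(requesting_as, []).append(prefix)
    return True
```

The trusted channel, the authentication, and the choice of which routers actually take the ACL would all sit around this check.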

Yes, this can result in some IP addresses being blocked 
unfairly, but the DDoS traffic levels often have the same
impact. In any case, the AS holding the destination address
is the one doing the blocking even though the mechanism
is an ACL inside the source AS network.

On the technical side, it is not a complex problem to put
such a system in place. The complexity is largely in getting
network operators to come to an agreement on the terms
under which operator A will allow operator B to set ACLs
in operator A's network. Until network operators see DDoS
as a significant business problem, this will not happen.
Note that a business problem does not refer solely to
the direct costs of mitigating a DDoS attack. It also includes
the indirect fallout which is harder to measure such as
loss of goodwill, missed opportunities, etc.

--Michael Dillon



Re: BCP38 thread 93,871,738,435 + SPF

2006-10-27 Thread Michael . Dillon

 How is this attack avoided?

Sounds like the attack is inherent in SPF. In that case,
avoiding it is simple. Discourage the use of SPF, perhaps
by putting any SPF using domain into a blacklist.
Eventually, people will stop using SPF and the attack
vector goes away.

--Michael Dillon



Re: Extreme Slowness

2006-10-27 Thread Michael . Dillon

 Which begs the same question I've asked in the recent past: then
 what *is* a good diagnostic tool?  If ICMP is not the best way to
 test, then what is?  What other globally-implemented layer 3 or
 below protocols do we have available for troubleshooting?
 
 Sure, UDP-based traceroute still relies on ICMP TTL exceeded
 responses to work.  I've no idea what TCP traceroute relies on,
 as I haven't looked at it.

I love it when people answer their own questions
and tell us that they are lazy, to boot.

For the record, TCP traceroute and similar TCP based
tools rely on the fact that if you send a TCP SYN 
packet to a host it will respond with either a
TCP RST (if the port is NOT listening) or a TCP
SYN/ACK. The round trip time of this provides useful
information which is unaffected by any ICMP chicanery
on the part of routers or firewalls. A polite application
such as TCP traceroute will reply to the SYN/ACK with
an RST packet so it is reasonably safe to use this tool
with live services.

Of course, even TCP packets can be blocked or dropped
for various reasons so this is not a 100% solution.
However, if you want to avoid ICMP filtering or low
precedence, then TCP traceroute will help.
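
The decision a TCP-based probe makes on the reply can be sketched like this. Actually emitting the SYN needs raw sockets or an existing tool; only the classification logic described above is shown:

```python
# Classify the reply to a TCP SYN probe, as described above.
SYN, RST, ACK = 0x02, 0x04, 0x10   # TCP header flag bits

def classify_reply(flags):
    if flags & RST:
        return "port closed (RST)"
    if flags & SYN and flags & ACK:
        return "port open (SYN/ACK)"
    return "no conclusive reply"

# Closed ports commonly answer RST/ACK, open ones SYN/ACK.
print(classify_reply(RST | ACK))
print(classify_reply(SYN | ACK))
```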

--Michael Dillon



Re: BCP38 thread 93,871,738,435 + SPF

2006-10-27 Thread Michael . Dillon

   How is this attack avoided?
 
  Sounds like the attack is inherent in SPF. In that case,
 
 how did the thread about dns providers and rfc compliance morph into SPF
 and spam discussions?

Ask Doug Otis. He stated that SPF sets the stage for DDoS 
attacks against DNS servers. Presumably he said this because
it points to another *COST* of DDoS that could be used as 
a business justification to implement BCP38.

Or you could look at it as a weakness of SPF that should be
used as a justification for discouraging its use. After all
if we discourage botnets because they are DDoS enablers, 
shouldn't we discourage other DDoS enablers like SPF?

--Michael Dillon



re: passports for NANOG-39, Toronto

2006-10-26 Thread Michael . Dillon

 http://travel.state.gov/travel/tips/regional/regional_1170.html

 December 31, 2007 - Passport required for all land border crossings, as
 well as air and sea travel. 

If someone wants to go but does not have a passport for
whatever reason, i.e. last minute travel plans, then it
is possible to fly to Buffalo NY and make a land crossing
from there, i.e. bus or rental car. If you do want to take
a rental car across the border, you have to notify your
rental company so they can issue a non-resident insurance
card for you. As long as you have a US driver's licence this
is fairly routine. Cross the bridge to Canada and take 
the QEW all the way to Toronto.

http://www.buffaloairport.com/

You could do the same fly-drive via Detroit but there is
a lot more driving.

--Michael Dillon

P.S. Now that you have your shiny new passports, don't 
just stop at Canada. There's a whole world out there.



RE: Collocation Access

2006-10-24 Thread Michael . Dillon

   I'm not exactly sure why these sites want to retain ID, but I think it
 goes along with the big weight that is connected to the gas station
 bathroom key.  They want to make sure you return your cabinet keys
 (if any), temporary pass (if any), etc.  Legal risk or not, can you
 think of a better way to get someone to return to the security desk
 to sign out?  Until then, these sites will continue this practice.

a) cash deposit
b) heavy weight attached to cabinet keys and temporary pass
c) bulky object attached to cabinet keys and temporary pass

In high school, our data centre keys were attached to a few
links of chain bolted onto a chunk of 2 x 4. I never mislaid them.

I remember at least one place where I received a plastic card key
similarly attached to a few links of chain welded to a broken
wrench. Why couldn't ID cards be treated the same way?

For that matter, in these days of RFID badges, why can't colo
centers issue magic wands, 3 foot long rods tipped with an
embedded RFID tag? They would not fit in pockets or briefcases 
etc. They would function identically to the RFID tags embedded
in credit-card sized plastic but they would never get lost.

Perhaps what we have here is another failure of imagination 
like the one cited in the 9/11 report.

--Michael Dillon




Re: Broadband ISPs taxed for generating light energy

2006-10-11 Thread Michael . Dillon

 A Cisco ZX GBIC produces a max of 4.77 dBm (or less than 4mw).  4mw 
 corresponds to 35 watt hours in one year.

0.035 kWh per year costs about 0.35 cents per year
using the average US electricity cost in March 2006
of 9.86 cents/kWh.

Since the energy flow could be bidirectional,
one of the two parties receives a net benefit
of up to 0.35 cents.
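
The arithmetic can be checked directly, assuming the transmitter runs continuously all year:

```python
# 4 mW of laser output, running 24x365, priced at the March 2006
# average US electricity cost of 9.86 cents/kWh.
power_w = 0.004
hours_per_year = 365 * 24            # 8760
kwh_per_year = power_w * hours_per_year / 1000
cents_per_year = kwh_per_year * 9.86
print(f"{kwh_per_year * 1000:.1f} Wh/year, {cents_per_year:.2f} cents/year")
```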

If a broadband provider offers customers 
a free gift such as a hat, does this make them
into a hat retailer for tax purposes?

--Michael Dillon



Re: that 4byte ASN you were considering...

2006-10-10 Thread Michael . Dillon

  - 'Canonical representation of 4-byte AS numbers '
 
http://www.ietf.org/internet-drafts/draft-michaelson-4byte-as-representation-01.txt
 
 
 
 and what is good or bad about this representation?  seems simple to me. 
   and having one notation seems reasonable.  what am i missing?

It breaks any applications which recognize IP address-like 
objects by seeing a dot in an otherwise numeric token.
For the purposes of parsing a string into internal 
representation, an application can treat IP addresses,
netmasks and inverse masks identically.

We all know that the Internet is awash in homegrown scripts
written in PERL or TCL or bash or Ruby or Python. It is likely
that many authors have, in the past 15 years, written scripts
which contain regular expressions like [0123456789.]* to
match a string containing only digits and the period. Those
scripts will be confused by this AS number notation. Also,
any script which recognizes IP address-like objects when
it hits the first period in a numeric string.
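
The failure mode is easy to demonstrate. Here is the heuristic from the paragraph above (a digits-and-dots token containing at least one dot) applied to both notations:

```python
# A naive "IP address-like object" check of the kind found in old
# homegrown scripts: all digits and dots, with at least one dot.
import re

def looks_like_ip_object(token):
    return bool(re.fullmatch(r"[0123456789.]+", token)) and "." in token

assert looks_like_ip_object("193.0.1.49")   # a real IP address
assert looks_like_ip_object("1.0")          # dotted ASN: misfires
assert not looks_like_ip_object("65536")    # plain decimal: unambiguous
```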

The real question is what does the notation 1.0 add that the
notation 65536 does not provide?

All I can see is that it adds the risk of broken scripts and 
the confusion of AS numbers that look like decimal numbers.
If the IETF had really wanted to create a universal notation
then they should have recommended that AS numbers be
represented in the form AS65536 which is completely
unambiguous.

When IP addresses were created, it was important to indicate
the boundaries between the network number and the host address.
Originally, the periods represented this boundary for the
three classes of IP address, class A, class B and class C.
Long ago, we removed this classfulness attribute, but the
notation remains because lots of applications expect this
notation. So why on earth are we changing AS number notation
today?

--Michael Dillon



Re: that 4byte ASN you were considering...

2006-10-10 Thread Michael . Dillon

 Well, it will break an applications that considers everything
 consisting of numbers and dots to be an IP address/netmask/inverse
 mask.  I don't think many applications do this, as they will then
 treat the typo 193.0.1. as an IP address. 

An application using [0123456789.]* will not break when it
sees the above typo. 193.0.1. *IS* an IP address-like object
and any existing code will likely report it as mistyped
IP address or mask. 

 It won't break applications
 that check if there are exactly 4 numbers in the 0-255 range and 3 dots.

True, however my point is that I do not believe that all
existing applications do this. Therefore, changing their 
input in an unexpected way will break them.

 The real question is what does the notation 1.0 add that the
 notation 65536 does not provide?
 
 It is (for me, and I guess most other humans) much easier to read and
remember, just as 193.0.1.49 is easier to read and remember than
3238002993.  It also reflects that on the wire there are two 16-bit
numbers, rather than one 32-bit number.

In my experience, ISPs do not transmit numbers by phone calls
and paper documents. They use emails and web pages which allow
cut'n'paste to avoid all transcription errors. And I know of no
earthly reason why a general written representation needs to
represent the format of bits on the wire. How many people
know or care whether their computer is big-endian
or little-endian?

 1. If you are a 16-bit AS speaker (ASN16), then AS65536 is not just
 the next one in the line, it is an AS that will have to be treated
 differently.  The code has to recognize it and replace it by the
 transition mechanism AS.

And how is a special notation superior to 

  if asnum > 65535 then
      process_big_as
  else
      process_little_as

In any case, people wishing to treat big asnums differently will need
to write new code so the dot notation provides them zero benefit.

 2. Just as people having used the regexps that you mentioned, I'm
 also certain that people have used unsigned short int's or
 signed long int's in their code.

Typically ISPs are using apps written in higher level languages
which are more likely to treat integers as 32-bit signed quantities.
In any case, this is a length issue, not an issue of notation.

 In short, like it or not, you will have to check and update your tools
 anyway.

My point is that if we do NOT introduce a special notation
for ASnums greater than 65536, then tools only need to be 
checked, not updated. If your tool was written by someone
who left the company 7 years ago then you might want to
do such checking by simply testing it with large AS numbers,
not by inspecting the code. The dot notation requires that
somebody goes in and updates/fixes all these old tools.

--Michael Dillon



Re: Broadband ISPs taxed for generating light energy

2006-10-10 Thread Michael . Dillon

 In the process of data transmission, other than light energy, no
 other elements are involved and the customers are paying for the
 same. This proves that light energy constitutes goods, which is
 liable for levy of tax. Therefore, the State has every legal
 competence and jurisdiction to tax it, the department has contended.

Sounds reasonable to me. Since the sale of energy is 
usually measured in kilowatt-hours, how many kwh of
energy is transmitted across the average optical fibre
before it reaches the powered amplifier in the destination
switch/router?

I'd like to see some hard numbers on this.

The light shining down optical fibres is laser light.
There exist medical devices which are powered by laser
light shining through the tissues. There are also some
types of satellite devices which can receive power from
ground-based laser beams. The crux of this issue is the
actual measurement of power transmitted which will turn
out to be very small.

--Michael Dillon



Re: Anyone from state of illinois or ATT on list?

2006-09-28 Thread Michael . Dillon

 One of the state of illinois' servers is attempting to breach 
 security in our network and their arin abuse contact doesnt exist.

Then contact their upstream network provider. 

If someone doesn't publish an abuse contact in
the normal way, it generally means that there is
nobody competent available to be contacted.
 
You don't really know that it is a state of Illinois
server. All you can tell from network diagnostic
tools is that the traffic appears to originate
from ISP X who provides Internet access to the
state of Illinois. Make them responsible for
their traffic.

--Michael Dillon



Re: Armed Forces Information Service.

2006-09-28 Thread Michael . Dillon

   Could someone responsible for the armed forces information service
 please contact me off list.  Thanks.

Which AS are you referring to?
Why didn't you mention the AS in your posting to the list?

--Michael Dillon



Re: Topicality perceptions

2006-09-25 Thread Michael . Dillon

 One of the biggest issues with the list as I've seen from time to 
 time from my perspective, is the definition of operations. So on a
 quick breakdown of the logical definition of NANOG, I derive 
 Operations of the North American Network. The problem with this 
 stems from far too many bastardizing their own definition of what it
 should be.

Please don't contribute to the bastardization. Section 3 of
the NANOG charter states:

   The purpose of NANOG is to provide forums in the North 
   American region for education and the sharing of knowledge 
   for the Internet operations community. 

You can read the full charter here: http://www.nanog.org/charter.html

By your definition, Cat's recent request for outage
information about Telehouse North would be off-topic.
But according to the NANOG FAQ here:
http://www.nanog.org/listfaq.html
outages are on topic. Obviously, network infrastructure
tends to span political borders and geographic borders,
therefore it is not unusual that Cat has an infrastructure
issue in Europe to deal with.

On your first point, the fuzziness and lack of clarity
of what network operations issues belong on this list,
I agree. The FAQ is never posted on the list so it has
become an obscure document hidden away on a little-used
website. It needs to be promoted more and I think it 
needs to be updated to communicate more clearly.

 These are off-topic but I wouldn't trade em for the world. 
 I've learned much from them, as have I from all sorts of posts on 
 topic or not. 

I agree with you. Unfortunately some old-timers 
would rather see a return to the old days when network
ops and engineering was an obscure pastime only understood
by those who knew the secret handshake and were admitted
to the inner circle. They forget that NANOG's major role
has been in educating the new people who have flooded into
the net ops community as the Internet grew and grew and grew.

--Michael Dillon


Re: icmp rpf

2006-09-25 Thread Michael . Dillon

 The non-announcers, because they're also breaking PMTUD.

If you're not sure what benefits PMTUD gives, 
you might want to review this page:
http://www.psc.edu/~mathis/MTU/index.html

--Michael Dillon



Re: tech support being flooded due to IE 0day

2006-09-22 Thread Michael . Dillon

 i've
 assumed that the hardcore bgp engineering community now meets elsewhere.

Or perhaps BGP engineering hasn't changed in so many years
that it is now more than adequately covered by books,
certificate courses, and internal sharing of expertise.
Lists are good for things that are new or confusing or
difficult. BGP no longer fits into those categories.

 (c) the flames completely outweigh gadi's own original posts,

Words of wisdom. I was wondering when someone
would point this out.

 and (d) some of
 the folks lurking here actually tell me that they benefit from gadi's 
stuff.

And, no doubt, they tell Gadi too which is why he 
continues to post on this list and does not seem to
be wounded by the flaming arrows sent his way.

 ISC Training!  October 16-20, 2006, in the San Francisco Bay Area,
 covering topics from DNS to DHCP.  Email [EMAIL PROTECTED]

Now that is on topic. Maybe we need more advertising
on the list to make people happy?

--Michael Dillon



Have you really got clue?

2006-09-22 Thread Michael . Dillon

 that, and a thread where half of the posts are from the
 initial poster himself anyway. but then, happily watching
 him, at least he is creative in topics... i am mentally
 killfilling his threads anyway, less and less relevant.
 it is scary what stuff is discussed lately.
 
 -ako

OK, Alexander Koch. You apparently have clue and you
apparently know what *IS* on topic for this mailing
list. Instead of posting an off-topic message like
the one above, kindly post a message listing *ALL*
of the topics that belong on this list. 

And if anyone else here thinks they know what is
on topic, please tell us.

I am getting bored by the flood of negative messages
that say only "You can't say that here." Please stop
telling us what you cannot say on NANOG. If you really
must register your discontent with a message, then 
at least take the time to list some of the topics that
belong on the list.

What is NANOG all about? What is relevant to network
operations? Is NANOG a narrowly focused technical list
for a small group of technical specialists? Or is it
some kind of broader industry-focused list that covers
many issues relevant to the industry?

--Michael Dillon



Re: tech support being flooded due to IE 0day

2006-09-22 Thread Michael . Dillon

 To the people who say we throw in the towel and just say "Gadi will
 never stop posting off-topic crap, so why bother trying to correct
 him?", I'd suggest that this is a self-defeating attitude. Not only
 because Gadi could actually be posting useful stuff if set on the
 right path as to what is appropriate and what is not, but because
 10,000 other people are going to be reading that post and thinking
 that this is appropriate subject matter. One off-topic post you can
 delete, but an entire list which has been co-opted by off-topic
 material can not be fixed.

I agree with you 100%. Please give us your list of *ALL* 
the topics that you think are appropriate for this list.

--Michael Dillon

P.S. Note that I do not agree that anyone has yet tried
to correct Gadi. All I have seen is bellyaching on a
personal level, i.e. person A does not like person B's message.
To set everyone on the right path we need a description
of the path itself.



Re: IPv6 PI block is announced - update your filters 2620:0000::/23

2006-09-18 Thread Michael . Dillon

 Yes, please, let's have that flamewar all over again... Or you could
 just read one or more of the previous flamewars and spare us another
 round. Here's a starting point:

The problem with this suggestion is that it doesn't
have an end-point. If someone would summarize both 
the pros and the cons of bogon filtering on a page
at http://nanog.cluepon.net then it would be reasonable
to say that the poster should go elsewhere for their
information.

Until people actually populate that wiki with useful
information, we just have to accept that things will
be rehashed and rehashed on this list.

--Michael Dillon



RE: Watch your replies (was Kremen....)

2006-09-14 Thread Michael . Dillon

  Perhaps the list should be turned into a wiki; 

 I might just to watch the hilarity.  Is there any real interest in this?

Do we want another wiki to compete with http://nanog.cluepon.net ?

Mediawiki is a good idea, but proliferation is not so good.
Also, if you want to contribute, why not write up a page
or two for the existing wiki?

--Michael Dillon



Re: renumbering IPv6

2006-09-14 Thread Michael . Dillon

 The 8xx system is the one which maps to domain names,
 not the standard land-line system.

In the United States, due to number portability regulations,
the standard land-line phone numbers also map to domain
names because they are no longer used for routing calls.
In the UK, mobile phone numbers also map to domain names
because of regulations that allow you to switch mobile
network operators and maintain your phone number.
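
One standardised form of such a mapping is ENUM (RFC 3761), which turns an E.164 phone number into a domain under e164.arpa; whether a given operator's portability database uses exactly this form is an assumption here:

```python
def enum_domain(e164_number):
    """Map an E.164 number to its ENUM domain per RFC 3761: keep the
    digits, reverse them, dot-separate them, and append e164.arpa."""
    digits = [c for c in e164_number if c.isdigit()]
    return ".".join(reversed(digits)) + ".e164.arpa"

print(enum_domain("+44 20 7946 0000"))
# -> 0.0.0.0.6.4.9.7.0.2.4.4.e164.arpa
```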

 Perhaps a customer who wanted to make IP addresses
 portable would pay a fee to the ISP whose addresses
 they are, and maintain redirection equipment to the
 real IPs...  And perhaps the price of doing so would
 actually be higher than just keeping a T1 to that
 first provider... 

There are people who are proposing a mechanism like
that in order to do a new type of multihoming in 
IPv6.

http://www.ietf.org/html.charters/multi6-charter.html

--Michael Dillon



Cyber Storm Findings

2006-09-14 Thread Michael . Dillon

A quote from the DHS's recently released report about their Cyberstorm 
exercise in Feb:
http://www.dhs.gov/interweb/assetlibrary/prep_cyberstormreport_sep06.pdf

Finding 3: Correlation of Multiple Incidents between Public and Private 
Sectors. Correlation of multiple incidents across multiple infrastructures 
and between the public and private sectors remains a major challenge. The 
cyber incident response community was generally effective in addressing 
single threats/attacks, and to some extent multiple threats/attacks. 
However, most incidents were treated as individual and discrete events. 
Players were challenged when attempting to develop an integrated 
situational awareness picture and cohesive impact assessment across 
sectors and attack vectors.

And a question:
Do network operators have something to learn from these DHS activities
or do we have best practices that the DHS should be copying?

--Michael Dillon



Re: Commodity (was RE: [Fwd: Kremen ...])

2006-09-13 Thread Michael . Dillon

  Since IP addresses are tightly tied to the network
  architecture, how can they ever be liquid?
 
 How are PI addresses tightly tied to network architecture?

What percentage of the total IPv4 address
space is PI? If non-PI addresses are not
property then how do PI addresses gain that
attribute?

--Michael Dillon

P.S. PI addresses get configured into devices just
the same as non-PI addresses. If you could sell a PI
block then you would be faced with the prospect of
renumbering all those devices. DHCP makes end-user
devices pretty easy, but devices in the NETWORK
ARCHITECTURE pose more of a problem. In addition there
are some people who use IP addresses encoded in 
hardware in a non-mutable fashion. Those people will
apply for PI allocations which, on average, makes
PI addresses more tied to the hardware than non-PI.

But the important points are not the ones mentioned
in this postscript.




Re: Commodity (was RE: [Fwd: Kremen ...])

2006-09-13 Thread Michael . Dillon

 Erm, Uranium *is* a commodity. Last week's spot price was
 $52 a pound for U3O8. It's a small market in terms of numbers
 of players but it's still an open market in the economic sense.
 102 million pounds were traded in 2004. Hedge funds are players
 in the uranium market (source: www.uxc.com, home page)

I don't know where you got that figure, but the website
you reference states that in 2005 only 35 million pounds
were traded in 107 transactions. I think most people will
agree that any item for which only 107 transactions are
concluded in a year is not terribly liquid. 

According to this 
http://www.cbot.com/cbot/pub/cont_detail/0,3206,1248+21215,00.html
in Chicago alone, counting only trades of the 100 oz. unit
size, there were 15,544 contracts. Add to that the fact
that you can buy and sell gold in any major bank in any
major city, as well as in most large jewellery stores, and
you have a very liquid commodity indeed.

--Michael Dillon



RE: Kremen's Buddy?

2006-09-13 Thread Michael . Dillon

 It seems to me that this nicely illustrates a major problem with the
 current system.  Here we have large blocks of IP space that, by their
 own rules, ARIN should take back.  It all sounds nice on paper, but
 clearly there is a hole in the system whereby ARIN doesn't know and
 apparently has no way of figuring out that the space is no longer in
 use. 

Or maybe it means that ARIN has priorities and recovering
this space is low on the priority list. Anyway, you are
wrong. ARIN does have a way of figuring out that the space
is no longer in use. When some sucker buys the addresses and
tries to use them, they will find out that they must first
update ARIN's records. And when they do that, ARIN will learn
about the deal. At that point, they have to justify their address
space just like anyone else, and only get to keep the amount
of address space which they can justify.

The fact that there are few suckers around to buy these addresses
means that these blocks have been kicking around for a long time.
But if there is ever a crunch for IPv4 address space, you can bet
that ARIN members will empower ARIN to act unilaterally and take
back the space.

 but the way things currently work it seems like if you can
 justify a block today, it's yours forever even if you stop actively
 using it.

You haven't read through ARIN's policies yet, have you?

--Michael Dillon



Re: Kremen's Buddy?

2006-09-13 Thread Michael . Dillon

 The fact that there is a lot of space assigned/allocated and not used 
 in any easily observable way is well known to those who track the 
 address exhaustion issue, I think.

The fact that addresses are not used in an observable way does
not imply that the addresses are not used at all. It simply means
that the observation techniques used are not perfect.

--Michael Dillon



Watch your replies (was Kremen....)

2006-09-13 Thread Michael . Dillon

 It's insulting
 when you trim the message to a shorter statement that you are
 responding to.  The other 18 lines may not have been important to this
 particular response but they were not content free.

If your content was in any way interesting, then people
will have read it in the message that you posted. I see
no need to repeat a bunch of irrelevant text when I am
only replying to one point in your email.

Personally, I wish more people would trim away all the
irrelevant junk when replying.

On the other hand, in the corporate world I find that
the habit of top posting is very useful to me. I often
see things that were never intended to be sent to me
and I often discover that the previous replies in a thread
betray the fact that the writer did not read or did not
understand the original message. 

But on a mailing list, trimmed replies are superior.

--Michael Dillon

P.S. Are the standards of this list so unclear that
Darcy and I have to discuss this? Who is right?




Re: [Fwd: RE: Kremen VS Arin Antitrust Lawsuit - Anyone have feedback?]

2006-09-12 Thread Michael . Dillon

  The reason that ARIN allocations are not property is
  that pre-ARIN allocations were not property. ARIN is
  merely continuing the former process with more structure
  and public oversight. Are telephone numbers property?

 IP addresses appear to be property - - read http://news.findlaw.com/
 hdocs/docs/cyberlaw/kremencohen72503opn.pdf.  Given that domain names
 are property, IP addresses should be property, especially in
 California where our constitution states All things of value are
 property

1. I searched that PDF and it says nothing whatsoever
   about IP addresses, therefore your statement above
   is not true.

2. The court didn't simply say that domain names are property
   like anything else; it said that some of the laws regarding
   property apply to domain names, but others do not.

3. Domain names are delegated to people who pay money to
   register a domain name for their exclusive use, for as
   long as they maintain their renewal payments.

4. IP addresses are assigned to organizations who have a 
   JUSTIFIED technical requirement for those addresses in
   their network. Most addresses can only be used as long
   as they remain connected to the same upstream network.
   PI addresses can only be used as long as they are
   technically justified. If an organization sells their
   network or shuts it down, then they can no longer keep
   their IP addresses and there are hundreds of cases where
   those addresses have gone back to ARIN to be allocated
   to other networks.

 Why is it that they involve lawyers,
 ask you all your customers names and etc... This is more information 
than 
 I think they should be requiring. Any company that wishes to engage in
 business as an ISP or provider in some capacity should be granted the
 right to their own ip space

Look at this page: http://www.arin.net/cgi-bin/member_list.pl
Every one of those organizations has disclosed to ARIN
all their customer names, etc... That is the way things
are done. If you don't want to play ball like the rest
of us, then you are not going to get IP addresses. That's
the simple truth. We have a level playing field and you
are asking for special privileges that other organizations
don't feel are necessary.

--Michael Dillon



Commodity (was RE: [Fwd: Kremen ...])

2006-09-12 Thread Michael . Dillon

 You make an incorrect assumption - that IP addresses are currently free
 (they are not, in either money or time) and that commoditizing them will
 increase their cost (there is significant evidence it will not). 

You seem to think that commoditizing IP addresses will 
reduce their cost. Commoditization is changing an 
illiquid resource into a liquid resource, i.e. one
that can readily be converted to cash in an open market.
Since IP addresses are tightly tied to the network
architecture, how can they ever be liquid? If they
cannot become liquid then they can never be a commodity
in the first place.

For example, let's compare gold and uranium. Both metals
are very valuable. Gold can be bought and sold at any
time on an open market. It is a commodity. But uranium is
not as liquid. There are few buyers and sellers. Trades
happen too infrequently to establish an open market. There
are restrictions on possession and transport of the material.
In the end, uranium is not a commodity and is not liquid.
IP addresses are more like uranium than gold.

--Michael Dillon



RE: [Fwd: Kremen VS Arin Antitrust Lawsuit - Anyone have feedback?]

2006-09-11 Thread Michael . Dillon

 3) What's wrong with treating assignments like property and setting 
 up a market to buy and sell them? There's plenty of precedent for this: 
 
  Mineral rights, mining claims, Oil and gas leases, radio spectrum. 

Before you start making inferences from an analogy,
you had better be sure that you have the right analogy.
IP addresses are not like any of the things that you
mention. They are like phone numbers, which are also
not property and are also managed by a central admin
function, NANPA.

--Michael Dillon


 


Re: [Fwd: Kremen VS Arin Antitrust Lawsuit - Anyone have feedback?]

2006-09-11 Thread Michael . Dillon

  Your statement about preferential treatment is factually
  incorrect. Larger ARIN members do not get larger allocations.
  It is the larger network infrastructures that get the larger
  allocations which is not directly tied to the size of the
  company. Yes, larger companies often have larger infrastructures.
 
 And that's the point: A company that is established gets preferential 
 treatment over one that is not; that is called a barrier to entry by the 
 anti-trust crowd.

You need to understand the basics of networking to see
that this is NOT preferential treatment but is instead
even-handed treatment. You see, a network is a collection
of devices interconnected with circuits. Each point where
a circuit connects to a device is called an interface.
Devices may have more than one interface. Typically, the
devices used by network operators have many interfaces.
IP addresses are numbers used to uniquely identify such
interfaces and the Internet Protocol (IP) requires that
these numbers be assigned in a structured manner. 

It is then obvious that larger networks have more interfaces
and therefore can TECHNICALLY justify more addresses. This 
is even-handed treatment even though small companies end up
with fewer addresses than large companies.

You may feel that such a barrier is justified and 
 fair, but those on the other side of it (or more importantly, their 
 lawyers) are likely to disagree.

Yes, lawyers do not understand networks. No doubt some of
them will read the above text and begin to get a glimmer
of understanding.

 Of course it's directly connected; all you have to do is look at the 
 current fee schedule and you'll see:
 
 /24 = $4.88/IP
 /23 = $2.44/IP

That is completely untrue. ARIN's web page here
http://www.arin.net/billing/fee_schedule.html
says nothing of the sort. In fact, ARIN's annual
fees are structured so that organizations which
have a larger transaction volume pay a larger
fee. These transactions could be IP address applications
or SWIP transactions or in-addr.arpa traffic.
The size categories are just a rough rule of 
thumb for categorizing organizations that has
been accepted by the ARIN members themselves.

 So, just between the two ends of the fee schedule, we have a difference 
 of _two orders of magnitude_ in how much an registrant pays divided by 
 how much address space they get.

Large organizations get their allocations bit
by bit, applying for 3-6 months' requirements
at a time. Small organizations may have only
a single allocation.

 Besides the above, Kremen also points out that larger prefixes are more 
 likely to be routed, therefore refusing to grant larger prefixes (which 
 aren't justified, in ARIN's view) is another barrier to entry.  Again, 
 since the folks deciding these policies are, by and large, folks who are 
 already major players in the market, it's easy to put an anticompetitive 
 slant on that.

Routability decisions are not made by ARIN. If anyone
is unhappy with routability they should be suing those
organizations which recommend route filtering. But they
would have to prove that the route filtering is not 
technically justified which will be difficult when all
the expert witnesses are on the other side.

--Michael Dillon



Re: [Fwd: Kremen VS Arin Antitrust Lawsuit - Anyone have feedback?]

2006-09-11 Thread Michael . Dillon

 Since the public policy meetings and mailing lists where 
 consensus is judged
 are open to any interested party, it is very hard to view this as an 
 anti-competitive act in my
 opinion.

Kremen filed the suit on April 12, 2006. That is the 
last day of the ARIN public meeting in Montreal. I was
at the Montreal meeting and Kremen never appeared 
publicly there to question ARIN's actions. It makes me
think that he did not make a reasonable attempt to
resolve the situation out of court.

--Michael Dillon




RE: Kremen VS Arin Antitrust Lawsuit - Anyone have feedback?

2006-09-11 Thread Michael . Dillon

 Even if you assume that allocations made by ARIN are not property, it's
 hard to argue that pre-ARIN allocations are not. They're not subject to
 revocation and their grant wasn't conditioned on compliance with policies.

The reason that ARIN allocations are not property is
that pre-ARIN allocations were not property. ARIN is
merely continuing the former process with more structure
and public oversight. Are telephone numbers property?

In any case, since the conditions of the pre-ARIN allocations
were all informal, unrecorded and largely verbal, nobody
can prove that there was any kind of irrevocable grant.

--Michael Dillon



Re: [Fwd: Kremen VS Arin Antitrust Lawsuit - Anyone have feedback?]

2006-09-08 Thread Michael . Dillon

 I am looking for anyone who has input on possibly the largest case
 regarding internet numbering ever. This lawsuit may change the way
 IPs are governed and administered. Comments on or off list please.

My personal opinion is that this is yet another
example of ignorance leading to anger leading to
a stupid waste of court time. The case is filled with
incorrect statements of fact which ARIN can easily
demolish. But at the bottom line, these people are
complaining because ARIN didn't let them use some
IP addresses that were assigned to a different company.

Since IP addresses are basically available free from
any ISP who sells Internet access services, this seems
like a severe error in judgement on the part of the
plaintiff. A smart businessperson would have used the
free IP addresses to keep their business online even if
they did decide to dispute ARIN's decision.

But in the end, IP addresses are not property, therefore
they cannot be assets and cannot be transferred. They can
only be kept if they are in use on network assets which are
transferred and which continue to be operational. And even
then, most people have no choice as to which specific 
address block they use. They simply take what the ISP gives
them.

I personally suspect that ARIN will have this thrown out 
of court in fairly short order. Even if it did go much
further, the parallels with NANPA would see it fade away
quite quickly. 

This discussion really belongs on http://www.groklaw.net/
where I note it has not yet appeared. Perhaps another 
indication that this is a tempest in a teapot.

--Michael Dillon



Re: [Fwd: Kremen VS Arin Antitrust Lawsuit - Anyone have feedback?]

2006-09-08 Thread Michael . Dillon

  The debate there will be around the 
 preferential treatment that larger ARIN members get (in terms of larger 
 allocations, lower per address fees, etc), which Kremen construes as 
 being anticompetitive via creating artificial barriers to entry.  That 
 may end up being changed.

Your statement about preferential treatment is factually
incorrect. Larger ARIN members do not get larger allocations.
It is the larger network infrastructures that get the larger
allocations which is not directly tied to the size of the
company. Yes, larger companies often have larger infrastructures.
However, ARIN gives the addresses based primarily on the 
number of hosts attached to the network infrastructure.
It has been argued in the past that ARIN's policies are
prejudiced AGAINST larger organizations because the rules
do not properly allow for the scaling overhead necessary
due to the complexity of larger networks.

As for fees, there are no per-address fees and there 
never have been. When we created ARIN, we paid special
attention to this point because we did not want to create
the erroneous impression that people were buying IP
addresses. The fees are related to the amount of effort
required to service an organization and that is not
directly connected to the number of addresses.

--Michael Dillon
(no longer in any official ARIN capacity. Just another member)



RE: Router / Protocol Problem

2006-09-07 Thread Michael . Dillon

 Apparently some how this connection is being
 matched via NBAR for good old Code Red.

 Best moved to cisco-nsp.

What!?
Network operator discovers that measures taken to mitigate
an old network security threat, long past their sell-by
date, are now causing random grief. Seems to me like
bang on topic for NANOG. What other such temporary mitigating
measures are still in place long after the danger has passed?

Note that Code Red was both an application vulnerability
and a network DDoS. Even though there are likely still many
hosts running the vulnerable application, the number is not
sufficient to cause another massive DDoS, and measures taken
to protect against this particular peculiar DDoS really
don't have a good technical reason to remain in place.

This is probably also another instance of the well-known
ops problem: We know how to get stuff deployed but we
can't undeploy stuff because we are too busy deploying
other stuff.

--Michael Dillon



Re: Spain was offline

2006-09-04 Thread Michael . Dillon

  I can't get a TLD zone? But back to the root servers. Are you
  agreeing with me that if I announce F and I root's netblocks
  inside of my own network that everyone would be ok with that?

 Who is responsible if this set-up fails?
 
 Who is responsible if it lies?
 
 Who is likely to get blamed for any failures?
 
 Would this require explicit consent from all customers 
 subject to such treatment?
 
 Would this require a possibility for each customer to opt out
 of such a scheme?

Aren't all of these questions private issues between
the private network operator and their customers? 
The same thing applies to companies who use IP addresses
inside their private networks that are officially
registered to someone else. This is a fairly common
practice and yet it rarely causes problems on the 
public Internet.

Since Internet network operators are generally not regulated
in how they operate their IP networks, it seems to me that
the people who say that it is not proper to announce root
netblocks in a private network are really calling for network
regulation by an external authority.

 And - ah yes - what particular problem does such a set-up solve?

It seemed to me to be a theoretical question not intended
to solve a particular problem. However, theoretically, a
network that sources a lot of DDoS traffic to root servers
could do this to attract the traffic to their local copy
of the root server in order to analyze it. Theoretically,
this is something that would be enabled by the hypothetical
situation described above.

--Michael Dillon



Media reports (was: Spain was offline)

2006-09-01 Thread Michael . Dillon

 [EMAIL PROTECTED] host www.red.es
 www.red.es is an alias for web.red.es.
 web.red.es has address 194.69.254.50
 
 No idea what happened, and I don't read Spanish,

According to red.es, they believe that a possible
hardware failure caused a file to be corrupted 
during the update. When this was discovered, they
shifted to a backup file.

Details here for those who do read Spanish:
http://www.noticias.info/asp/aspComunicados.asp?nid=214866&src=0

--Michael Dillon





Re: Spain was offline

2006-08-31 Thread Michael . Dillon

 Do you know how to contact your network provider without looking up
 e.g. www.example.com (network provider web site)?  IS TCPWRAPPER 
 configured to lookup names before allowing an operator login on a
 critical server?  Do you know your name servers IP addresses?  If
 the PSTN phone numbers don't work, do you have a INOC-DBA phone?  If
 the INOC-DBA phone numbers don't work, do you have a PSTN phone number?

Do you have your own mirrors of TLDs that are 
important to your users, e.g. .com, your .xx
country domain, etc.?

--Michael Dillon



Re: Spain was offline

2006-08-31 Thread Michael . Dillon

 You seem to be suggesting that ISPs run stealth slaves for these 
 kinds of zones. 

Not really. In today's world such simplistic solutions 
don't work.

 For zones that are being made available on anycast servers, ISPs may 
 be able to lobby/pay the zone operator to install an anycast instance 
 in their network. However, in general, the days of ISPs being able to 
 set these things up on their own and see benefit from them are past, 
 in my opinion.

I believe that there are still some things that ISPs can
do which cannot simply be bought on the market. For instance,
most ISPs run simple caching servers for their DNS queries
where they keep any responses for a short time before deleting
them. It's so simple that it is built into DNS relays as an
option. 

An ISP could run a modified DNS relay that replicates all
responses to a special cache server which does not time out
the responses and which is only used to answer queries when
specified domains are unreachable on the Internet.

For instance, if you specified that all .es responses were
to be replicated to the cache and that your DNS relay should
divert queries to the cache when .es nameservers are *ALL* 
unreachable, then the impact of this type of outage is greatly
reduced. You could specify important TLDs to be cached this way
as well as important domains like google.com and yahoo.com.
The actual data cached would only be data that *YOUR* customers
are querying anyway. In fact, you could specify that any domain
which receives greater than x number of queries per day should
be cached in this way.
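A rough sketch of that long-lived fallback cache. Here `resolve`
stands in for whatever live lookup the DNS relay normally performs,
and the suffix list is purely illustrative:

```python
class StaleCache:
    """Long-lived fallback cache for DNS answers, as described
    above. Answers for names under the configured suffixes are
    kept with no expiry and served only when the live lookup
    fails entirely."""

    def __init__(self, resolve, suffixes=(".es", "google.com", "yahoo.com")):
        self.resolve = resolve          # the normal live lookup function
        self.suffixes = tuple(suffixes)
        self.stale = {}                 # name -> last good answer, never expired

    def lookup(self, name):
        try:
            answer = self.resolve(name)
        except Exception:
            # All live nameservers unreachable: serve the last
            # good answer, if we ever saw one for this name.
            if name in self.stale:
                return self.stale[name]
            raise
        if name.endswith(self.suffixes):
            self.stale[name] = answer
        return answer
```

A production version would hang this off the relay's response path
rather than the query path, but the principle is the same: the only
data retained is data your own customers were querying anyway.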

The volume of data cached would be so small in today's terms that
it only needs a low-end 1U (or single-blade) server to handle 
this.

Since nothing like this exists on the market, the only way
for ISPs to do this is to roll their own. Of course, it is
likely that eventually someone will productize this and then
you simply buy the box and plug it in. But for now, this is the
type of thing that an ISP has to set up on their own.

--Michael Dillon




Re: IP failover/migration question.

2006-07-05 Thread Michael . Dillon

 It's actually a rather frustrating
 situation for people who aren't big enough to justify a /19 and an
 AS#, but require geographically dispersed locations answering on the
 same IP(s).

If the number of IPs you require is small, then you can
probably solve the problem with IPv4 anycasting. Several
people have built out distributed anycast networks but 
the problem is that they think IPv4 anycast is a DNS thing.
Therefore they don't sell anycast hosting services to
people like you who need it.

Of course, if you made them more aware of market
demand, this could change.

--Michael Dillon



Re: Fanless x86 Server Recommendations

2006-07-05 Thread Michael . Dillon

  We're looking to acquire a couple small servers that can act as routers for
  us at remote locations.
 
You may want to check out soekris. (www.soekris.com)

This type of server is far more common nowadays
than it was when Soekris launched their business.
A Google search will lead you to dozens of fanless
servers built around a VIA EPIA mini-ITX board or
one of AMD's Geode chips.

--Michael Dillon



Re: Fanless x86 Server Recommendations

2006-07-05 Thread Michael . Dillon

...but the fanless chips are 
 not always as fanless as you might like. I've seen a number of them come 
 back well fried.

Fanless doesn't just mean no fans to break down.
It also means well-ventilated installation required.

Maybe you can find a datacenter with so many hot
bladeservers that they can't fill their racks. Then
you could ask them to give you 8U cheap at the bottom
of a rack and mount your servers vertically for
maximal airflow. ;-)

--Michael Dillon

P.S. on the other hand, if there is enough demand for
fanless server installations, maybe some datacenters
will begin to offer vertically vented space at the
bottom of their racks...



Re: Who wants to be in charge of the Internet today?

2006-06-23 Thread Michael . Dillon

 The Business Roundtable, composed of the CEOs of 160 large U.S. companies,
 said neither the government nor the private sector has a coordinated plan
 to respond to an attack, natural disaster or other disruption of the
 Internet. While individual government agencies and companies have their
 own emergency plans in place, little coordination exists between the
 groups, according to the study.

I don't believe that this is entirely true. I think that
there is a lot of coordination between companies at an
industry level, for instance the automotive industry or
the financial services industry. This coordination doesn't
get much visibility outside of the industry concerned
but that doesn't mean that it isn't there. In fact, I
strongly suspect that visibility of this coordination
does not often reach the CEO level in these companies
because much of the coordination is between specialist
groups within the companies. Does your CEO know that
you participate in NANOG?

One might even venture to suggest that there is no
point in coordinating emergency plans between companies
who have little or no direct business relationships
unless it is at a metropolitan level, i.e. New York
area businesses, Los Angeles area businesses. After 
all, why should NY businesses plan for earthquakes
and why should LA plan for a hurricane?

--Michael Dillon



Re: DNSSEC in Plain English

2006-06-15 Thread Michael . Dillon

 but it ain't the crypto.  never has been.  and it is not always
 easy to explain math in plain english.  so let's focus on where
 work needs to be done.

You and I are in violent agreement. The problem is
in understanding whether or not the crypto under the
hood really does provide a TRUSTABLE system. And that
is more to do with policies and procedures. This is
the stuff that I don't see explained in plain English
so that the decision makers who rely on DNS can
make a decision on DNSSEC.

Ed Lewis pointed out two presentations which
he claims have no crypto. However, his own
presentation at APRICOT is laced with technical
jargon, including crypto: hierarchy
of public keys, DNSSEC data, hash of the DNSKEY,
certificates, and so on. This is fine for a
technical audience but it won't help explain the
issue to the decision makers who spend the money.

I understand how the crypto works to the extent
that I believe it is technically possible for
something like DNSSEC to work. However, I don't
see an explanation of the policies and procedures
that convinces me that DNSSEC really does work.
The history of crypto-based security is filled
with flawed implementations.

--Michael Dillon





Re: wrt joao damas' DLV talk on wednesday

2006-06-14 Thread Michael . Dillon

 actually, in a brilliant demonstration of fair use of copyrighted
 lyrics, paul was quoting directly from the song about alice's
 restaurant.  well, actually, despite saying so, it's not much about
 the restaurant at all.  and the restaurant is not called alice's
 restaurant, that's just the name of the song.

And this whole discussion about DNSSEC and DLV 
reminds me of a bunch of 8 x 10 glossy photographs
with the circles and arrows and a paragraph on
the back of each one. Just another case of
American blind justice I suppose.

Has anyone ever considered trying to come up
with a way that these crypto projects could be
explained in plain English? I think a lot of
the problem with adoption of DNSSEC stems from
the fact that most people who might make a decision
to use it, haven't got a clue what it is, how it
works, or whether it even works at all. And it's
not their fault that they don't understand. It's
the fault of a technical community that likes to
cloak its discussions in TLAs and twisted jargon.

--Michael Dillon




Re: Tracing network procedure for stolen computers

2006-06-13 Thread Michael . Dillon

 Earlier this month my daughter's iBook was stolen, oh well that is life I
 guess.
 Anyway updated mail server software for full debug and IP log since noticed
 that mail account was accessed yesterday.

It's a UNIX machine. You own it. You know
the password. If you had only set up an 
SSH server on it, you would now be able to
log in and collect additional information
about the current user.

Interesting things can happen when intelligent
devices find themselves stolen...
http://www.evanwashere.com/StolenSidekick/

--Michael Dillon



Re: Extreme Networks BD 6808 errors -- help to interpret.

2006-06-12 Thread Michael . Dillon

 I have probably missed something, perhaps unwritten policy, and for that I
 am sorry. I will not repeat my mistake.

Please DO CONTINUE to discuss this on the list.
Ignore all those messages of complaint. The only
complaints that matter are those of the Mailing
List Administrators whose names are listed here:
http://www.nanog.org/listadmins.html

The people who were complaining to you are not
serious about network operations. They just want
to keep it as a private club where only people
who know the secret handshake can apply.

However, in the 21st century, stable and reliable
network operations are vital to the global economy.
This means that we MUST openly discuss issues that
arise in order to jointly solve the problems and
to educate all parties involved, vendors, researchers
and operators.

It's OK to step on some toes and offend a few people.
This is a rough and tumble business where you need
to have a thick skin to survive. Perhaps the problem
is that the COMPLAINANTS do not have a thick enough
skin.

--Michael Dillon



Re: wrt joao damas' DLV talk on wednesday

2006-06-12 Thread Michael . Dillon

 you were attending nanog without registering and paying?  that is
 rude.  have you offered to pay retroactively?  that would be the
 honorable thing to do.
 todd underwood +1 603 643 9300 x101
 renesys corporation  chief of operations  security

There was a similar comment from another
Renesys employee on nanog-futures. Is it
possible that this is some sort of commercial
dispute between two companies, Renesys and ISC,
who are not network operators, but who offer services
to network operators?

In any case, it doesn't seem to be on topic for
the NANOG list. If Renesys really doesn't like
ISC, why don't you sue them instead of whining on
this list?

--Michael Dillon



Re: IP failover/migration question.

2006-06-12 Thread Michael . Dillon

 clear understanding as to what is involved in terms of moving the IPs,
 and how fast it can potentially be done.

I don't believe there is any way to get the IPs
moved in any kind of reasonable time frame for
an application that needs this level of failover
support.

If I were you I would focus my attention on
maintaining two live connections, one to each
data centre. If you can change the client software,
then the clients could simply open two sockets, one for
traffic and one for keepalives. If the traffic
destination datacentre fails, your backend magic
starts up the failover datacentre and the traffic
then flows over the keepalive socket.

And if you can't change the clients, you can do
much the same by using two tunnels of some sort:
MPLS LSPs, multicast dual-feed, or GRE tunnels.
The Chicago Mercantile Exchange has published
a network guide that covers similar use cases.
In the case of market data, they generally run
both links with duplicate data and the client
chooses whichever packets arrive first. Since
market data applications can win or lose millions
of dollars per hour, they are the most time-sensitive
applications on the planet.
http://www.cme.com/files/NetworkingGuide.pdf
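The two-socket approach described above can be sketched in a few lines. This is a minimal illustration only, with local socket pairs standing in for the two data centre connections; the class name and failover policy are invented for the example:

```python
import socket

class DualPathClient:
    """Sketch of a client keeping two live connections, one per data
    centre: a traffic path and a keepalive path that is promoted when
    the traffic path dies."""

    def __init__(self, primary: socket.socket, secondary: socket.socket):
        self.primary = primary      # normal traffic path
        self.secondary = secondary  # keepalive path, promoted on failure

    def send(self, payload: bytes) -> str:
        # Try the primary path; on any socket error, fail over to the
        # keepalive connection and send there instead.
        try:
            self.primary.sendall(payload)
            return "primary"
        except OSError:
            self.secondary.sendall(payload)
            return "secondary"

# Demo: local socket pairs stand in for the two data centres.
a1, b1 = socket.socketpair()
a2, b2 = socket.socketpair()
client = DualPathClient(a1, a2)
assert client.send(b"tick") == "primary"
a1.close()                      # simulate the primary path failing
assert client.send(b"tick") == "secondary"
assert b2.recv(4) == b"tick"    # data arrived via the failover path
```

The point, as in the market-data case, is that the failover decision stays entirely in the client's hands rather than waiting on routing convergence.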

 When I desire to migrate hosts to the failover site, B would send a
 BGP update advertizing  that the redundant link should become
 preferred,

There is your biggest timing problem, which is
also effectively out of your control. By maintaining
two live connections over two separate paths to
two separate data centers, you have more control
over when to switch and how quickly to switch.

--Michael Dillon



Re: wrt joao damas' DLV talk on wednesday

2006-06-12 Thread Michael . Dillon

 attending nanog wasn't an option.  i hadn't realized that sitting in on
 joao's talk so i could be there for Q&A equalled attendance, and so i
 neither paid nor offered retroactively to pay.

Sounds to me like your intent was to be a Speaker

  do you really think i
 should?  (i asked everybody i met on site, and was universally told by
 those i asked to stop worrying about it.)

If Merit had simply given you a Speaker's Badge
then all this tempestuous teapot wouldn't
have dribbled a single drop.

--Michael Dillon



Re: 2006.06.06 NANOG-NOTES IDC power and cooling panel

2006-06-08 Thread Michael . Dillon

 Dan Golding, 30 seconds, what would each person like
 to see the folks on other side do to help.

Did anybody mention cogeneration? At least some
of that waste heat is turned into electricity 
which reduces the total consumption off the
grid.
http://www.polarpowerinc.com/products/generators/cogenset.htm

And what about risks? As you increase the active
cooling of your IDC, you reduce the ability to 
survive power outages. This is another reason
to separate hot customers from cool. In the event
of a power outage, your cool customers who have
better heat engineering do not have to share fate
with the lazy hot customers. Here I am referring to
server customers who do have choices to use more
efficient hardware, improve the power efficiency
of their software, and do things like server
virtualisation to reduce the CPU count in your IDC.
The embedded systems industry has plenty of experience
in reducing power consumption through both hardware
and software improvements.

--Michael Dillon



Re: Phantom packet loss is being shown when using pathping in connection with asynchronous routing - although there is no real loss.

2006-06-07 Thread Michael . Dillon

 The only part that I don't get is that you can mtr to him without
 packetloss.  Although the path in-between may be different, the final
 hop packetloss should exactly equal what he sees when mtring you.  A
 round-trip is a round-trip, and results should be identical regardless
 of who originates.  I can't think of any way this would be different
 unless echo and echo-reply were being rate limited independently.

If the tests were run at different times, the measured packet
loss could differ. Perhaps the customer runs his tests during
his busy period, when he is concerned about avoiding delay.
Then, later in the day, after his busy period is over, he takes
the time to contact his ISP. The ISP then runs some tests which
show no packet loss at all. To rule this out, synchronize the
tests and run them simultaneously.

Try tcptraceroute, because it more accurately reflects
the traffic that is actually flowing.
http://michael.toren.net/code/tcptraceroute/

http://tracetcp.sourceforge.net/ is a windows tool
that is similar.

The open source tool LFT can be built to run on Windows
under Cygwin (http://pwhois.org/lft/) but they have this
warning on their page:

   Many people have complained about various problems on 
   the Windows platform. Both LFT and the WhoB client 
   compile and run well under Cygwin environments on 
   Windows. Unfortunately, Microsoft's changes to the 
   Windows IP stack (as of XP Service Pack 2) reduced 
   their raw socket functionality significantly as part 
   of their security bolstering process. These changes 
   have effectively stopped LFT from working properly 
   while using TCP. LFT's UDP tracing and other advanced 
   features still work properly. For more information on 
   Windows raw sockets, consult 
 
www.microsoft.com/technet/prodtechnol/winxppro/maintain/sp2netwk.mspx#EIAA 


This may have nothing to do with your MTR issue but it
does make one wonder whether a Windows machine is suitable
for performance testing. In any case, the LFT people
think that their non-TCP features still work properly
on Windows and this is a tool that you can also run
on your end. Worth a try?

--Michael Dillon



Re: Zebra/linux device production networking?

2006-06-07 Thread Michael . Dillon

 First, a little background..
 My CTO made my stomach curdle today when he announced that he wanted to
 do away with all our cisco [routers] and instead use Linux/zebra boxen.
 We are a small company, so naturally penny pinching is the primary
 motivation.

It is primarily small companies that use zebra or Quagga or 
openbgpd or Xorp or the Click Modular Router project.
There is more than one choice so do your research.
The main drawback of all of these is that you cannot
get PCI-bus cards that support some common circuit
types and the PCI bus cannot handle switching high
traffic volumes. Many people build and sell routers
based on a PC server running UNIX. They work fine
if they are not stretched beyond their intended role.
Cisco routers are the same. Look at the limitations
of the 2500/2600 series for instance.

Some URLs of interest:
http://www.read.cs.ucla.edu/click/
http://www.xorp.org/
http://www.openbgpd.org/
http://www.quagga.net/
http://www.zebra.org/

 Has there been any discussion (or musings) of moving towards such a 
 solution? I've seen a lot of articles talking about it, but I've not 
 actually seen many network operators chiming in.

This tends to be a list focused on the cult of
the BIG IRON, namely Cisco and Juniper. The people
who use PC-based routers have their own hangouts.
My main piece of advice is to seek out those hangouts
and ask your questions there.

 Here's the article that started it all (this was featured on /., so 
 likely you've read it already).

Sorry, haven't seen these.

--Michael Dillon


RE: Zebra/linux device production networking?

2006-06-07 Thread Michael . Dillon

 I would be interested to know how many software (for want of a better
 description) routers are in live production in this kind of environment
 i.e. the 99.% Uptime variety, from speaking to people albeit
 randomly in data centres it would seem to be more common than one might
 expect.

It is indeed very common. That is why there are several
implementations of BGP and routing software available.
These are used in dozens and dozens of commercial products
some of which are sold as IP routers, plain and simple.

In any case, 5 nines and 6 nines are not always what the
marketing department claims. They often exclude planned
maintenance periods so if you reboot once a week or you
have a crash after changing a config, that doesn't count
against the 5 nines. In addition, the 5 nines figure
generally applies to the network, not to individual devices
within it. Networks can be designed so that the failure
of a device does not cause a network outage.
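The arithmetic behind those availability figures is worth spelling out. A quick sketch of the downtime budget each "N nines" figure implies per year:

```python
# Back-of-envelope downtime budgets implied by "N nines" availability.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_minutes_per_year(nines: int) -> float:
    availability = 1 - 10 ** -nines   # e.g. 5 nines -> 0.99999
    return MINUTES_PER_YEAR * (1 - availability)

for n in (3, 4, 5, 6):
    print(n, "nines:", round(downtime_minutes_per_year(n), 2), "min/year")

# Five nines allows roughly 5.3 minutes of downtime per year, so a
# single reboot after a config change can blow the whole annual
# budget of one box -- which is why the figure is usually quoted for
# the network as a whole, not for individual devices.
```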

This whole issue is so complex that you just can't
make blanket recommendations. Even the biggest networks
don't just buy and deploy big iron. They run every new
router model and software release through an extensive
battery of tests. Then they write operational guidelines
telling people which features can be used in which
situations. They do this to avoid crashes and network
outages because the big iron (Cisco/Juniper) simply
cannot provide that on its own.

A smart small company can get excellent results from
Linux routers (although I would take a serious look 
at FreeBSD or OpenBSD for this). Process is as important
as hardware.

--Michael Dillon



Re: Layer3 down?

2006-06-06 Thread Michael . Dillon

 From my IT department:
 
 It seems that the internet is having issues at a Layer3 Communications
 router, this normally would not be a problem but L3 runs some routing
 for the internet backbone. The techs at L3 are working on the problem
 and we do not have an eta as to when it will be back up, this situation
 is completely out of our control. You may experience short network
 outages or severe latency.

Thanks for posting my morning smile.

:-)

--Michael Dillon


Re: Are botnets relevant to NANOG?

2006-05-30 Thread Michael . Dillon

 for this community, would trend analysis with the best of who is getting
 better and the worst of who is getting worse and some baseline counts be
 enough for this group to understand if the problem is getting better.

Your 5-day numbers were very reminiscent of the
weekly CIDR report. I think that if you clean it
up for weekly submission then that would be 
useful to some. 

For instance, you only published data for two
categories of ASN. Where is the tier-1 data?
And numbers should cover a 7-day period, not
5 days. In addition, for each category you should
provide a fixed cutoff. The CIDR report shows
the top 30 ASNs. 

I think that the ideal would be a table 
including ASN category as one column and showing
the top 50 ASNs. In addition, you should attempt
to separate the dynamic addresses by some means 
or other and either add that as a separate column
or else do a separate table. Since this would
be posted weekly over a long period of time, it
is best to put some thought into how to structure
it so it remains relevant. 

Also, provide a URL where researchers can download
more complete datasets, not just top 50.
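The table suggested above is easy to prototype. A minimal sketch, where the field names, categories, and counts are all invented for illustration:

```python
from collections import namedtuple

# Hypothetical per-ASN bot counts; the field names are illustrative.
Row = namedtuple("Row", "asn category bots dynamic_bots")

def top_n_report(rows, n=50):
    """Sort ASNs by bot count and apply a fixed cutoff, the way the
    weekly CIDR report keeps its top 30."""
    ranked = sorted(rows, key=lambda r: r.bots, reverse=True)
    return ranked[:n]

sample = [
    Row("AS65001", "tier-1", 420, 300),
    Row("AS65002", "regional", 977, 610),
    Row("AS65003", "stub", 120, 90),
]
# Print the report with the category column alongside the counts.
for r in top_n_report(sample, n=2):
    print(f"{r.asn:8} {r.category:9} {r.bots:5} {r.dynamic_bots:5}")
```

The fixed cutoff and stable column layout matter precisely because the posting would recur weekly: readers can diff one week against the next.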

 I am suggesting that NANOG is an appropriate forum to publish general 
 stats on who the problem is getting better/worse for and possibly why 
 things got better/worse.

I think few people will complain about a weekly
posting of this nature.

--Michael Dillon




Re: Are botnets relevant to NANOG?

2006-05-30 Thread Michael . Dillon

 The motive is unclear because attacking,
 for example, root servers, is an effort without some obvious economic
 incentive

Since when is advertising NOT a sign of
an obvious economic incentive?

 The DA report went through a large thread(s) to post statistics here
 and I'm not sure why yours will be any better, or, just another set
 of statistics which further de-sensitizes everyone to the problem. 

Stats by themselves can be boring. But making data
available regularly and publicly will inevitably
lead to some people doing analyses of this data and
presenting those analyses to NANOG meetings. This will
lead to wider understanding of what is going on and
will provide raw material for getting management support
for actions to solve the problem.

--Michael Dillon



Are botnets relevant to NANOG?

2006-05-26 Thread Michael . Dillon

In recent discussions about botnets, some people maintained
that botnets (and viruses and worms) are really not a relevant
topic for NANOG discussion and are not something that we
should be worried about. I think that the CSI and FBI would 
disagree with that.

In a press release announcing the last CSI/FBI survey
http://www.gocsi.com/press/20050714.jhtml
the following statement appears:

Highlights of the 2005 Computer Crime and Security Survey include:

  - The total dollar amount of financial losses resulting from 
security breaches is decreasing, with an average loss of 
$204,000 per respondent, down 61 percent from last year's 
average loss of $526,000. 
  - Virus attacks continue as the source of the greatest 
financial losses, accounting for 32 percent of the 
overall losses reported. 
  - Unauthorized access showed a dramatic increase and 
replaced denial of service as the second most significant 
contributor to computer crime losses, accounting for 
24 percent of overall reported losses, and showing 
a significant increase in average dollar loss. 

So where do botnets come in? First of all, botnets are
used to distribute viruses, the largest source of 
financial losses. Second, botnets are built on what
the CSI calls unauthorised access, the second largest
source of loss. And denial of service, which used to 
be the 2nd largest, is also something that botnets do.

Now NANOG members cannot change OS security, they can't
change corporate security practices, but they can have 
an impact on botnets because this is where the nefarious
activity meets the network.

Therefore, I conclude that discussions of botnets do 
belong on the NANOG list as long as the NANOG list is
not used as a primary venue for discussing them.

One thing that surveys, such as the CSI/FBI Security
Survey, cannot do well is to measure the impact of 
botnet researchers and the people who attempt to shut
down botnets. It's similar to the fight against terrorism.
I know that there have been 2 terrorist attacks on
London since 9/11 but I don't know HOW MANY ATTACKS
HAVE BEEN THWARTED. At least two have been publicised 
but there could be dozens more.

Cleaning up botnets is rather like fighting terrorism.
At the end, you have nothing to show for it. No news
coverage, no big heaps of praise. Most people aren't
sure there was ever a problem to begin with. That doesn't
mean that the work should stop or that network providers
should withhold their support for cleaning up the
botnet problem.

---
Michael Dillon
Capacity Management, 66 Prescot St., London, E1 8HG, UK
Mobile: +44 7900 823 672Internet: [EMAIL PROTECTED]
Phone: +44 20 7650 9493Fax: +44 20 7650 9030

http://www.btradianz.com
One Community   One Connection   One Focus



Re: AS12874 - FASTWEB

2006-05-26 Thread Michael . Dillon

  http://plany.fasthosting.it/dbmap.asp?table=Mappatura

  I take it that this means we can use any ip range allocated to Fastweb
  as if it were RFC1918 space, including the necessary border filters?

 I'd personally contract to build a moat around their NOC for Homeland 
 Security reasons using as many backhoes as I could get on short notice.

I would strongly advise against such actions.
European governments take a dim view of terrorist
activities and some countries such as Italy are
particularly sensitive about this. I'm surprised 
that an American on an Internet operations mailing
list would be promoting terrorist activity in 
another NATO member country.

In any case, you can't CONTRACT to do this. The
law does not consider an agreement to perform 
illegal acts to be a contract. The action you describe
is clearly illegal, therefore it cannot be contracted
for.

--Michael Dillon

P.S. this is NANOG, not IRC



Re: ISP compliance LEAs - tech and logistics [was: snfc21 sniffer docs]

2006-05-24 Thread Michael . Dillon

 The NANOG meeting archives are full of presentations as the result
 of very sophisticated network monitoring.  Like most technology,
 it can be used for good and evil.  You can't tell the motivation
 just from the technology.

OK, so he says in a roundabout way that you are
already paying for some sophisticated network monitoring
and it probably won't cost you much to just give
some data to the authorities.

 Sean, please drop this subject. You have no experience here and it's
 annoying that you keep making authoritative claims like you have some
 operational experience in this area. If you do, please do elaborate
 and correct me. From what I understand from the folks at SBC, you
 did not run harassing call, annoyance call, and LAES services. I would
 appreciate a correction.

Huh!?!?!?
Are you saying that people should buzz off from 
the NANOG list if they change jobs and their latest
position isn't operational enough? Are you saying that
people should not be on the NANOG list unless they
have TELEPHONY operational experience?

What is the world coming to!?

--Michael Dillon



Re: private ip addresses from ISP

2006-05-24 Thread Michael . Dillon

  Does NANOG have a role in developing some best
  practices text that could be easily incorporated
  into peering agreements and service contracts?
 ...
 
 RFC 2267 - RFC 2827 == Best Current Practice (BCP) 38
 RFC 3013 == BCP 46
 RFC 3704 == BCP 84
 Are these followed?

No, the IETF BCP's are not followed and part of
the reason is that they are not written by network
operators but often by vendors and academics. The
fact is that the collective of network operations 
people (in North America at least) does not have
any agreed set of BEST OPERATIONAL PRACTICES. 
There are various camps that promote various sets
of rules which are often overly simplistic and
cannot be 100% adhered to in practice.

What we really need is a forum to discuss best 
operational practices and resolve all these various
differences in opinion systematically. The end
result should be a set of best practices that 
people really can follow with no exceptions. Of
course this means that the best practices must
incorporate various exceptions to the simple rules
and explain when those exceptions are and are not
appropriate.

So again, I ask the question: Is NANOG an appropriate
forum to develop some best practices text that 
could be incorporated into service agreements and
peering agreements by reference in the same way
that a software licence incorporates the GPL
by referring to it?

--Michael Dillon



Re: ISP compliance LEAs - tech and logistics

2006-05-24 Thread Michael . Dillon

  The guy wants to say, please raise your eyes above the horizon of your
  plate and view a not yet existing country named europe. Here our
  infrastructure is a lot more advanced and we have standardized a
  common eavesdropping api.
 
 We have? News to me.

You missed a line later in his message:

Of course
nobody except the European Central Bank is allowed listening, but -
who cares?

Sounds like typical lunatic ravings to me.
I guess anything goes on this list now...

--Michael Dillon



Re: private ip addresses from ISP

2006-05-23 Thread Michael . Dillon

 Proper good net neighbor egress filtering of RFC1918 source addresses
 takes a number of separate rules.  Several 'allows', followed by a
 default 'deny'.

Really?
Do you have those rules on your network?
Any reason why you didn't post the operational
details on this operational list?

Have you ever read your peering agreements or
service contracts to see if filtering of RFC 1918
sourced traffic is specifically covered by them?
If it is not covered by the contract, then why should
your peers/upstreams filter it?
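For concreteness, the shape the quoted text describes (several allows, then a default deny) might look like this in Cisco IOS-style syntax. This is only a sketch: 192.0.2.0/24 is a documentation prefix standing in for a site's real allocation, and the ACL number is arbitrary.

```
! Egress filter: permit only our own assigned prefixes as source
! addresses; anything else (including RFC 1918 sources) falls
! through to the final deny.
access-list 110 permit ip 192.0.2.0 0.0.0.255 any
access-list 110 deny   ip any any log
!
interface Serial0/0
 ip access-group 110 out
```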

Another good question is whether or not every
service contract and peering agreement should
contain unique text or whether there should be
some community-developed best practices statement
that could be plugged in by reference. For instance,
software publishers can publish their software
under the terms of the GPL without including the
full text of the GPL verbatim in their software
license.

Does NANOG have a role in developing some best
practices text that could be easily incorporated
into peering agreements and service contracts?

--Michael Dillon



Re: Geo location to IP mapping

2006-05-16 Thread Michael . Dillon

 As a major caveat, all geolocation services do have some degree of
 inaccuracy, because the sources of data are very diverse.  (Some ISPs
 provide complete subnet maps to MaxMind and other providers, whereas
 some data is scraped from WHOIS or provided by inference from
 end-users.)

And some organizations run their own internal networks
across international borders. In other words, knowing
that subnet X is allocated to company Y who has a
300 meg Internet connection in city Z, does not mean
that all the users of that connection are also in 
city Z. They could be scattered around the world.

This is why some companies use other sources of data
to infer the location, i.e. if users of an IP address
prefer yahoo.fr to yahoo.com, then that is one datapoint
in favour of them being located in France. If you 
understand the principles of RBL weighting then you
will get the idea.
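That weighting idea can be sketched in a few lines. The signals and weights below are invented purely for illustration; a real system would calibrate them against ground truth:

```python
# Sketch of RBL-style evidence weighting for country inference.
def infer_country(signals):
    """signals: list of (country, weight) datapoints, e.g. a
    preference for yahoo.fr over yahoo.com adds weight to 'FR'."""
    scores = {}
    for country, weight in signals:
        scores[country] = scores.get(country, 0.0) + weight
    # Highest accumulated score wins; ties are left unresolved here.
    return max(scores, key=scores.get)

observations = [
    ("FR", 0.6),   # browses yahoo.fr rather than yahoo.com
    ("FR", 0.3),   # hypothetical language-preference signal
    ("GB", 0.2),   # WHOIS says the allocation is registered in London
]
print(infer_country(observations))
```

No single datapoint is trusted; the conclusion emerges from the accumulated weight, exactly as with RBL scoring.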

--Michael Dillon



Re: Geo location to IP mapping

2006-05-16 Thread Michael . Dillon

 I can solve the visualization part and the GIS issues. But comes down
 to the accuracy of the geo-ip database in the end.

According to the Brand X localisation database which
was rated tops in the Brand Y Web Magazine survey in
2005, our top customers are located in these cities.

Who said marketing is not for techies?

--Michael Dillon



Re: Geo location to IP mapping

2006-05-16 Thread Michael . Dillon

 I just tried that, says I'm 100 miles south of where I really am. That's
 quite a long way out in a small country like England.

I live in London and use BT Broadband. But geolocation
shows me being in Ipswich up in East Anglia, a long
way from London. I assume this is because the geolocation
only knows that I use an IP address from a DHCP pool
managed in Ipswich. They don't know anything about 
BT's own extensive network, and in the case of DSL using
tunnels and DHCP servers, the real topology becomes
entirely invisible. The end result is that most of 
England's population lives in Ipswich. Eat your heart 
out Alan Partridge.

A few years ago, while working at a different company 
in London, I had a New York IP address because our company's 
internal network Internet gateway was in New York. Then
they changed things around so that we used a gateway in
London for all the European offices. But that meant that
colleagues in France, Germany, etc... would all show up
as being located in London.

Nowadays, I use a VPN to work from home. The VPN software
knows of multiple tunnel endpoint servers so if there is a 
problem with the UK server it fails over to a server in
the USA. My IP address on the Internet comes from the
NAT server at the Internet gateway. Depending on where the
tunnel endpoint is, it could be a US address or a UK address.

100% accurate geolocation is not achievable but if you
understand the issues then you can better make a decision
how to apply geolocation services to your own problem.
It may work well enough for some things. 
 
--Michael Dillon



Re: MEDIA: ICANN rejects .xxx domain

2006-05-15 Thread Michael . Dillon

  But there's no technical advantage of a hierarchical system over a
  simple hashing scheme, they're basically isomorphic other than a hash
  system can more easily be tuned to a particular distribution goal.
 
 Amazing how many experienced people seem to be saying this isn't
 possible, given there are already schemes out there using flat
 namespaces for large problems (e.g. Skype, freenet, various file
 sharing systems). Most of these are also far more dynamic than the
 DNS in nature, and most have no management overhead with them; you
 run the software and the namespace just works.

According to your description, this is a hierarchical naming
system. At the top level you have Skype, freenet, etc.
defining separate namespaces. Because DNS was intended to be
a universal naming system, it had to incorporate the hierarchy
into the system.
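The "simple hashing scheme" the quoted poster has in mind can be sketched briefly. This is a toy flat namespace, with server names invented for the example, not a description of how Skype or freenet actually work:

```python
import hashlib

# A flat, hash-partitioned namespace: a name maps straight to a
# server bucket, with no tree of delegations to walk.
SERVERS = ["ns-a.example", "ns-b.example", "ns-c.example"]

def responsible_server(name: str) -> str:
    digest = hashlib.sha256(name.encode()).digest()
    bucket = int.from_bytes(digest[:8], "big") % len(SERVERS)
    return SERVERS[bucket]

# Any node can compute where a record lives, deterministically,
# without consulting a hierarchy -- the trade-off under debate.
print(responsible_server("www.example.com"))
```

What the flat scheme gives up, of course, is exactly what delegation provides: a natural place to hang administrative authority over a region of the namespace.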

 However I think the pain in DNS for most people isn't the hierarchy,
 but the diverse registration systems. i.e. It isn't that it is
 delegated, it is that the delegates all do their own thing.

Seems to me that this is part of the definition
of delegate. Some would say that this makes for
a more robust system than a monolithic hierarchy
where everyone has to toe the party line.

 I've always pondered doing a flat, simple part of the DNS, or even
 an overlay, but of course it needs a business model of sorts.

It has been tried at least twice and failed.
http://www.theregister.co.uk/2002/05/13/realnames_goes_titsup_com/
http://www.idcommons.net

--Michael Dillon



Re: MEDIA: ICANN rejects .xxx domain

2006-05-12 Thread Michael . Dillon

 Why have a TLD when for most of the world:
 
 www.cnn.CO.UK is forwarded to www.cnn.COM
 
 www.microsoft.NET is forwarded to www.microsoft.COM
 
 www.google.NET is forwarded to www.google.COM

Not all organizations simply FORWARD sites.

At different times I have used www.google.com, www.google.co.uk,
www.google.ca, www.google.ru, www.google.de, and www.google.com.au
They are different because I can select different subsets
of the total database to search.

www.apple.ca does forward, but not as you think. Try it
right now, look at the price of that MacBook Pro
and then see what your Apple Store sells it for.

In the past, some ISPs have used .net for internal
email addresses and .com for customers of their
mail services.

Whether or not it is COMMON for organizations to make
distinctions based on TLDs, some have clearly done so
and I don't see why we should subtract that capability.

Many of the new TLDs that are in operation, and
that are being proposed, are primarily MARKETING EXERCISES.
Let me ask you, does the world need a new way for
pornography to be marketed? When .COM, .EDU, .NET
and .ORG were invented, they had a purpose other than
as marketing exercises. If only we could get some serious
support for new TLDs that make some kind of sense, other
than as marketing opportunities for the small number of
people in the registry and registrar business.

--Michael Dillon



Re: Tier Zero (was Re: Tier 2 - Lease?)

2006-05-05 Thread Michael . Dillon

  On 5/4/06, [EMAIL PROTECTED] [EMAIL PROTECTED]
 karoshi.com wrote:
  
   why would anyone do that?

 Hopefully this comes out clearly, as writing can be more confusing
 than speaking...

 My point is it is hard to do anything beyond the first AS# for any SLA
 that you would be paying, since after that the packet switches to no
 money packets on a paid connection, pushing out the issue for things
 sent down that pipe...

Are you saying that there *IS* a good reason why
anyone would buy paid transit from all SFP providers?
And that the reason is so that you have a contractual
SLA with all of those providers?

If so then two questions come to mind. Couldn't you
achieve the same thing by having paid peering with
the SFP providers? Assuming that you do have contractual
service with all of the SFP providers and that there
is an SLA in all of those contracts, how do you deal
with the fact that there is no SLA (to you) on packets
which leave the set of SFP networks? Packets could leave
by going to a transit customer of an SFP network or
by going to a non-SFP peer of an SFP network.

Quite frankly, while terminology like transit,
settlement free peering and paid peering are useful
to analyze and talk about network topology, I don't think
they are useful by themselves when making purchase decisions.
They need to be backed up with some hard technical data
about the network in question as well as the contractual
terms (transit or peering) in place.

It is not possible to say that a given network architecture
is BETTER if you only know the transit/peering arrangements
between that network and some subset of the other network
operators. SFP operators will always be a subset of the entire
public Internet. Membership in that set changes from time to
time for various reasons. And the importance of non-members
also varies from time to time, especially content-provider
networks.

--Michael Dillon

P.S. I purposely did not use the term tier because I
do not believe that current usage of this term refers to
network architecture. It has more to do with market dominance
than anything else and even there it is relative because
there is no longer a single Internet access market.


