Re: the O(N^2) problem

2008-04-13 Thread Owen DeLong


On Apr 13, 2008, at 5:36 PM, Edward B. DREGER wrote:



Bottom line first:

We need OOB metadata ("trust/distrust") information exchange that scales
better than the current O(N^2) nonsense, yet is not PKI.


Not sure why PKI should be excluded, but, so far, this is too abstract
to know what the question is...


And now, the details... which ended up longer reading than I intended.
My apologies.  As Mark Twain said, "I didn't have time to write a short
letter, so I wrote a long one instead." :-)

When it comes to establishing trust:

* The current SMTP model is O(N^2);


I don't see SMTP as even a "trust" model since there's pretty much
nothing trustworthy in SMTP.


* I posit that the current IP networking model is sub-O(N);


Again, I'm not seeing IP as a trust model, but, YMMV.


* PKI models are pretty much O(1).

Polynomial-order just doesn't scale well.  It's mathematical fact, and
particularly painful when the independent variable is still increasing
quickly.


Sure.
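
As a back-of-the-envelope illustration of the scaling claim (this assumes
"O(N^2)" refers to bilateral pairwise arrangements between mail-exchanging
parties, which is my reading; the code and numbers are purely illustrative):

    # Relationships implied by each model for N mail-exchanging parties.
    # Pairwise bilateral arrangements grow quadratically; per-party policy
    # grows linearly; a single central anchor is constant.
    for n in (100, 10_000, 1_000_000):
        pairwise = n * (n - 1) // 2
        print(f"N={n:>9,}  pairwise={pairwise:>15,}  per-party={n:>9,}  central=1")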

Many operators seem to reject PKI as "power in too few hands".  I'll not
disagree with that.


Depends on the PKI.  For example, the PGP/GPG Web of Trust concept pretty
much lets each individual build their own trust model to whatever O(x) they
choose: greater values of x require more effort but provide greater
security/trust granularity, while lower values of x involve greater trust of
others you claim you can trust and less direct effort on your part.


Let's also draw upon operational lessons from a couple old-timers.  I
recall using a critter known as "NNTP".  And once upon a time, before my
days on the Internet, lived a funny little beast called "UUCP".


I remember UUCP.  It was pretty much O(n^2).

We track email quality from all mailservers that hit us.  I can whip up
a list of MXes/organizations that I'm willing to "trust" -- and let's
leave that term imprecisely defined for now.


Uh, OK.  Starting to understand what the question might be aiming
towards.


Here's what I propose:

Establish a "distrust protocol".  Let path weight be "distrust".  The
"trust path" is of secondary importance to "path weight", although not
completely irrelevant.  SMTP endpoint not in graph?  Fine; have some
default behavior.

Let _trust_ be semi-transitive, a la BGP -- a technology that we know,
understand, and at least sort of trust to run this crazy, giant network
that dwarfs even a 50M-user provider.

Let actual _content_ still be end-to-end, so that we do not simply
reincarnate NNTP or UUCP.
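
A minimal sketch of what a "distrust as path weight" lookup might look like,
assuming an additive weight and a Dijkstra-style shortest-distrust search;
the graph, weights, and acceptance threshold below are invented for
illustration, not a defined protocol:

    import heapq

    # Directed graph: edge weight = how much the advertising party distrusts
    # the next hop.  Lower accumulated distrust along the best path means we
    # are more willing to accept mail from that endpoint.
    DISTRUST = {
        "us":         {"peer-a": 1, "peer-b": 3},
        "peer-a":     {"example-mx": 2},
        "peer-b":     {"example-mx": 1},
        "example-mx": {},
    }

    def path_distrust(graph, src, dst):
        """Minimum accumulated distrust from src to dst, or None if unreachable."""
        heap, seen = [(0, src)], set()
        while heap:
            cost, node = heapq.heappop(heap)
            if node == dst:
                return cost
            if node in seen:
                continue
            seen.add(node)
            for nxt, weight in graph.get(node, {}).items():
                if nxt not in seen:
                    heapq.heappush(heap, (cost + weight, nxt))
        return None  # endpoint not in graph: apply some default behavior

    ACCEPT_THRESHOLD = 4  # illustrative policy knob
    cost = path_distrust(DISTRUST, "us", "example-mx")
    print("accept" if cost is not None and cost <= ACCEPT_THRESHOLD else "default policy")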


Now I'm lost again.  You've mixed so many different metaphors, from
interdomain routing to distance-vector computation to store-and-forward,
that I simply don't understand what you are proposing, how one could begin
to approach implementing it, or what problem you think it solves (although
it sort of seems like you want to attack the trustworthiness of email to
battle SPAM through some mechanism that depends only on the level of trust
for the (source, arrival path) tuple from whence it came).

What am I missing?

Owen



Re: v4 exhaustion and v6 impact [Re: cost of dual-stack vs v6-only]

2008-03-13 Thread Owen DeLong




While the goal may be good, a reality check might be in order.  
AFAICS, the impact will be that residential and similar usage will  
be more heavily NATted.  Enterprises need to pay higher cost per  
public v4 address.  IPv4 multihoming practises will evolve (e.g.,  
instead of multihoming with PI, you multihome with one provider's PA  
space; you use multiconnecting to one ISP instead of multihoming).   
Newcomers to market (whether ISPs or those sites which wish to start  
multihoming) are facing higher costs (the latter of which is also a  
good thing). Obviously DFZ deaggregation will increase but we still  
don't end up routing /32's globally.


I am confused by your statement.  It appears you are saying that it is a
good thing for sites that wish to multihome to face higher costs.  If that
is truly what you are saying, then, I must strenuously disagree.  I think
that increased cost for resilient networking is a very bad thing.

While price for a /20 or /16 of address space might go up pretty  
high, a /24 can still be obtained with a reasonable cost.  Those  
ISPs with lots of spare or freeable v4 space will be best placed to  
profit from new customers and as a result v6 will remain an  
unattractive choice for end-users.


Only for some limited period of time.  Even those "freeable" /24s will get
used up fairly quickly.

IANA and RIRs running out of v4 space may allow making a better case  
to an ISP's management that their backbone should be made v6 capable  
(to support customers who want v6) but it doesn't provide the case  
for the ISP to deploy v6 to its residential users, and it doesn't  
provide a case for the enterprises to start v6 transition (because  
they need to support v4 anyway).  It may also make a case for ISPs  
which don't have much spare IPv4 space and cannot free or obtain it  
to try to market v6 to their end-users.



The case for IPv6 end-user deployment will most likely occur when new
IPv4 addresses for those customers become more costly than supporting
a NAT-PT infrastructure with the appropriate DNS hackery and such.

It would be nice (and cheaper in the long run) if ISPs were ahead of that
curve in some way, but, the reality is that's probably going to be the
driver.

Eventually, enough NAT-PT eyeballs will drive IPv6 native content
capabilities (although the inability to get IPv4 addresses for new content
hosts may also serve as a driver there).

In terms of enterprise, I think that will be the last group to convert.
I don't think you will see much enterprise level migration until they
are faced with their ISPs wanting to shut down IPv4 and raising the
IPv4 transit costs accordingly.  However, once we reach somewhat
minimal critical mass in IPv6 content, and, NAT-PT solutions are
more readily available and better understood, I think you'll see
most new enterprise deployments being done with IPv6.

So v6 capabilities in the ISP backbones will improve but the end-users
and sites still don't get v6 ubiquitously.  This is a significant
improvement from the v6 perspective but is still not enough to get to 90%
global v6 deployment.



I'm not sure why 90% is necessary or even desirable in the short
term.  What's magic about 90%?  What I think is more interesting
is arriving at the point where you can deploy a new site entirely
with IPv6 without concerns about being disconnected from some
(significant) portion of the internet (intarweb?).

Once we're at that point, the rest can sort itself as the timeframe
becomes merely an issue of economics.  Prior to that point, the
issues are of much greater potential impact beyond the mere
financial.

Owen



Re: YouTube IP Hijacking

2008-02-24 Thread Owen DeLong



On Feb 24, 2008, at 2:14 PM, Tomas L. Byrnes wrote:



I figured as much, but it was worth a try.

Which touches on the earlier discussion of the null routing of /32s
advertised by a special AS (as a means of black-holing DDOS traffic).

It seems to me that a more immediately germane matter regarding BGP
route propagation is prevention of hijacking of critical routes.

Perhaps certain ASes that are considered "high priority", like Google,
YouTube, Yahoo, MS (at least their update servers), can be trusted to
propagate routes that are not aggregated/filtered, so as to give them
control over their reachability and immunity to longer-prefix hijacking
(especially problematic with things like MS update sites).



That's just inviting the injection of forged AS routes to commit
abuse.

Owen




-Original Message-
From: Simon Lockhart [mailto:[EMAIL PROTECTED]
Sent: Sunday, February 24, 2008 2:07 PM
To: Tomas L. Byrnes
Cc: Michael Smith; [EMAIL PROTECTED]; [EMAIL PROTECTED];
nanog@merit.edu
Subject: Re: YouTube IP Hijacking

On Sun Feb 24, 2008 at 01:49:00PM -0800, Tomas L. Byrnes wrote:

Which means that, by advertising routes more specific than the ones
they are poisoning, it may well be possible to restore universal
connectivity to YouTube.


Well, if you can get them in there...  Youtube tried that, to
restore service to the rest of the world, and the
announcements didn't propagate.

Simon





Re: IPV4 as a Commodity for Profit

2008-02-24 Thread Owen DeLong



On Feb 24, 2008, at 12:45 PM, Stephen Sprunk wrote:



Thus spake "Tom Vest" <[EMAIL PROTECTED]>

On Feb 23, 2008, at 1:54 PM, Stephen Sprunk wrote:
Rechecking my own post to PPML, 73 Xtra Large orgs held 79.28% of   
ARIN's address space as of May 07; my apology for a faulty  
memory,  but it's not off by enough to invalidate the point.


The statistics came from ARIN Member Services in response to an email
inquiry.  I don't believe they publish such things anywhere (other than
what's in WHOIS), but you can verify yourself if you wish; they were quite
willing to give me any stats I asked for if they had the necessary data
available.


Thanks for the information Stephen.
In order to be perfectly clear on how to interpret this, it would  
be  good to know whether this sum includes the pre-ARIN  
delegations, or  just reflects what has happened since ARIN was  
established.


The wording of the question and response referred only to "ARIN members".
That does not include most orgs with _only_ legacy allocations, but it
would include orgs with both legacy and non-legacy allocations.
Presumably, if an org had both types, both would have been included, but
that wasn't explicitly stated since it wasn't relevant to the questions I
was asking at the time.


Not necessarily.  Orgs which are end-users and not LIR/ISP subscriber
members may have resources from ARIN without being members.

Owen



No Webcast of IPv4 Free Pool BoF today

2008-02-19 Thread Owen DeLong


I have been informed by Merit that there will be no webcast of this
afternoon's AC Hosted BoF.  I apologize for any inconvenience.  I am
posting this because I received a number of inquiries on this topic.

Owen



Re: IPv4 Resource Distribution After IANA Free Pool Exhaustion -- ARIN AC BoF

2008-02-15 Thread Owen DeLong

The proposal was posted to PPML, but, since the AC has not yet
moved it forward to formal proposal status, it doesn't have a number
and isn't on the ARIN web site just yet.

The thread on PPML is available here:

[ppml] Policy Proposal: IPv4 Transfer Policy Proposal

Owen



IPv4 Resource Distribution After IANA Free Pool Exhaustion -- ARIN AC BoF

2008-02-14 Thread Owen DeLong


Members of the ARIN AC will be present to discuss IPv4 after IANA Free  
Pool Exhaustion
and get input from the community on how they feel this should be  
handled.


Members of the community attending the NANOG conference are encouraged to
attend this session and give your input.

The session will be from 4:00 to 5:30 on Tuesday, February 19th and will be
in the Crystal Room on Level B.

We will be discussing the recently posted transfer policy proposal and
other ideas around the IPv4 free-pool exhaustion process(es).

Thanks,

Owen DeLong
ARIN AC




Area Social Activity

2008-02-14 Thread Owen DeLong


Sorry for the short notice.

For anyone coming to NANOG early who is a certified SCUBA diver, I'll
be diving in Monterey (about 1 hour drive from San Jose) Saturday and
Sunday.

If you're interested in joining me, send an email off-list.

Owen DeLong
Open Water SCUBA Instructor (PADI)



Re: EU Official: IP Is Personal

2008-01-25 Thread Owen DeLong


I don't know about your IP addresses, but, people can use my IP addresses
from a number of locations which are nowhere near the jurisdiction in
which my network operates, so, I don't really see the correlation here
with license plates or phone numbers.


I'm not clear if you mean legitimately here, or not.  If you've authorised
people to relay traffic through you in some way, you'd be the right first
contact.  If you're talking about unauthorised spoofing, it's a lot like
the first two cases (I'd say a fair bit easier / cheaper than the second,
not substantially more so than the first).


In my case, yes, 100% legitimately.

I can be contacted, but, the reality is that I don't track it.  I am no
longer in direct contact with a number of people who have legitimate use of
my IP addresses.  If I find them doing something I consider abuse, then,
I'll turn off the access.  However, I don't maintain contact information or
the ability to personally identify the correlation between the person
and the access.  So far, abuse has been rare enough that this has
not been an issue.  I've had to turn off two services I used to provide
as a result of abuse in approximately 20 years of operating a network
here.

Owen



Re: EU Official: IP Is Personal

2008-01-25 Thread Owen DeLong



On Jan 25, 2008, at 6:05 AM, Stephane Bortzmeyer wrote:



On Fri, Jan 25, 2008 at 10:42:44AM +,
Roland Perry <[EMAIL PROTECTED]> wrote
a message of 15 lines which said:


in the UK it [phone number portability] 's done with something
similar to DNS. The telephone system looks up the first N digits of
the number to determine the operator it was first issued to. And
places a query to them. That either causes the call to be accepted
and routed, or they get an answer back saying "sorry, that number
has been ported to operator FOO-TEL, go ask them instead".


What happens when a phone number is ported twice, from BAR-TEL to
FOO-TEL and then to WAZ-TEL? Does the call follow the list? What if
there is a loop?

The solution you describe does not look like the DNS to me. A solution
more DNS-like would be to have a root (which is not an operator)
somewhere and every call triggers a call to the root which then
replies, "send to WAZ-TEL".
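
A toy sketch of the two lookup styles being contrasted here; the operator
names, digits, and data structures are invented purely for illustration and
do not reflect any real number-portability database:

    # Model 1: ask the operator the number block was first issued to, and
    # follow "ported to X, go ask them" redirections; a hop limit guards
    # against loops.
    ISSUED_TO = {"0144": "BAR-TEL"}                  # first digits -> original operator
    PORTED = {("BAR-TEL", "01441234"): "FOO-TEL",
              ("FOO-TEL", "01441234"): "WAZ-TEL"}

    def resolve_chained(number, max_hops=8):
        operator = ISSUED_TO[number[:4]]
        for _ in range(max_hops):
            nxt = PORTED.get((operator, number))
            if nxt is None:
                return operator                      # this operator terminates the number
            operator = nxt
        raise RuntimeError("possible porting loop")

    # Model 2: a single query to a shared, operator-neutral root database.
    ROOT = {"01441234": "WAZ-TEL"}

    def resolve_root(number):
        return ROOT.get(number, ISSUED_TO[number[:4]])

    print(resolve_chained("01441234"), resolve_root("01441234"))   # WAZ-TEL WAZ-TEL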


There is a shared root in the US SS7 system.

The security of said root follows a rather interesting model.  At least
until fairly recently, any "trusted" carrier (LEC, ILEC, RBOC, or IEC)
could put pretty much whatever they wanted into the database.

Of course, the consequence of getting caught with your hand in the cookie
jar there was sufficient that it tended to prevent invalid entries other
than by accident, but, still, it was a remarkable trust model for such an
industry.


Owen



Re: EU Official: IP Is Personal

2008-01-24 Thread Owen DeLong



On Jan 24, 2008, at 8:55 PM, [EMAIL PROTECTED] wrote:


On Thu, 24 Jan 2008 20:39:53 PST, [EMAIL PROTECTED] said:


What we can do with IP addresses is conclude that the user of the
machine with an address is likely to be one of its usual users. We
can't say that with 100% certainty, because there are any number of
ways people can get "unusual" access. But even so, if one can show a
pattern of usage, the usual suspects can probably figure out which of
them, or what other "unusual" user, might have done this or that.


And oddly enough, license plates on cars act *exactly the same way* - but
nobody seems at all surprised when police can work backwards from a plate
and come up with a suspect (who, admittedly, may not have been involved if
the car was borrowed/stolen/etc).

In order to be using the license plate, you had to be physically  
present in the car.


You can work backwards from a phone number to a person, without a
*guarantee* that you have the right person - but I don't see anybody
claiming that phone numbers don't qualify as "personal information" under
the EU definition.


In order to be on the telephone number, you (almost always) need to be
present at the site where that phone number is terminated.

I don't know about your IP addresses, but, people can use my IP addresses
from a number of locations which are nowhere near the jurisdiction in
which my network operates, so, I don't really see the correlation here
with license plates or phone numbers.

Owen



Re: EU Official: IP Is Personal

2008-01-24 Thread Owen DeLong


I'm sorry, but, I have a great deal of difficulty seeing how an IP can be
considered personally identifying.

For example, in my home, I have static addresses.  However, the number of
different people using those addresses would, to me, imply that you cannot
personally identify anyone based solely on the IP address they are using
within my network.  Certainly, you cannot say that I initiated all of the
packets which came from my addresses.

Another example would be a retail store that I work with as a SCUBA
Instructor.  They also have static IP addresses, but, I would not say that
any of the traffic coming from the store is necessarily personally
identifiable.  Our entire staff (half a dozen instructors, a dozen or so
divemasters and AIs, the owner, and at least one other retail assistant)
source traffic from within that network.

The larger the business, the less identifiable the addresses become,
generally.  However, even in these ultra-small examples, I don't feel that
the addresses are, in themselves, personally identifying.

Owen



Re: v6 subnet size for DSL & leased line customers

2007-12-24 Thread Owen DeLong



On Dec 24, 2007, at 9:43 PM, Kevin Loch wrote:



Iljitsch van Beijnum wrote:

On 24 dec 2007, at 20:00, Kevin Loch wrote:

RA/Autoconf won't work at all for some folks with deployed server
infra,
That's just IPv4 uptightness. As long as you don't change your MAC  
address you'll get the same IPv6 address every time, this works  
fine for servers unless you need a memorable address. BTW, I don't  
know anyone who uses DHCP for their servers.

Hopefully vrrpv6 will work with RA turned completely off.
With router advertisements present you don't need VRRP because you  
have dead neighbor detection.


And that helps the hosts on the same l2 segment that need different
gateways how?  Or hosts with access to multiple l2 segments with
different gateways?

I think the point is that RA and VRRPv6 are not designed to depend on each
other in any way.

While you can certainly run both on any given segment, it is hard to
imagine many cases where one would want to do so.

In places where all you need is to know a valid gateway that can do the
right thing with your packets, RA is probably the right solution.  This,
generally, turns out to be the vast majority of LAN segments.  Clearly, RA
is intended only for scenarios where a gateway is a gateway is a gateway.

In places where you need tighter control over the usage of various gateways
on a common L2 segment, VRRP probably makes more sense.  However,
as things currently stand, that means static routing configuration on the
host since for reasons passing understanding, DHCP6 specifically won't do
gateway assignment.

I don't know the state of the current VRRP6 draft or protocol, but, I can't
imagine what would be left in VRRP6 if it couldn't be statically configured
without RA.  If that's the case, then, what would it possibly do, exactly,
that would be different from RA without VRRP?

Owen



Re: v6 subnet size for DSL & leased line customers

2007-12-24 Thread Owen DeLong


"Well, you say we need to spend more money every year on address  
space.
Right now we're paying $2,250/year for our /32, and we're able to  
serve
65 thousand customers.  You want us to start paying $4,500/year, but  
Bob

tells me that we're wasting a lot of our current space, and if we were
to begin allocating less space to customers [aside: /56 vs /48],  
that we
could actually serve sixteen million users for the same cash.  Is  
there

a compelling reason that we didn't do that from the outset?"


Right... Let's look at this in detail:

/48 per customer == 65,536 customers at $2,250 == $0.03433/customer
/56 per customer == 16,777,216 customers at $2,250 == $0.00013/customer

So, total savings per customer is $0.0342/customer _IF_ you have
16,777,216 customers.  On the other hand, sir, for those customers
who need more than 256 subnets, we're running the risk of having
to assign them multiple noncontiguous prefixes.  Although the cost
of doing so is not readily apparent, each router has a limit to the number
of prefixes that can be contained in the routing table.  The cost of
upgrading all of our routers later probably far exceeds the $0.03
per customer that we would save.  Really, in general, I think that
the place to look for per-customer savings opportunities would
be in places where we have a potential recovery greater than
$0.03 per customer.
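
For what it's worth, the arithmetic above works out as follows (figures
taken directly from the quoted $2,250/year /32 example):

    ANNUAL_FEE = 2250.0                 # quoted yearly cost of the /32
    CUSTOMERS_AT_48 = 2 ** (48 - 32)    # 65,536 customers per /32 at a /48 each
    CUSTOMERS_AT_56 = 2 ** (56 - 32)    # 16,777,216 customers per /32 at a /56 each

    print(f"/48 each: {CUSTOMERS_AT_48:,} customers, ${ANNUAL_FEE / CUSTOMERS_AT_48:.5f}/customer/year")
    print(f"/56 each: {CUSTOMERS_AT_56:,} customers, ${ANNUAL_FEE / CUSTOMERS_AT_56:.5f}/customer/year")
    print(f"maximum saving: ${ANNUAL_FEE / CUSTOMERS_AT_48 - ANNUAL_FEE / CUSTOMERS_AT_56:.5f}/customer/year")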

This discussion is getting really silly; the fact of the matter is that
this /is/ going to happen.  To pretend that it isn't is simply naive.

How high are your transit & equipment bills again, and how exactly are you
charging your customers?  Ah, not by bandwidth usage -- very logical!


Perhaps end-user ISP's don't charge by bandwidth usage...


True, but, they don't, generally, charge by the address, either.
Usually, they charge by the month.  If you can't cover $0.03/year/customer
for address space in your monthly fees, then, raise your monthly
fee by $0.05.  I'm betting your customers won't care.

As an enduser I would love to pay the little fee for IP space that the
LIR (ISP in ARIN land) pays to the RIR and then simply pay for the
bandwidth that I am using + a little margin so that the ISP also earns
some bucks and can do writeoffs on equipment and personnel.


Sure, but that's mostly fantasyland.  The average ISP is going to want to
monetize the variables.  You want more bandwidth, you pay more.  You want
more IP's, you pay more.  This is one of the reasons some of us are
concerned about how IPv6 will /actually/ be deployed ... quite frankly,
I would bet that it's a whole lot more likely that an end-user gets
assigned a /64 than a /48 as the basic class of service, with a charge for
additional bits.  If we are lucky, we might be able to s/64/56/.

I mean, yeah, it'd be great if we could mandate /48 ... but I just can't
see it as likely to happen.

I'm betting that competition will drive the boundary left without
additional fees.  After all, if you're only willing to dole out /64s and
your competitor is handing out /56 for the same price, then all the
customers that want multiple subnets are going to go to your competitor.
The ones that want /48s will find a competitor that offers that.

That's how the real world works.  I remember having to repeatedly involve
senior management in rejecting requests for /24s from customers who
could not justify them because our sales people at Exodus kept promising
them.  The sales people continuously suggested that our competitors
were offering everyone /24s and that they had to do that to win the deals.


OTOH, "Raw bandwidth communications" seems to want to charge bandwidth
utilization not actually based on the bandwidth utilized, but, the  
number of

IP addresses routed.  They are not my ISP for that reason.

Different providers have different business models.  Consumers will
find the provider with the business model that best fits their needs.
That's the way it works in the real world.


So, the point is, as engineers, let's not be completely naive.  Yes, we
/want/ end-users to receive a /56, maybe even a /48, but as an engineer,
I'm going to assume something more pessimistic.  If I'm a device designer,
I can safely do that, because if I don't assume that a PD is going to be
available and plan accordingly, then my device is going to work in both
cases, while a device whose designer has relied on PD is going to break
when it isn't available.


Assuming that PD is available is naive.  However, assuming it is not is
equally naive.  The device must be able to function in both circumstances
if possible, or, should handle the case where it can't function in a
graceful and informative manner.

Owen



Re: v6 subnet size for DSL & leased line customers

2007-12-21 Thread Owen DeLong



On Dec 21, 2007, at 9:39 AM, Joe Greco wrote:


The primary reasons I see for separate networks on v6 would include
firewall policy (DMZ, separate departmental networks, etc)...


This is certainly one reason for such things.


Really, in most "small business" networks I've seen, it's by far the  
main

one if we want to be honest about it.  The use of multiple networks to
increase performance, for example, is something you can design around
differently, and modern hardware supports things like LAG without  
having
to get into the realm of unimaginably expensive hardware.  Even if  
you do
end up putting a quad port ethernet into a server with v6, the sizes  
of
the allocations we're discussing would allow you 64 completely  
separate
"workgroups" with their own server at the /56 allocation size (64 *  
4 =

256).


Agreed.  In fact, in any network large enough to matter, most modern
hardware forwards L2 and L3 at the same speed, so, there's essentially
no performance barrier.

OTOH, in many business networks I've seen, there is reason to segment
things into administrative boundaries, boundaries that result from media
conversion creating routed separation of segments, and, other topology
meets physical limitation issues.  I find these to be at least as common
as the separation between Internal/External/DMZ.


And I'm having some trouble envisioning a residential end user that
honestly has a need for 256 networks with sufficiently different
policies.  Or that a firewall device can't reasonably deal with those
policies even on a single network, since you mainly need to protect
devices from external access.


Perhaps this is a lack of imagination.

Imagine that your ethernet->bluetooth gateway wants to treat the bluetooth
and ethernet segments as separate routed segments.


That /is/ a lack of imagination.  ;-)  Or, at least, reaching pretty far.
The history of these sorts of devices has been, to date, one of trying to
keep network configuration simple enough that an average user can use
them.  That implies a default mode of bridging will be available.


You are ignoring the reality of the difference between IPv4 and IPv6.

With DHCP6 prefix delegation, creating a hierarchical routed topology
becomes as simple (from the end user perspective) as the bridged
topology today, and, requires a lot less thinking ability on the device.
Especially when you consider the possibility of many such topologies
evolving in a situation that could create a loop and the fact that most
such existing devices implement bridging without spanning tree.
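
To make that concrete, here is a minimal sketch of the kind of carving a
home gateway could do with a delegated prefix; the /56 delegation and the
segment names are assumptions for illustration, not anything DHCPv6-PD
itself dictates:

    import ipaddress

    # Assume the ISP delegated 2001:db8:1234:ab00::/56 to the CPE via DHCPv6-PD.
    delegated = ipaddress.ip_network("2001:db8:1234:ab00::/56")

    # Carve one /64 per internal segment; the remaining /64s stay available
    # for further downstream delegation (e.g. to a Bluetooth/appliance gateway).
    segments = ["wired", "wireless", "guest", "media", "appliance-control"]
    for name, net in zip(segments, delegated.subnets(new_prefix=64)):
        print(f"{name:20s} {net}")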

Now, imagine that some of your bluetooth connected devices have reasons
to have some topology behind them... For example, you have a master
appliance control center which connects via Bluetooth to your network,
but, uses a different household control bus network to talk to various
appliances.  For security reasons, you've decided not to have your
kitchen appliances be able to talk to your media devices (Who wants
a virus in some downloaded movie to be able to change the temperature
in your refrigerator?).


Yes, and?  You're saying there are no access controls at the gateway
level?  I'm not entirely sure that I care for the idea of making people
route things at the IP level just so they can protect their fridge from
their DVD.


I'm saying that bridges tend not to have access controls or at least not
adequate access controls except in a few (l2 firewall oddities like
Netscreen/PIX in Bridge mode) exceptional cases.  The point here
is that in IPv6, you aren't "making people route things", the routing
topology will mostly handle itself automatically, although, people
may wish to intervene to design the security policy or at least have
the ability to modify it from the default.

You are trying to apply strictly IPv4 thinking to IPv6, and, there are
some reasons that a significant paradigm shift is required.


I keep coming to the conclusion that an end-user can be made to work on
a /64, even though a /56 is probably a better choice.  I can't find the
rationale from the end-user's side to allocate a /48.  I can maybe see
it if you want to justify it from the provider's side, the cost of dealing
with multiple prefix sizes.


I can easily envision the need for more than a /64 in the average home
within short order.


You should probably correct that from "need" to "want."  There is nothing
preventing the deployment of all of the below on a single /64, it would
simply mean that there would be a market for smart firewalling switches
that could isolate devices by address or range, rather than having smart
firewalling routers that could isolate devices by subnet.


We will agree to disagree on this.  Enforcing security policy within
a subnet is ugly at best and unreliable at worst.  It makes troubleshooting
harder.  It makes security policy design more complex.  It causes many,
many more problems than it solves in my opinion.


If nothing else, the average home will probably want to be able to
accommodate a guest network, home wired network, wireless network(s),
Bluetooth segment(s), media network, appliance control network, lighting
control network, etc.

Re: v6 subnet size for DSL & leased line customers

2007-12-21 Thread Owen DeLong



The primary reasons I see for separate networks on v6 would include
firewall policy (DMZ, separate departmental networks, etc)...


This is certainly one reason for such things.


And I'm having some trouble envisioning a residential end user that
honestly has a need for 256 networks with sufficiently different
policies.  Or that a firewall device can't reasonably deal with those
policies even on a single network, since you mainly need to protect
devices from external access.


Perhaps this is a lack of imagination.

Imagine that your ethernet->bluetooth gateway wants to treat the bluetooth
and ethernet segments as separate routed segments.

Now, imagine that some of your bluetooth connected devices have reasons
to have some topology behind them... For example, you have a master
appliance control center which connects via Bluetooth to your network,
but, uses a different household control bus network to talk to various
appliances.  For security reasons, you've decided not to have your
kitchen appliances be able to talk to your media devices (Who wants
a virus in some downloaded movie to be able to change the temperature
in your refrigerator?).

I keep coming to the conclusion that an end-user can be made to work on
a /64, even though a /56 is probably a better choice.  I can't find the
rationale from the end-user's side to allocate a /48.  I can maybe see
it if you want to justify it from the provider's side, the cost of dealing
with multiple prefix sizes.


I can easily envision the need for more than a /64 in the average home
within short order. If nothing else, the average home will probably
want to be able to accommodate:
Guest network
Home wired network
Wireless network(s)
Bluetooth segment(s)
Media network
Appliance Control network
Lighting Control network
etc.

However, I agree that in any vision I can come up with today, the need
for more than 256 is beyond my current imagination.

I think it makes sense to assign as follows:

/64 for the average current home user.
/56 for any home user that wants more than one subnet
/48 for any home user that can show need.
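
For reference, the number of /64 LAN segments each of those tiers provides
(simple arithmetic, nothing more):

    for prefix in (64, 56, 48):
        print(f"/{prefix} assignment: {2 ** (64 - prefix):,} /64 subnet(s)")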

Owen



Re: /48 for each and every endsite (Was: European ISP enables IPv6 for all?)

2007-12-19 Thread Owen DeLong




So my wondering is basically, if we say we have millions of end  
users right now and we want to give them a /56 each, and this is no  
problem, then the policy is correct. We might not have them all IPv6  
activated in 2 years which is the RIR planning horizon. I do concur  
with other posters here that the planning horizon for IPv6 should be  
longer than three years so we get fewer prefixes in the DFZ as a  
whole. Then again, *RIR people don't care about routing so I am  
still sceptical about that being taken into account.



So... I need to ask for some clarification here.

What, exactly, do you mean by "RIR people"?

Do you mean the staff at the RIR?

In that case, you're right, sort of.  They care about following the
policies set by their respective constituent communities.  In the case of
ARIN, this would be essentially anyone who cares to participate.  However,
if people who care about routing choose to participate (which they seem to
do vigorously in ARIN), then, their views will be reflected in policy as a
result (they certainly are, at least to some extent, in the ARIN policies).

Do you mean the RIR Boards, Advisory Councils, or other representative
governing bodies?

In that case, you're also partially right.  They care about representing
their community of users and the best fiduciary interests of the RIR.  I
don't know about the structure of the other RIRs, but, at least in the case
of ARIN, the Advisory Council is definitely primarily concerned with
shaping policy according to the consensus of the constituent community, and
the board is concerned with ensuring that the AC is following the correct
processes in policy adoption and with the fiduciary best interests of ARIN
as an organization.


Do you mean the RIR end users and customers who receive address resources
from the RIRs?

In that case, I think, actually, that most of them care a great deal about
routing.


Note, in these statements, I am speaking only as an individual, and, not
as someone who was recently elected to a future term on the ARIN AC or on
behalf of the AC in any way.


you will be having. Unless you will suddenly in a year grow by 60k
clients (might happen), or really insanely with other large amounts,
your initial planning should hold up for quite some while


We grow by much more than 60k a year, it's just hard to plan for it.  
If we project for the highest amount of growth then we're most  
likely wasteful (in the case of IPv4 space anyway), if we project  
for lowest amount of growth then we get DFZ glut.


IPv6 needs a much longer time horizon than IPv4 in my opinion.  If nothing
else, I would say that you should be able to project your addressing needs
for the next two years at least in the ball-park of continuing your
previous growth trends.  If you added 100k customers last year and 80k
customers the year before, then, I think it's reasonable, especially in
IPv6, to plan for 125k customer adds next year and 150k customer adds the
following year.

If your figures turn out to be excessive, then, in two years when you'd
normally have to apply for more space (I'd like to see this move to more
like 5 years for IPv6), you can skip that application until you catch up.
No real problem for anyone in that case.

We would also like to do regional IPv6 address planning since we're  
too often in the habit of (without much notice for the operational  
people) selling off part of the business.



Heh... Then you should force the new owners to renumber.

Then again, with a /32 we can support ~16 million residential end-users
with /56 each, which I guess will be enough for a while.


So split the difference and ask for a /28.  Personally, I think /56s are
plenty for most residential users.  I'm a pretty serious residential
end-user, and, I can't imagine I'd need more than a /56 in terms of address
planning.  However, I have a /48 because that's the smallest direct
assignment available for my multihomed end-site.

Owen



Re: IEEE 40GE & 100GE

2007-12-13 Thread Owen DeLong


So, assuming this translates roughly to optics being:

$1,000   4km
$1,300  10km
$2,600  40km

You'd rather have to pay $2,600 for all your campus links than
$1,300 for all your LAN links?

My preference would be quite different.  I'd much rather pay $1,300 for
the LAN links than $2,600 for the Campus links.

Owen

On Dec 13, 2007, at 1:51 PM, Stephen Sprunk wrote:



Thus spake "Chris Cole" <[EMAIL PROTECTED]>

The 40km/10km cost ratio is between 1.6x and 2x, depending on
the source.

The 10km/4km cost ratio is between 1.15x and 1.3x, again
depending on the source.


If those numbers translate into prices (not costs), then I'd prefer  
to see 40km and 4km optics, with no 10km optics.  The important  
point is that the 40km optics need to be able to handle 4.1km links  
with no attenuators, preferably without any human tuning at all.   
You only pay the extra capital cost once (if there even is any, due  
to more volume of fewer parts), but you pay labor and sparing over  
and over.


S

Stephen Sprunk "God does not play dice."  --Albert Einstein
CCIE #3723 "God is an inveterate gambler, and He throws the
K5SSS      dice at every possible opportunity." --Stephen Hawking




Re: large organization nameservers sending icmp packets to dns servers.

2007-08-06 Thread Owen DeLong



On Aug 6, 2007, at 9:13 AM, Leigh Porter wrote:




But why would they care where the nameserver is? Point 2 would seem to
be a little stupid a thing to assume. Also, what happens if, at that
moment, the ICMP packet is stuck in a queue for a few ms making the
shortest route longer.


While point 2 is a bad assertion if you depend completely upon it, it's
not necessarily a bad starting point if you have no other data to go on.

1.  90+% of resolvers are topologically proximate to either the
requestor, or, the requestor's NAT box that you will have to
talk to anyway.

2.  At the GLB level, you really don't have any data other than the
IP address of the resolver upon which to base your GLB decision.
Since you'll be right 90+% of the time, and, only sub-optimal,
not broken the other <10% of the time, it generally works OK.

3.  When I worked for Netli, before they were acquired in what I would
call a much less than ethical transaction, we maintained an
exception table for cases where we learned that the DNS
resolver was not topologically proximate to the requestors
that flowed through it.  We also spent a fair amount of time
explaining the benefits of having the resolver be topologically
proximate to our customers and their customers.

The Netli system was designed to be quite gentle in the amount of
probing it did, but, we did occasionally get messages from people
with paranoid IDS boxes.  Usually, once we explained that our
efforts were directed at improving the quality of service to their
users, and how the system worked and how little traffic we sent
their way to accomplish this, they were happy to reconfigure their
alarm preferences.

I don't have first hand knowledge of anyone else's use of these
kinds of ICMP probes, but, I would say that generally, they are
somewhat useful and mostly harmless.
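
A toy sketch of the decision logic described above; the site names,
prefixes, and exception table are invented for illustration and are not
Netli's actual implementation:

    import ipaddress

    # Best serving site per resolver prefix (normally learned from probing).
    RESOLVER_TO_SITE = {
        ipaddress.ip_network("192.0.2.0/24"):    "site-west",
        ipaddress.ip_network("198.51.100.0/24"): "site-east",
    }

    # Exception table: resolvers known NOT to be topologically near their clients.
    EXCEPTIONS = {
        ipaddress.ip_address("198.51.100.53"): "site-west",
    }

    def pick_site(resolver_ip, default="site-west"):
        """Pick a serving site based only on the resolver's address."""
        addr = ipaddress.ip_address(resolver_ip)
        if addr in EXCEPTIONS:           # learned exception overrides the heuristic
            return EXCEPTIONS[addr]
        for net, site in RESOLVER_TO_SITE.items():
            if addr in net:              # assume resolver is near its clients (right ~90% of the time)
                return site
        return default                   # no data at all: some default behavior

    print(pick_site("198.51.100.53"), pick_site("198.51.100.10"), pick_site("203.0.113.7"))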

Owen





Re: Network Level Content Blocking (UK)

2007-06-07 Thread Owen DeLong


On Jun 7, 2007, at 6:44 AM, Iljitsch van Beijnum wrote:



[trimmed other lists, not sure if they'd appreciate nanog volumes]

On 7-jun-2007, at 11:06, James Blessing wrote:


As many people are aware there is an 'expectation' that 'consumer'
broadband providers introduce network level content blocking for
specified content on the IWF list before the end of 07.


Where is this list, what type of stuff is on it and how do you  
translate from the real-world identification of that which is to be  
blocked into some kind of restriction in the network?


Whose expectation is it?  If it is not a LAW, then, ISPs should reset
the expectation and go back to the real problems
of running a network.

Owen





Re: Security gain from NAT: Top 5

2007-06-06 Thread Owen DeLong

  #1 NAT advantage: it protects consumers from vendor
  lock-in.


Speaking of FUD...  NAT does nothing here that is not also accomplished
through the use of PI addressing.


  #2  NAT advantage: it protects consumers from add-on
  fees for addresses space.


More FUD.  The correct solution to this problem is to make it possible
for end users to get reasonable addresses directly from RIRs for
reasonable fees.


  #3  NAT advantage: it prevents upstreams from limiting
  consumers' internal address space.


Regardless of the amount of growth, do you really see the likelihood
of any household _EVER_ needing more than 65,536 subnets?
I don't even know the exact result of multiplying out 16*1024^6, but,
I'm betting you can't fill 65,536 subnets that big ever no matter how
hard you try.  So, again, I say FUD.
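
For the record, that product works out as follows (plain arithmetic):

    print(16 * 1024 ** 6)               # 18446744073709551616
    print(16 * 1024 ** 6 == 2 ** 64)    # True: the number of addresses in a single /64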


  #4  NAT advantage: it requires new protocols to adhere to
  the ISO seven layer model.


Quite the contrary... NAT has encouraged the development of hack upon
hack to accommodate these protocols.  Please explain to me how you
would engineer a call setup-tear-down protocol for an independent
audio stream that didn't require you to embed addresses in the payload.
Until you can solve this problem, we will have to have protocols that
break this model.  Other than from some sort of ISO purity model
(notice how popular OSI networking is today, compared to IP?), SIP
is actually a pretty clean solution to a surprisingly hard problem.
Unless you have a better alternative for the same capabilities, I'm
not buying it.  We shouldn't have to give up useful features for
architectural purity.  If the architecture can't accommodate real world
requirements, it is not the requirements that are broken.

That's sort of like saying that OSPF and BGP break the ISO layer model
because they talk about layer three addresses in layer 4-7 payload.
Heck, even ISIS is broken by that definition.  Again, I cry FUD.


  #5  NAT advantage: it does not require replacement security
  measures to protect against netscans, portscans, broadcasts
  (particularly microsoft's netbios), and other malicious
  inbound traffic.


??? This is pure FUD and patently untrue.  Example:  About the cheapest
NAT capable firewall you can buy is a Linksys WRT-54G.  If you put
real addresses on both sides of it and change a single checkbox in the
configuration GUI, you end up with a Stateful Inspection firewall that
gives you all the same security you had with the NAT, but, without the
penalties imposed by NAT.

Until you can show me a box that is more than USD 40 cheaper than
a WRT-54G that cannot have NAT turned off, again, I cry FUD.
Oh, btw, a WRT-54G sells for about USD 40 last time I bought one
brand new at Best Buy, so, that's a pretty hard metric to meet.


These are just some of the reasons why NAT is, and will continue to
be, an increasingly popular technology for much more than address
conservation.

Since each and every one of them is FUD, that is certainly the pot calling
the kettle black.  Unfortunately, time and again, American politics has
proven that FUD is a successful marketing tactic, so, you are probably
right, there will probably be a sufficient critical mass of ignorant
consumers and vendors that will buy into said FUD and avoid the real
solution in favor of continuing the abomination that is NAT and all the
baggage of STUN, difficult debugging, header mangling, address conflicts,
and the rest that tends to come with it.

Owen





Re: Security gain from NAT (was: Re: Cool IPv6 Stuff)

2007-06-04 Thread Owen DeLong


On Jun 4, 2007, at 1:41 PM, David Schwartz wrote:




On Jun 4, 2007, at 11:32 AM, Jim Shankland wrote:



Owen DeLong <[EMAIL PROTECTED]> writes:

There's no security gain from not having real IPs on machines.
Any belief that there is results from a lack of understanding.



This is one of those assertions that gets repeated so often people
are liable to start believing it's true :-).



Maybe because it _IS_ true.


*No* security gain?  No protection against port scans from  
Bucharest?

No protection for a machine that is used in practice only on the
local, office LAN?  Or to access a single, corporate Web site?


Correct.  There's nothing you get from NAT in that respect that you do
not get from good stateful inspection firewalls.  NONE whatsoever.


Sorry, Owen, but your argument is ridiculous. The original statement was
"[t]here's no security gain from not having real IPs on machines". If
someone said, "there's no security gain from locking your doors", would you
refute it by arguing that there's no security gain from locking your doors
that you don't get from posting armed guards round the clock?


Except that's not the argument.  The argument would map better to:

There's no security gain from having a screen door in front of your
door with a lock and dead-bolt on it that you don't get from a door
with a lock and dead-bolt on it.

I posit that a screen door does not provide any security. A lock and
deadbolt provide some security.  NAT/PAT is a screen door.
Not having public addresses is a screen door.  A stateful inspection
firewall is a lock and deadbolt.

Owen





Re: IPv6 Advertisements

2007-05-31 Thread Owen DeLong


On May 31, 2007, at 8:03 AM, Donald Stahl wrote:



The upside is that in the block you're expected to accept /48s,  
nobody will have a /32.  The downside is that anyone who gets a  
larger-than-minimum sized allocation/assignment can deaggregate  
down to that level.
I don't think ARIN is planning on giving out more than a /48 but less
than a /32 - at least that was the impression I got. End sites get a
/48, ISP's get a /32 or larger, and that's it (I could certainly be
wrong). As such, deaggregation in the /48 block should not be an issue
because no one will have more than a /48 in the first place.


-Don


Yes, you can get a prefix between /32 and /48 if you can justify it.  That
is certainly in line with the policy which resulted from proposal 2005-1.

Owen





Re: IPv4 multihomed sites statistics

2007-05-15 Thread Owen DeLong


Also, is there a way to find the average number of peers that these sites
multihome with?  If not, how large is it in general?


Difficult to say, and lots of people have tried.  Route-Views @ Oregon,
CAIDA, RIPE RIS, and many others have some data you might be able to
morph into that.



The first problem is to identify what defines a site.

Is an ASN a site?  Would all of AS701 be considered a single site?  If
not, then, how can one identify a site?  Is a prefix a site?  What about
prefixes that are aggregated and span multiple locations all over the
world?

There is no clear correlation between routing data and any logical
definition of the term site.  Is a site a building, a campus, the
collection of buildings networked in a given metro area?  There isn't even
a clear definition of the term site, really.

If you can define what it is you want to measure, then, perhaps it can be
measured.  However, the question as stated cannot be measured because
the terms used are not sufficiently defined as to allow measurably
correct meaning.

Owen







Re: Hotmail blackholing certain IP ranges ?

2007-04-26 Thread Owen DeLong

Tongue in cheek:

	Perhaps they upgraded to Vista on their servers and they are all waiting
for someone to come around and answer the "Someone is trying to send
mail through this server.  Cancel or Allow?" prompts.

Owen





Re: IP Block 99/8

2007-04-23 Thread Owen DeLong

All reachable from the ARIN meeting.

Owen

On Apr 23, 2007, at 7:46 AM, James Blessing wrote:



Shai Balasingham wrote:


We recently started to assign these blocks. So all the ranges are not
assigned yet. Following are some...

99.245.135.129
99.246.224.1
99.244.192.1


All reachable from here (as8468)

J
--
COO
Entanet International
T: 0870 770 9580
W: http://www.enta.net/
L: http://tinyurl.com/3bxqez






Re: UK ISP threatens security researcher

2007-04-21 Thread Owen DeLong
I think if you are referring to "public disclosure", yes, I think there's
little point in doing this, unless you are seeking attention.  Of course,
reporting a problem to the vendor privately always makes sense.


Public disclosure of the existence of a vulnerability and whatever
information is required to understand it well enough to mitigate
it, resolve it, or work around it is a good and useful thing.

Public disclosure of details of how to exploit the vulnerability
beyond what is required in my previous paragraph is not
useful and is both rude and counterproductive.  Generally,
however, I do not think it should be actionable or criminal.

If you leave your front door unlocked, that's dumb.  If I tell you
that you left your front door unlocked, that's a good thing.
If I tell your neighbors that you left your front door unlocked,
it's not necessarily helpful, but, it's not illegal, nor should it be.

OTOH, if you buy your lock from LockCo and I discover that
there is a key pattern that will open ALL LockCo locks, then,
it's good if I tell LockCo about that.  It's better if I also tell
the public so that people who choose to can either have
their locks repaired or can replace them if they so choose.
If I tell the public the exact key pattern required, that's not
so good, but, it's not illegal and it shouldn't be illegal or
actionable.  Now, if I used stolen LockCo engineering
diagrams to identify the key pattern in question, the use
of the stolen diagrams might be actionable and/or criminal.

Owen





Re: UK ISP threatens security researcher

2007-04-19 Thread Owen DeLong


On Apr 19, 2007, at 10:20 AM, Will Hargrave wrote:



Gadi Evron wrote:


"A 21-year-old college student in London had his internet service
terminated and was threatened with legal action after publishing  
details
of a critical vulnerability that can compromise the security of  
the ISP's

subscribers."

I happen to know the guy, and I am saddened by this.


In his blog post [1] he did admit to accessing other routers of  
Be's customers
using the backdoor password; this is probably [2] a criminal  
offence in the UK.


He admitted to logging in, but, was clear that he didn't actually modify or
inspect the routers in detail.  It looks like he did the minimum necessary
to verify the extent of the security risk.

IANAL either, but, I would say that such actions are probably not
prohibited in the spirit of the law, even if they are prohibited in the
letter of the law.

Generally, anti-intrusion laws fall under either anti-theft (I don't
think you can really say he stole bandwidth or service by these
actions) or anti-vandalism (I don't think you can really call
his actions vandalism).

He was definitely in a gray area and could have handled things better,
but, the ISP's actions are way over the top and beyond reason for the
situation in question.

Owen





Re: death of the net predicted by deloitte -- film at 11

2007-02-11 Thread Owen DeLong



On Feb 11, 2007, at 4:22 PM, Geo. wrote:





do what google is presumably doing (lots of fiber), or would they put
some capital and preorder into IDMR?


IDMR is great if you're a broadcaster or a backbone, but how does  
it help the last 2 miles, the phoneco ATM network or the ISP  
network where you have 10k different users watching 10k different  
channels? I'm not sure if it would help with a multinode  
replication network like what google is probably up to either  
(which explains why they want dedicated bandwidth, internode  
replication solves the backup problems as well).


In terms of available HD content, you're far more likely to face 10,000
customers watching 1,000 different channels, and, there will likely be
some clustering.  In that case, IDMR will help a lot with the exception of
the last 2 miles, where, the amount of bandwidth available to the home
will probably remain the limiting factor for some time in the US.


In places where MAE is a common household network delivery mechanism,
this is less of a factor.  I think it will probably take the US a decade
or so to get to where much of Europe and Japan is today.




Also forgetting that bandwidth issue for a moment, where is the  
draw that makes IPTV better than cable or satellite?  I mean come  
on guys, if the world had started out with IPTV live broadcasts  
over the internet and then someone developed cable, satellite, or  
over the air broadcasting, any of those would have been considered  
an improvement. IPTV needs something the others don't have and a  
simple advantage is that of an archive instead of broadcast medium.  
The model has to be different from the broadcast model or it's  
never going to fly.


IPTV today isn't an improvement, much as VOIP 5 years ago had nothing to
offer over POTS.  Today, VOIP is rapidly gaining popularity even though
the differentiators for it are small, because it does provide some cost
savings in some cases.

As IPTV and especially HD IPTV starts to mature, and, as users begin to
reclaim fair use and space/time/device shifting rights that are theirs
under the copyright act and take back what the MPAA and RIAA continue to
try to block, the rapid and convenient sharing of content, the reduced
cost of delivery to the content providers, and, other factors will
eventually cause IPTV to present an improvement over today's existing
unidirectional services.


Today IPTV is in its infancy and is strictly a novelty for early adopters.
As the technology matures and as the market develops an understanding of
the possibilities, creating pressure on manufacturers and content providers
to offer better products, it will gradually become compelling.


TIVO type setup with a massive archive of every show so you can not  
only watch this weeks episode but you can tivo download any show  
from the last 6 years worth of your favorite series is one heck of  
a draw over cable or satellite and might be enough to motivate the  
public to move to a different service. A better tivo than tivo. As  
for making money, just stick a commercial on the front of every  
download. How many movies are claimed downloaded on the fileshare  
networks every week?


There are lots of ways to make money.  Personally, I think the long-term
winning model will be something similar to Netflix with IP replacing the
USPO at layers 1-4.  Other models will certainly be tested and probably
some of them will succeed, too.  However, Netflix without the postal
delays or logistics could be compelling, even if it were 1.5-2x the
current Netflix pricing.  Realistically, we should get to a point in the
technology relatively soon where a movie can be shipped across the net for
about the same cost as postage today.


Owen



Re: Google wants to be your Internet

2007-01-20 Thread Owen DeLong



On Jan 20, 2007, at 10:37 AM, Rodrick Brown wrote:



On 1/20/07, Mark Boolootian <[EMAIL PROTECTED]> wrote:



Cringley has a theory and it involves Google, video, and  
oversubscribed

backbones:

  http://www.pbs.org/cringely/pulpit/2007/pulpit_20070119_001510.html



The following comment has to be one of the most important comments in
the entire article and its a bit disturbing.

"Right now somewhat more than half of all Internet bandwidth is being
used for BitTorrent traffic, which is mainly video. Yet if you
surveyed your neighbors you'd find that few of them are BitTorrent
users. Less than 5 percent of all Internet users are presently
consuming more than 50 percent of all bandwidth."


I'm not sure why you find that disturbing.  I can think of two reasons,
and, they depend almost entirely on your perspective:

If you are disturbed because you know that these users are early adopters
and that eventually, a much wider audience will adopt this technology
driving a need for much more bandwidth than is available today, then,
the solution is obvious.  As in the past, bandwidth will have to increase
to meet increased demand.

If you are disturbed by the inequity of it, then, little can be done.
There will always be classes of consumers who use more than other classes
of consumers of any resource.  Frankly, looking from my corner of the
internet, I don't think that statistic is entirely accurate.  From my
perspective, SPAM uses more bandwidth than BitTorrent.

OTOH, another thing to consider is that if all those video downloads
being handled by BitTorrent were migrated to HTTP connections
instead the required amount of bandwidth would be substantially
higher.

Owen



Re: How big a network is routed these days?

2007-01-17 Thread Owen DeLong

4.3.2.1 Single Connection

The minimum block of IP address space assigned by ARIN to end-users is a /20. [...]


4.3.2.2 Multihomed Connection

For end-users who demonstrate an intent to announce the  
requested space in a multihomed fashion, the minimum block of IP  
address space assigned is a /22. [...]


4.4 Micro-allocation

ARIN will make micro-allocations to critical infrastructure  
providers of the Internet, including public exchange points,  
core DNS service providers (e.g. ICANN-sanctioned root, gTLD and  
ccTLD operators) as well as the RIRs and IANA. These allocations  
will be no longer than a /24 using IPv4 or a /48 using IPv6. [...]



As far as I know, all of the PI /24's are thus "legacy" in nature.


As the above snippet from the policy manual suggests (and as my  
experience confirms) there are recent assignments made to end users  
by ARIN under the micro-allocation policy which were made with the  
expectation that individual /24s would be advertised globally.  
Clearly these are not the most usual case, as the description of  
those who qualify for such assignments above indicates, but it  
would be a mistake to assume that *all* /24 assignments are legacy.


Actually, generally, the expectation under 4.4 is that the addresses  
will not be advertised at all for the most
part, since, generally, there's no need to advertise the route to the  
exchange point, itself, into the global
routing table.  4.4 is intended to support internet exchanges, ala  
MAEs, etc.


In terms of 4.3.2.1 and 4.3.2.2, I believe ARIN has worked very hard  
to express no expectation or
intent about how assignments relate to route advertisements and  
routing policy.


Owen



smime.p7s
Description: S/MIME cryptographic signature


Re: Comment spammers chewing blogger bandwidth like crazy

2007-01-13 Thread Owen DeLong

Surprise, a spammer is operating from IPs with fake registration data.
I'm shocked... NOT!

Owen

On Jan 13, 2007, at 11:53 AM, Gregory Hicks wrote:





Date: Sat, 13 Jan 2007 18:58:02 + (GMT)
From: "Chris L. Morrow" <[EMAIL PROTECTED]>
Subject: Re: Comment spammers chewing blogger bandwidth like crazy
To: Thomas Leavitt <[EMAIL PROTECTED]>
Cc: nanog 



On Sat, 13 Jan 2007, Thomas Leavitt wrote:


Why has 195.225.177.46, an IP in Ukraine, been eating a tremendous
amount of bandwidth? What are they doing?


this isn't in the ukraine, it's in NYC behind ISPrime. Phil is fairly
helpful, you might ask them to 'figure out what the heck is going
on'

with that ip :)

-Chris
(unless the ukraine got a whole lot closer to IAD than I thought:
64 bytes from 195.225.177.46: icmp_seq=1 ttl=55 time=13.1 ms
64 bytes from 195.225.177.46: icmp_seq=2 ttl=55 time=24.5 ms


Um-m-m-m...

% Information related to '195.225.176.0 - 195.225.179.255'

inetnum:        195.225.176.0 - 195.225.179.255
netname:        NETCATHOST
descr:          NetcatHosting
country:        UA
admin-c:        VS1142-RIPE
tech-c:         VS1142-RIPE
status:         ASSIGNED PI
mnt-by:         RIPE-NCC-HM-PI-MNT
mnt-lower:      RIPE-NCC-HM-PI-MNT
mnt-by:         NETCATHOST-MNT
mnt-routes:     NETCATHOST-MNT
notify:         [EMAIL PROTECTED]
changed:        [EMAIL PROTECTED] 20040304
source:         RIPE
remarks:        ***
remarks:        * Abuse contacts: [EMAIL PROTECTED] *
remarks:        ***

person:   Vsevolod Stetsinsky
address:  01110, Ukraine, Kiev, 20Á, Solomenskaya street. room  
206.

phone:+38 050 6226676
e-mail:   [EMAIL PROTECTED]
nic-hdl:  VS1142-RIPE
changed:  [EMAIL PROTECTED] 20040303
source:   RIPE


)


-
Gregory Hicks   | Principal Systems Engineer
Cadence Design Systems  | Direct:   408.576.3609
555 River Oaks Pkwy M/S 9B1
San Jose, CA 95134

I am perfectly capable of learning from my mistakes.  I will surely
learn a great deal today.

"A democracy is a sheep and two wolves deciding on what to have for
lunch.  Freedom is a well armed sheep contesting the results of the
decision." - Benjamin Franklin

"The best we can hope for concerning the people at large is that they
be properly armed." --Alexander Hamilton





smime.p7s
Description: S/MIME cryptographic signature


Anyone have details on MCI outage yesterday

2007-01-07 Thread Owen DeLong

Yesterday, around 10:00 AM Pacific Time 1/5/07, Kwajalein Atoll lost all
connectivity to the mainland. We were told this was because MCI "lost 40
DS-3s due to someone shooting up a telephone pole in California"

This affected Internet, Telephones (although inbound phone calls to the
islands were possible), and television.

Kwajalein access to the mainland is via satellite to Washington, connected to a
terrestrial link to Georgia.

The outage lasted more than 12 hours.

It seems odd to me in this day and age that:

1.  There wasn't a redundant path for these circuits.

2.  It took 12 hours to restore or reroute these circuits.

Any details would be appreciated.

Thanks,

Owen

P.S. For those wondering "Where?" There are excellent resources in  
Wikipedia, but,
the short answer is Kwajalein Atoll is part of the Marshall Islands  
and is about the

midpoint on a line from Honolulu to Sydney.  About 9N by 165E.




smime.p7s
Description: S/MIME cryptographic signature


Re: Collocation Access

2006-12-27 Thread Owen DeLong



On Dec 27, 2006, at 12:42 PM, Jim Popovitch wrote:



On Wed, 2006-12-27 at 09:06 -0800, Owen DeLong wrote:

Savvis wants to retain your ID if they issue a cage-key to you.


If they (or others) asked you to let them hold $50 cash to cover their
key/lock replacement costs would you feel more comfortable?

-Jim P.


Um, no.  I would, however, be willing to have them inform the primary
contact that the key had not been returned and then bill the customer
appropriately for whatever remedy was chosen by the primary contact.

Owen



Re: Collocation Access

2006-12-27 Thread Owen DeLong


Savvis wants to retain your ID if they issue a cage-key to you.

Owen

On Dec 27, 2006, at 8:52 AM, Joe Maimon wrote:





Randy Epstein wrote:


throughout the US.  In recent memory, I can think of two large  
collocation
centers that retain your ID.  One is in Miami and one in New York  
(I don't
think I need to name names, most of you know to which I refer).   
All others

(including AT&T) have never asked to retain my ID.


I dont mind naming names. telex. I left.




Re: Boeing's Connexion announcement

2006-10-15 Thread Owen DeLong
The actual law is insanely vague and requires "proof and a written  
record".


The court system and IRS have been all over the map about what  
constitutes
proof vs. just a written record, and, as such, accounting trolls have  
developed

a myriad of different policies.

However, I think we have wandered far afield from the operational  
portion

of this topic.

Owen



PGP.sig
Description: This is a digitally signed message part


Re: Boeing's Connexion announcement

2006-10-15 Thread Owen DeLong
This may be a nit, but, you will _NEVER_ see AC power at any, let  
alone all of

the seats.  Seat power that works with the iGo system is DC and is not
conventional 110 AC.

Owen

On Oct 15, 2006, at 3:39 AM, Mikael Abrahamsson wrote:



On Sun, 15 Oct 2006, Patrick W. Gilmore wrote:

e-mail from the plane. :)  Lack of seat power was not an issue, I  
just had two batteries.  And this was BOS -> MUC, which ain't a  
short flight.


It's quite likely that on a grander scale of things, it's better  
economy that the few people who want to use their laptop the whole  
flight, do get two batteries, than doing the investment of putting  
AC power in all seats.


Otoh, more batteries on planes increases the risk of fire due to  
exploding batteries happening in the plane :P


--
Mikael Abrahamssonemail: [EMAIL PROTECTED]




PGP.sig
Description: This is a digitally signed message part


Re: Broadband ISPs taxed for "generating light energy"

2006-10-10 Thread Owen DeLong


On Oct 10, 2006, at 8:08 AM, Bill Woodcock wrote:




Sounds reasonable to me. Since the sale of energy is
usually measured in kilowatt-hours, how many kwh of
energy is transmitted across the average optical fibre
before it reaches the powered amplifier in the destination
switch/router?


Also, remember, it's _net_ energy delivered which matters...  I'm  
sure the

customer is delivering light back toward the ISP as well.

-Bill


From my reading of the article, it appears that they are attempting to
tax, at 12.5 percent, the ISP's entire service revenue, because that
revenue is derived from the delivery of light energy, thus making the
"IP service" actually a "utility product".

It looks like the tax department is arguing that what is currently being
billed/taxed as a service is actually a product and such product should
be subject to VAT.

It would be akin to California adding 7.75% to my ISP bill for sales  
tax.


Owen



PGP.sig
Description: This is a digitally signed message part


Re: that 4byte ASN you were considering...

2006-10-10 Thread Owen DeLong


On Oct 10, 2006, at 4:34 AM, [EMAIL PROTECTED] wrote:




Well, it will break any application that considers everything
consisting of numbers and dots to be an IP address/netmask/inverse
mask.  I don't think many applications do this, as they will then
treat the typo "193.0.1." as an IP address.


An application using "[0123456789.]*" will not break when it
sees the above typo. 193.0.1. *IS* an IP address-like object
and any existing code will likely report it as mistyped
IP address or mask.


Actually, most code will parse it as the equivalent of 193.0.0.1.

Most of the IP address parsers I have encountered will do
zero insertion in the middle, such that 10.253 is parsed the
same as 10.0.0.253, 10.3.24 is parsed as 10.3.0.24, 192.159.8
is parsed as 192.159.0.8, etc.  I'm not saying I think this is
necessarily good, but, it is the behavior observed.

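For what it's worth, a small Python sketch of what the classic BSD
inet_aton() parser does with those shorthand forms (it goes through the
platform C library, so behavior on an unusual libc is an assumption):

    import socket
    import struct

    # Shorthand dotted forms expanded by the platform inet_aton():
    for shorthand in ("10.253", "10.3.24", "192.159.8"):
        packed = socket.inet_aton(shorthand)        # C library parser
        print(shorthand, "->", socket.inet_ntoa(packed))
    # prints: 10.253 -> 10.0.0.253, 10.3.24 -> 10.3.0.24, 192.159.8 -> 192.159.0.8

    # The plain 32-bit integer form mentioned elsewhere in the thread:
    print(socket.inet_ntoa(struct.pack("!I", 3238002993)))   # 193.0.1.49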

The real question is what does the notation 1.0 add that the
notation 65536 does not provide?


It is (for me, and I guess most other humans) much easier to read and
remember, just as 193.0.1.49 is easier to read and remember than
3238002993.  It also reflects that on the wire there are two 16
bit numbers, rather than 1 32-bit number.


In my experience, ISPs do not transmit numbers by phone calls
and paper documents. They use emails and web pages which allow
cut'n'paste to avoid all transcription errors. And I know of no
earthly reason why a general written representation needs to
represent the format of bits on the wire. How many people
know or care whether their computer is big-endian or little
endian?


Your experience differs from mine. There are lots of situations
where ASNs are discussed on telephone calls and/or transcribed
to/from yellow stickies, etc.

As to matching bits on the wire, no, it's not necessary, but, it is
a convenient side-effect.

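As an aside, the mapping between the two notations is trivial in either
direction; a minimal sketch, assuming the dotted form is simply the high
and low 16 bits separated by a dot:

    def to_dotted(asn: int) -> str:
        """Render a 32-bit AS number as high16.low16, e.g. 65536 -> '1.0'."""
        return f"{asn >> 16}.{asn & 0xFFFF}"

    def from_text(text: str) -> int:
        """Accept either plain ('65536') or dotted ('1.0') AS numbers."""
        if "." in text:
            high, low = (int(p) for p in text.split("."))
            return (high << 16) | low
        return int(text)

    assert to_dotted(65536) == "1.0"
    assert from_text("1.0") == 65536 == from_text("65536")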

1. If you are a 16-bit AS speaker (ASN16), then AS65536 is not just
the next one in the line, it is an AS that will have to be  
treated

differently.  The code has to recognize it and replace it by the
transistion mechanism AS.


And how is a special notation superior to

  if asnum > 65535 then
  process_big_as
  else
  process_little_as

In any case, people wishing to treat big asnums differently will need
to write new code so the dot notation provides them zero benefit.


The dot notation is an improvement in human readability. It offers
no benefit to machines as they don't care as long as they have a good
parser for whatever notation is chosen.  The notation is for the human
interface.


My point is that if we do NOT introduce a special notation
for ASnums greater than 65536, then tools only need to be
checked, not updated. If your tool was written by someone
who left the company 7 years ago then you might want to
do such checking by simply testing it with large as numbers,
not by inspecting the code. The dot notation requires that
somebody goes in and updates/fixes all these old tools.


So will the colon notation for IPv6 addresses.

Owen



PGP.sig
Description: This is a digitally signed message part


Re: ARIN sucks?

2006-09-17 Thread Owen DeLong


On Sep 17, 2006, at 12:22 PM, Jon Lewis wrote:



On Sun, 17 Sep 2006, Hank Nussbacher wrote:

Also, you're incorrect on the process.  You can definitely get an  
ASN

without IP space.


I find that fascinating.  The ARIN template:
http://www.arin.net/registration/templates/asn-request.txt
states:
12. Indicate all IP address blocks currently in use in
   your network.
You fill in "none" and ARIN has given you an ASN?  Under what  
conditions?


If you have no IP space in use, what do you plan to do with an  
ASN?  It is pretty common to get an ASN from ARIN while using PA IP  
space, never getting (or requesting) space from ARIN or other RIRs.


Actually, in more than one case, I have been able to fill in "none"  
on an initial
assignment template and still get the ASN.  The ASN is a prerequisite  
to qualifying
under the multihoming end-user policy, so, yes, if you are starting  
from zero and
applying as an initial end-user, you can apply for an ASN with "none"  
as long as
you can demonstrate that you have contracts for service with at least  
two ISPs.


You can then use that ASN to apply for IP space under the multi-homed  
end-user

policy.

Of course, if you can show existing utilization of PA space, that  
becomes much
easier, because it is easier for ARIN to verify your utilization and  
requirements,
but, with sufficient appropriate documentation, you can apply without  
existing

IP space and get an ASN and an assignment.

Owen



PGP.sig
Description: This is a digitally signed message part


Fwd: Blogger post failed

2006-09-13 Thread Owen DeLong
Apologies to the list, but, I have no other way to contact the person who
thought this was a good idea...

Could whoever thought it was a good idea to gateway NANOG messages to a
blogger please fix their blogger gateway or turn it off.

Owen

Begin forwarded message:

From: [EMAIL PROTECTED]
Date: September 13, 2006 9:14:44 AM PDT (CA)
To: [EMAIL PROTECTED]
Subject: Blogger post failed

Blogger does not accept multipart/signed files.
Error code: 7.5CD98C

Original message:
From: [EMAIL PROTECTED]
Date: Wed, 13 Sep 2006 08:56:30 -0700
Subject: Re: [Fwd: Kremen VS Arin Antitrust Lawsuit - Anyone have feedback?]

None

Re: [Fwd: Kremen VS Arin Antitrust Lawsuit - Anyone have feedback?]

2006-09-13 Thread Owen DeLong


On Sep 13, 2006, at 8:43 AM, D'Arcy J.M. Cain wrote:



On Wed, 13 Sep 2006 05:37:05 -0700
David Conrad <[EMAIL PROTECTED]> wrote:

I'm sure the same argument was used for telephone numbers when
technical folk were arguing against number portability.


Oh come on.  You know perfectly well that phone numbers are not the
same as IP.  No one knows me by my IP address.  They know me by my
email address(es).  Heck, even I don't know my own IP address without
running ifconfig and I installed it and maintain the system.

If we were still calling central and asking "Hi Mabel, can you put me
through to Doc," no one would give a rat's ass about phone number
portability.  Notice that no one is getting worked up about circuit
number portability.


In point of fact, phone numbers as David is describing them are much
more of a parallel to DNS than to IP.  BTNs (Billing Telephone Numbers)
which are not portable are like IP addresses.

The way the telephone system works is when you dial a number, it is
looked up in the SS7 database and mapped to a BTN. The call is then
routed based on that BTN to its destination, with the dialed number in
the DNIS field and the BTN in the destination field.

Much like an HTTP request to a virtual server.
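
A toy sketch of that analogy (the numbers and the one-entry "database"
are purely hypothetical):

    # Hypothetical stand-in for the SS7 lookup:
    # dialed (ported) number -> BTN actually used for routing.
    ss7_db = {"408-555-0100": "650-555-0199"}

    def route_call(dialed: str) -> dict:
        btn = ss7_db.get(dialed, dialed)   # un-ported numbers route on themselves
        return {"routed_on": btn, "DNIS": dialed}

    # The call is delivered on the BTN while the originally dialed number
    # rides along in DNIS -- much as a packet is routed on an IP address
    # while the HTTP Host header names the virtual server.
    print(route_call("408-555-0100"))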

Owen



PGP.sig
Description: This is a digitally signed message part


Re: Kremen's Buddy?

2006-09-12 Thread Owen DeLong


On Sep 12, 2006, at 4:52 PM, Richard A Steenbergen wrote:



On Tue, Sep 12, 2006 at 06:55:11PM -0400, Joe Abley wrote:


I find the references to alleged, inherent difficulties with the ARIN
resource assignment process increasingly tedious. Even if the
templates were "impossible to decipher", this isn't the forum to
discuss them.

In my opinion, you do the argument in favour of open trading of
addresses as commodities a rank disservice by linking it to this kind
of FUD.


Ever notice the only folks happy with the status quo are the few who
already have an intimate knowledge of the ARIN allocation process,
and/or
have the right political connections to resolve the "issues" that  
come up

when dealing with them?


I'm not sure I completely buy this.  However, I guess these days I'm
one of the "few" who already have an intimate knowledge.
I do remember being frustrated with the process when I was new
to the process and even more so when the process was new.
However, I can say that today, the process is much better documented,
simpler, and more efficient than it was 10 or even 5 years ago.

Try looking at it from an outsider's point of view instead. If  
you're new
to dealing with ARIN, it is not uncommon to find the process is  
absolutely
baffling, frustrating, slow, expensive, and requiring intrusive  
disclosure

just shy of an anal cavity probe.


I've had several clients who indeed perceived it this way.  However, in
each of their cases, I was able to spend a few hours working with them
to collect the necessary information, fill out the ARIN template on  
their

behalf, and, obtain address space for them in between 5 and 20
man hours.  In terms of elapsed calendar time from initial submission
to allocation, it ranged from 4-10 days if you don't count delays  
induced

by my clients not having certain prerequisites in place on time.
In any kind of free market system, competition would have  
bitchslapped the
current ARIN way of doing things a long, long time ago. Personally  
I find
the single most compelling reason to move to IPv6 to be the removal  
of any

justification for ARIN's continued existence in its current form.

I'm not sure this is true.  I think if you compare the ARIN process  
for getting

IP addresses to the FCC process for getting spectrum, ARIN's process is
MUCH easier.  Care to venture what it takes to get an allocation for a
geosynchronous orbital slot?  Guaranteed that's quite a bit harder than
ARIN's process.  Ever try to get your own issuance of phone numbers
from NANPA or another telephone number registry?  Yeah, that's quite
a bit harder than ARIN, too.

Can you please point to another registry for globally unique limited
numeric addresses which is easier to deal with than ARIN?
Somehow I suspect the only folks who wouldn't welcome this are the  
ones

who benefit from the one thing ARIN is actually good at doing, namely
paying for frequent business class travel and accommodations to exotic
locations around the world under the pretense of "meetings". Hrm, guess I
had better offer that dinner in St Louis is on me for whichever one of my
friends on the "ARIN travel plan" complains about this post first. :)


While I have not always seen eye-to-eye with ARIN, this comment is
flat out unjustified in my opinion.  ARIN works very hard to provide
an open and transparent governance process.  They put significant
effort into outreach trying to make the process easier and more
accessible to newcomers.  They have made significant effort to
help people gain access to the addresses they need while still
trying to be an effective gatekeeper against unwarranted hoarding
or unjustified address acquisition.

I'm not on the "ARIN travel plan", but, I do find the public policy
meetings a useful forum. I think that combined with the PPML,
they provide about the best possible process for the evolution of
IP policy in the ARIN service region.  If you have a better idea,
let's hear it.  How would you like to see things done?  The primary
difference between whining and constructive criticism is that
constructive criticism includes suggested remedies to the
situation.

Owen



PGP.sig
Description: This is a digitally signed message part


Re: [Fwd: RE: Kremen VS Arin Antitrust Lawsuit - Anyone have feedback?]

2006-09-12 Thread Owen DeLong


Look at this page: http://www.arin.net/cgi-bin/member_list.pl
Every one of those organizations has disclosed to ARIN
all their customer names, etc... That is the way things
are done. If you don't want to play ball like the rest
of us, then you are not going to get IP addresses. That's
the simple truth. We have a level playing field and you
are asking for special privileges that other organizations
don't feel are necessary.

--Michael Dillon


Michael,
I think you are confusing ARIN membership with ARIN
resource recipient.  The two are not synonymous although there
is a great deal of overlap.

An end-user recipient is not necessarily an ARIN member.
An ARIN member is not necessarily a recipient.  True, all
ISP recipients are ARIN members since that is an automatic
aspect of their subscriber status, providing much of the overlap,
but, not the complete definition.

Owen



PGP.sig
Description: This is a digitally signed message part


Re: [Fwd: RE: Kremen VS Arin Antitrust Lawsuit - Anyone have feedback?]

2006-09-11 Thread Owen DeLong

IP addresses appear to be property - - read http://news.findlaw.com/
hdocs/docs/cyberlaw/kremencohen72503opn.pdf.  Given that domain names
are property, IP addresses should be property, especially in
California where our constitution states "All things of value are
property"


I'm not sure how you can say that 32 bit integers have monetary
value.  There are more than 4 billion of them and anyone can
use any number they choose.  What is valuable is the unique
registration service which provides for a set of cooperating
entities to share a single 32 bit number space without collision.


Also, what about ARIN's hardcore attitude making it near impossible
to acquire IP space, even when you justify its use?  I have had
nightmares myself, and MANY of my colleagues share similar experiences.
I am having an issue right now with a UNIVERSITY in Mexico trying to
get IPs from the Mexican counterpart.  Why is it that they involve lawyers,
ask you for all your customers' names, etc.?  This is more information than
I think they should be requiring. Any company that wishes to engage in
business as an ISP or provider in some capacity should be granted the
right to their own IP space. We cannot trust using IPs SWIPed to us by
upstreams and the like. It's just not safe to do that and you lose control.


I have a great deal of difficulty identifying with this.  The  
information ARIN

requests is, in my experience, reasonable and necessary for them to
accurately verify that your request is in compliance with allocation
policies.  If you don't provide customer names, you can claim any number
of customers you want and fabricate as large an artificial network as
you like with no checks or balances.

Having said that, in my experience, a properly filled out template in
compliance with the policies has little or no difficulty getting  
addresses

issued by ARIN.  If you don't like the policies, then, there is an open
process to change them.  Having participated in that process for
a number of years and having worked actively to make it possible
to get address space for smaller entities (2002-3, Assignments
of /22, for example) and portable IPv6 assignments for end-users
(Policy 2005-1), I know it is possible to change ARIN policy.  However,
like any form of governance, this is a slow process and requires the
building of consensus.

Actually, is there anyone else who shares these nightmares with me?
I brought up the lawsuit with Kremen and ARIN to see if this is a  
common

issue.  What are your views, and can someone share nightmare stories?


There may be people who share your nightmares, but, I suspect it
would be less of a nightmare if you worked with the ARIN staff
instead of railing against them.

Don't get me wrong, I think there has to be SOME due dilligence,
however their methodology is a bit hitlerish.


This is completely opposite of my experience.  There was a time
when I might have agreed with you, but, ARIN has changed a lot
and is a much friendlier organization today than even 5 years ago.

If you have had similar problems, contact me off list or on, if you  
wish.

I'd love to talk to you. AIM is preferred.


I've had the opposite experience across a number of
ARIN allocations and assignments for organizations of various
sizes.

Owen



PGP.sig
Description: This is a digitally signed message part


Re: [Fwd: Kremen VS Arin Antitrust Lawsuit - Anyone have feedback?]

2006-09-08 Thread Owen DeLong


On Sep 8, 2006, at 10:33 AM, Stephen Sprunk wrote:



Thus spake <[EMAIL PROTECTED]>

[ I said ]
The debate there will be around the preferential treatment that  
larger

ARIN members get (in terms of larger allocations, lower per address
fees, etc), which Kremen construes as being anticompetitive via
creating artificial barriers to entry.  That may end up being  
changed.


Your statement about preferential treatment is factually
incorrect. Larger ARIN members do not get larger allocations.
It is the larger network infrastructures that get the larger
allocations which is not directly tied to the size of the
company. Yes, larger companies often have larger infrastructures.


And that's the point: A company that is established gets  
preferential treatment over one that is not; that is called a  
barrier to entry by the anti-trust crowd.  You may feel that such a  
barrier is justified and fair, but those on the other side of it  
(or more importantly, their lawyers) are likely to disagree.



As for fees, there are no per-address fees and there
never have been. When we created ARIN, we paid special
attention to this point because we did not want to create
the erroneous impression that people were "buying" IP
addresses. The fees are related to the amount of effort
required to service an organization and that is not
directly connected to the number of addresses.


Of course it's directly connected; all you have to do is look at  
the current fee schedule and you'll see:


/24 = $4.88/IP
/23 = $2.44/IP
/22 = $1.22/IP
/21 = $0.61/IP
/20 = $0.55/IP
/19 = $0.27/IP
/18 = $0.27/IP
/17 = $0.137/IP
/16 = $0.069/IP
/15 = $0.069/IP
/14 = $0.034/IP

While the price points quoted may be accurate, they don't really reflect
the pricing tiers in use at ARIN, and I notice you completely ignore the
fact that no matter how much more than a /14 you have, you pay $18,000,
so additional allocations don't really cost anything.

Further, the prices you suggest refer to registration fees for ISPs,  
but, the current
fee schedule if you are NOT an ISP is a bit different (end user  
subscriber):


/24 = $100/year
/23 = $100/year
/22 = $100/year
/21 = $100/year
...
/8 = $100/year
etc.

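The arithmetic behind both lists is easy to sketch; the annual fees
plugged in below ($1,250 for the small ISP tier, $9,000 for the /15-/14
tier, the $100 end-user flat fee) are the historical figures implied by
the numbers quoted in this thread, so treat them as assumptions rather
than a current fee schedule:

    def usd_per_ip(annual_fee: float, prefix_len: int) -> float:
        """Annual registration fee spread across every address in the block."""
        return annual_fee / 2 ** (32 - prefix_len)

    print(round(usd_per_ip(1250, 24), 2))   # ~4.88, the /24 ISP figure above
    print(round(usd_per_ip(9000, 14), 3))   # ~0.034, the /14 ISP figure above
    print(round(usd_per_ip(100, 16), 4))    # ~0.0015 for an end-user /16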

So, just between the two ends of the fee schedule, we have a  
difference of _two orders of magnitude_ in how much a registrant
pays divided by how much address space they get.  Smaller folks may  
use this to say that larger ISPs, some of whose employees sit on  
the ARIN BOT/AC, are using ARIN to make it difficult for  
competitors to enter the market.


Since you are paying for registration services and not for the IPs  
themselves, that is perfectly reasonable.
End users' registration needs are simple, thus, the $100/year flat
fee.  ISPs do some volume of
sub-assignment registration and as a result, the larger the network,  
the more registration effort
involved.  However, the effort does not scale linearly with the  
address space and the fee

structure reflects this.
Since that argument appears to be true _on the surface_, ARIN will  
need to show how servicing smaller ISPs incurs higher costs per  
address and thus the lower fees for "large" allocations are simply  
passing along the savings from economy of scale.  Doable, but I  
wouldn't want to be responsible for coming up with that proof.


I don't even think it is all that difficult.  Especially given the  
end-user fees.


Besides the above, Kremen also points out that larger prefixes are  
more likely to be routed, therefore refusing to grant larger  
prefixes (which aren't justified, in ARIN's view) is another  
barrier to entry.  Again, since the folks deciding these policies  
are, by and large, folks who are already major players in the  
market, it's easy to put an anticompetitive slant on that.


It is my experience that any prefix within ARIN policy is generally  
equally routable.  I would say that
in my experience, Kremen's assertion in this area is false.   
Additionally, the characterization of
the ARIN policy process is largely detached from reality.  While the  
BoT technically has final
and complete authority, I cannot recall a situation in which a policy  
with consensus was not
accepted by the board, nor can I recall a situation where the board  
adopted a policy without
consensus.  Since the public policy meetings and mailing lists where  
consensus is judged
are open to any interested party, it is very hard to view this as an  
anti-competitive act in my

opinion.

Owen



PGP.sig
Description: This is a digitally signed message part


BCP Question: Handling trouble reports from non-customers

2006-09-01 Thread Owen DeLong

I think my previous post may have touched on a more global issue.

Given the number of such posts I have seen over time, and, my  
experiences trying to
report problems to other ISPs in the past, it seems to me that a high  
percentage of
ISPs, especially the larger ones, simply don't allow for the  
possibility of a non-customer
needing to report a problem with the ability to reach one of their  
customers.


I'm curious how people feel about this.  As I see it, there are a  
number of possible

responses:

1.  Don't help the person at all.  Tell them to contact the customer they
    are trying to reach and have the customer report the problem.  This
    seems, by far, to be the most popular approach in my experience, but,
    it makes for a very frustrating experience to the person reporting the
    problem.

2.  Accept any trouble report and attempt to resolve it or determine that
    it is outside of your network.  This approach is the least frustrating
    to the end user, but, probably creates a resource allocation and cost
    problem.

3.  Have a procedure for triage which allows a quick determination if the
    problem appears to be within your network.  Using that procedure,
    reject problems which appear to be outside of your network while
    accepting problems that appear to be within your network.

It seems to me that option 3 probably poses the best cost/benefit tradeoff,
but, it is the approach least taken from my observations.  So, I figured
I'd try and start a discussion on the topic and see what people thought.

Feel free to comment on list or directly to me (I'll summarize), but,  
if you
want to tell me I'm off-topic or whatever, please complain directly  
to me
without bothering the rest of the people on the list.  I believe that  
this

is an operational issue within scope of Nanog, but, I can see the
argument that it's a business practices question instead.

Owen



PGP.sig
Description: This is a digitally signed message part


AT&T (SBCGLOBAL) problems?

2006-09-01 Thread Owen DeLong
Apologies to the list, but, I'm at wits' end on this problem.

Can someone from SBCGLOBAL with 1/2 a clue please contact me?

I'm seeing an issue between dist4-g9-3.pltnca.sbcglobal.net and
bras2-g9-0.pltnca.sbcglobal.net with intermittent complete packet loss...

                          Matt's traceroute  [v0.54]
owen                                          Fri Sep  1 08:56:21 2006
Keys:  D - Display mode    R - Restart statistics    Q - Quit
                                            Packets               Pings
Hostname                                 %Loss  Rcv  Snt  Last Best  Avg  Worst
 1. delong-sjca-02-e1.delong.sj.ca.us       1%  509  511     1    1    4    118
 2. zy652-a.delong.sj.ca.us                 1%  509  511     2    1    2     11
 3. sms0.sc.meer.net                        2%  503  511   232   21  180   1332
 4. metro2-transit.sc.svcolo.com            1%  505  511   241   22  178   1343
 5. 339.ge-5-1-1.er10a.sjc2.us.above.ne     2%  502  511   175   21  193   1344
 6. so-2-0-0.mpr3.sjc2.us.above.net         3%  499  510   107   21  182   1307
 7. so-3-0-0.mpr2.sjc7.us.above.net         4%  494  510   118   21  181   1316
 8. ex2-p4-1.eqsjca.sbcglobal.net           2%  504  510   140   21  188   1345
 9. bb1-p6-0.crscca.sbcglobal.net           2%  500  510   364   23  189   1256
10. bb2-p5-0.pltnca.sbcglobal.net           2%  503  510   193   24  183   1275
11. dist4-g9-3.pltnca.sbcglobal.net         5%  487  510   192   24  176   1292
12. bras2-g9-0.pltnca.sbcglobal.net        46%  276  510   140   25  190   1323
13. adsl-69-105-41-206.dsl.pltn13.pacbe    49%  264  510   107   33  191   1094
14. adsl-69-105-74-210.dsl.pltn13.pacbe    49%  263  510    45   35  200   1266

I am seeing this problem from multiple locations when trying to reach
destination 69.105.74.210.

Your technical support department refuses to escalate the issue unless
I can come up with the DSL phone number for the affected customer.

I can be reached at 408-921-6984.

Owen

PGP.sig
Description: This is a digitally signed message part


Re: IP failover/migration question.

2006-06-27 Thread Owen DeLong

> Uptime might not matter for small hosts that do mom and pop websites
> or so-called "beta" blog-toys, but every time Level3 takes a dump,
> it's my wallet that feels the pain. It's actually a rather frustrating
> situation for people who aren't big enough to justify a /19 and an
> AS#, but require geographically dispersed locations answering on the
> same IP(s).

I'm not sure why you think you need to be that big to get portable IP
space.  Policy 2002-3 allows for the issuance of a /22 to any organization
which can show a need and the ability to utilize at least 50% of a /22
with multihoming.  An ASN can be obtained pretty easily if you intend
to multihome.  About the only thing that might stand in the way of
a small organization is the up front cost, but, even that is less than
$2000.

Owen


-- 
If it wasn't crypto-signed, it probably didn't come from me.


pgpKcOw8cZD6r.pgp
Description: PGP signature


RE: key change for TCP-MD5

2006-06-23 Thread Owen DeLong
Why couldn't the network device do an AH check in hardware before passing the
packet to the receive path?  If you can get to a point where all connections
or traffic TO the router should be AH, then, that will help with DOS.

If you can limit what devices _SHOULD_ talk to the router and at least define
some subset of that from which you demand AH on every packet, that helps but
isn't a complete solution.
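
To put the GTSM classification Barry mentions below into toy form (a
sketch, not router code): the sender emits its control-plane packets with
TTL 255, and the receiver can discard anything whose remaining TTL implies
it crossed more than the expected number of hops, before any TCP or MD5
work is done.

    MAX_TRUSTED_HOPS = 1   # e.g. a directly connected eBGP peer

    def gtsm_accept(received_ttl: int, from_configured_peer: bool) -> bool:
        """Cheap pre-TCP classification check in the spirit of GTSM."""
        return from_configured_peer and received_ttl >= 255 - MAX_TRUSTED_HOPS

    print(gtsm_accept(255, True))   # True: plausibly the directly connected peer
    print(gtsm_accept(243, True))   # False: too many hops away, likely spoofed
    print(gtsm_accept(255, False))  # False: not a configured peer at all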

Owen


--On June 23, 2006 11:49:33 AM -0700 "Barry Greene (bgreene)"
<[EMAIL PROTECTED]> wrote:

> 
>  
> 
>> If DOS is such a large concern, IPSEC to an extent can be 
>> used to mitigate against it. And IKEv1/v2 with IPSEC is not 
>> the horribly inefficient mechanism it is made out to be. In 
>> practice, it is quite easy to use.
> 
> IPSEC does nothing to protect a network device from a DOS attack. You
> know that.
> 
> DOS prevention on a network device needs to happen before the TCP/Packet
> termination - not the Key/MD5/IPSEC stage. The signing or encrypting of
> the BGP message protects against Man in the Middle and replay attacks -
> not DOS attacks. Once a bad packet gets terminated, your DOS stress on
> the router kicks in (especially on ASIC/NP routers). The few extra CPU
> cycles it takes for walking through keys or IPSEC decrypt are irrelevant
> to the router's POV. You SOL if a miscreant can get a packet through
> your classification & queuing protections on the router and have it
> terminated. 
> 
> The key to DOS mitigation on a network device is to have many fields in
> the packet to classify as possible before the TCP/Packet termination.
> The more you have to classify on, the more granular you can construct
> your policy. This is one of the reasons for GTSM - which adds one more
> field (the IP packet's TTL) to the classification options. 
> 
> Yes Jared - our software does the TTL check after the MD5, but the hardware
> implementations do the check in hardware before the packet gets punted
> to the receive path. That is exactly where you need to do the
> classification to minimize DOS on a router - as close to the point where
> the optical-electrical-airwaves convert to a IP packet as possible.



-- 
If it wasn't crypto-signed, it probably didn't come from me.


pgpYN2wZJ65ox.pgp
Description: PGP signature


Re: IP ranges, re- announcing 'PA space' via BGP, etc

2006-04-15 Thread Owen DeLong


--On April 14, 2006 9:26:56 PM +0100 Andy Davidson <[EMAIL PROTECTED]>
wrote:

> 
> On Fri, Apr 07, 2006 at 01:13:19PM +0200, Alexander Koch wrote:
>  > When a random customer (content hoster) asks you to accept
>  > something out of 8/8 that is Level(3) space, and there is no
>  > route at this moment in the routing table, do you accept it,
>  > or does Level(3) have some fancy written formal process and
>  > they get approval to do it, etc.?
> 
> Initial instinct has to just be 'yuck', but in the interest of getting
> the job done, I'd look at :
> 
>  - how is it registered ?  Are your customer mentioned ?
>  - is it already a prefix which is announced seperately from the rest of
>the aggregated block ?
>  - if the customer wants to multihome, have they even considered PI ?
>  - are the customer happy for you to talk to the aggregating company ?
>are you happy to talk to them ?
>  - it's still 'yuck'.

Frankly, if the customer is multihomed, then, it might be preferable for
them
to go direct to ARIN for an end-user assignment.  These are now available as
small as a /22 since the adoption of policy 2002-3.

Owen

-- 
If it wasn't crypto-signed, it probably didn't come from me.


pgp8wVh5ze9Ae.pgp
Description: PGP signature


Re: How to handle AAAA query for v4 only host

2006-04-12 Thread Owen DeLong



--On April 13, 2006 8:13:27 AM +0930 Mark Smith 
<[EMAIL PROTECTED]> wrote:



On Wed, 12 Apr 2006 17:27:54 -0400
Owen DeLong <[EMAIL PROTECTED]> wrote:


Apologies if anyone thinks this does not require coordination or is
somehow not operational.

However, I have a situation where some nameservers for which I am
responsible
are receiving AAAA queries for hosts for which we are authoritative.  We
return the SOA only as it seems we are supposed to, but, we are seeing a
significant delay before we get an A query back from the resolver, which,
we believe represents a significant delay for the end user in getting to
the web page in question.



I'd have thought you were supposed to return a record not found error,
which would then cause the remote resolver to immediately revert to
performing an A query.


Strangely enough, RFCs 1886, 3596, and 4074 seem to specifically say
this is a bad thing.


Is there a better way to answer an AAAA query for a v4 only host?  Is it
permitted and/or desirable to return a 6to4 or IPv4-Mapped address?
Is there some other preferable thing to return?



As long as you don't not respond, like doubleclick don't or didn't
used to. Very frustrating waiting for AAAA queries to time out before a
page will fully load.


Nope... As near as I can tell, responding with SOA only data is the
right thing to do.  FWIW, that's what f.root-servers.net seems to do
as well.
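
As a client-side illustration of why the SOA-only (NODATA) answer is the
friendly one: with dnspython (2.x assumed; older releases spell resolve()
as query()), NODATA surfaces as NoAnswer, i.e. "the name exists, it just
has no AAAA," so a caller can drop straight to an A query, whereas a bogus
NXDOMAIN would deny the A record as well.

    import dns.resolver   # dnspython, assumed installed

    def has_aaaa(name: str) -> bool:
        try:
            dns.resolver.resolve(name, "AAAA")
            return True
        except dns.resolver.NoAnswer:    # NODATA: name exists, no AAAA
            return False
        except dns.resolver.NXDOMAIN:    # name does not exist at all
            return False

    print(has_aaaa("www.example.com"))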

Owen




pgpnotvLagSrW.pgp
Description: PGP signature


How to handle AAAA query for v4 only host

2006-04-12 Thread Owen DeLong

Apologies if anyone thinks this does not require coordination or is somehow
not operational.

However, I have a situation where some nameservers for which I am 
responsible

are receiving AAAA queries for hosts for which we are authoritative.  We
return the SOA only as it seems we are supposed to, but, we are seeing a
significant delay before we get an A query back from the resolver, which,
we believe represents a significant delay for the end user in getting to
the web page in question.

Is there a better way to answer an AAAA query for a v4 only host?  Is it
permitted and/or desirable to return a 6to4 or IPv4-Mapped address?
Is there some other preferable thing to return?

Thanks,

Owen


--
If this message was not signed with gpg key 0FE2AA3D, it's probably
a forgery.


pgptERTCJCB8D.pgp
Description: PGP signature


Re: "Bad bgp identifier"

2006-03-31 Thread Owen DeLong

Unicast currently ends at 223.255.255.255.
224.0.0.0/4 is multicast and I believe that
240.0.0.0/5
248.0.0.0/6
252.0.0.0/6 are listed as reserved for experimental purposes.
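
A quick way to sanity-check that boundary with the Python standard
library (a sketch; the stdlib treats 240.0.0.0/4 as the reserved block):

    import ipaddress

    def plain_unicast(text: str) -> bool:
        a = ipaddress.IPv4Address(text)
        return not (a.is_multicast or a.is_reserved)

    for candidate in ("255.255.255.255", "240.0.0.1",
                      "224.0.0.1", "223.255.255.255"):
        print(candidate, plain_unicast(candidate))
    # Only 223.255.255.255 comes back True, i.e. the largest dotted quad
    # that is still ordinary unicast space.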

Owen


--On March 31, 2006 5:06:54 AM -0500 Joe Maimon <[EMAIL PROTECTED]> wrote:



4271 specifies that bgp identifier must be a valid unicast ip address

So what is the largest 32 bit value expressed as a dotted quad that meets
this requirement?

Is it the last address in class c? class e? Can 255.x.x.x be used?

Do all vendors implement this?

I understand that  draft-ietf-idr-bgp-identifier-06.txt does away with
the above requirements. Is this something I should ask vendors if they
will support it?

Thanks,

Joe




--
If this message was not signed with gpg key 0FE2AA3D, it's probably
a forgery.


pgpKZehsY7nOT.pgp
Description: PGP signature


Re: Network graphics tools

2006-03-22 Thread Owen DeLong

I've had pretty good luck with OmniGraffle Professional, and, it's fairly
cheap, too.  Has many of the features Visio has, and, is gaining more
on a regular basis.  It lacks the Visio silly pictures (although you could
create your own easily enough), but, it does understand connections between
objects and has some more advanced metadata features I haven't yet learned
to use.  It's also got half-way decent auto-layout capabilities.

http://www.omnigroup.com

Owen


--On March 21, 2006 9:17:44 PM -0500 "Howard C. Berkowitz" 
<[EMAIL PROTECTED]> wrote:




Much of the enterprise market seems wedded to Visio as their network
graphics tool, which locks them into Windows. Personally, I hate both
little pictures of equipment and Cisco hockey-puck icons; I much prefer
things like rectangles saying "7507 STL-1" or "M160 NYC-3".

Assuming you use *NIX platforms (including BSD under Mac OS X), what are
your preferred tools for network drawings, both for internal and external
use?  I'd hate to be driven to Windows only because I need Visio.




--
If this message was not signed with gpg key 0FE2AA3D, it's probably
a forgery.


pgpgYmbMdNBcw.pgp
Description: PGP signature


Re: shim6 @ NANOG

2006-03-07 Thread Owen DeLong



--On March 7, 2006 1:38:50 PM -0500 John Curran <[EMAIL PROTECTED]> wrote:


At 1:08 PM -0800 3/6/06, Owen DeLong wrote:

I've got no opposition to issuing addresses based on some geotop. design,
simply because on the off chance it does provide useful aggregation, why
not.  OTOH, I haven't seen anyone propose geotop allocation as a policy
in the ARIN region (hint to those pushing for it).


Does anyone have statistics for the present prefix mobility experiment
in the US with phone number portability?  It would be interesting to
know what percent of personal and business numbers are now routed
permanently outside their original NPA assignment area...


Almost impossible to get statistics on this because of the influx
of portable VOIP devices and cellular phones.

However, if you're just talking about the SS7 level, then, realistically,
LNP doesn't really provide NPA level portability of numbers.  At least
I don't know of any telcos allowing you to take your 408 number to the 212
area when you move.


If one presumes a modest movement rate across the entire population
of businesses, and locality for some percent of those moves (which may
be hidden from global visibility due to regional interconnects/exchanges),
would the resulting global routing table really be any larger then the
current AS allocation count?   It certainly would result in a lot of happy
businesses to have a PI allocation from a local LIR, even if portability
wasn't assured if they relocated to another state.


Interesting question.  I wonder if CAIDA has any statistics which
could provide illumination on this question.


/John

p.s.  personal thoughts only, designed entirely to encourage
discussion... :-)






pgp5PsRpK7Th7.pgp
Description: PGP signature


Re: shim6 @ NANOG

2006-03-07 Thread Owen DeLong



--On March 7, 2006 4:29:28 PM +0100 Iljitsch van Beijnum 
<[EMAIL PROTECTED]> wrote:



On 6-mrt-2006, at 22:08, Owen DeLong wrote:


What I hear is "any type of geography can't work because network
topology != geography". That's like saying cars can't work
because  they
can't drive over water which covers 70% of the earth's surface.



No, it's more like saying "Cars which can't operate off of freeways
won't work" because there are a lot of places freeways don't go.
Hmmm... Come to think of it, I haven't seen anyone selling a car
which won't operate off of a freeway.


If we slightly open this up to "vehicles on wheels" and "long  distance
infrastructure created specially for said vehicles" trains  would
qualify...


True, and, a good case in point.  A relatively small percentage of the
US population finds trains routinely useful.  An even smaller percentage
(infinitesimal, actually) finds them useful enough to not have a car.


I've got no opposition to issuing addresses based on some geotop.
design,
simply because on the off chance it does provide useful
aggregation, why
not.


Exactly, that's all I ask.


OTOH, I haven't seen anyone propose geotop allocation as a policy
in the ARIN region (hint to those pushing for it).


Hm, I would rather do this globally but maybe this is the way to go...


The only way to achieve global policy is to achieve a similar policy in
each RIR and then get them to agree on a globally consistent one together.
This is by design because it is a process which allows each region to
have full input into the process without the stakeholders in any region
being steamrolled by the needs of another region.

Owen



pgphz9FnKSlR3.pgp
Description: PGP signature


Re: AW: Italy orders ISPs to block sites

2006-03-07 Thread Owen DeLong



--On March 7, 2006 8:12:59 AM -0500 "Patrick W. Gilmore" 
<[EMAIL PROTECTED]> wrote:




On Mar 7, 2006, at 3:56 AM, Owen DeLong wrote:


I understand, that from an American point of view this kind of
restriction
looks strange and is against your act of freedom, however here in
Europe
gambling is a state controlled business that supports the state
economy
and in most European countries gambling outside state controlled
casinos
is simply illegal and forbidden by law.


Even in the US, this is true.  Gambling in California is illegal
(except
indian casinos, long story), because Nevada has a powerful lobby in
California.


That's an interesting comment.

The largest cardroom in the world is in California (Commerce  Casino).
And there are plenty of places to play poker.

The difference is that California has decided (properly, IMHO) that
poker is a game of _skill_, not chance.  And there are other games  you
can play at these cardrooms, but you play them against other  players,
not the house.  And most online gambling sites either allow  poker or
sports betting.  I guess you could call sports betting "gambling", but
there is skill involved there too.

Not that things like "facts" matter to politicians, or even  lawyers
:-)


Actually, it's not so much skill vs. luck, but, the fact that CA has
certain exceptions for "mutual benefits" betting which is a fancy
term for the house gets a fixed percentage no matter who wins.  This
allows for card rooms and horse tracks.




I don't question the validity of the law.  That's between the
Italians and
their government.  I question the practicality of enforcing the law
because
the way the internet and the international economies work, it is
virtually
impossible to enforce this short of something like the great firewall
of China (which still allows SSH through for the most part, so...).


Bringing this back to Operational Content , this is the big  point.
I honestly do not believe you can stop people from getting to sites they
want to see without stopping Internet access as a whole.   Even the Great
Firewall Of China is essentially swiss cheese to  anyone who wants to get
around it.  Fear of "meat-space" punishment  is probably more important
than the technology used.


You'd be surprised how effective the GFOC can be.  The Chinese government
doesn't hesitate to walk in and literally cut the power to a datacenter
if they so desire.


Yes, most people use their ISP's recursive NS, but that's 'cause  they're
lazy.  When it stops working, they'll use something else.   Block
$DEFAULT_PORT for filesharing, they'll find another.  So unless  you
proxy 100% of the traffic (possible, but difficult), and watch  for
proxies outside your proxy (nearly impossible), people will get  through.


Not even possible to proxy 100% of traffic unless you block all SSL and
prevent SSH.


Seeing governments try to legislate around technology they do not
understand is ... amusing.  If they want to stop this activity,  making a
law regarding routers or servers is not the way to do it.


Certainly not the effective way to do it.

Owen


--
If this message was not signed with gpg key 0FE2AA3D, it's probably
a forgery.


pgpMpy8KiQb37.pgp
Description: PGP signature


Re: AW: Italy orders ISPs to block sites

2006-03-07 Thread Owen DeLong




--On March 7, 2006 9:13:21 AM +0100 tom <[EMAIL PROTECTED]> wrote:



Hi Folks across the ocean..

I understand, that from an American point of view this kind of restriction
looks strange and is against your act of freedom, however here in Europe
gambling is a state controlled business that supports the state economy
and in most European countries gambling outside state controlled casinos
is simply illegal and forbidden by law.


Even in the US, this is true.  Gambling in California is illegal (except
indian casinos, long story), because Nevada has a powerful lobby in
California.



So I doubt, that the European Court would really rule agaist this
Each country has specific laws that other nations do not understand
and we all should accept that.


I wouldn't expect the court to rule against it, but, I do suspect that
motivated Italians will trivially work around it.



Imagine, if kids in the US would be able to order Cannabis from
Online-shops in the Netherlands (as it is legalized there) through mail
order? Would you or your legislation agree to that?


Nope, but, the hard part there is the importation of the Cannabis.  Frankly,
kids here CAN order it from the online shops.  The hard part is getting the
delivery to arrive without getting prosecuted.

However, for gambling, it's a bit more complicated.  Generally, the movement
of money in and out of most countries is not restricted, and, what the money
does while it is in the other countries is even harder to control unless
the two countries in question have treaties about such things.  As such, 
since
gambling involves no physical product other than money, and, technically, 
the

Italians are moving the money out of Italy, gambling on foreign soil, then
moving their winnings back into Italy (much like they flew, for example, to
Monaco, gambled in the Casinos there, then flew home with their winnings),
it's quite a bit harder to enforce.

I don't question the validity of the law.  That's between the Italians and
their government.  I question the practicality of enforcing the law because
the way the internet and the international economies work, it is virtually
impossible to enforce this short of something like the great firewall
of China (which still allows SSH through for the most part, so...).

Owen


--
If this message was not signed with gpg key 0FE2AA3D, it's probably
a forgery.



Re: Italy orders ISPs to block sites

2006-03-07 Thread Owen DeLong




--On March 7, 2006 1:35:05 PM +0530 Suresh Ramasubramanian 
<[EMAIL PROTECTED]> wrote:



On 3/7/06, Owen DeLong <[EMAIL PROTECTED]> wrote:


Singapore seems to force all of their ISPs to send all HTTP requests
through a proxy that has a set of rules defining sites you are not
allowed to visit.



As does (for example) the UAE, and China.  But not Italy.

So this is quite moot, I expect.

Also - having all local cable / broadband / dialup providers do
something like this would cover the vast majority of internet users in
the country .. not too many people or companies are going to be
running their own resolvers, at least in a small country like Italy.


I guess that depends.  After all, all you need to run your own resolver
is a copy of bind and linux, macos, or windows to run it on. A caching
recursive resolver is pretty easy to set up.  If that becomes what it
takes to get around government regulations, I suspect gamblers who
really want to gamble will learn fairly quickly.


The numbers are likely to be trivially small as compared to the number
of people just using their ISP resolvers.  So a fake zone loaded into
the resolvers redirecting these banned sites elsewhere should do just
fine, I guess.


Today, true.  Tomorrow, depends on the motivation level of the affected
audience and the publication level of the trivial solution to the currently
prescribed method of control.

Owen


--
Suresh Ramasubramanian ([EMAIL PROTECTED])







Re: Italy orders ISPs to block sites

2006-03-06 Thread Owen DeLong


Singapore seems to force all of their ISPs to send all HTTP requests
through a proxy that has a set of rules defining sites you are not allowed
to visit.

Owen


--On March 7, 2006 1:48:39 AM + "Christopher L. Morrow" 
<[EMAIL PROTECTED]> wrote:





On Tue, 7 Mar 2006, Marco d'Itri wrote:



On Mar 06, Rodney Joffe <[EMAIL PROTECTED]> wrote:

> It appears that Italy has ordered Italian ISPs to block access to a
> number of Internet Gambling sites. It would be interesting to see how
> the Italian ISPs are handling this, what with dynamic DNS and all
> that...
So far, the method officially recommended by the government entity
involved with collecting the gambling fees has been to create fake
zones on the caching resolvers of the large consumer ISPs.


good thing people use dns servers other than those put up by their ISP :)
when last faced with this situation, State-of-PA ChildPorn Law... Null
routing the affected ip-addresses was the only 'good' solution :(

-Chris




--
If this message was not signed with gpg key 0FE2AA3D, it's probably
a forgery.



Re: shim6 @ NANOG (forwarded note from John Payne)

2006-03-06 Thread Owen DeLong


Not to digress too far, but, I guess that depends on your definition of
best.

I am sure that many peoples of this world would argue that capitalism has
been rather catastrophic in terms of resource allocation and resulting
effects with regard to oil, for example.

Owen



Re: Italy orders ISPs to block sites

2006-03-06 Thread Owen DeLong
This just means that there will be an offshore proxy market in the near
future.

Owen


--On March 6, 2006 12:41:24 PM -0700 Rodney Joffe <[EMAIL PROTECTED]>
wrote:

> 
> It appears that Italy has ordered Italian ISPs to block access to a
> number of Internet Gambling sites. It would be interesting to see how
> the Italian ISPs are handling this, what with dynamic DNS and all  that...
> 
> 
>  From Monsters and Critics.com
> 
> Tech News
> Italy bans unauthorised online gambling sites
> By DPA
> Mar 3, 2006, 19:00 GMT
> 
> Rome - Italy has become the first European nation to outlaw scores of
> unauthorised gambling sites that are available on the Internet.
> 
> Italy's Economy Ministry has published a list of more than 600 offshore
> gaming sites that are in the process of being made  unavailable to
> Italian internet users.
> 
> The list includes popular gambling sites such as 888.com, which is  based
> in the Caribbean island of Antigua and which describes itself as 'the
> world's No. 1 online casino & poker room.'
> 
> A spokesperson for Italy's State Monopolies told Deutsche Presse-
> Agentur dpa on Friday that Italian police were currently moving to
> prevent Italian internet providers from allowing connections to the
> banned sites. Internet providers that fail to comply face fines of up  to
> 180,000 euros (216,000 dollars). ...
> 
> More available at http://tech.monstersandcritics.com/news/
> article_1134456.php



-- 
If it wasn't crypto-signed, it probably didn't come from me.


pgpEJE4aR8mRc.pgp
Description: PGP signature


Re: shim6 @ NANOG

2006-03-06 Thread Owen DeLong


--On March 6, 2006 12:46:51 PM +0100 Iljitsch van Beijnum
<[EMAIL PROTECTED]> wrote:

> 
> On 6-mrt-2006, at 3:52, Roland Dobbins wrote:
> 
>> fixed geographic allocations (another nonstarter for reasons which  
>> have been elucidated previously)
> 
> What I hear is "any type of geography can't work because network
> topology != geography". That's like saying cars can't work because  they
> can't drive over water which covers 70% of the earth's surface.
> 
No, it's more like saying "Cars which can't operate off of freeways
won't work" because there are a lot of places freeways don't go.
Hmmm... Come to think of it, I haven't seen anyone selling a car
which won't operate off of a freeway.

> Early proposals for doing any geographic stuff were fatally flawed  but
> there is enough correlation between geography and topology to  allow for
> useful savings. Even if it's only at the continent level  that would
> allow for about an 80% reduction of routing tables in the  future when
> other continents reach the same level of multihoming as  North America
> and Europe.


I've got no opposition to issuing addresses based on some geotop. design,
simply because on the off chance it does provide useful aggregation, why
not.  OTOH, I haven't seen anyone propose geotop allocation as a policy
in the ARIN region (hint to those pushing for it).

Owen


-- 
If it wasn't crypto-signed, it probably didn't come from me.


pgp7Ie5AZ0LS1.pgp
Description: PGP signature


Re: shim6 @ NANOG

2006-03-05 Thread Owen DeLong


--On March 5, 2006 3:28:05 PM -0500 Joe Abley <[EMAIL PROTECTED]> wrote:

> 
> On 5-Mar-2006, at 14:16, Owen DeLong wrote:
> 
>> It flies if you look at changing the routing paradigm instead of  
>> pushing
>> routing decisions out of the routers and off to the hosts.  Source  
>> Routing
>> is a technology that most of the internet figured out is problematic
>> years ago.  Making source routing more complicated and calling it  
>> something
>> else doesn't make it less of a bad idea.
> 
> Calling shim6 source-routing when it's not in order to give it an  aura
> of evil is similarly unproductive :-)
> 
Sorry, I guess we'll agree to disagree on this, but, I see very little
difference between shim6 and LSR other than the mechanism of implementation
(shim6 requires a bit more overhead).

>> I don't think it will be as expensive as you think to fix it.  I  
>> think if
>> we start working on a new routing paradigm today in order to  
>> support IDR
>> based on AS PATH instead of Prefix, we would realistically see this in
>> deployable workable code within 3-5 years.
> 
> I'm confused by statements such as these.
> 
> Was it not the lack of any scalable routing solution after many years  of
> trying that led people to resort to endpoint mobility in end  systems, à
> la shim6?
> 
I haven't seen any concrete proposals presented around the idea of IDR
based on something other than prefix.  Everything I've seen leading up
to shim6 was about ways to continue to use prefixes and, to me, shim6
is just another answer to the wrong question... "How can we help scale
prefix based routing?".  The right question still hasn't been asked by
most people in my opinion... "What can we use for routing instead of
prefixes that will scale better?"  As much as I agree the internet is
not the PSTN, this is one place where we have a lot to learn from SS7.
No, SS7 is not perfect... Far from it, but, there are lessons to be
learned that are applicable to the internet, and, separating the
end system identifier from the routing function is one we still seem
determined to avoid for reasons passing my understanding.

Owen

> 
> Joe
> 



-- 
If it wasn't crypto-signed, it probably didn't come from me.


pgpzu6K7u4ZlK.pgp
Description: PGP signature


Re: shim6 @ NANOG

2006-03-05 Thread Owen DeLong

> You are absolutely right that having to upgrade not only all hosts in  a
> multihomed site, but also all the hosts they communicate with is an
> important weakness of shim6. We looked very hard at ways to do this  type
> of multihoming that would work if only the hosts in the  multihomed site
> were updated, but that just wouldn't fly.
> 
It flies if you look at changing the routing paradigm instead of pushing
routing decisions out of the routers and off to the hosts.  Source Routing
is a technology that most of the internet figured out is problematic
years ago.  Making source routing more complicated and calling it something
else doesn't make it less of a bad idea.

>> In IPv6/shim6 what happens if shim6 has an unanticipated  
>> bottleneck, security issue, or scaleability problem? Everyone,  
>> everywhere, has to upgrade at some point. This means that upgrade/ 
>> workaround has to backwards compatible indefinitely, since it isn't  
>> going to be possible to force all the world's servers, desktops and  
>> devices to upgrade by some flag day.
> 
> That's why it's extremely important to get it right the first time.  On
> the other hand, the fact that shim6 works between just two hosts  without
> having to upgrade the whole internet first makes it a lot  easier to test
> and work out the kinks.
> 
Sure, it's really easy to test shim6 between two hosts without involving
the internet.  I'll buy that.  I'm not sure what the benefit of that is
supposed to be to the average end user, but, I can accept that as a reason
that developers of shim6 might like it.

> But again, it cuts both ways: if only two people run shim6 code,  those
> two people gain shim6 benefits immediately.
> 
Only to the extent that they are talking to each other.  If I deploy shim6,
but, the top 5000 web sites that my users need to talk to do not, then,
there's no benefit whatsoever from shim6 for the majority of my users.

> One thing I'll take away from these discussions is that we should
> redouble our efforts to support shim6 in middleboxes as an  alternative
> for doing it in individual hosts, so deployment can be  easier.
> 
That's a small step in the right direction.  Looking at the possibility
to change the fundamental routing paradigm for interdomain routing
is probably a better possibility.  Of course, these options are not
technically mutually exclusive, but, I think the latter will be actually
easier to deploy and yield greater benefit.

>> The real "injustice" about this is that it's creating two classes  
>> of organizations on the internet. One that's meets the guidelines  
>> to get PI space, multihomes with BGP, and can run their whole  
>> network(including shim6less legacy devices) with full redundancy,  
>> even when talking to shim6 unaware clients. Another(most likely  
>> smaller) that can't meet the rules to get PI space, is forced to  
>> spend money upgrading hardware and software to shim6 compatible  
>> solution or face a lower reliability than their bigger competitors.
> 
> And that's exactly why it's so hard to come up with a good PI policy:
> you can't just impose an arbitrary limit, because that would be anti-
> competitive.
> 
A good PI policy is easy.  Coming up with a scalable routing solution
that supports good PI policy is hard.  That's what we should be working
on.  A good PI policy is "If you need PI space, you get it."  What
we are trying to come up with in the meantime is a PI policy that will
meet the needs of the majority of users while somewhat constraining
growth in the routing table until that problem can be solved.
(At least that's my intent with 2005-1)

>> Someone earlier brought up that a move to shim6, or not being able  
>> to get PI space was similar to the move to name based vhosting(HTTP/ 
>> 1.1 shared IP hosting). It is, somewhat. It was developed to allow  
>> hosting companies to preserve address space, instead of assigning  
>> one IP address per hostname. (Again, however, this could be done  
>> slowly, without forcing end users to do anything.)
> 
Yeah, and it worked right up to the point that SSL became important
and then it pretty much went away as a practical matter.

> This isn't that good an analogy. With name based virtual hosting,  the
> server either is name based or IP based. If you run name based,  old HTTP
> 1.0 clients won't be served the content they're looking for.  So people
> running servers had to wait until a large enough percentage  of users ran
> clients that supported HTTP 1.1 (or HTTP 1.0 with the  host: variable).
> Fortunately, there was a browser war on at that time  so people upgraded
> their web browser software relatively often, but  it still took a few
> years before name based virtual hosting became  viable.
> 
And you think it won't be years before shim6 provides tangible benefits?
You're dreaming.

> Shim6 is completely backward compatible. If either end doesn't  support
> the protocol, everything still works, but without multihoming  benefits
>

Re: shim6 @ NANOG (forwarded note from John Payne)

2006-03-02 Thread Owen DeLong

>> The other PI assignment policies that have been proposed either 
>> require that you have a /19 already in IPv4 (lots of hosting 
>> companies don't have anything this size), or have tens/hundreds of 
>> thousands of devices.
> 
> It has also been suggested that the simple presence of
> multihoming should be sufficient justification for PI space.
> 
Current PI policy in the ARIN region is /22 for IPv4.

>> Even if a hosting company does get a /32 or a /44 or whatever, the 
>> "you can't deaggregate your assignment at all" policy rules out 
>> having multiple independent POPs unless you somehow arrange to get 
>> multiple allocations(which isn't possible now).
> 
> People have done creative things with tunnels in the past. 
> The widespread existence of MPLS backbones makes that 
> even easier. You will always be able to find one situation
> that simply will not fit a given policy. Regardless, we still
> need to have some reasonable policy that creates a level
> playing field, does not unnecessarily restrain trade, and
> creates possibilities that smart entrepreneurs can exploit
> to expand the network.
> 
Another option is to create separate ORGs for each colo and get
an allocation for each ORG.

Owen


-- 
If it wasn't crypto-signed, it probably didn't come from me.


pgplNJVTvVywK.pgp
Description: PGP signature


Re: shim6 @ NANOG (forwarded note from John Payne)

2006-03-02 Thread Owen DeLong
Please consider also 2005-1 at
http://www.arin.net/policy/proposals/2005_1.html

Owen



pgpg8cW8ERncu.pgp
Description: PGP signature


Re: Shim6 vs PI addressing

2006-03-02 Thread Owen DeLong


--On March 2, 2006 9:37:12 AM -0500 Jared Mauch <[EMAIL PROTECTED]>
wrote:

> On Wed, Mar 01, 2006 at 03:01:22PM -0800, Owen DeLong wrote:
>> >I think you're missing that some people do odd
>> > things with their IPs as well, like have one ASN and 35
>> > different sites where they connect to their upstream Tier69.net
>> > all with the same ASN.  This means that their 35 offices/sites
>> > will each need a /32, not one per the entire asn in the table.
>> > 
>> People who are doing that have not read the definition of the
>> term ASN and there is no reason that the community or public
>> policy should concern itself with supporting such violations
>> of the RFCs.  An AS is a collection of prefixes with a consistent
>> and common routing policy.  By definition, an AS must be a
>> contiguous collection of prefixes or it is not properly a
>> single AS.  Using the same ASN to represent multiple AS is
>> a clear violation.
>> 
>> It doesn't fit the RFC definition of AS.  Therefore, there is no
>> reason to support such usage on a continuing basis.  You violate
>> the RFC's you takes your chances.
> 
>   I guess all those root servers that use the same asn
> but connect to different networks (anycast) should get shut down
> quickly.
> 
No... In the case of anycast, there is a consistent routing policy
for the address.  There are services that don't work because
of that routing policy, but, that's a decision of the service
provider in question.  However, they are using the equivalent
of one /32 per entire ASN, not one per site.

If they are advertising different prefixes from different sites
in an inconsistent manner using the same ASN, that is broken.
That's not what anycast does.

>   This is a part of networking life today in the v4 space,
> and without any current changes, it will (is) the same in v6
> routing as there is nothing different except a few more bits 32 => 128.
> 
Anycast is part of networking life today.  What you described initially
is _NOT_ how anycast works.

Owen

-- 
If it wasn't crypto-signed, it probably didn't come from me.


pgp6jhsnZoR8o.pgp
Description: PGP signature


Re: shim6 @ NANOG (forwarded note from John Payne)

2006-03-02 Thread Owen DeLong


--On March 2, 2006 3:15:59 PM +0100 Iljitsch van Beijnum
<[EMAIL PROTECTED]> wrote:

> 
> On 2-mrt-2006, at 14:49, [EMAIL PROTECTED] wrote:
> 
>> Clearly, it would be extremely unwise for an ISP or
>> an enterprise to rely on shim6 for multihoming. Fortunately
>> they won't have to do this because the BGP multihoming
>> option will be available.
> 
> I guess you have a better crystal ball than I do.
> 
> One thing is very certain: today, a lot of people who have their own  PI
> or even PA block with IPv4, don't qualify for one with IPv6. While  it's
> certainly possible that the rules will be changed such that more  people
> can get an IPv6 PI or PA block, it is EXTREMELY unlikely that  this will
> become as easy as with IPv4.
> 
Possibly, but, if that is true, then, to that extent, it will delay or
prevent the adoption of IPv6 by those people.

> Ergo: some people who multihome with BGP in IPv4 today won't be able  to
> do the same with IPv6. And if you manage to get a PI or PA block  you
> will very likely find that deaggregating won't work nearly as  well with
> IPv6 as it does with IPv4.
> 
And why would those people consider migrating to IPv6?

> So learn to love shim6 or help create something better. Complaining
> isn't going to solve anything.

I'm trying to create something better.  I doubt many people in the
operational community will ever learn to love shim6.

Owen


-- 
If it wasn't crypto-signed, it probably didn't come from me.


pgpfiKVFebhS8.pgp
Description: PGP signature


Re: Shim6 vs PI addressing

2006-03-02 Thread Owen DeLong


--On March 2, 2006 11:31:51 AM +0100 Jeroen Massar <[EMAIL PROTECTED]> wrote:

> On Thu, 2006-03-02 at 02:21 -0800, Owen DeLong wrote:
>> >> Personally, I think a better solution is to stop overloading IDR
>> >> meaning onto IP addresses and use ASNs for IDR and prefixes for
>> >> intradomain routing only.
>> > 
>> > Did you notice that 32bit ASN's are coming and that IPv4 addresses are
>> > 32bits? :) Which effectively means that we are going to route IPv6 with
>> > an IPv4 address space. Or when one would use the 32bit ASN for IPv4:
>> > routing a 32bit address space with an 32bit routing ID. The mere
>> > difference
>> > 
>> Yes, I am well aware of 32bit ASNs.  However, some things to consider:
>> 
>> 1.   Just because ASNs are 32 bits doesn't mean we'll instantly
>>  issue all 4 billion of them.  The reality is that we probably
>>  only need about 18 bits to express all the ASNs we'll need for
>>  the life of IPv6, but, 32 is the next convenient size and there's
>>  really no benefit to going with less than 32.
> 
> True. If we would take the 170k routes that are in BGP at the moment
> then a 18bits address space is enough to give every route a dedicated
> ASN. The issue is that there are way more people who might want to
> multihome than that, just take the number of businesses on this planet,
> add some future growth and we'll end up using the 24th bit too quite
> quickly. Which is, according to some people who do routing code, no
> problem at all. Like shim6, see first then believe.
> 
>> 2.   In my current thinking on how to achieve ASN based IDR, we
>>  would not need ASNs for every organization that multihomes,
>>  only for each organization that provides transit.  This
>>  would greatly reduce some of the current and future demand
>>  for ASNs.
> 
> Paper/draft/description/website? :)
> 
Paper: Haven't gotten that far yet.
Draft: Haven't gotten that far yet.
Description: See below
Website: Haven't gotten that far yet.

Description:  This is still knocking around in my head so far.  I've
discussed and described it to a few folks, but, there are lots of details
to work out yet.

So, this will require a fair amount of imagination on your part, and, it
will require letting go of a lot of assumptions built on the current dogma
and paradigm.  This is in many ways a completely different paradigm for
interdomain routing.

Basically, internet routers would come in three flavors:

1.  Intradomain Routers -- Routers which have a default route
and limited or no detailed knowledge of topology beyond
the local ASN.

2.  DFZ Edge Routers -- Routers which participate in the IDR
process ("full BGP feeds") which have adjacencies with
Intradomain Routers.

3.  DFZ Core Routers -- Routers which participate in IDR as
in 2 above, but, which do not have any adjacencies with
routers from category 1 above.

In the long run, routers in category 2 and 3 would only carry prefix
information for routes terminating in the local AS.  For all exterior
routes and peering sessions, only AS PATH data would be exchanged,
without any prefix information. (In the interim, BGP would be unchanged
and routing table bloat would continue to be an issue, but, the routing
process could change on a router-by-router basis without requiring a
"flag day" conversion).

Routers in category 2 would insert an IPv6 extension header of type 53
with a new subtype (yet to be defined, probably 1) which would contain
the Destination ASN for the packet.  The lookup of Prefix->ASN mapping
would be accomplished by a process similar to DNS (See Route Resolvers
below).
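
(Purely as illustration of the idea -- nothing from a draft -- here is a toy
sketch of the Prefix->ASN lookup, treating the route-resolver service as a
simple longest-prefix-match table.  The prefixes, ASNs, and names below are
all made up.)

import ipaddress

# Hypothetical cache of Prefix -> origin-ASN mappings, as a category 2 router
# might learn them from the DNS-like route-resolver process described above.
PREFIX_TO_ASN = {
    ipaddress.ip_network("2001:db8:1000::/36"): 64500,
    ipaddress.ip_network("2001:db8::/32"): 64496,
}

def resolve_asn(dst):
    """Return the origin ASN for a destination address, longest match wins."""
    dst = ipaddress.ip_address(dst)
    best = None
    for prefix, asn in PREFIX_TO_ASN.items():
        if dst in prefix and (best is None or prefix.prefixlen > best[0].prefixlen):
            best = (prefix, asn)
    return best[1] if best else None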

Routers in category 2 and 3 would forward packets by the following ruleset:

Is extension header present?

    Yes: Is it my local ASN?
         (A) Yes: -- Prefix route available?
                     Yes: Route packet by IGP
                     No:  Perform exterior resolution and rewrite
                          ASN header if possible.  Otherwise,
                          drop packet. (see loop prevention
                          below for details)
         (B) No: -- Forward based on ASPATH data to reach AS

    No: Resolve ASN -- Local?
        Yes: -- Continue process from (A) above
        No:  -- Insert Extension header and continue
                from (B) above.
        Unresolvable: -- Drop packet, send Unreachable no route
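
(Again purely illustrative: the same ruleset as a toy Python function, with the
packet reduced to a destination address plus the optional ASN extension header.
The resolver is passed in as a function -- e.g. the resolve_asn() sketch above
-- and the local ASN and helper names are made up.)

LOCAL_ASN = 64496   # hypothetical local ASN

def forwarding_decision(dst_addr, dst_asn, have_local_route, resolve_asn):
    """Return the action a category 2/3 router would take under the ruleset."""
    if dst_asn is None:                       # no extension header present
        dst_asn = resolve_asn(dst_addr)
        if dst_asn is None:
            return "drop packet, send Unreachable no route"
        # otherwise: insert the extension header and fall through to (A)/(B)
    if dst_asn != LOCAL_ASN:                  # (B) not my ASN
        return f"forward toward AS{dst_asn} based on ASPATH data"
    if have_local_route:                      # (A) my ASN, prefix route known
        return "route packet by IGP"
    rewritten = resolve_asn(dst_addr)         # exterior resolution / rewrite
    if rewritten is None or rewritten == LOCAL_ASN:
        return "drop packet (loop prevention)"
    return f"rewrite ASN header to AS{rewritten}, forward on ASPATH data"

# e.g. forwarding_decision("2001:db8:1000::1", None, False, resolve_asn)
#      -> "forward toward AS64500 based on ASPATH data"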

Route Resolver:

Two 

Re: Shim6 vs PI addressing

2006-03-02 Thread Owen DeLong

I think that is overly pessimistic.  I would say that SHIM6 _MAY_
become a routing trick, but, so far, SHIM6 is a still-born piece
of overly complicated vaporware of minimal operational value, if any.


Vaporware part is true, up to now, operational value is to be seen.


Well... I can only go based on the existing drafts since there's no
running code to base an opinion on, but, my opinion of the drafts
is that it will provide minimal operational value.

It's the wrong answer to the wrong question, in my opinion.


Personally, I think a better solution is to stop overloading IDR
meaning onto IP addresses and use ASNs for IDR and prefixes for
intradomain routing only.


Did you notice that 32bit ASN's are coming and that IPv4 addresses are
32bits? :) Which effectively means that we are going to route IPv6 with
an IPv4 address space. Or when one would use the 32bit ASN for IPv4:
routing a 32bit address space with an 32bit routing ID. The mere
difference


Yes, I am well aware of 32bit ASNs.  However, some things to consider:

1.  Just because ASNs are 32 bits doesn't mean we'll instantly
issue all 4 billion of them.  The reality is that we probably
only need about 18 bits to express all the ASNs we'll need for
the life of IPv6, but, 32 is the next convenient size and there's
really no benefit to going with less than 32.

2.  In my current thinking on how to achieve ASN based IDR, we
would not need ASNs for every organization that multihomes,
only for each organization that provides transit.  This
would greatly reduce some of the current and future demand
for ASNs.


Yep, 2005-1 fits my idea pretty well. Takes care of the folks needing
address space now while being able to use it differently later when it
is needed.

Though as Joe Abley also mentioned (and I also quite a number of times
already ;) anyone with even a vague definition of a plan for 200
customers can get a /32 IPv6 without a problem. Just check the GRH list
for companies in your neighbourhood who did get it.


True, but, until recently, I was being told that ARIN insisted that the
200 "customers" had to be non-related third parties.  E.g. Chevron
couldn't use all their different business units as 200 customers of
Chevron Corporate IT.  It appears based on some recent allocations that
they may have relaxed that stance.

Regards,

Owen





pgpJQC40MCVGM.pgp
Description: PGP signature


Re: Shim6 vs PI addressing

2006-03-01 Thread Owen DeLong
>   I think you're missing that some people do odd
> things with their IPs as well, like have one ASN and 35
> different sites where they connect to their upstream Tier69.net
> all with the same ASN.  This means that their 35 offices/sites
> will each need a /32, not one per the entire asn in the table.
> 
People who are doing that have not read the definition of the
term ASN and there is no reason that the community or public
policy should concern itself with supporting such violations
of the RFCs.  An AS is a collection of prefixes with a consistent
and common routing policy.  By definition, an AS must be a
contiguous collection of prefixes or it is not properly a
single AS.  Using the same ASN to represent multiple AS is
a clear violation.

>   And they may use different carriers in different
> cities.  Obviously this doesn't fit the definition that some have
> of "autonomous system", as these are 35 different discrete networks
> that share a globally unique identifier of sorts.
> 
It doesn't fit the RFC definition of AS.  Therefore, there is no
reason to support such usage on a continuing basis.  You violate
the RFC's you takes your chances.

Owen



-- 
If it wasn't crypto-signed, it probably didn't come from me.


pgp5GsDd6XvvY.pgp
Description: PGP signature


Re: Shim6 vs PI addressing

2006-03-01 Thread Owen DeLong

> Please don't mix up addressing and routing. "PI addressing" as you
> mention is addressing. SHIM6 will become a routing trick.
> 
I think that is overly pessimistic.  I would say that SHIM6 _MAY_
become a routing trick, but, so far, SHIM6 is a still-born piece
of overly complicated vaporware of minimal operational value, if any.

Personally, I think a better solution is to stop overloading IDR
meaning onto IP addresses and use ASNs for IDR and prefixes for
intradomain routing only.

> Greets,
>  Jeroen
> 
> (who simply would like a policy where endsites that want it could
> request a /48 or /40 depending on requirements from a dedicated block
> which one day might be used for identity purposes and not pop up in the
> bgp tables or whatever we have then anymore)
> 
I would, for one.  Policy proposal 2005-1 (I am the author) comes reasonably
close to that.  It will be discussed at the ARIN policy meeting in
Montreal in April.

Owen



-- 
If it wasn't crypto-signed, it probably didn't come from me.


pgpadaQxSQlNp.pgp
Description: PGP signature


Re: Transit LAN vs. Individual LANs

2006-02-26 Thread Owen DeLong


--On February 26, 2006 7:53:40 AM -0600 Pete Templin
<[EMAIL PROTECTED]> wrote:

> 
> 
>>> An argument could be made for individual VLANs to keep things like b- 
>>> cast storms isolated.  But I think the additional complexity will  
>>> cause more problems than it will solve.
> 
>> One must keep in mind that human error is the dominant cause of outages, 
>> and since there's not likely to be backhoes running around in a data 
>> center, IMHO the goal should be to remove as many ways as possible that 
>> your coworkers can muck things up.
> 
> Individual PTP links means a muckup probably affects only two devices.
> Switched LANs means a muckup possibly affects all devices (on one of the
> LANs), and not all of them may detect the problem at the same time.
> 
> pt
Except when you implement the PTP links as VLANs on switches, it means
a muckup (to use your term) at the switch side can really muckup your
PTP links in non-obvious and often hard-to-troubleshoot ways.  There
are tradeoffs either way.  Personally, when interconnecting routers, I
tend to prefer the PTP hard link and skip the switches.  Sometimes that's
not feasible.  In those cases, generally, I prefer to go with rational
groups of routers on VLAN segments rather than synthetic PTP links.
However, each situation is different and the tradeoffs should be
considered in light of the particular situation.

Owen



-- 
If it wasn't crypto-signed, it probably didn't come from me.


pgpQ5gzF8t1xu.pgp
Description: PGP signature


RE: Transit LAN vs. Individual LANs

2006-02-25 Thread Owen DeLong


--On February 25, 2006 8:09:22 PM + "Christopher L. Morrow"
<[EMAIL PROTECTED]> wrote:

> 
> 
> On Sat, 25 Feb 2006, Neil J. McRae wrote:
> 
>> 
>>  > An argument could be made for individual VLANs to keep things
>> > like b- cast storms isolated.  But I think the additional
>> > complexity will cause more problems than it will solve.
>> 
>> VLANs will not stop all types of broadcast storm.
>> 
> 
> So, perhaps I missed the earlier explanation, but why use switched
> segments at all? if the purpose is to connect routers to routers putting
> something that WILL FAIL in the middle is only going to increase your
> labor costs later :(
> 
> So, for router-router links, GE doesn't have to mean switched...

Very true.  In fact, GE is even easier because part of the GE standard
for UTP requires it to be Auto-MDI-Sensing (MDI vs MDI-X is handled
automatically in ALL compliant GE/TP interfaces).  Thus, you can use
any eia-568[ab] cable, straight or crossed between them.  (Note, USOC
cables still won't work, it has to be 568a or 568b pairing)


Owen



-- 
If it wasn't crypto-signed, it probably didn't come from me.


pgp7IqnO671oZ.pgp
Description: PGP signature


Re: Transit LAN vs. Individual LANs

2006-02-25 Thread Owen DeLong


--On February 25, 2006 11:04:12 AM -0500 "Patrick W. Gilmore"
<[EMAIL PROTECTED]> wrote:

> 
> On Feb 24, 2006, at 9:03 PM, Scott Weeks wrote:
> 
>> I have 2 core routers (CR) and 3 access routers (AR)
>> currently connected point-to-point where each AR connects to
>> each CR for a total of 6 ckts.  Now someone has decided to
>> connect them with Gig-E.  I was wondering about the benefits
>> or disadvantages of keeping the ckts each in their own
>> individual LANs or tying them all into one VLAN for a
>> "Transit LAN" as those folks that decided on going to Gig-E
>> aren't doing any logical network architecting (is that a
>> real word?).

In my experience either solution has tradeoffs and the correct
one depends greatly on your traffic patterns.  Having said that,
what I find causes most of the problems in either solution is
when the Layer 3 topology starts to diverge from the Layer 2
topology.

Owen

-- 
If it wasn't crypto-signed, it probably didn't come from me.


pgpVUaJfmfuFz.pgp
Description: PGP signature


Re: USG posts RFI re: IANAI

2006-02-24 Thread Owen DeLong
Because so far, DOC still thinks they control the oversight functions of
some aspects of what used to be under the NSF and the USG wants to continue
pretending that they control the internet.

Owen


--On February 24, 2006 9:27:40 AM -0500 Martin Hannigan
<[EMAIL PROTECTED]> wrote:

> 
> 
> 
> 
> This was interesting, operationally. I don't know why the USG does
> RFI's on stuff like IANA:
> 
> http://www.fbo.gov/spg/DOC/OS/OAM/Reference-Number-DOCNTIARFI0001/Synopsi
> sR.html
> 
> 
> -M<
> 
> 
> --
> Martin Hannigan(c) 617-388-2663
> Renesys Corporation(w) 617-395-8574
> Member of Technical Staff  Network Operations
> [EMAIL PROTECTED]  



-- 
If it wasn't crypto-signed, it probably didn't come from me.


pgp5QLR9mejzA.pgp
Description: PGP signature


Re: Fed Bill Would Restrict Web Server Logs

2006-02-14 Thread Owen DeLong
> 
> Original posting from Declan McCullagh's PoliTech mailing list. Thought
> NANOGers would be interested since, if this bill passes, it would impact
> almost all of us. Just imagine the impact on security of not being able
> to log IP address and referring page of all web server connections!
> 

Seems to me that security would be a "legitimate business purpose" for
keeping the information around.

Owen

-- 
If it wasn't crypto-signed, it probably didn't come from me.


pgpnpH8YSLGet.pgp
Description: PGP signature


Re: Compromised machines liable for damage?

2005-12-29 Thread Owen DeLong


--On December 29, 2005 5:51:04 AM -0500 [EMAIL PROTECTED] wrote:

> On Wed, 28 Dec 2005 13:20:51 PST, Owen DeLong said:
> 
>> Denying patches doesn't tend to injure the trespassing user so much as
>> it injures the others that get attacked by his compromised machine.
>> I think that is why many manufacturers release security patches to
>> anyone openly, while restricting other upgrades to registered users.
> 
> Color me cynical, but I thought the manufacturers did that because a
> security issue has the ability to convince non-customers that your
> product sucks, while other bugs and upgrades only convince the sheep that
> already bought the product that the product is getting Even
> Better!(tm).

That could be a factor, but, I know first hand from the legal departments
of at least two software "manufacturers" that liability was at least a factor
in the decision, and, they do have concerns about being liable for
damages caused by security flaws in their software.

Owen


-- 
If it wasn't crypto-signed, it probably didn't come from me.


pgpdCOCWcHAy4.pgp
Description: PGP signature


Re: Compromised machines liable for damage?

2005-12-28 Thread Owen DeLong


--On December 28, 2005 11:09:31 AM -0800 Douglas Otis
<[EMAIL PROTECTED]> wrote:

> 
> 
> On Dec 27, 2005, at 5:03 AM, Steven M. Bellovin wrote:
> 
>> 
>> In message  
>> <[EMAIL PROTECTED]
>> om>, "Hannigan, Martin" writes:
>> 
>>> 
>>> In the general sense, possibly, but where there are lawyers there  
>>> is =
>>> always discoragement.
>>> 
>>> Suing people with no money is easy, but it does stop them from =
>>> contributing in most cases. There are always a few who like getting =
>>> sued. RIAA has shown companies will widescale sue so your argument  
>>> is =
>>> suspect, IMO..
>>> 
>> 
>> I've spent a *lot* of time talking to lawyers about this.  In fact,  
>> a few
>> years ago I (together with an attorney I know) tried to organize a  
>> "moot
>> court" liability trial of a major vendor for a security flaw.  (It
>> ended up being a conference on the issue.)
>> 
>> The reason there have not been any lawsuits against vendors is because
>> of license agreements -- every software license I've ever read,
>> including the GPL, disclaims all warranties, liability, etc.  It's not
>> clear to me that that would stand up with a consumer plaintiff, as  
>> opposed
>> to a business; that hasn't been litigated.  I tried to get around that
>> problem for the moot court by looking at third parties who were  
>> injured
>> by a problem in a software package they hadn't licensed -- think
>> Slammer, for example, which took out the Internet for everyone.
> 
> There have been successful cases for pedestrians that used a train
> trestle as a walk-way, where warnings were clearly displayed, and a
> fence had been put in place, but the railroad failed to ensure repair  of
> the fence.  The warning sign was not considered adequate.  Would  this
> relate to trespassers that use an invalid copy of an OS refused  patches?
> Would this be similar to not repairing the fence?  Clearly  the
> pedestrians are trespassing, nevertheless the railroad remains
> responsible for the safety of their enterprise.
> 
> -Doug

While I think it is unfair in the case of the railroad, and, burglars that 
injure themselves in people's stores/houses, it works for me in the case
of software.

Denying patches doesn't tend to injure the trespassing user so much as
it injures the others that get attacked by his compromised machine.
I think that is why many manufacturers release security patches to
anyone openly, while restricting other upgrades to registered users.

Owen


-- 
If it wasn't crypto-signed, it probably didn't come from me.


pgpGdut6jxLtH.pgp
Description: PGP signature


Re: Compromised machines liable for damage?

2005-12-28 Thread Owen DeLong


--On December 28, 2005 9:38:11 AM -0500 Jason Frisvold
<[EMAIL PROTECTED]> wrote:

> 
> On 12/27/05, Owen DeLong <[EMAIL PROTECTED]> wrote:
>> Look at it another way... If the software is open source, then, there
>> is no requirement for the author to maintain it as any end user has
>> all the tools necessary to develop and deploy a fix.  In the case of
>> closed software, liability may be the only tool society has to
>> protect itself from the negligence of the author(s).  What is the
>> liability situation for, say, a Model T car if it runs over someone?
>> Can Ford still be held liable if he accident turns out to be caused
>> by a known design flaw in the car? (I don't know the answer, but,
>> I suspect that it would be the same for "old" software).
> 
> But can't something similar be said for closed source?  You know
> there's a vulnerability, stop using it...  (I'm aware that this is
> much harder in practice)
> 
One other thing I forgot to say here... With closed software, you don't
have the option of fixing it yourself.  With open source, that claim
cannot be made.  As such, since there are some cases in which the
damage done by stopping use must be weighed against the damage
done by continued use, it's a harder question WRT closed software,
especially when it is an operating system.

Owen


-- 
If it wasn't crypto-signed, it probably didn't come from me.


pgp9IOspUaPyk.pgp
Description: PGP signature


Re: Compromised machines liable for damage?

2005-12-28 Thread Owen DeLong


--On December 28, 2005 9:38:11 AM -0500 Jason Frisvold
<[EMAIL PROTECTED]> wrote:

> On 12/27/05, Owen DeLong <[EMAIL PROTECTED]> wrote:
>> Look at it another way... If the software is open source, then, there
>> is no requirement for the author to maintain it as any end user has
>> all the tools necessary to develop and deploy a fix.  In the case of
>> closed software, liability may be the only tool society has to
>> protect itself from the negligence of the author(s).  What is the
>> liability situation for, say, a Model T car if it runs over someone?
>> Can Ford still be held liable if the accident turns out to be caused
>> by a known design flaw in the car? (I don't know the answer, but,
>> I suspect that it would be the same for "old" software).
> 
> But can't something similar be said for closed source?  You know
> there's a vulnerability, stop using it...  (I'm aware that this is
> much harder in practice)
> 
Yes... You say that as if I have a problem with people using bad software
being held liable for the damage it does.  I do not.

> 
> 
>> In general, if the gross act of stupidity was reasonably foreseeable,
>> the manufacturer has a "duty to care" to make some attempt to mitigate
>> or prevent the customer from taking such action.  That's why toasters
>> all come with warnings about unplugging them before you stick a
>> fork in them.  That's why every piece of electronic equipment says
>> "No user serviceable parts inside" and "Warning risk of electric shock".
> 
> So what if Microsoft put a warning label on all copies of Windows that
> said something to the tune of "Not intended for use without firewall
> and anti-virus software installed" ?  :)  Isn't the consumer at least
> partially responsible for reasonable precautions?
> 
Yes.  Again, I have no problem if every user of Windows starts paying
for failing to prevent it from damaging the network (or any other
software that does damage in this context).  Perhaps that will finally
start showing corporate america the true cost of running windows.

>> They feel for the carpenter and the only option they have to help
>> him is to take money from the corporation.
> 
> I'm all for compassion, but sometimes it's a bit much..  :)
> 
No argument.  My point was that it isn't so much the judge as some
aspects of our jury system that are at the root of many of these
decisions.
> 
> I guess, in a nutshell, I'm trying to understand the liability
> issue...  It seems, based on the arguments, that it generally applies
> to "stuff" that was received due to some monetary transaction.  And
> that the developer/manufacturer/etc is given a chance to repair the
> problem, provided that problem does not exist due to gross negligence
> on the part of the developer/manufacturer/etc ...  Does that about sum
> it up?
> 
Mostly.  Certainly, liability is more certain in those circumstances
than if any of those things are not present.

> [From your other mail]
>> SPAM does a lot of actual harm.  There are relatively high costs
>> associated with SPAM.  Machine time, network bandwidth, and, labor.
> 
> *nod*  I agree..  My point here was that SPAM, when compared to
> something like a virus, is *generally* less harmful.  Granted, SPAM is
> more of a constant problem rather than a single virus that may attack
> for a few days before mitigation is possible.  I spend a great deal of
> time tweaking my mail servers to prevent spam..  :)
> 
The primary output of viruses these days is SPAM.  The primary harm done
by viruses is SPAM.  Sure, there are occasional DOS issues, but, there
is actually more harm done by SPAM than DOS from a monetary perspective.

Owen

-- 
If it wasn't crypto-signed, it probably didn't come from me.


pgpTXfc6QyzeA.pgp
Description: PGP signature


Re: Compromised machines liable for damage?

2005-12-27 Thread Owen DeLong
[snip]
> And I would agree with this reasoning.  If the software is defective,
> fix it or stop selling it.  However, I don't think all software
> developers have "control" over the selling of the software after it's
> sent to the publisher.  (I'm by no means intimate with how all this
> works)  So, for instance, if developer A creates product A+, publisher
> P deals with packaging it up, distributing it, etc.  A few months
> later, developer A goes out of business for some insane reason. 
> Publisher P continues to sell the software in which a security hole is
> discovered a month later.  There's no way for developer A to fix the
> hole, they don't exist.  And publisher P isn't near smart enough to
> fix it.  So they just continue selling it.  Life goes on, it
> eventually falls into the bargain bin where publisher P continues to
> package it, but in recycled fish wrap instead of the pristine new
> boxes it used to.
> 
> So is developer A still liable?  Is publisher P liable?  Should they be?
> 
Liability generally ends at death.  Since developer A is essentially dead
(no longer exists), no.

If publisher P is the current copyright owner, then probably yes.

If they have been informed of the defect and continue to sell the defective
product, yes.

> So who do I sue?  McDonalds for selling the coffee?  Or the driver who
> put it between his/her legs?
> 
In the case of an accident where you are the driver she hit, you would
sue the driver.  The driver may then sue McDonalds if the coffee was
"too hot", but, your cause of action is against the direct actor...
The driver, and, the owner of the vehicle that hit you.

> If it's a known issue and the developer continues to ignore it, then
> yeah, they should probably be held accountable.  But, there's still
> the issue of what is bad and what isn't.  Madden 2006 for the PSP
> reboots when I end a franchise mode game.  It destroys the data I just
> spent 30 minutes generating while playing the game.  Is that bad
> enough that the company should be held liable for it?  (Yes, I'm aware
> they're replacing the discs now.  Excellent move on EA's part)
> 
I guess that depends on how much you feel you are harmed by that loss
of data.  However, in that case, you probably accepted an EULA that
says "We aren't liable for the software not functioning."  This is
much more a gray area than what I think is the first issue that should
be addressed.  What if, instead, your PSP was network enabled, and,
at the end of your game, it not only rebooted, but, it wiped out all
data from all PSPs it could find on the network?  Then, the owner
of those PSPs should have a cause of action against EA (and possibly
you).  They didn't agree to an EULA allowing EA's software to wipe
their data.  That's the situation of the third parties being harmed
by exploited hosts.

> There's another form mailer out there that I dealt with, and wrote a
> large post on Bugtraq about, that continues to allow relaying even
> after a complete bug report with a fix.  Should that developer be held
> liable for damages?  It's just spam, it's not really hurting anyone,
> is it?
> 
SPAM does a lot of actual harm.  There are relatively high costs associated
with SPAM.  Machine time, network bandwidth, and, labor.

> Then there's something like Internet Explorer.  Any one of the dozens
> of exploits "allows a remote attacker to assume control of the
> computer" ...  That's bad..  That's definitely an issue.  I could
> agree that the developer should be held liable for that ...
> 
Yes.  These are the sorts of things we are really talking about primarily.

> Maden 2006 I had to pay for.  IE came with Windows, so I didn't
> *really* have to pay for it, depending on how you look at it.  The
> form mailer was free on the internet.  Does having to pay for it
> determine if the developer should be liable?  What if Linux had a
> security hole that was reported and never fixed?  Should Linus get
> sued?  Wow..  who would you even sue in that instance?
> 
You did pay for it.  It was part of what you paid for when you bought
Windows.  If Windows came bundled with your machine, you still paid
for it in the form of buying the machine and it was part of what was
included.  In any case, you still paid for IE.

As to Linux, I don't believe Linus ever sold it.  For the most part,
there's nobody to sue because nobody got paid.  Further, since
it is open source, you have the ability and responsibility to fix it
if you are informed your machine is doing harm.  You don't have the
ability to fix IE.  In the case of packages like Red Hat Enterprise
Linux and such, yes, if they are exploited, it is not unlikely that
Red Hat could be sued by injured third parties, and, this is not
inappropriate.

> Software confuses things a bit I think..  I can agree that an IE bug,
> unchecked, should be liable.  But a form mailer?  It was free to begin
> with, so just move on to something else...
> 
Software doesn't confuse things.  Things given away for f

Re: Compromised machines liable for damage?

2005-12-27 Thread Owen DeLong


--On December 27, 2005 10:39:38 AM -0500 Jason Frisvold
<[EMAIL PROTECTED]> wrote:

> On 12/27/05, Marshall Eubanks <[EMAIL PROTECTED]> wrote:
>> There was a lot of discussion about this in the music / technology /
>> legal community
>> at the time of  the Sony root exploit CD's - which
>> I and others thought fully opened  Sony for liability for 2nd party
>> attacks. (I.e., if a hacker uses the Sony
>> root kit to exploit your machine, then Sony is probably liable,
>> regardless of the EULA. They put
>> it in there; they made the attack possible.) IANAL, but I believe
>> that if a vendor has even a
>> partial liability, they can be liable for the whole.
> 
> But, what constitutes an exploit severe enough to warrant liability of
> this type?  For instance, let's look at some scripts ...  formmail is
> a perfect example.  First, there was no "real" EULA.  I'm definitely
> not a laywer, but I would think that would open up the writer to all
> sorts of liability...  Anyways, the script was, obviously, flawed. 
> Spammers took notice and used that script to spam all over the place. 
> This hurt the hoster of the script, the people who were spammed, and
> probably the ISPs that wasted the bandwidth carrying the spam.
> 
It's not just about the severity of the exploit.  What did you pay
for formmail?  Did the author have a "duty to care"?  If money
did not change hands, then, liability becomes much more difficult
unless you can show gross negligence.  Further, since formmail
is provided in source form, the server owner could have fully evaluated
it for vulnerability prior to deploying it.  Thus, even if there is some
liability, it primarily falls to the person/organization who
placed the script in use on the server, not the author.

> So, should the writer of the script be sued for this?  Is he liable
> for damages?  If that's the case, then I'm gonna hang up my
> programming hat and go hide in a closet somewhere.  I'm far from
> perfect and, while I'm relatively sure there are none, exploitable
> bugs *might* exist in my software.  Or, perhaps, the exploit exists in
> a library I used.  I've written a lot of PHP code, perhaps PHP has the
> flaw..  Am I still liable, or is PHP now liable?
> 
Again, it all boils down to whether money changed hands or not.
If you didn't get paid for your script, you probably aren't liable.
Since PHP is free (and there's not really a legal entity to sue
for it anyway), PHP probably isn't liable.

> This has scary consequences if it becomes a blanket argument. 
> Alternatively, if the programmer is made aware of the problem and does
> nothing, then perhaps they should be held accountable.  But, then,
> what happens to "old" software that is no longer maintained?
> 
Look at it another way... If the software is open source, then, there
is no requirement for the author to maintain it as any end user has
all the tools necessary to develop and deploy a fix.  In the case of
closed software, liability may be the only tool society has to
protect itself from the negligence of the author(s).  What is the
liability situation for, say, a Model T car if it runs over someone?
Can Ford still be held liable if the accident turns out to be caused
by a known design flaw in the car? (I don't know the answer, but,
I suspect that it would be the same for "old" software).

>> I suspect that eventually EULA's will prove to be weak reeds, in much
>> the same way that manufacturers may be
>> liable when bad things happen, even if the product is being grossly
>> misused. My intuition says that
>> unfortunately somebody is going to have to die to establish this, as
>> part of a wrongful death suit.
>> With the explosion in VOIP use, this is probably only a matter of time.
> 
> Personally, I feel that if a person "grossly misuses" a product and is
> hurt as a result, they deserve it.  Within some acceptable reason, of
> course.  One expects that if you place a cup of coffee in your lap,
> that you just purchased, I might add, that it may burn you if it
> spills.  Or, if you puncture a can of hair spray near an open fire,
> you may experience a slight burning sensation a few seconds later.
> 
The first one here is not your best choice of examples.  It turns out
that in that suit, McDonalds was violating ANSI/ISO standards and
handing out liquids that were hotter than the industry considers
"safe".  There is a major difference in the level of injury that
occurs above a certain temperature (I think it's 180F if memory
serves), and, their coffee was shown to be well above that.  They
had been repeatedly informed of this problem prior to the incident
and had refused to do anything about it.

Yes, you expect to get burned, and, if you keep the coffee below
a serving temperature of 180F, then, there's no liability.  However,
serving it above 180F is not "reasonable and prudent" and that is
why the jury found for the plaintiff.

In general, if the gross act of stupidity was reasonably foreseeable,
the manufacturer has

Re: Compromised machines liable for damage?

2005-12-27 Thread Owen DeLong



The reason there have not been any lawsuits against vendors is because
of license agreements -- every software license I've ever read,
including the GPL, disclaims all warranties, liability, etc.  It's not
clear to me that that would stand up with a consumer plaintiff, as opposed
to a business; that hasn't been litigated.  I tried to get around that
problem for the moot court by looking at third parties who were injured
by a problem in a software package they hadn't licensed -- think
Slammer, for example, which took out the Internet for everyone.


Yes, I think this is the only way it will work.  Plaintiffs that are not
subject to the EULA will have to sue the manufacturer of vulnerable
software installed on remote systems that attack their site.  Otherwise,
the liability waivers they signed make it much harder.  Of course, interestingly,
automobile manufacturers cannot get around having to build cars that
meet safety standards regardless of waivers customers may sign.  Perhaps
what we need first is a consortium to agree on a set of standards for
software security followed by someone like Ralph Nader doing the
"Unsafe at any clockspeed" campaign.


The issue of liability based on operational practices is untested.  As
I concluded in that book chapter from 1994, I (and the attorneys who
helped me (a lot) with it) felt that there may very well be cause for a
lawsuit.  However, to the best of my knowledge there have been no court
rulings on this issue.  Unless and until that happens, we're just
guessing.  I'll give two short quotes that illustrate why I'm concerned.
This one is from a standard textbook on tort law:


Yep... I think that is true.  However, unless and until someone steps up
and actually does it (and frankly, I think the effective strategy here
would be coordinating a large number of injured parties in small offices
and residences to sue in small claims court at roughly the same time),
all we'll be able to do is guess.


The standard of conduct imposed by the law is an external one,
based upon what society demands generally of its members,
rather than upon the actor's personal morality or individual
sense of right and wrong.  A failure to conform to the standard
is negligence, therefore, even if it is due to clumsiness,
stupidity, forgetfulness, an excitable temperament, or even
sheer ignorance.  An honest blunder, or a mistaken belief that
no damage will result, may absolve the actor from moral blame,
but the harm to others is still as great, and the actor's
individual standards must give way in this area of the law to
those of the public.  In other words, society may require of a
person not to be awkward or a fool.


So, does that mean that if most of society is ignorant enough to tolerate
insecure buggy software, we must accept that as the standard for software
performance?  That is an unfortunately low barrier indeed for a profession
like software development.  In general, professional liability is different
from general civil liability.  Once money changes hands, you have a much
greater "duty to care" about the potential harm caused by your "product"
than an individual citizen.

For example, a guy that pours gasoline into his gopher holes and lights
it is an idiot.  However, as long as everything he blows up is his own
and he harms noone else, he's still just an idiot, but, not liable.

However, if he packages gas cans and matches together and sells them
with instructions as a "Gopher Eradication Kit", he gets to be liable
for the damage to all the houses of all the people dumb enough to
use his product, and, any neighbors unfortunate enough to live within
the blast radii.

Let's face it, some software vendors are selling the moral equivalent
of a minivan with no seatbelts and no airbags.


The second, a quote from a 1932 (U.S.) Court of Appeals opinion, was
for a case where some barges sank because the tugboat pulling them had
no radio receivers, and hence didn't know the weather forecast:

Indeed in most cases reasonable prudence is in fact common
prudence; but strictly it is never its measure; a whole
calling may have unduly lagged in the adoption of new and available
devices.  It may never set its own tests, however persuasive be its
usages.  Courts must in the end say what is required; there are
precautions so imperative that even their universal disregard will
not excuse their omission. ...  But here there was no custom at all
as to receiving sets; some had them, some did not; the most that
can be urged is that they had not yet become general.
Certainly in such a case we need not pause; when some have thought
a device necessary, at least we may say that they were
right, and the others too slack.
...
We hold [against] the tugs therefore because [if] they had been
prope

RE: Compromised machines liable for damage?

2005-12-26 Thread Owen DeLong

I don't think anyone is talking about suing the writers of the botnet
code.  After all, that's already occurring on those rare occasions when
they can be tracked down.  In some cases, they're even getting
prosecuted.  What people are talking about is suing the authors of
vulnerable and exploitable code.  I think there's merit to this idea,
and, I don't think it will have a negative impact on open source.

Owen


--On December 26, 2005 11:36:02 PM -0500 "Hannigan, Martin" 
<[EMAIL PROTECTED]> wrote:





Botnet code is open source, as far as I know.  Maybe not by design, but I
have gigs of it and its all googleable.

Not being a lawyer, I'd guess the plaintiff size is highly debatable
based on source or destination.

Marty



 -Original Message-
From:   Owen DeLong [mailto:[EMAIL PROTECTED]
Sent:   Mon Dec 26 23:32:04 2005
To: Hannigan, Martin; Joseph Jackson
Cc: NANOG
Subject:RE: Compromised machines liable for damage?

RIAA is a very different context from what we are talking about here.

First, the number of people getting attacked from Open Source systems
is very small, so, you have a very small class of plaintiffs.  Second,
said class of plaintiffs is probably not as well funded as RIAA.

OTOH, the number of people/organizations being attacked from Micr0$0ft
based systems is relatively high, so, a large class of plaintiffs,
and, some of them being enterprises are relatively well funded.

Second, in the case of RIAA, it is businesses suing to do what they
perceive as protecting their profit stream, and, they know they
are suing a collection of defendants that are relatively poorly
funded and have no organization.  In the case of Open Source, I
think there is a pretty good track record of the community coming
to the aid of those that get sued for various reasons (DeCSS comes
to mind).

Sure, it's easy to sue someone who doesn't have any money, but,
there's no point in doing so.  Frankly, it's not the people with
no money that are at risk here.  It's the people with some money
and some assets.  If you have nothing, you're pretty safe ignoring
a civil suit because you have nothing to lose.  Frankly, if RIAA
were to sue me, it wouldn't cost me $250,000 to fight it.  It
might cost me a few thousand if I chose to involve a lawyer in
some portion of the process, but, initially, I think I could
make their life difficult enough to get them to go away without
involving a lawyer.

I've already made MPAA/Disney go away twice without a lawyer.  Admittedly,
they went away before even filing a suit, so, technically, I haven't been
sued, but, I've been threatened by them, and, I'm sure if I'd
buckled under or failed to confront them appropriately, I would
have either gotten sued or ended up handing over money.

The costs of defending a suit are $0 until you hire a lawyer.

Owen


--On December 26, 2005 11:18:46 PM -0500 "Hannigan, Martin"
<[EMAIL PROTECTED]> wrote:




In the general sense, possibly, but where there are lawyers there is
always discouragement.

Suing people with no money is easy, but it does stop them from
contributing in most cases. There are always a few who like getting sued.
RIAA has shown companies will widescale sue so your argument is suspect,
IMO..




 -Original Message-
From:   Owen DeLong [mailto:[EMAIL PROTECTED]
Sent:   Mon Dec 26 23:11:13 2005
To: Hannigan, Martin; Joseph Jackson
Cc: NANOG
Subject:RE: Compromised machines liable for damage?

I've seen this argument time and again, and, the reality is that it is
absolutely
false.

In fact, it will do nothing but encourage freeware.  Liability for a
product
generally doesn't exist until money changes hands.  If you design a piece
of
equipment and post the drawings in the public domain, you are not liable
if someone builds it and harms themselves.  You are liable if someone
pays you for the design, because, the money changing hands creates a
"duty to care".
Outside of a "duty to care", the only opening for liability is if they
can prove that you failed to take some precaution that would be expected
of any "reasonably prudent" person.

So, liability for bad software and the consequences it creates would be
bad for the Micr0$0ft and Oracles of the world, but, generally, very good
for the Free Software movement.  It might turn out to be bad for
organizations
like Cygnus and RedHat, but, that's more of a gray area.

As to the specific example cited...

If no update has been released, in the case of Open Source, that's no
excuse.
You have the source, so, you don't have to wait for an update.  In the
case
of closed software, then, I think manufacturer liability is a good thing
for the industry in general.

Owen


--On December 26, 2005 10:07:20 PM -0500 "Hannigan, Martin"
<[EMAIL PROTECTED]> wrote:




If


RE: Compromised machines liable for damage?

2005-12-26 Thread Owen DeLong
I've seen this argument time and again, and, the reality is that it is
absolutely false.

In fact, it will do nothing but encourage freeware.  Liability for a
product generally doesn't exist until money changes hands.  If you design
a piece of equipment and post the drawings in the public domain, you are
not liable if someone builds it and harms themselves.  You are liable if
someone pays you for the design, because, the money changing hands creates
a "duty to care".

Outside of a "duty to care", the only opening for liability is if they
can prove that you failed to take some precaution that would be expected
of any "reasonably prudent" person.

So, liability for bad software and the consequences it creates would be
bad for the Micr0$0ft and Oracles of the world, but, generally, very good
for the Free Software movement.  It might turn out to be bad for
organizations like Cygnus and RedHat, but, that's more of a gray area.

As to the specific example cited...

If no update has been released, in the case of Open Source, that's no
excuse.  You have the source, so, you don't have to wait for an update.
In the case of closed software, then, I think manufacturer liability is a
good thing for the industry in general.

Owen


--On December 26, 2005 10:07:20 PM -0500 "Hannigan, Martin" 
<[EMAIL PROTECTED]> wrote:





If you want to choke off freeware (GNU, et al.), sure, go after them. I
doubt the licensing agreement allows it though. (IANAL).

I think all you'd do is encourage people to write more music about
'freeing the software'. I'd rather not be stricken in that fashion.

I think that angle is DOA.

Martin


 -Original Message-
From:   Joseph Jackson [mailto:[EMAIL PROTECTED]
Sent:   Mon Dec 26 03:13:02 2005
To: Hannigan, Martin
Cc: NANOG
Subject:RE: Compromised machines liable for damage?

What about the coders that write the buggy software in the first place?
Don't they hold some of the responsibility also?  IE I am running some
webserver software that a bug is found in it.  Attackers use that bug in
the
software to generate a DOS attack against you from my machines.  No update
has been released for the software I am running and/or no warning has been
released. You sue me I sue the coders.  What a wonderful world.  (I'm not
for this but its another side of the issue.)



  _

From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of
Hannigan, Martin
Sent: Sunday, December 25, 2005 9:22 PM
To: Steven M. Bellovin
Cc: Dave Pooser; NANOG
Subject: Re: Compromised machines liable for damage?





Yes, I agree. As usual, I too am 'IANAL'.

Marty



 -Original Message-
From:   Steven M. Bellovin [mailto:[EMAIL PROTECTED]]
Sent:   Sun Dec 25 23:52:27 2005
To: Hannigan, Martin
Cc: Dave Pooser; NANOG
Subject:Re: Compromised machines liable for damage?

In message <[EMAIL PROTECTED]>, "Hannigan, Martin" writes:



Dave, RIAA wins almost 100pct vs p2p'ers it sues. It's an interesting
dichotomy.



"Wins" is too strong a word, since I don't think any have gone to
court -- see
http://www.nytimes.com/aponline/arts/AP-Music-Download-Suit.html

as my source.

Besides, it's a very different situation.  For my take on liability
issues -- note that I'm not a lawyer, and note that this is from 1994
-- see http://www.wilyhacker.com/1e/chap12.pdf


--Steven M. Bellovin, http://www.cs.columbia.edu/~smb









--
If this message was not signed with gpg key 0FE2AA3D, it's probably
a forgery.


pgp3iqPGh02ei.pgp
Description: PGP signature


Re: [ppml] Fw: ":" - Re: Proposed Policy: 4-Byte AS Number Policy Proposal

2005-12-15 Thread Owen DeLong

Actually, for actual implementation, there are subtle differences between
AS 0x0002 and AS 0x00000002.  True, they are the same AS in 16 and 32 bit
representation, and, for allocation policy, they are the same, but, in
actual router guts, there are limited circumstances where you might actually
care which one you are talking about.

Owen
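
To make the 16-bit versus 32-bit representations concrete, here is a minimal
Python sketch (purely illustrative: the asdot()/asplain naming follows the
common textual convention for AS numbers and is not drawn from this thread):

    def asdot(asn: int) -> str:
        """Render an AS number in asdot style: plain decimal below 65536,
        <high16>.<low16> at or above it (asplain is decimal throughout)."""
        high, low = divmod(asn, 65536)
        return f"{high}.{low}" if high else str(low)

    # AS 2 is the same autonomous system whether carried as the 16-bit value
    # 0x0002 or the 32-bit value 0x00000002; only the encoding width differs,
    # which is why allocation policy treats them identically even though a
    # router implementation may care how the value is being carried.
    for asn in (2, 65535, 65536, 327700):
        print(f"AS{asn}: asdot={asdot(asn)}  fits in 16 bits: {asn <= 0xFFFF}")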


--On December 15, 2005 1:45:20 PM -0500 Todd Vierling <[EMAIL PROTECTED]> wrote:



On Wed, 14 Dec 2005, Robert Bonomi wrote:


> That's an example of the lack of plain English in the
> proposal. Why don't we just talk about AS numbers greater
> than 65535 or AS numbers less than 65536?

Because there is more to it than just that.  :)


No, there isn't.  AS numbers are integers.  It just so happens that there
are now two representations of said integers with different domain bounds.

Any other interpretation simply adds too much confusion.  After all, "2
byte AS2" vs. "4 byte AS2" implies *more than* 4 bytes -- because you have
to use metadata beyond the 4 bytes to represent which "type" of AS you
have.




--
If this message was not signed with gpg key 0FE2AA3D, it's probably
a forgery.


pgprNAWP0YkJt.pgp
Description: PGP signature


Re: IP Prefixes are allocated ..

2005-11-28 Thread Owen DeLong
IP prefixes are NOT allocated to AS numbers; they are allocated to
organizations, just like AS numbers.

Perhaps this is part of why you can't find such a list.

Owen
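
For what it's worth, the prefix-to-origin-AS question quoted below is usually
answered from routing-registry data rather than from the allocation records
themselves.  A minimal sketch, assuming an IRR such as whois.radb.net still
answers plain RFC 3912 whois queries on TCP port 43 (the server name and the
example prefix are illustrative placeholders):

    import socket

    def whois_query(server: str, query: str, port: int = 43) -> str:
        """Bare-bones RFC 3912 whois client: send one query line, read to EOF."""
        with socket.create_connection((server, port), timeout=10) as sock:
            sock.sendall((query + "\r\n").encode("ascii"))
            chunks = []
            while True:
                data = sock.recv(4096)
                if not data:
                    break
                chunks.append(data)
        return b"".join(chunks).decode("utf-8", errors="replace")

    # Route objects registered in an IRR carry an "origin:" attribute naming
    # the AS that announces the prefix -- registration data, not live routing
    # state, and only as accurate as what the organization registered.
    if __name__ == "__main__":
        # 192.0.2.0/24 is a documentation prefix, used here only as a placeholder.
        print(whois_query("whois.radb.net", "192.0.2.0/24"))

Whatever server is used, the answer is registration data tied to an
organization's objects, which is consistent with the point above that
prefixes are allocated to organizations rather than to AS numbers.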


--On November 28, 2005 11:45:58 AM +0530 Glen Kent <[EMAIL PROTECTED]> 
wrote:




to different Autonomous systems.

Is there a central/distributed database somewhere that can tell me
that this particular IP prefix (say x.y.z.w) has been given to foo AS
number?

I tried searching through all the WHOIS records for a domain name. I
get the IP address but I don't get the AS number.

Any clues on how I can get the AS number?

Glen




--
If this message was not signed with gpg key 0FE2AA3D, it's probably
a forgery.


pgpSdhlo8yiGS.pgp
Description: PGP signature


Re: What do we mean when we say "competition?"

2005-11-26 Thread Owen DeLong
VZ certainly shouldn't remove any copper that doesn't belong to VZ.  So,
unless they are the ILEC in Apple Valley, that may or may not be an issue.

Owen


pgpYRQjKGEHor.pgp
Description: PGP signature


Re: What do we mean when we say "competition?"

2005-11-16 Thread Owen DeLong



--On November 16, 2005 9:25:29 PM -0800 David Barak <[EMAIL PROTECTED]> 
wrote:






--- Owen DeLong <[EMAIL PROTECTED]> wrote:



> Windows 98 price (in 1997) -> $209
> Office 97 Standard (in 1997) -> $689
> Windows XP price (now) -> $199.
> Office 2003 (now) -> $399.
>
> Want to try that again?
>
Yes... Here's some more accurate data:

Windows 3.1 price $49
Windows 3.1.1 price $99
Windows 95 (Personal) price $59
Windows 98 (Personal) price $99
Windows ME (Home) price $99
Windows NT WS price $99
Windows 2000 Pro price $299
Windows XP Pro Price $299


Just because I didn't quote the emails from my history does
not mean these are not accurate.  These are
the list prices quoted by vendors of M$ products over the years
in my mail history file.  It's not an assertion, it's actual data.
True, they are not the "street" or "discounted" prices, but,
they are the MSRP.


So it goes from 209 to either 199 or 299 depending on
whether you want "home" or "pro."  That's hardly an
egregious markup for a better OS, several years later.


Without getting into the argument about which version of
Windows is or is not an improvement, it's certainly the
most expensive OS in the market today:

MacOS X: $99 (List) -- Includes HTTP, DNS, DHCP servers and
other basic essentials like SMTP and LDAP servers, etc.
http://www.apple.com

Windows XP Pro $299 (List) -- Includes HTTP (sort of), but,
no ability to be DNS, DHCP, SMTP, or, LDAP server without
additional software.
http://www.microsoft.com (pricing link)

Solaris x86 $49.95 (CD) -- $9.95 DVD, $0 download
http://www.sun.com (downloads->get solaris 10) Full Server
or desktop Version

Red Hat Enterprise Linux Basic $179 -- Includes all Server
software, but, missing some GUIs for managing, limited support.
http://www.redhat.com/en_us/USA/rhel/compare/client

Fedora Core $0 -- Full server/desktop version
http://fedora.redhat.com

FreeBSD $0 -- Full server/desktop version
http://www.freebsd.org

So... Microsoft has a monopoly on Windows and the basic OS costs
you $299 with virtually no server capabilities.

In the POSIX-style OS world, where you have multiple competitors,
prices range from $0 to $179.

Next?


I was doing a similar apples-to-apples comparison.
Look, just accept that not all data points will line
up with your assertions - find some others instead.
If there are so many, then there have to be better
examples than these.


True, but, this one does.  There are multiple ways to skin a cat,
and, multiple versions of Windows pricing.  Any way you slice it,
MicroSoft remains the most expensive OS in the market.
Everyone else's OS prices have come down since the days of Win 3.1;
Microsoft's have gone up roughly sixfold (from $49 to $299).
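
For the record, that ratio works out as follows (a trivial check, using the
list prices quoted above):

    # Quick sanity check of the price increase claimed above.
    old, new = 49, 299
    print(f"{new / old:.1f}x ({(new - old) / old:.0%} increase)")  # ~6.1x, ~510%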




Finally, the price of the client software is actually not the primary
problem with M$ monopolistic pricing.  It is the back-end software
where they really are raising the prices.  Compare NT Server to
2K or XP Server or Advanced Server.  XP AS is nearly double 2000 AS
last time I looked.


Microsoft hardly has a monopoly on servers.  If their
prices are too high, use something else.


Microsoft has a monopoly on Active Directory servers.
Microsoft has a monopoly on Exchange servers.

If you are unfortunate enough to need either of these things
(I thank my lucky stars every day that I am not), you have to
buy them from Micr0$0ft.




> The argument regarding ILECs is reversed.  I appreciate the citation
> of Standard Oil, but it is a fallacy to think that there is a
> one-to-one mapping between SO and any/all of the ILECs.
>
True.  What is the point?


Standard Oil is a strawman argument.  The ILECs are
dissimilar in nature and behavior from Standard Oil.
An assertion otherwise requires evidence.


I think that the anti-competitive behavior of SBC and that
of SOCA are, indeed, very similar.  If you prefer a more
similar example, we can compare Comcast and SBC, or, perhaps
you would prefer to compare Pacific Bell and US West (prior
to them all becoming part of SBC).

Pick your poison, there's certainly a record of anti-competitive
practices available.


"History doesn't repeat itself.  Historians do."
-unknown (to me at least)


Unknown and untrue... History is replete with examples of history
repeating itself.  In many ways, WWI and WWII are examples of history
repeating itself.  Korea, Viet Nam, Iraq are examples.  Sure, slightly
different results, but, if you roll dice more than a couple of times,
you usually get different numbers, too.  Many Many Many similarities
in costs, casualties, efficacy, etc.

If you want closer examples:  US Involvement in Viet Nam vs. Soviet
involvement in Afghanistan.  VERY similar results all the way around.


Don't fight the last war, and especially don't fight
it in a way which will impede future innovation.


Agreed.  Instead of granti

Re: What do we mean when we say "competition?"

2005-11-16 Thread Owen DeLong

> Windows 98 price (in 1997) -> $209
> Office 97 Standard (in 1997) -> $689 
> Windows XP price (now) -> $199.
> Office 2003 (now) -> $399.
> 
> Want to try that again?
> 
Yes... Here's some more accurate data:

Windows 3.1 price $49
Windows 3.1.1 price $99
Windows 95 (Personal) price $59
Windows 98 (Personal) price $99
Windows ME (Home) price $99
Windows NT WS price $99
Windows 2000 Pro price $299
Windows XP Pro Price $299

If you're going to use list prices, use list prices all the way through.
The above represent, to the best of my knowledge, M$ retail pricing for
the lowest level of their "client" version of their OS available at
the time.

I confess I haven't followed pricing on M$ Office, but, I'm willing to
bet that an apples-to-apples comparison would reveal similar results.

Finally, the price of the client software is actually not the primary
problem with M$ monopolistic pricing.  It is the back-end software
where they really are raising the prices.  Compare NT Server to
2K or XP Server or Advanced Server.  XP AS is nearly double 2000 AS
last time I looked.


> The problems most people have with microsoft's
> monopoly status have nothing whatsoever to do with the
> price of the software which forms the basis of their
> monopoly (windows + office), but rather their
> willingness to use the profits from them to subsidize
> other losing ventures to drive out other competitors.
> 
Actually, it's both.

> The argument regarding ILECs is reversed.  I
> appreciate the citation of Standard Oil, but it is a
> fallacy to think that there is a one-to-one mapping
> between SO and any/all of the ILECs.  
> 
True.  What is the point?

> Assertions that "monopolies do X and they're bad, and
> we know that Y will eventually do bad because they're
> a monopoly" are circular.
> 
Statements like "In the past, monopolies have done X, and, the
results of X are bad.  Since Y is a monopoly, we can expect them to do
X as well, with similar negative results." are not circular.  They
are attempting to learn from history rather than repeat it.

There are a number of monopoly ILECs in the US which engage regularly
in anticompetitive practices and use their ownership of the LMI to
reduce competition, delay innovation, and, provide less than
acceptable service to their subscribers.  If you don't believe this,
please look through the records of almost any PUC in the country.

Since that is the case, I cannot believe that preserving such
a monopoly on LMI is a good thing.

Since the market is risky enough for deploying LMI even once, you will
have a hard time arguing that the market exists to pay for multiple copies
of a given LMI in order to support competition.

Owen

-- 
If it wasn't crypto-signed, it probably didn't come from me.


pgpIxcungL4MH.pgp
Description: PGP signature


RE: What do we mean when we say "competition?" (was: Re: [Latest draft of Internet regulation bill])

2005-11-16 Thread Owen DeLong


--On November 16, 2005 4:23:20 AM -0800 David Schwartz
<[EMAIL PROTECTED]> wrote:

> 
> 
>> In any case, the bottom line is that whether through subsidy, "deal",
>> or other mechanism, the "last-mile" infrastructure tends to end up being
>> a monopoly or duopoly for most terrestrial forms of infrastructure.
>> As such, I think we should accept that monopoly and limit the monopoly
>> zone to that area (MPOE<->B-Box or MPEO<->MDF) and prevent an unfair
>> advantage by separating the management of that section of infrastructure
>> from the service providers offering services which use said
>> infrastructure.
> 
>   This is the same "create a free market through extensive regulation" 
> that
> has created the disaster we have now. Any last mile technology whose cost
> of deployment can only be justified by the value of a monopoly on its
> deployment just won't be deployed in this model. That's not a free market.
> 
>   This separation model may turn out to be a very good one or a very bad
> one. But if we choose it and stick with it, what will happen in 50 or 100
> years when it's either broken or irrelevant? Remember, we got to where we
> are now by choosing models that made sense in the voice telco time and
> make no sense at all now.
> 
The model we have now is a certain amount of facilities from a given center
to each residential unit within that center's "serving area".  For any form
of terrestrial facilities, I don't see many alternatives to that model.  I
don't see how you plan to change that model.  Please explain to me what the
alternative model for deploying last-mile terrestrial facilities is.

As long as we are stuck with that model, I don't see how you will ever get
to a point where parallel facilities are cost effective to deploy, and, I
don't think that's necessary to get them deployed.  The reality is that
a "monopoly on the facilities" isn't required to make them cost effective
to deploy, but, it is unlikely to ever be cost effective to deploy parallel
facilities to create competition.  This is the "natural monopoly" scenario
of which I speak.

If such facilities would never be deployed under said model, then, why do
we have:

The golden gate bridge
The bay bridge
The Carquinez straits bridge
The new Carquinez straits bridge

The Interstate Highway system

Residential Telephone service without party lines

CATV

Realistically, for last-mile base infrastructure, there are really only a
few options today.  Co-ax, UTP, and Fiber.  A carrier neutral monopoly
provider of these base facilities in a given serving area (and nothing
says the same monopoly has to run all three) could serve multiple providers
of just about any possible service.

If we come up with a new terrestrial delivery method which has sufficient
promise, I'm betting it won't be that hard to get it deployed.  Fiber,
however, scales pretty far.  A single DWDM pair to the home really is
a pretty large amount of bandwidth.  We'll have many years of warning on
superior technology as it will have to be well and truly deployed in
the backbone prior to any need for it at the edge.

>   Had we done this twenty years ago, the last mile would be dialup and
> billions of public dollars would have been spent to create and maintain an
> irrelevant technology. Meanwhile, the newer technologies wouldn't be
> deployed.
> 
Nope... That shows me how much you truly don't understand how little I'm
talking about monopolizing.  The UTP between the MPOE and the MDF can be
used to serve POTS, ISDN, DSL, DS0, T1, and other services.  Since any
service provider could put any equipment they wanted at the ends of any
of those wire pairs, paying the monopoly maintainer only for the lease of
the dry copper pair, we would have seen a much more rapid deployment of
ISDN and DSL because the RBOCs would not have had any power to delay it.
We would have seen multiple providers competing for PSTN service on an
equal footing.  We might have seen providers offering true T1 services
at residential pricing.

>> This, at least on a theoretical level creates a carrier-neutral
>> party managing the monopoly portion while maximizing and levelling
>> the playing field in all other areas.
> 
>   A carrier-netural party may not be technology neutral, business model
> neutral, or neutral in many ways that may turn out to be important. As I
> see it, you give up on everything that's important from the very first
> step. What if a non-carrier neutral last mile turns out to be the scheme
> most people really want when it's offered to them?
> 
What technology... This is literally just the dumbest layer 1 part of the
network.  It's Wire or Fiber.  There aren't really any other options.  I
don't care if we create a monopoly for each of these technologies.  Since
all they can do is lease an unlit/unpowered piece of wire or fiber from
a serving center to a building MPOE, and, nothing else, and they are not
allowed to

RE: What do we mean when we say "competition?" (was: Re: [Latest draft of Internet regulation bill])

2005-11-16 Thread Owen DeLong



--On November 15, 2005 11:02:18 PM -0800 David Schwartz 
<[EMAIL PROTECTED]> wrote:






--On November 15, 2005 8:14:38 PM -0800 David Schwartz
<[EMAIL PROTECTED]> wrote:



>> --On November 15, 2005 6:28:21 AM -0800 David Barak
>> <[EMAIL PROTECTED]> wrote:



>> OK... Let me try this again... True competition requires
>> that it be PRACTICAL for multiple providers to enter the
>> market, including the creation of new providers to seize
>> opportunities being ignored by the existing ones.



>The worse the existing provider is, the more practical it is to
> compete with them. If they are providing what people want at a reasonable
> price, there is no need for competition. If they are not, then it
> becomes practical for multiple providers to enter the market. If you
> assume that the cost to develop existing infrastructure is not insanely
> less than the cost to develop new infrastructure, the isolation from
> competition comes directly from the investment.



1.  The existing infrastructure is usually all that is needed for
many of the services in question.  Laying parallel copper
as a CLEC is not only prohibitively expensive, in most
areas, it's actually illegal.  Usually, municipalities
have granted franchise rights of access to right of
way to particular companies on an exclusive basis.  That
makes it pretty hard for a competitor to enter the market
if they can't get wholesale access to the existing copper.


For now this may be true. But you'll set up another generation of the
same problem if you continue to advocate subsidized infrastructure. At
some point that infrastructure will be inadequate, and you will have done
nothing to make it easier to build competitive new infrastructure. If
municipalities granting monopolies is a problem, then stop such monopolies
-- don't advocate them!


The problem is that because of cost and other factors of last-mile
deployment of terrestrial infrastructure, these are natural monopolies
whether you like it or not.  For example, how many streets from how
many different providers pass in front of your house?  How many different
telcos have copper to the junction box that could be used to provide
service to your home?  How many cable companies have fiber or co-ax
in your street?  For the vast majority of the united States, it is
very hard to answer anything other than 0 or 1 to any of these
questions.

The primary problem as I see it is not the monopoly of the infrastructure,
but, the inherent connection between the management of that monopoly
infrastructure and one of the competitors for the provision of services
over that infrastructure.


2.  The existing copper was actually deployed (at least in most
of the United States) using public subsidies.  The taxpayers
actually paid for the network.  The physical infrastructure
should be the property of the people.  The ownership claim
of the telephone companies is almost as baseless as the
Verisign claim that they own the data in whois.


It doesn't much matter and it can't be fixed. The static value of the
infrastructure is basically depreciated to zero by now. The profits have
been reaped. Don't justify future bad decisions on past inequities that
can't be fixed anyway. Just start right from now on.


If I thought what I was suggesting was a bad decision, I wouldn't be
suggesting it.  However, I think there is much more justification for
this decision than past inequities.  As long as you have an area
that tends to create a natural monopoly and allow one competitor that
uses that infrastructure to also own said infrastructure, it creates
an unfair environment for other competitors.

Are you really advocating that the market is best served by multiple
providers laying last-mile fiber?  Doubling the cost of FTTH to
create just two FTTH providers in an area seems pretty stupid to
me.  OTOH, a 10% increase (probably much less) in the cost of FTTH
to facilitate a virtually unlimited number of service providers
being able to access said fiber on a consumer-choice basis
doesn't seem so stupid to me.


>For example, if Bill Gates took a few billion dollars out
> of his pocket
> and launched 80 satellites to provide wireless Internet access, it
> would be damn hard to compete with him if he wasn't trying to recover
> those few
> billion dollars. But if you spend a few billion, you get a few billion
> worth. Anyone else can spend the same amount and get the same
> advantage.



3.  Except when you consider that there are only so many orbital
slots that can be maintained.  (see 1 above as well).  If Bill
manages to launch N satellites and N leaves N/2 orbital slots
available for other uses, then, it's pretty hard to launch
another N satellites at any cost.


The present infrastructure in no way impedes the construction of future
infrastructure. If it d

RE: What do we mean when we say "competition?" (was: Re: [Latest draft of Internet regulation bill])

2005-11-16 Thread Owen DeLong



--On November 16, 2005 1:48:39 AM -0500 Sean Donelan <[EMAIL PROTECTED]> 
wrote:



On Tue, 15 Nov 2005, Owen DeLong wrote:

areas, it's actually illegal.  Usually, municipalities
have granted franchise rights of access to right of
way to particular companies on an exclusive basis.  That
makes it pretty hard for a competitor to enter the market
if they can't get wholesale access to the existing copper.


Where do you think this happens?  Federal law and FCC regulations don't
permit exclusive franchises, and require cities to allow non-discriminatory
access to right of ways.


Try to be a competing cable company in San Jose.

The city seems to have interpreted that to mean backbone rights of ways
and not last-mile rights of ways in neighborhoods.




2.  The existing copper was actually deployed (at least in most
of the United States) using public subsidies.  The taxpayers
actually paid for the network.  The physical infrastructure
should be the property of the people.  The ownership claim
of the telephone companies is almost as baseless as the
Verisign claim that they own the data in whois.


Again, where are these public subsidies?  In rural (i.e. non-RBOC) areas
with USDA borrowing authority which I think is actually a revolving
borrowing authority, i.e. rural utilities have to pay the money back?  I
don't think the RBOCs ever qualified for USDA borrowing.

I think you are confusing taxpayers with shareholders and ratepayers. In
some places governments provide companies incentives to attract
investment in their areas, such as building new factories, etc; but
normally people don't think that gives the government a lien on
the factory.


While this is true, you will find that it is my considered opinion
that if HP wants to call it the HP arena, they should have reimbursed
me my 5% utility tax that I have been paying for years and continue
to pay towards the cost of building it.

I am actually opposed to government doing this in any form other than
a loan which is expected to be paid back.  Especially when it comes
to such things as ballparks, arenas, etc.




Semi-regulated monopolies that think they own an infrastructure
built with taxpayer money. (see also 2 above)


Again, I think you are confusing taxpayers with shareholders and
ratepayers.



Huh?  How does this favor one set of business models?  What it does is
take the portion of the infrastructure that was built with taxpayer
money and put it back in the hands of the taxpayer so that whatever
carrier the taxpayer wants to buy service from has equal access to the
infrastructure.


What taxpayer money, other than the government paying its telephone
bills, do you think was used to build the RBOC or MSO infrastructure?



My understanding, as I stated, was that in the pre-Greene days, cities
actually paid AT&T a "fee" to get neighborhoods wired either retroactively,
or, as they were built.


Today, nobody can put CATV infrastructure anywhere in San Jose
if their name isn't Comcast.  Period.  The city sold us out to
an exclusive franchise deal.  The current bill proposed eliminates
that.  That's a good thing.


Exclusive cable franchises were eliminated by federal law in 1992.

Comcast has a non-exclusive franchise in San Jose.  Of course, I live
across the railroad tracks in Sunnyvale, another non-exclusive franchise
territory dominated by Comcast.  Comcast has chosen not to deploy advanced
cable services on my side of the railroad tracks.  The original cable
franchises were often divided up into multiple areas in a city, e.g.
Philadelphia has four different franchise areas, City of Los Angeles has
14 different franchise areas.  Cable companies had a phased roll out of
services in different areas over many years.  Even today, cable companies
don't have DVR, HDTV, Voice or Video on Demand rolled out to 100% of their
service areas.


So... Who besides Comcast is operating in San Jose?  Nobody.  Why?  Because
whether you consider their deal an exclusive franchise or not, for
all practical purposes, it is.  Comcast got very favorable rates from
the city on a number of things in exchange for promising to build out
a certain level of infrastructure.  As I see it unless the city is
obliged to provide the same deal to any other company, that's effectively
a subsidy supporting a Comcast monopoly.


Currently cable companies do not need to obtain a state license to offer
voice or telecommunication services over its facilities.  Telephone
companies, and other cable competitors, still need to negotiate
individual municipal franchises in order to offer video services over
their facilities.



In any case, the bottom line is that whether through subsidy, "deal",
or other mechanism, the "last-mile" infrastructure tends to end up being
a monopoly or duopoly for most terrestrial forms of infrastructure.

Re: What do we mean when we say "competition?"

2005-11-15 Thread Owen DeLong

I think what is really represented there is that because
they own an existing network that was built with public
subsidy and future entrants have no such access to public
subsidy to build their own network, ...


Sean's post correctly identified the problem with this
assertion, so I won't


And I provided a response to Sean's email, so, I won't
repeat it here.


The government should recognize that the existing build
has actually been paid for mostly by public subsidy anyway
and as such, should require the ILECs to split into two
separate divisions.


You mean the existing FIBER build was mostly paid by
public subsidy?  Do you have a reference for that?


No... I'm talking about "last-mile" infrastructure, not
backbone.  I'm much less concerned about the cost of
new backbone _IF_ providers can get fair and equal
access to public right-of-way.

Most places have no fiber "last-mile".  Some do.  Of those
that do, I know that many were installed by cable companies
and that there are in many of those places utility taxes
that are being collected and passed along to at least
partially fund said buildout.  I know that Comcast
signed a huge sweet-heart deal with the city of San Jose,
for example before they started tearing up my neighborhood.
They seem to have laid interduct to the curb and co-ax
to the home.  I haven't seen them bring any fiber anywhere
yet, but, I presume that's what the interduct is for at
some point.

Mostly all they've delivered so far is damage.  4 separate
loss-of-phone service incidents (they cut F1 (twice), F2
(once) and the cable in my street (once)).  I'm not a Comcast
subscriber and probably never will be, but, that doesn't
prevent the city from taxing me to support this buildout.


One division would be a wholesale-only infrastructure delivery
company that would maintain the physical infrastructure.  As part
of this, ownership of the physical infrastructure in place would be
transferred to an appropriate local civil body (city, county,
district, etc.) and said body should have an initial 5 year contract
with the infrastructure portion of the ILEC to provide existing
services on a provider-neutral basis (same price to all ILECs,
CLECs, etc.).

At the end of that 5 year contract, the maintenance of the
infrastructure should be up for bid, and, if the existing ILEC
infrastructure portion can't win the bid, they are out of luck.


I don't know how familiar you are with what the
government contracting process is like, but the word
"unpleasant" comes to mind: it's long, hard, and
cumbersome.  Your model would substantially increase
the amount of government contracting required, so you
would need to be able to show a benefit to society of
corresponding magnitude.


Huh?  I'm talking about doing this once every 5 years
so that the infrastructure management company has to
face some potential recourse if they do a lousy job.
I'm not talking about switching contractors on a monthly
basis or anything like that.  Actually, I think a once
every 5 year contract would be a lot less cumbersome
than the number of PUC applications processed today.
How familiar are you with THAT process?


Right, but, faced with potential competition, they are
notorious for temporarily lowering prices well below
sustainable levels in order to eliminate said competition.


Are you alleging that the ILECs/RBOCs are providing services below
cost?  If so, call a regulator.  If not, while the profits may be
lower than desired by the ILEC/RBOC, it's certainly "sustainable".


I'm alleging that the ILECs/RBOCs have lower costs than
their competitors because they own an infrastructure
that was paid for, at least in large part, by others.
Further, they have an incentive to provide better service
to themselves than to their competitors and do so.


The '96 telecom act did nothing to take the last mile
infrastructure out of the hands of the existing ILEC.


You are correct.  However, the '96 telecom act did
give lots of other companies the OPPORTUNITY to build
their own last mile access.  Your proposal actually
drives toward a more monopolistic, regulated
environment.


Not really.  First, the '96 telecom act did nothing to
remove Cities' ability to enter into exclusive franchise
agreements for public right-of-way.  Second, my proposal
includes the idea of OPEN ACCESS to public right of
way for anyone who wants to build infrastructure and
the elimination of such franchise deals.

So, my intent, at least, is to give equal access to the
existing infrastructure for all comers while simultaneously
making putting new infrastructure in public right-of-way
more accessible to more providers.

That having been said, the reality is that there is no
rational cost-model where it makes sense to put parallel
separate fiber/copper/whatever into every home/business/etc.

The last mile is notoriously the highest cost with the
lowest return.  As such, it lends itself to natural
monopoly regardless of other factors.  Recognizing
this fact and limiti

Re: What do we mean when we say "competition?"

2005-11-15 Thread Owen DeLong



--On November 15, 2005 11:23:50 PM -0500 Sean Donelan <[EMAIL PROTECTED]> 
wrote:



On Tue, 15 Nov 2005, Owen DeLong wrote:

I think what is really represented there is that because
they own an existing network that was built with public
subsidy and future entrants have no such access to public
subsidy to build their own network,


Some people may think "public subsidy" implies using taxpayer funds such
as giving incentives to companies to build factories, job training
programs, re-locate corporate headquarters or even build sports
stadiums.

Are you refering to the exclusive franchises granted to various cable and
telephone companies in parts of the country as the "subsidy?"  Or are you
referring the the High Cost Support funds which used to be implicit
internal transfers in the old Bell System (not taxpayer funds), or now
explicit transfers through the Universal Service Fund? Or are you
referring to the US Department of Agriculture Rural Utilities Service
financing which assists non-RBOC rural telephone and utility companies?


A combination of the franchises (which at least in San Jose, hardly
what I would call rural) and the pre-1996 copper plant.  Certainly,
I can guarantee you that the copper in many neighborhoods in San Jose
dates back to the 1970s.  My neighborhood is one such area.  According
to my local Cable Maintenance guys, the icky-pick cable that passes
for an F1 from my CO was laid in the 1960s and hasn't been replaced
(except a few feet here and there thanks to Comcast's inept
contractor and their rock-wheels).  As I understand it, up until
the divestiture, AT&T received a certain amount of tax funding for
each neighborhood they laid copper into.

There is no indication in most of San Jose that the pre-1996 copper
is going away any time soon.

Finally, claiming that USF is just an explicit transfer is a fallacy.
Look on your phone bill.  Have you ever seen anyone who receives
a credit on the USF or HCR lines on their bill?  Everyone I've ever
seen is a charge.  So, either the phone companies are pocketing
that money, or, there's some group of citizens somewhere who
are receiving what my friends and I are putting into that pot.


Competitors have been given access to the legacy telephone copper plant
(but generally not the cable coaxial plant) in most of the country. The
legacy copper outside plant is quickly being replaced by post-1996 outside
plant.  Soon there may be little or no pre-1996 outside copper plant
left.  Ownership of inside wiring was transfered to the property owner
a couple of decades ago.


Yes, but, the FCC is now reducing that access.  Also, the fact that
the current ILEC acts as gatekeeper for said access results in various
anti-competitive practices being used by the ILEC to reduce service
quality to competitive carriers.  OTOH, if the ILEC was in a position
of either operating the infrastructure, OR, competing with other
providers, not doing both at the same time, I believe the behavior
would be substantially different.  As it stands today, the ILEC
has a substantial incentive to provide better infrastructure response
to its in-house service provider than to competing service
providers that don't own their own infrastructure.


Several municipalities in the US have spent taxpayer funds, or
taxpayer-backed funds, to build municipal outside plants.


True.


What's interesting is there is relatively little competitive activity or
demand for access in locations (i.e. rural) with the largest government
incentives,  while there is a lot of demand in areas (i.e. urban) which
had minimal or no government incentives and were funded by shareholders
and other investors.  The RBOCs and MSOs have been selling off their rural
assets to other companies for many years.


Also true, but, in part, that is why those incentives are there.


So what is it exactly you think taxpayer funds paid for and should now
own?


1.  Most of the existing pre-1996 copper "last-mile" infrastructure
2.  The right-of-way
3.  Most of the B-Boxes
4.  At least an easement for access to the MDFs if not the MDFs themselves.
5.  The ridiculous amount of money granted to Pacific Bell as a result
of A-95-12-03 where they actually convinced the PUC that converting
from D4/AMI to B8ZS-ESF required them to completely replace their
inter-co infrastructure and that they only had to do this to accomodate
ISDN.  (At the time most of the D4/AMI infrastructure was deployed,
the need for and superiority of B8ZS/ESF was well known and this
was really just another example of Pacific Bell's passive-aggressive
attitude towards ISDN).

I'm sure if I reviewed the last 10 years of rulings I could find other
examples of Pacific Bell/Pacific Telesis/SBC receiving subsidies disguised
as rate-hikes from the California PUC.

Owen



pgpKr1IuQUNRS.pgp
Description: PGP signature


RE: What do we mean when we say "competition?" (was: Re: [Latest draft of Internet regulation bill])

2005-11-15 Thread Owen DeLong



--On November 15, 2005 8:14:38 PM -0800 David Schwartz 
<[EMAIL PROTECTED]> wrote:






--On November 15, 2005 6:28:21 AM -0800 David Barak
<[EMAIL PROTECTED]> wrote:



OK... Let me try this again... True competition requires
that it be PRACTICAL for multiple providers to enter the
market, including the creation of new providers to seize
opportunities being ignored by the existing ones.


The worse the existing provider is, the more practical it is to
compete with them. If they are providing what people want at a reasonable
price, there is no need for competition. If they are not, then it
becomes practical for multiple providers to enter the market. If you
assume that the cost to develop existing infrastructure is not insanely
less than the cost to develop new infrastructure, the isolation from
competition comes directly from the investment.


1.  The existing infrastructure is usually all that is needed for
many of the services in question.  Laying parallel copper
as a CLEC is not only prohibitively expensive, in most
areas, it's actually illegal.  Usually, municipalities
have granted franchise rights of access to right of
way to particular companies on an exclusive basis.  That
makes it pretty hard for a competitor to enter the market
if they can't get wholesale access to the existing copper.

2.  The existing copper was actually deployed (at least in most
of the United States) using public subsidies.  The taxpayers
actually paid for the network.  The physical infrastructure
should be the property of the people.  The ownership claim
of the telephone companies is almost as baseless as the
Verisign claim that they own the data in whois.


For example, if Bill Gates took a few billion dollars out of his pocket
and launched 80 satellites to provide wireless Internet access, it would
be damn hard to compete with him if he wasn't trying to recover those few
billion dollars. But if you spend a few billion, you get a few billion
worth. Anyone else can spend the same amount and get the same advantage.


3.  Except when you consider that there are only so many orbital
slots that can be maintained.  (see 1 above as well).  If Bill
manages to launch N satellites and N leaves N/2 orbital slots
available for other uses, then, it's pretty hard to launch
another N satellites at any cost.


If he already has the satellites and is providing the service other
people want at a low price, then other competitors will lose. But so what?
Consumers win. And competition doesn't exist to benefit the competitors.


4.  But, what tends to happen instead is that Bill charges whatever
he can get to recoup his billions until someone else launches
their satellites (has expended the capital).  Then, when they
start to go after revenue, Bill drops his prices to something
they can't sustain because they don't have his bankroll and
have to recoup their costs.  They go out of business and Bill
either buys their satellites, or, they become space-junk.
Bill brings his prices back up to previous levels, and,
consumers lose and the competition loses too.

Even if Bill doesn't actually do this, the knowledge that he could
causes investors to view the new satellite company as a bad risk,
so, Bill's monopoly position prevents investment into competitive
entry into the market.

Finally, since Bill doesn't have to worry about anyone else being
actually able to launch competing satellites, Bill has no reason
to innovate unless Bill can see a much higher profit margin
at the end of said innovation. (look at today's Telco as a prime
example of this form of complacency.  Actually, telco's are
very innovative, but, they focus on regulatory innovation instead
of technical innovation).


If he already has the satellites but is not providing the service other
people want or isn't charging a reasonable price, or both, then anyone
else can make the same infrastructure investment for a comparable cost.
If he's not satisfying demand, the demand is still there, and he's just
losing some of the benefits his infrastructure could be giving him.


5.  But, if you want this analogy to match the current copper plant
in the ground in most of the US, then, you have to also account
for the fact that Bill received 30-45 of his 60 billion in
investment in the form of public subsidies.  Are you going to
give all comers the same public subsidy (blank check)?  Instead,
you end up with exactly what we have today in the telcos.
Semi-regulated monopolies that think they own an infrastructure
built with taxpayer money. (see also 2 above)


No... Actually, the lack of market forces in the 
