[ NNSquad ] Vint Cerf talks with Peter Nowak about Google/Verizon deal.

2010-08-13 Thread Kevin McArthur
Peter Nowak has interviewed Rick Whitt and Vint Cerf on the 
Google/Verizon deal. The audio is here:


http://www.cbc.ca/video/news/audioplayer.html?clipid=1565890135

A parsing of the interview is done in this story:

http://www.cbc.ca/technology/story/2010/08/13/net-neutrality-google-vint-cerf.html

FYI,

Kevin McArthur


[ NNSquad ] Re: Peering dispute cuts off Sprint-Cogent Internet traffic

2008-10-31 Thread Kevin McArthur

George,

In this case, however, we're talking about major carriers, which have
many peers (they are massively multihomed). There are multiple routes
around the break, as evidenced by the fact that people are using
proxying services to get around the damage at the application layer.


As for BGP propagation: if it's following the RFC, then the alternate
routes were already advertised before the peering agreement broke down;
they just happened to be longer paths than the one through the direct
peer. Once that peer dies, routing should fall back to the longer paths
in near real time -- within moments, as soon as the failed route stops
being advertised (which it can't keep doing if it can't reach the other
end). This is a core routing principle of the Internet, and how it is
supposed to tolerate attacks on infrastructure. If this mechanism isn't
working, then we have some serious resiliency problems on critical
backbones.
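
As a rough illustration of that fallback -- toy Python with made-up AS
numbers, prefixes and peer names; real BGP best-path selection weighs
far more attributes than AS-path length:

# Toy illustration of BGP best-path fallback when a peering session dies.
# Hypothetical ASes/prefixes only; shortest AS path stands in for the
# full best-path algorithm.

routes = {
    "192.0.2.0/24": [
        {"next_hop": "peer-A", "as_path": [64500, 64510]},          # short path via the direct peer
        {"next_hop": "peer-B", "as_path": [64501, 64502, 64510]},   # longer path via a transit provider
    ],
}

def best_path(paths):
    """Pick the advertisement with the shortest AS path (toy tie-break)."""
    return min(paths, key=lambda p: len(p["as_path"])) if paths else None

def withdraw(prefix, dead_peer):
    """Drop all advertisements learned from a peer whose session dropped."""
    routes[prefix] = [p for p in routes[prefix] if p["next_hop"] != dead_peer]

print(best_path(routes["192.0.2.0/24"]))   # via peer-A while the session is up
withdraw("192.0.2.0/24", "peer-A")          # peering breaks down
print(best_path(routes["192.0.2.0/24"]))   # falls back to the longer path via peer-B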


You would have to do something special to stop BGP from rerouting --
for example, falsely advertising a route that doesn't work while making
it appear closer than the alternatives. You could also, theoretically,
block the remaining peers from advertising routes to that network, but
again, that would be a massive net neutrality violation: they would be
actively blocking a pathway, not simply declining to peer. Essentially,
saying: if I don't want to peer with you, no one else can either.
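
A sketch of that "something special": an import filter that simply
refuses any path toward the target network, leaving no route at all.
Hypothetical names; real routers would express this with prefix-lists
or route-maps.

# Toy import policy: actively suppressing every path to a blocked prefix,
# so no fallback route is ever learned.

BLOCKED = {"192.0.2.0/24"}

def accept_advertisement(table, prefix, advert):
    """Discard any advertisement toward a blocked prefix; store the rest."""
    if prefix in BLOCKED:
        return                              # actively blocking the pathway
    table.setdefault(prefix, []).append(advert)

table = {}
accept_advertisement(table, "192.0.2.0/24",
                     {"next_hop": "peer-B", "as_path": [64501, 64510]})
print(table.get("192.0.2.0/24"))            # None -- no fallback path exists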


There's more going on here...

Kevin McArthur



George Ou wrote:

There's no violation of any RFCs here; it's a peering dispute, which is
quite common on the Internet.

It's a long-running myth that routes are automatically rerouted on the
Internet.  Unless one of the two end-points is dual-homed with two
completely separate ISPs configured for BGP (or DNS remapping), any break
in the route means a disconnection between the two points.  Even when BGP
does exist, it takes some time for the routes to propagate, so there's
always some outage for a period of time when there's a break in the link.


George

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of
Barry Gold
Sent: Friday, October 31, 2008 1:38 PM
To: NNSquad
Subject: [ NNSquad ] Re: Peering dispute cuts off Sprint-Cogent Internet
traffic

From: Ed Jankiewicz [EMAIL PROTECTED]
Subject: Total Filtering

  
As many news organizations are now reporting, Sprint-Nextel (Embarq) has 
decided to sever its Internet connection with Cogent, another Internet 
service provider.  This action has caused a hole or rip in the 
internet, meaning that Sprint-Nextel (Embarq) and Cogent customers may 
find they cannot access resources hosted by the other company's 
customers. Similar standoffs have occurred in the past, and usually one 
company backs down after a few days, but no one can predict what will 
happen in this case.
  



OK, so what has happened to the "treats censorship as damage and routes
around it" Internet?  Even if Embarq and Cogent are no longer talking to
each other, the routers should be automatically finding routes via other 
carriers and sending the packets -- around Robin Hood's barn if 
necessary, but the Internet is supposed to be _robust_.  Jon Postel 
designed it that way -- I've read the RFCs.  That's what ARPA specified 
when they paid for the development of first the ARPANet and later the 
Internet -- and what NSF paid for when they branched off NSFNet and 
allowed commercial traffic.


Are these guys programming their routers to just drop packets with 
certain destination IP addresses, instead of finding the shortest 
available route?


I'm beginning to think that Congress (or perhaps an international body 
similar to the WTO) should make the core RFCs (IP, TCP, BGP, FTP, HTTP, 
SMTP, and RFC 822) have the force of law.  And anybody who violates 
those protocols should be fined and/or have their connections turned off.


  


--

Kevin McArthur

StormTide Digital Studios Inc.
Author of the recently published book, Pro PHP
http://www.stormtide.ca



[ NNSquad ] Re: NY Times: Verizon offers system to improve P2P transfers

2008-03-14 Thread Kevin McArthur

Verizon does continue to set itself apart.

The statement:

"Pasko stressed, however, that Verizon wants to work with P2P companies
that are focusing on delivery of legitimate media, like Pando -- not
systems where anyone can upload anything, which usually means lots of
pirated material."


does strike me as having the potential to run into neutrality concerns
when the carriers begin picking winners and losers in the P2P technology
competition. As we all know, BitTorrent is open source (and, as a
company, focused on legitimate media) while other solutions are either
closed source or subject to content controls, patents and other
nonsense. I'd hate to see the carriers giving a competitive advantage to
one but not the other just based upon their ownership of the gateway.


As with most things, Verizon management should take note that open
APIs/public specifications would enable P2P software authors to take
advantage of any locational data Verizon wants to publish about its
customers in a neutral fashion -- that said, most torrent clients will
pick faster peers and as such will naturally tend to favor closer
networks already. So ... while there is probably room for some
optimization in transfers with a very large list of potential peers,
more common transfers with only a few dozen peers would likely see no
benefit, as their clients are already capable of finding the fastest
peers with which to participate.
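
A toy sketch of that "naturally tends to favor closer networks" point;
peer names and numbers are made up, and real clients use tit-for-tat
unchoking, of which this is only the flavor:

# A client that simply prefers whoever is delivering the most data will
# usually end up favouring nearby, low-latency peers.

peers = [
    {"addr": "isp-local-peer", "rtt_ms": 12,  "rate_kbps": 900},
    {"addr": "same-country",   "rtt_ms": 45,  "rate_kbps": 600},
    {"addr": "overseas",       "rtt_ms": 190, "rate_kbps": 150},
]

def pick_upload_slots(peers, slots=2):
    """Keep the peers with the best observed transfer rate."""
    return sorted(peers, key=lambda p: p["rate_kbps"], reverse=True)[:slots]

for p in pick_upload_slots(peers):
    print(p["addr"], p["rate_kbps"], "kbps")   # the nearby peers win out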


Anyway, it's certainly a razor's edge they're walking,

Kevin McArthur


Frank A. Coluccio wrote:

Verizon continues to stand out from the pack in a growing number of new and
interesting ways. Whether this reaching out to P2P users is legit, or involves
hidden gotchas, remains to be seen. Although, if the abundance of bandwidth
afforded by its FiOS offering is any indication of its philosophical bent,
then I'd be inclined to think that it's for real, especially when one
considers the efficiencies that Verizon itself stands to gain from this
approach. The one potential cause for citing a gotcha that I can see, thus far
-- and I've not looked at this in any depth, but this could play out to be
large -- might involve how Verizon goes about influencing Layer 3 routing,
since, in this case, we see a last-mile network operator having a say,
albeit in collaboration with a P2P vendor, in where traffic is, and is not,
being directed to and from. Thoughts on this or any other facet of this
release? Anyone?

--- [EMAIL PROTECTED] wrote:

From: Lauren Weinstein [EMAIL PROTECTED]
To: nnsquad@nnsquad.org
Cc: [EMAIL PROTECTED]
Subject: [ NNSquad ] NY Times: Verizon offers system to improve P2P transfers
Date: Thu, 13 Mar 2008 22:14:25 -0700 (PDT)


NY Times: Verizon offers system to improve P2P transfers

http://www.nytimes.com/aponline/us/AP-P2P-Verizon.html

--Lauren--
NNSquad Moderator


  


[ NNSquad ] Re: INTELLIGENT network management? (far from IP)

2008-03-02 Thread Kevin McArthur

Fred,

I have to take exception to your suggestion that QoS is definitely
required for proper VoIP operation. Most VoIP today operates without
any specific QoS support -- even ISPs that offer this 'thinly veiled
VoIP tax' have carried VoIP successfully without traffic management for
years. I have worked as a VoIP software architect and can state
unequivocally that QoS is not required for VoIP operation -- in fact,
it's not even close to the top reliability concern, which is actually
'traffic management' of inbound ports.


VoIP doesn't need QoS, and this is just another mechanism ISPs are using 
to leverage their own 'digital phone' offerings at the expense of the 
free market.


For more details, refer to Vonage Canada's filing to the CRTC regarding
Shaw Cable's $10/mo VoIP QoS service. Maybe we should let the VoIP
companies, not the incumbent competitors, tell us what type of traffic
management is required. From my perspective, they already have, and
they are against these types of anti-competitive services.


From my perspective, QoS is totally unnecessary on public links, and
ample business models exist as alternatives to the carriers' plans of
radical over-subscription.
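
For a sense of what over-subscription means in practice, a
back-of-the-envelope calculation with hypothetical numbers (not any
particular carrier's figures):

# Over-subscription arithmetic: what each subscriber actually gets
# when everyone on the shared uplink is active at once.

subscribers = 500
sold_mbps   = 10     # advertised per-subscriber rate
uplink_mbps = 100    # shared upstream capacity for those subscribers

ratio = subscribers * sold_mbps / uplink_mbps
print(f"over-subscription ratio: {ratio:.0f}:1")                            # 50:1
print(f"worst-case share per subscriber: {uplink_mbps / subscribers:.2f} Mbps")  # 0.20 Mbps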


$0.02

Kevin McArthur



Fred Reimer wrote:

Hmm.  I have to agree with Brett on most of his comments.  QoS is definitely
part of the IETF RFCs.  And QoS is definitely required for VoIP, in any
network, for it to work properly.  The problem is that there is no common
global, or for that matter national, agreement as to how classifications and
markings are done.  Without that there would be little reason for the
various network owners to trust each other.  There may be one-off agreements
between two ISPs, or an ISP and a backbone carrier.  However, unless there
is a national/global standard, we would never get to the point where
end-users can mark their own traffic as they see fit and have those
markings honored throughout the Internet, as long as they complied with their
agreement with their ISP.

I disagree when it comes to the intelligence of the network, and whether
network owners should be able to make policy as to what types of content are
appropriate just because the routers and other network infrastructure
devices have intelligence.  The Internet is an end-to-end network, not a
client-server network.

Fred Reimer


-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Brett
Glass
Sent: Friday, February 29, 2008 3:04 PM
To: Bob Frankston; nnsquad@nnsquad.org
Subject: [ NNSquad ] Re: INTELLIGENT network management? (far from IP)

At 12:28 PM 2/29/2008, Bob Frankston wrote:
 
  

If you require QoS for VoIP then you have the PSTN not the Internet.


Period.

QoS was (and is) part of the original design of the Internet. Note the
Type of Service fields in both IPv4 and IPv6, as well as the push
bit. (Interestingly, there's no shove bit. Don't know why. ;-) )
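
For what it's worth, an endpoint can set those bits itself today; a
minimal sketch using Python's standard socket module (the address, port
and payload are placeholders, and 0xB8 is the DSCP Expedited Forwarding
codepoint shifted into the TOS byte -- the marking commonly used for
voice, on platforms that expose IP_TOS):

# Marking a UDP socket's packets via the IPv4 TOS byte.  Whether any
# network along the way honours the marking is another matter entirely.

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, 0xB8)   # DSCP EF (46) << 2
sock.sendto(b"rtp-ish payload", ("192.0.2.10", 5004))     # placeholder address/port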

  

VoIP cannot rely on QoS because you don't have enough control over the
network. 



I do have control over my local network and over my interface to my
backbone provider. And when I prioritize VoIP on those, it helps
tremendously.

  

And VoIP does not rely on QoS -- I verified this with Tom Evslin
who supplied much of the backbone for VoIP.



VoIP becomes nearly unusable in times of heavy loads without QoS. In
fact, it becomes unusable when someone on the same node runs
unthrottled BitTorrent. That's why we prioritize and do P2P mitigation.

  
Let's not base policies on misconceptions. 



I agree. Hopefully the above will clear up some of those misconceptions.

  

Yes you can build your own
intelligent network but let's not confuse it with the Internet.



The Internet was never meant to be unintelligent. By design, it relies upon 
routers which have tremendous computing power and very large amounts of 
memory. (And it must. It can't scale or have redundancy without these

things.)
What's more, every end node is ITSELF a router and devotes intelligence to
that. It is unreasonable to attempt to exile technology, innovation, or
intelligence from any part of the Internet.

  
If VoIP fails it fails. 



You may be able to say that, but we can't. We lose the customer if he or
she can't do VoIP.

  

And if you require real-time HD streaming, that may fail too. So what?



I believe that it was you who, in a previous message, were voicing
discontent with the performance of HD streaming on your FiOS connection.
We can't support HD streaming on the typical residential connection, but
we DO want to support it if the customer is buying sufficient bandwidth.
If we don't, again, we're out of business. Or someone goes to the FCC
and complains that we're not supporting that medium and must be
regulated

--Brett Glass

  


[ NNSquad ] Re: Global ISP content filtering information (from IP)

2008-02-28 Thread Kevin McArthur

Lauren,

In addition to what is at that location, I would point out that the
organization associated with our (Canadian) firewall just got a massive
federal investment.


You can find the details at
http://www.publicsafety.gc.ca/media/bk/2005/bg20050124-eng.aspx . It has
also been bragged about repeatedly in the House of Commons, and mentions
can be found in Hansard.


$3.5 million over five years was committed to fund *Cybertip.ca*,
Canada's national tipline for reporting the sexual exploitation of
children on the Internet. This initiative fulfills the Government of
Canada's commitment, made in the February 2004 Speech from the Throne, to
do more to ensure the safety of our children. The Strategy is supported
by the reinstatement of child protection legislation, Bill C-2, on
October 8, 2004, which will help reduce the risk of sexual
exploitation of children. Public Safety and Emergency Preparedness
Canada is leading efforts to implement the Strategy.


I still find it questionable whether or not this monitoring system is
consistent with our privacy and telecommunications acts, as I am not
aware of any review by the Privacy Commissioner or specific
authorization by the CRTC (an arm's-length, independent regulator),
which is arguably required for any system that interferes with traffic
under s. 36 of the act.


To my knowledge, the only time the CRTC was asked to authorize the
blocking of these types of sites, it declined to permit their voluntary
blocking. Some details at
http://www.thestar.com/comment/columnists/article/95518


As for the technical details, these systems appear to work by using
something similar to a DNSBL: if the subscriber tries to access an IP
on the list, their traffic is further scrutinized for blocking. They
claim to have the ability to block at the URL level, but that not all
URLs are monitored. Details of exactly how these systems work, or what
exactly they block, are nearly impossible to scrutinize, and this lack
of transparency and oversight has led to a lot of criticism of the
firewall. In recent months, they have taken a number of steps to
address these criticisms, including better integration with law
enforcement and the judiciary.
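
A crude sketch of that two-stage behaviour as I understand it from
public descriptions (placeholder IPs and URLs, not real list data):

# Stage 1: only traffic to IPs on the list is inspected further.
# Stage 2: only listed URLs on those IPs are actually blocked.

SUSPECT_IPS  = {"198.51.100.7"}
BLOCKED_URLS = {"http://198.51.100.7/blocked-page"}

def handle_request(dest_ip, url):
    if dest_ip not in SUSPECT_IPS:
        return "pass"          # most traffic is never looked at
    if url in BLOCKED_URLS:
        return "block"         # listed URL on a listed IP
    return "pass"              # unlisted URLs on a listed IP still go through

print(handle_request("203.0.113.9", "http://example.com/"))                 # pass
print(handle_request("198.51.100.7", "http://198.51.100.7/blocked-page"))   # block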


Hope that helps add to the background,

Kevin McArthur


Lauren Weinstein wrote:

--- Forwarded Message


From: Seth Finkelstein [EMAIL PROTECTED]
Sent: Wednesday, February 27, 2008 4:39 PM
To: David Farber; ip
Subject: libertus.net - ISP Censorware Voluntary / Mandatory

ISP Voluntary / Mandatory Filtering
http://libertus.net/censor/ispfiltering-gl.html

This page contains information about ISP-level filtering systems
implemented, by various ISPs in various countries, to prevent
accidental access to child sexual abuse material on web pages/sites.
It has been researched and produced in the context of the Australian
Federal Labor Government's 2008 plan to mandate that Australian ISPs
block access to a vastly larger type and quantity of web pages.

- --
Seth Finkelstein  Consulting Programmer  http://sethf.com
Infothought blog - http://sethf.com/infothought/blog/
Interview: http://sethf.com/essays/major/greplaw-interview.php

  


[ NNSquad ] Re: Speculation, how ATT can implement copyright filtering without wiretapping/dpi...

2008-01-28 Thread Kevin McArthur
I'm going to wind down on this thread, but what would happen to videos
like 'Telus Idol', which are very controversial in that the copyright is
not crystal clear and the dealing likely fair (news reporting)?


See: http://www.michaelgeist.ca/content/view/1999/125/ for the back story.

Would the ISPs have the ability to censor negative media about
themselves, as has happened with the DMCA? When we mix copyright and net
neutrality, we inevitably end up with censorship -- whether it's blocking
access to workers-for-change or blocking the viewing of Telus Idol.


Any analysis of these types of systems has to presume the worst
intentions by the carriers and media companies involved. We have a track
record here, and it can't be ignored. It's unreasonable that the only
thing a carrier need do to suppress bad PR is kick it off the web and
tell the poster to 'sue us' to get it back online. The value of
censoring, or delaying the release of, the content drastically outweighs
any potential damages and costs associated with a lawsuit.


Any censorship system, no matter how well meaning, will be abused.

K



lynn wrote:

substitute spam/mers. imo that is a worse problem. would you have every
web developer code in such a manner that spammers can't get thru the forms
and use your mail server? spam is illegal. would you ban exchange? which
has legal uses but is used very often by spammers? would any of this be an
issue if it was a lone (read not wealthy) copyright holder complaining? is
ap, reuters, cnn, etc complaining because people copy their pages (and
give credit)? is most of youtube.com about to be prosecuted? or google?
why is this discussion only about p2p?

  

As I understand it, this list was formed in reaction to Comcast being
caught red-handed ... engaging in responsible network management. If
it's meant to be a piracy rights forum, I was misled.

It's important, I think, for us to distinguish legitimate and
illegitimate forms of traffic control, as well as to identify the
innocent victims of over-zealous enforcement of copyrights and all that.

Large-scale piracy is a problem that cries out for a technical solution.
The problem is too blatant to ignore and we all bear the costs of it. If
half of residential broadband's capacity is devoted to stolen material,
cleaning up these networks makes more available to the rest of us at
lower cost. It can only help, as long as it's done right.

The EFF argued with me at NN2008 that pirates would resort to crypto and
all that to avoid detection, but that bird doesn't fly. In order to
collude with someone you don't know to pirate MS Office, you need a
rendezvous system of some kind. If that system is heavily cloaked to
avoid detection, it will be ineffective. The movement of piracy toward
cloaked systems actually serves the aims of the content owners even
better than immediate blocking or post-hoc prosecution. They want this
sort of thing not to happen at all, naturally, but are willing to accept
that a certain amount is unavoidable.

The level of piracy we have today with Mininova, The Pirate Bay and
their kin is so blatant we can't really expect the content owners to do
nothing about it.

RB

Edward Almasy wrote:


On Jan 28, 2008, at 4:32 AM, Richard Bennett wrote:
  

There is a risk of unfair shut-offs, but it's very, very small and
can be dealt with after the fact in some reasonable way.


I would suggest that the very existence of NNSquad belies this
argument.  It's likely that few if any on this list are spammers,
however most here have been directly affected in one fashion or
another by anti-spammer measures, and I would suspect many of us are
here in part because of the prospect of similar unfair measures being
introduced.

Ed



  



  


[ NNSquad ] Re: Richard Bennett on Comcast and Fairness (from IP)

2008-01-16 Thread Kevin McArthur

I'll respond to the comments on my reply.

I agree, Kevin, that as a matter of principle it's not the network's 
job to determine the value of bits, but I disagree that all bits are 
therefore of equal value. We all know that some information is more 
valuable to us personally than other information, and we're quite good 
at sorting it all out. I propose that we communicate our own 
determination to the network, and require it to convey bits (packets, 
really) at the priorities we've specified. This is what we do in WiFi 
networks with WME enabled, maintain separate priority queues for four 
types of data, and it works quite well, and with no Telco in the picture. 


You're mixing personal QoS, which can occur at a residential router,
with network QoS. We have personal QoS technology; it works; anyone can
go down to a retailer and get a QoS router. Where your logic breaks
down is where your QoS preferences as a network user interfere with
mine.
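
To be concrete about what I mean by personal QoS at the residential
router, a toy sketch (illustrative traffic classes only, nothing
vendor-specific):

# The subscriber decides locally which of *their own* packets leave
# first, without the ISP being involved at all.

import heapq

PRIORITY = {"voip": 0, "web": 1, "bulk": 2}   # lower number = sent first

class HomeRouterQueue:
    def __init__(self):
        self._q, self._seq = [], 0
    def enqueue(self, traffic_class, packet):
        heapq.heappush(self._q, (PRIORITY[traffic_class], self._seq, packet))
        self._seq += 1                         # FIFO within the same class
    def dequeue(self):
        return heapq.heappop(self._q)[2] if self._q else None

q = HomeRouterQueue()
q.enqueue("bulk", "torrent chunk")
q.enqueue("voip", "RTP frame")
print(q.dequeue())   # the RTP frame jumps ahead -- but only on my own uplink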


My priorities are not my neighbors' priorities, and they're certainly
not my ISP's. If I communicate that my BitTorrent download is priority,
can we expect that an ISP will just accept that? If not, will they want
to bill for that 'priority', and will that not lead to the type of
competition differentials we're currently seeing for VoIP products in
Canada -- ISPs charging 'thinly veiled VoIP tax[es]' so that competing
services continue to work?


Some Questions:

Do you propose that we create a gradient of bandwidth pricing?
Would top priority bandwidth cost more?
Isn't this the two-tier scenario, and highly prejudicial to the poor?
Wouldn't this discourage the development of new media services that 
require both high bandwidth and low latency?
Wouldn't this give a distinct competitive advantage to the ISP over 
third-party competitors?
Doesn't this work as a disincentive to create faster networks, since if
the normal pipe is purposefully broken or neglected, then all users will
be forced onto the priority pipe and therefore generate more revenue for
carriers?


Is it not just cheaper, easier and more socially fair that the carriers 
be required to build their network's capacity in ratio to overall usage 
so that all applications and participants get the best possible service?


The business of packet priority is not a technical one, it is instead a 
social question of considerable consequence.


Kevin McArthur



Richard Bennett wrote:
A few responses to some of the remarks on my article posted on 
NNSquad, for the mutual benefit and what-not.


Kevin McArthur wrote:
It is not the purpose of a network to determine the value of bits, 
nor is it right to treat any bit as better than another. A text 
message might be really important to someone else, but my ability to 
watch a streaming news report is really important to me. Which one 
will the carrier prioritize? This isn't a determination they can 
make, nor is it one where the value of the transmission can be 
determined by the number or amount of bits traveling.
I agree, Kevin, that as a matter of principle it's not the network's 
job to determine the value of bits, but I disagree that all bits are 
therefore of equal value. We all know that some information is more 
valuable to us personally than other information, and we're quite good 
at sorting it all out. I propose that we communicate our own 
determination to the network, and require it to convey bits (packets, 
really) at the priorities we've specified. This is what we do in WiFi 
networks with WME enabled, maintain separate priority queues for four 
types of data, and it works quite well, and with no Telco in the picture.


Barry Gold wrote:
But even if the excessive user _were_ blocking the line to 
the...buffet (presumably by filling the local loop up with his 
packets), dropping packets is a useful solution.  The ISP can (or 
should be able to) program the cable modem to drop the packets before 
they ever get on the local loop -- right there in the user's 
house/apartment/business.  Or if the user owns the modem, the ISP can 
put a minimal router with usage control at the point where the wire 
emerges from the user's building, or where it connects to the main 
cable at the utility pole or underground system.
As others have pointed out, the DOCSIS cable modem carrier doesn't 
have the ability to instruct the user's modem to drop packets rather 
than attempt to transmit them. Dropping packets also has no immediate 
effect on the load on the local segment caused by BitTorrent 
handshakes. Packet drop reduces the load on a segment caused by an 
ongoing stream of TCP traffic, but it does nothing to reduce load 
caused by SYN responses when the SYNs are coming from outside the 
segment.


Andy Richardson wrote:

They can go in several different directions:
(1)  upgrade their infrastructure to handle the traffic
(2)  lower prices to make up for lower network performance
(3)  lose customers until the problem basically fixes itself
(4)  establish tiers

[ NNSquad ] Re: Richard Bennett on Comcast and Fairness (from IP)

2008-01-16 Thread Kevin McArthur

Richard, another Trotskyite argument, eh?

All government services, whether libraries, roads, or the internet, must
take social concerns into account before pure market capitalism. What
you're proposing is a system where not only can you buy a nicer car, but
you get to drive faster than everyone else, not stop at signals, and
force others to pull over to make way. In the real world, we reserve
this type of priority for emergency vehicles -- and the internet should
be no different.


I won't bother to point out the technical fallacy in trying to compare
bandwidth and QoS, but clearly people will understand the difference in
a real-world example. When buying groceries, if another checkstand line
is opened, it does not adversely affect those already queued
(bandwidth). However, when you keep the same checkstand open and start
letting people jump the line, you adversely affect those already queued
(QoS).
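
The same point in a toy calculation (one customer served per time step;
arbitrary names):

# Opening a second queue (more bandwidth) helps everyone; letting someone
# jump the line (priority) helps them at the direct expense of those
# already waiting.

def finish_times(queue):
    """Position in line == how many time steps until you're done."""
    return {customer: i + 1 for i, customer in enumerate(queue)}

waiting = ["a", "b", "c", "d"]

print(finish_times(waiting))                      # baseline: a=1 .. d=4

# More bandwidth: split the same people across two checkstands.
lane1, lane2 = waiting[0::2], waiting[1::2]
print(finish_times(lane1), finish_times(lane2))   # nobody waits longer than before

# QoS: a "priority" customer jumps to the front of the single lane.
print(finish_times(["vip"] + waiting))            # a..d each wait one step longer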


You can't compare the two as if they have the same social effect.

Kevin McArthur


Richard Bennett wrote:
Kevin, you're fighting a battle that was settled many years ago. We 
already have bandwidth pricing and tiered services, and it's quite 
well-accepted. I can buy a dial-up connection to the Internet, or 
several flavors of DSL, or several flavors of cable modem service, 
differentiated by bandwidth and a range of uses. And I'm able to buy 
Internet access bundled with TV and telephone if I so desire, all of 
them operating in separate frequency or time bands and providing the 
necessary latencies to make the service work as desired. It's 
certainly true that the rich can afford better networking services 
than can the poor, but that's no different than access to fast cars, 
good health care, safe housing and a host of other things. This is 
capitalism, and after many years of experience we've learned it's 
better to allow disparities between rich and poor than to go the Cuban 
route and make everyone poor just to be perfectly fair.


So no, I'm not proposing a revolution and an end to capitalism. I am 
proposing something a bit more practical, which is a scheme for 
providing end users a way to tell their ISP which streams need low 
latency, which are normal, and which don't care, so that the ISP can 
provide the best networking experience for the range of customers. The 
architecture of the IP frame permits this, of course, but it hasn't 
been widely used outside private networks to date.


It's just a thought, and not highly germane to the issue of how best 
to whack your ISP around.


RB

Kevin McArthur wrote:

I'll respond to the comments on my reply.

I agree, Kevin, that as a matter of principle it's not the network's 
job to determine the value of bits, but I disagree that all bits 
are therefore of equal value. We all know that some information is 
more valuable to us personally than other information, and we're 
quite good at sorting it all out. I propose that we communicate our 
own determination to the network, and require it to convey bits 
(packets, really) at the priorities we've specified. This is what we 
do in WiFi networks with WME enabled, maintain separate priority 
queues for four types of data, and it works quite well, and with no 
Telco in the picture. 


You're mixing personal QoS, which can occur at a residential router, 
with network QoS. We have personal QoS technology; it works; anyone can 
go down to a retailer and get a QoS router. Where your logic breaks down 
is where your QoS preferences as a network user interfere with mine.


My priorities are not my neighbors' priorities, and they're certainly 
not my ISP's. If I communicate that my BitTorrent download is 
priority, can we expect that an ISP will just accept that? If not, 
will they want to bill for that 'priority', and will that not lead to 
the type of competition differentials we're currently seeing for VoIP 
products in Canada -- ISPs charging 'thinly veiled VoIP tax[es]' so 
that competing services continue to work?


Some Questions:

Do you propose that we create a gradient of bandwidth pricing?
Would top priority bandwidth cost more?
Isn't this the two-tier scenario, and highly prejudicial to the poor?
Wouldn't this discourage the development of new media services that 
require both high bandwidth and low latency?
Wouldn't this give a distinct competitive advantage to the ISP over 
third-party competitors?
Doesn't this work as a disincentive to create faster networks, since 
if the normal pipe is purposefully broken or neglected, then all 
users will be forced onto the priority pipe and therefore generate 
more revenue for carriers?


Is it not just cheaper, easier and more socially fair that the 
carriers be required to build their network's capacity in ratio to 
overall usage so that all applications and participants get the best 
possible service?


The business of packet priority is not a technical one, it is instead 
a social question of considerable consequence.


Kevin McArthur



Richard Bennett wrote

[ NNSquad ] Re: Richard Bennett on Comcast and Fairness (from IP)

2008-01-15 Thread Kevin McArthur
Hi Lauren, where is the original source of this reply? I'd love to see 
the full context that the author seems to be talking about.


   [ Presumably the article of interest is:
 http://www.theregister.co.uk/2007/11/06/richard_bennett_comcastle/
-- Lauren Weinstein
   NNSquad Moderator ]

As for comments, it is interesting that a network engineer cannot see
his inherent bias toward the telecommunications perspective. Net
neutrality certainly isn't a science; rather, it is a socio-political
issue, and one where the policy will probably have to drive the
technology.


The biggest item that I really disagree with is:

But in the final analysis, we all know that some of our bits are more 
important than others, and the network will work better if the layer 3 
and layer 2 parts can communicate that sort of information between each 
other. 


It is not the purpose of a network to determine the value of bits, nor 
is it right to treat any bit as better than another. A text message 
might be really important to someone else, but my ability to watch a 
streaming news report is really important to me. Which one will the 
carrier prioritize? This isn't a determination they can make, nor is it 
one where the value of the transmission can be determined by the number 
or amount of bits traveling.


In essence, it presumes a state of operation where the network is always
overloaded -- a state that is simply not necessary and one that can be
addressed with strong carrier service quality standards and adequate
provisioning. Adding a bunch of equipment to manage scarcity, instead of
just eliminating that scarcity, is a bad allocation of resources. It
might make sense to the carriers, as it will allow them to derive
revenue from artificially created resource scarcity, but it certainly
doesn't help the consumer or the internet industries gain access to more
bandwidth.


I am content to advocate absolute neutrality, let the carriers charge
based upon neutral usage, and have competition in Internet service
resemble how power companies operate -- without any regard to how the
service is used.


Kevin McArthur

Lauren Weinstein wrote:

--- Forwarded Message
From: David Farber [EMAIL PROTECTED]
To: ip [EMAIL PROTECTED]
Date: Mon, 14 Jan 2008 15:21:28 -0800
Subject: [IP] Interesting -- comment from author -- F.C.C. to Look at


 ---

From: Richard Bennett [EMAIL PROTECTED]
Sent: Monday, January 14, 2008 4:23 PM
To: David Farber
Subject: Re: [IP] Re: F.C.C. to Look at Complaints Comcast Interferes With 
Net - New York Times

As the author of the article in question, I'll gladly defend it. The
fundamental point I was trying to make is simply that there's a huge
hole in the architecture of the IETF protocol suite with respect to
fairness. I'm a layer two protocol designer (Ethernet over UTP, WiFi 11n
MSDU aggregation, and UWB DRP are in my portfolio), and in the course of
my career have devoted an embarrassing amount of time to engineering
fairness in network access. Most the younger generation takes it as
given that if you understand TCP/IP you understand networking, but in
fact most of the progress in network architectures over the last 30
years has been at layers 1 and 2. And with the TCP-centric mindset, they
tend to believe that all problems of networking can be solved by the
application of the right RFCs. But in fact we all connect to our ISP
over a layer 2 network, and each of these has its own challenges and
problems.

The carriers are often criticized for not using packet drop to resolve
fairness problems, but that's not really the scope of packet drop, which
is actually a solution to Internet congestion, not to the lack of
fairness that may (or may not) be the underlying cause of the
congestion. We need a different solution to fairness at layer 3,
especially on layer 2 networks  like DOCSIS where packet drop closes the
door after the horse has run off.
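
To make that distinction concrete, a toy sketch of how an ordinary TCP
sender responds to drop (illustrative numbers only -- the point is that
only congestion-responsive flows back off, which says nothing about
fairness between flows):

# Additive-increase / multiplicative-decrease: each detected drop makes
# the sender halve its congestion window, so the load on the link backs
# off.  Traffic that isn't a responsive TCP flow doesn't react at all.

cwnd = 1.0
for rtt in range(12):
    drop = (rtt % 5 == 4)          # pretend every fifth round loses a packet
    cwnd = cwnd / 2 if drop else cwnd + 1.0
    print(f"rtt {rtt:2d}  cwnd {cwnd:4.1f}  {'drop' if drop else ''}")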

The buffet analogy needs a little refinement. What the bandwidth hog
does is block the line to the all-you-can-eat buffet so that nobody else
can get any food. That's not a behavior that would be tolerated in a
restaurant, and it shouldn't be tolerated in a residential network.
Unfortunately, it wasn't the huge problem when DOCSIS was designed, so
the 1.0 and 1.1 versions of the technology don't address it, certainly
not as well as Full-Duplex Ethernet, 802.11e WiFi, and DSL do.

Some may argue that the Internet doesn't need a fairness system as it's
mostly a local problem, and I have some sympathy for that point of view.
But in the final analysis, we all know that some of our bits are more
important than others, and the network will work better if the layer 3
and layer 2 parts can communicate that sort of information between each
other.

I don't view this as a moral problem as much as an engineering problem.
Moral philosophy is certainly a fascinating subject (as is video
coding), but it's outside