Re: Comcast blocking p2p uploads

2007-10-21 Thread Joe Greco

 Leo Bicknell wrote:
  I'm a bit confused by your statement. Are you saying it's more
  cost effective for ISP's to carry downloads thousands of miles
  across the US before giving them to the end user than it is to allow
  a local end user to upload them to other local end users?
   
 Not to speak on Joe's behalf, but whether the content comes from 
 elsewhere on the Internet or within the ISP's own network the issue is 
 the same: limitations on the transmission medium between the cable modem 
 and the CMTS/head-end.  The issue that cable companies are having with 
 P2P is that compared to doing a HTTP or FTP fetch of the same content 
 you will use more network resources, particularly in the upstream 
 direction where contention is a much bigger issue.  On DOCSIS 1.x 
 systems like Comcast's plant, there's a limitation of roughly 10 Mbps of 
 capacity per upstream channel.  You get enough 384-768 kbps connected 
 users all running P2P apps and you're going to start having problems in 
 a big hurry.  It's to remove some of the strain on the upstream channels 
 that Comcast has started to deploy Sandvine to start closing *outbound* 
 connections from P2P apps.

That's part of it, certainly.  The other problem is that I really doubt
that there's as much favoritism towards local clients as Leo seems to
believe.  Without that, you're also looking at a transport issue as you
shove packets around.  Probably in ways that the network designers did
not anticipate.

Years ago, dealing with web caching services, there was found to be a
benefit, a limited benefit, to setting up caching proxies within a major
regional ISP's network.  The theoretical benefit was to reduce the need 
for internal backbone and external transit connectivity, while improving
user experience.

The interesting thing is that it wasn't really practical to cache on a
per-POP basis, so it was necessary to pick cache locations at strategic
locations within the network.  This meant you wouldn't expect to see a
bandwidth savings on the internal backbone from the POP to the
aggregation point.

The next interesting point is that you could actually improve the cache
hit rate by combining the caches at each aggregation point; the larger
userbase meant that any given bit of content out on the Internet was
more likely to be in cache.  However, this had the ability to stress the
network in unexpected ways, as significant cache-site to cache-site data 
flows were happening in ways that network engineering hadn't always 
anticipated.

A third interesting thing was noted.  The Internet grows very fast. 
While there's always someone visiting www.cnn.com, as the number of other
sites grew, there was a slow reduction in the overall cache hit rate over
the years as users tended towards more diverse web sites.  This is the
result of the ever-growing quantity of information out there on the
Internet.

This doesn't map exactly to the current model with P2P, yet I suspect it
has a number of loose parallels.

Now, I have to believe that it's possible that a few BitTorrent users in
the same city will download the same Linux ISO.  For that ISO, and for
any other spectacularly popular download, yes, I would imagine that there
is some minor savings in bandwidth.  However, with 10M down and 384K up,
even if you have 10 other users in the city who are all sending at full
384K to someone new, that's not full line speed, so the client will still
try to pull additional capacity from elsewhere to get that full 10M speed.
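
For reference, the arithmetic works out roughly as follows (a trivial Python
sketch; the figures are the 10M/384K numbers above, and the variable names
are purely illustrative):

    # Trivial sketch of the arithmetic above: 10 nearby peers, each capped at
    # 384 kbps upstream, cannot fill a 10 Mbps downstream, so the client keeps
    # pulling from more distant peers anyway.  All names/numbers illustrative.

    DOWNSTREAM_MBPS = 10.0      # subscriber's downstream rate
    LOCAL_PEERS = 10            # nearby peers seeding the same content
    PEER_UPSTREAM_MBPS = 0.384  # each local peer's upstream cap

    local_supply = LOCAL_PEERS * PEER_UPSTREAM_MBPS   # 3.84 Mbps
    shortfall = DOWNSTREAM_MBPS - local_supply        # 6.16 Mbps

    print("local peers can supply at most %.2f Mbps" % local_supply)
    print("client still wants ~%.2f Mbps from elsewhere" % shortfall)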

I've always seen P2P protocols as behaving in an opportunistic manner.
They're looking for who has some free upload capacity and the desired
object.  I'm positive that a P2P application can tell that a user in
New York is closer to me (in Milwaukee) than a user in China, but I'd
quite frankly be shocked if it could do a reasonable job of
differentiating between a user in Chicago, Waukesha (few miles away),
or Milwaukee.

In the end, it may actually be easier for an ISP to deal with the
deterministic behaviour of having data from me go to the local 
upstream transit pipe than it is for my data to be sourced from a
bunch of other random nearby on-net sources.

I certainly think that P2P could be a PITA for network engineering.
I simultaneously think that P2P is a fantastic technology from a showing-
off-the-idea-behind-the-Internet viewpoint, and that in the end, the 
Internet will need to be able to handle more applications like this, as 
we see things like videophones etc. pop up.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: Comcast blocking p2p uploads

2007-10-21 Thread Adrian Chadd

On Sun, Oct 21, 2007, Joe Greco wrote:

 A third interesting thing was noted.  The Internet grows very fast. 
 While there's always someone visiting www.cnn.com, as the number of other
 sites grew, there was a slow reduction in the overall cache hit rate over
 the years as users tended towards more diverse web sites.  This is the
 result of the ever-growing quantity of information out there on the
 Internet.

Then the content became very large and very static; and site owners
try very hard to maximise their data flows rather than making it easier
for people to cache it locally.

Might work in America and Europe. Developing nations hate it.

 I certainly think that P2P could be a PITA for network engineering.
 I simultaneously think that P2P is a fantastic technology from a showing-
 off-the-idea-behind-the-Internet viewpoint, and that in the end, the 
 Internet will need to be able to handle more applications like this, as 
 we see things like videophones etc. pop up.

P2P doesn't have to be a pain in the ass for network engineers. It just
means you have to re-think how you deliver data to your customers.
QoS was a similar headache and people adapted..

(QoS on cable networks? Not possible! Anyone remember that?)



Adrian



Can P2P applications learn to play fair on networks?

2007-10-21 Thread Sean Donelan


Much of the same content is available through NNTP, HTTP and P2P. 
The content part gets a lot of attention and outrage, but network 
engineers seem to be responding to something else.


If it's not the content, why are network engineers at many university 
networks, enterprise networks, and public networks concerned about the impact 
particular P2P protocols have on network operations?  If it was just a 
single network, maybe they are evil.  But when many different networks 
all start responding, then maybe something else is the problem.

The traditional assumption is that all end hosts and applications 
cooperate and fairly share network resources.  NNTP is usually considered 
a very well-behaved network protocol.  Big bandwidth, but sharing network 
resources.  HTTP is a little less well behaved, but still roughly seems to 
share network resources equally with other users.  P2P applications seem 
to be extremely disruptive to other users of shared networks, and cause 
problems for other polite network applications.

While it may seem trivial from an academic perspective to do some things,
for network engineers the tools are much more limited.

User/programmer/etc. education doesn't seem to work well. Unless the 
network enforces a behavior, the rules are often ignored. End users 
generally can't change how their applications work today even if they 
wanted to.


Putting something in-line across a national/international backbone is 
extremely difficult.  Besides, network engineers don't like additional 
in-line devices, no matter how much the sales people claim they're fail-safe.

Sampling is easier than monitoring a full network feed.  Using netflow 
sampling or even SPAN port sampling is good enough to detect major 
issues.  For the same reason, asymmetric sampling is easier than requiring 
symmetric (or synchronized) sampling.  But it also means there will be 
a limit on the information available to make good and bad decisions.
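
As a very rough sketch of what sampled data gives you (Python; the 1:1000
sampling rate and the record layout here are assumptions, not anyone's
actual export configuration):

    # Minimal sketch of working from sampled NetFlow: scale the sampled byte
    # counts back up by the sampling rate.  Heavy hitters stand out; small
    # flows may never be sampled at all, which is the information limit
    # mentioned above.

    SAMPLING_RATE = 1000  # assumed 1-in-1000 packet sampling

    def estimate_totals(sampled_flows):
        """sampled_flows: iterable of (src_ip, sampled_bytes) tuples."""
        totals = {}
        for src_ip, sampled_bytes in sampled_flows:
            totals[src_ip] = totals.get(src_ip, 0) + sampled_bytes * SAMPLING_RATE
        return totals

    flows = [("10.0.0.5", 1500), ("10.0.0.5", 3000), ("10.0.0.9", 40)]
    print(estimate_totals(flows))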

Out-of-band detection limits what controls network engineers can implement 
on the traffic. USENET has a long history of generating third-party cancel 
messages. IPS systems and even passive taps have long used third-party
packets to respond to traffic. DNS servers have been used to re-direct 
subscribers to walled gardens. If applications responded to ICMP Source 
Quench or other administrative network messages that might be better; but 
they don't.




Re: Can P2P applications learn to play fair on networks?

2007-10-21 Thread Florian Weimer

* Sean Donelan:

 If its not the content, why are network engineers at many university
 networks, enterprise networks, public networks concerned about the
 impact particular P2P protocols have on network operations?  If it was
 just a single network, maybe they are evil.  But when many different
 networks all start responding, then maybe something else is the
 problem.

Uhm, what about civil liability?  It's not necessarily a technical issue
that motivates them, I think.

 The traditional assumption is that all end hosts and applications
 cooperate and fairly share network resources.  NNTP is usually
 considered a very well-behaved network protocol.  Big bandwidth, but
 sharing network resources.  HTTP is a little less behaved, but still
 roughly seems to share network resources equally with other users. P2P
 applications seem to be extremely disruptive to other users of shared
 networks, and causes problems for other polite network applications.

So is Sun RPC.  I don't think the original implementation performs
exponential back-off.

If there is a technical reason, it's mostly that the network as deployed
is not sufficient to meet user demands.  Instead of providing more
resources, lack of funds may force some operators to discriminate
against certain traffic classes.  In such a scenario, it doesn't even
matter much that the targeted traffic class transports content of
questionable legality.  It's more important that the measures applied
to it have actual impact (Amdahl's law dictates that you target popular
traffic), and that you can get away with it (this is where the legality
comes into play).


Re: Can P2P applications learn to play fair on networks?

2007-10-21 Thread Sean Donelan


On Sun, 21 Oct 2007, Florian Weimer wrote:

If its not the content, why are network engineers at many university
networks, enterprise networks, public networks concerned about the
impact particular P2P protocols have on network operations?  If it was
just a single network, maybe they are evil.  But when many different
networks all start responding, then maybe something else is the
problem.


Uhm, what about civil liability?  It's not necessarily a technical issue
that motivates them, I think.


If it was civil liability, why are they responding to the protocol being
used instead of the content?


So is Sun RPC.  I don't think the original implementation performs
exponential back-off.


If lots of people were still using Sun RPC, causing other subscribers to 
complain, then I suspect you would see similar attempts to throttle it.




If there is a technical reason, it's mostly that the network as deployed
is not sufficient to meet user demands.  Instead of providing more
resources, lack of funds may force some operators to discriminate
against certain traffic classes.  In such a scenario, it doesn't even
matter much that the targeted traffic class transports content of
questionable legality.  It's more important that the measures applied
to it have actual impact (Amdahl's law dictates that you target popular
traffic), and that you can get away with it (this is where the legality
comes into play).


Sandvine, Packeteer, etc. boxes aren't cheap either.  The problem is that 
giving P2P more resources just means P2P consumes more resources; it doesn't 
solve the problem of sharing those resources with other users. Only if 
P2P shared network resources well with other applications would increasing 
network resources make more sense.


Re: Can P2P applications learn to play fair on networks?

2007-10-21 Thread Mikael Abrahamsson


On Sun, 21 Oct 2007, Sean Donelan wrote:

Sandvine, packeteer, etc boxes aren't cheap either.  The problem is 
giving P2P more resources just means P2P consumes more resources, it 
doesn't solve the problem of sharing those resources with other users. 
Only if P2P shared network resources with other applications well does 
increasing network resources make more sense.


If your network cannot handle the traffic, don't offer the services.

It all boils down to the fact that the only things end users really 
have to give us as ISPs are their source address (which we usually assign 
to them) and the destination address of the packet they want transported, and 
we can implicitly look at the size of the packet and get that information. 
That's the ONLY thing they have to give us. Forget looking at L4 or the like; 
that will be encrypted as soon as ISPs start to discriminate on it. Users 
have enough computing power available to encrypt everything.


So any device that looks inside packets to decide what to do with them is 
going to fail in the long run and is thus a stop-gap measure before you 
can figure out anything better.


The next step for these devices is to start doing statistical analysis of 
traffic to find patterns, such as you're sending traffic to hundreds of 
different IPs simultaneously, so you must be filesharing or the like. A lot 
of the box manufacturers are already looking into this. So, trench warfare 
again; I can see countermeasures to this as well.
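
A minimal sketch of that kind of fanout heuristic might look like this
(Python; the threshold and the record format are invented for illustration,
not taken from any vendor's box):

    # Sketch of a fanout heuristic: flag a source that talks to "too many"
    # distinct peers in one time window.  Real boxes use far more elaborate
    # statistics; the numbers here are arbitrary.

    from collections import defaultdict

    FANOUT_THRESHOLD = 100  # distinct destination IPs per window (arbitrary)

    def suspected_p2p(flow_records):
        """flow_records: iterable of (src_ip, dst_ip) pairs from one window."""
        fanout = defaultdict(set)
        for src, dst in flow_records:
            fanout[src].add(dst)
        return [src for src, dsts in fanout.items() if len(dsts) >= FANOUT_THRESHOLD]

    # Note: no payload inspection is involved, which is why encrypting L4/L7
    # doesn't defeat it -- a countermeasure has to change the traffic pattern.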


The long-term solution is of course to make sure that you can handle the 
traffic that the customer wants to send (because that's what they can 
control), perhaps by charging for it by some scheme that involves not 
offering a flat fee.


Saying p2p doesn't play nice with the rest of the network, and blaming 
p2p, only means you're congesting due to insufficient resources and the 
fact that p2p uses a lot of simultaneous TCP sessions: individually 
they're playing nice, but together they're not when compared to web 
surfing.


The solution is not to try to change p2p, the solution is to fix 
the network or the business model so your network is not congesting.


--
Mikael Abrahamsson    email: [EMAIL PROTECTED]


Re: Can P2P applications learn to play fair on networks?

2007-10-21 Thread Sean Donelan


On Sun, 21 Oct 2007, Mikael Abrahamsson wrote:

If your network cannot handle the traffic, don't offer the services.


So your recommendation is that universities, enterprises and ISPs simply 
stop offering all Internet service because a few particular application 
protocols are badly behaved?

A better idea might be for the application protocol designers to improve 
those particular applications.  In the mean time, universities, 
enterprises and ISPs have a lot of other users to serve.





BitTorrent swarms have a deadly bite on broadband nets

2007-10-21 Thread Sean Donelan



http://www.multichannel.com/article/CA6332098.html

  The short answer: Badly. Based on the research, conducted by Terry Shaw,
  of CableLabs, and Jim Martin, a computer science professor at Clemson
  University, it only takes about 10 BitTorrent users bartering files on a
  node (of around 500) to double the delays experienced by everybody else.
  Especially if everybody else is using normal priority services, like
  e-mail or Web surfing, which is what tech people tend to call
  best-effort traffic.

Adding more network bandwidth doesn't improve the network experience of 
other network users, it just increases the consumption by P2P users. 
That's why you are seeing many universities and enterprises spending 
money on traffic shaping equipment instead of more network bandwidth.
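
As a toy illustration of why adding capacity alone doesn't help when the
load expands to fill it (a simple M/M/1 delay formula, not the CableLabs
model; all numbers here are invented):

    # Toy queueing illustration: M/M/1 delay W = 1/(C - L).  With a fixed
    # load, extra capacity slashes delay; with greedy traffic that expands to
    # ~95% of whatever capacity exists, the improvement is only proportional
    # and the congestion never really goes away.

    def mm1_delay_ms(capacity_pps, load_pps):
        return 1000.0 / (capacity_pps - load_pps)

    base_load = 950.0
    for capacity in (1000.0, 2000.0, 4000.0):
        fixed = mm1_delay_ms(capacity, base_load)          # well-behaved load
        greedy = mm1_delay_ms(capacity, 0.95 * capacity)   # load grows with capacity
        print("C=%6.0f pps  fixed: %6.2f ms  greedy: %6.2f ms"
              % (capacity, fixed, greedy))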




Re: Can P2P applications learn to play fair on networks?

2007-10-21 Thread Mikael Abrahamsson


On Sun, 21 Oct 2007, Sean Donelan wrote:

So your recommendation is that universities, enterprises and ISPs simply 
stop offering all Internet service because a few particular application 
protocols are badly behaved?


They should stop offering flat-rate ones anyway. Or do general per-user 
rate limiting that is protocol/application agnostic.


There are many ways to solve the problem generally instead of per 
application, that will also work 10 years from now when the next couple of 
killer apps have arrived and passed away again.


A better idea might be for the application protocol designers to improve 
those particular applications.


Good luck with that.

--
Mikael Abrahamsson    email: [EMAIL PROTECTED]


Re: Can P2P applications learn to play fair on networks?

2007-10-21 Thread Sean Donelan


On Sun, 21 Oct 2007, Mikael Abrahamsson wrote:
So your recommendation is that universities, enterprises and ISPs simply 
stop offering all Internet service because a few particular application 
protocols are badly behaved?


They should stop to offer flat-rate ones anyway.


Comcast's management has publicly stated anyone who doesn't like the 
network management controls on its flat rate service can upgrade to 
Comcast's business class service.


Problem solved?

Or would some P2P folks complain about having to pay more money?



Or do general per-user ratelimiting that is protocol/application agnostic.


As I mentioned previously about the issues involving additional in-line 
devices and so on in networks, imposing per user network management and 
billing is a much more complicated task.


If only a few protocol/applications are causing a problem, why do you need 
an overly complex response?  Why not target the few things that are 
causing problems?



A better idea might be for the application protocol designers to improve 
those particular applications.


Good luck with that.


It took a while, but it worked with the UDP audio/video protocol folks who 
used to stress networks.  Eventually those protocol designers learned to 
control their applications and make them play nicely on the network.




Re: Can P2P applications learn to play fair on networks?

2007-10-21 Thread Eric Spaeth


Mikael Abrahamsson wrote:

If your network cannot handle the traffic, don't offer the services.
In network access for the masses, downstream bandwidth has always been 
easier to deliver than upstream.  It's been that way since modem 
manufacturers found they could leverage a single digital/analog 
conversion in the PSTN to deliver 56kbps downstream data rates over 
phone lines.  This is still true today in nearly every residential 
access technology: DSL, Cable, Wireless (mobile 3G / EVDO), and 
Satellite all have asymmetrical upstream/downstream data rates, with 
downstream being favored in some cases by a ratio of 20:1.  Of that 
group, only DSL doesn't have a common upstream bottleneck between the 
subscriber and head-end.   For each of the other broadband technologies, 
the overall user experience will continue to diminish as the number of 
subscribers saturating their upstream network path grows.
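
A back-of-the-envelope sketch of the shared-upstream math, using the
DOCSIS 1.x figures quoted earlier in the thread rather than any particular
operator's real numbers:

    # Rough sketch: ~10 Mbps per DOCSIS 1.x upstream channel, 384 kbps
    # provisioned upstream, ~500 subscribers per node.  All illustrative.

    UPSTREAM_CHANNEL_MBPS = 10.0
    PROVISIONED_UP_MBPS = 0.384
    SUBSCRIBERS_PER_NODE = 500

    saturating_uploaders = int(UPSTREAM_CHANNEL_MBPS // PROVISIONED_UP_MBPS)
    oversubscription = SUBSCRIBERS_PER_NODE * PROVISIONED_UP_MBPS / UPSTREAM_CHANNEL_MBPS

    print("~%d continuous uploaders saturate the channel" % saturating_uploaders)
    print("upstream oversubscription is roughly %.0f:1" % oversubscription)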


Transmission technology issues aside, how do you create enough network 
capacity for a technology that is designed to use every last bit of 
transport capacity available?  P2P more closely resembles denial of 
service traffic patterns than standard Internet traffic.
The long term solution is of course to make sure that you can handle 
the traffic that the customer wants to send (because that's what they 
can control), perhaps by charging for it by some scheme that involves 
not offering flat-fee.
I agree with the differential billing proposal.  There are definitely 
two sides to the coin when it comes to Internet access available to most 
of the US; on one side the open and unrestricted access allows for the 
growth of new ideas and services no matter how unrealistic (ie, unicast 
IP TV for the masses), but on the other side sets up a tragedy of the 
commons situation where there is no incentive _not_ to abuse the 
unlimited network resources.   Even as insanely cheap as web 
hosting has become, people are still electing to use P2P for content 
distribution over $4/mo hosting accounts because it's cheaper; the 
higher network costs of P2P distribution go ignored because the end user 
never sees them.  The problem in converting to a usage-based billing 
system is that there's a huge potential to simultaneously lose both 
market share and public perception of your brand.  I'm sure every 
broadband provider would love to go to a system of usage-based billing, 
but none of them wants to be the first.


-Eric




Re: Can P2P applications learn to play fair on networks?

2007-10-21 Thread Florian Weimer

* Sean Donelan:

 On Sun, 21 Oct 2007, Mikael Abrahamsson wrote:
 If your network cannot handle the traffic, don't offer the services.

 So your recommendation is that universities, enterprises and ISPs
 simply stop offering all Internet service because a few particular
 application protocols are badly behaved?

I think a lot of companies implement OOB controls to curb P2P traffic,
and those controls remain in place even without congestion on the
network.  It's like making sure that nobody uses the company plotter to
print posters.

In my experience, a permanently congested network isn't fun to work
with, even if most of the flows are long-living and TCP-compatible.  The
lack of proper congestion control is kind of a red herring, IMHO.


Re: BitTorrent swarms have a deadly bite on broadband nets

2007-10-21 Thread Marshall Eubanks


Note that this is from 2006. Do you have a link to the actual paper, by
Terry Shaw, of CableLabs, and Jim Martin of Clemson ?

Regards
Marshall

On Oct 21, 2007, at 1:03 PM, Sean Donelan wrote:




http://www.multichannel.com/article/CA6332098.html

  The short answer: Badly. Based on the research, conducted by Terry Shaw,
  of CableLabs, and Jim Martin, a computer science professor at Clemson
  University, it only takes about 10 BitTorrent users bartering files on a
  node (of around 500) to double the delays experienced by everybody else.
  Especially if everybody else is using normal priority services, like
  e-mail or Web surfing, which is what tech people tend to call
  best-effort traffic.

Adding more network bandwidth doesn't improve the network experience of 
other network users, it just increases the consumption by P2P users. 
That's why you are seeing many universities and enterprises spending 
money on traffic shaping equipment instead of more network bandwidth.






Re: Can P2P applications learn to play fair on networks?

2007-10-21 Thread Sean Donelan


On Sun, 21 Oct 2007, Florian Weimer wrote:

In my experience, a permanently congested network isn't fun to work
with, even if most of the flows are long-living and TCP-compatible.  The
lack of proper congestion control is kind of a red herring, IMHO.


Why do you think so many network operators of all types are implementing 
controls on that traffic?


http://www.azureuswiki.com/index.php/Bad_ISPs

It's not just the greedy commercial ISPs; it's also universities, 
non-profits, government, co-op, etc. networks.  It doesn't seem to matter 
if the network has 100Mbps user connections or 128Kbps user connections; 
they all seem to be having problems with these particular applications.




Re: Can P2P applications learn to play fair on networks?

2007-10-21 Thread Joe Greco

 On Sun, 21 Oct 2007, Mikael Abrahamsson wrote:
  So your recommendation is that universities, enterprises and ISPs simply 
  stop offering all Internet service because a few particular application 
  protocols are badly behaved?
 
  They should stop to offer flat-rate ones anyway.
 
 Comcast's management has publically stated anyone who doesn't like the 
 network management controls on its flat rate service can upgrade to 
 Comcast's business class service.
 
 Problem solved?

Assuming a business class service that's reasonably priced and featured?
Absolutely.  I'm not sure I've seen that to be the case, however.  Last
time I checked with a local cable company for T1-like service, they wanted
something like $800/mo, which was about $300-$400/mo more than several of
the CLECs.  However, that was a while ago, and it isn't clear that the
service offerings would be the same.

I don't class cable service as being as reliable as a T1, however.  We've
witnessed that the cable network fails shortly after any regional power
outage here, and it has somewhat regular burps in the service anyways.

I'll note that I can get unlimited business-class DSL (2M/512k ADSL) for
about $60/mo (24m), and that was explicitly spelled out to be unlimited-
use as part of the RFP.

By way of comparison, our local residential RR service is now 8M/512k for 
about $45/mo (as of just a month or two ago).

I think I'd have to conclude that I'd certainly see a premium above and
beyond the cost of a residential plan to be reasonable, but I don't expect
it to be many multiples of the resi service price, given that DSL plans
will promise the bandwidth at just a slightly higher cost.

 Or would some P2P folks complain about having to pay more money?

Of course they will.

  Or do general per-user ratelimiting that is protocol/application agnostic.
 
 As I mentioned previously about the issues involving additional in-line 
 devices and so on in networks, imposing per user network management and 
 billing is a much more complicated task.
 
 If only a few protocol/applications are causing a problem, why do you need 
 an overly complex response?  Why not target the few things that are 
 causing problems?

Well, because when you promise someone an Internet connection, they usually
expect it to work.  Is it reasonable for Comcast to unilaterally decide that
my P2P filesharing of my family photos and video clips is bad?

  A better idea might be for the application protocol designers to improve 
  those particular applications.
 
  Good luck with that.
 
 It took a while, but it worked with the UDP audio/video protocol folks who 
 used to stress networks.  Eventually those protocol designers learned to 
 control their applications and make them play nicely on the network.

:-)

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: Can P2P applications learn to play fair on networks?

2007-10-21 Thread Sean Donelan


On Sun, 21 Oct 2007, Joe Greco wrote:

If only a few protocol/applications are causing a problem, why do you need
an overly complex response?  Why not target the few things that are
causing problems?


Well, because when you promise someone an Internet connection, they usually
expect it to work.  Is it reasonable for Comcast to unilaterally decide that
my P2P filesharing of my family photos and video clips is bad?


So what about the other 490 people on the node expecting it to work?  Do 
you tell them sorry, but 10 of your neighbors are using badly behaved 
applications, so everything you are trying to use it for is having 
problems?  Maybe Comcast should just tell the other 490 neighbors the 
10 names and addresses of poorly behaved P2P users and let the neighborhood 
solve the problem.

Is it reasonable for your filesharing of your family photos and video 
clips to cause problems for all the other users of the network?  Is that 
fair or just greedy?





Re: Can P2P applications learn to play fair on networks?

2007-10-21 Thread Florian Weimer

* Eric Spaeth:

 Of that group, only DSL doesn't have a common upstream bottleneck
 between the subscriber and head-end.

DSL has got that, too, but it's much more statically allocated and
oversubscription results in different symptoms.

If you've got a cable with 50 wire pairs, and you can run ADSL2+ at 16
Mbps downstream on one pair, you can't expect to get full 800 Mbps
across the whole cable, at least not with run-of-the-mill ADSL2+.
(Actual numbers may be different, but there's a significant problem with
interference when you get closer to theoretical channel limits.)


Re: Can P2P applications learn to play fair on networks?

2007-10-21 Thread Matthew Kaufman


Sean Donelan wrote:
So what about the other 490 people on the node expecting it to work?  Do 
you tell them sorry, but 10 of your neighbors are using badly behaved 
applications so everything you are trying to use it for is having 
problems.  


Maybe Comcast should fix their broken network architecture if 10 users 
sending their own data using TCP (or something else with TCP-like 
congestion control) can break the 490 other people on a node.


Or get on their vendor to fix it, if they can't.

If that means traffic shaping at the CPE or very near the customer, then 
perhaps that's what it means, but installing a 3rd-party box that sniffs 
away and then sends forged RSTs in order to live up to its advertised 
claims is clearly at the wrong end of the spectrum of possible solutions.


 Maybe Comcast should just tell the other 490 neighbors the 10
 names and addresses of poorly behaved P2P users and let the neighborhood
 solve the problem.

Maybe Comcast's behavior will cause all 500 neighbors to find an ISP 
that isn't broken. We can only hope.


Is it reasonable for your filesharing of your family photos and video 
clips to cause problems for all the other users of the network?  Is that 
fair or just greedy?


It isn't fair or greedy, it is a bug that it does so. Greedy would be if 
you were using a non-congestion-controlled protocol like most naive 
RTP-based VoIP apps do.


Matthew Kaufman
[EMAIL PROTECTED]



Re: Can P2P applications learn to play fair on networks?

2007-10-21 Thread Florian Weimer

* Sean Donelan:

 On Sun, 21 Oct 2007, Florian Weimer wrote:
 If its not the content, why are network engineers at many university
 networks, enterprise networks, public networks concerned about the
 impact particular P2P protocols have on network operations?  If it was
 just a single network, maybe they are evil.  But when many different
 networks all start responding, then maybe something else is the
 problem.

 Uhm, what about civil liability?  It's not necessarily a technical issue
 that motivates them, I think.

 If it was civil liability, why are they responding to the protocol being
 used instead of the content?

Because the protocol is detectable, and correlates (read: is perceived
to correlate) well enough with the content?

 If there is a technical reason, it's mostly that the network as deployed
 is not sufficient to meet user demands.  Instead of providing more
 resources, lack of funds may force some operators to discriminate
 against certain traffic classes.  In such a scenario, it doesn't even
 matter much that the targeted traffic class transports content of
 questionable legality.  It's more important that the measures applied
 to it have actual impact (Amdahl's law dictates that you target popular
 traffic), and that you can get away with it (this is where the legality
 comes into play).

 Sandvine, packeteer, etc boxes aren't cheap either.

But they try to make things better for end users.  If your goal is to
save money, you'll use different products (even ngrep-with-tcpkill will
do in some cases).

 The problem is giving P2P more resources just means P2P consumes more
 resources, it doesn't solve the problem of sharing those resources
 with other users.

I don't see the problem.  Obviously, there's demand for that kind of
traffic.  ISPs should count themselves lucky because they're selling bandwidth, so
it's just more business for them.

I can see two different problems with resource sharing: You've got
congestion not in the access network, but in your core or on some
uplinks.  This is just poor capacity planning.  Tough luck, you need to
figure that one out or you'll have trouble staying in business (if you
strike the wrong balance, your network will cost much more to maintain
than what the competition pays for their own, or it will be inadequate,
leading to poor service).

The other issue is ridiculously oversubscribed shared-media networks on
the last mile.  This only works if there's a close-knit user community
that can police itself.  ISPs who are in this situation need to
figure out how they ended up there, especially if there isn't cut-throat
competition.  In the end, it's probably a question of how you market
your products (up to 25 Mbps of bandwidth and stuff like that).

 In my experience, a permanently congested network isn't fun to work
 with, even if most of the flows are long-living and TCP-compatible.  The
 lack of proper congestion control is kind of a red herring, IMHO.

 Why do you think so many network operators of all types are
 implementing controls on that traffic?

Because their users demand more bandwidth from the network than is actually
available, and non-user-specific congestion occurs to a significant
degree.  (Is there a better term for that?  What I mean is that not just
the private link to the customer is saturated, but something that is not
under his or her direct control, so changing your own behavior doesn't
benefit you instantly; see self-policing above.)  Selectively degrading
traffic means that you can still market your service as unmetered
25 Mbps, instead of unmetered 1 Mbps.

One reason for degrading P2P traffic I haven't mentioned so far: P2P
applications have got the nice benefit that they are inherently
asynchronous, so cutting the speed to a fraction doesn't fatally impact
users.  (In that sense, there isn't strong user demand for additional
network capacity.)  But guess what happens if there's finally more
demand for streamed high-entropy content.  Then you'll have got not much
choice; you need to build a network with the necessary capacity,


Re: Can P2P applications learn to play fair on networks?

2007-10-21 Thread Eric Spaeth


Joe Greco wrote:

Well, because when you promise someone an Internet connection, they usually
expect it to work.  Is it reasonable for Comcast to unilaterally decide that
my P2P filesharing of my family photos and video clips is bad?
  


Comcast is currently providing 1GB of web hosting space per e-mail 
address associated with each account; one could argue that's a 
significantly more efficient method of distributing that type of content 
and it still doesn't cost you anything extra.


The use case you describe isn't the problem though, it's the gluttonous 
kid in the candy store reaction that people tend to have when they're 
presented with all of the content available via P2P networks.  This type 
of behavior has been around forever, be it in people tagging up 
thousands of Usenet articles, or setting themselves up on several DCC 
queues on IRC.  Certainly innovations like newsreaders capable of using 
NZB files have made retrieval of content easier on Usenet, but nothing 
has lowered the barrier to content access more than P2P software.  It's 
to the point now where people will download anything and everything via 
P2P whether they want it or not.   For the AP article they were 
attempting to seed the Project Gutenberg version of the King James Bible 
-- a work that is readily available with a 3-second Google search and a 
clicked hyperlink straight to the eBook.  Even with that being the case, 
the folks doing the testing still saw connection attempts 
against their machine to retrieve the content.   Much of this is due to 
a disturbing trend of users subscribing to RSS feeds for new torrent 
content, with clients automatically joining in the distribution of any 
new content presented to the tracker regardless of what it is.  Again, 
flat-rate pricing does little to discourage this type of behavior.


-Eric


Re: Can P2P applications learn to play fair on networks?

2007-10-21 Thread Eric Spaeth


Matthew Kaufman wrote:
Maybe Comcast should fix their broken network architecture if 10 users 
sending their own data using TCP (or something else with TCP-like 
congestion control) can break the 490 other people on a node.




That's somewhat like saying you should fix your debt problem by 
acquiring more money.  Clearly there are things that need to be improved 
in broadband networks as a whole, but the path to that solution isn't 
nearly as simple as you make it sound.

Or get on their vendor to fix it, if they can't.



They have.   Enter DOCSIS 3.0.   The problem is that the benefits of 
DOCSIS 3.0 will only come after they've allocated more frequency space, 
upgraded their CMTS hardware, upgraded their HFC node hardware where 
necessary, and replaced subscriber modems with DOCSIS 3.0 capable 
versions.   On an optimistic timeline that's at least 18-24 months 
before things are going to be better; the problem is things are broken 
_today_.
If that means traffic shaping at the CPE or very near the customer, 
then perhaps that's what it means, but installing a 3rd-party box that 
sniffs away and then sends forged RSTs in order to live up to its 
advertised claims is clearly at the wrong end of the spectrum of 
possible solutions.




On a philosophical level I would agree with you, but we also live in a 
world of compromise.  Sure, Comcast could drop their upstream sync rate 
to 64kbps, but why should they punish everyone on the node for the 
actions of a few?  From the perspective of practical network 
engineering, as long as impact can be contained to just seeding 
activities from P2P applications I don't think injected resets are as 
evil as people make them out to be.   You don't see people getting up in 
arms about spoofed TCP ACKs that satellite internet providers use to 
overcome high latency effects on TCP transfer rates.  In both cases the 
ISP is generating traffic on your behalf, the only difference is the 
outcome.   In Comcast's case I believe the net effect of their solution 
is the same; by limiting the number of seeding connections they 
are essentially rate limiting P2P traffic.   It just happens that reset 
injection is by far the easiest option to implement. 
Maybe Comcast's behavior will cause all 500 neighbors to find an ISP 
that isn't broken. We can only hope.


Broken is a relative term.  If Comcast's behavior causes their heavy P2P 
users to find another ISP then those who remain will not have broken 
service.  For $40/mo you can't expect the service to be all things to 
all people, and given the shared nature of the service I find little 
moral disagreement with a utilitarian approach to network management.


-Eric


Re: Can P2P applications learn to play fair on networks?

2007-10-21 Thread Daniel Senie


At 01:59 PM 10/21/2007, Sean Donelan wrote:



On Sun, 21 Oct 2007, Mikael Abrahamsson wrote:
So your recommendation is that universities, enterprises and ISPs 
simply stop offering all Internet service because a few particular 
application protocols are badly behaved?


They should stop to offer flat-rate ones anyway.


Comcast's management has publically stated anyone who doesn't like 
the network management controls on its flat rate service can upgrade 
to Comcast's business class service.


I have Comcast business service in my office, and residential service 
at home. I use CentOS for some stuff, and so tried to pull a set of 
ISOs over BitTorrent. The first few came through OK; now I can't get 
BitTorrent to do much of anything. I made the files I obtained 
available for others, but noted the streams quickly stop.


This is on my office (business) service, served over cable. It's 
promised as 6Mbps/768K and costs $100/month. I can (and will) solve 
this by just setting up a machine in my data center for the purpose 
of running BT, and shape the traffic so it only gets a couple of Mbps 
(then pull the files over VPN to my office). But no, their business 
service is being stomped in the same fashion. So if they did say 
somewhere (and I haven't seen such a statement) that their business 
service is not affected by their efforts to squash BitTorrent, then 
it appears they're not being truthful.




Problem solved?

Or would some P2P folks complain about having to pay more money?



Or do general per-user ratelimiting that is protocol/application agnostic.


As I mentioned previously about the issues involving additional 
in-line devices and so on in networks, imposing per user network 
management and billing is a much more complicated task.


If only a few protocol/applications are causing a problem, why do 
you need an overly complex response?  Why not target the few things 
that are causing problems?


Ask the same question about the spam problem. We spend plenty of 
dollars and manpower to filter out an ever-increasing volume of 
noise. The actual traffic rate of desired email to and from our 
customers has not appreciably changed (typical emails per customer 
per day) in several years.




A better idea might be for the application protocol designers to 
improve those particular applications.


Good luck with that.


It took a while, but it worked with the UDP audio/video protocol 
folks who used to stress networks.  Eventually those protocol 
designers learned to control their applications and make them play 
nicely on the network.


If BitTorrent and similar care to improve their image, they'll need 
to work with others to ensure they respect networks and don't flatten 
them. Otherwise, this will become yet another arms race (as if it 
hasn't already) between ISPs and questionable use.




Re: Can P2P applications learn to play fair on networks?

2007-10-21 Thread Jim Popovitch

On Sun, 2007-10-21 at 17:10 -0400, Daniel Senie wrote:
 I have Comcast business service in my office, and residential service 
 at home. I use CentOS for some stuff, and so tried to pull a set of 
 ISOs over BitTorrent. First few came through OK, now I can't get 
 BitTorrent to do much of anything. I made the files I obtained 
 available for others, but noted the streams quickly stop.

I have Comcast residential service and I've been pulling down torrents
all weekend (Ubuntu v7.10, etc.), with no problems.  I don't think that
Comcast is blocking torrent downloads, I think they are blocking a
zillion Comcast customers from serving torrents to the rest of the
world.  It's a network operations thing... why should Comcast provide a
fat pipe for the rest of the world to benefit from?  Just my $.02.

-Jim P.



Re: Can P2P applications learn to play fair on networks?

2007-10-21 Thread Joe Greco

 Is it reasonable for your filesharing of your family photos and video 
 clips to cause problems for all the other users of the network?  Is that 
 fair or just greedy?

It's damn well fair, is what it is.  Is it somehow better for me to go and
e-mail the photos and movies around?  What if I really don't want to
involve the ISP's servers, because they've proven to be unreliable, or I
don't want them capturing backup copies, or whatever?

My choice of technology for distributing my pictures, in this case, would
probably result in *lower* overall bandwidth consumption by the ISP, since
some bandwidth might be offloaded to Uncle Fred in Topeka, and Grandma
Jones in Detroit, and Brother Tom in Florida who happens to live on a much
higher capacity service.

If filesharing my family photos with friends and family is sufficient to 
cause my ISP to buckle, there's something very wrong.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: BitTorrent swarms have a deadly bite on broadband nets

2007-10-21 Thread Steven M. Bellovin

On Sun, 21 Oct 2007 13:03:11 -0400 (EDT)
Sean Donelan [EMAIL PROTECTED] wrote:

 
 
 http://www.multichannel.com/article/CA6332098.html
 
The short answer: Badly. Based on the research, conducted by Terry
 Shaw, of CableLabs, and Jim Martin, a computer science professor at
 Clemson University, it only takes about 10 BitTorrent users bartering
 files on a node (of around 500) to double the delays experienced by
 everybody else. Especially if everybody else is using normal
 priority services, like e-mail or Web surfing, which is what tech
 people tend to call best-effort traffic.
 
 Adding more network bandwidth doesn't improve the network experience
 of other network users, it just increases the consumption by P2P
 users. That's why you are seeing many universities and enterprises
 spending money on traffic shaping equipment instead of more network
 bandwidth.
 
This result is unsurprising and not controversial.  TCP achieves
fairness *among flows* because virtually all clients back off in
response to packet drops.  BitTorrent, though, uses many flows per
request; furthermore, since its flows are much longer-lived than web or
email, the latter never achieve their full speed even on a per-flow
basis, given TCP's slow-start.  The result is fair sharing among
BitTorrent flows, which can only achieve fairness even among BitTorrent
users if they all use the same number of flows per request and have an
even distribution of content that is being uploaded.
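
A small sketch of the per-flow vs. per-user fairness point (Python; the
link size and flow counts are made up for illustration):

    # Illustration of "fair per flow, unfair per user": TCP roughly equalizes
    # per-flow throughput on a congested link, so a user's share scales with
    # how many flows they open.

    LINK_MBPS = 100.0

    def per_user_share(flows_per_user):
        total_flows = sum(flows_per_user.values())
        return {u: LINK_MBPS * n / total_flows for u, n in flows_per_user.items()}

    users = {"web-%d" % i: 1 for i in range(1, 10)}  # nine users, one flow each
    users["bt-user"] = 40                            # one user, forty flows
    shares = per_user_share(users)
    print("%.1f Mbps per web user" % shares["web-1"])
    print("%.1f Mbps for the BitTorrent user" % shares["bt-user"])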

It's always good to measure, but the result here is quite intuitive.
It also supports the notion that some form of traffic engineering is
necessary.  The particular point at issue in the current Comcast
situation is not that they do traffic engineering but how they do it.


--Steve Bellovin, http://www.cs.columbia.edu/~smb


Re: Can P2P applications learn to play fair on networks?

2007-10-21 Thread Joe Greco

 Joe Greco wrote:
  Well, because when you promise someone an Internet connection, they usually
  expect it to work.  Is it reasonable for Comcast to unilaterally decide that
  my P2P filesharing of my family photos and video clips is bad?

 
 Comcast is currently providing 1GB of web hosting space per e-mail 
 address associated with each account; one could argue that's a 
 significantly more efficient method of distributing that type of content 
 and it still doesn't cost you anything extra.

Wow, that's incredibly ...small.  I've easily got ten times that online
with just one class of photos.  There's a lot of benefit to just letting
people yank stuff right off the old hard drive.  (I don't /actually/ use
P2P for sharing photos, we have a ton of webserver space for it, but I
know people who do use P2P for it)

 The use case you describe isn't the problem though,

Of course it's not, but the point I'm making is that they're using a 
shotgun to solve the problem.

[major snip]

 Again, 
 flat-rate pricing does little to discourage this type of behavior.

I certainly agree with that.  Despite that, the way that Comcast has
reportedly chosen to deal with this is problematic, because it means
that they're not really providing true full Internet access.  I don't
expect an ISP to actually forge packets when I'm attempting to
communicate with some third party.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: Can P2P applications learn to play fair on networks?

2007-10-21 Thread Simon Lyall

On Sun, 21 Oct 2007, Sean Donelan wrote:
 Its not just the greedy commercial ISPs, its also universities,
 non-profits, government, co-op, etc networks.  It doesn't seem to matter
 if the network has 100Mbps user connections or 128Kbps user connection,
 they all seem to be having problems with these particular applications.

I'm going to call bullshit here.

The problem is that the customers are using too much traffic for what is
provisioned. If those same customers were doing the same amount of traffic
via NNTP, HTTP or FTP downloads then you would still be seeing the same
problem and whining as much [1] .

In this part of the world we learnt (the hard way) that your income has
to match your costs for bandwidth. A percentage [2] of your customers are
*always* going to move as much traffic as they can on a 24x7 basis.

If you are losing money or your network is not up to that then you are
doing something wrong, it is *your fault* for not building your network
and pricing it correctly. Napster was launched 8 years ago so you can't
claim this is a new thing.

So stop whinging about how BitTorrent broke your happy Internet, stop
putting in traffic shaping boxes that break TCP and then complaining
that p2p programmes don't follow the specs, and adjust your pricing and
service to match your costs.


[1] See SSL and ISP traffic shaping? at http://www.usenet.com/ssl.htm

[2] That percentage is always at least 10%. If you are launching a new
flat-rate, uncapped service at a reasonable price it might be closer to
80%.

-- 
Simon J. Lyall  |  Very Busy  |  Web: http://www.darkmere.gen.nz/
To stay awake all night adds a day to your life - Stilgar | eMT.



Re: Can P2P applications learn to play fair on networks?

2007-10-21 Thread Adrian Chadd

On Mon, Oct 22, 2007, Simon Lyall wrote:

 So stop whinging about how BitTorrent broke your happy Internet, stop
 putting in traffic shaping boxes that break TCP and then complaining
 that p2p programmes don't follow the specs and adjust your pricing and
 service to match your costs.

So which ISPs have contributed towards more intelligent p2p content
routing and distribution; stuff which'd play better with their networks?
Or are you all busy being purely reactive? 

Surely one ISP out there has to have investigated ways that p2p could
co-exist with their network..




Adrian



Re: Can P2P applications learn to play fair on networks?

2007-10-21 Thread Sean Donelan


On Mon, 22 Oct 2007, Simon Lyall wrote:

So stop whinging about how BitTorrent broke your happy Internet, stop
putting in traffic shaping boxes that break TCP and then complaining
that p2p programmes don't follow the specs and adjust your pricing and
service to match your costs.


Folks in New Zealand seem to also whine about data caps and fair usage 
policies; I doubt changing US pricing and service is going to stop the 
whining.


Those seem to discourage people from donating their bandwidth for P2P 
applications.


Are there really only two extremes?  Don't use it and abuse it?  Will
P2P applications really never learn to play nicely on the network?



Re: Can P2P applications learn to play fair on networks?

2007-10-21 Thread Brandon Galbraith
On 10/21/07, Sean Donelan [EMAIL PROTECTED] wrote:


 On Mon, 22 Oct 2007, Simon Lyall wrote:
  So stop whinging about how BitTorrent broke your happy Internet, stop
  putting in traffic shaping boxes that break TCP and then complaining
  that p2p programmes don't follow the specs and adjust your pricing and
  service to match your costs.

 Folks in New Zealand seem to also whine about data caps and fair usage
 policies, I doubt changing US pricing and service is going to stop the
 whining.

 Those seem to discourage people from donating their bandwidth for P2P
 applications.

 Are there really only two extremes?  Don't use it and abuse it?  Will
 P2P applications really never learn to play nicely on the network?


Can last-mile providers play nicely with their customers and not continue to
offer Unlimited (but we really mean only as much as we say, but we're not
going to tell you the limit until you reach it) false advertising? It skews
the playing field, as well as ticks off the customer. The P2P applications
are already playing nicely. They're only using the bandwidth that has been
allocated to the customer.

-brandon


Re: Can P2P applications learn to play fair on networks?

2007-10-21 Thread Roland Dobbins



On Oct 22, 2007, at 7:50 AM, Sean Donelan wrote:

 Will P2P applications really never learn to play nicely on the  
network?


Here are some more specific questions:

Is some of the difficulty perhaps related to the seemingly  
unconstrained number of potential distribution points in systems of  
this type, along with 'fairness' issues in terms of bandwidth  
consumption of each individual node for upload purposes, and are  
there programmatic ways of altering this behavior in order to reduce  
the number, severity, and duration of 'hot-spots' in the physical  
network topology?


Is there some mechanism by which these applications could potentially  
leverage some of the CDNs out there today?  Have SPs who've deployed  
P2P-aware content-caching solutions on their own networks observed  
any benefits for this class of application?


Would it make sense for SPs to determine how many P2P 'heavy-hitters'  
they could afford to service in a given region of the topology and  
make a limited number of higher-cost accounts available to those  
willing to pay for the privilege of participating in these systems?   
Would moving heavy P2P users over to metered accounts help resolve  
some of the problems, assuming that even those metered accounts would  
have some QoS-type constraints in order to ensure they don't consume  
all available bandwidth?


---
Roland Dobbins [EMAIL PROTECTED] // 408.527.6376 voice

   I don't sound like nobody.

   -- Elvis Presley



Re: Can P2P applications learn to play fair on networks?

2007-10-21 Thread Jim Popovitch

On Mon, 2007-10-22 at 12:55 +1300, Simon Lyall wrote:
 The problem is that the customers are using too much traffic for what is
 provisioned. 

Nope.  Not sure where you got that from.  With P2P, it's others outside
the Comcast network that are oversaturating the Comcast customers'
bandwidth.  It's basically an ebb and flow problem, 'cept there is more
of one than the other. ;-) 

Btw, is Comcast in NZ?

-Jim P.



Re: Can P2P applications learn to play fair on networks?

2007-10-21 Thread Joe Provo

On Mon, Oct 22, 2007 at 08:08:47AM +0800, Adrian Chadd wrote:
[snip]
 So which ISPs have contributed towards more intelligent p2p content
 routing and distribution; stuff which'd play better with their networks?
 Or are you all busy being purely reactive? 
 
A quick Google search found the one I spotted last time I was looking
around: http://he.net/faq/bittorrent.html
...and last time I talked to any HE folks, they hadn't seen much uptake
for the service.

-- 
 RSUC / GweepNet / Spunk / FnB / Usenix / SAGE


Re: Can P2P applications learn to play fair on networks?

2007-10-21 Thread Joe Provo

On Mon, Oct 22, 2007 at 12:55:08PM +1300, Simon Lyall wrote:
 On Sun, 21 Oct 2007, Sean Donelan wrote:
  Its not just the greedy commercial ISPs, its also universities,
  non-profits, government, co-op, etc networks.  It doesn't seem to matter
  if the network has 100Mbps user connections or 128Kbps user connection,
  they all seem to be having problems with these particular applications.
 
 I'm going to call bullshit here.
 
 The problem is that the customers are using too much traffic for what is
 provisioned. If those same customers were doing the same amount of traffic
 via NNTP, HTTP or FTP downloads then you would still be seeing the same
 problem and whining as much [1] .

There are significant protocol behavior differences between BT and FTP.
Hint - downloads are not the Problem.

-- 
 RSUC / GweepNet / Spunk / FnB / Usenix / SAGE


Re: Can P2P applications learn to play fair on networks?

2007-10-21 Thread Adrian Chadd

On Sun, Oct 21, 2007, Christopher E. Brown wrote:

 Where is there a need to go beyond simple remarking and WRED?  Marking
 P2P as scavenger class and letting the existing QoS configs in the
 network deal with it works well.

Because the p2p client authors (and users!) are out to maximise throughput
and mess entirely with any concept of fairness.

Ah, if people understood cooperativeness..

 A properly configured scavenger class allows up to X to be used at any
 one time, where X is the capacity unused by the rest of the traffic at
 that time.



Adrian



Re: BitTorrent swarms have a deadly bite on broadband nets

2007-10-21 Thread Joel Jaeggli

Steven M. Bellovin wrote:

 This result is unsurprising and not controversial.  TCP achieves
 fairness *among flows* because virtually all clients back off in
 response to packet drops.  BitTorrent, though, uses many flows per
 request; furthermore, since its flows are much longer-lived than web or
 email, the latter never achieve their full speed even on a per-flow
 basis, given TCP's slow-start.  The result is fair sharing among
 BitTorrent flows, which can only achieve fairness even among BitTorrent
 users if they all use the same number of flows per request and have an
 even distribution of content that is being uploaded.
 
 It's always good to measure, but the result here is quite intuitive.
 It also supports the notion that some form of traffic engineering is
 necessary.  The particular point at issue in the current Comcast
 situation is not that they do traffic engineering but how they do it.
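To put rough numbers on the per-flow vs. per-user point above (a
back-of-the-envelope sketch in Python; the link capacity and flow counts
are invented for illustration):

# Sketch: per-flow TCP fairness vs. per-user fairness at one bottleneck.
# Assumes idealized long-lived TCP flows that each converge to an equal
# share; the capacity and flow counts below are invented, not measured.

link_mbps = 10.0        # shared bottleneck capacity
web_user_flows = 1      # a user doing a single HTTP fetch
p2p_user_flows = 30     # a BitTorrent user with many active peer connections

per_flow = link_mbps / (web_user_flows + p2p_user_flows)
print(f"web user: {web_user_flows * per_flow:.2f} Mbps")
print(f"p2p user: {p2p_user_flows * per_flow:.2f} Mbps")
# "Fair" per-flow sharing hands the many-flow user ~97% of the bottleneck,
# which is exactly the per-user unfairness described above.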
 

Dare I say it, it might be somewhat informative to engage in a
priority-queuing exercise like the Internet2 scavenger service.

Into one priority queue goes all the normal traffic, which is allowed to
use up to 100% of link capacity.  Into the other queue goes the traffic
you'd like to deliver at lower priority; given an oversubscribed shared
resource at the edge, that queue is capped at some percentage of link
capacity, beyond which performance begins to noticeably suffer.  When the
link is under-utilized, low-priority traffic can use a significant chunk
of it; when high-priority traffic is present, it crowds out the
low-priority stuff before the link saturates.  Obviously, if high-priority
traffic alone fills up the link, then you have a provisioning issue.
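A minimal sketch of that allocation rule, in Python (the 90% scavenger cap
and the demand figures are assumptions for illustration, not anyone's
actual configuration):

# Sketch: two-priority "scavenger" link. Normal traffic may use up to 100%
# of the link; scavenger traffic gets the leftover, subject to a cap under
# contention. All numbers are illustrative assumptions.

def allocate(link_mbps, normal_demand, scavenger_demand, scavenger_cap=0.9):
    normal = min(normal_demand, link_mbps)        # high priority served first
    leftover = link_mbps - normal
    # keep scavenger below the point where the shared medium starts to hurt
    scavenger = min(scavenger_demand, leftover, scavenger_cap * link_mbps)
    return normal, scavenger

for demand in (1.0, 5.0, 9.0, 12.0):              # Mbps of high-priority demand
    n, s = allocate(10.0, demand, scavenger_demand=20.0)
    print(f"normal demand {demand:5.1f} -> normal {n:5.1f}, scavenger {s:5.1f}")
# When the link is idle the scavenger class gets most of it; as normal
# traffic grows it crowds the scavenger class out before the link saturates,
# and if normal traffic alone fills the link you have a provisioning issue.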

I2 characterized this as worst-effort service.  Apps and users could
probably be convinced to set the DSCP bits themselves in exchange for
better performance of interactive apps and control traffic versus
worst-effort bulk data transfer.

Obviously there's room for a discussion of net neutrality in here
someplace.  However, the closer to the CMTS you do this, the more likely
it is to apply some locally relevant model of fairness.

   --Steve Bellovin, http://www.cs.columbia.edu/~smb
 



The next broadband killer: advanced operating systems?

2007-10-21 Thread Leo Bicknell

Windows Vista introduced a significant improvement to the TCP stack,
Window Auto-Tuning, and Mac OS X Leopard will do the same next week.
FreeBSD is committing TCP socket buffer auto-sizing in FreeBSD 7.  I've
also been told similar features are in the 2.6 kernel used by several
popular Linux distributions.

Today a large number of consumer / web server combinations are limited
to a 32k window size, which on a 60ms path across the country limits the
speed of a single TCP connection to roughly 533 kbytes/sec, or about
4.3 Mbits/sec.  Users with 6 and 8 Mbps broadband connections can't even
fill their pipe on a software download.
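The arithmetic behind those figures, as a quick sketch (using the 32k
window and 60 ms round trip from the paragraph above; the 100 Mbps target
is just an example):

# Sketch: single-flow TCP throughput ceiling from a fixed window, and the
# window an auto-tuning stack would need to fill a given pipe. The 32k /
# 60 ms inputs come from the text above; the rest is plain arithmetic.

def max_throughput_bps(window_bytes, rtt_s):
    # one window per round trip is the best a single flow can do
    return window_bytes * 8 / rtt_s

def window_needed_bytes(link_bps, rtt_s):
    # bandwidth-delay product: bytes in flight needed to keep the pipe full
    return link_bps * rtt_s / 8

print(f"{max_throughput_bps(32_000, 0.060) / 1e6:.2f} Mbps from a 32k window at 60 ms")
print(f"{window_needed_bytes(100e6, 0.060) / 1_000:.0f} kB window to fill 100 Mbps at 60 ms")
# ~4.27 Mbps and ~750 kB respectively, matching the ~533 kbytes/sec figure
# above and showing why a fixed 32k window can't fill a fat pipe.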

With these improvements in both clients and servers, these systems may
soon auto-tune to fill 100Mbps (or larger) pipes.  This is related to our
current discussion: bittorrent clients are called unfair for trying to
use the entire pipe, so will these auto-tuning improvements create the
same situation?

-- 
   Leo Bicknell - [EMAIL PROTECTED] - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/
Read TMBG List - [EMAIL PROTECTED], www.tmbg.org




Re: Can P2P applications learn to play fair on networks?

2007-10-21 Thread Geo.




Surely one ISP out there has to have investigated ways that p2p could
co-exist with their network..


Some ideas from one small ISP.

First, fileshare networks drive the need for bandwidth, and since an ISP
sells bandwidth, that should be viewed as good for business; you aren't
going to sell many 6Mb DSL lines to home users who just want to do email
and browse.


Second, the more people on your network running fileshare software and
sharing, the less backbone bandwidth your users will consume when
downloading from a fileshare network, because those on your network are
going to supply full bandwidth to them.  This means that while your
internal network may see the traffic, your expensive backbone connections
won't (at least for the downloads).  Blocking the uploading is a stupid
idea, because then all downloading has to come across your backbone
connection.
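A toy calculation of that trade-off (the file size, number of downloaders,
and locality fraction are invented for illustration):

# Sketch: transit bytes for a popular file, with and without local peers
# allowed to upload. All numbers are invented for illustration.

file_gb = 4.0            # size of the shared file
local_downloaders = 50   # subscribers on this network fetching it

def transit_gb(fraction_served_locally):
    # bytes that still have to cross the backbone / transit links
    return local_downloaders * file_gb * (1 - fraction_served_locally)

print(f"uploads blocked (nothing served locally): {transit_gb(0.0):.0f} GB of transit")
print(f"60% of blocks served by on-net peers:     {transit_gb(0.6):.0f} GB of transit")
# How local real swarms actually are is, of course, the open question.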


Uploads from your users are good; that outbound traffic is exactly what
everyone looks for when seeking peering partners.


OK, now all that said, the users are going to do what they are going to
do.  Whether it takes them 20 minutes or 3 days to download a file, they
are still going to download that file.  It's like the old dialup days,
when everyone said you couldn't build megabit pipes on the last mile
because the network wouldn't support it: people download what they want,
and then the bandwidth sits idle.  Nothing you do is going to stop them
from using the Internet as they see fit, so either they get it fast or
they get it slow, but the bandwidth usage is still going to be there, and
as an ISP your job is to make sure supply meets demand.


If you expect them to pay for 6Mb pipes, they had better see things run
faster than they do on a 1.5Mb pipe, or they are going to head to your
competition.


Geo.

George Roettger
Netlink Services 



Re: Can P2P applications learn to play fair on networks?

2007-10-21 Thread Joe Greco

  Surely one ISP out there has to have investigated ways that p2p could
  co-exist with their network..
 
 Some ideas from one small ISP.
 
 First, fileshare networks drive the need for bandwidth, and since an ISP
 sells bandwidth, that should be viewed as good for business; you aren't
 going to sell many 6Mb DSL lines to home users who just want to do email
 and browse.

One of the things to remember is that many customers are simply looking
for Internet access, but couldn't tell a megabit from a mackerel.

Given that they don't really have any true concept of it, many users
will look at the numbers, just as they do for other things they purchase,
and assume that the one with the better numbers is the better product.
It's kind of hard to test-drive an Internet connection, anyway.

This has often given cable here in the US a bit of an advantage, and
I've noticed that the general practice of cable providers is to try to
maintain a set of numbers that's more attractive than what you typically
get with DSL.

[snip a bunch of stuff that sounds good in theory, may not map in practice]

 If you expect them to pay for 6Mb pipes, they had better see things run
 faster than they do on a 1.5Mb pipe, or they are going to head to your
 competition.

A small number of them, perhaps.

Here's an interesting issue.  I recently learned that the local RR
affiliate has changed its service offerings.  They now offer 7M/512k resi
for $45/mo, or 14M/1M for $50/mo (or thereabouts, prices not exact).

Now, does anybody really think that the additional capacity that they're
offering for just a few bucks more is real, or are they just playing the
numbers for advertising purposes?  I have no doubt that you'll be able to
burst higher, but I'm a bit skeptical about continuous use.

I noticed about two months ago that AT&T started putting U-verse kiosks
in local malls and movie theatres.  Coincidence?

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: Can P2P applications learn to play fair on networks?

2007-10-21 Thread Jim Popovitch

On Sun, 2007-10-21 at 22:45 -0400, Geo. wrote:
 Second, the more people on your network running fileshare software and
 sharing, the less backbone bandwidth your users will consume when
 downloading from a fileshare network, because those on your network are
 going to supply full bandwidth to them.

H... me wonders how you know this for a fact?  The last time I took the
time to snoop a running torrent, I didn't get the impression it was
pulling packets from the same country as I, let alone my network
neighbors.

-Jim P.



Re: Can P2P applications learn to play fair on networks?

2007-10-21 Thread Joel Jaeggli

Jim Popovitch wrote:
 On Sun, 2007-10-21 at 22:45 -0400, Geo. wrote:
 Second, the more people on your network running fileshare software and
 sharing, the less backbone bandwidth your users will consume when
 downloading from a fileshare network, because those on your network are
 going to supply full bandwidth to them.
 
 H... me wonders how you know this for a fact?  The last time I took the
 time to snoop a running torrent, I didn't get the impression it was
 pulling packets from the same country as I, let alone my network
 neighbors.
 
 -Jim P.

http://www.bittorrent.org/protocol.html

The peer selection algorithm is based on which peers have the blocks and
on their willingness to serve them.  You will note that peers that allow
you to download from them are treated preferentially for uploads relative
to those which do not (which is a problem from the perspective of Comcast
customers).

It's unclear to me at the outset how many peers a given torrent would
need before one could prefer topological locality over availability of
blocks and willingness to serve.
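One hypothetical way a client or tracker could fold locality into that
ranking, sketched in Python (the Peer fields and the ASN test are invented
for illustration and are not part of the BitTorrent protocol referenced
above):

# Sketch: rank candidate peers by topological locality before block
# availability. A hypothetical heuristic, not the actual BitTorrent
# peer-selection/choking algorithm; the fields below are invented.

from dataclasses import dataclass

@dataclass
class Peer:
    addr: str
    asn: int              # AS the peer's address maps to (e.g. via a prefix table)
    blocks_we_need: int   # how many of our missing blocks this peer advertises

def rank_peers(candidates, our_asn, min_local=5):
    local = [p for p in candidates if p.asn == our_asn]
    # only prefer locality once enough local peers exist to preserve the
    # swarm's block diversity; otherwise availability has to win
    if len(local) >= min_local:
        key = lambda p: (p.asn != our_asn, -p.blocks_we_need)
    else:
        key = lambda p: -p.blocks_we_need
    return sorted(candidates, key=key)

peers = [Peer("198.51.100.7", 7922, 40),
         Peer("203.0.113.9", 4134, 55),
         Peer("198.51.100.9", 7922, 10)]
for p in rank_peers(peers, our_asn=7922, min_local=2):
    print(p.addr, p.asn, p.blocks_we_need)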

The principal motivator here is, after all, displacing the cost of
downloads onto a cooperative set of peers, where it is assumed to be a
marginal incremental cost.  Reciprocity is a plausible basis for a social
contract, or at least that's what I learned in Montessori school.


[admin] Re: Can P2P applications learn to play fair on networks? and Re: Comcast blocking p2p uploads

2007-10-21 Thread Alex Pilosov

[note that this post also relates to the thread Re: Comcast blocking p2p 
uploads]

While both discussions started out as operational, most of the mail
traffic now consists of things that are not much related to technology or
operations.

To clarify, things like these are on-topic:

* Whether p2p protocols are well-behaved, and how we can help make them
behave.

* Filtering non-behaving applications, whether these are worms or p2p 
applications.

* Helping p2p authors write protocols that are topology- and
congestion-aware

These are on-topic, but all arguments for and against have already been
made. Unless you have something new and insightful to say, please avoid
continuing conversations about these subjects:

* ISPs should[n't] have enough capacity to accommodate any application, no
matter how well or badly behaved
* ISPs should[n't] charge per byte
* ISPs should[n't] have bandwidth caps
* Legality of blocking and filtering

These are clearly off-topic:
* End-user comments about their particular MSO/ISP, pricing, etc. 
* Morality of blocking and filtering

As a guideline, if you could expect a presentation about something at a
NANOG conference, it belongs on the list; if you couldn't, it doesn't.  It
is a clear distinction.  In addition, keep in mind that this is the
network operators' mailing list, *not* an end-user mailing list.

Marty Hannigan (MLC member) already made a post on the Comcast blocking
p2p uploads thread asking people to stick to operational content (vs. the
politics and morality of blocking p2p applications), but people continue
to make non-technical comments.

Accordingly, to increase the signal/noise ratio (as applied to network
operations), the MLC (that's us, the team who moderate this mailing list)
won't hesitate to warn posters who ignore the limits set by the AUP and
the guidance set by the MLC.

If you want to discuss this moderation request, please do so on 
nanog-futures.

-alex [mlc chair]