Leo Bicknell wrote:
I'm a bit confused by your statement. Are you saying it's more
cost-effective for ISPs to carry downloads thousands of miles
across the US before giving them to the end user than it is to allow
a local end user to upload them to other local end users?
Not to
On Sun, Oct 21, 2007, Joe Greco wrote:
A third interesting thing was noted. The Internet grows very fast.
While there's always someone visiting www.cnn.com, as the number of other
sites grew, there was a slow reduction in the overall cache hit rate over
the years as users tended towards
Much of the same content is available through NNTP, HTTP and P2P.
The content part gets a lot of attention and outrage, but network
engineers seem to be responding to something else.
If it's not the content, why are network engineers at many university
networks, enterprise networks, public
* Sean Donelan:
If it's not the content, why are network engineers at many university
networks, enterprise networks, public networks concerned about the
impact particular P2P protocols have on network operations? If it was
just a single network, maybe they are evil. But when many different
On Sun, 21 Oct 2007, Florian Weimer wrote:
If it's not the content, why are network engineers at many university
networks, enterprise networks, public networks concerned about the
impact particular P2P protocols have on network operations? If it was
just a single network, maybe they are evil.
On Sun, 21 Oct 2007, Sean Donelan wrote:
Sandvine, Packeteer, etc. boxes aren't cheap either. The problem is
giving P2P more resources just means P2P consumes more resources, it
doesn't solve the problem of sharing those resources with other users.
Only if P2P shared network resources with
On Sun, 21 Oct 2007, Mikael Abrahamsson wrote:
If your network cannot handle the traffic, don't offer the services.
So your recommendation is that universities, enterprises and ISPs simply
stop offering all Internet service because a few particular application
protocols are badly behaved?
http://www.multichannel.com/article/CA6332098.html
The short answer: Badly. Based on the research, conducted by Terry Shaw,
of CableLabs, and Jim Martin, a computer science professor at Clemson
University, it only takes about 10 BitTorrent users bartering files on a
node (of around
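The order of magnitude in that finding is easy to reproduce with back-of-envelope arithmetic. The sketch below assumes per-flow fair sharing on a shared DOCSIS upstream; the capacity and flow counts are illustrative assumptions, not figures from the Shaw/Martin research:

```python
# Back-of-envelope: why a handful of seeders can congest a shared
# DOCSIS upstream.  All figures are illustrative assumptions.

UPSTREAM_MBPS = 10.0      # assumed shared upstream capacity per node
SEEDERS = 10              # BitTorrent users seeding continuously
FLOWS_PER_SEEDER = 4      # assumed simultaneous upload flows each

offered_flows = SEEDERS * FLOWS_PER_SEEDER
# With per-flow fair sharing, one interactive user's single upload
# flow (e.g. an HTTP POST or the ACK stream of a download) competes
# against all of the seeders' flows at once:
interactive_share = UPSTREAM_MBPS / (offered_flows + 1)
print(f"{interactive_share:.2f} Mbps")  # ~0.24 Mbps for that user
```

The point is not the exact numbers but the shape: a small, constant population of always-on multi-flow uploaders leaves only a sliver of upstream for everyone else on the node.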
On Sun, 21 Oct 2007, Sean Donelan wrote:
So your recommendation is that universities, enterprises and ISPs simply
stop offering all Internet service because a few particular application
protocols are badly behaved?
They should stop offering flat-rate ones anyway. Or do general per-user
On Sun, 21 Oct 2007, Mikael Abrahamsson wrote:
So your recommendation is that universities, enterprises and ISPs simply
stop offering all Internet service because a few particular application
protocols are badly behaved?
They should stop offering flat-rate ones anyway.
Comcast's management
Mikael Abrahamsson wrote:
If your network cannot handle the traffic, don't offer the services.
In network access for the masses, downstream bandwidth has always been
easier to deliver than upstream. It's been that way since modem
manufacturers found they could leverage a single
* Sean Donelan:
On Sun, 21 Oct 2007, Mikael Abrahamsson wrote:
If your network cannot handle the traffic, don't offer the services.
So your recommendation is that universities, enterprises and ISPs
simply stop offering all Internet service because a few particular
application protocols are
Note that this is from 2006. Do you have a link to the actual paper, by
Terry Shaw of CableLabs and Jim Martin of Clemson?
Regards
Marshall
On Oct 21, 2007, at 1:03 PM, Sean Donelan wrote:
http://www.multichannel.com/article/CA6332098.html
The short answer: Badly. Based on the
On Sun, 21 Oct 2007, Florian Weimer wrote:
In my experience, a permanently congested network isn't fun to work
with, even if most of the flows are long-living and TCP-compatible. The
lack of proper congestion control is kind of a red herring, IMHO.
Why do you think so many network operators
On Sun, 21 Oct 2007, Mikael Abrahamsson wrote:
So your recommendation is that universities, enterprises and ISPs simply
stop offering all Internet service because a few particular application
protocols are badly behaved?
They should stop offering flat-rate ones anyway.
Comcast's
On Sun, 21 Oct 2007, Joe Greco wrote:
If only a few protocols/applications are causing a problem, why do you need
an overly complex response? Why not target the few things that are
causing problems?
Well, because when you promise someone an Internet connection, they usually
expect it to work.
* Eric Spaeth:
Of that group, only DSL doesn't have a common upstream bottleneck
between the subscriber and head-end.
DSL has got that, too, but it's much more statically allocated and
oversubscription results in different symptoms.
If you've got a cable with 50 wire pairs, and you can run
Sean Donelan wrote:
So what about the other 490 people on the node expecting it to work? Do
you tell them, sorry, but 10 of your neighbors are using badly behaved
applications, so everything you are trying to use the connection for is
having problems?
Maybe Comcast should fix their broken network
* Sean Donelan:
On Sun, 21 Oct 2007, Florian Weimer wrote:
If it's not the content, why are network engineers at many university
networks, enterprise networks, public networks concerned about the
impact particular P2P protocols have on network operations? If it was
just a single network,
Joe Greco wrote:
Well, because when you promise someone an Internet connection, they usually
expect it to work. Is it reasonable for Comcast to unilaterally decide that
my P2P filesharing of my family photos and video clips is bad?
Comcast is currently providing 1GB of web hosting space
Matthew Kaufman wrote:
Maybe Comcast should fix their broken network architecture if 10 users
sending their own data using TCP (or something else with TCP-like
congestion control) can break the 490 other people on a node.
That's somewhat like saying you should fix your debt problem by
At 01:59 PM 10/21/2007, Sean Donelan wrote:
On Sun, 21 Oct 2007, Mikael Abrahamsson wrote:
So your recommendation is that universities, enterprises and ISPs
simply stop offering all Internet service because a few particular
application protocols are badly behaved?
They should stop to
On Sun, 2007-10-21 at 17:10 -0400, Daniel Senie wrote:
I have Comcast business service in my office, and residential service
at home. I use CentOS for some stuff, and so tried to pull a set of
ISOs over BitTorrent. First few came through OK, now I can't get
BitTorrent to do much of
Is it reasonable for your filesharing of your family photos and video
clips to cause problems for all the other users of the network? Is that
fair or just greedy?
It's damn well fair, is what it is. Is it somehow better for me to go and
e-mail the photos and movies around? What if I
On Sun, 21 Oct 2007 13:03:11 -0400 (EDT)
Sean Donelan [EMAIL PROTECTED] wrote:
http://www.multichannel.com/article/CA6332098.html
The short answer: Badly. Based on the research, conducted by Terry
Shaw, of CableLabs, and Jim Martin, a computer science professor at
Clemson
Joe Greco wrote:
Well, because when you promise someone an Internet connection, they usually
expect it to work. Is it reasonable for Comcast to unilaterally decide that
my P2P filesharing of my family photos and video clips is bad?
Comcast is currently providing 1GB of web hosting
On Sun, 21 Oct 2007, Sean Donelan wrote:
It's not just the greedy commercial ISPs, it's also universities,
non-profits, government, co-op, etc. networks. It doesn't seem to matter
if the network has 100 Mbps user connections or 128 Kbps user connections,
they all seem to be having problems with
On Mon, Oct 22, 2007, Simon Lyall wrote:
So stop whinging about how BitTorrent broke your happy Internet. Stop
putting in traffic-shaping boxes that break TCP and then complaining
that P2P programmes don't follow the specs, and adjust your pricing and
service to match your costs.
So which
On Mon, 22 Oct 2007, Simon Lyall wrote:
So stop whinging about how BitTorrent broke your happy Internet. Stop
putting in traffic-shaping boxes that break TCP and then complaining
that P2P programmes don't follow the specs, and adjust your pricing and
service to match your costs.
Folks in New
On 10/21/07, Sean Donelan [EMAIL PROTECTED] wrote:
On Mon, 22 Oct 2007, Simon Lyall wrote:
So stop whinging about how BitTorrent broke your happy Internet. Stop
putting in traffic-shaping boxes that break TCP and then complaining
that P2P programmes don't follow the specs, and adjust your
On Oct 22, 2007, at 7:50 AM, Sean Donelan wrote:
Will P2P applications really never learn to play nicely on the
network?
Here are some more specific questions:
Is some of the difficulty perhaps related to the seemingly
unconstrained number of potential distribution points in systems of
On Mon, 2007-10-22 at 12:55 +1300, Simon Lyall wrote:
The problem is that the customers are using too much traffic for what is
provisioned.
Nope. Not sure where you got that from. With P2P, it's others outside
the Comcast network that are oversaturating the Comcast customers'
bandwidth.
On Mon, Oct 22, 2007 at 08:08:47AM +0800, Adrian Chadd wrote:
[snip]
So which ISPs have contributed towards more intelligent p2p content
routing and distribution; stuff which'd play better with their networks?
Or are you all busy being purely reactive?
A quick google search found the one I
On Mon, Oct 22, 2007 at 12:55:08PM +1300, Simon Lyall wrote:
On Sun, 21 Oct 2007, Sean Donelan wrote:
It's not just the greedy commercial ISPs, it's also universities,
non-profits, government, co-op, etc. networks. It doesn't seem to matter
if the network has 100 Mbps user connections or
On Sun, Oct 21, 2007, Christopher E. Brown wrote:
Where is there a need to go beyond simple remarking and WRED? Marking
P2P as scavenger class and letting the existing QoS configs in the
network deal with it works well.
Because the p2p client authors (and users!) are out to maximise
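For readers unfamiliar with the scavenger-class approach being discussed, it usually amounts to remarking identified P2P traffic to DSCP CS1 and letting WRED discard it first under congestion. A hypothetical Cisco IOS-style sketch follows; the class names are placeholders and the classification match will differ by platform:

```
! Mark traffic classified as P2P down to scavenger (CS1)
class-map match-any P2P-TRAFFIC
 match protocol bittorrent
!
policy-map MARK-SCAVENGER
 class P2P-TRAFFIC
  set ip dscp cs1
!
! On congested interfaces, give the scavenger class a small
! guaranteed floor and let DSCP-based WRED drop it aggressively
! before best-effort traffic is touched.
policy-map WAN-OUT
 class SCAVENGER
  bandwidth percent 1
  random-detect dscp-based
```

The attraction of this design is that it needs no per-user state: unmarked traffic is untouched, and P2P only suffers when the link is actually full.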
Steven M. Bellovin wrote:
This result is unsurprising and not controversial. TCP achieves
fairness *among flows* because virtually all clients back off in
response to packet drops. BitTorrent, though, uses many flows per
request; furthermore, since its flows are much longer-lived than web
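Bellovin's point can be illustrated numerically: under idealized TCP fairness the link divides its capacity per flow, not per user, so a client opening many flows takes a proportionally larger share. A minimal sketch, with made-up flow counts and link capacity:

```python
# Idealized per-flow TCP fairness: a shared link splits its capacity
# equally among competing flows, not among users.

def per_user_share(capacity_mbps, flows_per_user):
    """Return each user's aggregate bandwidth when capacity is divided
    evenly across all active flows."""
    total_flows = sum(flows_per_user)
    per_flow = capacity_mbps / total_flows
    return [n * per_flow for n in flows_per_user]

# One BitTorrent user with 40 flows vs. 9 web users with 1 flow each
# on a 10 Mbps link (all numbers hypothetical).
shares = per_user_share(10.0, [40] + [1] * 9)
print(round(shares[0], 2))  # BitTorrent user's aggregate: 8.16
print(round(shares[1], 3))  # each single-flow user: 0.204
```

Per-flow fairness is exactly what TCP's congestion control delivers, which is why "TCP-friendly" and "fair to other users" are not the same claim.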
Windows Vista introduced a significant improvement to the TCP stack,
Window Auto-Tuning, and next week Mac OS X Leopard does the same.
FreeBSD is committing TCP socket buffer auto-sizing in FreeBSD 7. I've
also been told similar features are in the 2.6 kernel used by several
popular Linux distributions.
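The operational effect of auto-tuning is visible from userspace: leave the receive buffer alone and the kernel grows the window as the transfer demands; set SO_RCVBUF explicitly and (on Linux, for example) auto-tuning is disabled for that socket. A small probe, noting that default sizes and exact semantics vary by OS:

```python
import socket

# Query the kernel's default receive buffer.  On auto-tuning stacks
# this is only a starting point that the kernel may grow mid-transfer.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
default_rcvbuf = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)

# Setting SO_RCVBUF explicitly pins the buffer; on Linux this switches
# receive auto-tuning off for this socket (and the kernel reports back
# roughly double the requested value to account for overhead).
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 256 * 1024)
pinned_rcvbuf = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
s.close()

print(default_rcvbuf, pinned_rcvbuf)
```

The relevance to this thread: auto-tuned stacks let a single long-lived flow fill far larger bandwidth-delay products than the old 64 KB defaults, which changes what "one well-behaved TCP flow" can do to a shared link.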
Surely one ISP out there has to have investigated ways that p2p could
co-exist with their network..
Some ideas from one small ISP.
First, fileshare networks drive the need for bandwidth, and since an ISP
sells bandwidth, that should be viewed as good for business because you
aren't going
Surely one ISP out there has to have investigated ways that p2p could
co-exist with their network..
Some ideas from one small ISP.
First, fileshare networks drive the need for bandwidth, and since an ISP
sells bandwidth, that should be viewed as good for business because you
aren't
On Sun, 2007-10-21 at 22:45 -0400, Geo. wrote:
Second, the more people on your network running fileshare network software
and sharing, the less backbone bandwidth your users are going to use when
downloading from a fileshare network because those on your network are going
to supply full
Jim Popovitch wrote:
On Sun, 2007-10-21 at 22:45 -0400, Geo. wrote:
Second, the more people on your network running fileshare network software
and sharing, the less backbone bandwidth your users are going to use when
downloading from a fileshare network because those on your network are
[note that this post also relates to the thread Re: Comcast blocking p2p
uploads]
While both discussions started out as operational, most of the mail
traffic is about things that are not very much related to technology or
operations.
To clarify, things like these are on-topic:
* Whether p2p