And of course, if you still believe just adding bandwidth
will solve the problems
Joe St. Sauver probably said it best when he pointed out in slide 5 here
http://www.uoregon.edu/~joe/i2-cap-plan/internet2-capacity-planning.ppt
the N-body problem can be a complex problem to try to
[EMAIL PROTECTED] wrote:
If P2P software relied on an ISP middlebox to mediate the transfers,
then each middlebox could optimize the local situation by using a whole
smorgasbord of tools.
Are there any examples of middleware being adopted by the market? To me, it
looks like the clear
Technologies, Inc.
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On
Behalf Of Stefan Bethke
Sent: Monday, October 29, 2007 8:37 AM
To: [EMAIL PROTECTED]
Cc: nanog@merit.edu
Subject: Re: Can P2P applications learn to play fair on networks?
[EMAIL PROTECTED] wrote:
[EMAIL PROTECTED] wrote:
And of course, if you still believe just adding bandwidth
will solve the problems
Joe St. Sauver probably said it best when he pointed out in slide 5 here
http://www.uoregon.edu/~joe/i2-cap-plan/internet2-capacity-planning.ppt
the N-body problem can be a
On Mon, 29 Oct 2007, Fred Reimer wrote:
That and the fact that an ISP would be aiding and abetting
illegal activities, in the eyes of the RIAA and MPAA. That's not
to say that technically it would not be better, but that it will
never happen due to political and legal issues, IMO.
As always
When we put the application intelligence in the network, we
have to upgrade the network to support new applications. I
believe that's a mistake from the application-innovation angle.
Putting middleboxes into an ISP is not the same thing as
putting intelligence into the network. Think Akamai
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On
Behalf Of Sean Donelan
Sent: Monday, October 29, 2007 12:34 PM
To: nanog@merit.edu
Subject: RE: Can P2P applications learn to play fair on networks?
On Mon, 29 Oct 2007, Fred Reimer wrote:
That and the fact that an ISP would be aiding and abetting
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Sean
Donelan
Sent: Saturday, October 27, 2007 6:31 PM
To: Mohacsi Janos
Cc: nanog@merit.edu
Subject: Re: Can P2P applications learn to play fair on networks?
On Sat, 27 Oct 2007, Mohacsi Janos wrote:
Agreed
On Thu, 25 Oct 2007 12:50:32 -0400 (EDT)
Sean Donelan [EMAIL PROTECTED] wrote:
Comcast's network is QOS DSCP enabled, as are many other large provider
networks. Enterprise customers use QOS DSCP all the time. However, the
net neutrality battles last year made it politically impossible for
On Sat, 27 Oct 2007, Sean Donelan wrote:
Why artificially keep access link speeds low just to prevent upstream
network congestion? Why can't you have big access links?
You're the one that says that statistical overbooking doesn't work, not
anyone else.
Since I know people that offer
On 26 okt 2007, at 18:29, Sean Donelan wrote:
And generating packets with false address information is more
acceptable? I don't buy it.
When a network is congested, someone is going to be upset about any
possible response.
That doesn't mean all possible responses are equally acceptable.
On Sun, 28 Oct 2007, Mikael Abrahamsson wrote:
Why artificially keep access link speeds low just to prevent upstream
network congestion? Why can't you have big access links?
You're the one that says that statistical overbooking doesn't work, not
anyone else.
If you performed a simple
On Sun, 28 Oct 2007, Sean Donelan wrote:
If you performed a simple Google search, you would have discovered many
universities around the world having similar problems.
The university network engineers are saying adding capacity alone isn't
solving their problems.
You're welcome to provide
On Sun, 28 Oct 2007, Mikael Abrahamsson wrote:
If you performed a simple Google search, you would have discovered many
universities around the world having similar problems.
The university network engineers are saying adding capacity alone isn't
solving their problems.
You're welcome to
On Sat, 27 Oct 2007, Mohacsi Janos wrote:
Agreed. Measures like NAT, spoofing-based accelerators, and quarantining
computers were developed for fairly small networks. Not for 1Gbps and above and
20+ sites/customers.
"Small" is a relative term. Hong Kong is already selling 1Gbps access
links to
On Fri, Oct 26, 2007, Paul Ferguson wrote:
If I'm sitting at the end of 8Mb/768k cable modem link, and paying
for it, I should damned well be able to use it anytime I want.
24x7.
As a consumer/customer, I say Don't sell it if you can't
deliver it. And not just sometimes or only
On Fri, 26 Oct 2007, Paul Ferguson wrote:
As a consumer/customer, I say Don't sell it if you can't
deliver it. And not just sometimes or only during foo time.
All the time. Regardless of my applications. I'm paying for it.
I think you have confused a circuit-switched network with a packet
On Fri, 26 Oct 2007, Sean Donelan wrote:
When 5% of the users don't play nicely with the rest of the 95% of the
users; how can network operators manage the network so every user
receives a fair share of the network capacity?
By making sure that the 5% of users upstream capacity doesn't
On Fri, Oct 26, 2007, Paul Ferguson wrote:
If I'm sitting at the end of 8Mb/768k cable modem link, and paying
for it, I should damned well be able to use it anytime I want.
24x7.
As a consumer/customer, I say Don't sell it if you can't
deliver it. And not just sometimes or
On 25-okt-2007, at 18:50, Sean Donelan wrote:
Comcast's network is QOS DSCP enabled, as are many other large
provider networks. Enterprise customers use QOS DSCP all the
time. However, the net neutrality battles last year made it
politically impossible for providers to say they use QOS
The problem is that ISPs work under the assumption that users only
use a certain percentage of their available bandwidth, while (some) users
work under the assumption that they get to use all their available
bandwidth 24/7 if they choose to do so.
My home DSL is 6Mb/384k, so what exactly
Sean Donelan wrote:
When 5% of the users don't play nicely with the rest of the 95% of
the users; how can network operators manage the network so every user
receives a fair share of the network capacity?
This question keeps getting asked in this thread. What is there about a
scavenger class
Rep. Boucher's solution: more capacity, even though it has been
demonstrated many times more capacity doesn't actually solve this
particular problem.
That would seem to be an inaccurate statement.
Is there something in humans that makes it difficult to understand
the difference between
From: Geo. [EMAIL PROTECTED]
To: nanog@merit.edu
Subject: Re: Can P2P applications learn to play fair on networks?
Date: Fri, 26 Oct 2007 06:18:01 -0400
The problem is that ISPs work under the assumption that users only
use a certain percentage of their available bandwidth, while
hold
Hunter S Tolkien Fear and Loathing in Barad Dur
Iain Bowen [EMAIL PROTECTED]
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of
Paul Ferguson
Sent: Friday, October 26, 2007 1:19 AM
To: [EMAIL PROTECTED]
Cc: nanog@merit.edu
Subject: Re: Can P2P applications
PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Paul
Ferguson
Sent: Friday, October 26, 2007 12:19 AM
To: [EMAIL PROTECTED]
Cc: nanog@merit.edu
Subject: Re: Can P2P applications learn to play fair on networks?
Sean Donelan [EMAIL PROTECTED] wrote
On Fri, 26 Oct 2007, Joe Greco wrote:
So, what happens when you add sufficient capacity to the packet-switched
network that it is able to deliver committed bandwidth to all users?
Answer: by adding capacity, you've created a packet-switched network where
you actually get dedicated capacity for
Sean Donelan [EMAIL PROTECTED] wrote:
On Fri, 26 Oct 2007, Paul Ferguson wrote:
As a consumer/customer, I say Don't sell it if you can't
deliver it. And not just sometimes or only during foo time.
All the time. Regardless of my
Jamie Bowden [EMAIL PROTECTED] wrote:
It would seem that the state of NY agrees with you:
http://www.networkworld.com/community/node/20981
The part of this discussion that really infuriates me (and Joe
Greco has hit most of the salient
On Fri, 26 Oct 2007, Paul Ferguson wrote:
No, I'm talking about deceptive marketing practices, consumer
expectations, and customer retention.
From the Comcast order page:
Actual speeds may vary and are not guaranteed. Many factors affect
download speed.
From the Trend Micro order
On Fri, 26 Oct 2007, Iljitsch van Beijnum wrote:
And generating packets with false address information is more acceptable? I
don't buy it.
When a network is congested, someone is going to be upset about any
possible response.
Within the limitations the network operator has, using a TCP RST
On Fri, 26 Oct 2007, Sean Donelan wrote:
If Comcast had used Sandvine's other capabilities to inspect and drop
particular packets, would that have been more acceptable?
Yes, definitely.
Dropping random packets (i.e. FIFO queue, RED, not good on multiple-flows)
Dropping particular packets
On Fri, 26 Oct 2007, Mikael Abrahamsson wrote:
If Comcast had used Sandvine's other capabilities to inspect and drop
particular packets, would that have been more acceptable?
Yes, definitely.
So another in-line device is better than an out-of-band device.
... but terminating the
On Fri, 26 Oct 2007, Paul Ferguson wrote:
The part of this discussion that really infuriates me (and Joe
Greco has hit most of the salient points) is the deceptiveness
in how ISPs underwrite the service their customers subscribe to.
For instance, in our data centers, we have 1Gb uplinks to our
On Fri, 26 Oct 2007, Paul Ferguson wrote:
The part of this discussion that really infuriates me (and Joe
Greco has hit most of the salient points) is the deceptiveness
in how ISPs underwrite the service their customers subscribe to.
For instance, in our data centers, we have 1Gb
On 10/22/07 2:01 AM, Mikael Abrahamsson [EMAIL PROTECTED] wrote:
Could someone who knows DOCSIS 3.0 (perhaps these are general
DOCSIS questions) enlighten me (and others?) by responding to a few things
I have been thinking about.
Let's say cable provider is worried about aggregate upstream
On Wed, 24 Oct 2007, Iljitsch van Beijnum wrote:
The result is network engineering by politician, and many reasonable things
can no longer be done.
I don't see that.
Here come the Congresspeople. After ICANN, next come legislative IETF
standards for what is acceptable network management.
Rep. Boucher's solution: more capacity, even though it has
been demonstrated many times more capacity doesn't actually
solve this particular problem.
Where has it been proven that adding capacity won't solve the P2P
bandwidth problem? I'm aware that some studies have shown that P2P
demand
On Oct 25, 2007, at 12:24 PM, [EMAIL PROTECTED] wrote:
Rep. Boucher's solution: more capacity, even though it has
been demonstrated many times more capacity doesn't actually
solve this particular problem.
Where has it been proven that adding capacity won't solve the P2P
bandwidth problem?
On Thu, 25 Oct 2007, [EMAIL PROTECTED] wrote:
Where has it been proven that adding capacity won't solve the P2P
bandwidth problem? I'm aware that some studies have shown that P2P
demand increases when capacity is added, but I am not aware that anyone
has attempted to see if there is an upper
On Thu, 25 Oct 2007, Marshall Eubanks wrote:
I have raised this issue with P2P promoters, and they all feel that the
limit will be about at the limit of what people can watch (i.e., full
rate video for whatever duration they want to watch such, at somewhere
between 1
and 10 Mbps). From that
On Oct 25, 2007, at 1:09 PM, Sean Donelan wrote:
On Thu, 25 Oct 2007, Marshall Eubanks wrote:
I have raised this issue with P2P promoters, and they all feel
that the
limit will be about at the limit of what people can watch (i.e., full
rate video for whatever duration they want to watch
On Thu, 25 Oct 2007, Marshall Eubanks wrote:
I don't follow this, on a statistical average. This is P2P, right? So if I
send you a piece of a file this will go out my door once, and in your door
once, after a certain (finite!) number of hops
(i.e., transmissions to and from other peers).
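The aggregate arithmetic behind that point can be sketched quickly; the file size and peer count below are made-up illustrations, not figures from the thread:

```python
# In an idealized swarm, every byte of the file enters each downloading
# peer exactly once, no matter which peers serve which pieces (ignoring
# protocol overhead and redundant transfers).
def swarm_ingress_bytes(file_size_bytes, peers):
    """Total bytes received across a swarm of `peers` downloaders."""
    return file_size_bytes * peers

# A 700 MB file fetched by 100 peers: 70 GB crosses the access links in
# aggregate, however the pieces are routed among the peers.
total = swarm_ingress_bytes(700 * 10**6, 100)
```

The point of the sketch: the total traffic is fixed by file size and peer count; P2P only changes which links carry it.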
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
- -- Sean Donelan [EMAIL PROTECTED] wrote:
When 5% of the users don't play nicely with the rest of the 95% of
the users; how can network operators manage the network so every user
receives a fair share of the network capacity?
I don't know if
On 23-okt-2007, at 19:43, Sean Donelan wrote:
The problem here is that they seem to be using a sledge hammer:
BitTorrent is essentially left dead in the water. And they deny
doing anything, to boot.
A reasonable approach would be to throttle the offending
applications to make them fit
On Wed, 24 Oct 2007, Iljitsch van Beijnum wrote:
There are many reasonable things providers could do.
So then why do you stick up for Comcast when they do something unreasonable?
Although yesterday there was a little more info and it seems they only stop
the affected protocols temporarily,
On 22-okt-2007, at 18:12, Sean Donelan wrote:
Network operators probably aren't operating from altruistic
principles, but for most network operators when the pain isn't
spread equally across the customer base it represents a
fairness issue. If 490 customers are complaining about bad
On Oct 23, 2007, at 7:18 AM, Iljitsch van Beijnum wrote:
On 22-okt-2007, at 18:12, Sean Donelan wrote:
Network operators probably aren't operating from altruistic
principles, but for most network operators when the pain isn't
spread equally across the customer base it represents a
On 23-okt-2007, at 14:52, Marshall Eubanks wrote:
I also would like to see a UDP scavenger service, for those
applications that generate lots of bits but
can tolerate fairly high packet losses without replacement. (VLBI,
for example, can in principle live with 10% packet loss without
much
Iljitsch van Beijnum wrote:
On 22-okt-2007, at 18:12, Sean Donelan wrote:
Network operators probably aren't operating from altruistic
principles, but for most network operators when the pain isn't spread
equally across the customer base it represents a fairness
issue. If 490 customers
On Oct 23, 2007, at 9:07 AM, Iljitsch van Beijnum wrote:
On 23-okt-2007, at 14:52, Marshall Eubanks wrote:
I also would like to see a UDP scavenger service, for those
applications that generate lots of bits but
can tolerate fairly high packet losses without replacement. (VLBI,
for
On 23-okt-2007, at 15:43, Sam Stickland wrote:
What I would like is a system where there are two diffserv traffic
classes: normal and scavenger-like. When a user trips some
predefined traffic limit within a certain period, all their
traffic is put in the scavenger bucket which takes a
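A minimal sketch of that two-class idea, assuming Linux-style sockets and DSCP CS1 as the scavenger marking; the monthly limit and the choice of CS1 are this sketch's assumptions, not a deployed config:

```python
import socket

# Hypothetical policy constants: best-effort vs. scavenger (DSCP CS1, the
# codepoint commonly used for less-than-best-effort traffic).
DSCP_BEST_EFFORT = 0
DSCP_SCAVENGER = 8  # CS1

def classify(bytes_used, limit_bytes):
    """Users over their traffic limit fall into the scavenger class."""
    return DSCP_SCAVENGER if bytes_used > limit_bytes else DSCP_BEST_EFFORT

def mark_socket(sock, dscp):
    """Write the DSCP value into the upper 6 bits of the IP TOS byte."""
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, dscp << 2)

# Example: a user who has moved 60 GB against an assumed 50 GB limit.
GB = 10**9
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
mark_socket(s, classify(60 * GB, 50 * GB))
# This socket's packets now carry TOS 0x20 (CS1), which routers along the
# path can queue behind normal traffic when there is congestion.
s.close()
```

In practice the marking (or re-marking) would be done by the operator's edge gear against per-subscriber counters, not by the end host.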
On Tue, Oct 23, 2007 at 01:18:01PM +0200, Iljitsch van Beijnum wrote:
On 22-okt-2007, at 18:12, Sean Donelan wrote:
Network operators probably aren't operating from altruistic
principles, but for most network operators when the pain isn't
spread equally across the customer base it
Iljitsch van Beijnum wrote:
On 23-okt-2007, at 15:43, Sam Stickland wrote:
What I would like is a system where there are two diffserv traffic
classes: normal and scavenger-like. When a user trips some
predefined traffic limit within a certain period, all their traffic
is put in the
On 10/23/07, Joe Provo [EMAIL PROTECTED] wrote:
On Tue, Oct 23, 2007 at 01:18:01PM +0200, Iljitsch van Beijnum wrote:
On 22-okt-2007, at 18:12, Sean Donelan wrote:
The problem here is that they seem to be using a sledge hammer:
BitTorrent is essentially left dead in the water.
Wrong
Joe Provo wrote:
A provider-hosted solution which
managed to transparently handle this across multiple clients and
trackers would likely be popular with the end users.
but not with the rights holders...
J
--
COO
Entanet International
T: 0870 770 9580
W: http://www.enta.net/
L:
On Tue, 23 Oct 2007, Iljitsch van Beijnum wrote:
The problem here is that they seem to be using a sledge hammer: BitTorrent is
essentially left dead in the water. And they deny doing anything, to boot.
A reasonable approach would be to throttle the offending applications to make
them fit
On Sun, 21 Oct 2007, Eric Spaeth wrote:
They have. Enter DOCSIS 3.0. The problem is that the benefits of DOCSIS
3.0 will only come after they've allocated more frequency space, upgraded
their CMTS hardware, upgraded their HFC node hardware where necessary, and
replaced subscriber modems
actually, it would be really helpful to the masses of us who are being
liberal with our delete keys if someone would summarize the two threads,
comcast p2p management and 240/4.
randy
On Mon, 22 Oct 2007, Randy Bush wrote:
actually, it would be really helpful to the masses of us who are being
liberal with our delete keys if someone would summarize the two threads,
comcast p2p management and 240/4.
240/4 has been summarized before: Look for email with MLC Note in
subject.
It's a network
operations thing... why should Comcast provide a fat pipe for
the rest of the world to benefit from? Just my $.02.
Because their customers PAY them to provide that fat pipe?
--Michael Dillon
So which ISPs have contributed towards more intelligent p2p
content routing and distribution; stuff which'd play better
with their networks?
Or are you all busy being purely reactive?
Surely one ISP out there has to have investigated ways that
p2p could co-exist with their network..
On 10/22/07, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
It's a network
operations thing... why should Comcast provide a fat pipe for
the rest of the world to benefit from? Just my $.02.
Because their customers PAY them to provide that fat pipe?
You are correct, customers pay Comcast to
It's a network
operations thing... why should Comcast provide a fat pipe for the
rest of the world to benefit from? Just my $.02.
Because their customers PAY them to provide that fat pipe?
You are correct, customers pay Comcast to provide a fat pipe
for THEIR use (MSO's
One of the things to remember is that many customers are simply looking
for Internet access, but couldn't tell a megabit from a mackerel.
That may have been true 5 years ago; it's not true today. People learn.
Here's an interesting issue. I recently learned that the local RR
affiliate
H... me wonders how you know this for a fact? Last time I took the
time to snoop a running torrent, I didn't get the impression it was
pulling packets from the same country as I, let alone my network
neighbors.
That would be totally dependent on what tracker you use.
Geo.
On Sun, Oct 21, 2007 at 10:45:49PM -0400, Geo. wrote:
[snip]
Second, the more people on your network running fileshare network software
and sharing, the less backbone bandwidth your users are going to use when
downloading from a fileshare network because those on your network are
going to
Will P2P applications really never learn to play nicely on the network?
So from an operations perspective, how should P2P protocols be designed?
It appears that the current solution is for ISPs to
put up barriers to P2P usage (like Comcast's spoofed RSTs), and thus P2P
On Tue, Oct 23, 2007, Perry Lorier wrote:
Would having a way to proxy p2p downloads via an ISP proxy be used by
ISPs and not abused as an additional way to shutdown and limit p2p
usage? If so how would clients discover these proxies or should they be
manually configured?
Would stronger topological sharing be beneficial? If so, how do you
suggest end users software get access to the information required to
make these decisions in an informed manner?
I would think simply looking at the TTL of packets from its peers should be
sufficient to decide who is
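That TTL heuristic might look like the sketch below; the assumption that peers start from one of the common initial TTLs (64, 128, 255), and the 5-hop cutoff, are this sketch's choices, not the poster's:

```python
# Heuristic: hosts almost always send packets with one of a few well-known
# initial TTLs, so (initial - observed) approximates the hop distance.
COMMON_INITIAL_TTLS = (64, 128, 255)

def estimated_hops(observed_ttl):
    """Estimate hop distance from the TTL observed on a peer's packets."""
    initial = min(t for t in COMMON_INITIAL_TTLS if t >= observed_ttl)
    return initial - observed_ttl

def prefer_nearby(peers, max_hops=5):
    """Keep only peers that look topologically close, nearest first."""
    ranked = sorted(peers, key=lambda p: estimated_hops(p[1]))
    return [addr for addr, ttl in ranked if estimated_hops(ttl) <= max_hops]

# (address, observed TTL) pairs: TTL 62 suggests 2 hops from a 64 default,
# 116 suggests 12 hops from 128, 250 suggests 5 hops from 255.
peers = [("192.0.2.9", 116), ("10.0.0.5", 62), ("198.51.100.3", 250)]
nearby = prefer_nearby(peers)  # keeps the 2-hop and 5-hop peers
```

Real deployments would also have to cope with tunnels and middleboxes that rewrite TTLs, which this heuristic ignores.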
Adrian Chadd wrote:
[..]
Here's the real question. If an open source protocol for p2p content
routing and distribution appeared?
It is called NNTP; it exists and is heavily used for exactly what
most people use P2P for: warezing around without legal problems.
NNTP is of course nice to
Sean Donelan wrote:
Much of the same content is available through NNTP, HTTP and P2P. The
content part gets a lot of attention and outrage, but network
engineers seem to be responding to something else.
If its not the content, why are network engineers at many university
networks,
* Adrian Chadd:
So which ISPs have contributed towards more intelligent p2p content
routing and distribution; stuff which'd play better with their
networks?
Perhaps Internet2, with its DC++ hubs? 8-P
I think the problem is that better routing (Bittorrent content is
*not* routed by the
Sean
I don't think this is an issue of fairness. There are two issues at play
here:
1) Legal Liability due to the content being swapped. This is not a technical
matter IMHO.
2) The breakdown of network engineering assumptions that are made when
network operators are designing networks.
I
On Mon, 22 Oct 2007, Bora Akyol wrote:
I think network operators that are using boxes like the Sandvine box are
doing this due to (2). This is because P2P traffic hits them where it hurts,
aka the pocketbook. I am sure there are some altruistic network operators
out there, but I would be
I see your point. The main problem I see with the traffic-shaping (or worse)
boxes is that Comcast/ATT/... sells a particular bandwidth to the customer.
Clearly, they don't provision their network as Number_Customers*Data_Rate;
they provision it to a data-rate capability that is much less than the
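The provisioning gap is easy to put numbers on; the 20:1 contention ratio and node size below are assumed illustrations, not any MSO's real figures:

```python
# Illustrative contention arithmetic: what an ISP sells on a node versus
# what it provisions upstream under statistical multiplexing.
def provisioned_mbps(customers, rate_mbps, contention_ratio):
    """Shared capacity built for a node at a given oversubscription ratio."""
    return customers * rate_mbps / contention_ratio

# 500 customers on the 8Mb/768k tier discussed above (768 kb/s upstream):
sold = provisioned_mbps(500, 0.768, 1)    # ~384 Mb/s if everyone sent at once
built = provisioned_mbps(500, 0.768, 20)  # ~19.2 Mb/s at an assumed 20:1 ratio
# Ten peers seeding flat-out already need 7.68 Mb/s -- about 40% of the
# shared upstream -- which is how 5% of users can dominate the other 95%.
ten_seeders_mbps = 10 * 0.768
```

The sketch only shows why flat-rate pricing plus always-on uploads breaks the statistical assumption; it says nothing about which response to that is acceptable.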
Bora Akyol wrote:
1) Legal Liability due to the content being swapped. This is not a technical
matter IMHO.
Instead of sending an ICMP host unreachable, they are closing the connection via
spoofing. I think it's kinder than just dropping the packets altogether.
2) The breakdown of
I'm a bit late to this conversation but I wanted to throw out a few bits of
info not covered.
A company called Oversi makes a very interesting solution for caching
Torrent and some Kad-based overlay networks, all done through some
cool, strategically placed taps and prefetching. This
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Jack
Bates
Sent: Monday, October 22, 2007 12:35 PM
To: Bora Akyol
Cc: Sean Donelan; nanog@merit.edu
Subject: Re: Can P2P applications learn to play fair on networks?
Bora Akyol wrote:
1) Legal Liability due to the content being
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of
Mikael Abrahamsson
Sent: Monday, October 22, 2007 1:02 AM
To: nanog@merit.edu
Subject: Re: Can P2P applications learn to play fair on networks?
On Sun, 21 Oct 2007, Eric Spaeth wrote:
They have. Enter DOCSIS 3.0
] On Behalf Of Rich
Groves
Sent: Monday, October 22, 2007 3:06 PM
To: nanog@merit.edu
Subject: Re: Can P2P applications learn to play fair on networks?
I'm a bit late to this conversation but I wanted to throw out a few bits of
info not covered.
A company called Oversi makes a very interesting solution
Hey Rich.
We discussed the technology before but the actual mental click here is
important -- thank you.
BTW, I *think* it was Randy Bush who said today's leechers are
tomorrow's cachers. His quote was longer but I can't remember it.
Gadi.
On Mon, 22 Oct 2007, Rich Groves wrote:
' [EMAIL PROTECTED]; nanog@merit.edu
Subject: RE: Can P2P applications learn to play fair on networks?
I don't see how this Oversi caching solution will work with today's HFC
deployments -- the demodulation happens in the CMTS, not in the field. And
if we're talking about de-coupling the RF from
* Sean Donelan:
If its not the content, why are network engineers at many university
networks, enterprise networks, public networks concerned about the
impact particular P2P protocols have on network operations? If it was
just a single network, maybe they are evil. But when many different
On Sun, 21 Oct 2007, Florian Weimer wrote:
If its not the content, why are network engineers at many university
networks, enterprise networks, public networks concerned about the
impact particular P2P protocols have on network operations? If it was
just a single network, maybe they are evil.
On Sun, 21 Oct 2007, Sean Donelan wrote:
Sandvine, packeteer, etc boxes aren't cheap either. The problem is
giving P2P more resources just means P2P consumes more resources, it
doesn't solve the problem of sharing those resources with other users.
Only if P2P shared network resources with
On Sun, 21 Oct 2007, Mikael Abrahamsson wrote:
If your network cannot handle the traffic, don't offer the services.
So your recommendation is that universities, enterprises and ISPs simply
stop offering all Internet service because a few particular application
protocols are badly behaved?
On Sun, 21 Oct 2007, Sean Donelan wrote:
So your recommendation is that universities, enterprises and ISPs simply
stop offering all Internet service because a few particular application
protocols are badly behaved?
They should stop offering flat-rate ones anyway. Or do general per-user
On Sun, 21 Oct 2007, Mikael Abrahamsson wrote:
So your recommendation is that universities, enterprises and ISPs simply
stop offering all Internet service because a few particular application
protocols are badly behaved?
They should stop offering flat-rate ones anyway.
Comcast's management
Mikael Abrahamsson wrote:
If your network cannot handle the traffic, don't offer the services.
In network access for the masses, downstream bandwidth has always been
easier to deliver than upstream. It's been that way since modem
manufacturers found they could leverage a single
* Sean Donelan:
On Sun, 21 Oct 2007, Mikael Abrahamsson wrote:
If your network cannot handle the traffic, don't offer the services.
So your recommendation is that universities, enterprises and ISPs
simply stop offering all Internet service because a few particular
application protocols are
On Sun, 21 Oct 2007, Florian Weimer wrote:
In my experience, a permanently congested network isn't fun to work
with, even if most of the flows are long-living and TCP-compatible. The
lack of proper congestion control is kind of a red herring, IMHO.
Why do you think so many network operators
On Sun, 21 Oct 2007, Mikael Abrahamsson wrote:
So your recommendation is that universities, enterprises and ISPs simply
stop offering all Internet service because a few particular application
protocols are badly behaved?
They should stop offering flat-rate ones anyway.
Comcast's
On Sun, 21 Oct 2007, Joe Greco wrote:
If only a few protocol/applications are causing a problem, why do you need
an overly complex response? Why not target the few things that are
causing problems?
Well, because when you promise someone an Internet connection, they usually
expect it to work.
* Eric Spaeth:
Of that group, only DSL doesn't have a common upstream bottleneck
between the subscriber and head-end.
DSL has got that, too, but it's much more statically allocated and
oversubscription results in different symptoms.
If you've got a cable with 50 wire pairs, and you can run
Sean Donelan wrote:
So what about the other 490 people on the node expecting it to work? Do
you tell them sorry, but 10 of your neighbors are using badly behaved
applications so everything you are trying to use it for is having
problems.
Maybe Comcast should fix their broken network
* Sean Donelan:
On Sun, 21 Oct 2007, Florian Weimer wrote:
If its not the content, why are network engineers at many university
networks, enterprise networks, public networks concerned about the
impact particular P2P protocols have on network operations? If it was
just a single network,
Joe Greco wrote:
Well, because when you promise someone an Internet connection, they usually
expect it to work. Is it reasonable for Comcast to unilaterally decide that
my P2P filesharing of my family photos and video clips is bad?
Comcast is currently providing 1GB of web hosting space
Matthew Kaufman wrote:
Maybe Comcast should fix their broken network architecture if 10 users
sending their own data using TCP (or something else with TCP-like
congestion control) can break the 490 other people on a node.
That's somewhat like saying you should fix your debt problem by
At 01:59 PM 10/21/2007, Sean Donelan wrote:
On Sun, 21 Oct 2007, Mikael Abrahamsson wrote:
So your recommendation is that universities, enterprises and ISPs
simply stop offering all Internet service because a few particular
application protocols are badly behaved?
They should stop to