On Sun, 21 Oct 2007, Eric Spaeth wrote:
They have. Enter DOCSIS 3.0. The problem is that the benefits of DOCSIS
3.0 will only come after they've allocated more frequency space, upgraded
their CMTS hardware, upgraded their HFC node hardware where necessary, and
replaced subscriber modems
actually, it would be really helpful to the masses of us who are being
liberal with our delete keys if someone would summarize the two threads,
comcast p2p management and 240/4.
randy
On Mon, 22 Oct 2007, Randy Bush wrote:
actually, it would be really helpful to the masses of us who are being
liberal with our delete keys if someone would summarize the two threads,
comcast p2p management and 240/4.
240/4 has been summarized before: Look for email with MLC Note in
subject.
It's a network
operations thing... why should Comcast provide a fat pipe for
the rest of the world to benefit from? Just my $.02.
Because their customers PAY them to provide that fat pipe?
--Michael Dillon
So which ISPs have contributed towards more intelligent p2p
content routing and distribution; stuff which'd play better
with their networks?
Or are you all busy being purely reactive?
Surely one ISP out there has to have investigated ways that
p2p could co-exist with their network..
On 10/22/07, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
It's a network
operations thing... why should Comcast provide a fat pipe for
the rest of the world to benefit from? Just my $.02.
Because their customers PAY them to provide that fat pipe?
You are correct, customers pay Comcast to
It's a network
operations thing... why should Comcast provide a fat pipe for the
rest of the world to benefit from? Just my $.02.
Because their customers PAY them to provide that fat pipe?
You are correct, customers pay Comcast to provide a fat pipe
for THEIR use (MSO's
One of the things to remember is that many customers are simply looking
for Internet access, but couldn't tell a megabit from a mackerel.
That may have been true 5 years ago, it's not true today. People learn.
Here's an interesting issue. I recently learned that the local RR
affiliate
On Sun, 21 Oct 2007 19:31:09 -0700
Joel Jaeggli [EMAIL PROTECTED] wrote:
Steven M. Bellovin wrote:
This result is unsurprising and not controversial. TCP achieves
fairness *among flows* because virtually all clients back off in
response to packet drops. BitTorrent, though, uses many
Hmm... me wonders how you know this for fact? Last time I took the
time to snoop a running torrent, I didn't get the impression it was
pulling packets from the same country as I, let alone my network
neighbors.
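
As a rough, hypothetical illustration of the fairness-among-flows point
above: if a bottleneck is split equally per flow, a user's share scales with
how many flows they open. The flow counts below are invented, not measured.

# Rough illustration of "fairness among flows, not among users": if TCP
# gives each flow an equal share of a bottleneck, a user running a
# multi-flow application (e.g. a BitTorrent client) gets a per-user share
# proportional to its flow count. Numbers here are hypothetical.

def per_user_share(flows_per_user):
    """Return each user's fraction of the bottleneck, assuming the link is
    divided equally among all active flows."""
    total_flows = sum(flows_per_user.values())
    return {user: n / total_flows for user, n in flows_per_user.items()}

if __name__ == "__main__":
    users = {"web_user": 2, "torrent_user": 40}   # concurrent TCP flows each
    for user, share in per_user_share(users).items():
        print(f"{user}: {share:.1%} of the bottleneck")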
That would be totally dependent on what tracker you use.
Geo.
On Sun, Oct 21, 2007 at 10:45:49PM -0400, Geo. wrote:
[snip]
Second, the more people on your network running fileshare network software
and sharing, the less backbone bandwidth your users are going to use when
downloading from a fileshare network because those on your network are
going to
Will P2P applications really never learn to play nicely on the network?
So from an operations perspective, how should P2P protocols be designed?
It appears that the current solution at the moment is for ISPs to
put up barriers to P2P usage (like Comcast's spoofed RSTs), and thus P2P
On Tue, Oct 23, 2007, Perry Lorier wrote:
Would having a way to proxy p2p downloads via an ISP proxy be used by
ISPs and not abused as an additional way to shutdown and limit p2p
usage? If so how would clients discover these proxies or should they be
manually configured?
Would stronger topological sharing be beneficial? If so, how do you
suggest end users software get access to the information required to
make these decisions in an informed manner?
I would think simply looking at the TTL of packets from its peers should be
sufficient to decide who is
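
A minimal sketch of that TTL idea, assuming the usual trick of guessing the
sender's initial TTL from the common defaults (64, 128, 255). Peer addresses
and observed TTLs below are made up for illustration.

# Estimate how many hops away each peer is from the TTL observed on its
# packets, assuming the sender started from one of the common initial TTLs,
# then prefer the peers that appear closest.

COMMON_INITIAL_TTLS = (64, 128, 255)

def estimated_hops(observed_ttl):
    """Guess hop count as the distance to the nearest common initial TTL
    at or above the observed value."""
    candidates = [init - observed_ttl for init in COMMON_INITIAL_TTLS
                  if init >= observed_ttl]
    return min(candidates) if candidates else None

def rank_peers(peers):
    """peers: dict of peer address -> observed TTL. Returns peer addresses
    sorted from (apparently) nearest to farthest."""
    return sorted(peers, key=lambda p: (estimated_hops(peers[p]) is None,
                                        estimated_hops(peers[p])))

if __name__ == "__main__":
    observed = {"10.0.0.5": 62, "192.0.2.10": 113, "198.51.100.7": 240}
    for peer in rank_peers(observed):
        print(peer, "~", estimated_hops(observed[peer]), "hops away")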
Adrian Chadd wrote:
[..]
Here's the real question. If an open source protocol for p2p content
routing and distribution appeared?
It is called NNTP; it exists and is heavily used for doing exactly what
most people use P2P for: warezing around without legal problems.
NNTP is of course nice to
Sean Donelan wrote:
Much of the same content is available through NNTP, HTTP and P2P. The
content part gets a lot of attention and outrage, but network
engineers seem to be responding to something else.
If it's not the content, why are network engineers at many university
networks,
* Adrian Chadd:
So which ISPs have contributed towards more intelligent p2p content
routing and distribution; stuff which'd play better with their
networks?
Perhaps Internet2, with its DC++ hubs? 8-P
I think the problem is that better routing (Bittorrent content is
*not* routed by the
Sean
I don't think this is an issue of fairness. There are two issues at play
here:
1) Legal Liability due to the content being swapped. This is not a technical
matter IMHO.
2) The breakdown of network engineering assumptions that are made when
network operators are designing networks.
I
Interesting. I imagine this could have a large impact on the typical
enterprise, where they might do large-scale upgrades in a short period
of time.
Does anyone know if there are any plans by Microsoft to push this out as
a Windows XP update as well?
S
Leo Bicknell wrote:
Windows Vista,
On Mon, 22 Oct 2007, Bora Akyol wrote:
I think network operators that are using boxes like the Sandvine box are
doing this due to (2). This is because P2P traffic hits them where it hurts,
aka the pocketbook. I am sure there are some altruistic network operators
out there, but I would be
On Mon, 22 Oct 2007, Sam Stickland wrote:
Does anyone know if there are any plans by Microsoft to push this out as
a Windows XP update as well?
You can achieve the same thing by running a utility such as TCP Optimizer.
http://www.speedguide.net/downloads.php
Turn on window scaling and
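
For what it's worth, the same effect at the application level is just asking
for a bigger receive buffer before connecting; a minimal sketch (host and
port are placeholders, and the OS may clamp the requested value):

# On a stack without receive-window auto-tuning, an application can request
# a larger receive buffer itself; window scaling still has to be enabled in
# the OS for windows above 64 KB to be usable.

import socket

def open_with_large_window(host, port, bufsize=1 << 20):
    """Connect with an explicitly enlarged receive buffer (~1 MB);
    the kernel may clamp it to its own configured maximum."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, bufsize)  # before connect
    s.connect((host, port))
    actual = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
    print(f"requested {bufsize} bytes, kernel granted {actual}")
    return s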
I see your point. The main problem I see with the traffic shaping (or worse)
boxes is that Comcast/ATT/... sells a particular bandwidth to the customer.
Clearly, they don't provision their network as Number_Customers*Data_Rate,
they provision it to a data rate capability that is much less than the
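
Back-of-the-envelope version of that provisioning gap, with invented
numbers:

# The network is built to far less than Number_Customers * Data_Rate, and
# the gap is the oversubscription ratio. All figures below are hypothetical.

def oversubscription_ratio(customers, sold_mbps, provisioned_mbps):
    """Ratio of aggregate sold bandwidth to actually provisioned capacity."""
    return (customers * sold_mbps) / provisioned_mbps

if __name__ == "__main__":
    customers, sold, provisioned = 500, 6, 100   # hypothetical figures
    ratio = oversubscription_ratio(customers, sold, provisioned)
    per_customer = provisioned / customers
    print(f"oversubscription {ratio:.0f}:1; "
          f"{per_customer:.2f} Mbps each if everyone transmits at once")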
Bora Akyol wrote:
1) Legal Liability due to the content being swapped. This is not a technical
matter IMHO.
Instead of sending an ICMP host unreachable, they are closing the connection via
spoofing. I think it's kinder than just dropping the packets altogether.
2) The breakdown of
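
For the curious, one rough sketch of how injected RSTs of that kind are
commonly spotted: compare the TTL on the reset against the TTLs the real
peer's packets normally arrive with. The packet records below are invented
stand-ins, not output from any particular capture tool.

def looks_spoofed(data_ttls, rst_ttl, tolerance=3):
    """Flag a RST whose observed TTL differs from the peer's usual TTL by
    more than 'tolerance' hops; a mid-path injector rarely matches it."""
    if not data_ttls:
        return False  # nothing to compare against
    typical = sorted(data_ttls)[len(data_ttls) // 2]  # median TTL from the peer
    return abs(rst_ttl - typical) > tolerance

if __name__ == "__main__":
    ttls_from_peer = [52, 52, 51, 52, 53]   # TTLs on ordinary data segments
    print(looks_spoofed(ttls_from_peer, rst_ttl=61))  # True: likely injected
    print(looks_spoofed(ttls_from_peer, rst_ttl=52))  # False: plausible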
In a message written on Mon, Oct 22, 2007 at 06:42:48PM +0200, Mikael
Abrahamsson wrote:
You can achieve the same thing by running a utility such as TCP Optimizer.
http://www.speedguide.net/downloads.php
Turn on window scaling and increase the TCP window size to 1 meg or so,
and you
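
The arithmetic behind the "1 meg or so" suggestion is just the
bandwidth-delay product; a quick sketch with hypothetical figures:

# A single TCP flow can carry at most window / RTT, so the window has to
# cover the bandwidth-delay product to fill a fast long-distance path.

def bdp_bytes(bandwidth_bps, rtt_seconds):
    """Bandwidth-delay product: bytes in flight needed to fill the path."""
    return bandwidth_bps * rtt_seconds / 8

def max_throughput_bps(window_bytes, rtt_seconds):
    """Throughput ceiling imposed by a fixed window."""
    return window_bytes * 8 / rtt_seconds

if __name__ == "__main__":
    print("BDP for 100 Mbps at 70 ms RTT:",
          f"{bdp_bytes(100e6, 0.070) / 1024:.0f} KB")
    print("Ceiling with the old 64 KB default window at 70 ms:",
          f"{max_throughput_bps(64 * 1024, 0.070) / 1e6:.1f} Mbps")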
I'm a bit late to this conversation but I wanted to throw out a few bits of
info not covered.
A company called Oversi makes a very interesting solution for caching
Torrent and some Kad-based overlay networks as well, all done through some
cool, strategically placed taps and prefetching. This
I wonder how quickly applications and network gear would implement QoS
support if the major ISPs offered their subscribers two queues: a default
queue, which handled regular internet traffic but squashed P2P, and then a
separate queue that allowed P2P to flow uninhibited for an extra $5/month,
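
A toy model of that two-queue idea; the classification and the share given
to squashed traffic are invented placeholders, not any vendor's
implementation.

from collections import deque

class TwoQueueShaper:
    """One best-effort queue where unpaid P2P is squashed, plus a paid
    queue that is served first and without the penalty."""

    def __init__(self, p2p_share=0.1):
        self.default_q = deque()   # squashed class
        self.paid_q = deque()      # normal traffic and paid-for P2P
        self.p2p_share = p2p_share # cap on service given to the squashed class

    def enqueue(self, packet, is_p2p, subscriber_paid_for_p2p):
        if is_p2p and not subscriber_paid_for_p2p:
            self.default_q.append(packet)
        else:
            self.paid_q.append(packet)

    def dequeue(self, credits=10):
        """Serve up to 'credits' packets: paid traffic first, and the
        squashed class gets at most its small capped share per round."""
        sent = []
        squashed_budget = max(1, int(credits * self.p2p_share))
        while len(sent) < credits and (self.paid_q or self.default_q):
            if self.paid_q:
                sent.append(self.paid_q.popleft())
            elif squashed_budget > 0:
                sent.append(self.default_q.popleft())
                squashed_budget -= 1
            else:
                break  # only squashed traffic left and its share is used up
        return sent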
On 10/22/2007 at 3:02 PM, Frank Bulk [EMAIL PROTECTED] wrote:
I wonder how quickly applications and network gear would implement QoS
support if the major ISPs offered their subscribers two queues: a default
queue, which handled regular internet traffic but squashed P2P, and then a
separate
... Why not suck up and go with the
economic solution? Seems like the easy thing is for the ISPs to come
clean and admit their unlimited service is not and put in upload
caps and charge for overages.
Who will be the first? If there *is* competition in the
marketplace, the cable company does
-- Crist Clark [EMAIL PROTECTED] wrote:
[...] How
many P2P protocols are already blocking/shaping evasive?
The Storm botnet? :-)
- ferg
On Mon, Oct 22, 2007 at 05:16:08PM -0700, Crist Clark wrote:
It seems to me that what hurts the ISPs is the accompanying upload
streams, not the download (or at least the ISP feels the same
download pain no matter what technology their end user uses to get
the data[0]). Throwing more bandwidth
I'm not claiming that squashing P2P is easy, but apparently Comcast has
been successful enough to generate national attention, and the bandwidth
shaping providers are not totally a lost cause.
The reality is that copper-based internet access technologies: dial-up, DSL,
and cable modems have
In a message written on Mon, Oct 22, 2007 at 08:24:17PM -0500, Frank Bulk wrote:
The reality is that copper-based internet access technologies: dial-up, DSL,
and cable modems have made the design-based trade off that there is
substantially more downstream than upstream. With North American
On Oct 22, 2007, at 9:55 PM, Leo Bicknell wrote:
Having now seen the cable issue described in technical detail over
and over, I have a question.
At the most recent Nanog several people talked about 100Mbps symmetric
access in Japan for $40 US.
This leads me to two questions:
1) Is that
Here's a few downstream/upstream numbers and ratios:
ADSL2+: 24/1.5 = 16:1 (sans Annex.M)
DOCSIS 1.1: 38/9 = 4.2:1 (best case up and downstream modulations and
carrier widths)
BPON: 622/155 = 4:1
GPON: 2488/1244 = 2:1
Only the first is non-shared, so that even though the ratio is
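
To put rough numbers on the shared-versus-dedicated point, here is the same
list again with invented subscriber counts per shared channel; only the
counts are mine, the rate figures are from the ratios above.

LINKS = {
    # name: (downstream Mbps, upstream Mbps, subscribers sharing the channel)
    "ADSL2+ line":      (24.0,     1.5,    1),
    "DOCSIS 1.1 group": (38.0,     9.0,  200),
    "BPON split":       (622.0,  155.0,   32),
    "GPON split":       (2488.0, 1244.0,  32),
}

for name, (down, up, subs) in LINKS.items():
    print(f"{name}: {down / up:4.1f}:1 ratio, "
          f"~{up / subs:.2f} Mbps upstream per subscriber if all transmit")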
With PCMM (PacketCable Multimedia,
http://www.cedmagazine.com/out-of-the-lab-into-the-wild.aspx) support it's
possible to dynamically adjust service flows, as has been done with
Comcast's Powerboost. There also appears to be support for flow
prioritization.
Regards,
Frank
On 10/21/07, Leo Bicknell [EMAIL PROTECTED] wrote:
Windows Vista, and next week Mac OS X Leopard introduced a significant
improvement to the TCP stack, Window Auto-Tuning. FreeBSD is
committing TCP Socket Buffer Auto-Sizing in FreeBSD 7. I've also
been told similar features are in the 2.6
I don't see how this Oversi caching solution will work with today's HFC
deployments -- the demodulation happens in the CMTS, not in the field. And
if we're talking about de-coupling the RF from the CMTS, which is what is
happening with M-CMTSes
A lot of the MDUs and apartment buildings in Japan are doing fiber to the
basement and then VDSL or VDSL2 in the building, or even Ethernet. That's
how symmetrical bandwidth is possible. Considering that much of the
population does not live in high-rises, this doesn't easily apply to the
U.S.
David Andersen wrote:
http://www.washingtonpost.com/wp-dyn/content/article/2007/08/28/AR2007082801990.html
snip
Followed by a recent explosion in fiber-to-the-home buildout by NTT.
About 8.8 million Japanese homes have fiber lines -- roughly nine times
the number in the United States. --
According to
http://torrentfreak.com/comcast-throttles-bittorrent-traffic-seeding-impossible/
Comcast's blocking affects connections to non-Comcast users. This
means that they're trying to manage their upstream connections, not the
local loop.
For Comcast's own position, see
On Oct 22, 2007, at 11:02 PM, Jeff Shultz wrote:
David Andersen wrote:
http://www.washingtonpost.com/wp-dyn/content/article/2007/08/28/AR2007082801990.html
snip
Followed by a recent explosion in fiber-to-the-home buildout by
NTT. About 8.8 million Japanese homes have fiber lines --
Hey Rich.
We discussed the technology before but the actual mental click here is
important -- thank you.
BTW, I *think* it was Randy Bush who said today's leechers are
tomorrow's cachers. His quote was longer but I can't remember it.
Gadi.
On Mon, 22 Oct 2007, Rich Groves wrote:
Once upon a time, David Andersen [EMAIL PROTECTED] said:
But no - I was as happy as everyone else when the CLECs emerged and
provided PRI service at 1/3rd the rate of the ILECs
Not only was that CLEC service concentrated in higher-density areas, the
PRI prices were often not based in reality.
On Mon, 22 Oct 2007, Majdi S. Abbas wrote:
What hurt these access providers, particularly those in the
cable market, was a set of failed assumptions. The Internet became a
commodity, driven by this web thing. As a result, standards like DOCSIS
developed, and bandwidth was allocated,
Frank,
The problem caching solves in this situation is much less complex than what
you are speaking of. Caching toward your client base brings down your
transit costs (if you have any) or lowers congestion in congested
areas if the solution is installed in the proper place. Caching
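
A crude sketch of the transit-savings argument: whatever fraction of P2P
bytes the cache serves locally is traffic that never touches the transit
links. All figures below are hypothetical.

def monthly_transit_savings(p2p_mbps_avg, cache_hit_rate, usd_per_mbps):
    """Rough savings estimate: offloaded load times the transit price."""
    offloaded_mbps = p2p_mbps_avg * cache_hit_rate
    return offloaded_mbps * usd_per_mbps

if __name__ == "__main__":
    saved = monthly_transit_savings(p2p_mbps_avg=300,    # avg P2P load on transit
                                    cache_hit_rate=0.35, # fraction served locally
                                    usd_per_mbps=20)     # transit price per Mbps
    print(f"~${saved:,.0f}/month kept off transit")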
On Tue, Oct 23, 2007, Sean Donelan wrote:
On Mon, 22 Oct 2007, Majdi S. Abbas wrote:
What hurt these access providers, particularly those in the
cable market, was a set of failed assumptions. The Internet became a
commodity, driven by this web thing. As a result, standards like
On Mon, 22 Oct 2007 19:39:48 PDT, Hex Star said:
I can see advanced operating systems consuming much more bandwidth
in the near future than is currently the case, especially with the web
2.0 hype.
You obviously have a different concept of near future than the rest of us,
and you've apparently