On 2/8/13 9:46 AM, fredrik danerklint wrote:
About 40 - 50 Mbit/s. Not bad at all.

Downloading software does not have to happen in real time the way watching
a movie does.
In both cases it's actually rather convenient if it's as fast as
possible.

Yes. What I would like is for the access switch that an ISP's customer is
connected to, to give that customer 1 Gbit/s of bandwidth whenever the
traffic is to or from the cache servers at their ISP.
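
If it helps to picture what that policy amounts to, here is a minimal sketch (Python, with made-up cache prefixes and rates, not anything from this thread) of the classification an access switch would have to do per flow:

    import ipaddress

    # Hypothetical prefixes for the ISP's local cache servers -- placeholders only.
    CACHE_PREFIXES = [ipaddress.ip_network(p)
                      for p in ("192.0.2.0/24", "198.51.100.0/24")]

    DEFAULT_RATE_MBPS = 50    # normal contracted rate on the customer port
    CACHE_RATE_MBPS = 1000    # rate allowed toward/from the local caches

    def rate_for_flow(src: str, dst: str) -> int:
        """Return the rate limit (Mbit/s) the access-switch policy would apply,
        based on whether either endpoint falls inside a cache prefix."""
        addrs = (ipaddress.ip_address(src), ipaddress.ip_address(dst))
        if any(a in net for a in addrs for net in CACHE_PREFIXES):
            return CACHE_RATE_MBPS
        return DEFAULT_RATE_MBPS

    if __name__ == "__main__":
        print(rate_for_flow("203.0.113.7", "192.0.2.10"))  # 1000 -> local cache
        print(rate_for_flow("203.0.113.7", "8.8.8.8"))      # 50   -> everything else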

You're positing a situation where a cache infrastructure built at scale, close to the user, has a high enough hit rate on rather large objects to be more cost-effective than adding capacity in the middle of the network as the bandwidth/price curve declines. My early career as an http cache dude makes me a bit suspicious. I'm pretty confident that denser/cheaper/faster silicon is less expensive than deploying boxes of spinning disks closer to the customers than they are today, once you add power/cooling/space/lifecycle-maintenance (I'm a datacenter operator). Netflix's cache, for example, isn't that close to the edge; a single box would support roughly 2-10k simultaneous customers for that one application, and it aims to get inside the ISP rather than out to the access layer. If edge deployment were cheaper, the CDNs would have pushed even closer to the edge already.

Of course, if you can limit consumer choice you can push your hit rate to 100%, but then you're running a VOD service in a walled garden, and there are plenty of those already.

That said, provide compelling numbers and I'll change my mind.
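
For what it's worth, the shape of the comparison I'd want those numbers plugged into looks roughly like this; every figure below is a placeholder, not a claim from either of us:

    # Amortized cost per GB of edge caches vs. simply buying core/transit capacity.

    def cost_per_gb_edge(capex_per_box, boxes, opex_per_box_yr, years, gb_served_total):
        """Amortized cost per GB of deploying cache boxes close to the customer."""
        total = boxes * (capex_per_box + opex_per_box_yr * years)
        return total / gb_served_total

    def cost_per_gb_core(transit_cost_per_mbps_month, avg_utilization):
        """Cost per GB of carrying the traffic across the middle of the network.
        1 Mbit/s sustained for a month moves about 324 GB (86400*30*1e6/8/1e9)."""
        gb_per_mbps_month = 86400 * 30 * 1e6 / 8 / 1e9
        return transit_cost_per_mbps_month / (gb_per_mbps_month * avg_utilization)

    if __name__ == "__main__":
        # Placeholder inputs: the point is that the cache side only wins if the
        # hit rate keeps gb_served_total high enough to amortize the boxes.
        edge = cost_per_gb_edge(capex_per_box=20_000, boxes=100,
                                opex_per_box_yr=3_000, years=3,
                                gb_served_total=5e8)
        core = cost_per_gb_core(transit_cost_per_mbps_month=1.0,
                                avg_utilization=0.5)
        print(f"edge cache: ${edge:.4f}/GB vs core capacity: ${core:.4f}/GB")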
