Ratios & peering [was: The scale of streaming video on the Internet.]

2010-12-05 Thread Patrick W. Gilmore
On Dec 4, 2010, at 5:28 PM, Bill Stewart wrote:
 On Fri, Dec 3, 2010 at 9:35 AM, Leo Bicknell bickn...@ufp.org wrote:

 - Ratio needs to be dropped from all peering policies.  It made sense
  back when the traffic was two people e-mailing each other.  It was
  a measure of equal value.  However the net has evolved.  In the
  face of streaming audio and video, or rich multimedia web sites
  Content-User will always be wildly out of ratio.  It has moved from
  a useful measure, to an excuse to make Content pay in all
  circumstances.
 
 I think that's the key point here - ratios make sense when similar
 types of carriers are peering with each other, whether that's
 traditional Tier 1s or small carriers or whatever; they don't make
 sense when an eyeball network is connecting to a content-provider
 network.

Ratios either make sense, or they don't.  I don't see how the type of network 
fits into it.  If you are a restaurant, you do not decide whether or not to 
charge a customer for food based on whether or not they work at another 
restaurant.  If you are eyeball and content wants to peer with you, make a 
decision based on your costs and profits.

Ratios are a proxy for real cost / benefit.  As Leo mentioned (and Bill 
snipped), if $LARGE_CONTENT has a single location and $LARGE_EYEBALL has to 
carry it all over the country, the ratio supposedly matters because large 
eyeball has to carry those bits everywhere.  The implicit statement here is 
that large content gives a rat's ass about large eyeball's costs.

Repeat after me: I DO NOT CARE ABOUT YOUR COSTS.  What's more, you don't care 
about mine.  If cisco says "well, I know the Juniper has the same features and 
is cheaper, but my costs are higher!", do you then buy the cisco?  HELL NO.  
The other person's costs are irrelevant to your decision.

If large eyeball finds it cheaper to pay $LARGE_TRANSIT for those bits, perhaps 
because eyeball can make transit carry the bits to a local hub, then eyeball 
should not peer.  If eyeball would actually pay more for transit than for 
carrying the bits from content's single location, yet still does not peer, then 
eyeball's peering manager should be fired.  If you cost my company money to 
boost your ego, you're out on your ass.

Of course, I am glossing over the idea that eyeball could pay transit for a 
short while to see if he can get a concession out of content.  Maybe eyeball 
assumes content has transit costs as well, so eyeball thinks he can force 
content to pay something.  This is probably where the idea of "similar value" 
popped into the peering lexicon.  But that is standard business negotiation, 
and honestly has nothing to do with similar value.  In reality, content & 
eyeball have no idea of the other's true costs (probably not even the $/Mbps 
they pay for transit), so the idea of coming to a "similar value" agreement 
is ludicrous.

Make the decisions that are best for your company.  Not best for your ego.

Remember people, the Internet is a business.  Peering is a business tool, not 
some playground where teacher is enforcing some notion of fairness.

-- 
TTFN,
patrick

P.S. I'm ignoring the idea of "if we give it away free to one, everyone will 
want it free."  Trust me, they all want it free anyway.  And saying "you gave 
it to him for free!" sounds more like that schoolyard than a business 
negotiation.  Besides, if you come to me and say "this other network got $FOO", 
I will tell you I couldn't possibly talk about that under NDA, their deal is 
irrelevant to our deal, and each deal is far too unique to be compared.  Then 
bitch at the other network for breaking our NDA.  Breaking NDAs is bad, 
m'kay?




Re: The scale of streaming video on the Internet.

2010-12-05 Thread Lyndon Nerenberg (VE6BBM/VE7TFX)
 Just how much free time do you have?  :)

1 minute to google the capacity of a 747-400F.
1 minute to google the dimensions and weight of an lto-4 cartridge.
1 minute to punch the numbers into bc(1).

--lyndon




Re: The scale of streaming video on the Internet.

2010-12-04 Thread Bill Stewart
On Fri, Dec 3, 2010 at 9:35 AM, Leo Bicknell bickn...@ufp.org wrote:
 - Ratio needs to be dropped from all peering policies.  It made sense
  back when the traffic was two people e-mailing each other.  It was
  a measure of equal value.  However the net has evolved.  In the
  face of streaming audio and video, or rich multimedia web sites
  Content-User will always be wildly out of ratio.  It has moved from
  a useful measure, to an excuse to make Content pay in all
  circumstances.

I think that's the key point here - ratios make sense when similar
types of carriers are peering with each other, whether that's
traditional Tier 1s or small carriers or whatever; they don't make
sense when an eyeball network is connecting to a content-provider
network.  The eyeball network can argue that it's doing all the work,
because the content provider is handing it 99% of the bits, but the
content provider can argue that the eyeball network makes its money
delivering bits asymmetrically to its end users, and they'll be really
annoyed if they can't get the content they want.  There are still
balance-of-power issues - Comcast won't get much complaint if it drops
traffic from Podunk Obscure Hosting Services, so they can bully Podunk
into paying them, while Podunk Rural Wireless Services will get lots
of complaint from its users if it drops traffic from YouTube.

Level 3 is functioning not only as a transport provider for smaller
content providers, but also as an aggregated negotiation service,
though in this case the content provider, Netflix, is big enough to
matter.  (Some years ago, when they were DVDs by mail only, it was
estimated that they had a bandwidth about 1/3 that of the total (US?)
internet, just with slightly higher latency; or significantly lower
latency, if you were still on modems.)
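That sneakernet estimate is easy to sanity-check. A rough sketch, where the ~2 million discs shipped per day and the 4.7 GB single-layer DVD capacity are assumptions of mine, not figures from the post:

```python
# Back-of-the-envelope "bandwidth" of Netflix's DVD-by-mail service.
# The discs/day volume and per-disc capacity are assumed figures.

discs_per_day = 2_000_000        # assumed peak shipping volume
bytes_per_disc = 4.7e9           # single-layer DVD
seconds_per_day = 86_400

bits_per_second = discs_per_day * bytes_per_disc * 8 / seconds_per_day
print(f"{bits_per_second / 1e9:.0f} Gb/s sustained")
```

Hundreds of gigabits per second of sustained throughput, with latency measured in days; that's the right order of magnitude for the "1/3 of the Internet" characterization of that era.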

-- 

             Thanks;     Bill

Note that this isn't my regular email account - It's still experimental so far.
And Google probably logs and indexes everything you send it.



Re: The scale of streaming video on the Internet.

2010-12-04 Thread Jay Ashworth
- Original Message -
 Level 3 is functioning not only as a transport provider for smaller
 content providers, but also as an aggregated negotiation service,
 though in this case the content provider, Netflix, is big enough to
 matter. (Some years ago, when they were DVDs by mail only, it was
 estimated that they had a bandwidth about 1/3 that of the total (US?)
 internet, just with slightly higher latency) (or significantly lower
 latency, if you were still on modems.)

A station wagon full of magtape, yes.  Henry Spencer?

I recently calculated the capacity of a 747F full of LTO-4 tapes; it's
about 8.7 exabytes.  I *think* it's within weight and balance for the
airframe.

Cheers,
-- jra



Re: The scale of streaming video on the Internet.

2010-12-04 Thread Scott Morris
On 12/4/10 5:56 PM, Jay Ashworth wrote:
 I recently calculated the capacity of a 747F full of LTO-4 tapes; it's
 about 8.7 exabytes.  I *think* it's within weight and balance for the
 airframe.

 Cheers,
 -- jra


Just how much free time do you have?  :)

Scott





Re: The scale of streaming video on the Internet.

2010-12-04 Thread bmanning
On Sat, Dec 04, 2010 at 09:42:55PM -0500, Scott Morris wrote:
 On 12/4/10 5:56 PM, Jay Ashworth wrote:
  I recently calculated the capacity of a 747F full of LTO-4 tapes; it's
  about 8.7 exabytes.  I *think* it's within weight and balance for the
  airframe.
 
  Cheers,
  -- jra
 
 
 Just how much free time do you have?  :)
 
 Scott
 

well... here are the numbers (using LTO-5's)


you can do the math.


747-8 -- 308,647 lb payload / 8,130 km range
747-8 -- 600 cubic meters

lto-5 -- 3.0 TB (2:1 compressed)
lto-5 -- 0.6 lb
lto-5 -- 11.3 x 2.79 x 11.1 cm


and although it's not generally available, the LCF has 4x the load of 
the 747-400F

http://en.wikipedia.org/wiki/File:747_400LCF_DREAM_LIFTER.jpg

the killer is going to be the 280 MB/s transfer rate off the tapes. :)
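Taking up the invitation to do the math (a sketch only: the 3.0 TB figure is LTO-5's 2:1-compressed capacity, and the ~900 km/h cruise speed is my assumption, not a number from the post):

```python
# Effective bandwidth of a 747-8 freighter full of LTO-5 tapes over its
# 8130 km range. Airframe and cartridge numbers are from the post above;
# the ~900 km/h cruise speed is an assumption.

payload_lb = 308_647            # 747-8F max payload
cargo_m3   = 600                # 747-8F cargo volume
tape_tb    = 3.0                # LTO-5, 2:1 compressed (1.5 TB native)
tape_lb    = 0.6
tape_cm3   = 11.3 * 2.79 * 11.1

by_weight = payload_lb / tape_lb                # ~514k tapes
by_volume = cargo_m3 * 1e6 / tape_cm3           # ~1.7M tapes
tapes = int(min(by_weight, by_volume))          # weight is the binding limit

capacity_tb = tapes * tape_tb
flight_s = 8130 / 900 * 3600                    # ~9 hour flight
gbps = capacity_tb * 1e12 * 8 / flight_s / 1e9  # terabytes -> bits -> Gb/s

print(f"{tapes} tapes, {capacity_tb / 1e6:.2f} EB, ~{gbps / 1000:.0f} Tb/s")
```

Weight, not volume, turns out to be the binding constraint: about 1.5 EB in flight, hundreds of Tb/s effective. And as noted above, the real killer is the per-drive transfer rate needed to fill and drain the plane.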


--bill



Re: The scale of streaming video on the Internet.

2010-12-03 Thread Jared Mauch
Unless there is robust support for it in home nat/CPE it is dead. Same for 
these ipv6 challenges at the edge. I am also not aware of any major networks 
that currently have multicast on their backbone also deploying v6mcast. 
Corrections to that here or privately welcome.

Sent from my iThing

On Dec 2, 2010, at 4:44 PM, Antonio Querubin t...@lava.net wrote:

 On Thu, 2 Dec 2010, david raistrick wrote:
 
 If you, the multicast broadcaster, don't have extensive control of the 
 -entire- end-to-end IP network, it will be significantly broken significant 
 amounts of the time.
 
 
 ...david (former member of a team of engineers who built and maintained a 
 220,000 seat multicast video network)
 
 Which points to the need for service providers to deploy robust multicast 
 routing.
 
 Antonio Querubin
 808-545-5282 x3003
 e-mail/xmpp:  t...@lava.net



Re: The scale of streaming video on the Internet.

2010-12-03 Thread mikea
On Thu, Dec 02, 2010 at 06:29:54PM -1000, Antonio Querubin wrote:
 On Thu, 2 Dec 2010, Paul Ferguson wrote:
 
 Old skool.
 
 Twitter is much faster:
 
 http://www.thejakartaglobe.com/home/government-disaster-advisors-twitter-hacked-used-to-send-tsunami-warning/408447
 
 But morse code is still faster :)
 
 http://www.google.com/search?q=morse+code+beats+texting&ie=utf-8&oe=utf-8&aq=t&rls=org.mozilla:en-US:official&client=firefox-a

Faster and doesn't require infrastructure (other than possibly electrical
power). Those hams were throttled _way_ back, too, to about 21 words per
minute; I frequently hear Morse at speeds up to about 50 wpm in the ham
bands.
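For calibration, words per minute converts to a raw signalling rate via the standard PARIS timing word (50 dot units per word, spacing included):

```python
# Morse speed: words per minute -> raw signalling rate in baud,
# using the standard 50-dot-unit PARIS word.

def wpm_to_baud(wpm: float) -> float:
    """Dot units per second at a given words-per-minute speed."""
    return wpm * 50 / 60

for wpm in (21, 50):
    print(f"{wpm} wpm = {wpm_to_baud(wpm):.1f} baud")
```

So even 50 wpm is under 42 baud; Morse wins those contests on latency and simplicity, not throughput.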

-- 
Mike Andrews, W5EGO
mi...@mikea.ath.cx
Tired old sysadmin 



Re: The scale of streaming video on the Internet.

2010-12-03 Thread Neil Harris

On 02/12/10 20:21, Leo Bicknell wrote:

Comcast has around ~15 million high speed Internet subscribers (based on
year old data, I'm sure it is higher), which means at peak usage around
0.3% of all Comcast high speed users would be watching.

That's an interesting number, but let's run back the other way.
Consider what happens if folks cut the cord, and watch Internet
only TV.  I went and found some TV ratings:

http://tvbythenumbers.zap2it.com/2010/11/30/tv-ratings-broadcast-top-25-sunday-night-football-dancing-with-the-stars-finale-two-and-a-half-men-ncis-top-week-10-viewing/73784

Sunday Night Football at the top last week, with 7.1% of US homes
watching.  That's over 23 times as many folks watching as the 0.3% in
our previous math!  Ok, 23 times 150Gbps.

3.45Tb/s.

Yowzer.  That's a lot of data.  345 10GE ports for a SINGLE TV show.

But that's 7.1% of homes, so scale up to 100% of homes and you get
48Tb/sec, that's right 4830 simultaneous 10GE's if all of Comcast's
existing high speed subs dropped cable and watched the same shows over
the Internet.

I think we all know that streaming video is large.  Putting the real
numbers to it shows the real engineering challenges on both sides,
generating and sinking the content, and why companies are fighting so
much over it.
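The quoted arithmetic is easy to reproduce. A sketch using only figures from the quoted post; the exact 7.1/0.3 ratio is used rather than the rounded 23x, so the totals come out a touch higher than Leo's:

```python
# Scale Comcast's observed streaming peak (150 Gbps at ~0.3% of homes
# watching) up to a Sunday Night Football audience, then to all homes.

baseline_gbps = 150       # observed peak load
baseline_share = 0.3      # percent of homes watching at that peak
football_share = 7.1      # percent of homes (Nielsen figure in the post)

scale = football_share / baseline_share           # ~23.7x
football_gbps = baseline_gbps * scale
everyone_gbps = baseline_gbps * 100 / baseline_share

ports = lambda gbps: -(-int(gbps) // 10)          # ceiling of 10GE ports
print(f"{football_gbps / 1000:.2f} Tb/s ({ports(football_gbps)} 10GE ports)")
print(f"{everyone_gbps / 1000:.1f} Tb/s ({ports(everyone_gbps)} 10GE ports)")
```

Rounding the multiplier down to 23x gives the 3.45 Tb/s and roughly 48 Tb/s quoted above.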

   


You might be interested in the EU-funded P2P-NEXT research initiative, 
which is creating a P2P system capable of handling P2P broadcasting at 
massive scale:


http://www.p2p-next.org/

-- Neil

(full disclosure: I'm associated with one of the participants in the 
project)





Re: The scale of streaming video on the Internet.

2010-12-03 Thread Jay Ashworth
- Original Message -
 From: Paul Ferguson fergdawgs...@gmail.com

  As to the emergency broadcast system, yeah, that's going to lose.
 
  Didn't we already replace that with twitter?
 
  quake/tsunami warnings flow via email rather quickly.
 
 Old skool.
 
 Twitter is much faster:
 
 http://www.thejakartaglobe.com/home/government-disaster-advisors-twitter-hacked-used-to-send-tsunami-warning/408447

Ok, let's go here.

The problem, as a few seconds' thought would reveal, is one of *provenance*.

You could call it authentication if you wanted to, but to the *end-user*,
what the authentication *authenticates* is the provenance.  And anti-spoofing
is pretty important when the message might be "run for the hills; the
bombers is comin'!"

Well, ok, more to the point: "This is the Pinellas County Emergency Manager;
I'm declaring an official Level 3 evacuation ahead of Hurricane Guillermo."

You can put it on Twitter... but you can't *only* put it on Twitter.

Cheers,
-- jra



Re: The scale of streaming video on the Internet.

2010-12-03 Thread Ken A



On 12/3/2010 8:16 AM, Neil Harris wrote:

On 02/12/10 20:21, Leo Bicknell wrote:

snip



You might be interested in the EU-funded P2P-NEXT research initiative,
which is creating a P2P system capable of handling P2P broadcasting at
massive scale:

http://www.p2p-next.org/


Veetle uses p2p too.  Its stream isn't quite 'light speed'; it's perhaps 30 
seconds delayed.

Ken









--
Ken Anderson
Pacific Internet - http://www.pacific.net



Re: The scale of streaming video on the Internet.

2010-12-03 Thread William Herrin
On Thu, Dec 2, 2010 at 3:28 PM, Owen DeLong o...@delong.com wrote:
 On Dec 2, 2010, at 12:21 PM, Leo Bicknell wrote:
 Sunday Night Football at the top last week, with 7.1% of US homes
 watching.  That's over 23 times as many folks watching as the 0.3% in
 our previous math!  Ok, 23 times 150Gbps.

 3.45Tb/s.

 Yowzer.  That's a lot of data.  345 10GE ports for a SINGLE TV show.

 You are assuming the absence of any of the following optimizations:

 1.      Multicast
 2.      Overlay networks using P2P services (get parts of your stream
        from some of your neighbors).

Leo and Owen:

Thank you for reminding us to look at the other side of the problem.

The instant problem is that the character of eyeball-level Internet
service has shifted to include a major component of data which is more
or less broadcast in nature (some with time shifting, some without).
There's a purely technical approach that can resolve it: deeply
deployed content caches.

Multicasting presents some difficult issues even with live broadcasts
and it doesn't work at all for timeshifted delivery (someone else
starts watching the same movie 5 minutes later). As for P2P...
seriously? I know a couple companies have tinkered with the idea but
even if you could get good algorithms for identifying the least
consumptive source, it still seems like granting random strangers the
use of your computer as a condition of service would get real old real
fast.

But there's a third mechanism worth considering as well: the caching proxy.

Perhaps the eyeball networks should build, standardize and deploy a
content caching system so that the popular Netflix streams (and the
live broadcast streams) can usually get their traffic from a local
source. Deploy a cache to the neighborhood box and a bigger one to the
local backend. Then organize your peering so that it's _less
convenient_ to request large bandwidths than to write your software so
it employs the content caches.

Maybe even make that a type of open peering: we'll give all comers any
sized port they want, but address-constrained so it can only talk to
our content caches.

Technology like web proxies has some obvious deficiencies. Implemented
transparently they reduce the reliability of your web access.
Implemented by configuration, finding the best proxy is a hassle.
Either way, no real thought has been put into how to determine that a
proxy is misbehaving and bypass it in a timely manner. It just isn't
as resilient as a bare Internet connection to the remote server.

But with a content cache designed to implement a near-real-time
caching protocol from the ground up, these are all solvable problems.
Use anycast to find the nearest cache and unicast to talk to it. Use
UDP to communicate and escalate lost, delayed or corrupted packets to
a higher level cache or even the remote server. Trade auth and
decryption keys with the remote server before fetching from the local
cache. And so on.
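Client-side, the escalation logic described above might look like the following sketch. Everything here is hypothetical: the hostnames, the port, and the one-datagram request/response framing illustrate the fallback idea, not any real protocol:

```python
# Hypothetical cache-escalation fetch: try the (anycast) local cache,
# then a regional cache, then the origin server, over UDP with a
# tighter timeout for nearer tiers. All endpoints are made up.

import socket

TIERS = [("cache.local.example", 9000),      # nearest cache via anycast
         ("cache.regional.example", 9000),   # higher-level cache
         ("origin.example", 9000)]           # remote origin server
TIMEOUTS = [0.2, 0.5, 5.0]                   # seconds allowed per tier

def fetch_chunk(chunk_id: bytes) -> bytes:
    """Request a content chunk, escalating on loss, delay, or error."""
    for addr, timeout in zip(TIERS, TIMEOUTS):
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            sock.settimeout(timeout)
            try:
                sock.sendto(chunk_id, addr)
                data, _ = sock.recvfrom(65536)
                if data:                     # got an answer from this tier
                    return data
            except OSError:                  # timeout/unreachable: escalate
                continue
    raise ConnectionError("all cache tiers and the origin failed")
```

Auth and key exchange with the origin, as suggested above, would ride alongside this so a misbehaving cache can be detected as well as bypassed.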


So, build a content caching system that gives you a multiplier effect,
reducing bandwidth aggregates to a reasonable level. And then organize
your peering process so that, when technically possible, it's always more
convenient to use your caching system than to request a bigger pipe.

You'll still have to eventually address the fairness issues associated
with Network Neutrality. But having provided a reasonable technical
solution you can do it without the bugaboo of network video breathing
down your neck. And oh by the way you can deny your competitors
Netflix's business since they'll no longer need quite such huge
bandwidths after employing your technology.

Here's hoping nobody offers me a refund on my two cents...

Regards,
Bill Herrin


-- 
William D. Herrin  her...@dirtside.com  b...@herrin.us
3005 Crane Dr. .. Web: http://bill.herrin.us/
Falls Church, VA 22042-3004



Re: The scale of streaming video on the Internet.

2010-12-03 Thread Christopher Morrow
On Fri, Dec 3, 2010 at 10:47 AM, William Herrin b...@herrin.us wrote:

 If the instant problem is that the character of eyeball-level Internet
 service has shifted to include a major component of data which is more
 or less broadcast in nature (some with time shifting, some without).
 There's a purely technical approach that can resolve it: deeply
 deployed content caches.

snip
the above is essentially what Akamai (and likely other CDN products)
built/build... from what I understand (purely from the threads here)
Akamai lost out on the traffic-sales for NetFlix to L3's CDN. Comcast
(for this example) lost the localized in-network caching when that
happened.

Maybe L3 will choose to deploy some of their caches into Comcast (or
other like-minded networks) to make this all work out
better/faster/stronger for the whole set of participants?

 But there's a third mechanism worth considering as well: the caching proxy.

I think that's essentially what Akamai/LLNW are (not quite squid;
patrick will get all uppity about me calling the akamai boxes 'souped-up
squid proxies' :) it's a simple model to keep in mind though)

Apparently Google-Global-Cache is somewhat like this as well, no?
http://www.afnog.org/afnog2008/.../Google-AFNOG-presentation-public.pdf

Admittedly these are 'owner specific' solutions, but they do what you
propose at the cost of a few gig links in the provider's network (or
10g links depending on the deployment) - all local and cheap
interfaces, not long-haul, and close to the consumer of the data.

 Perhaps the eyeball networks should build, standardize and deploy a
 content caching system so that the popular Netflix streams (and the
 live broadcast streams) can usually get their traffic from a local
 source. Deploy a cache to the neighborhood box and a bigger one to the
 local backend. Then organize your peering so that it's _less
 convenient_ to request large bandwidths than to write your software so
 it employs the content caches.

This brings with it an unsaid complication, the content-owner (netflix
in this example) now depends upon some 'service' in the network
(comcast in this example) to be up/operational/provisioned-properly
for a service to the end-user (comcast customer in this example), even
though NetFlix/Comcast may have no actual relationship.

Expand this to PornTube/JustinTV/etc or something similar, how do
these content owners assure (and measure and metric and route-around
in the case of deviation from acceptable numbers?)  that the SLA their
customer expects is being respected by the intermediate network(s)?

How does this play if Comcast (in this example) ends up being just a
transit network for another downstream ISP ?

The owner-specific solutions today probably include some form of SLA
measurement/monitoring and problem avoidance, or I think they probably
do, Akamai I believe does at least. That sort of thing would have to
be open/available as well in the 'content owner neutral' solutions.

Oh, how do you deconflict situations where two content owners are
using the 'service' in Comcast, but one is abusing the service?
Should the content owners expect 'equal share'? Or how does that work?
Resources on the cache system are obviously at a premium. If Netflix
overruns their share (due to their customers demanding a wider spread
of higher-resource content - HD 1080p streams, say, with a 'less
optimal' codec in use...), how does JustinTV deal with this? Do they
then shift their streams to more direct-to-customer delivery and not
via the cache system? That increases their transit costs (potentially)
and the costs on Comcast at the peering locations?

-Chris



Re: The scale of streaming video on the Internet.

2010-12-03 Thread Leo Bicknell
In a message written on Fri, Dec 03, 2010 at 11:08:21AM -0500, Christopher 
Morrow wrote:
 the above is essentially what Akamai (and likely other CDN products)
 built/build... from what I understand (purely from the threads here)
 Akamai lost out on the traffic-sales for NetFlix to L3's CDN. Comcast
 (for this example) lost the localized in-network caching when that
 happened.

Playing devil's advocate here

I think the issue here is that the Akamai model saves the end user
providers like Comcast a boatload of money.  By putting a cluster
in Fargo to serve those local users Comcast doesn't have to build
a network to say, Chicago Equinix to get the traffic from peers.

However, the conventional wisdom is that the Akamais of the world
pay Comcast for this privilege; Comcast charges them for space,
power, and port fees in Fargo.

The irony here is that Comcast's insistence on charging Akamai customer
rates for these ports in Fargo made Akamai's price to Netflix too
high, and drove them to Level 3, who wants to drop off the traffic
in places like Equinix Chicago.  Now they get to build backbone to
those locations to support it.  In many ways I feel they are reaping
what they sowed.

I think the OP was actually thinking that /Comcast/ should run the
caching boxes in each local market, exporting the 50-100 /32 routes
to content peers at Equinix's and the like, but NOT the end user
blocks.  This becomes more symbiotic though as the content providers
then need to know how to direct the end users to the Comcast caching
boxes, so it's not so simple.

-- 
   Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/




Re: The scale of streaming video on the Internet.

2010-12-03 Thread Marshall Eubanks

On Dec 3, 2010, at 9:16 AM, Neil Harris wrote:

 On 02/12/10 20:21, Leo Bicknell wrote:
 snip
 You might be interested in the EU-funded P2P-NEXT research initiative, which 
 is creating a P2P system capable of handling P2P broadcasting at massive 
 scale:
 
 http://www.p2p-next.org/

This already exists in China. 

http://www.ietf.org/proceedings/77/slides/P2PRG-3.pdf

Regards
Marshall



 
 
 
 




Re: The scale of streaming video on the Internet.

2010-12-03 Thread William Herrin
On Fri, Dec 3, 2010 at 11:08 AM, Christopher Morrow
morrowc.li...@gmail.com wrote:
 On Fri, Dec 3, 2010 at 10:47 AM, William Herrin b...@herrin.us wrote:
 Perhaps the eyeball networks should build, standardize and deploy a
 content caching system so that the popular Netflix streams (and the
 live broadcast streams) can usually get their traffic from a local
 source. Deploy a cache to the neighborhood box and a bigger one to the
 local backend. Then organize your peering so that it's _less
 convenient_ to request large bandwidths than to write your software so
 it employs the content caches.

 This brings with it an unsaid complication, the content-owner (netflix
 in this example) now depends upon some 'service' in the network
 (comcast in this example) to be up/operational/provisioned-properly
 for a service to the end-user (comcast customer in this example), even
 though NetFlix/Comcast may have no actual relationship.

Actually, there was nothing particularly unsaid about the complication:

 Technology like web proxies has some obvious deficiencies. [...]
 Either way no real thought has been put in to how to determine that a
 proxy is misbehaving and bypass it in a timely manner. It just isn't
 as resilient as a bare Internet connection to the remote server.

 [...] these are all solvable problems.
 Use anycast to find the nearest cache and unicast to talk to it. Use
 UDP to communicate and escalate lost, delayed or corrupted packets to
 a higher level cache or even the remote server. Trade auth and
 decryption keys with the remote server before fetching from the local
 cache. And so on.

You put some basic intelligence in the app: if local cache != working,
try regional cache. If regional cache != working, go direct to main
server. There's no SLA issue... if the ISP doesn't maintain the
caching proxy (or doesn't deploy one at all), then they take the
bandwidth hit instead, with the same protocol served by a CDN or the
originating company.


 Oh, how do you deconflict situations where two content owners are
 using the 'service' in Comcast, but one is abusing the service?
 Should the content owners expect 'equal share'? or how does that work?

What conflict? If the cache isn't working to the app's standard, the
app simply requests past it, straight to Netflix's servers if need be.
The point is to produce a protocol and system for video and other
broadcast delivery that can -opportunistically- reduce the long haul
bandwidth consumption (and therefore cost) borne by the eyeball
network. What would be the point in building something that critfails
because one guy doesn't want to play and another has a minor broken
component in the middle?

Regards,
Bill Herrin



-- 
William D. Herrin  her...@dirtside.com  b...@herrin.us
3005 Crane Dr. .. Web: http://bill.herrin.us/
Falls Church, VA 22042-3004



Re: The scale of streaming video on the Internet.

2010-12-03 Thread Christopher Morrow
On Fri, Dec 3, 2010 at 11:18 AM, Leo Bicknell bickn...@ufp.org wrote:
 In a message written on Fri, Dec 03, 2010 at 11:08:21AM -0500, Christopher 
 Morrow wrote:
 the above is essentially what Akamai (and likely other CDN products)
 built/build... from what I understand (purely from the threads here)
 Akamai lost out on the traffic-sales for NetFlix to L3's CDN. Comcast
 (for this example) lost the localized in-network caching when that
 happened.

 Playing devil's advocate here

 I think the issue here is that the Akamai model saves the end user
 providers like Comcast a boatload of money.  By putting a cluster
 in Fargo to serve those local users Comcast doesn't have to build
 a network to say, Chicago Equinix to get the traffic from peers.

right.

 However, the conventional wisdom is that the Akamais of the world
 pay Comcast for this privilege; Comcast charges them for space,
 power, and port fees in Fargo.

 The irony here is that Comcast's insistence on charging Akamai customer
 rates for these ports in Fargo made Akamai's price to Netflix too
 high, and drove them to Level 3, who wants to drop off the traffic
 in places like Equinix Chicago.  Now they get to build backbone to
 those locations to support it.  In many ways I feel they are reaping
 what they sowed.

right.

 I think the OP was actually thinking that /Comcast/ should run the
 caching boxes in each local market, exporting the 50-100 /32 routes

sure... which was what I was addressing. If comcast runs these boxes,
how does flix aim their customer 'through' them? how does flix assure
their SLA with their customer is being met? how do they then avoid
(and assure the traffic is properly handled) these boxes when problems
arise?

I get that the network operator (comcast here) has the best idea of
their internal pain points and costs, I just don't see that them
running a set of boxes is going to actually happen/help. Also, do they
charge the content owners (or their customers?) for data that passes
through these boxes? how do they do cost-recovery operations for this
new infra that they must maintain?

 to content peers at Equinix's and the like, but NOT the end user
 blocks.  This becomes more symbiotic though as the content providers
 then need to know how to direct the end users to the Comcast caching
 boxes, so it's not so simple.

right, that was the point(s) I was trying to make... sadly I didn't
make them I guess.

-chris

 --
       Leo Bicknell - bickn...@ufp.org - CCIE 3440
        PGP keys at http://www.ufp.org/~bicknell/




Re: The scale of streaming video on the Internet.

2010-12-03 Thread Martin Hotze
 Date: Fri, 3 Dec 2010 10:47:44 -0500
 From: William Herrin b...@herrin.us
 Subject: Re: The scale of streaming video on the Internet.
 To: Owen DeLong o...@delong.com
 Cc: nanog@nanog.org
(...)
 But there's a third mechanism worth considering as well: the caching
 proxy.

IMHO it is a waste of bandwidth to use IP/network-based infrastructure for 
stuff like unidirectional data - like distributing a movie (on demand or 
scheduled). In this case nothing beats a satellite transponder and a dish, also 
cost-wise.

#m




Re: The scale of streaming video on the Internet.

2010-12-03 Thread Leo Bicknell
In a message written on Fri, Dec 03, 2010 at 11:39:32AM -0500, Christopher 
Morrow wrote:
 right, that was the point(s) I was trying to make... sadly I didn't
 make them I guess.

Well, I wasn't 100% sure, so best to confirm.

But it all goes to the heart of Network Neutrality.  It's easy to set up
the extreme straw men on both sides:

- If Netflix had a single data center in Seattle, it is unreasonable for
  them to expect Comcast to settlement-free peer with them and then haul
  the traffic to every local market.

- If Netflix pays Akamai (or similar) to place the content in all local 
  markets saving Comcast all of the backbone costs it is unreasonable for
  Comcast to then charge them.

The question is, what in the middle of those two is fair?  That
seems to be what the FCC is trying to figure out.  It's an extremely
hard question; I've pondered many business and technical ideas
proposed by a lot of great thinkers, and all of them have significant
problems.

At a high level, I think peering needs to evolve in two very important
ways:

- Ratio needs to be dropped from all peering policies.  It made sense
  back when the traffic was two people e-mailing each other.  It was
  a measure of equal value.  However the net has evolved.  In the
  face of streaming audio and video, or rich multimedia web sites
  Content-User will always be wildly out of ratio.  It has moved from
  a useful measure, to an excuse to make Content pay in all
  circumstances.

- Peering policies need to look closer at where traffic is being dropped
  off.  Hot potato was never a good idea; it placed the burden on the
  receiver and propped up ratio as a valid excuse.  We need more
  cold-potato routing, and more peering only with regional ASNs/routes.
  Those connecting to the eyeball networks have a responsibility to get
  the content at least into the same general areas as the end user, and
  not drop it off across the country.

If large ISPs really wanted to get the FCC off their back, they would
look at writing 21st-century peering policies, rather than trying to
keep shoehorning 20th-century ones into working with a 21st-century
traffic profile.

-- 
   Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/




Re: The scale of streaming video on the Internet.

2010-12-03 Thread Michael Painter

mikea wrote:

Faster and doesn't require infrastructure (other than possibly electrical
power). Those hams were throttled _way_ back, too, to about 21 words per
minute; I frequently hear Morse at speeds up to about 50 wpm in the ham
bands.


In '56 (I was 13 yrs old... got my General at 11), I handled traffic on PAN (Pacific Area Net) at around 30 wpm with a bug 
and a stick, the stick being a pencil.

Bug here:
http://www.youtube.com/watch?v=yHz2rEiFnfw&feature=related

--Michael (ex K6IYC) 





The scale of streaming video on the Internet.

2010-12-02 Thread Leo Bicknell

Hidden in the Comcast and Level 3 press release war are some
fascinating details about the scale of streaming video.

In http://blog.comcast.com/2010/11/comcasts-letter-to-fcc-on-level-3.html,
Comcast suggests that Level 3 demanded 27 to 30 new interconnection ports.

I have to make a few assumptions, all of which I think are quite
reasonable, but I want to lay them out:

- ports means 10 Gigabit ports.  1GEs seem too small, 100GEs seem
  too large.  I suppose there is a small chance they were thinking OC-48
  (2.5Gbps) ports, but those seem to be falling out of favor on cost.
- They were provisioning for double the anticipated traffic.  That is,
  if there was 10G of traffic total they would ask for 20G of ports.
  This both provides room for growth, and the fact that you can't
  perfectly balance traffic over that many ports.
- That substantially all of that new traffic was for Netflix, or more
  accurately streaming video from their CDN.

Thus in round numbers they were asking for 300Gbps of additional
capacity across the US, to move around 150Gbps of actual traffic.

But how many video streams is 150Gbps?  Google found me this article:
http://blog.streamingmedia.com/the_business_of_online_vi/2009/03/estimates-on-what-it-costs-netflixs-to-stream-movies.html

It suggests that low-def is 2000Kbps, and high def is 3200Kbps.  If
we do the math, that suggests the 150Gbps could support 75,000 low
def streams, or 46,875 high def streams.  Let me round to 50,000 users,
for some mix of streams.

Comcast has around 15 million high-speed Internet subscribers (based on
year-old data; I'm sure it is higher), which means at peak usage around
0.3% of all Comcast high-speed users would be watching.

That's an interesting number, but let's run back the other way.
Consider what happens if folks cut the cord, and watch Internet
only TV.  I went and found some TV ratings:

http://tvbythenumbers.zap2it.com/2010/11/30/tv-ratings-broadcast-top-25-sunday-night-football-dancing-with-the-stars-finale-two-and-a-half-men-ncis-top-week-10-viewing/73784

Sunday Night Football at the top last week, with 7.1% of US homes
watching.  That's over 23 times as many folks watching as the 0.3% in
our previous math!  Ok, 23 times 150Gbps.

3.45Tb/s.

Yowzer.  That's a lot of data.  345 10GE ports for a SINGLE TV show.

But that's 7.1% of homes, so scale up to 100% of homes and you get
48Tb/sec; that's right, 4830 simultaneous 10GEs if all of Comcast's
existing high-speed subs dropped cable and watched the same shows over
the Internet.
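Leo's back-of-envelope math is easy to check in a few lines. This is a sketch only; the bitrates, subscriber count, and ratings share are the estimates quoted in the post, not measured values:

```python
GBPS = 1e9

# Per-stream bitrates and delivered capacity, as estimated in the post.
low_def_bps = 2_000_000       # 2000 Kbps SD stream
high_def_bps = 3_200_000      # 3200 Kbps HD stream
traffic_bps = 150 * GBPS      # actual traffic behind the 300G of ports

low_streams = traffic_bps / low_def_bps     # 75,000 SD viewers
high_streams = traffic_bps / high_def_bps   # 46,875 HD viewers

subscribers = 15_000_000
viewing_share = 50_000 / subscribers        # ~0.33% of subs watching

# Scale to Sunday Night Football's 7.1% of homes (over 23x the 0.3%).
scale = 0.071 / 0.003                       # ~23.7
one_show_tbps = 23 * 150 / 1000             # 3.45 Tb/s for a single show
all_homes_tbps = one_show_tbps / 0.071      # ~48.6 Tb/s at 100% of homes
ports_10ge = all_homes_tbps * 1000 / 10     # ~4860 ports (post rounds to 4830)

print(low_streams, high_streams, one_show_tbps, round(all_homes_tbps, 1))
```

The exact port count depends on how aggressively you round along the way, but the order of magnitude is the point.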

I think we all know that streaming video is large.  Putting real
numbers to it shows the real engineering challenges on both sides,
generating and sinking the content, and why companies are fighting so
much over it.

-- 
   Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/




Re: The scale of streaming video on the Internet.

2010-12-02 Thread Owen DeLong
You are assuming the absence of any of the following optimizations:

1.  Multicast
2.  Overlay networks using P2P services (get parts of your stream
from some of your neighbors).

These are not entirely safe assumptions.

Owen

On Dec 2, 2010, at 12:21 PM, Leo Bicknell wrote:

 
 Hidden in the Comcast and Level 3 press release war are some
 fascinating details about the scale of streaming video.
 
 In http://blog.comcast.com/2010/11/comcasts-letter-to-fcc-on-level-3.html,
 Comcast suggests that Level 3 demanded 27 to 30 new interconnection ports.
 
 I have to make a few assumptions, all of which I think are quite
 reasonable, but I want to lay them out:
 
 - ports means 10 Gigabit ports.  1GE's seems too small, 100GE's seems
  too large.  I suppose there is a small chance they were thinking OC-48
  (2.5Gbps) ports, but those seem to be falling out of favor for cost.
 - They were provisioning for double the anticipated traffic.  That is,
  if there was 10G of traffic total they would ask for 20G of ports.
  This both provides room for growth, and the fact that you can't
  perfectly balance traffic over that many ports.
 - That substantially all of that new traffic was for Netflix, or more
  accurately streaming video from their CDN.
 
 Thus in round numbers they were asking for 300Gbps of additional
 capacity across the US, to move around 150Gbps of actual traffic.
 
 But how many video streams is 150Gbps?  Google found me this article:
 http://blog.streamingmedia.com/the_business_of_online_vi/2009/03/estimates-on-what-it-costs-netflixs-to-stream-movies.html
 
 It suggests that low-def is 2000Kbps, and high def is 3200Kbps.  If
 we do the math, that suggests the 150Gbps could support 75,000 low
 def streams, or 46,875 high def streams.  Let me round to 50,000 users,
 for some mix of streams.
 
 Comcast has around ~15 million high speed Internet subscribers (based on
 year old data, I'm sure it is higher), which means at peak usage around
 0.3% of all Comcast high speed users would be watching.
 
 That's an interesting number, but let's run back the other way.
 Consider what happens if folks cut the cord, and watch Internet
 only TV.  I went and found some TV ratings:
 
 http://tvbythenumbers.zap2it.com/2010/11/30/tv-ratings-broadcast-top-25-sunday-night-football-dancing-with-the-stars-finale-two-and-a-half-men-ncis-top-week-10-viewing/73784
 
 Sunday Night Football at the top last week, with 7.1% of US homes
 watching.  That's over 23 times as many folks watching as the 0.3% in
 our previous math!  Ok, 23 times 150Gbps.
 
 3.45Tb/s.
 
 Yowzer.  That's a lot of data.  345 10GE ports for a SINGLE TV show.
 
 But that's 7.1% of homes, so scale up to 100% of homes and you get
 48Tb/sec, that's right 4830 simultaneous 10GE's if all of Comcast's
 existing high speed subs dropped cable and watched the same shows over
 the Internet.
 
 I think we all know that streaming video is large.  Putting the real
 numbers to it shows the real engineering challenges on both sides,
 generating and sinking the content, and why companies are fighting so
 much over it.
 
 -- 
   Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/




Re: The scale of streaming video on the Internet.

2010-12-02 Thread Scott Helms



Sunday Night Football at the top last week, with 7.1% of US homes
watching.  That's over 23 times as many folks watching as the 0.3% in
our previous math!  Ok, 23 times 150Gbps.

3.45Tb/s.

Yowzer.  That's a lot of data.  345 10GE ports for a SINGLE TV show.

But that's 7.1% of homes, so scale up to 100% of homes and you get
48Tb/sec, that's right 4830 simultaneous 10GE's if all of Comcast's
existing high speed subs dropped cable and watched the same shows over
the Internet.

I think we all know that streaming video is large.  Putting the real
numbers to it shows the real engineering challenges on both sides,
generating and sinking the content, and why companies are fighting so
much over it.

Anything that is live and likely to be watched by lots of people at the 
same time, like sports, can be handled via multicast.  The IPTV guys have had 
a number of years to get that working fairly well in telco environments.  
The content that can't be handled with multicast, like on-demand 
programming, is where you lose your economy of scale.


--
Scott Helms
Vice President of Technology
ISP Alliance, Inc. DBA ZCorum
(678) 507-5000

Looking for hand-selected news, views and
tips for independent broadband providers?

Follow us on Twitter! http://twitter.com/ZCorum





Re: The scale of streaming video on the Internet.

2010-12-02 Thread Seth Mattinen
On 12/2/10 12:28 PM, Owen DeLong wrote:
 You are assuming the absence of any of the following optimizations:
 
 1.Multicast

Multicast is great for simulating old school broadcasting, but I don't
see how it can apply to Netflix/Amazon style demand streaming where
everyone can potentially watch a different stream at different points in
time with different bitrates.

~Seth



Re: The scale of streaming video on the Internet.

2010-12-02 Thread Jay Ashworth
- Original Message -
 From: Leo Bicknell bickn...@ufp.org
[...] 
 That's an interesting number, but let's run back the other way.
 Consider what happens if folks cut the cord, and watch Internet
 only TV. I went and found some TV ratings:
 
 http://tvbythenumbers.zap2it.com/2010/11/30/tv-ratings-broadcast-top-25-sunday-night-football-dancing-with-the-stars-finale-two-and-a-half-men-ncis-top-week-10-viewing/73784
 
 Sunday Night Football at the top last week, with 7.1% of US homes
 watching. That's over 23 times as many folks watching as the 0.3% in
 our previous math! Ok, 23 times 150Gbps.
 
 3.45Tb/s.
 
 Yowzer. That's a lot of data. 345 10GE ports for a SINGLE TV show.
 
 But that's 7.1% of homes, so scale up to 100% of homes and you get
 48Tb/sec, that's right 4830 simultaneous 10GE's if all of Comcast's
 existing high speed subs dropped cable and watched the same shows over
 the Internet.
 
 I think we all know that streaming video is large. Putting the real
 numbers to it shows the real engineering challenges on both sides,
 generating and sinking the content, and why companies are fighting so
 much over it.

It also proves, though I doubt anyone important is listening, *why the
network broadcast architecture is shaped the way it is*, and it implies,
*to* anyone important who is listening, just how bad a fit that is for
a point- or even multi-point server-to-viewers environment.

Oh: and all the extra servers and switches necessary to set that up?

*Way* more power than the equivalent transmitters and TV sets.  Even if 
you add in the cable headends, I suspect.

In other news: viewers will tolerate Buffering... to watch last night's
daily show.  They will *not* tolerate it while they're waiting to see if
the winning hit in Game 7 is fair or foul -- which means that it will 
not be possible to replace that architecture until you can do it at 
technical parity... and that's not to mention the emergency communications
uses of real broadcasting, which will become untenable if enough 
critical mass is drained off of said real broadcasting by other 
services which are only Good Enough.

The Law of Unexpected Consequences is a *bitch*.  Just ask the NCS people;
I'm sure they have some interesting 40,000ft stories to tell about the
changes in the telco networks since 1983.

Cheers,
-- jra



RE: The scale of streaming video on the Internet.

2010-12-02 Thread Alex Rubenstein
 *Way* more power than the equivalent transmitters and TV sets.  Even if
 you add in the cable headends, I suspect.

Yeah, but...

This is really not comparable.

Transmitters and TV sets require that everyone watch what is being transmitted. 
People (myself included) don't like, or don't want this method anymore. I want 
to watch what I want, when I want to.

This is the new age of media. Out with the old.





Re: The scale of streaming video on the Internet.

2010-12-02 Thread Owen DeLong

On Dec 2, 2010, at 12:48 PM, Jay Ashworth wrote:

 - Original Message -
 From: Leo Bicknell bickn...@ufp.org
 [...] 
 That's an interesting number, but let's run back the other way.
 Consider what happens if folks cut the cord, and watch Internet
 only TV. I went and found some TV ratings:
 
 http://tvbythenumbers.zap2it.com/2010/11/30/tv-ratings-broadcast-top-25-sunday-night-football-dancing-with-the-stars-finale-two-and-a-half-men-ncis-top-week-10-viewing/73784
 
 Sunday Night Football at the top last week, with 7.1% of US homes
 watching. That's over 23 times as many folks watching as the 0.3% in
 our previous math! Ok, 23 times 150Gbps.
 
 3.45Tb/s.
 
 Yowzer. That's a lot of data. 345 10GE ports for a SINGLE TV show.
 
 But that's 7.1% of homes, so scale up to 100% of homes and you get
 48Tb/sec, that's right 4830 simultaneous 10GE's if all of Comcast's
 existing high speed subs dropped cable and watched the same shows over
 the Internet.
 
 I think we all know that streaming video is large. Putting the real
 numbers to it shows the real engineering challenges on both sides,
 generating and sinking the content, and why companies are fighting so
 much over it.
 
 It also proves, though I doubt anyone important is listening, *why the
 network broadcast architecture is shaped the way it is*, and it implies,
 *to* anyone important who is listening, just how bad a fit that is for
 a point- or even multi-point server to viewers environment.
 
Yes and no... The existing system  is a multi-point (transmission towers)
to viewers (multicast) environment. No reason that isn't feasible on the
internet as well.

 Oh: and all the extra servers and switches necessary to set that up?
 
For equivalent service (linear programming), no need. For VOD, turns
out to be basically identical anyway.

 *Way* more power than the equivalent transmitters and TV sets.  Even if 
 you add in the cable headends, I suspect.
 
Not if you allow for multicast.

 In other news: viewers will tolerate Buffering... to watch last night's
 daily show.  They will *not* tolerate it while they're waiting to see if
 the winning hit in Game 7 is fair or foul -- which means that it will 
 not be possible to replace that architecture until you can do it at 
 technical parity... and that's not to mention the emergency communications
 uses of real broadcasting, which will become untenable if enough 
 critical mass is drained off of said real broadcasting by other 
 services which are only Good Enough.
 
Viewers already tolerate a fair amount of buffering for exactly that.
The bleepability delay and other technical requirements, the bouncing
of things off satellites, etc. all create delays in the current system.

If you keep the delay under 5s, most viewers won't actually know
the difference.

As to the emergency broadcast system, yeah, that's going to lose.

However, the reality is that things are changing and people are tending
to move towards wanting VOD based services more than linear programming.

Owen




Re: The scale of streaming video on the Internet.

2010-12-02 Thread Jeff Wheeler
On Thu, Dec 2, 2010 at 3:38 PM, Seth Mattinen se...@rollernet.us wrote:
 On 12/2/10 12:28 PM, Owen DeLong wrote:
 You are assuming the absence of any of the following optimizations:

 1.    Multicast

 Multicast is great for simulating old school broadcasting, but I don't
 see how it can apply to Netflix/Amazon style demand streaming where
I do.  Let's assume that there is a multicast future where it's being
legitimately used for live television, and whatever else.

The same mcast infrastructure will be utilized by Amazon.com to stream
popular titles (can you say New Releases) onto users' devices.  You
may be unicast for the first few minutes of the movie (if you really
want to start watching immediately) and change over to a
multicast-distributed stream once you have caught up to an
in-progress stream.

If Netflix had licensing agreements which made it possible for their
users to store movies on their local device, this would work even
better for Netflix, because of the queue and watch later nature of
their site and users.  I have a couple dozen movies in my instant
queue.  It may be weeks before I watch them all.  The most popular
movies can be multicast, and my DVR can listen to the stream when it
comes on, store it, and wait for me to watch it.
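The unicast cost of that catch-up window is easy to bound. A sketch, reusing the thread's earlier 3200 Kbps HD estimate; the switchover mechanics themselves are hypothetical:

```python
def unicast_bytes_to_catch_up(stream_pos_s: float, bitrate_bps: float) -> float:
    """A viewer starting a title from 0:00 plays a unicast feed while the
    receiver records the in-progress multicast stream from stream_pos_s
    onward.  Playback switches to the recorded multicast data once it
    reaches stream_pos_s, so only that much content ever travels unicast."""
    return stream_pos_s * bitrate_bps / 8

# Joining 10 minutes behind a 3200 Kbps HD multicast stream:
cost = unicast_bytes_to_catch_up(600, 3_200_000)
print(f"{cost / 1e6:.0f} MB unicast before switchover")  # 240 MB
```

Everything after the catch-up point rides the shared stream, which is where the bandwidth savings on popular titles come from.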

I am sure Amazon and Netflix have both thought of this already (if
not, they need to hire new people who still remember how pay-per-view
worked on C-band satellite) and are hoping multicast will one day come
along and massively reduce their bandwidth consumption on the most
popular titles.  I am also certain the cable companies have thought of
it, and added it to the long list of reasons they will never offer
Internet multicast, or at least, not until a competitor pops up and
does it in such a way that customers understand it's a feature they
aren't getting.

-- 
Jeff S Wheeler j...@inconcepts.biz
Sr Network Operator  /  Innovative Network Concepts



Re: The scale of streaming video on the Internet.

2010-12-02 Thread Ken Chase
On Thu, Dec 02, 2010 at 03:57:29PM -0500, Alex Rubenstein said:
  Transmitters and TV sets require that everyone watch what is being 
transmitted. People (myself included) don't like, or don't want this method 
anymore. I want to watch what I want, when I want to.
  
  This is the new age of media. Out with the old.

want? You going to pay for it? then go ahead!

So what's the cost, then, if people paid for their bandwidth instead of freeloading
off the asymmetric usage patterns? ie when that 0.3% becomes 80%. Has anyone analysed
this yet? I think the cost metrics will indicate that any network with
video is going to have to set up its own distribution and caching POP mesh
(ie a CDN!) to do it anywhere near economically. 

Additionally, while you may think you want to watch what you want to watch and
that's it, it seems likely there'll be a limited amount of material available,
or the caching metrics go out the window, ie if everyone is watching something
different at any one time.

/kc
-- 
Ken Chase - k...@heavycomputing.ca - +1 416 897 6284 - Toronto CANADA
Heavy Computing - Clued bandwidth, colocation and managed linux VPS @151 Front 
St. W.



Re: The scale of streaming video on the Internet.

2010-12-02 Thread Jack Bates

On 12/2/2010 2:38 PM, Seth Mattinen wrote:

On 12/2/10 12:28 PM, Owen DeLong wrote:

You are assuming the absence of any of the following optimizations:

1.  Multicast


Multicast is great for simulating old school broadcasting, but I don't
see how it can apply to Netflix/Amazon style demand streaming where
everyone can potentially watch a different stream at different points in
time with different bitrates.


This isn't a take-it-or-leave-it deal. To start out and branch out, most 
streaming is VOD, which even within a cable network eats up huge amounts 
of bandwidth. In the end, it's expected that there will be a mix of 
multicast and VOD.


Watch the game live via multicast. Missed the game? Watch it on demand. As 
things progress, we'll probably see more edge content delivery systems 
(like Akamai) to cache/store huge amounts of video for the local 
populace. It won't be every movie, but it will be the ones which have a 
high repeat rate, easing traffic off critical infrastructure, saving 
everyone money, and making everyone happy.


What would be really awesome (unless I've missed it) is Internet access 
to the emergency broadcast system and local weather services; all easily 
handled with multicast.



Jack



Re: The scale of streaming video on the Internet.

2010-12-02 Thread david raistrick

On Thu, 2 Dec 2010, Jack Bates wrote:

Watch the game live multicast. Missed the game? Watch it on demand. As things 
progress, we'll probably see more edge content delivery systems (like Akamai)


Have you ever actually been involved with really large scale multicast 
implementations?   I take it that's a no.


The -only- way that would work internet wide, and it defeats the purpose, 
is if your client side created a tunnel back to your multicast source 
network.  Which would mean you're carrying your multicast data over 
anycast.


If you, the multicast broadcaster, don't have extensive control of the 
-entire- end to end IP network, it will be significantly broken 
significant amounts of the time.



...david (former member of a team of engineers who built and maintained a 
220,000 seat multicast video network)



--
david raistrick      http://www.netmeister.org/news/learn2quote.html
dr...@icantclick.org http://www.expita.com/nomime.html




Re: The scale of streaming video on the Internet.

2010-12-02 Thread Jack Bates

On 12/2/2010 3:23 PM, david raistrick wrote:

Have you ever actually been involved with really large scale multicast
implementations? I take it that's a no.



Nope. I prefer small scale. :)


The -only- way that would work internet wide, and it defeats the
purpose, is if your client side created a tunnel back to your multicast
source network. Which would mean you're carrying your multicast data
over anycast.



So we don't use multicast-with-fallback-to-unicast deployments on the 
Internet today for various events/streams?



If you, the multicast broadcaster, dont have extensive control of the
-entire- end to end IP network, it will be significantly broken
significant amounts of the time.


Clients can't fall back to unicast when multicast isn't functional? I'd 
expect multicast to save some bandwidth, not all of it.




...david (former member of a team of engineers who built and maintained
a 220,000 seat multicast video network)


Cool. I did a 3 seat multicast video network, and honestly am largely 
ignorant of multicast over the Internet (on my list!) but do listen to 
people discuss it. :P



Jack



Re: The scale of streaming video on the Internet.

2010-12-02 Thread Antonio Querubin

On Thu, 2 Dec 2010, Jay Ashworth wrote:


Oh: and all the extra servers and switches necessary to set that up?



*Way* more power than the equivalent transmitters and TV sets.  Even if
you add in the cable headends, I suspect.


Have you heard of multicast? :)

Antonio Querubin
808-545-5282 x3003
e-mail/xmpp:  t...@lava.net



Re: The scale of streaming video on the Internet.

2010-12-02 Thread Jay Ashworth
- Original Message -
 From: Antonio Querubin t...@lava.net

 On Thu, 2 Dec 2010, Jay Ashworth wrote:
  Oh: and all the extra servers and switches necessary to set that up?
 
  *Way* more power than the equivalent transmitters and TV sets. Even
  if you add in the cable headends, I suspect.
 
 Have you heard of multicast? :)

Yes, Tony, but they can't *count the connected users that way*, you see.

For my part, as someone who used to run a small edge network, what I wonder 
is this: is there a multicast repeater daemon of some sort, where I can put
it on my edge, and have it catch any source requested by an inside user and
re-multicast it to my LAN, so that my uplink isn't loaded by multiple 
connections?

Or do I need to take the Multicast class again? :-)

Cheers,
-- jra



Re: The scale of streaming video on the Internet.

2010-12-02 Thread Jack Bates

On 12/2/2010 4:08 PM, Jay Ashworth wrote:

Yes, Tony, but they can't *count the connected users that way*, you see.



Actually, given content protection, I highly expect any device receiving 
multicast video to also have a session open to handle various things, 
possibly even getting keys for decrypting streams. I doubt they want 
anyone hijacking a video stream. I also expect to see video shifting to 
region specific commercials. After all, why charge just one person for a 
commercial timeslot, when you can charge hundreds or thousands, each for 
their own local audience; more if they want national.



For my part, as someone who used to run a small edge network, what I wonder
is this: is there a multicast repeater daemon of some sort, where I can put
it on my edge, and have it catch any source requested by an inside user and
re-multicast it to my LAN, so that my uplink isn't loaded by multiple
connections?


If it's actual multicast, it should be there already. I've seen a few 
interesting daemons for taking unicast and splitting it out, though. 
Buddy had a little perl script setup with replay-tv which allowed a 
master connection who could control the replay-tv, and then all other 
connections were view only. Was simple and cute.



Or do I need to take the Multicast class again? :-)


I sure as hell need to read up again. I keep getting sidetracked with 
other things. Perhaps after I wrap up the IPv6 rollout, I can get back 
to Multicast support. I believe most of my NSPs support it, I just never 
have time to iron out the details to a level I'm comfortable enough to 
risk my production routers.



Jack



Re: The scale of streaming video on the Internet.

2010-12-02 Thread Antonio Querubin

On Thu, 2 Dec 2010, Jay Ashworth wrote:


Yes, Tony, but they can't *count the connected users that way*, you see.


There are various ways to do that.  Eg. Windows Media Server can log
multicast Windows Media Clients.


For my part, as someone who used to run a small edge network, what I wonder
is this: is there a multicast repeater daemon of some sort, where I can put
it on my edge, and have it catch any source requested by an inside user and
re-multicast it to my LAN, so that my uplink isn't loaded by multiple
connections?


You might want to take a look at AMT:

http://tools.ietf.org/html/draft-ietf-mboned-auto-multicast-10

Antonio Querubin
808-545-5282 x3003
e-mail/xmpp:  t...@lava.net



Re: The scale of streaming video on the Internet.

2010-12-02 Thread Antonio Querubin

On Thu, 2 Dec 2010, Jack Bates wrote:

I sure as hell need to read up again. I keep getting sidetracked with other 
things. Perhaps after I wrap up the IPv6 rollout, I can get back to Multicast 
support. I believe most of my NSPs support it, I just never have time to iron 
out the details to a level I'm comfortable enough to risk my production 
routers.


With the pending large scale IPv6 deployment across the Internet, service 
providers have a unique opportunity to deploy IPv6 multicast alongside 
IPv6 unicast instead of trying to shim it in afterwards.  The various IPv6 
wikis could use a good sprinkling of multicast howtos.



Antonio Querubin
808-545-5282 x3003
e-mail/xmpp:  t...@lava.net



Re: The scale of streaming video on the Internet.

2010-12-02 Thread Owen DeLong

On Dec 2, 2010, at 2:08 PM, Jay Ashworth wrote:

 - Original Message -
 From: Antonio Querubin t...@lava.net
 
 On Thu, 2 Dec 2010, Jay Ashworth wrote:
 Oh: and all the extra servers and switches necessary to set that up?
 
 *Way* more power than the equivalent transmitters and TV sets. Even
 if you add in the cable headends, I suspect.
 
 Have you heard of multicast? :)
 
 Yes, Tony, but they can't *count the connected users that way*, you see.
 
Sure you can.

 For my part, as someone who used to run a small edge network, what I wonder 
 is this: is there a multicast repeater daemon of some sort, where I can put
 it on my edge, and have it catch any source requested by an inside user and
 re-multicast it to my LAN, so that my uplink isn't loaded by multiple 
 connections?
 

Sounds like you are describing a rendezvous point, but, perhaps I am
misunderstanding your intent.

Owen




Re: The scale of streaming video on the Internet.

2010-12-02 Thread Owen DeLong

On Dec 2, 2010, at 2:01 PM, david raistrick wrote:

 On Thu, 2 Dec 2010, Antonio Querubin wrote:
 
 -entire- end to end IP network, it will be significantly broken significant 
 amounts of the time.
 
 Which points to the need for service providers to deploy robust multicast 
 routing.
 
 No doubt - it also points to multicast itself needing a bit more sanity and 
 flexibility of implementation.   When you have to tune -every- L3 device 
 along the path for each stream, well...
 
It's not quite that bad. I've done multiple multicast implementations where 
this was utterly unnecessary, but it does take
some configuration on most L3 devices to make it work reasonably well.

 
 As Owen pointed out, perhaps carriers will eventually be motivated to make 
 this happen in order to reduce their own bandwidth costs.  Eventually.
 
 In the meantime, speaking with my content hat on, we stick with unicast. :)
 
Wrong answer, IMHO. Where it makes sense, use multicast with a fast fallback to 
unicast if multicast isn't working.
In this way, it helps build the case that deploying multicast will save $$$. 
Without it, the mantra will be "Multicast
doesn't matter; even if we implement it, none of the content will use it."
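A minimal client-side sketch of that fallback might look like the following. The group address, port, and probe timeout are invented for illustration; a real player would probe for actual stream data over a longer window, retry joins, and handle AMT-style relays:

```python
import socket
import struct

MCAST_GRP = "233.252.0.1"   # hypothetical stream group (MCAST-TEST-NET range)
MCAST_PORT = 5004           # hypothetical RTP port
PROBE_TIMEOUT = 1.0         # seconds to wait for multicast traffic

def choose_transport():
    """Join the group and wait briefly for traffic; if the join fails or
    nothing arrives, tell the caller to open a unicast session instead."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        sock.bind(("", MCAST_PORT))
        mreq = struct.pack("4sl", socket.inet_aton(MCAST_GRP), socket.INADDR_ANY)
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
        sock.settimeout(PROBE_TIMEOUT)
        sock.recvfrom(65536)              # any datagram proves the group works
        return "multicast", sock          # traffic seen: stay on the group
    except OSError:                       # join failed, or probe timed out
        sock.close()
        return "unicast", None            # caller opens a unicast session

transport, stream_sock = choose_transport()
print(transport)
```

The key property is that the client, not the network, decides; content providers get multicast savings wherever it works and lose nothing where it doesn't.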

Owen




RE: The scale of streaming video on the Internet.

2010-12-02 Thread Ryan Finnesey
I have TWC in NYC.  I see now I can restart most of the shows I watch.  How is 
this done?

Cheers
Ryan


-Original Message-
From: Alex Rubenstein [mailto:a...@corp.nac.net] 
Sent: Thursday, December 02, 2010 3:57 PM
To: Jay Ashworth; NANOG
Subject: RE: The scale of streaming video on the Internet.

 *Way* more power than the equivalent transmitters and TV sets.  Even 
 if you add in the cable headends, I suspect.

Yeah, but...

This is really not comparable.

Transmitters and TV sets require that everyone watch what is being transmitted. 
People (myself included) don't like, or don't want this method anymore. I want 
to watch what I want, when I want to.

This is the new age of media. Out with the old.





Re: The scale of streaming video on the Internet.

2010-12-02 Thread Matthew Petach
On Thu, Dec 2, 2010 at 1:02 PM, Owen DeLong o...@delong.com wrote:
...
 As to the emergency broadcast system, yeah, that's going to lose.

Didn't we already replace that with twitter?

Matt



Re: The scale of streaming video on the Internet.

2010-12-02 Thread Marshall Eubanks

On Dec 2, 2010, at 5:31 PM, Antonio Querubin wrote:

 On Thu, 2 Dec 2010, Jay Ashworth wrote:
 
 Yes, Tony, but they can't *count the connected users that way*, you see.
 
 There are various ways to do that.  Eg. Windows Media Server can log
 multicast Windows Media Clients.
 
 For my part, as someone who used to run a small edge network, what I wonder
 is this: is there a multicast repeater daemon of some sort, where I can put
 it on my edge, and have it catch any source requested by an inside user and
 re-multicast it to my LAN, so that my uplink isn't loaded by multiple
 connections?
 
 You might want to take a look at AMT:
 
 http://tools.ietf.org/html/draft-ietf-mboned-auto-multicast-10

Correct. That is exactly the problem AMT is intended to solve.

Regards
Marshall


 
 Antonio Querubin
 808-545-5282 x3003
 e-mail/xmpp:  t...@lava.net
 
 




Re: The scale of streaming video on the Internet.

2010-12-02 Thread Marshall Eubanks

On Dec 2, 2010, at 5:41 PM, Antonio Querubin wrote:

 On Thu, 2 Dec 2010, Jack Bates wrote:
 
 I sure as hell need to read up again. I keep getting sidetracked with other 
 things. Perhaps after I wrap up the IPv6 rollout, I can get back to 
 Multicast support. I believe most of my NSPs support it, I just never have 
 time to iron out the details to a level I'm comfortable enough to risk my 
 production routers.
 
 With the pending large scale IPv6 deployment across the Internet, service 
 providers have a unique opportunity to deploy IPv6 multicast alongside IPv6 
 unicast instead of trying to shim it in afterwards.

Note that IPv6 multicast doesn't really solve multicast problems, except that 
embedded RPs may make ASM easier to deploy. 
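
To make the embedded-RP remark concrete: RFC 3956 packs the RP's own address into the group address, so a router can recover the RP with no static configuration. A minimal sketch of the decoding (the example group below is made up):

```python
import ipaddress

def embedded_rp(group: str) -> ipaddress.IPv6Address:
    """Recover the RP address embedded in an FF70::/12 multicast group
    (RFC 3956): FF | flgs | scop | rsvd | RIID | plen | 64-bit prefix | group ID."""
    b = ipaddress.IPv6Address(group).packed
    if b[0] != 0xFF or (b[1] >> 4) != 0x7:  # flags must be 0111 (R, P, T set)
        raise ValueError("not an embedded-RP multicast address")
    riid = b[2] & 0x0F          # RP interface ID (low nibble of byte 2)
    plen = b[3]                 # embedded prefix length, in bits
    if not 0 < plen <= 64:
        raise ValueError("invalid embedded prefix length")
    prefix = int.from_bytes(b[4:12], "big")
    prefix = (prefix >> (64 - plen)) << (64 - plen)  # keep only plen bits
    # RP address = network prefix with RIID as the low-order nibble
    return ipaddress.IPv6Address((prefix << 64) | riid)
```

For example, the (hypothetical) group ff7e:140:2001:db8:beef::1234 embeds the RP 2001:db8:beef::1.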

 The various IPv6 wikis could use a good sprinkling of multicast howtos.
 

True. Want to help with that?

Regards
Marshall



 
 Antonio Querubin
 808-545-5282 x3003
 e-mail/xmpp:  t...@lava.net
 
 




Re: The scale of streaming video on the Internet.

2010-12-02 Thread Joel Jaeggli
On 12/2/10 4:56 PM, Matthew Petach wrote:
 On Thu, Dec 2, 2010 at 1:02 PM, Owen DeLong o...@delong.com wrote:
 ...
 As to the emergency broadcast system, yeah, that's going to lose.
 
 Didn't we already replace that with twitter?

quake/tsunami warnings flow via email rather quickly.

 Matt
 




Re: The scale of streaming video on the Internet.

2010-12-02 Thread Paul Ferguson
On Thu, Dec 2, 2010 at 7:53 PM, Joel Jaeggli joe...@bogus.com wrote:

 On 12/2/10 4:56 PM, Matthew Petach wrote:
 On Thu, Dec 2, 2010 at 1:02 PM, Owen DeLong o...@delong.com wrote:
 ...
 As to the emergency broadcast system, yeah, that's going to lose.

 Didn't we already replace that with twitter?

 quake/tsunami warnings flow via email rather quickly.


Old skool.

Twitter is much faster:

http://www.thejakartaglobe.com/home/government-disaster-advisors-twitter-hacked-used-to-send-tsunami-warning/408447

Sorry for the PGP line-wrap foo.

- - ferg




-- 
Fergie, a.k.a. Paul Ferguson
 Engineering Architecture for the Internet
 fergdawgster(at)gmail.com
 ferg's tech blog: http://fergdawg.blogspot.com/



Re: The scale of streaming video on the Internet.

2010-12-02 Thread Antonio Querubin

On Thu, 2 Dec 2010, Paul Ferguson wrote:


Old skool.

Twitter is much faster:

http://www.thejakartaglobe.com/home/government-disaster-advisors-twitter-hacked-used-to-send-tsunami-warning/408447


But morse code is still faster :)

http://www.google.com/search?q=morse+code+beats+texting&ie=utf-8&oe=utf-8&aq=t&rls=org.mozilla:en-US:official&client=firefox-a


Antonio Querubin
808-545-5282 x3003
e-mail/xmpp:  t...@lava.net



Re: The scale of streaming video on the Internet.

2010-12-02 Thread Antonio Querubin

On Thu, 2 Dec 2010, Marshall Eubanks wrote:


On Dec 2, 2010, at 5:41 PM, Antonio Querubin wrote:



The various IPv6 wikis could use a good sprinkling of multicast howtos.



True. Want to help with that?


Working on it...

Antonio Querubin
808-545-5282 x3003
e-mail/xmpp:  t...@lava.net