Re: Network end users to pull down 2 gigabytes a day, continuously?

2007-01-07 Thread Michael . Dillon

 2. The question I don't understand is, why stream? In these days, when
 a terabyte disk for consumer PCs is about to be introduced, why bother
 with streaming? It is so much simpler to download (at faster than
 real-time rates, if possible), and play it back.

Very good question. The fact is that people have
been doing Internet TV without streaming for years
now. That's why P2P networks use so much bandwidth.
I've used it myself to download Russian TV shows
that are not otherwise available here in England.
Of course the P2P folks aren't just dumping raw DVB
MPEG-2 streams onto the network. They are recompressing
them using more advanced codecs so that they do not
consume unreasonable amounts of bandwidth.

Don't focus on the Venice project. They are just one
of many groups trying to figure out how to make TV
work on the Internet. Consumer ISPs need to do a better
job of communicating to their customers the existence
of GB/month bandwidth caps, the reason for the caps,
how video over IP creates problems, and how to avoid
those problems by using Video services which support
high-compression codecs. If it is DVB, MPEG-2 or MPEG-1
then it is BAD. Stay away.

Look for DIVX, MP4 etc.

Note that video caching systems like P2P networks can
potentially serve video to extremely large numbers of
users while consuming reasonably low levels of upstream
bandwidth. The key is in the caching. One copy of BBC's
Jan 8th evening news is downloaded to your local P2P
network consuming upstream bandwidth. Then local users
use local bandwidth to get copies of that broadcast over
the next few days. 
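
A back-of-the-envelope sketch (Python, with made-up numbers for the
show size and the local audience) shows the scale of the upstream
saving:

show_size_mb = 350        # one recompressed evening-news episode (assumed)
local_viewers = 500       # viewers behind the same local P2P "island" (assumed)

naive_upstream_gb = show_size_mb * local_viewers / 1024.0  # everyone fetches remotely
cached_upstream_gb = show_size_mb / 1024.0                 # one copy seeds the island

print(f"every viewer fetches remotely: {naive_upstream_gb:.1f} GB upstream")
print(f"one copy plus local redistribution: {cached_upstream_gb:.2f} GB upstream")
print(f"upstream saving: {100 * (1 - cached_upstream_gb / naive_upstream_gb):.1f}%")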

For this to work, you need P2P software whose algorithms
are geared to conserving upstream bandwidth. To date, the
software developers do not work in cooperation with ISPs 
and therefore the P2P software is not as ISP-friendly as
it could be. ISPs could change this by contacting P2P
developers. One group that is experimenting with better
algorithms is http://bittyrant.cs.washington.edu/
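
As a minimal sketch of what such an ISP-friendly algorithm could look
like (illustrative only; the prefixes below are placeholders, not any
real ISP's address space), a client could simply rank candidate peers
so that on-net peers are tried before anything that burns upstream or
transit bandwidth:

import ipaddress

# Placeholder prefixes standing in for "my ISP's address space"; a real
# client would need these supplied by the ISP or derived from routing data.
LOCAL_PREFIXES = [ipaddress.ip_network(p) for p in ("81.128.0.0/12", "86.128.0.0/10")]

def locality_rank(peer_ip):
    """0 = looks local (same ISP), 1 = everything else; lower sorts first."""
    addr = ipaddress.ip_address(peer_ip)
    return 0 if any(addr in net for net in LOCAL_PREFIXES) else 1

def pick_peers(candidates, want=4):
    # Prefer peers that keep traffic on-net before going off-net.
    return sorted(candidates, key=locality_rank)[:want]

print(pick_peers(["203.0.113.9", "86.140.2.7", "198.51.100.2", "81.130.60.4"]))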

--Michael Dillon



Re: Network end users to pull down 2 gigabytes a day, continuously?

2007-01-07 Thread Michael . Dillon

  That might be worse for download operators, because people may
  download an hour of video, and only watch 5 minutes :/

 So, from that standpoint, making a video file available for download
 is wasting order of 90% of the bandwidth used to download it.

Considering that this is supposed to be a technically
oriented list, I am shocked at the level of ignorance
of networking technology displayed here.

Have folks never heard of content-delivery networks,
Akamai, P2P, BitTorrent, EMule?

--Michael Dillon



Re: Network end users to pull down 2 gigabytes a day, continuously?

2007-01-07 Thread Brandon Butterworth

 Note that video caching systems like P2P networks can
 potentially serve video to extremely large numbers of
 users while consuming reasonably low levels of upstream
 bandwidth.

The total bandwidth used is the same though, no escaping
that, someone pays.

 Then local users
 use local bandwidth to get copies of that broadcast over
 the next few days. 

If it were only redistributed locally. Even in that case it's not
helping much, as it still consumes the most expensive bandwidth (for UK
ADSL). Transit is way cheaper than BT ADSL wholesale, so you're saving
something that's cheap.

 For this to work, you need P2P software whose algorithms
 are geared to conserving upstream bandwidth

Or the caches that are being sold to fudge the protocols to keep it
local; but if you're buying those, we could just as easily have done
HTTP download and let it be cached by existing appliances.

brandon


Re: Network end users to pull down 2 gigabytes a day, continuously?

2007-01-07 Thread Colm MacCarthaigh

On Sat, Jan 06, 2007 at 08:46:41PM -0600, Frank Bulk wrote:
 What does the Venice project see in terms of the number of upstreams
 required to feed one view, 

At least 3, but more can participate to improve resilience against
partial stream loss.

 and how much does the size of upstream pipe affect this all?  

If the application doesn't have enough upstream bandwidth to send a
proportion of the stream, then it won't. Right now, even if there were
infinite upstream bandwidth, there are hard-coded limits; we've been
changing these slightly as we put the application through more and more
QA. I think right now it's still limited to at most ~220 Kbit/sec,
which is what I see on our test cluster, but I'll get back to you if I'm
wrong.
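
As a rough illustration only (not a statement about our design),
combining that ~220 Kbit/sec per-peer cap with the ~320 MB/hour
viewing figure quoted elsewhere in this thread gives a feel for how
many feeding peers one viewer needs:

import math

stream_kbps = 320e6 * 8 / 3600 / 1000   # 320 MB/hour expressed in kbit/s (~711)
per_peer_cap_kbps = 220                 # approximate per-peer upstream cap

feeders = math.ceil(stream_kbps / per_peer_cap_kbps)
print(f"{stream_kbps:.0f} kbit/s stream / {per_peer_cap_kbps} kbit/s per peer "
      f"=> roughly {feeders} feeding peers per viewer")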

Detecting upstream capacity with UDP streams is always a bit hard. We
don't want to flood the link, and we have other control traffic
(renegotiating peers, grabbing checksums, and so on) which needs to keep
working so that video keeps playing smoothly for the user, which is
what matters more.

If the application is left running long enough with good upstream, it
may elect to become a supernode, but that is control traffic only,
rather than streaming. Our supernodes are not relays, they act as
coordinators of peers. I don't have hard data yet on how much bandwidth
these use, because it depends on how often people change channels,
fast-forward and that kind of thing, but our own supernodes, which
presently manage the entire network, use about 300 Kbit/sec.

But once again, the realities of the internet mean that in order to
ensure a good user experience, we need to engineer against the lowest
common denominator, not the highest. So if the supernode bandwidth
creeps up, we may have to look at increasing the proportion of
supernodes in the network to bring it back down again, so that
packet-loss from supernodes doesn't become an operational problem.

 Do you see trends where 10 upstreams can feed one view if
 they are at 100 kbps each as opposed to 5 upstreams and 200 kbps each, or is
 it no tight relation?  

We do that now, though our numbers are lower :-)

 Supposedly FTTH-rich countries contribute much more
 to P2P networks because they have a symmetrical connection and are more
 attractive to the P2P clients.  
 
 And how much does being in the same AS help compare to being geographically
 or hopwise apart?

That we don't yet know for sure. I've been reading a lot of research on
it, and doing some experimentation, but there is a high degree of
correlation between intra-AS routing and lower latency and greater
capacity. Certainly a better correlation than geographic proximity. 

Using AS proximity is definitely a help for resilience though, same-AS
sources and adjacent AS sources are more likely to remain reachable in
the event of transit problems, general BGP flaps and so on. 

-- 
Colm MacCárthaigh    Public Key: [EMAIL PROTECTED]


Re: Network end users to pull down 2 gigabytes a day, continuously?

2007-01-07 Thread Marshall Eubanks


Dear Michael;

On Jan 7, 2007, at 8:18 AM, [EMAIL PROTECTED] wrote:




That might be worse for download operators, because people may
download
an hour of video, and only watch 5 minutes :/



So, from that standpoint, making a video file available for download
is wasting order of 90% of the bandwidth used
to download it.


Considering that this is supposed to be a technically
oriented list, I am shocked at the level of ignorance
of networking technology displayed here.

Have folks never heard of content-delivery networks,
Akamai, P2P, BitTorrent, EMule?



Most of the video sites I know of in detail or have researched
do not use Akamai or other local caching services. (Youtube uses
Limelight for delivery, for example, and AFAICT they do no caching
outside of that network. Certainly, the Youtube video
I have looked at here through tcpdump and traceroute seems to transit
the network.)
And P2P services like BitTorrent do not conserve network bandwidth.
(Although they might in the future.)


What does save network bandwidth is progressive download; if people
actually look at what they're downloading, they may stop it in progress
if they don't want it. (I know I do.)





Regards
Marshall


Re: Network end users to pull down 2 gigabytes a day, continuously?

2007-01-07 Thread Alexander Harrowell

In the mobile world, there is a lot of telco-led activity around providing
streaming video (TV), which always seems to boil down to the following
points:

1) Just unicasting it over the radio access network is going to use a lot of
capacity, and latency will make streaming good quality tough.

2) Therefore, it has to be delivered in some sort of defined-QOS fashion or
else over a dedicated, broadcast or one-way only radio link.

3) That means either a big centralised server we own, or another big radio
network we own.

4)

5) PROFIT!!

The unexamined assumptions are of course that:

1) Streaming is vital.

2) By definition, just doing it in TCP/IP must mean naive unicasting.

3) Only telco control can provide quality.

4) Mobile data latency is always and everywhere a radio issue.

Critique:

Why would you want to stream when you can download? *Because letting them
download it means they can watch it again, share it with their friends, edit
it perhaps?*

Why would you want to stream in unicast when there are already models for
effective multicast content delivery (see Michael's list)? *See point
above!*

In my own limited experience with UMTS IP service,  it struck me that the
biggest source of latency was the wait for DNS resolution, a highly soluble
problem with methods known to us all. *But if it's inherent in mobility
itself, then only our solutions can fix it...*

On 1/7/07, [EMAIL PROTECTED] [EMAIL PROTECTED]
wrote:



  That might be worse for download operators, because people may
  download
  an hour of video, and only watch 5 minutes :/

 So, from that standpoint, making a video file available for download
 is wasting order of 90% of the bandwidth used
 to download it.

Considering that this is supposed to be a technically
oriented list, I am shocked at the level of ignorance
of networking technology displayed here.

Have folks never heard of content-delivery networks,
Akamai, P2P, BitTorrent, EMule?

--Michael Dillon




Re: Network end users to pull down 2 gigabytes a day, continuously?

2007-01-07 Thread Marshall Eubanks


Dear Colm;

On Jan 7, 2007, at 8:50 AM, Colm MacCarthaigh wrote:



On Sat, Jan 06, 2007 at 08:46:41PM -0600, Frank Bulk wrote:

What does the Venice project see in terms of the number of upstreams
required to feed one view,





snip


Supposedly FTTH-rich countries contribute much more
to P2P networks because they have a symmetrical connection and are  
more

attractive to the P2P clients.

And how much does being in the same AS help compare to being  
geographically

or hopwise apart?


That we don't yet know for sure. I've been reading a lot of  
research on

it, and doing some experimentation, but there is a high degree of
correlation between intra-AS routing and lower latency and greater
capacity. Certainly a better correlation than geographic proximity.



As is frequently pointed out, here and elsewhere, network topology !=  
geography.



Using AS proximity is definitely a help for resilience though, same-AS
sources and adjacent AS sources are more likely to remain reachable in
the event of transit problems, general BGP flaps and so on.



Do you actually inject any BGP information into Venice? How do you
determine otherwise that two nodes are in the same AS (do you, for
example, assume that if they are in the same /24 then they are close
in network topology)?



--
Colm MacCárthaighPublic Key: colm 
[EMAIL PROTECTED]



Regards
Marshall


Re: Network end users to pull down 2 gigabytes a day, continuously?

2007-01-07 Thread Colm MacCarthaigh

On Sun, Jan 07, 2007 at 09:09:27AM -0500, Marshall Eubanks wrote:
 Using AS proximity is definitely a help for resilience though,
 same-AS sources and adjacent AS sources are more likely to remain
 reachable in the event of transit problems, general BGP flaps and so
 on.
 
 Do you actually inject any BGP information into Venice ?

Yes and no; there is topology information there, but it's based on
snapshots. Dynamic is the next step.
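
As a minimal sketch of the general idea (illustrative only, not a
description of how our client actually does it), a same-origin-AS
check against a static snapshot is just a longest-prefix match over a
prefix-to-ASN table:

import ipaddress

# Hypothetical two-entry snapshot: (prefix, origin ASN). A real table
# would come from a routing-table dump.
SNAPSHOT = [
    (ipaddress.ip_network("192.0.2.0/24"), 64500),
    (ipaddress.ip_network("198.51.100.0/24"), 64501),
]

def origin_asn(ip):
    addr = ipaddress.ip_address(ip)
    matches = [(net.prefixlen, asn) for net, asn in SNAPSHOT if addr in net]
    return max(matches)[1] if matches else None   # longest prefix wins

def same_origin_as(ip_a, ip_b):
    a, b = origin_asn(ip_a), origin_asn(ip_b)
    return a is not None and a == b

print(same_origin_as("192.0.2.10", "192.0.2.200"))    # True, both AS64500
print(same_origin_as("192.0.2.10", "198.51.100.7"))   # False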

-- 
Colm MacCárthaigh    Public Key: [EMAIL PROTECTED]


Re: Network end users to pull down 2 gigabytes a day, continuously?

2007-01-07 Thread Marshall Eubanks


Dear Alexander;

On Jan 7, 2007, at 8:59 AM, Alexander Harrowell wrote:

In the mobile world, there is a lot of telco-led activity around  
providing streaming video (TV), which always seems to boil down  
to the following points:




We (AmericaFree.TV) simulcast everything in 3GPP and 3GPP2 at a lower  
bit rate for mobiles.

At present, the mobile audience for our video is

- 0.3% of the total for the last month
- doubling every 2 months or less.

It's not clear if this glass is mostly empty or half full, but there  
is a data point FWIW.


1) Just unicasting it over the radio access network is going to use  
a lot of capacity, and latency will make streaming good quality tough.


2) Therefore, it has to be delivered in some sort of defined-QOS  
fashion or else over a dedicated, broadcast or one-way only radio  
link.


3) That means either a big centralised server we own, or another  
big radio network we own.


4)

5) PROFIT!!


I have heard that several big mobile providers are shortly going to
come out with 802.16 networks in support (I assume) of point 3.

Regards
Marshall


Re: Network end users to pull down 2 gigabytes a day, continuously?

2007-01-07 Thread Michael . Dillon

  Note that video caching systems like P2P networks can
  potentially serve video to extremely large numbers of
  users while consuming reasonably low levels of upstream
  bandwidth.
 
 The total bandwidth used is the same though, no escaping
 that, someone pays.

This is not true. Increased bandwidth consumption does 
not necessarily cost money on most ISP infrastructure. 
At my home I have a fairly typical ISP service using 
BT's DSL. If I use a P2P network to download files from
other BT DSL users, then it doesn't cost me a penny
more than the basic DSL service. It also doesn't cost
BT any more and it doesn't cost those users any more.
The only time that costs increase is when I download
data from outside of BT's network, because the increased
traffic requires larger circuits or more circuits, etc.

The real problem with P2P networks is that they don't 
generally make download decisions based on network
architecture. This is not inherent in the concept of
P2P which means that it can be changed. It is perfectly
possible to use existing P2P protocols in a way that is
kind to an ISP's costs.

 If it was only redistributed locally. Even in that case it's not
 helping much as it still consumes the most expensive bandwidth (for UK
 ADSL). Transit is way cheaper than BT ADSL wholesale, you're saving
 something that's cheap.

I have to admit that I have no idea how BT charges
ISPs for wholesale ADSL. If there is indeed some kind
of metered charging then Internet video will be a big
problem for the business model. 

 Or the caches that are being sold to fudge the protocols to
 keep it local but if you're buying them we could have just
 as easily done http download and let it be cached by existing
 appliances.

The difference with P2P is that caching is built into
the model, so 100% of users participate in
caching. With HTTP, caches are far from universal,
especially for non-business users.

--Michael Dillon



Re: Network end users to pull down 2 gigabytes a day, continuously?

2007-01-07 Thread Michael . Dillon

 Why would you want to stream in unicast when there are already 
 models for effective multicast content delivery (see Michael's 
 list)? *See point above!*

The word multicast in the above quote does not refer
to the set of protocols called IP multicast. Content
delivery networks (CDNs) like Akamai are also, inherently,
a form of multicasting. So are P2P networks like BitTorrent
and EMule. If this sounds odd to you, perhaps you don't really
understand the basics of either multicast or P2P. Check out
Wikipedia to see what I mean:
http://en.wikipedia.org/wiki/Peer-to-peer
http://en.wikipedia.org/wiki/Multicast

If your data absolutely, positively, must be delivered
simultaneously to multiple destinations, i.e. time is
of the essence, then I agree that P2P and IP multicast
are not comparable. But the context of this discussion
is not NYSE market data feeds, but entertainment video.
The use-cases for entertainment mean that timing is
of little importance. More important are things like
consistency and control.

--Michael Dillon




Re: Network end users to pull down 2 gigabytes a day, continuously?

2007-01-07 Thread Gian Constantine
You know, when it's all said and done, streaming video may be the  
motivator for migrating the large scale Internet to IPv6. I do not  
see unicast streaming as a long term solution for video service. In  
the short term, unicast streaming and PushVoD models may prevail, but  
the ultimate solution is Internet-wide multicasting.


I want my m6bone. :-)

Gian Anthony Constantine
Senior Network Design Engineer
Earthlink, Inc.


On Jan 6, 2007, at 1:52 AM, Thomas Leavitt wrote:

If this application takes off, I have to presume that everyone's  
baseline network usage metrics can be tossed out the window...


Thomas



From: David Farber [EMAIL PROTECTED]
Subject: Using Venice Project? Better get yourself a non-capping  
ISP...

Date: Fri, 5 Jan 2007 11:11:46 -0500



Begin forwarded message:

From: D.H. van der Woude [EMAIL PROTECTED]
Date: January 5, 2007 11:06:31 AM EST
To: [EMAIL PROTECTED]
Subject: Using Venice Project? Better get yourself a non-capping  
ISP...



I am one of Venice's beta testers. Works like a charm,
admittedly with a 20/1 Mbps ADSL2+ connection and
an unlimited-use ISP.

Even at sub-DVD quality the data use is staggering...

Venice Project would break many users' ISP conditions
http://www.out-law.com/page-7604
OUT-LAW News, 03/01/2007

Internet television system The Venice Project could break users'  
monthly internet bandwidth limits in hours, according to the team
behind it.


It downloads 320 megabytes (MB) per hour from users' computers,  
meaning that users could reach their monthly download limits in  
hours and that it could be unusable for bandwidth-capped users.


The Venice Project is the new system being developed by Janus Friis  
and Niklas Zennström, the Scandinavian entrepreneurs behind the  
revolutionary services Kazaa and Skype. It is currently being used  
by 6,000 beta testers and is due to be launched next year.


The data transfer rate is revealed in the documentation sent to  
beta testers and the instructions make it very clear what the  
bandwidth requirements are so that users are not caught out.


Under a banner saying 'Important notice for users with limits on  
their internet usage', the document says: The Venice Project is a  
streaming video application, and so uses a relatively high amount  
of bandwidth per hour. One hour of viewing is 320MB downloaded and  
105 Megabytes uploaded, which means that it will exhaust a 1  
Gigabyte cap in 10 hours. Also, the application continues to run in  
the background after you close the main window.


For this reason, if you pay for your bandwidth usage per megabyte  
or have your usage capped by your ISP, you should be careful to  
always exit the Venice Project client completely when you are  
finished watching it, says the document


Many ISPs offer broadband connections which are unlimited to use by  
time, but have limits on the amount of data that can be transferred  
over the connection each month. Though limits are 'advisory' and  
not strict, users who regularly far exceed the limits break the  
terms of their deals.


BT's most basic broadband package BT Total Broadband Package 1, for  
example, has a 2GB monthly 'usage guideline'. This would be reached  
after 20 hours of viewing.


The software is also likely to transfer data even when not being  
used. The Venice system is going to run on a peer-to-peer (P2P)  
network, which means that users host and send the programmes to  
other users in an automated system.


OUT-LAW has seen screenshots from the system and talked to one of  
the testers of it, who reports very favourably on its use. This is  
going to be the one. I've used some of the other software out there  
and it's fine, but my dad could use this, they've just got it  
right, he said. It looks great, you fire it up and in two minutes  
you're live, you're watching television.


The source said that claims being made for the system being near  
high definition in terms of picture quality are wide of the mark.  
It's not high definition. It's the same as normal television, he  
said.





-- Private where private belongs, public where it's needed, and an  
admission that circumstances alter cases. Robert A. Heinlein, 1969


--
Thomas Leavitt - [EMAIL PROTECTED] - 831-295-3917 (cell)

*** Independent Systems and Network Consultant, Santa Cruz, CA ***

thomas.vcf




Re: Network end users to pull down 2 gigabytes a day, continuously?

2007-01-07 Thread Alexander Harrowell

Michael Dillon said:

The word multicast in the above quote, does not refer
to the set of protocols called IP multicast. Content
delivery networks (CDNs) like Akamai are also, inherently,
a form of multicasting. So are P2P networks like BitTorrent
and EMule.

That's precisely what I mean.

Marshall Eubanks said: I have heard that several big mobile providers are
shortly going to
come out with 802.16 networks in support (I
assume) of point 3

I don't know whether Sprint Nextel's big 802.16e deployment is going to be
used for this, although their keenness on video/TV argues for it. A wide
range of technologies are in prospect, including DMB, DAB-IP, DVB-H,
Qualcomm's MediaFLO and IPWireless's TDTV.

These are radio broadcast systems of various kinds - MediaFLO and TDTV are
adaptations of 3G mobile technologies, from the CDMA2000 world and UMTS
respectively. TDTV, the one I am most familiar with, is essentially a
UMTS-TDD network with all the timeslots set to send (from the base
station's viewpoint). 3GPP and 3GPP2 are standardising a Multimedia
Broadcast/Multicast Service (MBMS) as an add-on to the R99 core network,
expected in 2008.


From an IP perspective, most of these are fairly orthogonal, being
essentially alternative access networks on the other side of the MBMS
control function.


Re: Network end users to pull down 2 gigabytes a day, continuously?

2007-01-07 Thread Patrick W. Gilmore


On Jan 7, 2007, at 8:59 AM, Alexander Harrowell wrote:

1) Just unicasting it over the radio access network is going to use  
a lot of

capacity, and latency will make streaming good quality tough.


I'm confused as to why high latency makes streaming good quality tough.

Perhaps this goes back to the streaming vs. downloading problem,
but every player I've ever seen on a personal computer buffers the
content for at least a second, and usually multiple seconds.  Latency
is measured in, at most, tenths of a second, and jitter is another order
of magnitude less, at least.


High latency links with stable throughput are much better for  
streaming than low latency links with any packet loss, even without  
buffering.


IOW: Latency is irrelevant.
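
For TCP-delivered streams there is a concrete way to see this: the
well-known Mathis et al. approximation says achievable throughput is
roughly (MSS / RTT) * (1.22 / sqrt(loss)). The path numbers below are
made up purely for illustration:

from math import sqrt

def mathis_kbps(mss_bytes, rtt_s, loss):
    # BW ~= (MSS / RTT) * (1.22 / sqrt(loss)), the classic rough bound
    return (mss_bytes * 8 / rtt_s) * (1.22 / sqrt(loss)) / 1000

print(f"clean but distant, 200 ms RTT, 0.01% loss: "
      f"{mathis_kbps(1460, 0.200, 0.0001):.0f} kbit/s achievable")
print(f"nearby but lossy, 20 ms RTT, 5% loss: "
      f"{mathis_kbps(1460, 0.020, 0.05):.0f} kbit/s achievable")

The distant, clean path wins by more than a factor of two, and a few
seconds of playout buffer hides its extra latency entirely.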

--
TTFN,
patrick



Re: Network end users to pull down 2 gigabytes a day, continuously?

2007-01-07 Thread Patrick W. Gilmore


On Jan 7, 2007, at 3:17 PM, Brandon Butterworth wrote:


The real problem with P2P networks is that they don't
generally make download decisions based on network
architecture.


Indeed, that's what I said. Until then ISPs can only fix it with P2P
aware caches, if the protocols did it then they wouldn't need the
caches though P2P efficiency may go down

It'll be interesting to see how Akamai & co. counter this trend. At the
moment they can say it's better to use a local Akamai cluster than have
P2P taking content from anywhere on the planet. Once it's mostly local
traffic then it's pretty much equivalent to Akamai. It's still moving
routing/TE up the stack though, so it will affect the ISP's network ops.


ISPs don't pay Akamai, content owners do.

Content owners are usually not concerned with the same things an
ISP's network ops are.  (I'm not saying that's a good thing, I'm
just saying that is reality.  Life might be much better all around if
the two groups interacted more.  Although one could say that Akamai
fills that gap as well. :)


Anyway, a content provider is going to do what's best for their
content, not what's best for the ISP.  It's a difficult argument to
make to a content provider that they should put their content on millions
of end-user HDs and depend on grandma to provide good quality streaming
to Joe Smith down the street.  At least in my experience.


--
TTFN,
patrick



Re: Network end users to pull down 2 gigabytes a day, continuously?

2007-01-07 Thread Alexander Harrowell

Yes, on reflection that should also have been filed under unexamined
assumptions.

On 1/7/07, Patrick W. Gilmore [EMAIL PROTECTED] wrote:



On Jan 7, 2007, at 8:59 AM, Alexander Harrowell wrote:

 1) Just unicasting it over the radio access network is going to use
 a lot of
 capacity, and latency will make streaming good quality tough.

I'm confused why high latency makes streaming good quality tough?

Perhaps this goes back to the streaming vs. downloading problem,
but every player I've ever seen on a personal computer buffers the
content for at least a second, and usually multiple seconds.  Latency
is measured in, at most, 10th of a second, and jitter another order
of magnitude less at least.

High latency links with stable throughput are much better for
streaming than low latency links with any packet loss, even without
buffering.

IOW: Latency is irrelevant.

--
TTFN,
patrick




Re: Network end users to pull down 2 gigabytes a day, continuously?

2007-01-07 Thread Marshall Eubanks


Dear Gian;

On Jan 7, 2007, at 10:27 AM, Gian Constantine wrote:

You know, when it's all said and done, streaming video may be the  
motivator for migrating the large scale Internet to IPv6. I do not  
see unicast streaming as a long term solution for video service. In  
the short term, unicast streaming and PushVoD models may prevail,  
but the ultimate solution is Internet-wide multicasting.


I want my m6bone. :-)



Well, help make it possible. Join the MboneD WG list. Help us  
recharter. Come to Prague, even, if you can.


BTW, have you taken the multicast survey :

http://www.multicasttech.com/survey/MBoneD_Survey_v_1_5.txt
http://www.multicasttech.com/survey/MBoneD_Survey_v_1_5.pdf   ?

Regards
Marshall



Gian Anthony Constantine
Senior Network Design Engineer
Earthlink, Inc.


On Jan 6, 2007, at 1:52 AM, Thomas Leavitt wrote:

If this application takes off, I have to presume that everyone's  
baseline network usage metrics can be tossed out the window...


Thomas



From: David Farber [EMAIL PROTECTED]
Subject: Using Venice Project? Better get yourself a non-capping  
ISP...

Date: Fri, 5 Jan 2007 11:11:46 -0500



Begin forwarded message:

From: D.H. van der Woude [EMAIL PROTECTED]
Date: January 5, 2007 11:06:31 AM EST
To: [EMAIL PROTECTED]
Subject: Using Venice Project? Better get yourself a non-capping  
ISP...



I am one of Venice's beta testers. Works like a charm,
admittedly with a 20/1 Mbps ADSL2+ connection and
an unlimited-use ISP.

Even at sub-DVD quality the data use is staggering...

Venice Project would break many users' ISP conditions
http://www.out-law.com/page-7604
OUT-LAW News, 03/01/2007

Internet television system The Venice Project could break users'  
monthly internet bandwidth limits in hours, according to the team
behind it.


It downloads 320 megabytes (MB) per hour from users' computers,  
meaning that users could reach their monthly download limits in  
hours and that it could be unusable for bandwidth-capped users.


The Venice Project is the new system being developed by Janus  
Friis and Niklas Zennström, the Scandinavian entrepreneurs behind  
the revolutionary services Kazaa and Skype. It is currently being  
used by 6,000 beta testers and is due to be launched next year.


The data transfer rate is revealed in the documentation sent to  
beta testers and the instructions make it very clear what the  
bandwidth requirements are so that users are not caught out.


Under a banner saying 'Important notice for users with limits on  
their internet usage', the document says: The Venice Project is a  
streaming video application, and so uses a relatively high amount  
of bandwidth per hour. One hour of viewing is 320MB downloaded and  
105 Megabytes uploaded, which means that it will exhaust a 1  
Gigabyte cap in 10 hours. Also, the application continues to run  
in the background after you close the main window.


For this reason, if you pay for your bandwidth usage per megabyte  
or have your usage capped by your ISP, you should be careful to  
always exit the Venice Project client completely when you are  
finished watching it, says the document


Many ISPs offer broadband connections which are unlimited to use  
by time, but have limits on the amount of data that can be  
transferred over the connection each month. Though limits are  
'advisory' and not strict, users who regularly far exceed the  
limits break the terms of their deals.


BT's most basic broadband package BT Total Broadband Package 1,  
for example, has a 2GB monthly 'usage guideline'. This would be  
reached after 20 hours of viewing.


The software is also likely to transfer data even when not being  
used. The Venice system is going to run on a peer-to-peer (P2P)  
network, which means that users host and send the programmes to  
other users in an automated system.


OUT-LAW has seen screenshots from the system and talked to one of  
the testers of it, who reports very favourably on its use. This  
is going to be the one. I've used some of the other software out  
there and it's fine, but my dad could use this, they've just got  
it right, he said. It looks great, you fire it up and in two  
minutes you're live, you're watching television.


The source said that claims being made for the system being near  
high definition in terms of picture quality are wide of the mark.  
It's not high definition. It's the same as normal television, he  
said.





-- Private where private belongs, public where it's needed, and  
an admission that circumstances alter cases. Robert A. Heinlein,  
1969


--
Thomas Leavitt - [EMAIL PROTECTED] - 831-295-3917 (cell)

*** Independent Systems and Network Consultant, Santa Cruz, CA ***

thomas.vcf






Anyone have details on MCI outage yesterday

2007-01-07 Thread Owen DeLong

Yesterday, around 10:00 AM Pacific Time 1/5/07, Kwajalein Atoll lost all
connectivity to the mainland. We were told this was because MCI lost 40
DS-3s due to someone shooting up a telephone pole in California.

This affected Internet, Telephones (although inbound phone calls to the
islands were possible), and television.

Kwajalein access to the mainland is via satellite to Washington,
connected to a terrestrial link to Georgia.

The outage lasted more than 12 hours.

It seems odd to me in this day and age that:

1.  There wasn't a redundant path for these circuits.

2.  It took 12 hours to restore or reroute these circuits.

Any details would be appreciated.

Thanks,

Owen

P.S. For those wondering where, there are excellent resources in
Wikipedia, but the short answer is that Kwajalein Atoll is part of the
Marshall Islands and is about the midpoint on a line from Honolulu to
Sydney.  About 9N by 165E.




smime.p7s
Description: S/MIME cryptographic signature


Re: Network end users to pull down 2 gigabytes a day, continuously?

2007-01-07 Thread Joe Abley



On 7-Jan-2007, at 15:17, Brandon Butterworth wrote:


The only time that costs increase is when I download
data from outside of BT's network because the increased
traffic requires larger circuits or more circuits, etc.


Incorrect, DSLAM backhaul costs regardless of where the traffic
comes from. ISPs pay for that, it costs more than transit


Setting aside the issue of what particular ISPs today have to pay,  
the real cost of sending data, best-effort over an existing network  
which has spare capacity and which is already supported and managed  
is surely zero.


If I acquire content while I'm sleeping, during a low dip in my ISP's
usage profile, the chances are good that nobody incurs more costs
that month than if I had decided not to acquire it. (For example, you
might imagine an RSS feed with BitTorrent enclosures, which requires
no human presence to trigger the downloads.)
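
A minimal sketch of that hands-off idea (plain HTTP standing in for
BitTorrent enclosures; the feed URL and the quiet-hours window are
assumptions, not anyone's real service):

import urllib.request
import xml.etree.ElementTree as ET
from datetime import datetime

FEED_URL = "http://example.com/shows.rss"   # hypothetical feed
OFF_PEAK_HOURS = range(2, 6)                # assume 02:00-05:59 is the quiet dip

def enclosure_urls(feed_xml):
    root = ET.fromstring(feed_xml)
    for enc in root.iter("enclosure"):
        if enc.get("url"):
            yield enc.get("url")

def run_once():
    if datetime.now().hour not in OFF_PEAK_HOURS:
        return                              # not in the low dip yet; do nothing
    with urllib.request.urlopen(FEED_URL) as resp:
        feed = resp.read()
    for url in enclosure_urls(feed):
        name = url.rsplit("/", 1)[-1] or "download.bin"
        urllib.request.urlretrieve(url, name)

if __name__ == "__main__":
    run_once()   # in practice, left to cron; no human presence needed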


If I acquire content the same time as many other people, since what  
I'm watching is some coordinated, streaming event, then it seems far  
more likely that the popularity of the content will lead to network  
congestion, or push up a peak on an interface somewhere which will  
lead to a requirement for a circuit upgrade, or affect a 95%ile  
transit cost, or something.


If asynchronous delivery of content is as free as I think it is, and  
synchronous delivery of content is as expensive as I suspect it might  
be, it follows that there ought to be more of the former than the  
latter going on.


If it turned out that there were several orders of magnitude more
content being shifted around the Internet in a 'download when you are
able, watch later' fashion than there is content being streamed to
viewers in real-time, I would be thoroughly unsurprised.



Joe


Re: Network end users to pull down 2 gigabytes a day, continuously?

2007-01-07 Thread Fergie

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

- -- Joe Abley [EMAIL PROTECTED] wrote:

If I acquire content the same time as many other people, since what  
I'm watching is some coordinated, streaming event, then it seems far  
more likely that the popularity of the content will lead to network  
congestion, or push up a peak on an interface somewhere which will  
lead to a requirement for a circuit upgrade, or affect a 95%ile  
transit cost, or something.

If asynchronous delivery of content is as free as I think it is, and  
synchronous delivery of content is as expensive as I suspect it might  
be, it follows that there ought to be more of the former than the  
latter going on.


Completely agree here.

$.02,

- - ferg

-BEGIN PGP SIGNATURE-
Version: PGP Desktop 9.5.2 (Build 4075)

wj8DBQFFoVX6q1pz9mNUZTMRArMxAKC1HcQzuRVtw7RizPH9Sxubpd4CyACfe9Mp
IVrcy6mKMtdNdzu6qMMdpOs=
=ehDE
-END PGP SIGNATURE-



--
Fergie, a.k.a. Paul Ferguson
 Engineering Architecture for the Internet
 fergdawg(at)netzero.net
 ferg's tech blog: http://fergdawg.blogspot.com/



Re: Network end users to pull down 2 gigabytes a day, continuously?

2007-01-07 Thread Roland Dobbins



On Jan 6, 2007, at 6:07 AM, Colm MacCarthaigh wrote:

I'll try and answer any questions I can, I may be a little  
restricted in

revealing details of forthcoming developments and so on, so please
forgive me if there's later something I can't answer, but for now I'll
try and answer any of the technicalities. Our philosophy is to be pretty
open about how we work and what we do.


Colm, a few random questions as they came to mind:

Will your downloads be encrypted/obfuscated?  Will your application  
be port-agile?  Is it HTTP, or Something Else?


If it's not encrypted, will you be cache-friendly?

Will you be supporting/enforcing some form of DRM?

Will you be multi-platform?  If so, which ones?

When you say 'TV', do you mean HDTV?  If so, 1080i/1080p?

Will you have Skype-like 'supernode' functionality?  If so, will it  
be user-configurable?


Will you insert ads into the content?  If so, will you offer a  
revenue-sharing model for SPs who wish to participate?


Many thanks!

---
Roland Dobbins [EMAIL PROTECTED] // 408.527.6376 voice

Technology is legislation.

-- Karl Schroeder






Re: Network end users to pull down 2 gigabytes a day, continuously?

2007-01-07 Thread Roland Dobbins



On Jan 7, 2007, at 12:28 PM, Roland Dobbins wrote:


Colm, a few random questions as they came to mind:


Two more questions:

Do you plan to offer the Venice Project for mobile devices?  If so,  
which ones?


Will you support offline storage/playback?

Thanks again!

---
Roland Dobbins [EMAIL PROTECTED] // 408.527.6376 voice

Technology is legislation.

-- Karl Schroeder






Re: Network end users to pull down 2 gigabytes a day, continuously?

2007-01-07 Thread Roland Dobbins



On Jan 7, 2007, at 1:15 PM, Colm MacCarthaigh wrote:

Now comes the please forgive me part, but most of your questions
aren't relevant to the NANOG charter


I believe that's open to interpretation - for example, the question
about mobile devices is relevant to mobile SPs, the question about
offline viewing has an impact on perceived network usage patterns, the
'supernode' questions same, the TV vs. HDTV question same (size/length),
the DRM question same (help desk/supportability), platforms same (help
desk/supportability).  The ad question is actually out-of-charter,
though I suspect of great interest to many of the list subscribers.


Now, if you don't *want* to answer the above questions, that's  
perfectly fine; but they're certainly within the list charter, and  
entirely relevant to network operations, heh.


---
Roland Dobbins [EMAIL PROTECTED] // 408.527.6376 voice

Technology is legislation.

-- Karl Schroeder






Re: Network end users to pull down 2 gigabytes a day, continuously?

2007-01-07 Thread Colm MacCarthaigh

On Sun, Jan 07, 2007 at 12:35:54PM -0800, Roland Dobbins wrote:
 
 
 On Jan 7, 2007, at 12:28 PM, Roland Dobbins wrote:
 
 Colm, a few random questions as they came to mind:
 
 Two more questions:
 
 Do you plan to offer the Venice Project for mobile devices?  If so,  
 which ones?
 
 Will you support offline storage/playback?

Now comes the please forgive me part, but most of your questions aren't
relevant to the NANOG charter, so you're going to have to mail our PR
dept for answers (see: http://www.theveniceproject.com/contact.html).
Answers to nearly all of them will be online soon anyway on our website;
we're not trying to hide anything, but this isn't the place :-)

I'll try to answer the questions which are relevant to Network
Operations, and which I have not already answered, anyway;

We use a very small set of ports, currently;

TCP ports 80 and 443 - for various http requests
TCP/UDP port 3   - for p2p control traffic and
   streaming
TCP port 5223   - for our jabber-powered 
   channel-chat.
UDP port 1   - for error reporting and
   usage tracking. This port
   is short term.

Port 3 is not fixed, and we should be making an IANA
request soonish and then we'll change it, but again to
just one port. So I guess we're not port-agile :-)

We use HTTP(s) requests for making content searches, 
fetching thumbnails and so on. Right now, these
requests are not cacheable, but are pretty small.

Every peer is a cache, so in that sense we are cache friendly,
but our protocol is not cacheable/proxyable by a man in the
middle.

-- 
Colm MacCárthaigh    Public Key: [EMAIL PROTECTED]


Re: Network end users to pull down 2 gigabytes a day, continuously?

2007-01-07 Thread Roland Dobbins



On Jan 7, 2007, at 1:15 PM, Colm MacCarthaigh wrote:


I'll try to answer the questions which are relevant to Network
Operations, and I have not already answered, anyway


And thank you very much for popping up and answering the questions  
you *can* answer - it's useful info, and much appreciated!



---
Roland Dobbins [EMAIL PROTECTED] // 408.527.6376 voice

Technology is legislation.

-- Karl Schroeder






Re: Network end users to pull down 2 gigabytes a day, continuously?

2007-01-07 Thread Will Hargrave

[EMAIL PROTECTED] wrote:

 I have to admit that I have no idea how BT charges
 ISPs for wholesale ADSL. If there is indeed some kind
 of metered charging then Internet video will be a big
 problem for the business model. 

They vary, it depends on what pricing model has been selected.

http://tinyurl.com/yjgsum has BT Central pipe pricing. Note those are
prices, not telephone numbers. ;-)

If you convert that into per-megabit charges, it comes out at least an
order of magnitude greater than the cost of transit, and at least a
couple of orders of magnitude more than peering/partial transit.

p2p is no panacea to get around these charges; in the worst case p2p
traffic will just transit your central pipe twice, which means the
situation is worse with p2p, not better.
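
A hypothetical worked example (the prices are invented placeholders,
not BT's actual tariff) of why that worst case is worse with p2p:

central_per_mbit = 150.0    # GBP per Mbit/s per month for the central pipe (assumed)
transit_per_mbit = 10.0     # GBP per Mbit/s per month for IP transit (assumed)

print(f"central pipe is ~{central_per_mbit / transit_per_mbit:.0f}x "
      f"the per-Mbit price of transit")

# Worst case for p2p: uploader and downloader are both your own DSL
# customers, so the same bits cross the central pipe twice.
p2p_worst_case = 2 * central_per_mbit
server_fetch = central_per_mbit + transit_per_mbit   # once over the pipe, once over transit
print(f"p2p between two of your own subscribers: ~{p2p_worst_case:.0f} per sustained Mbit/s")
print(f"fetch from a transit-connected server: ~{server_fetch:.0f} per sustained Mbit/s")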

For a smaller UK ISP, I do not know if there is a credible wholesale LLU
alternative to BT.

Note this information is of course completely UK-centric. A more
regionalised model (21CN?!) would change the situation.

Will



Re: Network end users to pull down 2 gigabytes a day, continuously?

2007-01-07 Thread Gian Constantine
I may have missed it in previous posts, but I think an important  
point is being missed in much of this discussion: take rate.


An assumption being made is one of widespread long time usage. I  
would argue consumers have little interest in viewing content for  
more than a few hundred seconds on their PC. Further, existing  
solutions for media extension to the television are gaining very  
little foothold outside of technophiles. They tend to be more complex  
for the average user than many vendors seemingly realize. While Apple  
may help in this arena, there are many other obstacles to widespread  
usage of streaming video outside of media extension.


In entertainment, content is king. More specifically, new release  
content is king. While internet distribution may help breathe life  
into the long tail market, it is hard to imagine any major shift from  
existing distribution methods. People simply like the latest TV shows  
and the latest movies.


So, this leaves us with little more than what is already offered by  
the MSOs: linear TV and VoD. This is where things become complex.


The studios will never (not any time soon) allow subscription-based
VoD on new content. They would instantly be sued by Time Warner
(HBO). This leaves us with a non-subscription VoD option, which still
requires an agreement with each of the major studios, and would
likely cost a fortune to obtain. CinemaNow and MovieLink have done
this successfully, and use a PushVoD model to distribute their  
content. CinemaNow allows DVD burning for some of their content, but  
both companies are otherwise tied to the PC (without a media  
extender). Furthermore, the download wait is a pain. Their content is  
good quality 1200-1500 kbps VC-1 *wince*. It is really hard to say  
when and if either of these will take off as a service. It is a good  
service, with a great product, and almost no market at the moment.  
Get it on the TV and things may change dramatically.


This leaves us with linear TV, which is another acquisition  
nightmare. It is very difficult to acquire pass-through/distribution  
rights for linear television, especially via IP. Without deep  
pockets, a company might be spinning their wheels trying to get  
popular channels onto their lineup. And good luck trying to acquire  
the rights to push linear TV outside of a closed network. The studios  
will hear none of it.


I guess where I am going with all this is simply that it is very hard to
make this work from a business and marketing side. The network
constraints are, likely, a minor issue for some time to come.  
Interest is low in the public at large for primary (or even major  
secondary) video service on the PC.


By the time interest in the product swells and content providers ease  
some of their more stringent rules for content distribution, a better  
solution for multicasting the content will have presented itself. I  
would argue streaming video across the Internet to a large audience,  
direct to subscribers, is probably 4+ years away at best.


I am not saying we throw in the towel on this problem, but I do think  
unicast streaming has a limited scope and short life-span for prime  
content. IPv6 multicast is the real long term solution for Internet  
video to a wide audience.


Of course, there is the other argument. The ILECs and MSOs will keep  
it from ever getting beyond a unicast model. Why let the competition  
in, right? *sniff* I smell lobbyists and legislation. :-)


Gian Anthony Constantine
Senior Network Design Engineer
Earthlink, Inc.





Re: Network end users to pull down 2 gigabytes a day, continuously?

2007-01-07 Thread Sean Donelan


On Sun, 7 Jan 2007, Joe Abley wrote:
Setting aside the issue of what particular ISPs today have to pay, the real 
cost of sending data, best-effort over an existing network which has spare 
capacity and which is already supported and managed is surely zero.


As long as the additional traffic doesn't exceed the existing capacity.

But what happens when 5% of the paying subscribers use 95% of the existing 
capacity, and then the other 95% of the subscribers complain about poor 
performance?  What is the real cost to the ISP needing to upgrade the

network to handle the additional traffic being generated by 5% of the
subscribers when there isn't spare capacity?

If I acquire content while I'm sleeping, during a low dip in my ISP's usage 
profile, the chances are good that nobody incurs more costs that month than 
if I had decided not to acquire it. (For example, you might imagine an RSS 
feed with BitTorrent enclosures, which requires no human presence to trigger 
the downloads.)


The reason why many universities buy rate-shaping devices is dorm users 
don't restrain their application usage to only off-peak hours, which may 
or may not be related to sleeping hours.  If peer-to-peer applications 
restrained their network usage during periods of peak network usage so 
it didn't result in complaints from other users, it would probably 
have a better reputation.


If I acquire content the same time as many other people, since what I'm 
watching is some coordinated, streaming event, then it seems far more likely 
that the popularity of the content will lead to network congestion, or push 
up a peak on an interface somewhere which will lead to a requirement for a 
circuit upgrade, or affect a 95%ile transit cost, or something.


Depends on when and where the replication of the content is taking place.

Broadcasting is a very efficient way to distribute the same content 
to large numbers of people, even when some people may watch it later.  You 
can broadcast either streaming or file downloads.  You can also unicast 
either streaming or file downloads. Unicast tends to be less efficient 
to distribute the same content to large numbers of people.  Then there are
lots of events in the middle.  Some content is only of interest to some
people.


Streaming vs download and broadcast vs unicast.  There are lots of 
combinations.  One way is not necessarily the best way for every

situation.  Sometimes store-and-forward e-mail is useful, other times
instant messenger communications is useful.  Things may change over 
time.  For example, USENET has mostly stopped being widely flooded
through every ISP and large institution, and is now accessed on demand by
users from a few large aggregators.


Distribution methods aren't mutually exclusive.

If asynchronous delivery of content is as free as I think it is, and 
synchronous delivery of content is as expensive as I suspect it might be, it 
follows that there ought to be more of the former than the latter going on.


If it turned out that there were several orders of magnitude more content 
being shifted around the Internet in a 'download when you are able, watch 
later' fashion than there is content being streamed to viewers in real-time, I 
would be thoroughly unsurprised.


If you limit yourself to the Internet, you exclude a lot of content
being shifted around and consumed in the world.  The World Cup or the
Super Bowl are still much bigger events than Internet-only events. Broadcast
television shows with even bottom ratings are still more popular than
most Internet content.  The Internet is good for narrowcasting, but it's
still working on mass audience events.

Asynchronous receivers are more expensive and usually more complicated
than synchronous receivers.  Not everyone owns a computer or spends
several hundred dollars on a DVR.  If you already own a computer, you 
might consider it free.  But how many people want to buy a computer for 
each television set?  In the USA, Congress debated whether it should
spend $40 per digital receiver so people wouldn't lose their over-the-air 
broadcasting.


Gadgets that interest 5% of the population versus reaching 95% of the 
population may have different trade-offs.