Re: Network end users to pull down 2 gigabytes a day, continuously?

2007-01-10 Thread Frank Coluccio

Gian wrote:

From a big picture standpoint, I would say P2P distribution is a non-starter,
too many reluctant parties to appease. From a detail standpoint, I would say P2P
distribution faces too many hurdles in existing network infrastructure to be
justified. Simply reference the discussion of upstream bandwidth caps and you
will have a wonderful example of those hurdles.

Speaking about upstream hurdles, just out of curiosity (since this is merely a
diversionary discussion at this point;) ...  wouldn't peer-to-peer be the LEAST
desirable approach for an SP that is launching WiFi nets as its primary first
mile platform? I note that Earthlink is launching a number of cityscale WiFi 
nets
as we speak, which is why I'm asking. Has this in any way, even subliminally,
been influential in the shaping of your opinions about P2P for content
distribution? I know that it would affect my views, a whole lot, since the
prospects for WiFi's shared upstream capabilities to improve are slim to none in
the short to intermediate terms. Whereas, CM and FTTx are known to raise their
down and up offerings periodically, gated only by their usual game of chicken
where each watches to see who'll be first.

Frank 

On Mon Jan  8 22:26 , Gian Constantine  sent:

My contention is simple. The content providers will not allow P2P video as a
legal commercial service anytime in the near future. Furthermore, most ISPs are
going to side with the content providers on this one. Therefore, discussing it at
this point in time is purely academic, or more so, diversionary.

Personally, I am not one for throttling high-use subscribers. Outside of the fine
print, which no one reads, they were sold a service of X kbps down and Y kbps up.
I could not care less how, when, or how often they use it. If you paid for it,
burn it up.

I have questions as to whether or not P2P video is really a smart distribution
method for a service provider who controls the access medium. Outside of being a
service provider, I think the economic model is weak, when there can be little
expectation of a large-scale take rate.

Ultimately, my answer is: we're not there yet. The infrastructure isn't there.
The content providers aren't there. The market isn't there. The product needs a
motivator. This discussion has been putting the cart before the horse.

A lot of big-picture pieces are completely overlooked. We fail to question
whether or not P2P sharing is a good method of delivering the product. There are
a lot of factors which play into this. Unfortunately, more interest has been paid
to the details of this delivery method than has been paid to whether or not the
method is even worthwhile.

From a big picture standpoint, I would say P2P distribution is a non-starter, too
many reluctant parties to appease. From a detail standpoint, I would say P2P
distribution faces too many hurdles in existing network infrastructure to be
justified. Simply reference the discussion of upstream bandwidth caps and you
will have a wonderful example of those hurdles.

Gian Anthony Constantine
Senior Network Design Engineer
Earthlink, Inc.
On Jan 8, 2007, at 9:49 PM, Thomas Leavitt wrote:

So, kind of back to the original question: what is going to be the reaction of
your average service provider to the presence of an increasing number of people
sucking down massive amounts of video and spitting it back out again... nothing?
throttling all traffic of a certain type? shutting down customers who exceed
certain thresholds? or just throttling their traffic? massive upgrades of
internal network hardware? Is it your contention that there's no economic model,
given the architecture of current networks, which would generate enough revenue
to offset the cost of traffic generated by P2P video?
Thomas
Gian Constantine wrote: There may have been a disconnect on my part, or at
least, a failure to disclose my position. I am looking at things from a provider
standpoint, whether as an ISP or a strict video service provider.
I agree with you. From a consumer standpoint, a trickle or off-peak download
model is the ideal low-impact solution to content delivery. And absolutely, a
500GB drive would almost be overkill on space for disposable content encoded in
H.264. Excellent SD (480i) content can be achieved at ~1200 to 1500kbps,
resulting in about a 1GB file for a 90 minute title. HD is almost out of the
question for internet download, given good 720p at ~5500kbps, resulting in a
roughly 3.7GB file for a 90 minute title.
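
A quick sanity check on those figures (a back-of-the-envelope sketch in Python;
the bitrates and running time are the ones quoted above):

    def file_size_gb(bitrate_kbps, minutes):
        """Approximate file size in gigabytes (10^9 bytes)."""
        bits = bitrate_kbps * 1000 * minutes * 60
        return bits / 8 / 1e9

    print(file_size_gb(1500, 90))   # SD at ~1.5 Mbps   -> ~1.0 GB
    print(file_size_gb(5500, 90))   # 720p at ~5.5 Mbps -> ~3.7 GB
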
Service providers wishing to provide this service to their customers may see
some success where they control the access medium (copper loop, coax, FTTH).
Offering such a service to customers outside of this scope would prove very
expensive, and likely, would never see a return on the investment without
extensive peering arrangements. Even then, distribution rights would be very
difficult to attain without very deep pockets and crippling revenue sharing. The
studios really dislike the idea of transmission 

Re: Network end users to pull down 2 gigabytes a day, continuously?

2007-01-10 Thread Michal Krsek


Hi Marshall,


- the largest channel has 1.8% of the audience
- 50% of the audience is in the largest 2700 channels
- the least watched channel has ~ 10 simultaneous viewers
- the multicast bandwidth usage would be 3% of the unicast.
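
For what it's worth, a unicast-to-multicast ratio like that falls straight out
of the per-channel audience counts: unicast sends one stream per viewer,
multicast one stream per active channel. A minimal sketch in Python (the
audience figures below are hypothetical, purely to show the arithmetic, not
Marshall's data):

    def multicast_vs_unicast(viewers_per_channel):
        unicast = sum(viewers_per_channel)                        # one stream per viewer
        multicast = sum(1 for v in viewers_per_channel if v > 0)  # one per active channel
        return multicast / unicast

    channels = [50, 20, 3, 1, 1]   # hypothetical audiences for five channels
    print(f"multicast needs {multicast_vs_unicast(channels):.0%} of the unicast bandwidth")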


I'm a bit skeptical about the future of channels. To make money from the long 
tail, you have to adapt your distribution to the user's needs. It is not 
only format, codec ... but also time frame. You can organise your programs 
into channels, but they will not run simultaneously for all users. I want 
to control my TV; I don't want my TV to jockey my life.


For distribution, you as the content owner have to help the ISP find the 
right way to distribute your content. For example: having a distribution center 
in a Tier 1 ISP's network will make money from the Tier 2 ISPs connected 
directly to that Tier 1. Probably, having a CDN (your own, or a paid service) 
will be the only way to do large-scale non-synchronous programming.


   Regards
   Michal




Re: Network end users to pull down 2 gigabytes a day, continuously?

2007-01-10 Thread Richard Naylor


At 08:40 p.m. 9/01/2007 -0500, Gian Constantine wrote:
It would not be any easier. The negotiations are very complex. The issue 
is not one of infrastructure capex. It is one of jockeying between content 
providers (big media conglomerates) and the video service providers (cable 
companies).


We're seeing a degree of co-operation in this area. It's being driven by the 
market - see below.


snip
On Jan 9, 2007, at 7:57 PM, Bora Akyol wrote:
An additional point to consider is that it takes a lot of effort and money
to get a channel allocated to your content in a cable network.

This is much easier when TV is being distributed over the Internet.


The other, bigger driver is that for most broadcasters (both TV and Radio), 
advertising revenues are flat, *except* in the on-line area. So they are 
chasing on-line growth like crazy. Typically on-line revenues now make up 
around 25% of income.


So broadcasters are reacting and developing quite large systems for 
delivering content both new and old. We're seeing these as a mixture of 
live streams, on-demand streams, on-demand downloads and torrents. 
Basically, anything that works and is reliable and can be scaled. (we 
already do geographic distribution and anycast routing).


And the broadcasters won't pay flash transit charges. They are doing this 
stuff from within existing budgets. They will put servers in different 
countries if it makes financial sense. We have servers in the USA, and 
their biggest load is non-peering NZ-based ISPs.


And broadcasters aren't the only source of large content. My estimate is 
that they are only 25% of the source. Somewhere last year I heard John 
Chambers say that many corporates are seeing 500% growth in LAN traffic - 
fueled by video.


We do outside webcasting - to give you an idea of traffic, when we get a 
fiber connex, we allow for 6GBytes per day between an encoder and the 
server network - per programme. We often produce several different 
programmes from a site in different languages etc. Each one is 6GB. If we 
don't have fiber, it scales down to about 2GB per programme. (on fiber we 
crank out a full 2Mbps Standard Def stream, on satellite we only get 2Mbps 
per link). I have a chart by my phone that gives the minute/hour/day/month 
traffic impact of a whole range of streams and refer to it every day. Oh - 
we can do 1080i on demand and can and do produce content in that format. 
They're 8Mbps streams. Not many viewers tho :-)   We're close to being able 
to webcast it live.
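
For anyone who wants the same kind of cheat sheet, the arithmetic behind such a 
chart is simple enough to generate (a minimal Python sketch; the bitrates listed 
are just examples, not Rich's actual chart):

    PERIODS = {"minute": 60, "hour": 3600, "day": 86400, "month (30d)": 86400 * 30}

    def traffic_chart(bitrates_kbps):
        """Print GB transferred per period for a continuous stream at each bitrate."""
        print(f"{'bitrate':>11} | " + " | ".join(f"{p:>13}" for p in PERIODS))
        for kbps in bitrates_kbps:
            cells = [f"{kbps * 1000 / 8 * secs / 1e9:10.2f} GB" for secs in PERIODS.values()]
            print(f"{kbps:>6} kbps | " + " | ".join(cells))

    traffic_chart([128, 500, 2000, 5500, 8000])   # e.g. radio, low video, SD, 720p, 1080i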


We currently handle 50+ radio stations and 12 TV stations, handling around 
1.5 to 2 million players a month, in a country with a population of 
4 million. But then my stats could be lying..


Rich
(long time lurker)




Re: Network end users to pull down 2 gigabytes a day, continuously?

2007-01-10 Thread Michael . Dillon

 How many channels can you get on your (terrestrial) broadcast receiver?

There are about 30 channels broadcast free-to-air
on digital freeview in the UK. I only have so many
hours in the day so I never have a problem in finding
something. Some people are TV junkies or they only
want some specific content so they get satellite dishes.
Any Internet TV service has a limited market because
it competes head-on with free-to-air and satellite
services. And it is difficult to plug Internet TV into
your existing TV setup.

--Michael Dillon



Re: Network end users to pull down 2 gigabytes a day, continuously?

2007-01-10 Thread Mikael Abrahamsson


On Tue, 9 Jan 2007, [EMAIL PROTECTED] wrote:

between handling 30K unicast streams, and 30K multicast streams that 
each have only one or at most 2-3 viewers?


My opinion on the downside of video multicast is that if you want it 
realtime, your SLA figures on acceptable packet loss go down from 
fractions of a percent into thousandths of a percent, at least with 
current implementations of video.


Imagine internet multicast and having customers complain about bad video 
quality, and trying to chase down that last 1/10 packet loss that 
makes people's video pixelate every 20-30 minutes, when the video stream 
doesn't even originate in your network?


For multicast video to be easier to implement we need more robust video 
codecs that can handle the jitter and packet loss that are currently present 
in networks and are handled acceptably by TCP for unicast.
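
To put rough numbers on that (a sketch with illustrative parameters, not figures 
from Mikael's post: a 4 Mbps stream in ~1350-byte packets, with one visible 
artifact per half hour taken as the pain threshold):

    stream_bps = 4_000_000                              # 4 Mbps video stream
    payload_bytes = 1350                                # typical UDP/RTP payload
    pkts_per_sec = stream_bps / (payload_bytes * 8)     # ~370 packets/s
    tolerable = 1 / (30 * 60 * pkts_per_sec)            # <= 1 lost packet per 30 min
    print(f"{pkts_per_sec:.0f} pkt/s -> tolerable loss ~{tolerable:.5%}")
    # ~0.00015%, i.e. thousandths of a percent, versus the ~0.1% that a
    # TCP download absorbs without the user ever noticing.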


--
Mikael Abrahamsson    email: [EMAIL PROTECTED]


Re: Network end users to pull down 2 gigabytes a day, continuously?

2007-01-10 Thread Marshall Eubanks


On Jan 10, 2007, at 5:42 AM, Mikael Abrahamsson wrote:



On Tue, 9 Jan 2007, [EMAIL PROTECTED] wrote:

between handling 30K unicast streams, and 30K multicast streams  
that each have only one or at most 2-3 viewers?


My opinion on the downside of video multicast is that if you want  
it realtime your SLA figures on acceptable packet loss goes down  
from fractions of a percent into the thousands of a percent, at  
least with current implementations of video.




Actually, this is true with unicast as well.

This can (I think) largely be handled by a fairly moderate amount of  
Forward Error Correction.


Regards
Marshall


Imagine internet multicast and having customers complain about bad  
video quality and trying to chase down that last 1/10 packet  
loss that makes peoples video pixelate every 20-30 minutes, and the  
video stream doesn't even originate in your network?


For multicast video to be easier to implement we need more robust  
video codecs that can handle jitter and packet loss that are  
currently present in networks and handled acceptably by TCP for  
unicast.


--
Mikael Abrahamsson    email: [EMAIL PROTECTED]




Re: Internet Video: The Next Wave of Massive Disruption to the US Peering Ecosystem (v1.2)

2007-01-10 Thread Brandon Butterworth

 Then that wouldn't be enough since the other Tier 1's would need to
 upgrade their peering infrastructure to handle the larger peering
 links (n*10G), having to argue to their CFO that they need to do it so
 that their competitors can support the massive BW customers.

Someone will take the business, so that traffic is coming
regardless; they can either be that peer or be the source
with the cash. If they can't do either then they're not
in business. I hope they wouldn't ignore it congesting
their existing peers (I know...)

 Then even if the peers all upgraded the peering gear at the same time,
 the backbones would have to be upgraded as well to get that traffic
 out of the IXes and out to the eyeball networks.

The Internet doesn't scale, turn it off

brandon


Re: Network end users to pull down 2 gigabytes a day, continuously?

2007-01-10 Thread Simon Lockhart

On Wed Jan 10, 2007 at 09:43:11AM +, [EMAIL PROTECTED] wrote:
 And it is difficult to plug Internet TV into your existing TV setup.

Can your average person plug in a cable / satellite / terrestrial box (in the UK,
the only mainstream option here for self-install is terrestrial)? Power,
TV, and antenna? Then why can't they plug in Power, TV & phone line? That's
where IPTV STBs are going...

Simon


Re: Network end users to pull down 2 gigabytes a day, continuously?

2007-01-10 Thread Gian Constantine
Many of the small carriers, who are doing IPTV in the U.S., have  
acquired their content rights through a consortium, which has since  
closed its doors to new membership.


I cannot stress this enough: content is the key to a good industry- 
changing business model. Broad appeal content will gain broad  
interest. Broad interest will change the playing field and compel  
content providers to consider alternative consumption/delivery models.


The ILECs are going to do it. They have deep pockets. Look at how  
quickly they were able to get franchising laws adjusted to allow them  
to offer video.


Gian Anthony Constantine
Senior Network Design Engineer
Earthlink, Inc.

On Jan 10, 2007, at 2:30 AM, Christian Kuhtz wrote:


Marshall,

I completely agree, and due diligence on business models will show  
that fact very clearly.  And nothing much has changed here in terms  
of substance over the last 4+ yrs either.  Costs and opportunities  
have changed or evolved rather, but not the mechanics.


Infrastructure capital is very much the gating factor in every  
major video distribution infrastructure (and the reason why DOCSIS  
3.0 is just such a neato thing).  The carriage deals are merely  
table stakes, and that doesn't mean they're easy.  They are  
obtainable.


And some business models are just fundamentally broken.

Examples of infrastructure costs, such as the size of CSAs or the cost of  
upgrading CPE, are a far bigger deal than carriage.  And if you can't  
get into RTs in an ILEC colo arrangement, that doesn't per se  
globally invalidate business models, but rather provides unique  
challenges and limitations on a given specific business model.


What has changed is that ppl are actually 'doing it'.  And that  
proves that several models are viable for funding in all sorts of  
flavors and risks.


IPTV is fundamentally subject to the analog fallacies of VoIP  
replacing 1FR/1BR service on a 1:1 basis (toll arbitrage or anomalies  
aside).  There seems to be plenty of that.  A new IP service  
offering no unique features over specialized and depreciated  
infrastructure will not be viable until commoditized, and not at an  
early maturity level like where IPTV is today.


Unless an IPTV service offers a compelling cost advantage, mass  
adoption will not occur.  And any cost increase will have to be  
justifiable to consumers, and that cannot be underestimated.


But some just continue to ignore those fundamentals, and those  
business models will fail.  And we should be thankful for that  
self-cleansing action of a functioning market.


Enough rambling after a long day at CES, I suppose.  Thanks for  
reading this far.


Best regards,
Christian

--
Sent from my BlackBerry.

-Original Message-
From: Marshall Eubanks [EMAIL PROTECTED]
Date: Wed, 10 Jan 2007 01:52:06
To:Gian Constantine [EMAIL PROTECTED]
Cc:Bora Akyol [EMAIL PROTECTED],Simon Lockhart  
[EMAIL PROTECTED], [EMAIL PROTECTED],nanog@merit.edu
Subject: Re: Network end users to pull down 2 gigabytes a day,  
continuously?




On Jan 9, 2007, at 8:40 PM, Gian Constantine wrote:


It would not be any easier. The negotiations are very complex. The
issue is not one of infrastructure capex. It is one of jockeying
between content providers (big media conglomerates) and the video
service providers (cable companies).


Not necessarily. Depends on your business model.

Regards
Marshall



Gian Anthony Constantine
Senior Network Design Engineer
Earthlink, Inc.


On Jan 9, 2007, at 7:57 PM, Bora Akyol wrote:



Simon

An additional point to consider is that it takes a lot of effort and money
to get a channel allocated to your content in a cable network.

This is much easier when TV is being distributed over the Internet.



-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On
Behalf Of Simon Lockhart
Sent: Tuesday, January 09, 2007 2:42 PM
To: [EMAIL PROTECTED]
Cc: nanog@merit.edu
Subject: Re: Network end users to pull down 2 gigabytes a
day, continuously?


On Tue Jan 09, 2007 at 07:52:02AM +,
[EMAIL PROTECTED] wrote:

Given that the broadcast model for streaming content
is so successful, why would you want to use the
Internet for it? What is the benefit?


How many channels can you get on your (terrestrial) broadcast
receiver?

If you want more, your choices are satellite or cable. To get
cable, you
need to be in a cable area. To get satellite, you need to
stick a dish on
the side of your house, which you may not want to do, or may
not be allowed
to do.

With IPTV, you just need a phoneline (and be close enough to
the exchange/CO
to get decent xDSL rate). In the UK, I'm already delivering
40+ channels over
IPTV (over inter-provider multicast, to any UK ISP that wants it).

Simon












Re: Network end users to pull down 2 gigabytes a day, continuously?

2007-01-10 Thread Alexander Harrowell


On 1/10/07, Simon Lockhart [EMAIL PROTECTED] wrote:


On Wed Jan 10, 2007 at 09:43:11AM +, [EMAIL PROTECTED] wrote:
 And it is difficult to plug Internet TV into your existing TV setup.

Can your average person plug in a cable / satellite / terrestrial box (in the UK,
the only mainstream option here for self-install is terrestrial)? Power,
TV, and antenna? Then why can't they plug in Power, TV & phone line? That's
where IPTV STBs are going...

Simon



Especially as more and more ISPs/telcos hand out WLAN boxen of various
kinds - after all, once you have some sort of Linux (usually)
networked appliance in the user's premises, it's quite simple to
deploy more services (hosted VoIP, IPTV, media centre, connected
storage, maybe SIP/Asterisk..) on top of that.

Slingbox-like features and mobile-world things like UMA are also
pushing us that way.


Re: Network end users to pull down 2 gigabytes a day, continuously?

2007-01-10 Thread Gian Constantine

Ah-ha. You are mistaken. :-)

My focus is next-gen broadband and video. The wifi guys have their  
own department.


Good try, though. :-)

Personally, I am against the peer-to-peer method for business  
reasons, not technical ones. It will be difficult to get blessed by  
the content providers and painful to support (high opex).


I have confidence in creative engineers. I am sure any one of us  
could come up with a workable solution for P2P given the time and  
proper motivation.


All in all, P2P is really limited to a VoD model. It is hard to say  
whether or not VoD would ever become such an important service over  
the Internet, as to press content providers into an agreeable nature.


Gian Anthony Constantine
Senior Network Design Engineer
Earthlink, Inc.

On Jan 10, 2007, at 3:25 AM, Frank Coluccio wrote:



Gian wrote:

From a big picture standpoint, I would say P2P distribution is a  
non-starter,
too many reluctant parties to appease. From a detail standpoint, I  
would say P2P
distribution faces too many hurdles in existing network  
infrastructure to be
justified. Simply reference the discussion of upstream bandwidth  
caps and you

will have a wonderful example of those hurdles.

Speaking about upstream hurdles, just out of curiosity (since this  
is merely a
diversionary discussion at this point;) ...  wouldn't peer-to-peer  
be the LEAST
desirable approach for an SP that is launching WiFi nets as its  
primary first
mile platform? I note that Earthlink is launching a number of  
cityscale WiFi nets
as we speak, which is why I'm asking. Has this in any way, even  
subliminally,

been influential in the shaping of your opinions about P2P for content
distribution? I know that it would affect my views, a whole lot,  
since the
prospects for WiFi's shared upstream capabilities to improve are  
slim to none in
the short to intermediate terms. Whereas, CM and FTTx are known to  
raise their
down and up offerings periodically, gated only by their usual game  
of chicken

where each watches to see who'll be first.

Frank

On Mon Jan  8 22:26 , Gian Constantine  sent:

My contention is simple. The content providers will not allow P2P video as a
legal commercial service anytime in the near future. Furthermore, most ISPs are
going to side with the content providers on this one. Therefore, discussing it at
this point in time is purely academic, or more so, diversionary.

Personally, I am not one for throttling high-use subscribers. Outside of the fine
print, which no one reads, they were sold a service of X kbps down and Y kbps up.
I could not care less how, when, or how often they use it. If you paid for it,
burn it up.

I have questions as to whether or not P2P video is really a smart distribution
method for a service provider who controls the access medium. Outside of being a
service provider, I think the economic model is weak, when there can be little
expectation of a large-scale take rate.

Ultimately, my answer is: we're not there yet. The infrastructure isn't there.
The content providers aren't there. The market isn't there. The product needs a
motivator. This discussion has been putting the cart before the horse.

A lot of big-picture pieces are completely overlooked. We fail to question
whether or not P2P sharing is a good method of delivering the product. There are
a lot of factors which play into this. Unfortunately, more interest has been paid
to the details of this delivery method than has been paid to whether or not the
method is even worthwhile.

From a big picture standpoint, I would say P2P distribution is a non-starter, too
many reluctant parties to appease. From a detail standpoint, I would say P2P
distribution faces too many hurdles in existing network infrastructure to be
justified. Simply reference the discussion of upstream bandwidth caps and you
will have a wonderful example of those hurdles.

Gian Anthony Constantine
Senior Network Design Engineer
Earthlink, Inc.

On Jan 8, 2007, at 9:49 PM, Thomas Leavitt wrote:

So, kind of back to the original question: what is going to be the reaction of
your average service provider to the presence of an increasing number of people
sucking down massive amounts of video and spitting it back out again... nothing?
throttling all traffic of a certain type? shutting down customers who exceed
certain thresholds? or just throttling their traffic? massive upgrades of
internal network hardware? Is it your contention that there's no economic model,
given the architecture of current networks, which would generate enough revenue
to offset the cost of traffic generated by P2P video?

Thomas

Gian Constantine wrote: There may have been a disconnect on my part, or at
least, a failure to disclose my position. I am looking at things from a provider
standpoint, whether as an ISP or a strict video service provider.

I agree with you. From a consumer standpoint, a trickle or off-peak download
model is 

Re: Network end users to pull down 2 gigabytes a day, continuously?

2007-01-10 Thread Gian Constantine

All H.264?

Gian Anthony Constantine
Senior Network Design Engineer
Earthlink, Inc.

On Jan 10, 2007, at 4:41 AM, Richard Naylor wrote:



At 08:40 p.m. 9/01/2007 -0500, Gian Constantine wrote:
It would not be any easier. The negotiations are very complex. The  
issue is not one of infrastructure capex. It is one of jockeying  
between content providers (big media conglomerates) and the video  
service providers (cable companies).


We're seeing a degree of co-operation in this area. Its being  
driven by the market. - see below.


snip
On Jan 9, 2007, at 7:57 PM, Bora Akyol wrote:
An additional point to consider is that it takes a lot of effort and money
to get a channel allocated to your content in a cable network.

This is much easier when TV is being distributed over the Internet.


The other bigger driver, is that for most broadcasters (both TV and  
Radio), advertising revenues are flat, *except* in the on-line  
area. So they are chasing on-line growth like crazy. Typically on- 
line revenues now make up around 25% of income.


So broadcasters are reacting and developing quite large systems for  
delivering content both new and old. We're seeing these as a  
mixture of live streams, on-demand streams, on-demand downloads and  
torrents. Basically, anything that works and is reliable and can be  
scaled. (we already do geographic distribution and anycast routing).


And the broadcasters won't pay flash transit charges. They are  
doing this stuff from within existing budgets. They will put  
servers in different countries if it makes financial sense. We have  
servers in the USA, and their biggest load is non-peering NZ  based  
ISPs.


And broadcasters aren't the only source of large content. My  
estimate is that they are only 25% of the source. Somewhere last  
year I heard John Chambers say that many corporates are seeing 500%  
growth in LAN traffic - fueled by video.


We do outside webcasting - to give you an idea of traffic, when we  
get a fiber connex, we allow for 6GBytes per day between an encoder  
and the server network - per programme. We often produce several  
different programmes from a site in different languages etc. Each  
one is 6GB. If we don't have fiber, it scales down to about 2GB per  
programme. (on fiber we crank out a full 2Mbps Standard Def stream,  
on satellite we only get 2Mbps per link). I have a chart by my  
phone that gives the minute/hour/day/month traffic impact of a  
whole range of streams and refer to it every day. Oh - we can do  
1080i on demand and can and do produce content in that format.  
They're 8Mbps streams. Not many viewers tho :-)   We're close to  
being able to webcast it live.


We currently handle 50+ radio stations and 12 TV stations, handling  
around 1.5 to 2million players a month, in a country with a  
population of 4million. But then my stats could be lying..


Rich
(long time lurker)






Re: Network end users to pull down 2 gigabytes a day, continuously?

2007-01-10 Thread Gian Constantine
Sounds a little like low buffering and sparse I-frames, but I'm no  
MPEG expert. :-)


Gian Anthony Constantine
Senior Network Design Engineer
Earthlink, Inc.

On Jan 10, 2007, at 5:42 AM, Mikael Abrahamsson wrote:



On Tue, 9 Jan 2007, [EMAIL PROTECTED] wrote:

between handling 30K unicast streams, and 30K multicast streams  
that each have only one or at most 2-3 viewers?


My opinion on the downside of video multicast is that if you want  
it realtime your SLA figures on acceptable packet loss goes down  
from fractions of a percent into the thousands of a percent, at  
least with current implementations of video.


Imagine internet multicast and having customers complain about bad  
video quality and trying to chase down that last 1/10 packet  
loss that makes peoples video pixelate every 20-30 minutes, and the  
video stream doesn't even originate in your network?


For multicast video to be easier to implement we need more robust  
video codecs that can handle jitter and packet loss that are  
currently present in networks and handled acceptably by TCP for  
unicast.


--
Mikael Abrahamsson    email: [EMAIL PROTECTED]




Re: Network end users to pull down 2 gigabytes a day, continuously?

2007-01-10 Thread Michael . Dillon

   Then why can't they plug in Power, TV & phone line? That's
  where IPTV STBs are going...

OK, I can see that you could use such a set-top box to
sell broadband to households which would not otherwise 
buy Internet services. But that is a niche market.

 Especially as more and more ISPs/telcos hand out WLAN boxen of various
 kinds - after all, once you have some sort of Linux (usually)
 networked appliance in the user's premises, it's quite simple to
 deploy more services (hosted VoIP, IPTV, media centre, connected
 storage, maybe SIP/Asterisk..) on top of that.

He didn't say that his STB had an Ethernet port.
And I'm not aware of any generic Linux box that can
be used to deploy additional services other than
do-it-yourself. And that too is a niche market.

Also, note that the proliferation of boxes, each
needing its own power connection and some place 
to sit, is causing its own problems in the household.
Stacking boxes is not straightforward because some have
air vents on top and others are not flat on top.
The TV people have not learned the lessons
that the hi-fi component people learned back in
the 1960s.

--Michael Dillon



Re: Network end users to pull down 2 gigabytes a day, continuously?

2007-01-10 Thread Alexander Harrowell


On 1/10/07, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:


   Then why can't they plug in Power, TV & phone line? That's
  where IPTV STBs are going...

OK, I can see that you could use such a set-top box to
sell broadband to households which would not otherwise
buy Internet services. But that is a niche market.

 Especially as more and more ISPs/telcos hand out WLAN boxen of various
 kinds - after all, once you have some sort of Linux (usually)
 networked appliance in the user's premises, it's quite simple to
 deploy more services (hosted VoIP, IPTV, media centre, connected
 storage, maybe SIP/Asterisk..) on top of that.

He didn't say that his STB had an Ethernet port.
And I'm not aware of any generic Linux box that can
be used to deploy additional services other than
do-it-yourself. And that too is a niche market.



For example: France Telecom's consumer ISP in France (Wanadoo) is
pushing out lots and lots of WLAN boxes to its subs, which it brands
Liveboxes. As well as the router, they also carry their carrier-VoIP
and IPTV STB functions. If they can be remotely managed, then they are
a potential platform for further services beyond that. See also 3's
jump into Slingboxes.


Also, note that the proliferation of boxes, each
needing its own power connection and some place
to sit, is causing its own problems in the household.
Stacking boxes is not straightforward because some have
air vents on top and others are not flat on top.
The TV people have not learned the lessons
that the hi-fi component people learned back in
the 1960s.



Analogous to the question of whether digicams, iPods etc will
eventually be absorbed by mobile devices. Will convergence on IP,
which tends towards concentration of functions on a common box,
outpace the creation of new boxes? CES this year saw a positive rash
of home server products.


RFQ: IP service in U.K. for U.S. hosting company

2007-01-10 Thread nealr



 Ladies & Gentlemen,


  A while ago I posted some questions about transport from the 
U.S., peering in London and received very good technical feedback from 
this group.



  We've determined that the best option for our problem is not 
peering, but the purchase of service from a provider with a good network 
in the U.K. and on the continent. We'd like to receive proposals based 
on this requirement.


 The customer has equipment colocated with Sprint fiber in 
Champaign Illinois and this appears to be the only sensible route out of 
their network. Existing IP service is from Sprint and McLeod. The 
current network is cisco 7507s and we don't envision any changes here 
beyond an upgrade from RSP4 to RSP8 at some point in the future. We 
expect to receive a DS3 level interface but based on traffic we think 
we'd need perhaps one third of its total capacity.



 Feel free to email me if you need more information.


--
mailto:[EMAIL PROTECTED] // IM:layer3arts
voice: 402 408 5951
cell : 402 301 9555
fax  : 402 408 6902



Re: Network end users to pull down 2 gigabytes a day, continuously?

2007-01-10 Thread Andre Oppermann


Alexander Harrowell wrote:

Analogous to the question of whether digicams, iPods etc will
eventually be absorbed by mobile devices.


I guess eventually it will go the other way around as well.  I was
very surprised not to see Steve Jobs announce an iPod Nano-Phone.
An iPod Nano with bare-bones GSM functionality as provided by one
of the recent single-chip developments from TI and SiLabs AeroFon
would fit nicely and cover 85% of all use cases, that is, voice and
SMS.  True mass-market.  Pop in your SIM and you're ready to rock.
A slightly enhanced click-wheel would make a nice input device too
(and no, do not emulate a rotary phone).  All together would cost
only $15 more than the base iPod.  GSM single chip is really cheap.

Yeah, I'm a dreamer.

--
Andre


Re: Network end users to pull down 2 gigabytes a day, continuously?

2007-01-10 Thread Sam Stickland


Will Hargrave wrote:

[EMAIL PROTECTED] wrote:

  

I have to admit that I have no idea how BT charges
ISPs for wholesale ADSL. If there is indeed some kind
of metered charging then Internet video will be a big
problem for the business model. 


They vary, it depends on what pricing model has been selected.

http://tinyurl.com/yjgsum has BT Central pipe pricing. Note those are
prices, not telephone numbers. ;-)

If you convert into per-megabit charges, it comes out at least an order of
magnitude greater than the cost of transit, and at least a couple of orders of
magnitude more than peering/partial transit.
  
A cursory look at the document doesn't seem to show any prices above 
622Mbps, but for that you're looking at about £160,000 a year or 
£21/Mbps/month.


2GB per day equates to about 190Kbps (assuming a perfectly even distribution 
pattern, which of course would never happen), which would be £3.98 a 
month per user. In reality I imagine that you could see usage peaking at 
about 3 times the average, or considerably greater if large flash crowd 
events occur.
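
The arithmetic behind those figures, for anyone who wants to plug in their own 
numbers (a minimal Python sketch using the ~£21/Mbps/month central-pipe price 
quoted above):

    gb_per_day = 2
    price_per_mbps_month = 21.0                  # GBP, from the 622Mbps tier above

    avg_mbps = gb_per_day * 8 / 86400 * 1000     # GB/day -> average Mbit/s
    print(f"average rate : {avg_mbps * 1000:.0f} kbps")
    print(f"monthly cost : GBP {avg_mbps * price_per_mbps_month:.2f} per user")
    print(f"at 3x peak   : GBP {3 * avg_mbps * price_per_mbps_month:.2f} per user")
    # ~185 kbps and ~GBP 3.89/month; the small difference from the ~190 kbps /
    # GBP 3.98 above is just rounding (and GB vs GiB).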


I would say that in the UK market today, those sorts of figures are 
enough to destroy current margins, but certainly not high enough that 
the costs couldn't be passed onto the end user as part of an Internet 
TV package.

p2p is no panacea to get around these charges; in the worst case p2p
traffic will just transit your central pipe twice, which means the
situation is worse with p2p not better.

For a smaller UK ISP, I do not know if there is a credible wholesale LLU
alternative to BT.
  
Both Bulldog (CW) and Easynet sell wholesale LLU via an L2TP handoff. 
It's been a while since I was in that game so any prices I have will be 
out of date by now, but IIRC both had the option to pay them per line 
_or_ for a central pipe style model. The per line prices were just about 
low enough to remain competitive, with the central pipe being cheaper for 
volume (but of course, only because you'd currently need to buy far less 
bandwidth than the total of all the lines in use; most ADSL users 
consume a surprisingly small amount of bandwidth and they aggregate very 
well).

Note this information is of course completely UK-centric. A more
regionalised model (21CN?!) would change the situation.

Will

  

S


Re: RFQ: IP service in U.K. for U.S. hosting company

2007-01-10 Thread Michael . Dillon

We've determined that the best option for our problem is not 
 peering, but the purchase of service from a provider with a good network 

 in the U.K. and on the continent. We'd like to receive proposals based 
 on this requirement.

Then you should write up an RFP and send it to
companies that meet your requirement. Otherwise you
risk only getting quotes from people who troll the
mailing lists, desperate for business.

--Michael Dillon



Re: Internet Video: The Next Wave of Massive Disruption to the US Peering Ecosystem (v1.2)

2007-01-10 Thread William B. Norton


Why are folks turning away 10G orders?

I forgot to mention a couple other issues that folks brought up:
4) the 100G equipment won't be standardized for a few years yet, so
folks will continue to trunk which presents its own challenges over
time.
5) the last mile infrastructure may not be able to/willing to accept
the competing video traffic. There was some disagreement among the
group I discussed this point with however.  A few of the cable
operations guys said there is BW and the biz guys don't want to 'give
it away' when there is a potential to charge or block (or rather
mitigate the traffic as they do now).

My favorite data point was from Geoff Huston, who said that the cable
companies are clinging to their 1998 business model as if it were
relevant in a world where peer-2-peer distribution of large
objects has already won. He believes that the sophisticated
peer-2-peer is encrypting and running over ports no one will shut off,
the secure shell ports that are required for VPNs.

So give up, be the best dumb pipes you can be I guess.

Bill

On 1/10/07, Brandon Butterworth [EMAIL PROTECTED] wrote:


 Then that wouldn't be enough since the other Tier 1's would need to
 upgrade their peering infrastructure to handle the larger peering
 links (n*10G), having to argue to their CFO that they need to do it so
 that their competitors can support the massive BW customers.

Someone will take the business so that traffic is coming
regardless, they can either be that peer or be the source
with the cash. If they can't do either then they're not
in business, I hope they wouldn't ignore it congesting
their existing peers (I know...)

 Then even if the peers all upgraded the peering gear at the same time,
 the backbones would have to be upgraded as well to get that traffic
 out of the IXes and out to the eyeball networks.

The Internet doesn't scale, turn it off

brandon





--
//
// William B. Norton [EMAIL PROTECTED]
// Co-Founder and Chief Technical Liaison, Equinix
// GSM Mobile: 650-315-8635
// Skype, Y!IM: williambnorton


Re: Internet Video: The Next Wave of Massive Disruption to the US Peering Ecosystem (v1.2)

2007-01-10 Thread Jared Mauch

On Wed, Jan 10, 2007 at 09:33:41AM -0800, William B. Norton wrote:
 
 Why are folks turning away 10G orders?
 
 I forgot to mention a couple other issues that folks brought up:
 4) the 100G equipment won't be standardized for a few years yet, so
 folks will continue to trunk which presents its own challenges over
 time.

well, there's a few important issues here:

currently the state-of-the-art is to bundle/balance
n*10G.  While it's possible to do 40G/n*40G in some places, this
is not entirely universal.

	Given the above constraint, delivering 10G/n*10G to
customers requires some investment in your infrastructure
to be able to carry that traffic on your network.  The cost difference
between sonet/sdh ports compared to 10GE is significant here and
continues to be a driving force, imho.

Typically in the past, the tier-1 isps have had a larger
circuit than the customer edge.  eg: I have my OC3, but my provider
network is OC12/OC48.  Now with everyone having 10G since it is
cheap enough, this drives multihoming, routing table size, fib/tcam
and other memory consumption, including the corresponding CPU
cost.
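
The "bundle/balance" mentioned above is typically per-flow hashing: each 5-tuple 
is hashed onto one member of the n*10G bundle so that packets within a flow stay 
in order. A minimal sketch of the idea in Python (illustrative, not any 
particular vendor's algorithm):

    import hashlib

    def member_link(src_ip, dst_ip, src_port, dst_port, proto, n_links):
        """All packets of a flow hash to the same bundle member."""
        key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
        return int.from_bytes(hashlib.md5(key).digest()[:4], "big") % n_links

    # Two flows between the same hosts can land on different 10G members:
    print(member_link("192.0.2.1", "198.51.100.7", 51515, 80, 6, n_links=4))
    print(member_link("192.0.2.1", "198.51.100.7", 51516, 80, 6, n_links=4))

The catch, relevant to 10G customers, is that a single large flow can never use 
more than one 10G member of the bundle.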

 5) the last mile infrastructure may not be able to/willing to accept
 the competing video traffic . There was some disagreement among the
 group I discussed this point with however.  A few of the cable
 operations guys said there is BW and the biz guys don't want to 'give
 it away' when there is a potential to charge or block (or rather
 mitigate the traffic as they do now).

	I suspect this varies depending on how it's done.  Most of
the cable folks are dealing with short enough distances that, as long as
the fiber quality is high enough, they could do 10/40G to the
neighborhood.  The issue becomes the coax side as well as the bandwidth
consumption of those analog users.  Folks don't upgrade their TV or
set-top-box as quickly as they upgrade their computers.  There's also a
significant cost associated with any change and dealing with those
grumpy users if they don't want a STB either.

 My favorite data point was from Geoff Huston who said that the cable
 companies are clinging to their 1998 business model as if it were
 relevent in the world where peer-2-peer for distribution of large
 objects has already won. He believes that the sophisticated
 peer-2-peer is encrypting and running over ports noone will shut off,
 the secure shell ports that are required for VPNs.
 
 So give up, be the best dumb pipes you can be I guess.

	I suspect there's going to be continued separation at the top
as folks see it.  Those that can take on these new 10G and n*10G customers
and deliver the traffic, and those who run into peering and their own
network issues in being able to deliver the bits.  While 100G will ease
some of this, there's still those pesky colo/power issues to deal with,
unless you own your own facility; and even if you do, you may have
months if not years of slowly evolving upgrades to face.  Perhaps
there will be some technology that will help us through this, but
at the same time, perhaps not, and we'll be getting out the huge rolls
of duct tape.  It may not be politics that drives partial-transit/paid 
peering deals, it may just be plain technology.

- Jared

 On 1/10/07, Brandon Butterworth [EMAIL PROTECTED] wrote:
 
  Then that wouldn't be enough since the other Tier 1's would need to
  upgrade their peering infrastructure to handle the larger peering
  links (n*10G), having to argue to their CFO that they need to do it so
  that their competitors can support the massive BW customers.
 
 Someone will take the business so that traffic is coming
 regardless, they can either be that peer or be the source
 with the cash. If they can't do either then they're not
 in business, I hope they wouldn't ignore it congesting
 their existing peers (I know...)
 
  Then even if the peers all upgraded the peering gear at the same time,
  the backbones would have to be upgraded as well to get that traffic
  out of the IXes and out to the eyeball networks.
 
 The Internet doesn't scale, turn it off
 
 brandon
 
 
 
 
 -- 
 //
 // William B. Norton [EMAIL PROTECTED]
 // Co-Founder and Chief Technical Liaison, Equinix
 // GSM Mobile: 650-315-8635
 // Skype, Y!IM: williambnorton

-- 
Jared Mauch  | pgp key available via finger from [EMAIL PROTECTED]
clue++;  | http://puck.nether.net/~jared/  My statements are only mine.


Re: Network end users to pull down 2 gigabytes a day, continuously?

2007-01-10 Thread Petri Helenius


Marshall Eubanks wrote:

Actually, this is true with unicast as well.

This can (I think) largely be handled by a fairly moderate amount of 
Forward Error Correction.


Regards
Marshall
Before streaming meant HTTP-like protocols over port 80, when UDP was 
actually used, we did some experiments with FEC and discovered that 
reasonable interleaving (so that two consecutive packets lost could be 
recovered) and 1:10 FEC resulted in a zero-loss environment in all cases 
we tested.
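
A minimal sketch of that sort of scheme: one XOR parity packet per ten data 
packets (1:10 FEC), with the parity groups interleaved two ways so that two 
consecutive lost packets fall into different groups and can both be rebuilt. 
This is only illustrative Python, not the code from those experiments:

    from functools import reduce

    def xor(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    def parity(group):                      # one parity packet per group of 10
        return reduce(xor, group)

    def recover(group, parity_pkt):
        """Rebuild the single missing packet (None) in a parity group."""
        return reduce(xor, (p for p in group if p is not None), parity_pkt)

    packets = [bytes([i]) * 8 for i in range(20)]    # 20 equal-size packets
    groups = [packets[0::2], packets[1::2]]          # interleave into 2 groups of 10
    parities = [parity(g) for g in groups]

    lost = dict(enumerate(packets)); lost[6] = lost[7] = None   # two consecutive losses
    for g in (0, 1):
        grp = [lost[i] for i in range(g, 20, 2)]
        missing = grp.index(None)
        assert recover(grp, parities[g]) == packets[g + 2 * missing]
    print("both consecutive losses recovered")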


Pete



Re: Network end users to pull down 2 gigabytes a day, continuously?

2007-01-10 Thread Simon Leinen

Alexander Harrowell writes:
 For example: France Telecom's consumer ISP in France (Wanadoo) is
 pushing out lots and lots of WLAN boxes to its subs, which it brands
 Liveboxes. As well as the router, they also carry their carrier-VoIP
 and IPTV STB functions. [...]

Right, and the French ADSL ecosystem mostly seems to be based on these
boxes - Proxad/free.fr has its Freebox, Alice ADSL (Telecom Italia)
the AliceBox, etc.  All these have SCART (peritelevision) TV plugs
in their current incarnations, in addition to the WLAN access points
and phone jacks that previous versions already had.

Personally I don't like this kind of bundling, and I think being able
to choose telephony and video providers independently of ISP is better.
But the business model seems to work in that market.  Note that I
don't have any insight or numbers, just noticing that non-technical
people (friends and family in France) do seem to be capable of
receiving TV over IP (although not over the Internet) - confirming
what Simon Lockhart claimed.

Of course there are still technical issues such as how to connect two
TV sets in different parts of an apartment to a single *box.  (Some
boxes do support two simultaneous video channels depending on
available bandwidth, which is based on the level of unbundling
(degroupage) in the area.)

As far as I know, the French ISPs use IP multicast for video
distribution, although I'm pretty sure that these IP multicast
networks are not connected to each other or to the rest of the
multicast Internet.
-- 
Simon.


Re: Internet Video: The Next Wave of Massive Disruption to the US Peering Ecosystem (v1.2)

2007-01-10 Thread Wayne E. Bouchard

On Wed, Jan 10, 2007 at 09:33:41AM -0800, William B. Norton wrote:
 
 Why are folks turning away 10G orders?

Quite simple...

We've found a fair number of networks with no 10GE equipment and no
budget to add it. Doubtless, some of these don't have OC192 capacity
either. Others have 10G in the offing but are still putting it through
acceptance testing and won't sell it for several more months.

Others will happily sell you a 10GE circuit but then limit you to some
fraction of that circuit because of internal limitations within their
nodes. (I've seen this on more than one network.)

And in any case, some of these don't have the egress capacity either
from the local node to their backbone or to their peers/customers to
be able to swallow that kind of traffic anyway.

Truth be told, there really are not that many networks out there at
present that are capable of accepting a 10G customer who actually
intends to USE 10G. And believe it or not, there are still those out
there that believe that customers aren't going to be able to pass a
full 10G and therefore see no need to offer it at the edge.

Currently, for all but the most intensive users, NxGE or NxOC48 still
seems to be preferred termination. (Often this is also partly a factor
of minimum commits and varying methods of billing.) This is changing
but it's happening more slowly than I would like to see.

My $0.37 (inflation's a pain)

-Wayne

---
Wayne Bouchard
[EMAIL PROTECTED]
Network Dude
http://www.typo.org/~web/


Demand for 10G connections

2007-01-10 Thread Sean Donelan


On Wed, 10 Jan 2007, William B. Norton wrote:

Why are folks turning away 10G orders?


In Hollywood, San Francisco and a few other cities with large concentration
of movie/entertainment industries 10G network connections have been sold 
for at least a year, not necessarily connected to the Internet.


Don't we go through this every other network generation?  I seem to recall 
the NSFNET only being able to handle 22Mbps on its T3 (45Mbps) connections
for a while.  A few years later, another backbone was out of capacity and 
had to stop selling new connections for several months while it caught up.
Later we had the OC12, OC48 bottleneck on switches being used at some 
exchange points and getting GigE connections was a problem.  And so on 
and so on.


We'll have wailing and gnashing of teeth for many months but somehow 
things will balance out again as technology, equipment and revenue catch
up with each other.  And then it will start over again with someone 
wanting 100GE connections, and then 1000GE connections, etc, etc.


Standalone DC power recommendations?

2007-01-10 Thread Deepak Jain



We are going to need to deploy about 800A of -48VDC power equipment in a
datacenter that doesn't supply DC power normally. They will give us a 
three-phase AC power feed that is generator backed.
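
For rough sizing of the AC feed, the arithmetic goes roughly as follows (a 
sketch; the rectifier efficiency, power factor and 208V three-phase service are 
assumptions, not details from this post):

    import math

    dc_amps, dc_volts = 800, 48              # the -48VDC plant load described above
    efficiency, power_factor = 0.90, 0.95    # assumed rectifier efficiency / PF
    line_to_line_v = 208                     # assumed 3-phase service voltage

    dc_kw = dc_amps * dc_volts / 1000                    # ~38.4 kW at the battery bus
    ac_kw = dc_kw / efficiency                           # ~42.7 kW drawn from the feed
    amps = ac_kw * 1000 / (math.sqrt(3) * line_to_line_v * power_factor)
    print(f"{dc_kw:.1f} kW DC -> {ac_kw:.1f} kW AC -> ~{amps:.0f} A per phase")
    # before any N+1 rectifier redundancy or battery-recharge margin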


Can anyone point me in the direction of a decent vendor of DC-power 
plants that are suitable for this size need and/or a ballpark price one 
would expect to pay for the equipment?


Thanks in advance,

Deepak


Re: Internet Video: The Next Wave of Massive Disruption to the US Peering Ecosystem (v1.2)

2007-01-10 Thread Daniel Golding


On Jan 10, 2007, at 12:33 PM, William B. Norton wrote:



Why are folks turning away 10G orders?



Some of this depends on how much you are willing to pay. The issue is  
as much 10G orders at today's transit prices as it is the capacity.  
We're used to paying less per unit for greater capacity, but when  
we're asking networks to sell capacity in chunks as large as the ones  
they use to build their backbones, that may simply not work.


One other issue is that willingness to sell 10G is one vital  
competitive distinguisher in an otherwise largely commodity transit  
market. There have been rumors that older legacy carriers wish to  
punish more agile competitors for daring to steal 10G customers  
away from them, in spite of the fact that those older carriers have  
lots of trouble delivering 10G.


- Dan

Re: Internet Video: The Next Wave of Massive Disruption to the US Peering Ecosystem (v1.2)

2007-01-10 Thread Deepak Jain



One other issue is that willingness to sell 10G is one vital competitive 
distinguisher in an otherwise largely commodity transit market. There 
have been rumors that older legacy carriers wish to punish more agile 
competitors for daring to steal 10G customers away from them, in spite 
of the fact that those older carriers have lots of trouble delivering 10G.




There isn't anything new to that either. I can think of (similar to 
Sean's story) a time when a backbone had all NxDS3 links in their 
network and used to be very upset at the idea of selling OC3s instead of 
NxDS3 (If its good enough for *our* backbone...).


Then this network went ATM on Fore systems boxes with handoffs to Cisco 
routers to leapfrog their competitors... which um, didn't work...


Then they reluctantly went POS and better systems since...

They are still around today after about 6 name changes. I have no 
present-day-knowledge of their network or, really, their performance, so 
I don't want to mention any names.


Deepak


Re: Network end users to pull down 2 gigabytes a day, continuously?

2007-01-10 Thread Richard Naylor


At 08:58 a.m. 10/01/2007 -0500, Gian Constantine wrote:

All H.264?


no - H.264 is only the free stuff. Pretty well it's all Windows Media - 
because of the DRM capabilities. The rights holders are insisting on that.


No DRM = no content. (from the big content houses)

The advantage of WM DRM is that smaller players can add DRM to their 
content quite easily and these folks want to be able to control that space. 
Even when they are part of an International conglomerate, each country 
subsidiary seems to get non-DRM'ed material and repackage it (ie add DRM). 
I understand this is how folks like Sony dish out the rights - on a country 
basis, so each subsidiary gets to define the business rights (ie play 
rights) in their own country space. WM DRM has all of this well defined.


Rich






Anything going on in Atlanta, GA?

2007-01-10 Thread Dean, Matt
Good evening,

 

I'm currently seeing a routing loop in Atlanta, GA.  I've contacted the
ISP with no response yet.  Just curious if anyone knows if there's
something major going on there.  My traceroute is below.

 

Tracing route to shs01.mdeinc.ca [207.210.113.61]

over a maximum of 30 hops:

 

  1     1 ms     2 ms     2 ms  corpnet-rtr01.ba01.mdeinc.ca [10.143.5.1]
  2     5 ms     5 ms     5 ms  mdei-rci-border-rtr01.ba01.mdeinc.ca [10.99.99.5]
  3    17 ms    13 ms    15 ms  10.231.36.1
  4    16 ms    13 ms    13 ms  gw01.bawk.phub.net.cable.rogers.com [66.185.90.193]
  5    13 ms    13 ms    16 ms  gw01.bawk.phub.net.cable.rogers.com [66.185.83.253]
  6    14 ms    13 ms    13 ms  gw01.basp.phub.net.cable.rogers.com [66.185.82.6]
  7    15 ms    15 ms    18 ms  gw02.mtnk.phub.net.cable.rogers.com [66.185.81.209]
  8    15 ms    15 ms    15 ms  66.185.80.41
  9    32 ms    31 ms    32 ms  igw01.vaash.phub.net.cable.rogers.com [66.185.80.190]
 10    36 ms    37 ms    37 ms  POS2-1.ar2.DCA3.gblx.net [208.51.239.201]
 11    51 ms    54 ms    51 ms  NLAYER-COMMUNICATIONS-INC.ge-4-1-0.410.ar4.ATL1.gblx.net [206.41.25.230]
 12   267 ms    76 ms   122 ms  atl-core-a-tgi2-1.gnax.net [209.51.149.105]
 13     *        *        *     Request timed out.
 14    54 ms    50 ms    56 ms  atl-core-a-tgi2-1.gnax.net [209.51.149.105]
 15     *        *        *     Request timed out.
 16    52 ms    53 ms    53 ms  atl-core-a-tgi2-1.gnax.net [209.51.149.105]
 17     *        *        *     Request timed out.
 18    54 ms    64 ms    51 ms  atl-core-a-tgi2-1.gnax.net [209.51.149.105]
 19     *        *        *     Request timed out.
 20    52 ms    53 ms    52 ms  atl-core-a-tgi2-1.gnax.net [209.51.149.105]
 21     *        *        *     Request timed out.
 22    53 ms    53 ms    54 ms  atl-core-a-tgi2-1.gnax.net [209.51.149.105]
 23     *        *        *     Request timed out.
 24    53 ms    51 ms    54 ms  atl-core-a-tgi2-1.gnax.net [209.51.149.105]
 25     *        *        *     Request timed out.
 26    54 ms    51 ms    56 ms  atl-core-a-tgi2-1.gnax.net [209.51.149.105]
 27     *        *        *     Request timed out.
 28    53 ms    66 ms    56 ms  atl-core-a-tgi2-1.gnax.net [209.51.149.105]
 29     *        *        *     Request timed out.
 30    52 ms    52 ms    56 ms  atl-core-a-tgi2-1.gnax.net [209.51.149.105]

 

Trace complete.

 

Thanks in advance.

 

Matt Dean, MCP, MCDST, MCPS, MCNPS

Cremto Inc. Integrated Technology Consulting

530 Adelaide St. West, Unit 6133

Toronto, Ontario M5V 2K7

Canada

(416) 619-0472

[EMAIL PROTECTED] 

 

 



Re: Anything going on in Atlanta, GA?

2007-01-10 Thread William Petrisko

Switch and Data was reporting power issues at 56 Marietta
earlier.  Don't know if it was isolated to their suite, or
more widespread.

bill

On Wed, Jan 10, 2007 at 07:51:36PM -0500, Dean, Matt wrote:
 Good evening,
 
  
 
 I'm currently seeing a routing loop in Atlanta, GA.  I've contacted the
 ISP with no response yet.  Just curious if anyone knows if there's
 something major going on there.  My traceroute is below.
 
  
 
 [... quoted traceroute and signature snipped; see original message above ...]
 
  
 
  
 


RE: Anything going on in Atlanta, GA?

2007-01-10 Thread Randy Epstein

Bill,

 Switch and Data was reporting power issues at 56 Marietta
 earlier.  Don't know if it was isolated to their suite, or
 more widespread.
 
 bill

No issues on 2nd, 3rd or 4th floor.  Not sure about the 6th (where SD is
located.)

There are also separate generators in the building for the various tenants.

Regards,

Randy



i wanna be a kpn peer

2007-01-10 Thread Randy Bush

route-views.oregon-ix.net>sh ip bg 203.10.63.0
BGP routing table entry for 0.0.0.0/0, version 2
Paths: (1 available, best #1, table Default-IP-Routing-Table)
  Not advertised to any peer
  286
134.222.85.45 from 134.222.85.45 (134.222.85.45)
  Origin IGP, localpref 100, valid, external, best
  Community: 286:286 286:3031 286:3809
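
For anyone decoding the output: the longest match route-views appears to have
for the queried prefix is just 0.0.0.0/0, learned from a single AS 286
session, i.e., a default route leaked toward the collector. A minimal sketch
of that check in Python, run against a pasted copy of the output above
(nothing is queried live, and the parsing is an illustrative assumption, not
any route-views API):

# Sketch: given the text of a "show ip bgp <prefix>" lookup, report when the
# longest match is only the default route.  SAMPLE is a trimmed copy of the
# output above.
import re

SAMPLE = """\
BGP routing table entry for 0.0.0.0/0, version 2
Paths: (1 available, best #1, table Default-IP-Routing-Table)
  Not advertised to any peer
  286
    134.222.85.45 from 134.222.85.45 (134.222.85.45)
      Origin IGP, localpref 100, valid, external, best
      Community: 286:286 286:3031 286:3809
"""

def covering_entry(show_ip_bgp_text):
    match = re.search(r"BGP routing table entry for (\S+),", show_ip_bgp_text)
    return match.group(1) if match else None

entry = covering_entry(SAMPLE)
if entry == "0.0.0.0/0":
    print("only a default route covers the queried prefix; someone is leaking 0/0")
else:
    print(f"queried prefix is covered by {entry}")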



Re: i wanna be a kpn peer

2007-01-10 Thread Chris L. Morrow



On Wed, 10 Jan 2007, Randy Bush wrote:


 route-views.oregon-ix.net>sh ip bg 203.10.63.0
 BGP routing table entry for 0.0.0.0/0, version 2

do most folks set up route-views peers as a 'standard customer', or are they
generally on a special-purpose (easy to forget about and screw up) box?

-Chris


Re: i wanna be a kpn peer

2007-01-10 Thread Patrick W. Gilmore


On Jan 10, 2007, at 10:54 PM, Chris L. Morrow wrote:

On Wed, 10 Jan 2007, Randy Bush wrote:


route-views.oregon-ix.net>sh ip bg 203.10.63.0
BGP routing table entry for 0.0.0.0/0, version 2


do most folks set up route-views peers as a 'standard customer', or are they
generally on a special-purpose (easy to forget about and screw up) box?


Or even a special purpose box that intentionally gives an unfiltered view?

I don't think a spurious prefix directly injected into route-views is
proof a network is broken.


--
TTFN,
patrick




Re: Network end users to pull down 2 gigabytes a day, continuously?

2007-01-10 Thread Thomas Leavitt
It seems to me that multi-cast is a technical solution for the bandwidth 
consumption problems precipitated by real-time Internet video broadcast, 
but it doesn't seem to me that the bulk of current (or even future) 
Internet video traffic is going to be amenable to distribution via 
multi-cast - or, at least, separate and apart from whatever happens with 
multi-cast, a huge and growing volume of video traffic will be flowing 
over the 'net...
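
For readers less familiar with the mechanics: a receiver simply joins a group
and the network replicates the stream toward it, so the sender's upstream cost
does not grow with audience size. A minimal receiver sketch in Python; the
group address 239.1.1.1 and port 5004 are hypothetical stand-ins, not any real
channel:

# Sketch: join an IPv4 multicast group and read one datagram.  Group address
# and port are illustrative; a real IPTV channel would publish its own.
import socket
import struct

GROUP = "239.1.1.1"   # hypothetical, administratively scoped group
PORT = 5004

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# Asking the kernel to join the group triggers the IGMP membership report;
# from here on the network, not the sender, handles replication toward us.
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

data, sender = sock.recvfrom(2048)
print(f"received {len(data)} bytes of the stream from {sender[0]}")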


I don't think consumers are going to accept having to wait for a 
scheduled broadcast of whatever piece of video content they want to 
view - at least if the alternative is being able to download and watch 
it nearly immediately. That said, for the most popular content with the 
widest audience, scheduled multi-cast makes sense... especially when the 
alternative is waiting for a large download to finish - contrariwise, it 
doesn't seem reasonable to be constantly multi-casting *every* piece of 
video content anyone might ever want to watch (that in itself would 
consume an insane amount of bandwidth). How many pieces of video content 
are there on YouTube? How many more can we expect to emerge over the 
next decade, given the ever decreasing cost of entry for reasonably 
decent video production?
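
As a rough back-of-envelope comparison, with every input hypothetical (a 2
Mbps stream, a 10,000-channel catalogue, and 100,000 concurrent viewers):

# Back-of-envelope sketch: aggregate feed bandwidth for "multicast everything,
# always" versus unicast-per-viewer.  All inputs are hypothetical.
STREAM_MBPS = 2.0          # assumed per-channel / per-viewer bitrate
CHANNELS = 10_000          # assumed catalogue size
VIEWERS = 100_000          # assumed concurrent audience

multicast_all = CHANNELS * STREAM_MBPS      # every channel sent once, all the time
unicast_watched = VIEWERS * STREAM_MBPS     # one copy per active viewer

print(f"multicast every channel constantly: {multicast_all / 1000:.1f} Gbps")
print(f"unicast to each active viewer:      {unicast_watched / 1000:.1f} Gbps")
# With these numbers, multicast-everything is ~20 Gbps of always-on feed,
# while unicast scales with the audience (~200 Gbps here) but only carries
# content actually being watched.

Swap the channel count for a YouTube-sized catalogue of individual titles and
the always-on multicast figure is the one that becomes insane, which is the
point made above.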


All of which, to me, leaves the fundamental issue of how the upsurge in 
traffic is going to be handled unresolved.


Thomas

Simon Lockhart wrote:

On Tue Jan 09, 2007 at 07:52:02AM +, [EMAIL PROTECTED] wrote:
  

Given that the broadcast model for streaming content
is so successful, why would you want to use the
Internet for it? What is the benefit?



How many channels can you get on your (terrestrial) broadcast receiver?

If you want more, your choices are satellite or cable. To get cable, you 
need to be in a cable area. To get satellite, you need to stick a dish on 
the side of your house, which you may not want to do, or may not be allowed to do.

With IPTV, you just need a phoneline (and be close enough to the exchange/CO
to get decent xDSL rate). In the UK, I'm already delivering 40+ channels over
IPTV (over inter-provider multicast, to any UK ISP that wants it).

Simon
  



--
Thomas Leavitt - [EMAIL PROTECTED] - 831-295-3917 (cell)

*** Independent Systems and Network Consultant, Santa Cruz, CA ***




Fwd: [routing-wg]New Document Available: RIPE-399

2007-01-10 Thread Fergie

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

I hadn't seen any mention of this on the list today, so I figured
I would mention it.

I finally got a few free minutes to review this document this
evening and I think it is a really good community resource.

FYI,

- - ferg

[forwarded message]

To: [EMAIL PROTECTED]
From: RIPE NCC Document Announcement Service [EMAIL PROTECTED]
Subject: [routing-wg]New Document Available: RIPE-399
Sender: [EMAIL PROTECTED]
Date: Wed, 10 Jan 2007 16:57:36 +0100


New RIPE Document Announcement
- --
A new document is available from the RIPE Document store. 


Ref:ripe-399
Title:  RIPE Routing Working Group Recommendations on Route Aggregation
Author: Philip Smith, Rob Evans, Mike Hughes
Format: PDF=89,997
Date:   December  2006


Short content description
- -
This document discusses the need for aggregation of prefixes on the
Internet today, and recommends good working practices for Internet Service
Providers and other Autonomous Networks connected to the Internet.


Accessing the RIPE Document Store
- -

You can access this RIPE document at:


  http://www.ripe.net/docs/ripe-399.html


[snip]

-BEGIN PGP SIGNATURE-
Version: PGP Desktop 9.5.2 (Build 4075)

wj8DBQFFpbr7q1pz9mNUZTMRAoZ8AJ4gbdH1fo8OD/KaRToztqpcbp+E3QCdEeZn
FtwMbt3qzzAs485WlPvJLwk=
=jcbf
-END PGP SIGNATURE-



--
Fergie, a.k.a. Paul Ferguson
 Engineering Architecture for the Internet
 fergdawg(at)netzero.net
 ferg's tech blog: http://fergdawg.blogspot.com/
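
A minimal illustration of the aggregation practice the document discusses,
using the Python standard library and made-up documentation prefixes (nothing
here is taken from ripe-399 itself):

# Sketch: collapse contiguous more-specifics into the covering aggregate,
# the sort of announcement consolidation the document recommends where
# topology allows.  Prefixes below are example space, not anyone's routes.
import ipaddress

more_specifics = [
    ipaddress.ip_network("192.0.2.0/26"),
    ipaddress.ip_network("192.0.2.64/26"),
    ipaddress.ip_network("192.0.2.128/26"),
    ipaddress.ip_network("192.0.2.192/26"),
]

aggregates = list(ipaddress.collapse_addresses(more_specifics))
print(aggregates)   # [IPv4Network('192.0.2.0/24')] -- one announcement instead of four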



Re: i wanna be a kpn peer

2007-01-10 Thread Randy Bush

 I don't think a spurious prefix directly injected into route-views is
 proof a network is broken.

we've had this discussion 42 times.  it is not proof of anything and no
one has said it is.  but if it was one of my areas of responsibility
leaking something strange, i sure would not mind folk mentioning it
here.  in fact, i would be grateful.

randy


Re: i wanna be a kpn peer

2007-01-10 Thread Patrick W. Gilmore


On Jan 10, 2007, at 11:28 PM, Randy Bush wrote:


I don't think a spurious prefix directly injected into route-views is
proof a network is broken.


we've had this discussion 42 times.  it is not proof of anything and no
one has said it is.  but if it was one of my areas of responsibility
leaking something strange, i sure would not mind folk mentioning it
here.  in fact, i would be grateful.


It is not proof.  No one said it was.  And no one said you said it was. :)

That said, I would be grateful if someone showed me I screwed up too - in
private.  In public, I'm not so sure.  Especially if someone only -thought-
I screwed up.

One could argue that it is difficult to reach the proper people privately
(although noc@ might be a start, or iNOC-DBA, or ...).  One could also argue
that public notification is better than no notification.  But then one might
want to mention that private channels had been exhausted in one's public
notification.


Anyway, this one is sorry if that one thought one was being curmudgeonly. :)


--
TTFN,
patrick



Re: Network end users to pull down 2 gigabytes a day, continuously?

2007-01-10 Thread Marshall Eubanks



On Jan 10, 2007, at 11:19 PM, Thomas Leavitt wrote:

It seems to me that multi-cast is a technical solution for the  
bandwidth consumption problems precipitated by real-time Internet  
video broadcast, but it doesn't seem to me that the bulk of current  
(or even future) Internet video traffic is going to be amenable to  
distribution via multi-cast - or, at least, separate and apart from  
whatever happens with multi-cast, a huge and growing volume of  
video traffic will be flowing over the 'net...


I would fully agree with this.



I don't think consumers are going to accept having to wait for a  
scheduled broadcast of whatever piece of video content they want  
to view - at least if the alternative is being able to download and  
watch it nearly


That's the pull model. The push model will also exist. Both will make  
money.


immediately. That said, for the most popular content with the  
widest audience, scheduled multi-cast makes sense... especially  
when the alternative is waiting for a large download to finish -  
contrariwise, it doesn't seem reasonable to be constantly multi- 
casting *every* piece of video content anyone might ever want to  
watch (that in itself would consume an insane amount of bandwidth).  
How many pieces of video content are there on YouTube? How many  
more can we expect to emerge over the next decade, given the ever  
decreasing cost of entry for reasonably decent video production?


Lots. Remember, of course, Sturgeon's law. But, lots. If you want  
numbers, 10^4 channels, billions of pieces of uncommercial content,  
and millions of pieces of commercial content.




All of which, to me, leaves the fundamental issue of how the  
upsurge in traffic is going to be handled unresolved.




I think that technically, we have a pretty good idea how. I think  
that the real fundamental question is whose business models will  
allow them to make a profit from this upsurge.



Thomas



Regards
Marshall



Simon Lockhart wrote:
On Tue Jan 09, 2007 at 07:52:02AM +,  
[EMAIL PROTECTED] wrote:



Given that the broadcast model for streaming content
is so successful, why would you want to use the
Internet for it? What is the benefit?



How many channels can you get on your (terrestrial) broadcast  
receiver?


If you want more, your choices are satellite or cable. To get  
cable, you need to be in a cable area. To get satellite, you need  
to stick a dish on the side of your house, which you may not want  
to do, or may not be allowed to do.

With IPTV, you just need a phoneline (and be close enough to the  
exchange/CO
to get decent xDSL rate). In the UK, I'm already delivering 40+  
channels over

IPTV (over inter-provider multicast, to any UK ISP that wants it).

Simon




--
Thomas Leavitt - [EMAIL PROTECTED] - 831-295-3917 (cell)

*** Independent Systems and Network Consultant, Santa Cruz, CA ***
