[Nanog] Lies, Damned Lies, and Statistics [Was: Re: ATT VP: Internet to hit capacity by 2010]

2008-04-22 Thread Paul Ferguson
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

- -- William Warren [EMAIL PROTECTED] wrote:

Here in comcast land hdtv is actually averaging around 12 megabits a 
second.  Still adds up to staggering numbers..:)


Another disturbing fact inside this entire mess is that, by compressing
HD content, consumers are noticing the degradation in quality:

http://cbs5.com/local/hdtv.cable.compression.2.705405.html

So, we have a Tragedy of The Commons situation that is completely
created by the telcos themselves trying to force consumer decisions,
and then failing to deliver, but bemoaning the fact that
infrastructure is being over-utilized by file-sharers (or
Exafloods or whatever the apocalyptic issue of the day is for
telcos).

A real Charlie Foxtrot.

- - ferg

-BEGIN PGP SIGNATURE-
Version: PGP Desktop 9.6.3 (Build 3017)

wj8DBQFIDX5Rq1pz9mNUZTMRAovKAJ0SXpK/XrW73mkCGZhtLCO5ZGsspQCfbUY3
0DPluYrtq0Et/RbvJguq3WM=
=furJ
-END PGP SIGNATURE-


--
Fergie, a.k.a. Paul Ferguson
 Engineering Architecture for the Internet
 fergdawg(at)netzero.net
 ferg's tech blog: http://fergdawg.blogspot.com/


___
NANOG mailing list
NANOG@nanog.org
http://mailman.nanog.org/mailman/listinfo/nanog


Re: [Nanog] Lies, Damned Lies, and Statistics [Was: Re: ATT VP: Internet to hit capacity by 2010]

2008-04-22 Thread Petri Helenius

Time to push multicast as transport for bittorrent? If the downloads get 
better performance that way, I think the clients would be around quicker 
that multicast would be enabled for consumer DSL or cable.

Pete


Paul Ferguson wrote:
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1

 - -- William Warren [EMAIL PROTECTED] wrote:

   
 Here in comcast land hdtv is actually averaging around 12 megabits a 
 
 second.  Still adds up to staggering numbers..:)
   

 Another disturbing fact inside this entire mess is that, by compressing
 HD content, consumers are noticing the degradation in quality:

 http://cbs5.com/local/hdtv.cable.compression.2.705405.html

 So, we have a Tragedy of The Commons situation that is completely
 created by the telcos themselves trying to force consumer decisions,
 and then failing to deliver, but bemoaning the fact that
 infrastructure is being over-utilized by file-sharers (or
 Exafloods or whatever the apocalyptic issue of the day is for
 telcos).

 A real Charlie Foxtrot.

 - - ferg

 -BEGIN PGP SIGNATURE-
 Version: PGP Desktop 9.6.3 (Build 3017)

 wj8DBQFIDX5Rq1pz9mNUZTMRAovKAJ0SXpK/XrW73mkCGZhtLCO5ZGsspQCfbUY3
 0DPluYrtq0Et/RbvJguq3WM=
 =furJ
 -END PGP SIGNATURE-


 --
 Fergie, a.k.a. Paul Ferguson
  Engineering Architecture for the Internet
  fergdawg(at)netzero.net
  ferg's tech blog: http://fergdawg.blogspot.com/


 ___
 NANOG mailing list
 NANOG@nanog.org
 http://mailman.nanog.org/mailman/listinfo/nanog

   


___
NANOG mailing list
NANOG@nanog.org
http://mailman.nanog.org/mailman/listinfo/nanog


[Nanog] REDUSE LEASE PERIOD

2008-04-22 Thread Felix Bako
Hi Guyz
We have a cisco PDSN that acts as a NAS for our Wireless BroadBand Clients.
The PDSN tackles the IP alllocation Part. This is because we have 
defined the
ips to be allocated using the IP LOCAL POOL command.
Is there a way to reduce the lease period of the allocated IPs.
The IOS code is  disk2:c7200-c6ik9s-mz.123-11.YF4.bin.

-- 

Best Regards,

Felix Bako
Team Leader Networks
Africa Online, Kenya
Tel: +254 (20) 27 92 000
Fax: +254 (20) 27 100 10
Email: [EMAIL PROTECTED]
Aim:felixbako

 


* Africa Online Disclaimer and Confidentiality Note *


This e-mail, its attachments and any rights attaching hereto are, unless 
the context clearly indicates otherwise, the property of Africa Online 
Holdings (Kenya) Limited and / or its subsidiaries (the Group). It is 
confidential and intended for the addressee only. Should you not be the 
addressee and have received this e-mail by mistake, kindly notify the 
sender, delete this e-mail immediately and do not disclose or use the 
same in any manner whatsoever. Views and opinions expressed in this 
e-mail are those of the sender unless clearly stated as those of the 
Group. The Group accepts no liability whatsoever for any loss or 
damages, however incurred, resulting from the use of this e-mail or its 
attachments. The Group does not warrant the integrity of this e-mail, 
nor that it is free of errors, viruses, interception or interference. 
For more information about Africa Online, please visit our website at 
http://www.africaonline.com


___
NANOG mailing list
NANOG@nanog.org
http://mailman.nanog.org/mailman/listinfo/nanog


Re: [Nanog] Lies, Damned Lies, and Statistics [Was: Re: ATT VP: Internet to hit capacity by 2010]

2008-04-22 Thread michael.dillon
 
 Time to push multicast as transport for bittorrent? 

Bittorrent clients are already multicast, only they do it in a crude way
that does not match network topology as well as it could. Moving to use
IP multicast raises a whole host of technical issues such as lack of
multicast peering. Solving those technical issues requires ISP
cooperation, i.e. to support global multicast.

But there is another way. That is for software developers to build a
modified client that depends on a topology guru for information on the
network topology. This topology guru would be some software that is run
by an ISP, and which communicates with all other topology gurus in
neighboring ASes These gurus learn the topology using some kind of
protocol like a routing protocol. They also have some local intelligence
configured by the ISP such as allowed traffic rates at certain time
periods over certain paths. And they share all of that information in
order to optimize the overall downloading of all files to all clients
which share the same guru. Some ISPs have local DSL architectures in
which it makes better sense to download a file from a remote location,
than from the guy next door. In that case, an ISP could configure a guru
to prefer circuits into their data centre, then operate clients in the
data center that effectively cache files. But the caching thing is
optional.

Then, a bittorrent client doesn't have to guess how to get files
quickly, it just has to follow the guru's instructions. Part of this
would involve cooperating with all other clients attached to the same
guru so that no client downloads distant blocks of data that have
already been downloaded by another local client. This is the part that
really starts to look like IP multicast except that it doesn't rely on
all clients functioning in real time. Also, it looks like NNTP news
servers except that the caching is all done on the clients. The gurus
never cache or download files.

For this to work, you need to start by getting several ISPs to buy-in,
help with the design work, and then deploy the gurus. Once this proves
itself in terms of managing how and *WHEN* bandwidth is used, it should
catch on quite quickly with ISPs. Note that a key part of this
architecture is that it allows the ISP to open up the throttle on
downloads during off-peak hours so that most end users can get a
predictable service of all downloads completed overnight.

--Michael Dillon

___
NANOG mailing list
NANOG@nanog.org
http://mailman.nanog.org/mailman/listinfo/nanog


Re: [Nanog] ATT VP: Internet to hit capacity by 2010

2008-04-22 Thread michael.dillon

  I think you're too high there! MPEG2 SD is around 4-6Mbps, 
 MPEG4 SD is 
  around 2-4Mbps, MPEG4 HD is anywhere from 8 to 20Mbps, depending on 
  how much wow factor the broadcaster is trying to give.
 
 Nope, ATSC is 19 (more accurately 19.28) megabits per second. 

So why would anyone plug an ATSC feed directly into the Internet?
Are there any devices that can play it other than a TV set?
Why wouldn't a video services company transcode it to MPEG4 and
transmit that? 

I can see that some cable/DSL companies might transmit ATSC to
subscribers
but they would also operate local receivers so that the traffic never
touches their core. Rather like what a cable company does today with TV
receivers in their head ends.

All this talk of exafloods seems to ignore the basic economics of
IP networks. No ISP is going to allow subscribers to pull in 8gigs
per day of video stream. And no broadcaster is going to pay for the
bandwidth needed to pump out all those ATSC streams. And nobody is
going to stick IP multicast (and multicast peering) in the core just
to deal with video streams to people who leave their TV on all day
whether
they are at home or not.

At best you will see IP multicast on a city-wide basis in a single
ISP's network. Also note that IP multicast only works for live broadcast
TV. In today's world there isn't much of that except for news.
Everything
else is prerecorded and thus it COULD be transmitted at any time. IP
multicast
does not help you when you have 1000 subscribers all pulling in 1000
unique
streams. In the 1960's it was reasonable to think that you could deliver
the
same video to all consumers because everybody was the same in one big
melting
pot. But that day is long gone.

On the other hand, P2P software could be leveraged to download video
files
during off-peak hours on the network. All it takes is some cooperation
between
P2P software developers and ISPs so that you have P2P clients which can
be told
to lay off during peak hours, or when they want something from the other
side
of a congested peering circuit. Better yet, the ISP's P2P manager could
arrange
for one full copy of that file to get across the congested peering
circuit during
the time period most favorable for that single circuit, then distribute
elsewhere.

--Michael Dillon

As far as I am concerned the killer application for IP multicast is
*NOT* video,
it's market data feeds from NYSE, NASDAQ, CBOT, etc.

___
NANOG mailing list
NANOG@nanog.org
http://mailman.nanog.org/mailman/listinfo/nanog


Re: [Nanog] ATT VP: Internet to hit capacity by 2010

2008-04-22 Thread Dorn Hetzel
It's certainly not reasonable to assume the same video goes to all
consumers, but on the other hand, there *is* plenty of video that goes to a
*lot* of consumers.  I don't really need my own personal unicast copy of the
bits that make up an episode of BSG or whatever.  I would hope that the
future has even more tivo-like devices at the consumer edge that can take
advantage of the right (desired) bits whenever they are available.  A single
box that can take bits off the bird or cable tv when what it wants is
found there or request over IP when it needs to doesn't seem like rocket
science...

-dorn

On Tue, Apr 22, 2008 at 6:33 AM, [EMAIL PROTECTED] wrote:


   I think you're too high there! MPEG2 SD is around 4-6Mbps,
  MPEG4 SD is
   around 2-4Mbps, MPEG4 HD is anywhere from 8 to 20Mbps, depending on
   how much wow factor the broadcaster is trying to give.
 
  Nope, ATSC is 19 (more accurately 19.28) megabits per second.

 So why would anyone plug an ATSC feed directly into the Internet?
 Are there any devices that can play it other than a TV set?
 Why wouldn't a video services company transcode it to MPEG4 and
 transmit that?

 I can see that some cable/DSL companies might transmit ATSC to
 subscribers
 but they would also operate local receivers so that the traffic never
 touches their core. Rather like what a cable company does today with TV
 receivers in their head ends.

 All this talk of exafloods seems to ignore the basic economics of
 IP networks. No ISP is going to allow subscribers to pull in 8gigs
 per day of video stream. And no broadcaster is going to pay for the
 bandwidth needed to pump out all those ATSC streams. And nobody is
 going to stick IP multicast (and multicast peering) in the core just
 to deal with video streams to people who leave their TV on all day
 whether
 they are at home or not.

 At best you will see IP multicast on a city-wide basis in a single
 ISP's network. Also note that IP multicast only works for live broadcast
 TV. In today's world there isn't much of that except for news.
 Everything
 else is prerecorded and thus it COULD be transmitted at any time. IP
 multicast
 does not help you when you have 1000 subscribers all pulling in 1000
 unique
 streams. In the 1960's it was reasonable to think that you could deliver
 the
 same video to all consumers because everybody was the same in one big
 melting
 pot. But that day is long gone.

 On the other hand, P2P software could be leveraged to download video
 files
 during off-peak hours on the network. All it takes is some cooperation
 between
 P2P software developers and ISPs so that you have P2P clients which can
 be told
 to lay off during peak hours, or when they want something from the other
 side
 of a congested peering circuit. Better yet, the ISP's P2P manager could
 arrange
 for one full copy of that file to get across the congested peering
 circuit during
 the time period most favorable for that single circuit, then distribute
 elsewhere.

 --Michael Dillon

 As far as I am concerned the killer application for IP multicast is
 *NOT* video,
 it's market data feeds from NYSE, NASDAQ, CBOT, etc.

 ___
 NANOG mailing list
 NANOG@nanog.org
 http://mailman.nanog.org/mailman/listinfo/nanog

___
NANOG mailing list
NANOG@nanog.org
http://mailman.nanog.org/mailman/listinfo/nanog


Re: [Nanog] ATT VP: Internet to hit capacity by 2010

2008-04-22 Thread Brandon Butterworth
 So why would anyone plug an ATSC feed directly into the Internet?

Because we can. One day ISPs might do multicast and it might become
cheap enough to deliver to the home. If we don't then they probably
will never bother fixing those two problems

I've been multicasting the BBCs channels in the UK since 2004. The full
rate are mostly used by NOCs with our news on their projectors, we have
lower rate h264, WM and Real for people testing multicast over current
ADSL. The aim is by 2012 to be able to do all our Olympics sports in HD
(a channel per simultaneous event rather than the usual just one with
highlights of each) something we can't do on DTT (= ATSC) due to lack
of spectrum (there's enough but it's being sold for non TV use after
analogue switch off)

 Are there any devices that can play it other than a TV set?

Sure, STB for TV and VLC etc for most OS. It's trivial

 No ISP is going to allow subscribers to pull in 8gigs
 per day of video stream. And no broadcaster is going to pay for the
 bandwidth needed to pump out all those ATSC streams.

That's because they don't have a viable business model (unlimited
use...). Cable companies are moving to IP, they already carry
it from their core to the home just the transport is changing.

 And nobody is
 going to stick IP multicast (and multicast peering) in the core just
 to deal with video streams to people who leave their TV on all day
 whether they are at home or not.

When people do it unicast regardless then not doing multicast is silly

 At best you will see IP multicast on a city-wide basis in a single
 ISP's network.

Unlikely, too much infrastructure and not all content is available
locally

 Also note that IP multicast only works for live broadcast TV. 

See Sky Movies for a simulation of multicast VoD

 IP multicast
 does not help you when you have 1000 subscribers all pulling in 1000
 unique streams.

True but the 1000 watching BBC1 may as well be multicast, at
least you save a bit.

 In the 1960's it was reasonable to think that you could deliver the
 same video to all consumers because everybody was the same in one big
 melting pot. But that day is long gone.

Evidence is a lot of people still like to vegetate in front of a
tv rather than hunt their content. Once they're all dead we'll
find out if linear TV is still viable, by then IPv6 roll out may
have completed too.

 On the other hand, P2P software could be leveraged to download video
 files during off-peak hours on the network.

Sure but P2P isn't a requirement for that and currently saves you no
money (UK ADSL wholesale model) over unicast. If people are taking
random content you won't be able to predict and send it in advance. If
you can predict then you can multicast it and save some transport cost
vs P2P/unicast

 Better yet, the ISP's P2P manager could arrange
 for one full copy of that file to get across the congested peering
 circuit during
 the time period most favorable for that single circuit, then distribute
 elsewhere.

Or they could just run an http cache and save a lot more traffic
and not have to rely on P2P apps playing nicely.

Apologies for length, just no seemed too rude

brandon

___
NANOG mailing list
NANOG@nanog.org
http://mailman.nanog.org/mailman/listinfo/nanog


Re: [Nanog] ATT VP: Internet to hit capacity by 2010

2008-04-22 Thread Marshall Eubanks


On Apr 21, 2008, at 9:35 PM, Frank Bulk - iNAME wrote:

 I've found it interesting that those who do Internet TV (re)define  
 HD in a
 way that no one would consider HD anymore except the provider. =)


The FCC did not appear to set a bit rate specification for HD  
Television.

The ATSC standard (A-53 part 4) specifies aspect ratios and pixel  
formats and frame rates, but not
bit rates.

So AFAICT, no redefinition is necessary. If you are doing (say) 720 x  
1280 at 30 fps, you
can call it HD, regardless of your bit rate. If you can find somewhere  
where the standard
says otherwise, I would like to know about it.


 In the news recently has been some complaints about Comcast's HD TV.
 Comcast has been (selectively) fitting 3 MPEG-2 HD streams in a 6 MHz
 carrier (38 Mbps = 12.6 Mbps) and customers aren't happy with that.   
 I'm not
 sure how the average consumer will see 1.5 Mbps for HD video as  
 sufficient
 unless it's QVGA.

Well, not with a 15+ year old standard like MPEG-2. (And, of course,  
HD is a set of
pixel formats that specifically does not include QVGA.)

I have had video professionals go wow at H.264 dual pass 720 p  
encodings at 2 Mbps, so it can be done. The real
question is, how often do you see artifacts ? And, how much does the  
user care ? Modern encodings
at these bit rates tend to provide very good encodings of static  
scenes. As the on-screen action increases, so
does the likelihood of artifacts, so selection of bit rate depends I  
think on user expectations and the typical content being down.
(As an aside, I see lots of artifacts on my at-home Cable HD, but I  
don't know their bandwidth allocation.)

Regards
Marshall




 Frank

 -Original Message-
 From: Alex Thurlow [mailto:[EMAIL PROTECTED]
 Sent: Monday, April 21, 2008 4:26 PM
 To: nanog@nanog.org
 Subject: Re: [Nanog] ATT VP: Internet to hit capacity by 2010

 snip

 I'm going to have to say that that's much higher than we're actually
 going to see.  You have to remember that there's not a ton of
 compression going on in that.  We're looking to start pushing HD video
 online, and our intial tests show that 1.5Mbps is plenty to push HD
 resolutions of video online.  We won't necessarily be doing 60 fps or
 full quality audio, but HD doesn't actually define exactly what it's
 going to be.

 Look at the HD offerings online today and I think you'll find that
 they're mostly 1-1.5 Mbps.  TV will stay much higher quality than  
 that,
 but if people are watching from their PCs, I think you'll see much  
 more
 compression going on, given that the hardware processing it has a lot
 more horsepower.


 --
 Alex Thurlow
 Technical Director
 Blastro Networks


 ___
 NANOG mailing list
 NANOG@nanog.org
 http://mailman.nanog.org/mailman/listinfo/nanog


 ___
 NANOG mailing list
 NANOG@nanog.org
 http://mailman.nanog.org/mailman/listinfo/nanog


___
NANOG mailing list
NANOG@nanog.org
http://mailman.nanog.org/mailman/listinfo/nanog


Re: [Nanog] Lies, Damned Lies, and Statistics [Was: Re: ATT VP: Internet to hit capacity by 2010]

2008-04-22 Thread Mark Smith
On Tue, 22 Apr 2008 11:55:58 +0100
[EMAIL PROTECTED] wrote:

  
  Time to push multicast as transport for bittorrent? 
 
 Bittorrent clients are already multicast, only they do it in a crude way
 that does not match network topology as well as it could. Moving to use
 IP multicast raises a whole host of technical issues such as lack of
 multicast peering. Solving those technical issues requires ISP
 cooperation, i.e. to support global multicast.
 
 But there is another way. That is for software developers to build a
 modified client that depends on a topology guru for information on the
 network topology.

snip

Isn't TCP already measuring throughput and latency of the network for
RTO etc.? Why not expose those parameters for peers to the local P2P
software, and then have it select the closest peers with either the
lowest latency, the highest throughput, or a weighed combination of
both? I'd think that would create a lot of locality in the traffic.

Regards,
Mark.

-- 

Sheep are slow and tasty, and therefore must remain constantly
 alert.
   - Bruce Schneier, Beyond Fear

___
NANOG mailing list
NANOG@nanog.org
http://mailman.nanog.org/mailman/listinfo/nanog


Re: [Nanog] Lies, Damned Lies, and Statistics [Was: Re: ATT VP: Internet to hit capacity by 2010]

2008-04-22 Thread Petri Helenius
[EMAIL PROTECTED] wrote:
 But there is another way. That is for software developers to build a
 modified client that depends on a topology guru for information on the
 network topology. This topology guru would be some software that is run
   
While the current bittorrent implementation is suboptimal for large 
swarms (where number of adjacent peers is significantly less than the 
number of total participants) I fail to figure out the necessary 
mathematics where topology information would bring superior results 
compared to the usual greedy algorithms where data is requested from the 
peers where it seems to be flowing at the best rates. If local peers 
with sufficient upstream bandwidth exist, majority of the data blocks 
are already retrieved from them.

In many locales ISP's tend to limit the available upstream on their 
consumer connections, usually causing more distant bits to be delivered 
instead.

I think the most important metric to study is the number of times the 
same piece of data is transmitted in a defined time period and try to 
figure out how to optimize for that. For a new episode of BSG, there are 
a few hundred thousand copies in the first hour and a million or so in 
the first few days. With the headers and overhead, we might already be 
hitting a petabyte per episode. RSS feeds seem to shorten the 
distribution ramp-up from release.

The p2p world needs more high-upstream proxies to make it more 
effective. I think locality with current torrent implementations would 
happen automatically. However there are quite a few parties who are 
happy to have it as bad as they can make it :-)

Is there a problem that needs to be solved that is not solved by 
Akamai's of the world already?

Pete


___
NANOG mailing list
NANOG@nanog.org
http://mailman.nanog.org/mailman/listinfo/nanog


Re: [Nanog] Lies, Damned Lies, and Statistics [Was: Re: ATT VP: Internet to hit capacity by 2010]

2008-04-22 Thread Brandon Butterworth
 Is there a problem that needs to be solved that is not solved by 
 Akamai's of the world already?

Yes, the ones that aren't Akamai want to play too

branodn

___
NANOG mailing list
NANOG@nanog.org
http://mailman.nanog.org/mailman/listinfo/nanog


Re: [Nanog] Cogent Router dropping packets

2008-04-22 Thread manolo
Well it had sounded like I was in the minority and should keep my mouth 
shut. But here goes. On several occasions the peer that would advertise 
our routes would drop and with that the peer with the full bgp tables 
would drop as well. This happened for months on end. They tried blaming 
our 6500, our fiber provider, our IOS version, no conclusive findings 
where ever found that it was our problem. After some testing at the 
local Cogent office by both Cogent and myself, Cogent decided that they 
could make a product that would allow us too one have only one peer 
and two to connect directly to the GSR and not through a small catalyst. 
Low and behold things worked well for some time after that.

  This all happened while we had 3 other providers on the same router 
with no issues at all. We moved gbics, ports etc around to make sure it 
was not some odd ASIC or throughput issue with the 6500.

   Hope this answers the question.

Manolo

Paul Wall wrote:
 On Mon, Apr 21, 2008 at 4:02 PM, manolo [EMAIL PROTECTED] wrote:
   
 I do have to say that the PSI net side of cogent is very good. We use
  them in Europe without many issues. I stay far away from the legacy
  cogent network in US.
 

 You still haven't explained the failure modes you've experienced as a
 result of cogent's A/B peer configuration, only fronted.

 Inquiring minds would like to know!

   


___
NANOG mailing list
NANOG@nanog.org
http://mailman.nanog.org/mailman/listinfo/nanog


Re: [Nanog] ATT VP: Internet to hit capacity by 2010

2008-04-22 Thread Joe Greco
 All this talk of exafloods seems to ignore the basic economics of
 IP networks. No ISP is going to allow subscribers to pull in 8gigs
 per day of video stream. And no broadcaster is going to pay for the
 bandwidth needed to pump out all those ATSC streams. And nobody is
 going to stick IP multicast (and multicast peering) in the core just
 to deal with video streams to people who leave their TV on all day
 whether they are at home or not.

The floor is littered with the discarded husks of policies about what ISP's
are going to allow or disallow.  No servers, no connection sharing,
web browsing only, no voip, etc.  These typically last only as long as 
the errant assumptions upon which they're based remain somewhat viable. 
For example, when NAT gateways and Internet Connection Sharing became 
widely available, trying to prohibit connection sharing went by the wayside.

8GB/day is less than a single megabit per second, and with ISP's selling
ultra high speed connections (we're now able to get 7 or 15Mbps), an ISP 
might find it difficult to defend why they're selling a premium 15Mbps 
service on which a user can't get 1/15th of that.

 At best you will see IP multicast on a city-wide basis in a single
 ISP's network. Also note that IP multicast only works for live broadcast
 TV. In today's world there isn't much of that except for news.

Huh?  Why does IP multicast only work for that?

 Everything else is prerecorded and thus it COULD be transmitted at 
 any time. IP multicast does not help you when you have 1000 subscribers 
 all pulling in 1000 unique streams. 

Yes, that's potentially a problem.  That doesn't mean that multicast can
not be leveraged to handle prerecorded material, but it does suggest that
you could really use a TiVo-like device to make best use.  A fundamental
change away from live broadcast and streaming out a show in 1:1 realtime,
to a model where everything is spooled onto the local TiVo, and then
watched at a user's convenience.

We don't have the capacity at the moment to really deal with 1000 subs all
pulling in 1000 unique streams, but the likelihood is that we're not going
to see that for some time - if ever.

What seems more likely is that we'll see an evolution of more specialized
offerings, possibly supplementing or even eventually replacing the tiered
channel package offerings of your typical cable company, since it's pretty
clear that a-la-carte channel selection isn't likely to happen soon.

That may allow some less popular channels to come into being.  I happen
to like holding up SciFi as an example, because their current operations
are significantly different than originally conceived, and they're now
producing significant quantities of their own original material.  It's
possible that we could see a much larger number of these sorts of ventures
(which would terrify legacy television networks even further).

The biggest challenge that I would expect from a network point of view is
the potential for vast amounts of decentralization.  For example, there's
low-key stuff such as the Star Trek: Hidden Frontier series of fanfic-
based video projects.  There are almost certainly enough fans out there
that you'd see a small surge in viewership if the material was more
readily accessible (read that as: automatically downloaded to your TiVo).
That could encourage others to do the same in more quantity.  These are
all low-volume data sources, and yet taken as a whole, they could
represent a fairly difficult problem were everyone to be doing it.  It is
not just tech geeks that are going to be able produce video, as the stuff
becomes more accessible (see: YouTube), we may see stuff like mini soap
operas, home  garden shows, local sporting events, local politics, etc.

I'm envisioning a scenario where we may find that there are a few tens of
thousands of PTA meetings each being uploaded routinely onto the home PC's
of whoever recorded the local meeting, and then made available to the
small number of interested parties who might then watch, where (0N20).

If that kind of thing happens, then we're going to find that there's a
large range of projects that have potential viewership landing anywhere
between this example and that of the specialty broadcast cable channels,
and the question that is relevant to network operators is whether there's
a way to guide this sort of thing towards models which are less harmful
to the network.  I don't pretend to have the answers to this, but I do
feel reasonably certain that the success of YouTube is not a fluke, and
that we're going to see more, not less, of this sort of thing.

 As far as I am concerned the killer application for IP multicast is
 *NOT* video, it's market data feeds from NYSE, NASDAQ, CBOT, etc.

You can go compare the relative successes of Yahoo! Finance and YouTube.

While it might be nice to multicast that sort of data, it's a relative
trickle of data, and I'll bet that the majority of users have not only
not visited a market data 

Re: [Nanog] Lies, Damned Lies, and Statistics [Was: Re: ATT VP: Internet to hit capacity by 2010]

2008-04-22 Thread Matthew Moyle-Croft


[EMAIL PROTECTED] wrote:
 ...
  You need a way to insert non-technical
 information about the network into the decision-making process. 
 The only way for this to work is to allow the network operator
 to have a role in every P2P transaction. And to do that you need
 a middlebox that sits in the ISP network which they can configure.
   
You could probably do this with a variant of DNS.  Use an Anycast 
address common to everyone to solve the discovery problem.   Client 
sends a DNS request for a TXT record for, as an example, 
148.165.32.217.p2ptopology.org.  The topology box looks at the IP 
address that the request came from and does some magic based on the 
requested information and returns a ranking score based on that (maybe 
0-255 worse to best) that the client can then use to rank where it 
downloads from. (might have to run DNS on another port so that normal 
resolvers don't capture this).

The great thing is that you can use it for other things.

MMC

-- 
Matthew Moyle-Croft - Internode/Agile - Networks
Level 5, 150 Grenfell Street, Adelaide, SA 5000 Australia
Email: [EMAIL PROTECTED]  Web: http://www.on.net
Direct: +61-8-8228-2909 Mobile: +61-419-900-366
Reception: +61-8-8228-2999  Fax: +61-8-8235-6909

   The difficulty lies, not in the new ideas, 
 but in escaping from the old ones - John Maynard Keynes


___
NANOG mailing list
NANOG@nanog.org
http://mailman.nanog.org/mailman/listinfo/nanog


Re: [Nanog] Lies, Damned Lies, and Statistics [Was: Re: ATT VP: Internet to hit capacity by 2010]

2008-04-22 Thread Matthew Moyle-Croft
(I know, replying to your own email is sad ...)
 You could probably do this with a variant of DNS.  Use an Anycast 
 address common to everyone to solve the discovery problem.   Client 
 sends a DNS request for a TXT record for, as an example, 
 148.165.32.217.p2ptopology.org.  The topology box looks at the IP 
 address that the request came from and does some magic based on the 
 requested information and returns a ranking score based on that (maybe 
 0-255 worse to best) that the client can then use to rank where it 
 downloads from. (might have to run DNS on another port so that normal 
 resolvers don't capture this).

 The great thing is that you can use it for other things.
   
Since this could be dynamic (I'm guessing BGP and other things like SNMP 
feeding the topology box) you could then use it to balance traffic flows 
through your network to avoid congestion on certain links - that's a win 
for everyone.   You could get webbrowsers to look at it when you've got 
multiple A records to chose which one is best for things like Flash 
video etc.

MMC

-- 
Matthew Moyle-Croft - Internode/Agile - Networks
Level 5, 150 Grenfell Street, Adelaide, SA 5000 Australia
Email: [EMAIL PROTECTED]  Web: http://www.on.net
Direct: +61-8-8228-2909 Mobile: +61-419-900-366
Reception: +61-8-8228-2999  Fax: +61-8-8235-6909

   The difficulty lies, not in the new ideas, 
 but in escaping from the old ones - John Maynard Keynes


___
NANOG mailing list
NANOG@nanog.org
http://mailman.nanog.org/mailman/listinfo/nanog


Re: [Nanog] Lies, Damned Lies, and Statistics [Was: Re: ATT VP: Internet to hit capacity by 2010]

2008-04-22 Thread Alexander Harrowell
NCAP - Network Capability (or Cost) Announcement Protocol.

On Tue, Apr 22, 2008 at 2:24 PM, Matthew Moyle-Croft [EMAIL PROTECTED]
wrote:

 (I know, replying to your own email is sad ...)
  You could probably do this with a variant of DNS.  Use an Anycast
  address common to everyone to solve the discovery problem.   Client
  sends a DNS request for a TXT record for, as an example,
  148.165.32.217.p2ptopology.org.  The topology box looks at the IP
  address that the request came from and does some magic based on the
  requested information and returns a ranking score based on that (maybe
  0-255 worse to best) that the client can then use to rank where it
  downloads from. (might have to run DNS on another port so that normal
  resolvers don't capture this).
 
  The great thing is that you can use it for other things.
 
 Since this could be dynamic (I'm guessing BGP and other things like SNMP
 feeding the topology box) you could then use it to balance traffic flows
 through your network to avoid congestion on certain links - that's a win
 for everyone.   You could get webbrowsers to look at it when you've got
 multiple A records to chose which one is best for things like Flash
 video etc.

 MMC

 --
 Matthew Moyle-Croft - Internode/Agile - Networks
 Level 5, 150 Grenfell Street, Adelaide, SA 5000 Australia
 Email: [EMAIL PROTECTED]  Web: http://www.on.net
 Direct: +61-8-8228-2909 Mobile: +61-419-900-366
 Reception: +61-8-8228-2999  Fax: +61-8-8235-6909

   The difficulty lies, not in the new ideas,
  but in escaping from the old ones - John Maynard Keynes


 ___
 NANOG mailing list
 NANOG@nanog.org
 http://mailman.nanog.org/mailman/listinfo/nanog

___
NANOG mailing list
NANOG@nanog.org
http://mailman.nanog.org/mailman/listinfo/nanog


Re: [Nanog] Cogent Router dropping packets

2008-04-22 Thread Joe Greco
 Well it had sounded like I was in the minority and should keep my mouth 
 shut. But here goes. On several occasions the peer that would advertise 
 our routes would drop and with that the peer with the full bgp tables 
 would drop as well. This happened for months on end. They tried blaming 
 our 6500, our fiber provider, our IOS version, no conclusive findings 
 where ever found that it was our problem. After some testing at the 
 local Cogent office by both Cogent and myself, Cogent decided that they 
 could make a product that would allow us too one have only one peer 
 and two to connect directly to the GSR and not through a small catalyst. 
 Low and behold things worked well for some time after that.
 
   This all happened while we had 3 other providers on the same router 
 with no issues at all. We moved gbics, ports etc around to make sure it 
 was not some odd ASIC or throughput issue with the 6500.

Perhaps you haven't considered this, but did it ever occur to you that
Cogent probably had the same situation?  They had a router with a bunch
of other customers on it, no reported problems, and you were the oddball
reporting significant issues?

Quite frankly, your own description does not support this as being a
problem inherent to the peerA/peerB setup.

You indicate that the peer advertising your routes would drop.  The peer
with the full BGP tables would then drop as well.  Well, quite frankly,
that makes complete sense.  The peer advertising your routes also
advertises to you the route to get to the multihop peer, which you need
in order to be able to talk to that.  Therefore, if the directly connected
BGP goes away for any reason, the multihop is likely to go away too.

However, given the exact same hardware minus the multihop, your direct
BGP was still dropping.  So had they been able to send you a full table
from the aggregation router, the same thing probably would have happened.

This sounds more like flaky hardware, dirty optics, or a bad cable (or
several of the above).

Given that, it actually seems quite reasonable to me to guess that it
could have been your 6500, your fiber provider, or your IOS version that
was introducing some problem.  Anyone who has done any reasonable amount
of work in this business will have seen all three, and many of the people
here will say that the 6500 is a bit flaky and touchy when pushed into
service as a real router (while simultaneously using them in their 
networks as such, heh, since nothing else really touches the price per
port), so Cogent's suggestion that it was a problem on your side may have
been based on bad experiences with other customer 6500's.

However, it is also likely that it was some other mundane problem, or a 
problem with the same items on Cogent's side.  I would consider it a 
shame that Cogent didn't work more closely with you to track down the 
specific issue, because most of the time, these things can be isolated 
and eliminated, rather than being potentially left around to mess up 
someone in the future (think: bad port).

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.

___
NANOG mailing list
NANOG@nanog.org
http://mailman.nanog.org/mailman/listinfo/nanog


Re: [Nanog] ATT VP: Internet to hit capacity by 2010

2008-04-22 Thread Joe Greco
 On Tue, Apr 22, 2008 at 2:02 PM, Joe Greco [EMAIL PROTECTED] wrote:
   As far as I am concerned the killer application for IP multicast is
   *NOT* video, it's market data feeds from NYSE, NASDAQ, CBOT, etc.
 
  You can go compare the relative successes of Yahoo! Finance and YouTube.
 
  While it might be nice to multicast that sort of data, it's a relative
  trickle of data, and I'll bet that the majority of users have not only
  not visited a market data site this week, but have actually never done
  so.
 
 As if most financial (and other mega-dataset) data was on consumer Web
 sites. Think pricing feeds off stock exchange back-office systems.

Oh, you got my point.  Good.  :-)

This isn't a killer application for IP multicast, at least not on the
public Internet.  High volume bits that are not busily traversing a hundred 
thousand last-mile residential connections are probably not the bits that
are going to pose a serious challenge for network operators, or at least,
that's my take on things.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.

___
NANOG mailing list
NANOG@nanog.org
http://mailman.nanog.org/mailman/listinfo/nanog


Re: [Nanog] Lies, Damned Lies, and Statistics [Was: Re: ATT VP: Internet to hit capacity by 2010]

2008-04-22 Thread Stephane Bortzmeyer
On Tue, Apr 22, 2008 at 02:02:21PM +0100,
 [EMAIL PROTECTED] [EMAIL PROTECTED] wrote 
 a message of 46 lines which said:

 This is where all the algorithmic tinkering of the P2P software
 cannot solve the problem. You need a way to insert non-technical
 information about the network into the decision-making process.

It's strange that noone in this thread mentioned P4P yet. Isn't there
someone involved in P4P at Nanog?

http://www.dcia.info/activities/p4pwg/

IMHO, the biggest issue with P4P is the one mentioned by Alexander
Harrowell. After that users have been s.d up so many times by some
ISPs, will they trust this service?


___
NANOG mailing list
NANOG@nanog.org
http://mailman.nanog.org/mailman/listinfo/nanog


Re: [Nanog] Cogent Router dropping packets

2008-04-22 Thread manolo
Well it also was the total arrogance on the part of Cogent engineering 
and management taking zero responsibility and pushing it back everytime 
valid issue or not. You had to be there.  But everyone has a different 
opinion, my opinion is set regardless of what cogent tries to sell me now.



Manolo

Joe Greco wrote:
 Well it had sounded like I was in the minority and should keep my mouth 
 shut. But here goes. On several occasions the peer that would advertise 
 our routes would drop and with that the peer with the full bgp tables 
 would drop as well. This happened for months on end. They tried blaming 
 our 6500, our fiber provider, our IOS version, no conclusive findings 
 where ever found that it was our problem. After some testing at the 
 local Cogent office by both Cogent and myself, Cogent decided that they 
 could make a product that would allow us too one have only one peer 
 and two to connect directly to the GSR and not through a small catalyst. 
 Low and behold things worked well for some time after that.

   This all happened while we had 3 other providers on the same router 
 with no issues at all. We moved gbics, ports etc around to make sure it 
 was not some odd ASIC or throughput issue with the 6500.
 

 Perhaps you haven't considered this, but did it ever occur to you that
 Cogent probably had the same situation?  They had a router with a bunch
 of other customers on it, no reported problems, and you were the oddball
 reporting significant issues?

 Quite frankly, your own description does not support this as being a
 problem inherent to the peerA/peerB setup.

 You indicate that the peer advertising your routes would drop.  The peer
 with the full BGP tables would then drop as well.  Well, quite frankly,
 that makes complete sense.  The peer advertising your routes also
 advertises to you the route to get to the multihop peer, which you need
 in order to be able to talk to that.  Therefore, if the directly connected
 BGP goes away for any reason, the multihop is likely to go away too.

 However, given the exact same hardware minus the multihop, your direct
 BGP was still dropping.  So had they been able to send you a full table
 from the aggregation router, the same thing probably would have happened.

 This sounds more like flaky hardware, dirty optics, or a bad cable (or
 several of the above).

 Given that, it actually seems quite reasonable to me to guess that it
 could have been your 6500, your fiber provider, or your IOS version that
 was introducing some problem.  Anyone who has done any reasonable amount
 of work in this business will have seen all three, and many of the people
 here will say that the 6500 is a bit flaky and touchy when pushed into
 service as a real router (while simultaneously using them in their 
 networks as such, heh, since nothing else really touches the price per
 port), so Cogent's suggestion that it was a problem on your side may have
 been based on bad experiences with other customer 6500's.

 However, it is also likely that it was some other mundane problem, or a 
 problem with the same items on Cogent's side.  I would consider it a 
 shame that Cogent didn't work more closely with you to track down the 
 specific issue, because most of the time, these things can be isolated 
 and eliminated, rather than being potentially left around to mess up 
 someone in the future (think: bad port).

 ... JG
   


___
NANOG mailing list
NANOG@nanog.org
http://mailman.nanog.org/mailman/listinfo/nanog


[Nanog] Anyone know how I can contact uky.edu abuse?

2008-04-22 Thread Jake Matthews
I've tried from 4-5 different mail providers to send something to 
[EMAIL PROTECTED]
Can't figure out what's wrong, as I've never seen AuthRequired anywhere 
before.


Every single one of them has gotten the reply of:

*Delivery has failed to these recipients or distribution lists:*

[EMAIL PROTECTED] mailto:[EMAIL PROTECTED]
Your message wasn't delivered because of security policies. Microsoft 
Exchange will not try to redeliver this message for you. Please provide 
the following diagnostic text to your system administrator.


Sent by Microsoft Exchange Server 2007

Generating server: ad.uky.edu http://ad.uky.edu

[EMAIL PROTECTED] mailto:[EMAIL PROTECTED]
#550 5.7.1 RESOLVER.RST.AuthRequired; authentication required 
##rfc822;[EMAIL PROTECTED] 
mailto:[EMAIL PROTECTED]



___
NANOG mailing list
NANOG@nanog.org
http://mailman.nanog.org/mailman/listinfo/nanog


Re: [Nanog] Lies, Damned Lies, and Statistics [Was: Re: ATT VP: Internet to hit capacity by 2010]

2008-04-22 Thread Laird Popkin
This raises an interesting issue - should optimization of p2p traffic (P4P) be 
based on static network information, or dynamic network information. It's 
certainly easier for ISP's to provide a simple network map that real-time 
network condition data, but the real-time data might be much more effective. Or 
even if it's not real-time, perhaps there could be static network maps 
reflecting conditions at different times of day?

Since P4P came up, I'd like to mention that the P4P Working Group is putting 
together another field test, where we can quantify issues like the tradeoff 
between static and dynamic network data, and we would love to hear from any 
ISP's that would be interested in participating in that test.  If you'd like 
the details of what it would take to participate, and what data you would get 
out of it, please email me.

Of course, independently of the test, if you're interested in participating in 
the P4P Working Group, we'd love to hear from you!

- Laird Popkin, CTO, Pando Networks
  email: [EMAIL PROTECTED]
  mobile: 646/465-0570

- Original Message -
From: Alexander Harrowell [EMAIL PROTECTED]
To: Stephane Bortzmeyer [EMAIL PROTECTED]
Cc: nanog@nanog.org
Sent: Tuesday, April 22, 2008 10:10:28 AM (GMT-0500) America/New_York
Subject: Re: [Nanog] Lies, Damned Lies, and Statistics [Was: Re: ATT VP: 
Internet to hit capacity by 2010]

Personally I consider P4P a big step forward; it's good to see Big Verizon
engaging with these issues in a non-coercive fashion.

Just to braindump a moment, it strikes me that it would be very useful to be
able to announce preference metrics by netblock (for example, to deal with
networks with varied internal cost metrics or to pref-in the CDN servers)
but also risky. If that was done, client developers would be well advised to
implement a check that the announcing network actually owns the netblock
they are either preffing in (to send traffic via a suboptimal route/through
a spook box of some kind/onto someone else's pain-point) or out (to restrict
traffic from reaching somewhere); you wouldn't want a hijack, whether
malicious or clue-deficient.

There is every reason to encourage the use of dynamic preference.


On Tue, Apr 22, 2008 at 2:54 PM, Stephane Bortzmeyer [EMAIL PROTECTED]
wrote:

 On Tue, Apr 22, 2008 at 02:02:21PM +0100,
  [EMAIL PROTECTED] [EMAIL PROTECTED] wrote
  a message of 46 lines which said:

  This is where all the algorithmic tinkering of the P2P software
  cannot solve the problem. You need a way to insert non-technical
  information about the network into the decision-making process.

 It's strange that noone in this thread mentioned P4P yet. Isn't there
 someone involved in P4P at Nanog?

 http://www.dcia.info/activities/p4pwg/

 IMHO, the biggest issue with P4P is the one mentioned by Alexander
 Harrowell. After that users have been s.d up so many times by some
 ISPs, will they trust this service?


 ___
 NANOG mailing list
 NANOG@nanog.org
 http://mailman.nanog.org/mailman/listinfo/nanog

___
NANOG mailing list
NANOG@nanog.org
http://mailman.nanog.org/mailman/listinfo/nanog

___
NANOG mailing list
NANOG@nanog.org
http://mailman.nanog.org/mailman/listinfo/nanog


Re: [Nanog] ATT VP: Internet to hit capacity by 2010

2008-04-22 Thread Bruce Curtis

On Apr 22, 2008, at 9:15 AM, Marc Manthey wrote:

 Am 22.04.2008 um 16:05 schrieb Bruce Curtis:

  p2p isn't the only way to deliver content overnight, content could
 also be delivered via multicast overnight.

 http://www.intercast.com/Eng/Index.asp

 http://kazam.com/Eng/About/About.jsp


 hmm sorry i did not  get it IMHO multicast ist uselese  for VOD ,
 correct ?


 marc


   Michael said the same thing Also note that IP multicast only works  
for live broadcast TV. and then mentioned that p2p could be used to  
download content during off-peak hours.

   Kazam is a beta test that uses Intercast's technology to download  
content overnight to a users PC via multicast.

   My point was p2p isn't the only way to deliver content overnight,  
multicast could also be used to do that, and in fact at least one  
company is exploring that option.

   The example seemed to fit in well with the other examples in the  
the thread that mentioned TiVo type devices recording content for  
later viewing on demand.

   I agree that multicast can be used for live TV and others have  
mentioned the multicasting of the BBC and www.ostn.tv is another  
example of live multicasting.  However since TiVo type devices today  
record broadcast content for later viewing on demand there could  
certainly be devices that record multicast content for later viewing  
on demand.



---
Bruce Curtis [EMAIL PROTECTED]
Certified NetAnalyst II701-231-8527
North Dakota State University


___
NANOG mailing list
NANOG@nanog.org
http://mailman.nanog.org/mailman/listinfo/nanog


Re: [Nanog] Anyone know how I can contact uky.edu abuse?

2008-04-22 Thread John Kristoff
On Tue, 22 Apr 2008 10:44:07 -0400
Jake Matthews [EMAIL PROTECTED] wrote:

 I've tried from 4-5 different mail providers to send something to 
 [EMAIL PROTECTED]
 Can't figure out what's wrong, as I've never seen AuthRequired anywhere 
 before.

If it is security related, I highly recommend you forward you concern
directly to [EMAIL PROTECTED]  In a nutshell, REN-ISAC is a highly
regarded entity that helps coordinate and communicate within the research
and education community pertaining to info security related matters.
See their web page for further details.

In many cases where default email aliases are broken or an institution
seems unresponsive, they have a way to get someone's attention at even
the smallest and most removed sites.

John

___
NANOG mailing list
NANOG@nanog.org
http://mailman.nanog.org/mailman/listinfo/nanog


Re: [Nanog] Anyone know how I can contact uky.edu abuse?

2008-04-22 Thread Michael Holstein
Try the voice route .. their helpdesk is (859) 257-1300

Cheers,

Michael Holstein
Cleveland State University
 I've tried from 4-5 different mail providers to send something to 
 [EMAIL PROTECTED]
 Can't figure out what's wrong, as I've never seen AuthRequired anywhere 
 before.


 Every single one of them has gotten the reply of:

 *Delivery has failed to these recipients or distribution lists:*

 [EMAIL PROTECTED] mailto:[EMAIL PROTECTED]
 Your message wasn't delivered because of security policies. Microsoft 
 Exchange will not try to redeliver this message for you. Please provide 
 the following diagnostic text to your system administrator.

 
 Sent by Microsoft Exchange Server 2007

 Generating server: ad.uky.edu http://ad.uky.edu

 [EMAIL PROTECTED] mailto:[EMAIL PROTECTED]
 #550 5.7.1 RESOLVER.RST.AuthRequired; authentication required 
 ##rfc822;[EMAIL PROTECTED] 
 mailto:[EMAIL PROTECTED]



 ___
 NANOG mailing list
 NANOG@nanog.org
 http://mailman.nanog.org/mailman/listinfo/nanog

   


___
NANOG mailing list
NANOG@nanog.org
http://mailman.nanog.org/mailman/listinfo/nanog


Re: [Nanog] ATT VP: Internet to hit capacity by 2010

2008-04-22 Thread Joe Greco
   IP multicast does not help you when you have 1000 subscribers 
   all pulling in 1000 unique streams.
  
  Yes, that's potentially a problem.  That doesn't mean that 
  multicast can not be leveraged to handle prerecorded 
  material, but it does suggest that you could really use a 
  TiVo-like device to make best use. 
 
 You mean a computer? Like the one that runs file-sharing
 clients? 

Like the one that nobody really wants to watch large quantities of
television on?  Especially now that it's pretty common to have large,
flat screen TV's, and watching TV even on a 24 monitor feels like a
throwback to the '80's?

How about the one that's shaped like a TiVo and has a built-in remote
control, sane operating software, can be readily purchased and set up
by a non-techie, and is known to work well?

I remember all the fuss about how people would be making phone calls
using VoIP and their computers.  Yet most of the time, I see VoIP
consumers transforming VoIP to legacy POTS, or VoIP hardphones, or
stuff like that.  I'm going to make a guess and take a stab and say
that people are going to prefer to keep their TV's somewhat more TV-
like.

 Or Squid? Or an NNTP server? 

Speaking as someone who's run the largest Squid and news server
deployments in this region, I think I can safely say - no.

It's certainly fine to note that both Squid and NNTP have elements that
deal with transferring large amounts of data, and that fundamentally 
similar elements could play a role in the distribution model, but I see
no serious role for those at the set-top level.

 Is video so different from other content? Considering the 
 volume of video that currently traverses P2P networks I really
 don't see that there is any need for an IP multicast solution
 except for news feeds and video conferencing.

Wow.  Okay.  I'll just say, then, that such a position seems a bit naive,
and I suspect that broadband networks are going to be crying about the
sheer stresses on their networks, when moderate numbers of people begin
to upload videos into their TiVo, which then share them with other TiVo's
owned by their friends around town, or across an ocean, while also
downloading a variety of shows from a dozen off-net sources, etc.

I really see the any-to-any situation as being somewhat hard on networks,
but if you believe that not to be the case, um, I'm listening, I guess.

  What seems more likely is that we'll see an evolution of more 
  specialized offerings, 
 
 Yes. The overall trend has been to increasingly split the market
 into smaller slivers with additional choices being added and older
 ones still available. 

Yes, but that's still a broadcast model.  We're talking about an evolution
(potentially _r_evolution) of technology where the broadcast model itself
is altered.

 During the shift to digital broadcasting in
 the UK, we retained the free-to-air services with more channels
 than we had on analog. Satellite continued to grow in diversity and
 now there is even a Freesat service coming online. Cable TV is still
 there although now it is usually bundled with broadband Internet as
 well as telephone service. You can access the Internet over your mobile
 phone using GPRS, or 3G and wifi is spreading slowly but surely.

Yes.

 But one thing that does not change is the number of hours in the day.
 Every service competes for scarce attention spans, 

Yes.  However, some things that do change:

1) Broadband speeds continue to increase, making it possible for more
   content to be transferred

2) Hard drives continue to grow, and the ability to store more, combined
   with higher bit rates (HD, less artifact, whatever) means that more
   bits can be transferred to fill the same amount of time

3) Devices such as TiVo are capable of downloading large amounts of material
   on a speculative basis, even on days where #hrs-tv-watched == 0.  I
   suspect that this effect may be a bit worse as more diversity appears,
   because instead of hitting stop during a 30-second YouTube clip, you're
   now hitting delete 15 seconds into a 30-minute InterneTiVo'd show.  I
   bet I can clear out a few hours worth of not-that-great programming in
   5 minutes...

 and a more-or-less
 fixed portion of people's disposable income. Based on this, I don't
 expect to see any really huge changes. 

That's fair enough.  That's optimistic (from a network operator's point
of view.)  I'm afraid that such changes will happen, however.

  That may allow some less popular channels to come into 
  being.  
 
 YouTube et al. 

The problem with that is that there's money to be had, and if you let 
YouTube host your video, it's YouTube getting the juicy ad money.  An
essential quality of the Internet is the ability to eliminate the
middleman, so even if YouTube has invented itself as a new middleman,
that's primarily because it is kind of a new thing, and we do not yet
have ways for the average user to easily serve video clips a different
way.  That will almost 

Re: [Nanog] ATT VP: Internet to hit capacity by 2010

2008-04-22 Thread Adrian Chadd
On Tue, Apr 22, 2008, Marc Manthey wrote:

 hmm sorry i did not  get it IMHO multicast ist uselese  for VOD ,  
 correct ?

As a delivery mechanism to end-users? Sure.

As a way of feeding content to edge boxes which then serve VOD?
Maybe not so useless. But then, its been years since I toyed with
IP over satellite to feed ${STUFF}.. :)



Adrian


___
NANOG mailing list
NANOG@nanog.org
http://mailman.nanog.org/mailman/listinfo/nanog


Re: [Nanog] ATT VP: Internet to hit capacity by 2010

2008-04-22 Thread Joe Abley

On 22 Apr 2008, at 12:47, Joe Greco wrote:

 You mean a computer? Like the one that runs file-sharing
 clients?

 Like the one that nobody really wants to watch large quantities of
 television on?

Perhaps more like the mac mini that's plugged into the big plasma  
screen in the living room? Or one of the many stereo-component-styled  
media PCs sold for the same purpose, perhaps even running Windows  
MCE, a commercial operating system sold precisely because people want  
to hook their computers up to televisions?

Or the old-school hacked XBox running XBMC, pulling video over SMB  
from the PC in the other room?

Or the XBox 360 which can play media from the home-user NAS in the  
back room? The one with the bittorrent client on it? :-)


Joe


___
NANOG mailing list
NANOG@nanog.org
http://mailman.nanog.org/mailman/listinfo/nanog


Re: [Nanog] ATT VP: Internet to hit capacity by 2010

2008-04-22 Thread Williams, Marc
The OSCAR is the first H.264 encoder appliance designed by HaiVision
specifically for QuickTime environments. It natively supports
the RTSP streaming media protocol. The OSCAR can stream directly to
QuickTime supporting up to full D1 resolution (full standard
definition resolution or 720 x 480 NTSC / 576 PAL) at video bit rates up
to 1.5 Mbps. The OSCAR supports either multicast or unicast
RTSP sessions. With either, up to 10 separate destination streams can be
generated by a single OSCAR encoder (more at lower bit
rates). So, on a college campus for example, this simple, compact,
rugged appliance can be placed virtually anywhere and with a
simple network connection can stream video to any QuickTime client on
the local network or over the WAN. If more than 10
QuickTime clients need to view or access the video, the OSCAR can be
directed to a QuickTime Streaming Server which can typically
host well over 1000 clients 

 -Original Message-
 From: Brandon Galbraith [mailto:[EMAIL PROTECTED] 
 Sent: Tuesday, April 22, 2008 1:51 PM
 To: Joe Abley
 Cc: nanog@nanog.org; Joe Greco
 Subject: Re: [Nanog] ATT VP: Internet to hit capacity by 2010
 
 On 4/22/08, Joe Abley [EMAIL PROTECTED] wrote:
 
 
  On 22 Apr 2008, at 12:47, Joe Greco wrote:
 
   You mean a computer? Like the one that runs file-sharing clients?
  
   Like the one that nobody really wants to watch large 
 quantities of 
   television on?
 
 
  Perhaps more like the mac mini that's plugged into the big plasma 
  screen in the living room? Or one of the many 
 stereo-component-styled 
  media PCs sold for the same purpose, perhaps even running Windows 
  MCE, a commercial operating system sold precisely because 
 people want 
  to hook their computers up to televisions?
 
  Or the old-school hacked XBox running XBMC, pulling video over SMB 
  from the PC in the other room?
 
  Or the XBox 360 which can play media from the home-user NAS in the 
  back room? The one with the bittorrent client on it? :-)
 
 
 Don't forget the laptop or thin desktop hooked up to the 
 24-60 inch monitor in the bedroom/living room to watch 
 Netflix Watch It Now content (which there is no limit on how 
 much can be viewed by a customer).
 
 -brandon
 ___
 NANOG mailing list
 NANOG@nanog.org
 http://mailman.nanog.org/mailman/listinfo/nanog
 

___
NANOG mailing list
NANOG@nanog.org
http://mailman.nanog.org/mailman/listinfo/nanog


Re: [Nanog] Cogent Router dropping packets

2008-04-22 Thread John van Oppen (list account)
I know I have experienced the engineering department there as well, the
best one was when they wanted paper documentation for every route I
asked to have in our filters...  (and they were incapable of using
RADB).   It was especially odd since we have  80 of our own peers and
three other transit providers to who we were announcing over 100 routes
while they still wanted paper docs.

But, filters seem to be an annoyance for most big providers...   I have
been trying to get level3 to fix our radb-based filtering for a while
now (it just stopped pulling new updates for some reason).  :)

John


-Original Message-
From: manolo [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, April 22, 2008 7:23 AM
To: Joe Greco
Cc: [EMAIL PROTECTED]
Subject: Re: [Nanog] Cogent Router dropping packets

Well it also was the total arrogance on the part of Cogent engineering 
and management taking zero responsibility and pushing it back everytime 
valid issue or not. You had to be there.  But everyone has a different 
opinion, my opinion is set regardless of what cogent tries to sell me
now.



Manolo

Joe Greco wrote:
 Well it had sounded like I was in the minority and should keep my
mouth 
 shut. But here goes. On several occasions the peer that would
advertise 
 our routes would drop and with that the peer with the full bgp tables

 would drop as well. This happened for months on end. They tried
blaming 
 our 6500, our fiber provider, our IOS version, no conclusive findings

 where ever found that it was our problem. After some testing at the 
 local Cogent office by both Cogent and myself, Cogent decided that
they 
 could make a product that would allow us too one have only one peer

 and two to connect directly to the GSR and not through a small
catalyst. 
 Low and behold things worked well for some time after that.

   This all happened while we had 3 other providers on the same router

 with no issues at all. We moved gbics, ports etc around to make sure
it 
 was not some odd ASIC or throughput issue with the 6500.
 

 Perhaps you haven't considered this, but did it ever occur to you that
 Cogent probably had the same situation?  They had a router with a
bunch
 of other customers on it, no reported problems, and you were the
oddball
 reporting significant issues?

 Quite frankly, your own description does not support this as being a
 problem inherent to the peerA/peerB setup.

 You indicate that the peer advertising your routes would drop.  The
peer
 with the full BGP tables would then drop as well.  Well, quite
frankly,
 that makes complete sense.  The peer advertising your routes also
 advertises to you the route to get to the multihop peer, which you
need
 in order to be able to talk to that.  Therefore, if the directly
connected
 BGP goes away for any reason, the multihop is likely to go away too.

 However, given the exact same hardware minus the multihop, your direct
 BGP was still dropping.  So had they been able to send you a full
table
 from the aggregation router, the same thing probably would have
happened.

 This sounds more like flaky hardware, dirty optics, or a bad cable (or
 several of the above).

 Given that, it actually seems quite reasonable to me to guess that it
 could have been your 6500, your fiber provider, or your IOS version
that
 was introducing some problem.  Anyone who has done any reasonable
amount
 of work in this business will have seen all three, and many of the
people
 here will say that the 6500 is a bit flaky and touchy when pushed into
 service as a real router (while simultaneously using them in their 
 networks as such, heh, since nothing else really touches the price per
 port), so Cogent's suggestion that it was a problem on your side may
have
 been based on bad experiences with other customer 6500's.

 However, it is also likely that it was some other mundane problem, or
a 
 problem with the same items on Cogent's side.  I would consider it a 
 shame that Cogent didn't work more closely with you to track down the 
 specific issue, because most of the time, these things can be isolated

 and eliminated, rather than being potentially left around to mess up 
 someone in the future (think: bad port).

 ... JG
   


___
NANOG mailing list
NANOG@nanog.org
http://mailman.nanog.org/mailman/listinfo/nanog

___
NANOG mailing list
NANOG@nanog.org
http://mailman.nanog.org/mailman/listinfo/nanog


[Nanog] OARC DNS Operations Meeting, Brooklyn NY, 4/5th June

2008-04-22 Thread Keith Mitchell
I'm pleased to announce the next OARC DNS Operations meeting will take
place immediately after the NANOG43 meeting, in Brooklyn, NY, USA.

The venue will be CUNY's Brooklyn College, about 5 miles from the NANOG
Hotel. The open DNS Operations workshop will take place on the afternoon
of Wed 4th June, and the morning of Thu 5th June. The OARC members-only
meeting will take place on the afternoon of Thu 5th June.

Further meeting information will be published as it becomes available at:

http://public.oarci.net/dns-operations/workshop-2008/

We are grateful to OARC members for supporting this nonprofit meeting,
allowing registration to be free for all participants. In order to help
with meeting logistics, if you are planning to attend please register at

https://oarc.isc.org/register.php

as soon as possible.

The strengths of OARC's meetings are very much due to the active
participation of our members and the wider operators' community, and we
are seeking presentations and suggestions for speakers on topics
relevant to DNS Operations and Research. If you have either, please
submit a short abstract to [EMAIL PROTECTED] by the 31st May.

Please contact me if you can assist with any of the above or need
further information.

Keith Mitchell
OARC Programme Manager


___
NANOG mailing list
NANOG@nanog.org
http://mailman.nanog.org/mailman/listinfo/nanog


Re: [Nanog] Cogent Router dropping packets

2008-04-22 Thread Pete Templin
John van Oppen (list account) wrote:
 I know I have experienced the engineering department there as well, the
 best one was when they wanted paper documentation for every route I
 asked to have in our filters...  (and they were incapable of using
 RADB).   It was especially odd since we have  80 of our own peers and
 three other transit providers to who we were announcing over 100 routes
 while they still wanted paper docs.

I've fixed this by throwing their own policies back at them.  Point out 
to them that the route is already appearing globally through your AS, 
and remind them that their policy, section 3b, already allows that.  :)

On the previous topic, I'd have to say that their two-peer system is 
perhaps one of the better, if not best, multihop implementations I've 
seen.  Amongst other things, it tends to provide a rapid assessment of 
life in the POP.  I just wish they'd use their network status messages 
  to reflect when they were having problems, instead of just problems 
that are too large for the call center to handle.  :(

pt


___
NANOG mailing list
NANOG@nanog.org
http://mailman.nanog.org/mailman/listinfo/nanog


Re: [Nanog] ATT VP: Internet to hit capacity by 2010

2008-04-22 Thread Joe Greco
 On 22 Apr 2008, at 12:47, Joe Greco wrote:
  You mean a computer? Like the one that runs file-sharing
  clients?
 
  Like the one that nobody really wants to watch large quantities of
  television on?
 
 Perhaps more like the mac mini that's plugged into the big plasma  
 screen in the living room? Or one of the many stereo-component-styled  
 media PCs sold for the same purpose, perhaps even running Windows  
 MCE, a commercial operating system sold precisely because people want  
 to hook their computers up to televisions?
 
 Or the old-school hacked XBox running XBMC, pulling video over SMB  
 from the PC in the other room?
 
 Or the XBox 360 which can play media from the home-user NAS in the  
 back room? The one with the bittorrent client on it? :-)

Pretty much.  People have a fairly clear bias against watching anything
on your conventional PC.  This probably has something to do with the way
the display ergonomics work; my best guess is that most people have their
PC's set up in a corner with a chair and a screen suitable for work at a
distance of a few feet.  As a result, there's usually a clear delineation
between devices that are used as general purpose computers, and devices
that are used as specialized media display devices. 

The Mac Mini may be an example of a device that can be used either way, 
but do you know of many people that use it as a computer (and do all their
normal computing tasks) while it's hooked up to a large TV?  Even Apple
acknowledged the legitimacy of this market by releasing AppleTV.

People generally do not want to hook their _computer_ up to televisions,
but rather they want to hook _a_ computer up to television so that they're
able to do things with their TV that an off-the-shelf product won't do for
them.  That's an important distinction, and all of the examples you've
provided seem to be examples of the latter, rather than the former, which
is what I was talking about originally. 

If you want to discuss the latter, then we've got to include a large field
of other devices, ironically including the TiVo, which are actually
programmable computers that have been designed for specific media tasks,
and are theoretically reprogrammable to support a wide variety of 
interesting possibilities, and there we have the entry into the avalanche
of troubling operational issues that could result from someone releasing
software that distributes large amounts of content over the Internet, and
...  oh, my bad, that brings us back to what we were talking about.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.

___
NANOG mailing list
NANOG@nanog.org
http://mailman.nanog.org/mailman/listinfo/nanog


[Nanog] Crypto export restricted prefix list

2008-04-22 Thread Kevin Blackham
Is there a prefix list available listing the IP space of cryptographic
export restricted countries?  My google skills are failing me.  I'm
required to apply a ban on North Korea, Iran, Syria, Sudan and Cuba.

___
NANOG mailing list
NANOG@nanog.org
http://mailman.nanog.org/mailman/listinfo/nanog


Re: [Nanog] ATT VP: Internet to hit capacity by 2010

2008-04-22 Thread Marc Manthey
 ...is the first H.264 encoder .. designed by 
 specifically for ... environments. It natively supports
 the RTSP streaming media protocol.  can stream directly to
 .

hi marc
so your  oskar can rtsp multicast stream over ipv6 and quicktime  
not , or was this just an ad ?

cheers

Marc


-- 
Les enfants teribbles - research and deployment
Marc Manthey -  Hildeboldplatz 1a
D - 50672 Köln - Germany
Tel.:0049-221-3558032
Mobil:0049-1577-3329231
jabber :[EMAIL PROTECTED]
blog : http://www.let.de
ipv6 http://www.ipsix.org

Klarmachen zum Ändern!
http://www.piratenpartei-koeln.de/
___
NANOG mailing list
NANOG@nanog.org
http://mailman.nanog.org/mailman/listinfo/nanog