Re: Resilient streaming protocols

2011-06-11 Thread Petri Helenius

There is an RTP FEC extension...
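
For illustration, a minimal sketch of the XOR-parity idea behind generic
RTP FEC (in the spirit of RFC 5109; the framing here is made up, and real
FEC pads packets to equal length before XORing):

    # One parity packet protects a group: any single lost packet can be
    # rebuilt by XORing the parity with the surviving packets.
    from functools import reduce

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    group = [b"pkt0", b"pkt1", b"pkt2"]   # equal-length media packets
    parity = reduce(xor, group)           # the FEC packet sent alongside

    # Suppose pkt1 is lost in transit: recover it from survivors + parity.
    recovered = reduce(xor, [group[0], group[2], parity])
    assert recovered == b"pkt1"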

Pete

On May 29, 2011, at 12:40 AM, Aria Stewart wrote:

 Anyone have any interest in a forward-error-corrected streaming protocol 
 suitable for multicast, possibly both audio and video?
 
 Good for when there's some packet loss.
 
 
 Aria Stewart
 
 
 




Re: How polluted is 1/8?

2010-02-07 Thread Petri Helenius

Hi,

We would also be happy to sink the traffic and provide captures and statistics 
for general consumption.
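
For what it's worth, a minimal sketch of the kind of capture we would run
(scapy-based; the interface name and packet count are placeholders):

    # Capture traffic sunk toward 1.0.0.0/8 and write it out as a pcap.
    from scapy.all import sniff, wrpcap

    pkts = sniff(iface="eth0", filter="dst net 1.0.0.0/8", count=100000)
    wrpcap("one-slash-eight.pcap", pkts)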

Pete

On Feb 4, 2010, at 9:30 PM, Jared Mauch wrote:

 
 On Feb 4, 2010, at 1:27 PM, Kevin Loch wrote:
 
 Mirjam Kuehne wrote:
 Hello,
 After 1/8 was allocated to APNIC last week, the RIPE NCC did some 
 measurements to find out how polluted this block really is.
 See some surprising results on RIPE Labs: 
 http://labs.ripe.net/content/pollution-18
 Please also note the call for feedback at the bottom of the article.
 
 The most surprising thing in that report was that someone has an AMS-IX
 port at just 10 megs.  It would be nice to see an actual measurement of
 the traffic and daily/weekly changes. A breakdown of the flow data by
 source ASN and source prefix (for the top 50-100 sources) would also be
 interesting.
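
(For illustration, a sketch of that breakdown, assuming flow records are
already parsed into (src_asn, src_prefix, bytes) tuples; the sample values
are made up:)

    # Top-N traffic sources by (source ASN, source prefix).
    from collections import Counter

    def top_sources(flows, n=50):
        totals = Counter()
        for src_asn, src_prefix, nbytes in flows:
            totals[(src_asn, src_prefix)] += nbytes
        return totals.most_common(n)

    flows = [(65001, "198.51.100.0/24", 1200),
             (65002, "203.0.113.0/24", 900),
             (65001, "198.51.100.0/24", 300)]
    print(top_sources(flows, n=2))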
 
 There was a call on the apnic list for someone to sink some of the traffic.
 
 I'd like to see someone capture the data and post pcaps/netflow analysis, and 
 possibly just run an HTTP server on that /24 so people can test whether their 
 network is broken.
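
(A sketch of such a test server; the address is a placeholder for one
inside the /24, and binding to port 80 needs privileges:)

    # Trivial HTTP responder bound to an address inside the test /24.
    from http.server import HTTPServer, SimpleHTTPRequestHandler

    HTTPServer(("1.1.1.1", 80), SimpleHTTPRequestHandler).serve_forever()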
 
 I've taken a peek at the traffic, and I don't think it's hundreds of megs, but 
 without a global view, who knows.
 
 - Jared
 




Re: [Nanog] Lies, Damned Lies, and Statistics [Was: Re: AT&T VP: Internet to hit capacity by 2010]

2008-04-22 Thread Petri Helenius

Time to push multicast as transport for BitTorrent? If downloads got 
better performance that way, I think the clients would be around quicker 
than multicast would be enabled for consumer DSL or cable.
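
A minimal sketch of the receiving side, just to show how little a client
needs in order to join a group (the group address and port are made up):

    # Join a multicast group and pull one datagram off the wire.
    import socket
    import struct

    GROUP, PORT = "239.1.2.3", 5004   # hypothetical group/port

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))
    mreq = struct.pack("4s4s", socket.inet_aton(GROUP),
                       socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    data, addr = sock.recvfrom(2048)  # one block of the stream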

Pete


Paul Ferguson wrote:

 -- William Warren [EMAIL PROTECTED] wrote:

  Here in Comcast land, HDTV is actually averaging around 12 megabits a
  second. Still adds up to staggering numbers. :)

 Another disturbing fact inside this entire mess is that, because providers
 are compressing HD content, consumers are noticing the degradation in
 quality:

 http://cbs5.com/local/hdtv.cable.compression.2.705405.html

 So we have a Tragedy of the Commons situation created entirely by the
 telcos themselves: trying to force consumer decisions, then failing to
 deliver, yet bemoaning the fact that the infrastructure is being
 over-utilized by file-sharers (or Exafloods, or whatever the apocalyptic
 issue of the day is for telcos).

 A real Charlie Foxtrot.

 -- ferg



 --
 Fergie, a.k.a. Paul Ferguson
  Engineering Architecture for the Internet
  fergdawg(at)netzero.net
  ferg's tech blog: http://fergdawg.blogspot.com/






Re: [Nanog] Lies, Damned Lies, and Statistics [Was: Re: AT&T VP: Internet to hit capacity by 2010]

2008-04-22 Thread Petri Helenius
[EMAIL PROTECTED] wrote:
 But there is another way. That is for software developers to build a
 modified client that depends on a topology guru for information on the
 network topology. This topology guru would be some software that is run
   
While the current BitTorrent implementation is suboptimal for large 
swarms (where the number of adjacent peers is significantly smaller than 
the total number of participants), I fail to see the mathematics by which 
topology information would produce better results than the usual greedy 
algorithm, in which data is requested from the peers that are delivering 
it at the best rates. If local peers with sufficient upstream bandwidth 
exist, the majority of data blocks are already retrieved from them.
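
For illustration, a minimal sketch of that greedy selection, assuming each
connection exposes an observed download rate (names and numbers are
hypothetical):

    # Request blocks from the k peers currently delivering fastest.
    def pick_peers(observed_rates, k=4):
        return sorted(observed_rates, key=observed_rates.get,
                      reverse=True)[:k]

    rates = {"local-peer": 950.0, "same-city": 480.0, "overseas": 120.0}
    print(pick_peers(rates, k=2))  # nearby peers with good upstream win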

In many locales, ISPs tend to limit the available upstream on consumer 
connections, which usually causes more distant bits to be delivered 
instead.

I think the most important metric to study is the number of times the 
same piece of data is transmitted within a given time period, and then to 
figure out how to optimize for that. For a new episode of BSG, a few 
hundred thousand copies move in the first hour and a million or so in 
the first few days. With headers and overhead, we might already be 
hitting a petabyte per episode. RSS feeds seem to shorten the 
distribution ramp-up after release.
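
A back-of-the-envelope check (the numbers are assumptions, not
measurements):

    episode_bytes = 1e9   # assume ~1 GB per HD copy
    copies = 1_000_000    # assume ~1M downloads in the first days
    overhead = 1.05       # assume ~5% protocol/header overhead

    total = episode_bytes * copies * overhead
    print(f"{total / 1e15:.2f} PB per episode")  # ~1.05 PB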

The p2p world needs more high-upstream proxies to become more 
effective. I think locality would happen automatically with current 
torrent implementations. However, there are quite a few parties who are 
happy to make it as bad as they can :-)

Is there a problem left to solve that the Akamais of the world haven't 
solved already?

Pete

