Re: IRRd Add New Maintainer (irr_rpsl_submit ?)

2011-05-18 Thread Eduardo Meyer
On Wed, May 18, 2011 at 2:23 AM, Eduardo Meyer dudu.me...@gmail.com wrote:
 Hello,

 I have installed IRRd and I am trying to set it up, just for study
 purposes. I have successfully mirrored some DBs, but I can't manage to
 create my very first maintainer. irrd-user.pdf seems to be the only
 documentation around, and it says nothing about how the admin creates a
 maintainer; it only says that the password used in irrd.conf is the one.

 Here's what I am trying:

 # cat /tmp/step-1
 mntner: MAINT-AS65500
 descr: Test Inc
 admin-c: Dudu M
 tech-c:  Dudu M
 upd-to: eduardo.me...@gmail.com
 mnt-nfy: eduardo.me...@gmail.com
 mnt-by: MAINT-AS65500
 auth: MAIL-FROM eduardo.me...@gmail.com
 changed: eduardo.me...@gmail.com 20110518
 source: SAMPLEDB

 And the command:

 # cat /tmp/step-1 | /usr/local/sbin/irr_rpsl_submit -x -D -v -E
 db-ad...@testing123.net -c 23AWrNgTooc32

 I always get the following error:

 May 18 04:23:16 [18267] #ERROR: New maintainers must be added by a DB
 administrator.
 May 18 04:23:16 [18267] Forwarding new request to db-ad...@testing123.net

 Can someone please help me? I know it seems very simple but I have no
 idea how to do that.

 Thank you.

I managed to add the appropriate entries to my .db file by hand, but I
believe there's a better way to do it, since this way a restart is
needed.

I am sorry for asking it here, but I believe someone will be able to
help me since the irrd-discuss mailing list is so quiet.



Re: Netflix Is Eating Up More Of North America's Bandwidth Than Any Other Company

2011-05-18 Thread Randy Bush
another view might be that netflix's customers are eating the bandwidth

randy



Re: Netflix Is Eating Up More Of North America's Bandwidth Than Any Other Company

2011-05-18 Thread Carl Rosevear
"Eating Up" sounds so overweight and unhealthy.  Since a good number
of us get paid for delivering bits, isn't this a good thing?  Always
glad to see bits and dollars flowing into the Internet, personally.
However, I must express severe dissatisfaction with the topic of the
thread a while ago referencing Comcast trying to charge providers for
delivery over their network.  Maybe I'm wrong, but I'm pretty happy
with the current model...  even if it means a $5/month residential
rate hike (or something).

--C



RE: Netflix Is Eating Up More Of North America's Bandwidth Than AnyOther Company

2011-05-18 Thread Leigh Porter


 -Original Message-
 From: Carl Rosevear [mailto:crosev...@skytap.com]
 
 Eating Up sounds so overweight and unhealthy.  Since a good number
 of us get paid for delivering bits, isn't this a good thing?  Always
 glad to see bits and dollars flowing into the Internet, personally.
 However must express severe dissatisfaction with the topic of the
 thread a while ago referencing Comcast trying to charge providers for
 delivery over their network.  Maybe I'm wrong, but I'm pretty happy
 with the current model...  even if it means a $5/month residential
 rate hike (or something).
 
 --C
 

Well, it depends on whether Netflix pays for the bandwidth they use or gets
it all for free with settlement-free peering. If, suddenly, your business
model breaks because of a huge demand for high-bandwidth services by
your customers, then either you need to charge your customers more or
Netflix (or whoever) needs to share the pie.

--
Leigh Porter


__
This email has been scanned by the MessageLabs Email Security System.
For more information please visit http://www.messagelabs.com/email 
__



Re: Netflix Is Eating Up More Of North America's Bandwidth Than AnyOther Company

2011-05-18 Thread Phil Regnauld
Leigh Porter (leigh.porter) writes:
 
 Well it depends if Netflix pay for the bandwidth they use

You mean, customers have to pay for the bandwidth they use.
I'm sure NetFlix is paying *their* network and other transit providers
for outgoing bandwidth they consume.

 or if they get it all for free with non settlement peering. If, suddenly,
 your business model breaks because of a huge demand for high bandwidth
 services by your customers then either you need to charge your customers
 more or Netflix (or whoever) need to share the pie.

Whoever ?  Nah, the consumers.  Bad business model, change business 
model.

Phil



Re: MITM attacks or the Half/Circuit model - was Netflix Is Eating

2011-05-18 Thread bmanning
On Wed, May 18, 2011 at 12:32:49PM +0200, Phil Regnauld wrote:
 Leigh Porter (leigh.porter) writes:
  
  Well it depends if Netflix pay for the bandwidth they use
 
   You mean, customers have to pay for the bandwidth they use.
   I'm sure NetFlix is paying *their* network and other transit providers
   for outgoing bandwidth they consume.
   Phil


note the classic Man-In-The-Middle attack here.  Or in other words,
the ITU half-circuit billing model for traditional telecommunications
companies.

The telecom model is: I'll provide you with a transit path to me,
and trust me to hand your communications to the other party you wish
to communicate with.  So GTE / MaBell gets to bill -both- parties
at their usual usurious rates.  The problem here is that the incumbent
operators have fought, and are fighting, tooth and nail to ensure their
near monopoly on access.

So...

We either need to re-regulate them to assure equal access at equitable
rates -or- we need to de-regulate the access market and open up last
mile ROW to all comers.  What we have done is de-regulate the access
and retain the monopoly status on last mile ROW.  The incumbents have
captive markets and can charge whatever the market will bear.  Great
work if you can get it.

If we truly believed in end-2-end, we might see more systems
using or trying to find other access paths ...   YMMV of course.

/bill



Re: Netflix Is Eating Up More Of North America's Bandwidth Than Any Other Company

2011-05-18 Thread Alex Brooks
On Wed, May 18, 2011 at 7:56 AM, Randy Bush ra...@psg.com wrote:

 another view might be that netflix's customers are eating the bandwidth

 randy


One of the UK's large residential ISPs publishes what their customers
use bandwidth for at
http://www.talktalkmembers.com/content/view/154/159/
Streaming protocols do use up a large percentage there, but only 2.9% is
listed as used by BBC iPlayer (like a no-advertising version of Hulu,
but only for one broadcaster); Rapidshare and Facebook are 1.9% each,
whilst YouTube is 9.7%.  It's kind of interesting.



Re: Netflix Is Eating Up More Of North America's Bandwidth Than Any Other Company

2011-05-18 Thread Randy Bush
 Since a good number of us get paid for delivering bits, isn't this a
 good thing?

at layer eight, having a single very large customer can be a source of
unhappy surprises.

randy



Re: Netflix Is Eating Up More Of North America's Bandwidth Than Any Other Company

2011-05-18 Thread Eliot Lear


On 5/18/11 2:36 PM, Randy Bush wrote:
 at layer eight, having a single very large customer can be a source of
 unhappy surprises.

Heh- no matter what layers one through seven are...



blocking unwanted traffic from hitting gateway

2011-05-18 Thread Rogelio
I've got about 1000 people hammering a Linux gateway with http
requests, but only about 150 of them are authenticated users for the
ISP.

Once someone authenticates, then I want their traffic to pass through
okay.  But if they're not an authenticated user, I would like to
ideally block those http requests (e.g. Google updater, AV scanners,
etc) from ever tying up my web server.

Is there some sort of box I could put in front (e.g. OpenBSD pf in
transparency mode) or maybe some sort of filter on the webserver?
This solution would need to be tied into the authentication services
so that only authenticated users hit the gateway.
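A minimal sketch of the OpenBSD pf approach suggested above, assuming the gateway runs pf and that your authentication system can run pfctl when a user logs in or out (the interface name em0 and the table name "authed" are made up for illustration):

```pf
# /etc/pf.conf fragment -- hypothetical interface and table names
table <authed> persist

# Unauthenticated clients: reject their HTTP(S) before it ties up the server
block return in quick on em0 proto tcp from ! <authed> to any port { 80 443 }

# Authenticated clients pass through normally
pass in on em0 proto tcp from <authed> to any port { 80 443 }
```

The auth hook would then run `pfctl -t authed -T add 192.0.2.10` on login and `pfctl -t authed -T delete 192.0.2.10` on logout. OpenBSD's authpf(8) automates exactly this table-management pattern, though it keys on an SSH session rather than an HTTP login.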

-- 
Also on LinkedIn?  Feel free to connect if you too are an open
networker: scubac...@gmail.com



Re: blocking unwanted traffic from hitting gateway

2011-05-18 Thread Dobbins, Roland
On May 18, 2011, at 7:42 PM, Rogelio wrote:

 This solution would need to be tied into the authentication services so 
 authenticated users hit the gateway.


So the attackers can just hammer the authentication subsystem and take it down, 
instead?

;

By going the 'authentication' route in the sense you mean it, you'll make it 
even more trivially easy to DDoS the Web servers than is possible without such 
a system.

http://www.mail-archive.com/nanog@nanog.org/msg17914.html

---
Roland Dobbins rdobb...@arbor.net // http://www.arbornetworks.com

The basis of optimism is sheer terror.

  -- Oscar Wilde




Re: blocking unwanted traffic from hitting gateway

2011-05-18 Thread Matthew Palmer
On Wed, May 18, 2011 at 09:42:03AM -0300, Rogelio wrote:
 I've got about 1000 people hammering a Linux gateway with http
 requests, but only about 150 of them are authenticated users for the
 ISP.

Are you the ISP, or someone else?  Why is the gateway caring that the
requests are HTTP?  Is it also an HTTP server (and if so, does it matter
that it's a gateway?)

 Once someone authenticates, then I want their traffic to pass through
 okay.  But if they're not an authenticated user, I would like to
 ideally block those http requests (e.g. Google updater, AV scanners,
 etc) from ever tying up my web server.

What authentication mechanisms are acceptable?  HTTP at the request level,
captive portal, custom app, etc etc etc.

 Is there some sort of box I could put in front (e.g. OpenBSD pf in
 transparency mode) or maybe some sort of filter on the webserver?

What risk or problem are you actually trying to mitigate against?  Sure, you
can put all sorts of things in front of it or on it, but are you just going
to be moving the problem (whatever it may be) to another box, adding
complexity for no good reason?

 This solution would need to be tied into the authentication services
 so authenticated users hit the gateway.

You might want to mention what authentication services you're using if you
want any useful recommendation about tying into it.

- Matt

-- 
The hypothalamus is one of the most important parts of the brain, involved
in many kinds of motivation, among other functions. The hypothalamus
controls the Four F's: 1. fighting; 2. fleeing; 3. feeding; and 4. mating.
-- Psychology professor in neuropsychology intro course



Re: blocking unwanted traffic from hitting gateway

2011-05-18 Thread Wil Schultz
On May 18, 2011, at 5:42 AM, Rogelio wrote:

 I've got about 1000 people hammering a Linux gateway with http
 requests, but only about 150 of them are authenticated users for the
 ISP.
 
 Once someone authenticates, then I want their traffic to pass through
 okay.  But if they're not an authenticated user, I would like to
 ideally block those http requests (e.g. Google updater, AV scanners,
 etc) from ever tying up my web server.
 
 Is there some sort of box I could put in front (e.g. OpenBSD pf in
 transparency mode) or maybe some sort of filter on the webserver?
 This solution would need to be tied into the authentication services
 so authenticated users hit the gateway.
 
 -- 
 Also on LinkedIn?  Feel free to connect if you too are an open
 networker: scubac...@gmail.com
 

I use Apache mod_rewrite in front of some stuff; there are a couple of examples 
where I look for a cookie and make sure it's set to some value before they can 
do something interesting. 
If the cookie doesn't exist, or if it's not set to the desired value, it goes 
somewhere else that's easily cacheable.

Here's an example; the cookie name is "loggedin" and the value is "true". If 
that doesn't match up, it proxies over to login.jsp.

# If the Cookie header does not contain loggedin=true ...
RewriteCond %{HTTP_COOKIE}  !loggedin=true
# ... proxy ([P]) the request to the login page and stop processing ([L]).
RewriteRule ^/(.*)  http://%{HTTP:Host}/login.jsp [P,L]

Good luck.

-wil


IPv6 Conventions

2011-05-18 Thread Todd Snyder
As I start working more and more with IPv6 and find myself having to address
services, I am wondering if there are any sort of written or unwritten
'conventions'/best practices that are being adopted about how to address
devices/servers/services.

Specifically:

1) Is there a general convention about addresses for DNS servers? NTP
servers? dhcp servers?
2) Are we tending to use different IPs for each service on a device?
3) Any common addresses/schemes for other common services?
(smtp/snmp/http/ldap/etc)?

Similarly, I've been referring to
http://www.iana.org/assignments/ipv6-address-space/ipv6-address-space.xml for
a list of the 'reserved' space - are there any other blocks/conventions
around addressing that exist?

Finally, what tools do people find themselves using to manage IPv6 and
addressing?  It seems to me that IPAM is almost required to manage IPv6 in
any sane way, even for very small deployments (My home ISP gave me a /56 and
a /64).

I figured this was a fairly operational question/set of questions, so I hope
this is the right venue.

Cheers,

Todd.


Re: Netflix Is Eating Up More Of North America's Bandwidth Than Any Other Company

2011-05-18 Thread Jay Ashworth
- Original Message -
 From: Randy Bush ra...@psg.com

  Since a good number of us get paid for delivering bits, isn't this a
  good thing?
 
 at layer eight, having a single very large customer can be a source of
 unhappy surprises.

I have first-hand experience, having been laid off from my last IT 
director job because such a monopsony customer yanked 3/5 of its business
from my then-employer.

Or ask *hundreds* of 35-year-old companies that used to produce, nearly
exclusively, lots of specialized, flight-certified parts for the Space
Shuttle Program.

Cheers,
-- jra
-- 
Jay R. Ashworth  Baylink   j...@baylink.com
Designer The Things I Think   RFC 2100
Ashworth  Associates http://baylink.pitas.com 2000 Land Rover DII
St Petersburg FL USA  http://photo.imageinc.us +1 727 647 1274



Re: IPv6 Conventions

2011-05-18 Thread Jeroen Massar
On 2011-May-18 16:44, Todd Snyder wrote:
 As I start working more and more with IPv6 and find myself having to address
 services, I am wondering if there are any sort of written or unwritten
 'conventions'/best practices that are being adopted about how to address
 devices/servers/services.
 
 Specifically:
 
 1) Is there a general convention about addresses for DNS servers? NTP
 servers? dhcp servers?
 2) Are we tending to use different IPs for each service on a device?
 3) Any common addresses/schemes for other common services?
 (smtp/snmp/http/ldap/etc)?

Depends mostly on personal preference, I would say.
The same applies to IPv4 as to IPv6.

If you want a service to always map to a specific IP, e.g. because you
anycast/failover-IP it, then a service IP makes sense.

If you have a smaller deployment, then just a service per host and/or
using CNAMEs (except for MX :) can make sense.

 Similarly, I've been referring to
 http://www.iana.org/assignments/ipv6-address-space/ipv6-address-space.xml for
 a list of the 'reserved' space - are there any other blocks/conventions
 around addressing that exist?

Only thing you might want to know is that 2000::/3 is global unicast,
that there is ULA and link-local. For the rest you don't need to know
anything about address blocks, just what the address space is that is
routed to you and that is what you get to use.

Except maybe for BGP where you want to limit what you want to
receive/announce. See google(gert ipv6) aka
http://www.space.net/~gert/RIPE/ipv6-filters.html for information on that.

 Finally, what tools do people find themselves using to manage IPv6 and
 addressing?  It seems to me that IPAM is almost required to manage IPv6 in
 any sane way, even for very small deployments (My home ISP gave me a /56 and
 a /64).

Textfiles, SQL databases. Depends on your need.

Greets,
 Jeroen



Re: IPv6 Conventions

2011-05-18 Thread Iljitsch van Beijnum
On 18 mei 2011, at 16:44, Todd Snyder wrote:

 1) Is there a general convention about addresses for DNS servers? NTP
 servers? dhcp servers?

There are people who do stuff like blah::53 for DNS, or blah:193:77:81:20 for a 
machine that has IPv4 address 193.177.81.20.

For the DNS, I always recommend using a separate /64 for each one, as that way 
you can move them to another location without having to renumber, and make the 
addresses short, so a ::1 address or something, because those are the IPv6 
addresses that you end up typing a lot.

For all the other stuff, just use stateless autoconfig or start from ::1 when 
configuring things manually, although there is also a little value in putting 
some of the IPv4 address in there. Note that 2001:db8::10.0.0.1 is a valid IPv6 
address. Unfortunately, when you see it copied back to you it shows up as 
2001:db8::a00:1, which is less helpful.
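The round-trip described above is easy to check with Python's ipaddress module (a sketch; 2001:db8::/32 is the documentation prefix):

```python
import ipaddress

# An IPv6 address may embed a dotted-quad IPv4 address in its low 32 bits.
addr = ipaddress.IPv6Address("2001:db8::10.0.0.1")

# The canonical (RFC 5952) text form re-renders those 32 bits as hex groups,
# which is why the address "comes back" looking different.
print(addr)  # 2001:db8::a00:1

# Both spellings denote the same address.
print(addr == ipaddress.IPv6Address("2001:db8::a00:1"))  # True
```

10.0.0.1 is 0x0a000001, hence the a00:1 in the compressed form.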

 2) Are we tending to use different IPs for each service on a device?

No, the same Internet Protocol.

 Finally, what tools do people find themselves using to manage IPv6 and
 addressing?

Stateless autoconfig for hosts, EUI-64 addressing for routers, VLAN ID in the 
subnet bits. That makes life simple. Simple be good.




Re: IPv6 Conventions

2011-05-18 Thread Cameron Byrne
On May 18, 2011 8:07 AM, Iljitsch van Beijnum iljit...@muada.com wrote:

 On 18 mei 2011, at 16:44, Todd Snyder wrote:

  1) Is there a general convention about addresses for DNS servers? NTP
  servers? dhcp servers?

 There are people who do stuff like blah::53 for DNS, or blah:193:77:81:20
for a machine that has IPv4 address 193.177.81.20.

 For the DNS, I always recommend using a separate /64 for each one, as that
way you can move them to another location without having to renumber, and
make the addresses short, so a ::1 address or something, because those are
the IPv6 addresses that you end up typing a lot.

 For all the other stuff, just use stateless autoconfig or start from ::1
when configuring things manually although there is also a little value in
putting some of the IPv4 address in there. Note that 2001:db8::10.0.0.1 is a
valid IPv6 address. Unfortunately when you see it copied back to you it
shows up as 2001:db8::a00:1 which is less helpful.

  2) Are we tending to use different IPs for each service on a device?

 No, the same Internet Protocol.

  Finally, what tools do people find themselves using to manage IPv6 and
  addressing?

 Stateless autoconfig for hosts, EUI-64 addressing for routers, VLAN ID in
the subnet bits. That makes life simple. Simple be good.


You may want to use some randomness to limit address scanning.  YMMV on how
well this works or applies; I do it.

Cb



Re: Experience with Open Source load balancers?

2011-05-18 Thread Hammer
I've worked with everything over the years. BigIP, CSS, CSM, ACE (blows),
NetScaler, say when. I've been thru a few RFPs and bake offs and also
evaluated open source options.

1. If you are looking for simple round robin load balancing with decent load
capabilities then there are several open source options in this thread that
may work. As long as you understand that you are going to be expected to
support them.

2. If you are pushing features. SSL termination. Header rewrites. Payload
inspection (NetScaler does application firewalling on the same appliance).
Or other complexities and you are having to deal with enterprise traffic
volume you might be better off with one of the big vendors. Applications
these days are more and more complicated and a high end load balancer with a
stable feature set can often rescue your AppDev team and make you a hero.

Recommend: F5 and Citrix NetScaler. If you are looking to combine your L7 FW
into your LB then you might lean towards NetScaler. If you are looking at
separating those duties you can look at F5. iRules (F5) are the bomb.




 -Hammer-

I was a normal American nerd.
-Jack Herer





On Wed, May 18, 2011 at 12:31 AM, matthew zeier m...@velvet.org wrote:

 I'll pile on here too - there's very little of Mozilla's web infrastructure
 that isn't behind Zeus.

  +1 for Zeus. Use it in our production network with great success.
  Magnitudes cheaper than a solution from F5, and doesn't hide the inner
  workings of the product if you want to do some things outside the
  scope of support.







Re: IPv6 Conventions

2011-05-18 Thread sthaug
 1) Is there a general convention about addresses for DNS servers? NTP
 servers? dhcp servers?

DNS server addresses should be short and easy to type, as already
mentioned.

 2) Are we tending to use different IPs for each service on a device?

In many cases yes - because that makes it possible to easily move the
service to a different box.

 Finally, what tools do people find themselves using to manage IPv6 and
 addressing?

Excel spreadsheets, HaCi.

  It seems to me that IPAM is almost required to manage IPv6 in
 any sane way, even for very small deployments (My home ISP gave me a /56 and
 a /64).

At least as long as you use static addresses. We like static, and tend
to stay away from SLAAC. We do *not* use EUI-64 for router links. For
customer links we use /64, for backbone links we use /124 (ensures
that SLAAC can never ever be used on the link, and also that the two
ends can be numbered ending in 1 and 2 - nice and simple).
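The /124 backbone-link scheme described above can be sketched with Python's ipaddress module (the prefix below is from the documentation range, chosen only for illustration):

```python
import ipaddress

# A /124 leaves 4 host bits: 16 addresses, far too few for SLAAC
# (which requires a /64 on-link prefix), so autoconfig can never occur.
link = ipaddress.IPv6Network("2001:db8:0:ff00::/124")

side_a = link[1]  # one end of the link, ending in ::1
side_b = link[2]  # the other end, ending in ::2

print(side_a)              # 2001:db8:0:ff00::1
print(side_b)              # 2001:db8:0:ff00::2
print(link.num_addresses)  # 16
```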

Steinar Haug, Nethelp consulting, sth...@nethelp.no



Re: Experience with Open Source load balancers?

2011-05-18 Thread matthew zeier
  
 Recommend: F5 and Citrix NetScaler. If you are looking to combine your L7 FW 
 into your LB then you might lean towards NetScaler. If you are looking at 
 separating those duties you can look at F5. iRules (F5) are the bomb.

Except that under (Mozilla) load, Netscaler fell apart.  F5, at the time, could 
not handle the logging rate I required.  Mozilla load is typically defined as 
high connection rate, low traffic per connection and mostly all SSL.  

During the Firefox 4 release, we peaked globally at 12Gbps, a significant 
portion of which was pushed out of three Zeus clusters with L7 rules and some 
non-trivial traffic script rules and a heck of a lot of content caching.  Of 
all the systems seeing increased usage during the Fx4 release, this wasn't 
where my worries were :)

A slightly older post,

http://blog.mozilla.com/mrz/2008/12/04/load-balancer-performance-issues-fxfeedsmozillaorg-versioncheck/




Re: Yahoo and IPv6

2011-05-18 Thread Jeroen van Aart

Steve Clark wrote:
This is all very confusing to me. How are meaningful names going to be 
assigned automatically? Right now I see something like 
ool-6038bdcc.static.optonline.net for one of our servers; how does this 
mean anything to anyone else?


Does http://وزارة-الأتصالات.مصر/ mean more to you?

Or http://xn--4gbrim.xnymcbaaajlc6dj7bxne2c.xn--wgbh1c which is what 
it translates to in your browser.


Just saying... ;-)

--
http://goldmark.org/jeff/stupid-disclaimers/
http://linuxmafia.com/~rick/faq/plural-of-virus.html



Re: Experience with Open Source load balancers?

2011-05-18 Thread Andreas Echavez
We're using both an F5 BigIP as well as Nginx (open source software) in a
production environment.

They both have their merits, but when we recently came under some advanced
DDoSes (Slowloris, slow POST, and more), we couldn't process certain types
of layer 7 inspection/modification because it was too heavy for the F5 to
handle. Nginx was more cost-effective because we could scale laterally with
cheap commodity hardware.

This isn't a knock on the BigIP though; it's a much better piece of
equipment, has commercial support, and a fantastic web interface. With Nginx
you might find yourself compiling modules in by hand and writing config
files.

Ultimately, the open source solution is going to stand the test of time
better. It all depends on who's paying the bills, and what your time is
worth. Nginx was specifically worth the effort for us because we had unique
traffic demands that change too quickly for a commercial solution.

Thanks,
Andreas


On Mon, May 16, 2011 at 4:15 PM, Welch, Bryan bryan.we...@arrisi.comwrote:

 Greetings all.

 I've been tasked with comparing the use of open source load balancing
 software against commercially available off the shelf hardware such as F5,
 which is what we currently use.  We use the load balancers for traditional
 load balancing, full proxy for http/ssl traffic, ssl termination and
 certificate management, ssl and http header manipulation, nat, high
 availability of the physical hardware and stateful failover of the tcp
 sessions.  These units will be placed at the customer prem supporting our
 applications and services and we'll need to support them accordingly.

 Now my knee jerk reaction to this is that it's a really bad idea.  It is
 the heart and soul of our data center network after all.  However, once I
 started to think about it I realized that I hadn't had any real experience
 with this solution beyond tinkering with it at home and reading about it in
 years past.

 Can anyone offer any operational insight and real world experiences with
 these solutions?

 TIA, replies off list are welcomed.


 Regards,

 Bryan




Re: Yahoo and IPv6

2011-05-18 Thread Jeroen van Aart

Paul Vixie wrote:

time in Nicaragua he said that he has a lot of days like this and he'd
like more work to be possible when only local connectivity was available.

Compelling stuff.  Pity there's no global market for localized services
or we'd already have it.  Nevertheless this must and will get fixed, and
we should be the generation who does it.


I have found that the general theme is to move services that were 
traditionally available inside an office network (source control, email, 
ticketing/bug tracking systems, storing documents, corporate wikis 
etc.) to an external place, perhaps even outsourced to one of the 
virtual server or software as a service providers.


I am not a particular fan of that trend, but I can see the pros and cons 
of doing it. It doesn't look like that's going to stop any time soon, 
let alone be (partially) reversed.


Regards,
Jeroen

--
http://goldmark.org/jeff/stupid-disclaimers/
http://linuxmafia.com/~rick/faq/plural-of-virus.html



Re: Netflix Is Eating Up More Of North America's Bandwidth Than Any Other Company

2011-05-18 Thread Michael Holstein

 http://e.businessinsider.com/public/184962


Somebody should invent a way to stream groups of shows simultaneously
and just arrange for people to watch the desired stream at a particular
time. Heck, maybe even do it wireless.

problem solved, right?

Cheers,

Michael Holstein
Cleveland State University




Re: Netflix Is Eating Up More Of North America's Bandwidth Than Any Other Company

2011-05-18 Thread Landon Stewart
On Wed, May 18, 2011 at 12:46 PM, Michael Holstein 
michael.holst...@csuohio.edu wrote:


  http://e.businessinsider.com/public/184962
 

 Somebody should invent a a way to stream groups of shows simultaneously
 and just arrange for people to watch the desired stream at a particular
 time. Heck, maybe even do it wireless.

 problem solved, right?


There was a lengthy discussion about that on NANOG a week or so ago.  I
don't claim to understand all facets of multicast, but it could be a sort of
way to operate TV-station-style scheduled programming for streaming media.
There's no way to pause, rewind, or otherwise seek multicast media, though.
It would be going backwards in terms of what consumers want these days.

http://en.wikipedia.org/wiki/Multicast
http://en.wikipedia.org/wiki/Mbone

It seems to me that every provider these days is using a year 2K business
model with 2011 bandwidth requirements and then complaining that consumers
are transferring too much data.

-- 
Landon Stewart lstew...@superb.net
SuperbHosting.Net by Superb Internet Corp.
Toll Free (US/Canada): 888-354-6128 x 4199
Direct: 206-438-5879
Web hosting and more Ahead of the Rest: http://www.superbhosting.net


RE: Netflix Is Eating Up More Of North America's Bandwidth Than Any Other Company

2011-05-18 Thread Holmes,David A
I think this shows the need for an Internet-wide multicast implementation. 
I can recall working on a product that delivered satellite multicast 
streams (with each multicast group corresponding to an individual TV station) 
to telco COs. This enabled the telco to implement multicast at the edge of 
their networks, where user broadband clients would issue multicast joins only 
as far as the CO. If I recall correctly, this was implemented with the old 
Cincinnati Bell telco. I admit there are a lot of COs and cable head-ends, 
though, for this solution to scale.

-Original Message-
From: Michael Holstein [mailto:michael.holst...@csuohio.edu]
Sent: Wednesday, May 18, 2011 12:46 PM
To: Roy
Cc: nanog
Subject: Re: Netflix Is Eating Up More Of North America's Bandwidth Than Any 
Other Company


 http://e.businessinsider.com/public/184962


Somebody should invent a a way to stream groups of shows simultaneously
and just arrange for people to watch the desired stream at a particular
time. Heck, maybe even do it wireless.

problem solved, right?

Cheers,

Michael Holstein
Cleveland State University






Re: Netflix Is Eating Up More Of North America's Bandwidth Than Any Other Company

2011-05-18 Thread Christopher Morrow
On Wed, May 18, 2011 at 3:56 PM, Landon Stewart lstew...@superb.net wrote:
 There was a lengthy discussion about that on NANOG a week or so ago.  I
 don't claim to understand all facets of multicast but it could be a sort of
 way to operate tv station type scheduled programming for streaming media.
 There's no way to pause, rewind or otherwise seek multicasted media though.
 It would be going backwards in terms of what consumers want these days.


why not permit your users to subscribe to shows/instances, stream them
on-demand for viewing later... and leave truly live content
(news/sports/etc) as is, with only the ability to pause/rewind?

how is this different from broadcast tv today though?



Re: Netflix Is Eating Up More Of North America's Bandwidth Than Any Other Company

2011-05-18 Thread Matt Ryanczak

On 05/18/2011 04:01 PM, Holmes,David A wrote:

I think this shows the need for an Internet-wide multicast implementation. 
Although I can recall working on a product that delivered satellite multicast 
streams (with each multicast group corresponding to individual TV stations) to 
telco CO's. This enabled the telco to implement multicast at the edge of their 
networks, where user broadband clients would issue multicast joins only as far 
as the CO. If I recall this was implemented with the old Cincinnati Bell telco. 
I admit there are a lot of CO's and cable head-ends though for this solution to 
scale.


I don't see how multicast necessarily solves the Netflix on-demand video 
problem. You have millions of users streaming different content at 
different times. Multicast is great for the World Cup, but how does it 
solve the video-on-demand problem?




Re: Netflix Is Eating Up More Of North America's Bandwidth Than Any Other Company

2011-05-18 Thread Joe Abley

On 2011-05-18, at 16:01, Holmes,David A wrote:

 I think this shows the need for an Internet-wide multicast implementation.

Or perhaps even some kind of new technology that is independent of the 
Internet! Imagine such futuristic ideas as solar-powered spacecraft in orbit 
around the planet bouncing content back across massive areas so that everybody 
can pick them up at once.

Crazy stuff.


Joe




Had an idea - looking for a math buff to tell me if it's possible with today's technology.

2011-05-18 Thread Landon Stewart
Let's say you had a file that was 1,000,000,000 characters, consisting of
8,000,000,000 bits.  What if, instead of transferring that file through the
interwebs, you transmitted a mathematical equation to tell a computer on the
other end how to *construct* that file?  First you'd feed the file into a
cruncher of some type to reduce the pattern of 8,000,000,000 bits into an
equation somehow.  Sure, this would take time, I realize that.  The equation
would then be transmitted to the other computer, where it would use its
mad-math-skillz to *figure out the answer*, which would theoretically be the
same pattern of bits.  Thus the same file would emerge on the other end.

The real question here is how long would it take for a regular computer to
do this kind of math?

Just a weird idea I had.  If it's a good idea then please consider this
intellectual property.  LOL


-- 
Landon Stewart lstew...@superb.net
SuperbHosting.Net by Superb Internet Corp.
Toll Free (US/Canada): 888-354-6128 x 4199
Direct: 206-438-5879
Web hosting and more Ahead of the Rest: http://www.superbhosting.net


Re: Netflix Is Eating Up More Of North America's Bandwidth Than Any Other Company

2011-05-18 Thread Dorn Hetzel


 I don't see how multicast necessarily solves the netflix on-demand video
 problem. you have millions of users streaming different content at different
 times. multicast is great for the world cup but how does it solve the video
 on demand problem?

 I suppose in theory if you have tivo-like devices at the endpoints then
they can capture popular programs at the time of multicast for later
viewing.  Whether this is better than capturing the same programs over a
broadcast medium for later playback, I don't know...


Re: Had an idea - looking for a math buff to tell me if it's possible with today's technology.

2011-05-18 Thread John Adams
We call that Compression.

-j


On Wed, May 18, 2011 at 1:07 PM, Landon Stewart lstew...@superb.net wrote:

 Lets say you had a file that was 1,000,000,000 characters consisting of
 8,000,000,000bits.  What if instead of transferring that file through the
 interwebs you transmitted a mathematical equation to tell a computer on the
 other end how to *construct* that file.  First you'd feed the file into a
 cruncher of some type to reduce the pattern of 8,000,000,000 bits into an
 equation somehow.  Sure this would take time, I realize that.  The equation
 would then be transmitted to the other computer where it would use its
 mad-math-skillz to *figure out the answer* which would theoretically be the
 same pattern of bits.  Thus the same file would emerge on the other end.

 The real question here is how long would it take for a regular computer to
 do this kind of math?

 Just a weird idea I had.  If it's a good idea then please consider this
 intellectual property.  LOL


 --
 Landon Stewart lstew...@superb.net
 SuperbHosting.Net by Superb Internet Corp.
 Toll Free (US/Canada): 888-354-6128 x 4199
 Direct: 206-438-5879
 Web hosting and more Ahead of the Rest: http://www.superbhosting.net



Re: Netflix Is Eating Up More Of North America's Bandwidth Than Any Other Company

2011-05-18 Thread Cameron Byrne
On Wed, May 18, 2011 at 1:02 PM, Christopher Morrow
morrowc.li...@gmail.com wrote:
 On Wed, May 18, 2011 at 3:56 PM, Landon Stewart lstew...@superb.net wrote:
 There was a lengthy discussion about that on NANOG a week or so ago.  I
 don't claim to understand all facets of multicast but it could be a sort of
 way to operate tv station type scheduled programming for streaming media.
 There's no way to pause, rewind or otherwise seek multicasted media though.
 It would be going backwards in terms of what consumers want these days.


 why not permit your users to subscribe to shows/instances, stream them
 on-demand for viewing later... and leave truly live content
 (news/sports/etc) as is, with only the ability to pause/rewind?

 how is this different from broadcast tv today though?


It's not.  These people need a pair of rabbit ears and a DVR.

CB



Re: Had an idea - looking for a math buff to tell me if it's possible with today's technology.

2011-05-18 Thread Jack Carrozzo
That's basically what compression is. Except rarely (read: never) does your
Real Data (tm) fit just one equation, hence the various compression
algorithms that look for patterns etc etc.

-J

On Wed, May 18, 2011 at 4:07 PM, Landon Stewart lstew...@superb.net wrote:

 Lets say you had a file that was 1,000,000,000 characters consisting of
 8,000,000,000bits.  What if instead of transferring that file through the
 interwebs you transmitted a mathematical equation to tell a computer on the
 other end how to *construct* that file.  First you'd feed the file into a
 cruncher of some type to reduce the pattern of 8,000,000,000 bits into an
 equation somehow.  Sure this would take time, I realize that.  The equation
 would then be transmitted to the other computer where it would use its
 mad-math-skillz to *figure out the answer* which would theoretically be the
 same pattern of bits.  Thus the same file would emerge on the other end.

 The real question here is how long would it take for a regular computer to
 do this kind of math?

 Just a weird idea I had.  If it's a good idea then please consider this
 intellectual property.  LOL


 --
 Landon Stewart lstew...@superb.net
 SuperbHosting.Net by Superb Internet Corp.
 Toll Free (US/Canada): 888-354-6128 x 4199
 Direct: 206-438-5879
 Web hosting and more Ahead of the Rest: http://www.superbhosting.net



Re: Netflix Is Eating Up More Of North America's Bandwidth Than Any Other Company

2011-05-18 Thread Randy Bush
 why not permit your users to subscribe to shows/instances, stream them
 on-demand for viewing later... and leave truly live content
 (news/sports/etc) as is, with only the ability to pause/rewind?
 
 how is this different from broadcast tv today though?

for some of us, the thing that is wonderful about netflix is the long
tail.  my tastes are a sigma or three out.

randy



Re: Netflix Is Eating Up More Of North America's Bandwidth Than Any Other Company

2011-05-18 Thread Joe Abley

On 2011-05-18, at 16:09, Dorn Hetzel wrote:

 they can capture popular programs at the time of multicast for later
 viewing.  Whether this is better than capturing the same programs over a
 broadcast medium for later playback, I don't know...

... or a peer to peer medium, which is (as I understand it) how people who 
really want this to happen today manage to do it. The problem is not the 
distribution so much as the need to shoe-horn this network efficiency into the 
content business model.

I heard similar stories about the early days of distributing digital copies of 
movies to theatres for presentation -- the technology was trivial, even with 
fairly low-power commodity CPUs, until you insist that the content be encrypted 
so that nobody can walk off with it without paying.


Joe




Re: Had an idea - looking for a math buff to tell me if it's possible with today's technology.

2011-05-18 Thread Michael Holstein

 Just a weird idea I had.  If it's a good idea then please consider this
 intellectual property.
   

It's easy .. the zeros are fatter than the ones.

http://dilbert.com/strips/comic/2004-12-09/

~Mike.



Re: Netflix Is Eating Up More Of North America's Bandwidth Than Any Other Company

2011-05-18 Thread Joel Jaeggli

On May 18, 2011, at 1:01 PM, Holmes,David A wrote:

 I think this shows the need for an Internet-wide multicast implementation. 

there's a pretty longtailed distribution on what people might choose to stream. 
static content is amenable to distribution via cdn (which is frankly a 
degenerate form of multicast), but lets face it, how many people watched 
Charles Mingus: Triumph of the Underdog in east palo alto last night at 10pm. 
 

 Although I can recall working on a product that delivered satellite multicast 
 streams (with each multicast group corresponding to individual TV stations) 
 to telco CO's. This enabled the telco to implement multicast at the edge of 
 their networks, where user broadband clients would issue multicast joins only 
 as far as the CO. If I recall this was implemented with the old Cincinnati 
 Bell telco. I admit there are a lot of CO's and cable head-ends though for 
 this solution to scale.
 
 -Original Message-
 From: Michael Holstein [mailto:michael.holst...@csuohio.edu]
 Sent: Wednesday, May 18, 2011 12:46 PM
 To: Roy
 Cc: nanog
 Subject: Re: Netflix Is Eating Up More Of North America's Bandwidth Than Any 
 Other Company
 
 
 http://e.businessinsider.com/public/184962
 
 
 Somebody should invent a way to stream groups of shows simultaneously
 and just arrange for people to watch the desired stream at a particular
 time. Heck, maybe even do it wireless.
 
 problem solved, right?
 
 Cheers,
 
 Michael Holstein
 Cleveland State University
 
 
 
 This communication, together with any attachments or embedded links, is for 
 the sole use of the intended recipient(s) and may contain information that is 
 confidential or legally protected. If you are not the intended recipient, you 
 are hereby notified that any review, disclosure, copying, dissemination, 
 distribution or use of this communication is strictly prohibited. If you have 
 received this communication in error, please notify the sender immediately by 
 return e-mail message and delete the original and all copies of the 
 communication, along with any attachments or embedded links, from your system.
 
 




Re: Had an idea - looking for a math buff to tell me if it's possible with today's technology.

2011-05-18 Thread Dorn Hetzel
On Wed, May 18, 2011 at 4:07 PM, Landon Stewart lstew...@superb.net wrote:

 Lets say you had a file that was 1,000,000,000 characters consisting of
 8,000,000,000bits.  What if instead of transferring that file through the
 interwebs you transmitted a mathematical equation to tell a computer on the
 other end how to *construct* that file.  First you'd feed the file into a
 cruncher of some type to reduce the pattern of 8,000,000,000 bits into an
 equation somehow.  Sure this would take time, I realize that.  The equation
 would then be transmitted to the other computer where it would use its
 mad-math-skillz to *figure out the answer* which would theoretically be the
 same pattern of bits.  Thus the same file would emerge on the other end.

 The real question here is how long would it take for a regular computer to
 do this kind of math?

 The real question is whether this is possible.  And the short answer is No,
at least not in general.

Now if your file has patterns that make it compressible, you can make it
smaller, but not
all files can be compressed this way, at least not in a way that makes them
smaller.

To understand why, consider the case of a file of one byte, or 8 bits.
 There are 256 possible
files of this size: 00000000, 00000001, 00000010, ..., 11111101, 11111110,
11111111.

Since each code we send must generate a unique file (or what's the point, we
need 256 different codes
to represent each possible file), but the shortest general way to write 256
different codes is
still 8 bits long.  Now, we can use coding schemes and say that the one-bit
value 1 represents 00000000
because that file happens a lot.  Then we could use 01 to represent
something else, but
we can't use 1 at the beginning again because we couldn't tell that from the
file named by 1.

Bottom line, for some codes to be shorter than the file they represent,
others must be longer...

So if files have a lot of repetition, you can get a win, but for random
data, not so much :(


RE: Had an idea - looking for a math buff to tell me if it's possible with today's technology.

2011-05-18 Thread Stefan Fouant
 -Original Message-
 From: Landon Stewart [mailto:lstew...@superb.net]
 Sent: Wednesday, May 18, 2011 4:08 PM
 To: nanog
 Subject: Had an idea - looking for a math buff to tell me if it's
 possible with today's technology.
 
 Lets say you had a file that was 1,000,000,000 characters consisting of
 8,000,000,000bits.  What if instead of transferring that file through
 the
 interwebs you transmitted a mathematical equation to tell a computer on
 the
 other end how to *construct* that file.  First you'd feed the file into
 a
 cruncher of some type to reduce the pattern of 8,000,000,000 bits into
 an
 equation somehow.  Sure this would take time, I realize that.  The
 equation
 would then be transmitted to the other computer where it would use its
 mad-math-skillz to *figure out the answer* which would theoretically be
 the
 same pattern of bits.  Thus the same file would emerge on the other
 end.

Not exactly the same thing, but application acceleration of this sort has
been around for some time - 

http://www.riverbed.com/us/
http://www.juniper.net/us/en/products-services/application-acceleration/wxc-
series/
http://www.cisco.com/en/US/products/ps5680/Products_Sub_Category_Home.html

Stefan Fouant





Re: Netflix Is Eating Up More Of North America's Bandwidth Than Any Other Company

2011-05-18 Thread Jay Ashworth
- Original Message -
 From: Christopher Morrow morrowc.li...@gmail.com

 why not permit your users to subscribe to shows/instances, stream them
 on-demand for viewing later... and leave truly live content
 (news/sports/etc) as is, with only the ability to pause/rewind?
 
 how is this different from broadcast tv today though?

It's on the Internet.  So it's cooler.

Cheers,
-- jra
-- 
Jay R. Ashworth  Baylink   j...@baylink.com
Designer The Things I Think   RFC 2100
Ashworth  Associates http://baylink.pitas.com 2000 Land Rover DII
St Petersburg FL USA  http://photo.imageinc.us +1 727 647 1274



Re: Had an idea - looking for a math buff to tell me if it's possible with today's technology.

2011-05-18 Thread Robert Bonomi


Wildly off-topic for the NANOG mailing-list, as it has -zero- relevance to
'network operations'

 Date: Wed, 18 May 2011 13:07:32 -0700
 Subject: Had an idea - looking for a math buff to tell me if it's possible
   with today's technology.
 From: Landon Stewart lstew...@superb.net
 To: nanog nanog@nanog.org

 Lets say you had a file that was 1,000,000,000 characters consisting of
 8,000,000,000bits.  What if instead of transferring that file through the
 interwebs you transmitted a mathematical equation to tell a computer on the
 other end how to *construct* that file.  First you'd feed the file into a
 cruncher of some type to reduce the pattern of 8,000,000,000 bits into an
 equation somehow.  Sure this would take time, I realize that.  The equation
 would then be transmitted to the other computer where it would use its
 mad-math-skillz to *figure out the answer* which would theoretically be the
 same pattern of bits.  Thus the same file would emerge on the other end.

 The real question here is how long would it take for a regular computer to
 do this kind of math?

I have, on my computer, an encoder/decoder that does _exactly_ that.

Both the encoder and decoder are _amazingly_ fast -- as fast as a file copy,
in fact.

the average size of the transmitted files, across all possible input files
is exactly 100% of the size of the input files.  (one *cannot* do better
than that, across all possible inputs -- see the 'counting' problem, in
data-compression theory)

 Just a weird idea I had.  If it's a good idea then please consider this
 intellectual property.  LOL

'Weird' is one word for it.  You might want to read up on the subject
of 'data compression', to get an idea of how things work.

See also polynomial curve-fitting, for the real-world limits of your
theory.






Re: Netflix Is Eating Up More Of North America's Bandwidth Than Any Other Company

2011-05-18 Thread Jeroen van Aart

Joe Abley wrote:

Or perhaps even some kind of new technology that is independent of the 
Internet! Imagine such futuristic ideas as solar-powered spacecraft in orbit 
around the planet bouncing content back across massive areas so that everybody 
can pick them up at once.

Crazy stuff.


You mean like a sputnik?

Crazy indeed...

--
http://goldmark.org/jeff/stupid-disclaimers/
http://linuxmafia.com/~rick/faq/plural-of-virus.html



Re: Netflix Is Eating Up More Of North America's Bandwidth Than Any Other Company

2011-05-18 Thread Christopher Morrow
On Wed, May 18, 2011 at 4:15 PM, Randy Bush ra...@psg.com wrote:
 why not permit your users to subscribe to shows/instances, stream them
 on-demand for viewing later... and leave truly live content
 (news/sports/etc) as is, with only the ability to pause/rewind?

 how is this different from broadcast tv today though?

 for some of us, the thing that is wonderful about netflix is the long
 tail.  my tastes are a sigma or three out.

usenet is over -

in all seriousness, if the content was available and you could request
it be streamed to you 'sometime tomorrow' or 'sometime before Friday',
'stream'.  I suspect that the vast majority of content is in the 1st
'stream'.  I suspect that the vast majority of content is in the 1st
sigma... and again, servicing everyone with a limited number of
multicast'd streams seems like it would be nice. even falling back to
unicast for some set of mathematically/cost-conscious examples seems
like a win here.



Re: Netflix Is Eating Up More Of North America's Bandwidth Than Any Other Company

2011-05-18 Thread Christopher Morrow
On Wed, May 18, 2011 at 4:18 PM, Joel Jaeggli joe...@bogus.com wrote:

 On May 18, 2011, at 1:01 PM, Holmes,David A wrote:

 I think this shows the need for an Internet-wide multicast implementation.

 there's a pretty longtailed distribution on what people might choose to 
 stream. static content is amenable to distribution via cdn (which is frankly 
 a degenerate form of multicast), but lets face it, how many people watched 
 Charles Mingus: Triumph of the Underdog in east palo alto last night at 
 10pm.

slightly wrong question: How many people last 'period of time' chose,
early enough, to want to watch CMTotU last night at 10.

if the number is greater than X, multicast it with time to deliver
before 10pm pdt start time. If it's less, unicast...



Re: Netflix Is Eating Up More Of North America's Bandwidth Than Any Other Company

2011-05-18 Thread Dorn Hetzel
If we're really talking efficiency, the popular stuff should probably
stream out over the bird of your choice (directv, etc) because it's hard to
beat millions of dishes and dvr's and no cable plant.

Then what won't fit on the bird goes unicast IP from the nearest CDN.   Kind
of like the on demand over broadband on my satellite box.  Their selection
sucks, but the model is valid.

On Wed, May 18, 2011 at 4:18 PM, Joel Jaeggli joe...@bogus.com wrote:


 On May 18, 2011, at 1:01 PM, Holmes,David A wrote:

  I think this shows the need for an Internet-wide multicast
 implementation.

  there's a pretty longtailed distribution on what people might choose to
  stream. static content is amenable to distribution via cdn (which is
  frankly a degenerate form of multicast), but lets face it, how many people
  watched Charles Mingus: Triumph of the Underdog in east palo alto last
  night at 10pm.

  Although I can recall working on a product that delivered satellite
 multicast streams (with each multicast group corresponding to individual TV
 stations) to telco CO's. This enabled the telco to implement multicast at
 the edge of their networks, where user broadband clients would issue
 multicast joins only as far as the CO. If I recall this was implemented with
 the old Cincinnati Bell telco. I admit there are a lot of CO's and cable
 head-ends though for this solution to scale.
 
  -Original Message-
  From: Michael Holstein [mailto:michael.holst...@csuohio.edu]
  Sent: Wednesday, May 18, 2011 12:46 PM
  To: Roy
  Cc: nanog
  Subject: Re: Netflix Is Eating Up More Of North America's Bandwidth Than
 Any Other Company
 
 
  http://e.businessinsider.com/public/184962
 
 
   Somebody should invent a way to stream groups of shows simultaneously
  and just arrange for people to watch the desired stream at a particular
  time. Heck, maybe even do it wireless.
 
  problem solved, right?
 
  Cheers,
 
  Michael Holstein
  Cleveland State University
 
 
 
 
 





Re: Had an idea - looking for a math buff to tell me if it's possible with today's technology.

2011-05-18 Thread Steven Bellovin

On May 18, 2011, at 4:07 32PM, Landon Stewart wrote:

 Lets say you had a file that was 1,000,000,000 characters consisting of
 8,000,000,000bits.  What if instead of transferring that file through the
 interwebs you transmitted a mathematical equation to tell a computer on the
 other end how to *construct* that file.  First you'd feed the file into a
 cruncher of some type to reduce the pattern of 8,000,000,000 bits into an
 equation somehow.  Sure this would take time, I realize that.  The equation
 would then be transmitted to the other computer where it would use its
 mad-math-skillz to *figure out the answer* which would theoretically be the
 same pattern of bits.  Thus the same file would emerge on the other end.
 
 The real question here is how long would it take for a regular computer to
 do this kind of math?
 
 Just a weird idea I had.  If it's a good idea then please consider this
 intellectual property.  LOL

http://en.wikipedia.org/wiki/Kolmogorov_complexity

--Steve Bellovin, https://www.cs.columbia.edu/~smb








Re: Had an idea - looking for a math buff to tell me if it's possible with today's technology.

2011-05-18 Thread Christopher Morrow
On Wed, May 18, 2011 at 4:18 PM, Michael Holstein
michael.holst...@csuohio.edu wrote:

 Just a weird idea I had.  If it's a good idea then please consider this
 intellectual property.


 It's easy .. the zeros are fatter than the ones.

no no no.. it's simply, since the OP posited a math solution, md5.
ship the size of file + hash, compute file on the other side. All
files can be moved anywhere regardless of the size of the file in a
single packet.


The solution is left as an exercise for the reader.
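
Taking the joke literally: the "encoder" is trivial and constant-size, and the
"decoder" is a brute-force search over every possible file. A toy sketch
(restricted to absurdly small files so that it actually terminates; names and
structure here are illustrative only):

```python
import hashlib
from itertools import product

def encode(data: bytes) -> tuple[int, str]:
    # "Compress" any file to (length, md5) -- always the same tiny size.
    return len(data), hashlib.md5(data).hexdigest()

def decode(size: int, digest: str) -> bytes:
    # Enumerate every possible file of that length until the hash matches.
    # There are 256**size candidates, so this is hopeless beyond a few bytes.
    for candidate in product(range(256), repeat=size):
        data = bytes(candidate)
        if hashlib.md5(data).hexdigest() == digest:
            return data
    raise ValueError("no preimage found")

size, digest = encode(b"hi")
print(decode(size, digest))  # b'hi'
```

For a 2-byte file the search space is a mere 65,536 candidates; for a
1 GB file it is 256**1,000,000,000, which is where the "exercise for the
reader" ends, and where the hash-collision objection raised later in the
thread begins.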



Re: Had an idea - looking for a math buff to tell me if it's possible with today's technology.

2011-05-18 Thread Aria Stewart

On Wednesday, May 18, 2011 at 2:18 PM, Dorn Hetzel wrote: 
 On Wed, May 18, 2011 at 4:07 PM, Landon Stewart lstew...@superb.net wrote:
 
  Lets say you had a file that was 1,000,000,000 characters consisting of
  8,000,000,000bits. What if instead of transferring that file through the
  interwebs you transmitted a mathematical equation to tell a computer on the
  other end how to *construct* that file. First you'd feed the file into a
  cruncher of some type to reduce the pattern of 8,000,000,000 bits into an
  equation somehow. Sure this would take time, I realize that. The equation
  would then be transmitted to the other computer where it would use its
  mad-math-skillz to *figure out the answer* which would theoretically be the
  same pattern of bits. Thus the same file would emerge on the other end.
  
  The real question here is how long would it take for a regular computer to
  do this kind of math?
  
  The real question is whether this is possible. And the short answer is No,
 at least not in general.
Exactly: What you run up against is that you can reduce extraneous information, 
and compress redundant information, but if you actually have dense information, 
you're not gonna get any better.

So easy to compress a billion bytes of JSON or XML significantly; not so much a 
billion bytes of already tightly coded movie.


Aria Stewart




Re: Netflix Is Eating Up More Of North America's Bandwidth Than Any Other Company

2011-05-18 Thread Randy Bush
 for some of us, the thing that is wonderful about netflix is the long
 tail.  my tastes are a sigma or three out.
 in all seriousness, if the content was available and you could request
 it be streamed to you 'sometime tomorrow' or 'sometime before Friday',
 you and the other people like you could get serviced on a singular
 'stream'.

they do that now.  by a station wagon full of Hollerith cards.  well, how
about a dvd in the post?

randy



Re: Had an idea - looking for a math buff to tell me if it's possible with today's technology.

2011-05-18 Thread John Lee
The concept is called fractals where you can compress the image and send the
values and recreate the image. There was a body of work on the subject, I
would say in the mid to late eighties where two Georgia Tech professors
started a company doing it.

John (ISDN) Lee

On Wed, May 18, 2011 at 4:07 PM, Landon Stewart lstew...@superb.net wrote:

 Lets say you had a file that was 1,000,000,000 characters consisting of
 8,000,000,000bits.  What if instead of transferring that file through the
 interwebs you transmitted a mathematical equation to tell a computer on the
 other end how to *construct* that file.  First you'd feed the file into a
 cruncher of some type to reduce the pattern of 8,000,000,000 bits into an
 equation somehow.  Sure this would take time, I realize that.  The equation
 would then be transmitted to the other computer where it would use its
 mad-math-skillz to *figure out the answer* which would theoretically be the
 same pattern of bits.  Thus the same file would emerge on the other end.

 The real question here is how long would it take for a regular computer to
 do this kind of math?

 Just a weird idea I had.  If it's a good idea then please consider this
 intellectual property.  LOL


 --
 Landon Stewart lstew...@superb.net
 SuperbHosting.Net by Superb Internet Corp.
 Toll Free (US/Canada): 888-354-6128 x 4199
 Direct: 206-438-5879
 Web hosting and more Ahead of the Rest: http://www.superbhosting.net



RE: Had an idea - looking for a math buff to tell me if it's possiblewith today's technology.

2011-05-18 Thread George Bonser


 -Original Message-
 From: Landon Stewart [mailto:lstew...@superb.net]
 Sent: Wednesday, May 18, 2011 1:08 PM
 To: nanog
 Subject: Had an idea - looking for a math buff to tell me if it's
 possiblewith today's technology.
 
 Lets say you had a file that was 1,000,000,000 characters consisting
of
 8,000,000,000bits.  What if instead of transferring that file through
 the
 interwebs you transmitted a mathematical equation to tell a computer
on
 the
 other end how to *construct* that file.

Congratulations.  You have just invented compression.


 



RE: Had an idea - looking for a math buff to tell me if it's possible with today's technology.

2011-05-18 Thread Joe Loiacono
Stefan Fouant sfou...@shortestpathfirst.net wrote on 05/18/2011 
04:19:26 PM:

  Lets say you had a file that was 1,000,000,000 characters consisting 
of
 
 http://www.riverbed.com/us/
 
http://www.juniper.net/us/en/products-services/application-acceleration/wxc-

 series/
 
http://www.cisco.com/en/US/products/ps5680/Products_Sub_Category_Home.html

You also need to include Silver Peak.

http://www.silver-peak.com/

Saw a very interesting presentation on their techniques.

Joe


Re: Netflix Is Eating Up More Of North America's Bandwidth Than Any Other Company

2011-05-18 Thread Brielle Bruns

On 5/18/11 2:33 PM, Dorn Hetzel wrote:

If we're really talking efficiency, the popular stuff should probably
stream out over the bird of your choice (directv, etc) because it's hard to
beat millions of dishes and dvr's and no cable plant.

Then what won't fit on the bird goes unicast IP from the nearest CDN.   Kind
of like the on demand over broadband on my satellite box.  Their selection
sucks, but the model is valid.




If someone hadn't mentioned already, there used to be a usenet provider 
that delivered a full feed via Satellite.  Anything is feasible, just 
have to find people who actually want/need it and a provider that isn't 
blind to long term benefits.



--
Brielle Bruns
The Summit Open Source Development Group
http://www.sosdg.org/ http://www.ahbl.org



Re: Netflix Is Eating Up More Of North America's Bandwidth Than Any Other Company

2011-05-18 Thread Jay Ashworth
- Original Message -
 From: Joel Jaeggli joe...@bogus.com

 On May 18, 2011, at 1:01 PM, Holmes,David A wrote:
  I think this shows the need for an Internet-wide multicast
  implementation.
 
 there's a pretty longtailed distribution on what people might chose to
 stream. static content is ameniable to distribution via cdn (which is
 frankly a degenerate form of multicast), but lets face it, how many
 people watched Charles Mingus: Triumph of the Underdog in east palo
 alto last night at 10pm.

Of course.  But that's a strawman.  What percentage of available titles, by
*count*, accounts for even 50% of the streamed data, in bytes?  2%?  1?

Cheers,
-- jra

-- 
Jay R. Ashworth  Baylink   j...@baylink.com
Designer The Things I Think   RFC 2100
Ashworth  Associates http://baylink.pitas.com 2000 Land Rover DII
St Petersburg FL USA  http://photo.imageinc.us +1 727 647 1274



Re: Netflix Is Eating Up More Of North America's Bandwidth Than Any Other Company

2011-05-18 Thread Christopher Morrow
On Wed, May 18, 2011 at 4:27 PM, Jeroen van Aart jer...@mompl.net wrote:
 Joe Abley wrote:

 Or perhaps even some kind of new technology that is independent of the
 Internet! Imagine such futuristic ideas as solar-powered spacecraft in orbit
 around the planet bouncing content back across massive areas so that
 everybody can pick them up at once.

 Crazy stuff.

 You mean like a sputnik?

sputnik was VERY low bandwidth though... if you wanted to stream a
current movie, you'd likely have to have started when sputnik actually
launched to be sure you'd be able to watch it next year.



Re: Had an idea - looking for a math buff to tell me if it's possible with today's technology.

2011-05-18 Thread Leo Bicknell
In a message written on Wed, May 18, 2011 at 04:33:34PM -0400, Christopher 
Morrow wrote:
 no no no.. it's simply, since the OP posited a math solution, md5.
 ship the size of file + hash, compute file on the other side. All
 files can be moved anywhere regardless of the size of the file in a
 single packet.
 
 
 The solution is left as an exercise for the reader.

Bah, you should include the solution, it's so trivial.

Generate all possible files and then do an index lookup on the MD5.
It's a little CPU heavy, but darn simple to code.

You can even stop when you get a match, which turns out to be a HUGE
optimization. :)

-- 
   Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/


pgpqsN6hKjrXD.pgp
Description: PGP signature


Re: Had an idea - looking for a math buff to tell me if it's possible

2011-05-18 Thread Lyndon Nerenberg (VE6BBM/VE7TFX)
 no no no.. it's simply, since the OP posited a math solution, md5.
 ship the size of file + hash, compute file on the other side. All
 files can be moved anywhere regardless of the size of the file in a
 single packet.

MD5 compression is lossy in this context.  Given big enough files
you're going to start seeing hash collisions.
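
The collision point is just the pigeonhole principle: a fixed-size hash maps
arbitrarily many inputs onto a bounded set of outputs. A sketch using a
deliberately tiny 16-bit truncation of MD5 (assumed here only to make
collisions cheap to find; real MD5 has 128 bits, so its collisions merely take
longer to appear):

```python
import hashlib

def tiny_hash(data: bytes) -> str:
    # First 16 bits (4 hex chars) of MD5: only 65,536 possible outputs.
    return hashlib.md5(data).hexdigest()[:4]

seen: dict[str, bytes] = {}
for i in range(100_000):
    data = str(i).encode()
    h = tiny_hash(data)
    if h in seen and seen[h] != data:
        # Two distinct inputs, one hash value: the scheme cannot
        # tell which "file" the sender meant.
        print(f"collision: {seen[h]!r} and {data!r} both hash to {h}")
        break
    seen[h] = data
```

With 65,536 possible outputs, the birthday bound makes a collision likely
within the first few hundred inputs, and the pigeonhole principle guarantees
one before input 65,537.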




Re: user-relative names - was:[Re: Yahoo and IPv6]

2011-05-18 Thread Steven Bellovin

On May 17, 2011, at 10:30 13PM, Joel Jaeggli wrote:

 
 On May 17, 2011, at 6:09 PM, Scott Weeks wrote:
 
 --- joe...@bogus.com wrote:
 From: Joel Jaeggli joe...@bogus.com
 On May 17, 2011, at 4:30 PM, Scott Brim wrote:
 On May 17, 2011 6:26 PM, valdis.kletni...@vt.edu wrote:
 On Tue, 17 May 2011 15:04:19 PDT, Scott Weeks said:
 
 What about privacy concerns
 
 Privacy is dead.  Get used to it. -- Scott McNeely
 
 Forget that attitude, Valdis. Just because privacy is blown at one level
 doesn't mean you give it away at every other one. We establish the framework
 for recovering privacy and make progress step by step, wherever we can.
 Someday we'll get it all back under control.
 
 if you put something in the dns you do so because you want it to be discovered. 
 scoping the nameservers such that they only express certain resource 
 records to queriers in a particular scope is fairly straightforward.
 
 
 
 The article was not about DNS.  It was about Persistent Personal Names for 
 Globally Connected Mobile Devices where Users normally create personal 
 names by introducing devices locally, on a common WiFi network for example. 
 Once created, these names remain persistently bound to their targets as 
 devices move. Personal names are intended to supplement and not replace 
 global DNS names.  
 
 you mean like mac addresses? those have a tendency to follow you around in 
 ipv6...
 
This is why RFC 3041 (replaced by 4941) was written, 10+ years ago.  The problem
is that it's not enabled by default on many (possibly all) platforms, so I
have to have

# cat /etc/sysctl.conf
net.inet6.ip6.use_tempaddr=1

set on my Mac.
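
For comparison, the equivalent knob on Linux lives in a different sysctl tree (names per the kernel's IPv6 sysctl documentation; 2 means generate *and* prefer RFC 4941 temporary addresses):

```
# /etc/sysctl.conf on Linux
net.ipv6.conf.all.use_tempaddr=2
net.ipv6.conf.default.use_tempaddr=2
```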


--Steve Bellovin, https://www.cs.columbia.edu/~smb








Re: Had an idea - looking for a math buff to tell me if it's possible with today's technology.

2011-05-18 Thread Philip Dorr
On Wed, May 18, 2011 at 3:33 PM, Christopher Morrow
morrowc.li...@gmail.com wrote:
 On Wed, May 18, 2011 at 4:18 PM, Michael Holstein
 michael.holst...@csuohio.edu wrote:

 Just a weird idea I had.  If it's a good idea then please consider this
 intellectual property.


 It's easy .. the zeros are fatter than the ones.

 no no no.. it's simply, since the OP posited a math solution, md5.
 ship the size of file + hash, compute file on the other side. All
 files can be moved anywhere regardless of the size of the file in a
 single packet.


 The solution is left as an exercise for the reader.


You would need a lot of computing power to generate a file of any
decent size.  If you want to be evil then you could send just an MD5
hash and a SHA-512 hash (or some other pair of hashes that would not
collide at the same time except on the correct data).



Re: Had an idea - looking for a math buff to tell me if it's possible with today's technology.

2011-05-18 Thread Chris Owen

On May 18, 2011, at 4:03 PM, Leo Bicknell wrote:

 Bah, you should include the solution, it's so trivial.
 
 Generate all possible files and then do an index lookup on the MD5.
 It's a little CPU heavy, but darn simple to code.

Isn't this essentially what Dropbox has been doing in many cases?

Chris

--
-
Chris Owen - Garden City (620) 275-1900 -  Lottery (noun):
President  - Wichita (316) 858-3000 -A stupidity tax
Hubris Communications Inc  www.hubris.net
-





Re: Netflix Is Eating Up More Of North America's Bandwidth Than Any Other Company

2011-05-18 Thread Christopher Morrow
On Wed, May 18, 2011 at 4:53 PM, Brielle Bruns br...@2mbit.com wrote:
 On 5/18/11 2:33 PM, Dorn Hetzel wrote:

 If we're really talking efficiency, the popular stuff should probably
 stream out over the bird of your choice (directv, etc) because it's hard
 to
 beat millions of dishes and dvr's and no cable plant.

 Then what won't fit on the bird goes unicast IP from the nearest CDN.
 Kind
 of like the on demand over broadband on my satellite box.  Their
 selection
 sucks, but the model is valid.



 If someone hadn't mentioned already, there used to be a usenet provider that
 delivered a full feed via Satellite.  Anything is feasible, just have to

doug went out of that business, it wasn't (apparently) actually viable.



Re: Had an idea - looking for a math buff to tell me if it's possible with today's technology.

2011-05-18 Thread Christopher Morrow
On Wed, May 18, 2011 at 4:47 PM, Joe Loiacono jloia...@csc.com wrote:

 You also need to include Silver Peak.


only if you like random failures.



Re: Had an idea - looking for a math buff to tell me if it's possible

2011-05-18 Thread Dorn Hetzel


 MD5 compression is lossy in this context.  Given big enough files
 you're going to start seeing hash collisions.


 Actually, for an n-bit hash, I can guarantee to find collisions in the
universe of files just n+1 bits in size :)
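
Dorn's pigeonhole guarantee is easy to demonstrate with a toy hash (MD5 truncated to 8 bits, purely for illustration): feed 2^9 distinct 9-bit inputs into 2^8 possible outputs and a collision is forced.

```python
import hashlib

def tiny_hash(data: bytes, bits: int = 8) -> int:
    """MD5 truncated to `bits` bits -- a stand-in for any n-bit hash."""
    return int.from_bytes(hashlib.md5(data).digest(), "big") >> (128 - bits)

seen = {}
for value in range(2 ** 9):          # all 512 "files" of 9 bits
    data = value.to_bytes(2, "big")
    h = tiny_hash(data)
    if h in seen:
        print(f"guaranteed collision: {seen[h]} and {value} -> {h}")
        break
    seen[h] = value
```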


Re: Had an idea - looking for a math buff to tell me if it's possiblewith today's technology.

2011-05-18 Thread Landon Stewart
On Wed, May 18, 2011 at 1:44 PM, George Bonser gbon...@seven.com wrote:


 Congratulations.  You have just invented compression.


Woot.

-- 
Landon Stewart lstew...@superb.net
SuperbHosting.Net by Superb Internet Corp.
Toll Free (US/Canada): 888-354-6128 x 4199
Direct: 206-438-5879
Web hosting and more Ahead of the Rest: http://www.superbhosting.net


Re: Netflix Is Eating Up More Of North America's Bandwidth Than Any Other Company

2011-05-18 Thread Jeffrey S. Young


On 19/05/2011, at 6:01 AM, Holmes,David A dhol...@mwdh2o.com wrote:

 I think this shows the need for an Internet-wide multicast implementation. 
 Although I can recall working on a product that delivered satellite multicast 
 streams (with each multicast group corresponding to individual TV stations) 
 to telco CO's. This enabled the telco to implement multicast at the edge of 
 their networks, where user broadband clients would issue multicast joins only 
 as far as the CO. If I recall this was implemented with the old Cincinnati 
 Bell telco. I admit there are a lot of CO's and cable head-ends though for 
 this solution to scale.
 
 -Original Message-
 From: Michael Holstein [mailto:michael.holst...@csuohio.edu]
 Sent: Wednesday, May 18, 2011 12:46 PM
 To: Roy
 Cc: nanog
 Subject: Re: Netflix Is Eating Up More Of North America's Bandwidth Than Any 
 Other Company
 
 
 http://e.businessinsider.com/public/184962
 
 
 Somebody should invent a way to stream groups of shows simultaneously
 and just arrange for people to watch the desired stream at a particular
 time. Heck, maybe even do it wireless.
 
 problem solved, right?
 
 Cheers,
 
 Michael Holstein
 Cleveland State University
 
 
No matter where you go, there you are.
[--anon?]

or

Those who don't understand history are doomed to repeat it. - 
[heavily paraphrased -- Santayana]

jy
 



Re: Netflix Is Eating Up More Of North America's Bandwidth Than Any Other Company

2011-05-18 Thread Jon Lewis

On Wed, 18 May 2011, Brielle Bruns wrote:

If someone hadn't mentioned already, there used to be a usenet provider that 
delivered a full feed via Satellite.  Anything is feasible, just have to find 
people who actually want/need it and a provider that isn't blind to long term 
benefits.


Skycache/Cidera...until it didn't fit anymore in the bandwidth they had. 
IIRC, it was only around 28mbps.


Also, IIRC, that business was a sort of afterthought after their 
original plan (squid cache pre-population) didn't pan out.


Anyone want to buy some Skycache chopsticks?  I think I still have a few 
unopened sets from whichever late 90s ISPCon I went to in San Jose, 
CA...Skycache rented out some museum for a sushi party.


--
 Jon Lewis, MCP :)   |  I route
 Senior Network Engineer |  therefore you are
 Atlantic Net|
_ http://www.lewis.org/~jlewis/pgp for PGP public key_



RE: Netflix Is Eating Up More Of North America's Bandwidth Than Any Other Company

2011-05-18 Thread Paul Stewart
There was also Planet Connect years ago that delivered full Usenet (128K
worth) along with all my Fidonet BBS updates too .. I think I just dated
myself ;)

We still have an old Cidera system on a rooftop that nobody has taken down
yet ...

Paul


-Original Message-
From: Jon Lewis [mailto:jle...@lewis.org] 
Sent: May-18-11 6:01 PM
To: Brielle Bruns
Cc: nanog@nanog.org
Subject: Re: Netflix Is Eating Up More Of North America's Bandwidth Than Any
Other Company

On Wed, 18 May 2011, Brielle Bruns wrote:

 If someone hadn't mentioned already, there used to be a usenet provider
that 
 delivered a full feed via Satellite.  Anything is feasible, just have to
find 
 people who actually want/need it and a provider that isn't blind to long
term 
 benefits.

Skycache/Cidera...until it didn't fit anymore in the bandwidth they had. 
IIRC, it was only around 28mbps.

Also, IIRC, that business was a sort of afterthought after their 
original plan (squid cache pre-population) didn't pan out.

Anyone want to buy some Skycache chopsticks?  I think I still have a few 
unopened sets from whichever late 90s ISPCon I went to in San Jose, 
CA...Skycache rented out some museum for a sushi party.

--
  Jon Lewis, MCP :)   |  I route
  Senior Network Engineer |  therefore you are
  Atlantic Net|
_ http://www.lewis.org/~jlewis/pgp for PGP public key_




Re: Netflix Is Eating Up More Of North America's Bandwidth Than Any Other Company

2011-05-18 Thread Jay Ashworth
- Original Message -
 From: Jeffrey S. Young yo...@jsyoung.net

  Somebody should invent a way to stream groups of shows simultaneously
  and just arrange for people to watch the desired stream at a particular
  time. Heck, maybe even do it wireless.
 
  problem solved, right?
 
 Those who don't understand history are doomed to repeat it. -
 [heavily paraphrased -- Santayana]

Those who do not understand broadcasting are doomed to reinvent it.  
Poorly.  --after Henry Spencer.

Cheers,
-- jra
-- 
Jay R. Ashworth  Baylink   j...@baylink.com
Designer The Things I Think   RFC 2100
Ashworth  Associates http://baylink.pitas.com 2000 Land Rover DII
St Petersburg FL USA  http://photo.imageinc.us +1 727 647 1274



Re: Had an idea - looking for a math buff to tell me if it's possible with today's technology.

2011-05-18 Thread Heath Jones
I wonder if this is possible:

- Take a hash of the original file. Keep a counter.
- Generate data in some sequential method on sender side (for example simply
starting at 0 and iterating until you generate the same as the original
data)
- Each time you iterate, take the hash of the generated data. If it matches
the hash of the original file, increment counter.
- Send the hash and the counter value to recipient.
- Recipient performs same sequential generation method, stopping when
counter reached.

Any thoughts?

Heath
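
A sketch of exactly that scheme (the enumeration order and names below are mine, not Heath's): it does round-trip correctly, and it also shows why it is CPU-bound -- both sides hash every string up to the payload itself.

```python
import hashlib
from itertools import count

def nth_bytestring(n: int) -> bytes:
    """All byte strings in shortlex order: b'', b'\\x00', ..., b'\\xff',
    b'\\x00\\x00', ..."""
    length = 0
    while n >= 256 ** length:
        n -= 256 ** length
        length += 1
    return n.to_bytes(length, "big")

def send(data: bytes) -> tuple[str, int]:
    """Sender: MD5 of the data, plus how many *earlier* strings in the
    enumeration happen to share that MD5 (the proposed counter)."""
    target = hashlib.md5(data).hexdigest()
    counter = 0
    for n in count():
        candidate = nth_bytestring(n)
        if hashlib.md5(candidate).hexdigest() == target:
            if candidate == data:
                return target, counter
            counter += 1

def receive(target: str, counter: int) -> bytes:
    """Recipient: same enumeration, skipping `counter` earlier matches."""
    for n in count():
        candidate = nth_bytestring(n)
        if hashlib.md5(candidate).hexdigest() == target:
            if counter == 0:
                return candidate
            counter -= 1

# Only feasible for tiny payloads: ~256**len(data) hashes per side.
print(receive(*send(b"hi")))  # b'hi'
```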


On 18 May 2011 21:07, Landon Stewart lstew...@superb.net wrote:

 Lets say you had a file that was 1,000,000,000 characters consisting of
 8,000,000,000bits.  What if instead of transferring that file through the
 interwebs you transmitted a mathematical equation to tell a computer on the
 other end how to *construct* that file.  First you'd feed the file into a
 cruncher of some type to reduce the pattern of 8,000,000,000 bits into an
 equation somehow.  Sure this would take time, I realize that.  The equation
 would then be transmitted to the other computer where it would use its
 mad-math-skillz to *figure out the answer* which would theoretically be the
 same pattern of bits.  Thus the same file would emerge on the other end.

 The real question here is how long would it take for a regular computer to
 do this kind of math?

 Just a weird idea I had.  If it's a good idea then please consider this
 intellectual property.  LOL


 --
 Landon Stewart lstew...@superb.net
 SuperbHosting.Net by Superb Internet Corp.
 Toll Free (US/Canada): 888-354-6128 x 4199
 Direct: 206-438-5879
 Web hosting and more Ahead of the Rest: http://www.superbhosting.net



Re: Netflix Is Eating Up More Of North America's Bandwidth Than Any Other Company

2011-05-18 Thread JC Dill

 On 18/05/11 1:13 PM, Cameron Byrne wrote:


It's not.  These people need a pair of rabbit ears and a DVR.


Roughly 90% of the content I'm interested in watching is not available 
over the air.  E.g. Comedy Central, CNN, Discovery, Showtime/HBO, etc.


jc




Re: Netflix Is Eating Up More Of North America's Bandwidth Than Any Other Company

2011-05-18 Thread Dorn Hetzel
On Wed, May 18, 2011 at 7:35 PM, JC Dill jcdill.li...@gmail.com wrote:

  On 18/05/11 1:13 PM, Cameron Byrne wrote:


 It's not.  These people need a pair of rabbit ears and a DVR.


 Roughly 90% of the content I'm interested in watching is not available over
 the air.  E.g. Comedy Central, CNN, Discovery, Showtime/HBO, etc.

 jc


 Sure, but I'm guessing that something like 80% of the content that 80%
of people watch *is* available on some satellite/cable channel.

IP is perfect for the long tail, and yes, some of us are mostly consumers of
the tail :), but still, there is a win to be had on the front end of the
beast...


Re: Had an idea - looking for a math buff to tell me if it's possible with today's technology.

2011-05-18 Thread Valdis . Kletnieks
On Thu, 19 May 2011 00:26:26 BST, Heath Jones said:
 I wonder if this is possible:
 
 - Take a hash of the original file. Keep a counter.
 - Generate data in some sequential method on sender side (for example simply
 starting at 0 and iterating until you generate the same as the original
 data)
 - Each time you iterate, take the hash of the generated data. If it matches
 the hash of the original file, increment counter.
 - Send the hash and the counter value to recipient.
 - Recipient performs same sequential generation method, stopping when
 counter reached.

MD5 is a 128 bit hash.

2^128 is 340,282,366,920,938,463,463,374,607,431,768,211,456 - you're welcome
to iterate that many times to find a duplicate. You may get lucky and get a hit
in the first trillion or so attempts - but you may get unlucky and not get a
hit until the *last* few trillion attempts. On average you'll have to iterate
about half that huge number before you get a hit.

And it's lossy - if you hash all the possible 4K blocks with MD5, you'll find
that each of those 2^128 hashes has been hit about 2^32640 times - and no
indication in the hash of *which* of the 256 colliding 4K blocks you have on
this iteration.  (The only reason that companies can do block-level
de-duplication by saving a hash as an index to one copy shared by all blocks
with the same hash value is because you have a *very small* fraction of the
possibilities covered, so if you saved a 4K block of data from somebody's
system32 folder under a given MD5 hash, it's *far* more likely that another
block with that same hash is from another copy of another identical system32
folder, than it is an actual accidental collision.)

Protip:  A good hash function is by definition one-way - given the data, it's
easy to generate the hash - but reversing it to find the pre-image (the data
that *generated* the hash) is massively difficult.
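
The block-level de-duplication described in the parenthetical can be sketched as a dictionary keyed by digest (a toy, not any particular product):

```python
import hashlib

class BlockStore:
    """Toy block-level dedup store: blocks indexed by their MD5.  Safe in
    practice only because stored blocks are a vanishingly small fraction
    of all possible blocks, so a matching digest almost certainly means
    an identical block rather than an accidental collision."""
    def __init__(self):
        self.blocks = {}                     # hex digest -> one shared copy

    def put(self, block: bytes) -> str:
        key = hashlib.md5(block).hexdigest()
        self.blocks.setdefault(key, block)   # dedup: keep only first copy
        return key

    def get(self, key: str) -> bytes:
        return self.blocks[key]

store = BlockStore()
k1 = store.put(b"A" * 4096)   # e.g. a block from one system32 folder
k2 = store.put(b"A" * 4096)   # identical block from another machine
assert k1 == k2 and len(store.blocks) == 1
```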





Re: Netflix Is Eating Up More Of North America's Bandwidth Than Any Other Company

2011-05-18 Thread JC Dill

 On 18/05/11 4:42 PM, Dorn Hetzel wrote:



On Wed, May 18, 2011 at 7:35 PM, JC Dill jcdill.li...@gmail.com 
mailto:jcdill.li...@gmail.com wrote:


 On 18/05/11 1:13 PM, Cameron Byrne wrote:


It's not.  These people need a pair of rabbit ears and a DVR.


Roughly 90% of the content I'm interested in watching is not
available over the air.  E.g. Comedy Central, CNN, Discovery,
Showtime/HBO, etc.

jc


Sure, but I'm guessing that something like that 80% of the content 
that 80% of people watch *is* available on some satellite/cable channel.


Yes, but most isn't available over the air with rabbit ears and a 
DVR.  One of the big appeals of Netflix is the $8/month for all you can 
eat versus ~$40-60 for various cable and satellite packages.


jc




Re: Had an idea - looking for a math buff to tell me if it's possible

2011-05-18 Thread Heath Jones
My point here is it IS possible to transfer just a hash and counter value
and effectively generate identical data at the remote end.
The limit that will be hit is the difficulty of generating and comparing
hash values with current processing power.

I'm proposing iterating through generated data up until the actual data.
It's not even a storage issue, as once you have incremented the data you
don't need to store old data or hash values - just the counter. No massive
hash tables.

It's a CPU issue.

Heath

On 19 May 2011 00:42, valdis.kletni...@vt.edu wrote:

 On Thu, 19 May 2011 00:26:26 BST, Heath Jones said:
  I wonder if this is possible:
 
  - Take a hash of the original file. Keep a counter.
  - Generate data in some sequential method on sender side (for example
 simply
  starting at 0 and iterating until you generate the same as the original
  data)
  - Each time you iterate, take the hash of the generated data. If it
 matches
  the hash of the original file, increment counter.
  - Send the hash and the counter value to recipient.
  - Recipient performs same sequential generation method, stopping when
  counter reached.

 MD5 is a 128 bit hash.

 2^128 is 340,282,366,920,938,463,463,374,607,431,768,211,456 - you're
 welcome
 to iterate that many times to find a duplicate. You may get lucky and get a
 hit
 in the first trillion or so attempts - but you may get unlucky and not get
 a
 hit until the *last* few trillion attempts. On average you'll have to
 iterate
 about half that huge number before you get a hit.

 And it's lossy - if you hash all the possible 4K blocks with MD5, you'll
 find
 that each of those 2^128 hashes has been hit about 2^32640 times - and no
 indication in the hash of *which* of the 256 colliding 4K blocks you have
 on
 this iteration.  (The only reason that companies can do block-level
 de-duplication by saving a hash as an index to one copy shared by all
 blocks
 with the same hash value is because you have a *very small* fraction of the
 possibilities covered, so if you saved a 4K block of data from somebody's
 system32 folder under a given MD5 hash, it's *far* more likely that another
 block with that same hash is from another copy of another identical
 system32
 folder, than it is an actual accidental collision.)

 Protip:  A good hash function is by definition one-way - given the data,
 it's
 easy to generate the hash - but reversing it to find the pre-image (the
 data
 that *generated* the hash) is massively difficult.






Re: Had an idea - looking for a math buff to tell me if it's possible

2011-05-18 Thread Aria Stewart

On Wednesday, May 18, 2011 at 6:01 PM, Heath Jones wrote: 
 My point here is it IS possible to transfer just a hash and counter value
 and effectively generate identical data at the remote end.
 The limit that will be hit is the difficulty of generating and comparing
 hash values with current processing power.
 
 I'm proposing iterating through generated data up until the actual data.
 It's not even a storage issue, as once you have incremented the data you
 don't need to store old data or hash values - just the counter. No massive
 hash tables.
 
 It's a CPU issue.

Google Birthday paradox and hash collision


Aria Stewart
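
Aria's pointer works out numerically: with an n-bit hash, a collision is expected after roughly 2^(n/2) random inputs. A quick illustrative check, with MD5 truncated to 16 bits so it runs fast:

```python
import hashlib

def truncated_md5(data: bytes, bits: int = 16) -> int:
    """MD5 truncated to `bits` bits, so collisions arrive quickly."""
    return int.from_bytes(hashlib.md5(data).digest(), "big") >> (128 - bits)

# Birthday bound: ~2**(16/2) = 256 inputs should suffice; pigeonhole
# caps the loop at 65_537 regardless.
seen = {}
for i in range(70_000):
    h = truncated_md5(i.to_bytes(4, "big"))
    if h in seen:
        print(f"collision after {i + 1} inputs")  # typically a few hundred
        break
    seen[h] = i
```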




Re: Had an idea - looking for a math buff to tell me if it's possible

2011-05-18 Thread Justin Cook
Why is this on nanog?

Yes it is possible. But the CPU use and time will be absurd compared to
just sending the data across the network.

I would say attempting this with even a small file will end up laughable:
passwords are just several bytes and still have significant lifetimes against
brute force.

-- 
Justin Cook
On 19 May 2011 01:03, Heath Jones hj1...@gmail.com wrote:
 My point here is it IS possible to transfer just a hash and counter value
 and effectively generate identical data at the remote end.
 The limit that will be hit is the difficulty of generating and comparing
 hash values with current processing power.

 I'm proposing iterating through generated data up until the actual data.
 It's not even a storage issue, as once you have incremented the data you
 don't need to store old data or hash values - just the counter. No massive
 hash tables.

 It's a CPU issue.

 Heath

 On 19 May 2011 00:42, valdis.kletni...@vt.edu wrote:

 On Thu, 19 May 2011 00:26:26 BST, Heath Jones said:
  I wonder if this is possible:
 
  - Take a hash of the original file. Keep a counter.
  - Generate data in some sequential method on sender side (for example
 simply
  starting at 0 and iterating until you generate the same as the original
  data)
  - Each time you iterate, take the hash of the generated data. If it
 matches
  the hash of the original file, increment counter.
  - Send the hash and the counter value to recipient.
  - Recipient performs same sequential generation method, stopping when
  counter reached.

 MD5 is a 128 bit hash.

 2^128 is 340,282,366,920,938,463,463,374,607,431,768,211,456 - you're
 welcome
 to iterate that many times to find a duplicate. You may get lucky and get
a
 hit
 in the first trillion or so attempts - but you may get unlucky and not
get
 a
 hit until the *last* few trillion attempts. On average you'll have to
 iterate
 about half that huge number before you get a hit.

 And it's lossy - if you hash all the possible 4K blocks with MD5, you'll
 find
 that each of those 2^128 hashes has been hit about 2^32640 times - and no
 indication in the hash of *which* of the 256 colliding 4K blocks you have
 on
 this iteration. (The only reason that companies can do block-level
 de-duplication by saving a hash as an index to one copy shared by all
 blocks
 with the same hash value is because you have a *very small* fraction of
the
 possibilities covered, so if you saved a 4K block of data from somebody's
 system32 folder under a given MD5 hash, it's *far* more likely that
another
 block with that same hash is from another copy of another identical
 system32
 folder, than it is an actual accidental collision.)

 Protip: A good hash function is by definition one-way - given the data,
 it's
 easy to generate the hash - but reversing it to find the pre-image (the
 data
 that *generated* the hash) is massively difficult.




Re: Netflix Is Eating Up More Of North America's Bandwidth Than Any Other Company

2011-05-18 Thread Dorn Hetzel


 Sure, but I'm guessing that something like that 80% of the content that
 80% of people watch *is* available on some satellite/cable channel.


 Yes, but most isn't available over the air with rabbit ears and a DVR.
  One of the big appeals of Netflix is the $8/month for all you can eat
 versus ~$40-60 for various cable and satellite packages.

 jc


 But it's not really $8/month, it's $8 plus broadband.


Re: Yahoo and IPv6

2011-05-18 Thread Owen DeLong

On May 17, 2011, at 8:55 AM, Matthew Kaufman wrote:

 On 5/17/2011 5:25 AM, Owen DeLong wrote:
 
 My point was that at least in IPv6, you can reach your boxes whereas with
 IPv4, you couldn't reach them at all (unless you used a rendezvous service
 and preconfigured stuff).
 
 Actually almost everyone will *still* need a rendezvous service as even if 
 there isn't NAT66 (which I strongly suspect there will be, as nobody has 
 magically solved the rest of the renumbering problems) there will still be 
 default firewall filters that the average end-user won't know how or why to 
 change (and in some cases won't even have access to the CPE).

PI solves the majority of the renumbering problems quite nicely and is readily 
available for
most orgs. now.

Beyond that, I think you will see firewalls become much easier for the average 
person to
manage and it will become a simple matter of making an http (hopefully https) 
connection
to the home gateway and telling it which service (by name, such as VNC, HTTP, 
HTTPs, etc.
from a pull-down) and which host (ideally by name, but, may have other 
requirements here)
to permit.

Some firewalls already come pretty close to that.

There is also talk (for better or worse) of having something like UPNP, but, 
without the NAT
for enabling such services.

No rendezvous server required.

 
 For the former we can only hope that NAT66 box builders can get guidance from 
 IETF rather than having IETF stick its collective head in the sand... for the 
 latter the firewall traversal has a chance of being more reliable than having 
 to traversal both filtering and address translation.
 

I'm still hoping that we just don't have NAT66 box builders. So far, it's 
working out that way.

Owen





Re: Had an idea - looking for a math buff to tell me if it's possible with today's technology.

2011-05-18 Thread Chrisjfenton
Try ITU V.42bis.


Iridescent iPhone 
+1 972 757 8894



On May 18, 2011, at 15:07, Landon Stewart lstew...@superb.net wrote:

 Lets say you had a file that was 1,000,000,000 characters consisting of
 8,000,000,000bits.  What if instead of transferring that file through the
 interwebs you transmitted a mathematical equation to tell a computer on the
 other end how to *construct* that file.  First you'd feed the file into a
 cruncher of some type to reduce the pattern of 8,000,000,000 bits into an
 equation somehow.  Sure this would take time, I realize that.  The equation
 would then be transmitted to the other computer where it would use its
 mad-math-skillz to *figure out the answer* which would theoretically be the
 same pattern of bits.  Thus the same file would emerge on the other end.
 
 The real question here is how long would it take for a regular computer to
 do this kind of math?
 
 Just a weird idea I had.  If it's a good idea then please consider this
 intellectual property.  LOL
 
 
 -- 
 Landon Stewart lstew...@superb.net
 SuperbHosting.Net by Superb Internet Corp.
 Toll Free (US/Canada): 888-354-6128 x 4199
 Direct: 206-438-5879
 Web hosting and more Ahead of the Rest: http://www.superbhosting.net



Re: Netflix Is Eating Up More Of North America's Bandwidth Than Any Other Company

2011-05-18 Thread JC Dill

 On 18/05/11 5:10 PM, Dorn Hetzel wrote:



Sure, but I'm guessing that something like that 80% of the
content that 80% of people watch *is* available on some
satellite/cable channel.


Yes, but most isn't available over the air with rabbit ears and
a DVR.  One of the big appeals of Netflix is the $8/month for all
you can eat versus ~$40-60 for various cable and satellite packages.

jc


But it's not really $8/month, it's $8 plus broadband.


But I have broadband already.  To get Satellite or Cable it's another 
$40-60/month, to get Netflix it's another $8/month.


jc





Re: Had an idea - looking for a math buff to tell me if it's possible with today's technology.

2011-05-18 Thread Christopher Morrow
On Wed, May 18, 2011 at 8:03 PM, Heath Jones hj1...@gmail.com wrote:
 My point here is it IS possible to transfer just a hash and counter value
 and effectively generate identical data at the remote end.
 The limit that will be hit is the difficulty of generating and comparing
 hash values with current processing power.

 I'm proposing iterating through generated data up until the actual data.
 It's not even a storage issue, as once you have incremented the data you
 don't need to store old data or hash values - just the counter. No massive
 hash tables.

 It's a CPU issue.


i'd note it took you many more packets than my example of roughly the
same thing.

if you really want to save bandwidth, my 1 packet answer is the best answer.



Re: Had an idea - looking for a math buff to tell me if it's possible with today's technology.

2011-05-18 Thread Barry Shein

Compression is one result.

But this is sometimes referred to as the inverse problem: Given a
set of data tell me a function which fits it (or fits it to some
tolerance.) It's important in statistics and all kinds of data
analyses.

Another area is fourier transforms which basically sums sine waves of
different amp/freq until you reach the desired fit. This is also the
basis of a lot of noise filtering algorithms, throw out the
frequencies you don't want, such as 60HZ or 50HZ, or all those smaller
than you consider interesting, high-freq noise, or low freq noise,
whatever.

Another buzz term is data entropy, randomness. If the data were
perfectly random then there exists no such function which can be
represented in fewer bits than the original data, which is why you
can't compress a compressed file indefinitely, and also why it's
recommended you compress files before encrypting them: it's hard to
begin cracking a file that is already pretty close to random.

And this is what you do when you give something like a MARC or ISBN or
Dewey Decimal index to a librarian and s/he brings you the book you
want. Effectively you've represented the entire book as that small
number. Imagine if you had to recite the entire text of a book to
find it unambiguously! See: Transfinite Number Systems.
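
Barry's entropy point is directly observable with any stock compressor (zlib here; exact sizes will vary slightly):

```python
import os
import zlib

structured = b"the quick brown fox " * 500   # 10,000 highly redundant bytes
random_data = os.urandom(10_000)             # 10,000 near-random bytes

# Redundant data compresses dramatically; random data doesn't compress
# at all (zlib's framing actually makes it slightly *larger*).
print(len(zlib.compress(structured)))
print(len(zlib.compress(random_data)))
```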

-- 
-Barry Shein

The World  | b...@theworld.com   | http://www.TheWorld.com
Purveyors to the Trade | Voice: 800-THE-WRLD| Dial-Up: US, PR, Canada
Software Tool  Die| Public Access Internet | SINCE 1989 *oo*



Re: Had an idea - looking for a math buff to tell me if it's possible

2011-05-18 Thread Valdis . Kletnieks
On Thu, 19 May 2011 01:01:43 BST, Heath Jones said:

 My point here is it IS possible to transfer just a hash and counter value
 and effectively generate identical data at the remote end.

Nope. Let's use phone numbers as an example.  I want to send you the phone
number 540-231-6000.  The hash function is number mod 17 plus 5. So
5402316000 mod 17 plus 5 is '7'.  Yes, it's a poor hash function, except it has
two nice features - it can be worked with pencil and paper or a calculator, and
it has similar output distributions to really good hash functions (math geeks
would say it's an onto function, but not a one-to-one function).

http://www.regentsprep.org/Regents/math/algtrig/ATP5/OntoFunctions.htm

Go read that, and get your mind wrapped around onto and one-to-one.  Almost
all good hashes are onto, and almost none are one-to-one.

OK. counter = 0. Hash that, we get 5. Increment and hash, we get 6. Increment
and hash, we get 7.  If we keep incrementing and hashing, we'll also get 7 for
19, 36, 53, 70, and roughly 317,783,289 other numbers before we get to my
phone number.

Now if I send you 2 and 7, how do you get that phone number back out, and be
sure you wanted *that* phone number and not 212-555-3488, which *also* ends up
with a hash of 7, so you'd send a counter of 2?  Or a number in Karubadam,
Tajikistan that starts with +992 3772 but also hashes to 7?

The problem is that if the number of possible input values is larger than the
number of possible hash outputs, there *will* be collisions.  The hash
function above generates 17 numbers, from 5 to 21 - if you try to hash an
18th number, it *has* to collide with a number already used. Think of a game
of musical chairs, which is interesting only because it's an onto function
(every chair gets a butt mapped to it), but it's not one-to-one (not all
chairs have *exactly one* butt aimed at them).
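The collision behaviour of that toy hash is easy to check with a few
lines of Python (a sketch; the names are mine):

```python
def h(n):
    # Valdis's toy hash: onto {5..21}, nowhere near one-to-one.
    return n % 17 + 5

# Every 17th integer lands on the same output value:
first_hits = [n for n in range(0, 100) if h(n) == 7]
print(first_hits)        # [2, 19, 36, 53, 70, 87] -- matches the post
print(h(5402316000))     # 7 -- the phone number collides with all of them
```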

(And defining the hash function so that it's one-to-one and every possible
input value generates a different output value doesn't work either - because
at that point, the only counter that generates the same hash as the number
you're trying to send *is that number*.  So if 5552316000 generates a hash
value of 8834253743, you'll hash 0, 1, 2, 3, ... and only get that same hash
again when you get to the phone number.  Then you send me
5552316000,8834253743 and I hash some 5,552,315,999 other numbers till I
reach the phone number... which you sent me already as the counter value.)

tl;dr:  If the hash function is onto but not one-to-one, you get collisions that
you can't resolve. And if the hash function *is* one-to-one, you end up
sending a counter that's equal to the data.


pgp8cF84brL3k.pgp
Description: PGP signature


Re: Had an idea - looking for a math buff to tell me if it's possible

2011-05-18 Thread Heath Jones
 My point here is it IS possible to transfer just a hash and counter value
 and effectively generate identical data at the remote end.

Nope. Let's use phone numbers as an example.  I want to send you the phone
 number 540-231-6000.  The hash function is number mod 17 plus 5. So
  5402316000 mod 17 plus 5 is '7'.



 OK. counter = 0. Hash that, we got 5. increment and hash, we get 6.
 Increment
 and hash, we got 7.  If we keep incrementing and hashing, we'll also get 7
 for
 19, 36, 53, 70, and roughly 317,783,289 other numbers before you get to my
 phone number.

 Now if I send you 2 and 7, how do you get that phone number back out, and
 be
 sure you wanted *that* phone number and not 212-555-3488, which *also* ends
 up
  with a hash of 7, so you'd send a counter of 2?


The correct values I would send for that hash function are 7 and
approximately 317,783,289; the counter is incremented each time a generated
data value is reached whose hash matches the hash of the data to be
communicated, *not by hashing the counter*.

Example:
I want to send you the number 1.
The MD5 hash of this is f59a3651eafa7c4dbbb547dd7d6b41d7.
I generate data 0, 1, 2, 3, 4, 5 ... all the way up to 1, observing the hash
value of the data just generated each time. Whenever the hash matches
f59a3651eafa7c4dbbb547dd7d6b41d7, I increment a counter. Once I have reached
the number I want to send you, I send the hash value and the counter value.

You perform the same function starting at 0 and working your way up until
you have a matching counter value. The number of collisions in the range 0
- target is represented by the counter value, and as long as both sides are
performing the same sequence this will work.

Obviously this is completely crazy and would never happen with current
processing power... It's just theoretical nonsense, but answers the OP's
question.
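The scheme as described can be sketched like this, with the hash
deliberately truncated to 8 bits so the searches actually terminate
(function names are mine; with a full 128-bit digest both loops would
outlast the universe, as noted above):

```python
import hashlib

def tiny_hash(n):
    # Truncated MD5 standing in for the full hash, purely so the demo runs.
    return hashlib.md5(str(n).encode()).hexdigest()[:2]

def encode(target):
    digest = tiny_hash(target)
    # Count how many smaller values collide with the target's hash.
    counter = sum(1 for n in range(target) if tiny_hash(n) == digest)
    return digest, counter            # everything the sender transmits

def decode(digest, counter):
    seen = 0
    n = 0
    while True:
        if tiny_hash(n) == digest:
            if seen == counter:
                return n              # the counter-th value with this hash
            seen += 1
        n += 1

digest, counter = encode(54321)
print(decode(digest, counter))        # 54321 -- round-trips correctly
```

It round-trips, but as the follow-ups point out, the counter itself
grows to roughly the size of the data being sent.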


Re: Had an idea - looking for a math buff to tell me if it's possible with today's technology.

2011-05-18 Thread Brett Frankenberger
On Thu, May 19, 2011 at 12:26:26AM +0100, Heath Jones wrote:
 I wonder if this is possible:
 
 - Take a hash of the original file. Keep a counter.
 - Generate data in some sequential method on sender side (for example simply
 starting at 0 and iterating until you generate the same as the original
 data)
 - Each time you iterate, take the hash of the generated data. If it matches
 the hash of the original file, increment counter.
 - Send the hash and the counter value to recipient.
 - Recipient performs same sequential generation method, stopping when
 counter reached.
 
 Any thoughts?

That will work.  Of course, the CPU usage will be overwhelming --
longer than the age of the universe to do a large file -- but,
theoretically, with enough CPU power, it will work.

For an 8,000,000,000-bit file and a 128-bit hash, you will need a
counter of at least 7,999,999,872 bits to cover the number of possible
collisions.

So you will need at least 7,999,999,872 + 128 = 8,000,000,000 bits to
send your 8,000,000,000-bit file.  If your goal is to reduce the number
of bits you send, this wouldn't be a good choice.
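The arithmetic can be sanity-checked in a few lines (variable names are
mine):

```python
file_bits = 8_000_000_000
hash_bits = 128

# 2**file_bits possible files share only 2**hash_bits digests, so each
# digest has about 2**(file_bits - hash_bits) preimages; the counter
# must be able to name any one of them.
counter_bits = file_bits - hash_bits
print(counter_bits)                        # 7999999872
print(hash_bits + counter_bits == file_bits)  # True: no savings at all
```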

 -- Brett



Re: Had an idea - looking for a math buff to tell me if it's possible with today's technology.

2011-05-18 Thread Heath Jones
Ha! I was wondering this the whole time - whether the size of the counter
would make it a zero-sum game. That sux! :)

On 19 May 2011 03:52, Brett Frankenberger rbf+na...@panix.com wrote:

 On Thu, May 19, 2011 at 12:26:26AM +0100, Heath Jones wrote:
  I wonder if this is possible:
 
  - Take a hash of the original file. Keep a counter.
  - Generate data in some sequential method on sender side (for example
 simply
  starting at 0 and iterating until you generate the same as the original
  data)
  - Each time you iterate, take the hash of the generated data. If it
 matches
  the hash of the original file, increment counter.
  - Send the hash and the counter value to recipient.
  - Recipient performs same sequential generation method, stopping when
  counter reached.
 
  Any thoughts?

 That will work.  Of course, the CPU usage will be overwhelming --
 longer than the age of the universe to do a large file -- but,
 theoretically, with enough CPU power, it will work.

 For an 8,000,000,000-bit file and a 128-bit hash, you will need a
 counter of at least 7,999,999,872 bits to cover the number of possible
 collisions.

 So you will need at least 7,999,999,872 + 128 = 8,000,000,000 bits to
 send your 8,000,000,000 bit file.  If your goal is to reduce the number
 of bits you send, this wouldn't be a good choice.

 -- Brett



Re: IPv6 Conventions

2011-05-18 Thread Owen DeLong

On May 18, 2011, at 8:05 AM, Iljitsch van Beijnum wrote:

 On 18 mei 2011, at 16:44, Todd Snyder wrote:
 
 1) Is there a general convention about addresses for DNS servers? NTP
 servers? dhcp servers?
 
 There are people who do stuff like blah::53 for DNS, or blah:193:177:81:20 for 
 a machine that has IPv4 address 193.177.81.20.
 
 For the DNS, I always recommend using a separate /64 for each one, as that 
 way you can move them to another location without having to renumber, and 
 make the addresses short, so a ::1 address or something, because those are 
 the IPv6 addresses that you end up typing a lot.
 
 For all the other stuff, just use stateless autoconfig or start from ::1 when 
 configuring things manually although there is also a little value in putting 
 some of the IPv4 address in there. Note that 2001:db8::10.0.0.1 is a valid 
 IPv6 address. Unfortunately when you see it copied back to you it shows up as 
 2001:db8::a00:1 which is less helpful.
 
 2) Are we tending to use different IPs for each service on a device?
 
 No, the same Internet Protocol.
 

I believe he meant different IP addresses and I highly recommend doing so.

If you do so, then you can move services around and name things independent of
the actual host that they happen to be on at the moment without having to 
renumber
or rename.


 Finally, what tools do people find themselves using to manage IPv6 and
 addressing?
 
 Stateless autoconfig for hosts, EUI-64 addressing for routers, VLAN ID in the 
 subnet bits. That makes life simple. Simple be good.
 

Yep, where that works, those are fine ideas.

Owen
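The address-compression gotcha Iljitsch mentions is easy to reproduce
with Python's standard-library ipaddress module (a quick sketch):

```python
import ipaddress

# Dotted-quad notation in the low 32 bits is valid IPv6 text syntax...
addr = ipaddress.IPv6Address("2001:db8::10.0.0.1")

# ...but the canonical compressed form drops the readable IPv4 part:
print(addr)    # 2001:db8::a00:1
```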




NOT Buckaroo (WAS: Re: Netflix Is Eating Up More Of North America's Bandwidth Than Any Other Company)

2011-05-18 Thread John Osmon
On Thu, May 19, 2011 at 07:45:44AM +1000, Jeffrey S. Young wrote:
 No matter where you go, there you are.
 [--anon?]

Oliver's Law of Location

Kinda usurped by Buckaroo Banzai in the movie by the same name.  It
always annoys me when attributed to that character.

Go back to your regular programming -- multicast, unicast, or broadcast --
it'll probably be more interesting than this dead end...



Re: Yahoo and IPv6

2011-05-18 Thread Michael Dillon
 Right now I see something like ool-6038bdcc.static.optonline.net for one
 of our servers, how does this
 mean anything to anyone else?

 Does http://وزارة-الأتصالات.مصر/ mean more to you?

 Or http://xn--4gbrim.xnymcbaaajlc6dj7bxne2c.xn--wgbh1c which is what it
 translates to in your browser.

Actually, it translates to
http://xnrmckbbajlc6dj7bxne2c.xn--wgbh1c/ in the browser, which
then redirects to the URL that you quoted above.

Got to pay attention to these details if you want to keep up your
troubleshooting skills.

--Michael Dillon



RE: Netflix Is Eating Up More Of North America's Bandwidth Than Any Other Company

2011-05-18 Thread Frank Bulk
You mean IP TV content products from folks such as SES Americom's IP-PRIME,
IPTV Americas, EchoStar's IP TV, or Intelsat?

-Original Message-
From: Holmes,David A [mailto:dhol...@mwdh2o.com] 
Sent: Wednesday, May 18, 2011 3:01 PM
To: Michael Holstein; Roy
Cc: nanog
Subject: RE: Netflix Is Eating Up More Of North America's Bandwidth Than Any
Other Company

I think this shows the need for an Internet-wide multicast implementation.
I can recall working on a product that delivered satellite multicast
streams (with each multicast group corresponding to an individual TV
station) to telco COs. This enabled the telco to implement multicast at
the edge of its network, where user broadband clients would issue
multicast joins only as far as the CO. If I recall correctly, this was
implemented with the old Cincinnati Bell telco. I admit there are a lot of
COs and cable head-ends, though, for this solution to scale.