From: Bob Frankston <[EMAIL PROTECTED]>
Date: May 21, 2007 12:54:58 PM EDT

Am I missing something, or does this hark back to the days of making the network smarter and the users dumber? Why do we assume that the most interesting applications require dedicated resources? Is this about Hollywood or about sharing information? Now we want the network to guarantee security -- are we so desperate for "peace of mind" that we'd rather return to the past than deal with reality? It reminds me of the peace of mind that people find in Intelligent Design.

I'd feel far more comfortable if GENI funded efforts to understand connected applications rather than efforts to make a smarter network or "a new communications infrastructure built alongside the Internet," as if the Internet were a physical object rather than a concept.

Alas, this slicing up of the substrate into predefined managed middles sounds like just the opposite of the defining concept of the end-to-end Internet. It makes applications dependent upon highly valuable assured bits, which is a return to the world of the center-defined telephone system. By assuring dependency it breaks the fundamental dynamic that has made the Internet work so well. In today's Internet (prototype), applications are not dependent upon any particulars of the path, and thus we've been able to take advantage of an abundance of "junk bits".

What is glaringly absent from all these efforts is supporting the  
Internet at the edge or, more to the point, from the edge. The  
networks in our homes must operate completely without any dependency  
on any part of the network outside -- thus you cannot rely on an IP  
address allocation or the DNS if you are to send a message from your  
light switch to your light fixture. You can then interconnect your  
network to your neighbors’ networks without depending on any network  
elements from others. And eventually you will connect the world  
without any dependency upon the benevolence of ICANN or ARIN and you  
will not depend on the DNS for stability.
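
As a minimal sketch of what I mean, assuming nothing more than the devices sharing a local link with self-assigned (link-local, zeroconf-style) addresses -- the port number and message format here are invented for illustration, not a real protocol:

import socket

PORT = 54321  # arbitrary port the two devices agree on; no registry involved

def fixture():
    # Fixture side: listen on the local link for a toggle message.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))
    while True:
        data, (sender, _) = sock.recvfrom(64)
        if data == b"toggle":
            print("toggling the lamp; message came from", sender)

def switch():
    # Switch side: broadcast on the local link; no address lookup,
    # no DNS, no allocated prefix -- just whatever wire or radio the
    # two devices happen to share.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    sock.sendto(b"toggle", ("255.255.255.255", PORT))

Nothing outside the home is consulted; the two endpoints need only agree with each other.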

Yet all this research seems to be going in just the opposite direction and is assuring that applications will be fatally dependent upon a synthetic and sterile version of the real world. The real world exists; the P2P and other communities have embraced it. But the real Internet is far too frightening to those who want to buy solutions to well-defined problems. Better to defang it and assure that only good applications will be funded and that it will be the 1950s again, with The Internet Company assuring the proper allocation of resources for each application. What's the difference between TIC and TPC?

Remember that Genuity, a BBN spinoff, bet the company on selling high-value bits and lost the bet. If our military thinks that this is good design, then we have every reason to be very afraid.




From: Lauren Weinstein <[EMAIL PROTECTED]>
Date: May 21, 2007 2:32:21 PM EDT

I'm not sure that Bob's concerns about GENI specifically are
entirely justified, but he's certainly correct about the overall
trends.

There's scarce interest among the corporate or governmental elite (to the extent that these are distinct categories) in the end-to-end Internet with users in control.

In fact, the push is in exactly the opposite direction, both for income stream maximization and for societal "management" (the latter will typically be termed "security," "war on terror," and/or "protecting our children").

The end-to-end Internet is an amusement that the communications
giants and global governments were willing to let develop so long as
it didn't interfere with established agendas.  Now that the Internet
as we know it is a real threat to the status quo, the push to
"evolve" to something more controllable is in full swing.

The dream of such proponents of an "under our thumb" Internet:

Every bit controlled.
Every bit tracked.
Every application certified.
Uncertified end-to-end applications strangled.
Every connection and packet tappable.
Every user fully identified.
All data recorded.
All data retained.
Love Big Brother.

This will take time of course, and might still be prevented.  But by
and large, folks like us who keep pushing for end-to-end Internet
user control rather than central control are viewed as increasingly
irritating and perhaps even dangerous dinosaurs.

--Lauren--
Lauren Weinstein
[EMAIL PROTECTED] or [EMAIL PROTECTED]
Tel: +1 (818) 225-2800
http://www.pfir.org/lauren



From: "Zegura, Ellen Witte" <[EMAIL PROTECTED]>
Date: May 22, 2007 3:33:53 AM EDT


I’ve seen the IP thread about GENI that was prompted by yesterday’s  
announcement that BBN will be the GENI Project Office (GPO).  Along  
with Scott Shenker, I am helping lead the GENI Science Council (GSC),  
the group that represents the research community that will make use  
of the GENI facility.

I wanted to respond to a few parts of Bob’s message.  Primarily, I  
wanted to make clear that GENI is a facility for experimenting with  
new Internet architectures, not a proposal to BE the new  
architecture.  The reason for slicing is to allow multiple  
experiments to run on the same physical resources at the same time.    
The slices are managed so that experimenters don’t step on one  
another during the testing phase.  There are other funding programs  
in NSF that will fund the research on alternative architectures and  
ideas that can be tested in the GENI facility.  These include the  
FIND program, and many other networking and distributed systems  
programs. Of course, one could propose that GENI itself be the new architecture -- and surely some will -- and then folks like Bob and others could reasonably debate the wisdom of that. We're not there yet, though, and it's important to keep the distinction between GENI as a facility and GENI as a proposed new architecture.
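
To make the facility-versus-architecture distinction concrete, here is a toy sketch of what slicing provides. This is only an illustration of the idea, not GENI's actual control framework; the class, method, and experiment names are all invented:

class Substrate:
    """Toy model of one physical resource whose capacity is shared out."""
    def __init__(self, capacity_mbps):
        self.capacity = capacity_mbps   # total capacity of the physical resource
        self.slices = {}                # experiment name -> reserved capacity

    def create_slice(self, name, mbps):
        # Admit an experiment only if its reservation fits alongside the others.
        if sum(self.slices.values()) + mbps > self.capacity:
            raise RuntimeError("substrate full; cannot admit slice " + name)
        self.slices[name] = mbps

    def release_slice(self, name):
        self.slices.pop(name, None)

node = Substrate(capacity_mbps=1000)
node.create_slice("clean-slate-routing-experiment", 400)
node.create_slice("content-naming-experiment", 300)
# A third experiment asking for 500 would be refused rather than allowed
# to interfere with the two already running.

The point is only that slices are admission-controlled and isolated from one another; the architectures running inside them are whatever the experimenters propose.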

With respect to the suggestion that money would be better spent on  
research that improves our understanding of how to operate well from  
the edge, I guess I view that as complementary.  GENI is likely to  
support experiments that modify edges and use a standard IPv4 or v6  
core, perhaps with instrumentation inside the network that would  
allow more insight into what helps edge-controlled apps work better.

Ellen

From: Bob Frankston <[EMAIL PROTECTED]>
Date: May 22, 2007 4:50:28 PM EDT

I’ll admit that much of this is an expression of my own frustrations  
as I try to explain why Internet Inc is the wrong model just as I try  
to explain why broadband is the anti-Internet. I see GENI as a  
symptom of incrementalism even though it’s just a funding mechanism  
for research. I realize that it is not really supposed to be the next  
Internet but nonetheless it is filling that role by being a focus of  
Internet research. I don't blame the sponsors for the hype but it's  
still useful to take a look at it as a proxy for the idea that we  
should look at the network architecture rather than at how we use  
connectivity. I'm also biased towards solutions that empower the individual rather than institutions, be they corporations or governmental agencies.

If there are indeed equivalent "from the edge" efforts, I would like to know about them. For now I'm treating GENI as the face of Internet research whether or not it's meant to be. It's a useful foil for contrasting the network-centric approach with alternative approaches.

If GENI is indeed "a facility for experimenting with new Internet  
architectures" then the absence of the null case is telling. This is  
not necessarily the fault of those sponsoring GENI or other such  
efforts except as a reminder of the lack of funding for approaches  
that aren't as well-defined.

I've been arguing that the Internet has been defined by the end-to-end argument as a constraint -- that is, the inability to depend on any particular architecture. Attempts to improve the architecture with smart protocols like multicasting and to extend it with IPv6 have been problematic, while P2P approaches have met with widespread adoption. These P2P efforts are the real inheritors of the Internet's end-to-end constraint.

What most concerns me is the seeming lack of this defining constraint in the research efforts. It leads to attempts to solve complex problems like phishing and scaling in sterile environments that can't exhibit these problems. There seems to be a presumption that issues like security can be addressed in the physical architecture of a network.

I cited the light switch/fixture problem because it seems so trivial yet demands a solution that does not depend on the network as a solution-provider. All we can presume is two consenting devices that might be able to exchange messages. To be more precise, one can indeed assume a network of services, but what makes this problem so interesting is that it arises because I have not permitted myself to give in to the overwhelming temptation to rely on network services, and thus we can decouple the system elements. Such solutions are not only able to exhibit Moore's Law effects; they are also more stable.

My own "research" consisted of trying to do projects such as home  
control (and, while at Microsoft, working with Honeywell and Intel to  
(unintentionally) learn what doesn't work). It's the same kind of  
constraint that I used for home networking (no installers or service  
providers) which arose from my experience with personal computing. I  
remember efforts like Project Athena failing to "get" personal  
computing.

I don't think it's too cynical to observe that well-defined experiments on a network test bed are very appealing to those who need to have fundable projects with deliverables. Congressional scrutiny of funding almost requires local justification of each step, while efforts to improve connectivity at the edge are too easily treated as threatening by those who associate P2P with a loss of control.

The question, then, is: where is the effort to solve the problems of making use of whatever transport is available to do our own networking and to create solutions at the edge? Problems like phishing cannot be solved inside the network, nor can they be "solved" in a closed sense. I also argue that efforts to solve scaling problems in a test bed fall into what I consider the trap of trying to solve a problem instead of seeing the opportunity in solutions already available.

Compositing local connectivity when we can assume abundant capacity is very different from presuming a scarcity in which a single entity must manage the complex routing through chokepoints. Why aren't we asking why we can't take advantage of the full capacity of the available fiber (and other transports), rather than accepting this synthetic scarcity and reveling in the complex algorithms necessary to work around it?

How do we support projects that aren't as amenable to closed solutions but are still vital? This is a nontrivial question and, perhaps, we have to rely on the by-products of effective marketplaces. This is why individuals can create P2P solutions without organizational funding, though such funding helps in producing professional solutions, as we've seen with Skype and with Firefox's challenge in scaling the effort. That leaves a lot of problems, like phishing, seemingly orphaned because they're not as amenable to solutions in isolation.





---------------------------------------------------------------
             WWWhatsup NYC
http://pinstand.com - http://punkcast.com
--------------------------------------------------------------- 
