On 02/19/2013 04:53 AM, Amir Sagie wrote:
> I think more context as to who will actually use this would be
> useful...
I don't think the guys @CERN or @MIT had any idea what the web would be
used for by the end of the century when they decided on HTTP 1.0 and
HTML.

First of all, I don't think there is a single answer to this. The
incentives are spread all over the field.

People running infrastructure (including arig) may want (more)
distributed DNS (just as we also want distributed NTP or whatever basic
infrastructure service is still centralized) just to reduce the
single point of failure in the current centralized infrastructure. This
especially applies to mesh networks with potentially more unreliable
links/nodes...

Current file and media hosts, on the other hand, spend quite a lot on
centralized infrastructure. I guess they'd rather not spend it, given
that the user experience would not change significantly...

Web developers will love having a (secure) DHT which can be accessed
through JavaScript -- as that would obsolete the whole PHP/SQL mess we
currently use on the server side.

>> I envision an architecture that extends the current W3C web by a
>> number of additions:
>>
>> -1. (a more distributed) networking infrastructure, multi-homing in
>> ARPA, local mesh (arig), global overlay networks like DN42, VPNs and
>> Tor
>> * transparent to client applications due to trivial routing for IPv6
>> multi-homed nodes, as there are no address-space collisions
>
> I can only add that extending our capability to form long-range
> wireless links, and leveraging distributed, logically time-synced
> antennas, is, IMO, crucial for achieving truly distributed
> infrastructure.

Yes, nice one, good that you mention that.

>> 0. distributed DNS (many are breaking their heads on it)
>> DNSSEC provides the base for it; however, there will probably remain
>> many implementations, ranging from hierarchical/centralized to fully
>> distributed, depending on the client's connectivity and system
>> properties.
>> * implemented as a device-type-specific
>> (permanent-infrastructure, stationary, mobile) daemon on each
>> network node, therefore also transparent to the client.
>
> I'm not sure if/how any of the distributed DNS implementations handle
> the need for DNS name demultiplexing, i.e. enabling gnu.org to point
> both to the GNU Project as well as to a gnu wildebeest protection
> program... bad example? Well, how about john-doe.name when there are
> multiple John Does?
>
> Another cool feature would allow one to automatically enroll as an
> example.com mirror by somehow 'registering' with local DNS servers.

Interesting thoughts, and at least two steps ahead of what I was
thinking. I imagined a centralized PKI for backward compatibility with
the existing DNS system, and storage/queries through a DHT wrapper
which does DNSSEC verification.
https://en.wikipedia.org/wiki/Domain_Name_System_Security_Extensions
(definitely worth the read!)

>> 1. (more) DHT
>> e.g. using an existing implementation like libbitdht
>> BitTorrent DHT is already widely adopted; namespaces can easily be
>> added.
>> I can imagine a use like NoSQL, given there is a JavaScript API
>> provided by the browser...
>> The DHT service could be extended with operations needed to run a
>> distributed PKI, see section 4.

I guess I should read up: https://wiki.freenetproject.org/X-Vine seems
related ;)

>> 2. distributed storage and streaming (to be implemented in VLC;
>> hopefully the Mozilla Foundation will realize the need...)
>> retrieval e.g. http://torrentstream.info
>> there are proposals to add libtorrent streaming to VLC --
>> why not to Firefox?
>
> This sounds like a bad idea - what we need is open standards, which
> would allow any OS/browser to support this 'torrent-ification' of
> data.

Precisely true. This is why I hope for the involvement of existing
organizations previously involved in Web standardization. I believe
that Mozilla could do a great deal here, if they want to.
Anyway, as previously seen with CSS3, there will always be an
experimental/evaluation phase before a new standard is made -- and
that's good, otherwise we'd end up with stuff which doesn't actually
work.

>> 3. WebSockets<->Tor (onion) local service on each client :)
>
> Not clear what #3 stands for...
>
> As for running Tor using WebSockets, while cool, it's still an
> awkward solution and ultimately something you don't want most people
> to be using. I wonder how "cool" a Tor node implemented as a Java
> web-applet would sound nowadays...

Probably this is not something everybody wants. However, if you
already run Tor, WebSocket might be a fancier API for Web applications
(compared to SOCKS). This could be useful for a bunch of things, such
as chatrooms or whatever real-time communication between users of a
site/service.

_______________________________________________
arig-discuss mailing list
arig-discuss@lists.subsignal.org
https://lists.subsignal.org/mailman/listinfo/arig-discuss
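P.S. To illustrate the WebSocket-vs-SOCKS point from #3: page
JavaScript can open a WebSocket to a local service, but it cannot speak
the raw SOCKS protocol that Tor's proxy port expects, so the
hypothetical local gateway would have to do the translation. The
gateway itself is assumed; the sketch below only builds the SOCKS5
CONNECT request (per RFC 1928) that such a gateway would emit toward
Tor on the page's behalf.

```javascript
// Build a SOCKS5 CONNECT request (RFC 1928) for a domain-name target.
// This is the framing a hypothetical local WebSocket<->SOCKS gateway
// would send to Tor's SOCKS port; the gateway is not implemented here.
function socks5Connect(host, port) {
  const name = Buffer.from(host, 'ascii');
  // VER=5, CMD=1 (CONNECT), RSV=0, ATYP=3 (domain name), name length
  const head = Buffer.from([0x05, 0x01, 0x00, 0x03, name.length]);
  const portBuf = Buffer.alloc(2);
  portBuf.writeUInt16BE(port); // port in network byte order
  return Buffer.concat([head, name, portBuf]);
}

// "example.onion" is a placeholder target, not a real hidden service.
const req = socks5Connect('example.onion', 80);
console.log(req[0], req[3]); // → 5 3 (SOCKS version, domain addr type)
```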