On Mon, 27 Mar 2006, Lalo Martins wrote:

> And so says Peter Amstutz on 27/03/06 12:46...
>> The solution is to completely re-think how a remote site is identified,
>> and I will discuss this in my next email.
>
> Cryptographic key pairs?  (Which would then also open up the field to
> encrypted payloads when we deem them necessary?)
>
> So A can refer to B by whatever address; before C even tries to resolve
> the address, it will first check in its sitecache if there is any site
> with that same public key.
>
> And/or, if C "knows" it's in a privileged network (gigabit, behind
> firewall, etc), and the cache misses, it can try service discovery next,
> and only if that also fails, revert to DNS.

A lot of ideas here.

First, I'm going to coin the term "site id" for the public key fingerprint that would be used to identify a site.

A key fingerprint is a 64-bit identifier containing a hash code of the key along with a small portion of raw data from the key itself. A key fingerprint is extremely difficult to forge, because an attacker would need to a) find a hash collision that is also a valid crypto key, and b) ensure that the colliding key contained a certain four bytes at a specific position. The computation required to brute-force this is astronomical, so it is considered a valid way of identifying a public key. (I got the idea from PGP/GPG.)
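
To make that concrete, here is a minimal sketch in Python of how such a fingerprint could be computed. The hash function and byte offsets are just illustrative assumptions for the example, not anything VOS has pinned down:

    import hashlib

    def site_id(public_key: bytes) -> bytes:
        # 64-bit identifier: four bytes of a hash of the key, plus
        # four bytes of raw key data taken from a fixed (arbitrary,
        # for this example) position in the key.
        hash_part = hashlib.sha1(public_key).digest()[:4]
        key_part = public_key[16:20]
        return hash_part + key_part

An attacker would have to find a different valid key that both collides in the hash bytes and carries the same raw bytes at that position, which is the double constraint described above.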

The assumption so far is that in the normal mode of operation, an unrecognized site id will have come from some site you are already in contact with. So, you can query the remote host that sent you the id to get the contact information and the complete 1024-bit (or whatever) public key for that site.

Or, you could broadcast the query to all connected sites. What's really cool is that it's okay if you get back a whole stack of different possible ways to contact a given remote site. You sort the list based on some criteria, start from the top, and work your way down. When you connect to a site, you go through a public key handshake. Did it fail? Try the next entry on the list. Stale site connection data isn't a big deal.
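
Sketched in Python, that resolution loop might look like this. The peers tables and dial() function are stand-ins I've invented for the real VOS messaging; site_id() is the fingerprint helper sketched above:

    def resolve_site(target_id, peers, dial):
        # peers: iterable of tables mapping site id -> list of
        #        (host, priority) contact records from connected sites.
        # dial:  stand-in for connecting to a host and running the
        #        public key handshake; returns the public key the host
        #        proved ownership of, or None if unreachable.
        candidates = []
        for table in peers:
            candidates.extend(table.get(target_id, []))
        candidates.sort(key=lambda c: c[1])      # best guesses first
        for host, _priority in candidates:
            key = dial(host)
            if key is not None and site_id(key) == target_id:
                return host                      # handshake confirmed it
            # unreachable or wrong key: stale entry, try the next one
        return None

The handshake carries all the authentication, which is exactly why the candidate list itself never needs to be trusted.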

So there is a lot of potential for distributed mapping of the site id (key fingerprint) to an actual host.

=======================

> Funny, that kind of ties into one other thing I have been thinking.  It
> is quite possible to have a VOS-based "directory" service, comparable to
> DNS.  You could configure on your client machine the address of your
> preferred pointer site, just like you configure your DNS server.
>
> ...


I think you lost me here. Searching for site ids / key fingerprints works because they cannot be forged. Creating a namespace of arbitrary strings, however, seems a lot more complicated.

Let me describe what I think you are suggesting.

A system somewhat parallel to DNS: there would be a "root" or "hub" server whose purpose is to store a tree of links to remote vobjects. To resolve a name, you go to your well-known root server and follow the chain of parent-child links down to the object you are looking for.

Like DNS, you would probably delegate serving subtrees to other sites, so depending on where you start you might have to contact several servers to find the vobject you are looking for.
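
A sketch of that walk, with delegation handled the way DNS handles NS referrals. The DELEGATE marker and fetch_subtree() are invented for the example; they stand in for whatever referral message VOS would actually use:

    DELEGATE = "delegate"

    def resolve_name(root, path, fetch_subtree):
        # root: dict mapping child name -> child node; a
        # (DELEGATE, site) tuple means that subtree is served by
        # another site, so we fetch its root there and keep walking.
        node = root
        for component in path.split("/"):
            value = node[component]
            if isinstance(value, tuple) and value[0] == DELEGATE:
                node = fetch_subtree(value[1])
            else:
                node = value
        return node

    # Toy usage: the "worlds" subtree is delegated to another site.
    site_b = {"castle": {"entrance": "the entrance vobject"}}
    root = {"worlds": (DELEGATE, "site-b")}
    resolve_name(root, "worlds/castle/entrance", lambda site: site_b)
    # -> "the entrance vobject"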

Since you can travel both vertically and horizontally when traversing the vobject graph, as you said, there is some very interesting potential for strong semantic relations. Also, there isn't the same requirement for a strong central root server; rather, you can have a partitioned network of servers for different semantic topics.

Something I've also thought about is the potential for self-organizing search networks, where a site has a bunch of keywords, tags, search terms, and other metadata it wants to make known, so it goes out, finds similar sites or public indexes, and registers itself as being related.
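
As a toy sketch of the matching step (Jaccard overlap of tag sets is just one plausible similarity measure, not anything I've settled on):

    def related_sites(my_tags, candidates, threshold=0.3):
        # Score candidate sites by the overlap between their
        # advertised tags and ours; the site would then register
        # itself with everything scoring above the threshold.
        mine = set(my_tags)
        scored = []
        for site, tags in candidates.items():
            theirs = set(tags)
            overlap = len(mine & theirs) / len(mine | theirs)
            if overlap >= threshold:
                scored.append((site, overlap))
        return sorted(scored, key=lambda pair: -pair[1])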

Unfortunately, such ideas quickly run into the harsh reality that as soon as such a system gained any traction, it would likely be spammed to hell and back, rendering it useless. If you want to get complicated, you can have trust networks which help filter on data quality, or you need a policing system like Wikipedia's to constantly fight spam, but in any event experience has shown this to be an extraordinarily difficult problem.

[   Peter Amstutz   ][ [EMAIL PROTECTED] ][ [EMAIL PROTECTED]  ]
[Lead Programmer][Interreality Project][Virtual Reality for the Internet]
[ VOS: Next Generation Internet Communication][ http://interreality.org ]
[ http://interreality.org/~tetron ][ pgpkey:  pgpkeys.mit.edu  18C21DF7 ]

