Julian,

I'm not sure I understand your proposal, but I do think what Google
does is neither trivial nor straightforward, nor easy to automate. I
remember reading an article about Google's ranking strategy. IIRC,
they use the patterns of mutual linking between websites. So far, so
good. But then, when Google became popular, some companies started to
build link farms to make themselves look more important to Google.
When Google finds out about this behavior, it kicks the offending
company to the bottom of the index. I'm sure they have many secret
automated schemes to do this kind of thing, but it's essentially an
arms race, and it takes constant human attention. Local search is
much less problematic, but you can still end up with a huge pile of
unstructured data, or a huge bowl of linked spaghetti mess, so it may
well make sense to ask a third party for help sorting it out.
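
Just to make that linking idea concrete, here is a toy sketch of
link-based ranking (a PageRank-style power iteration in Python; the
damping factor and the little web graph are made up for illustration,
and it is of course nowhere near what Google actually runs):

# Toy link-based ranking (PageRank-style power iteration).
# The damping factor and the tiny web graph are illustrative only.

def rank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    scores = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new = {p: (1.0 - damping) / n for p in pages}
        for page, outs in links.items():
            for target in outs:
                new[target] += damping * scores[page] / len(outs)
        scores = new
    return scores

# "B" and "C" form a tiny link farm, linking only to each other;
# notice how they soak up rank from the rest of the graph.
web = {"A": ["B"], "B": ["C"], "C": ["B"], "D": ["A", "B"]}
for page, score in sorted(rank(web).items(), key=lambda kv: -kv[1]):
    print(page, round(score, 3))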

I don't think there's anything architecturally centralized about using
Google as a search engine; it's just a matter of popularity. You also
have Bing, DuckDuckGo, whatever.

On the other hand, data storage and bandwidth are very centralized.
Dropbox, Google Docs, and iCloud are all symptoms of the fact that PC
operating systems were designed for local storage. I've been looking
at possible alternatives. There are distributed fault-tolerant network
filesystems like XtreemFS (and even the Linux-based XtreemOS), or
Tahoe-LAFS (with object-capabilities!), or maybe a more P2P approach
such as Tribler (a tracker-free BitTorrent client), and for shared
bandwidth apparently there is BitTorrent Live (P2P streaming). But I
don't know how to put all that together into a usable computing
experience. For instance, Squeak is a single-file image, so I guess it
can't benefit from file-based capabilities, unless the objects were
mapped to files in some way (rough sketch of what I mean below). Oh
well, this is for another thread.
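
To make the object-to-file idea concrete, here is a purely
hypothetical sketch against the Tahoe-LAFS web gateway (which, IIRC,
listens on port 3456 by default): serialize each object into its own
immutable Tahoe file, and let the capability string the gateway hands
back serve as the object's reference.

# Hypothetical sketch: one Tahoe-LAFS file per object, so each object
# gets its own capability string. Assumes a local Tahoe-LAFS web
# gateway running on the default port 3456.
import pickle
import urllib.request

GATEWAY = "http://127.0.0.1:3456"

def store_object(obj):
    # PUT /uri uploads an immutable file; the response body is its
    # read-capability (something like "URI:CHK:...").
    req = urllib.request.Request(GATEWAY + "/uri",
                                 data=pickle.dumps(obj), method="PUT")
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode("ascii")

def load_object(cap):
    # GET /uri/<cap> fetches the object back by capability alone.
    with urllib.request.urlopen(GATEWAY + "/uri/" + cap) as resp:
        return pickle.loads(resp.read())

# Whoever holds the cap can read the object; nobody else can.
cap = store_object({"balance": 42})
print(load_object(cap))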


Best,

Martin

On Fri, Mar 2, 2012 at 6:54 AM, Julian Leviston <jul...@leviston.net> wrote:
> Right you are. Centralised search seems a bit silly to me.
>
> Take object orientedism and apply it to search and you get a thing where
> each node searches itself when asked...  apply this to a local-focussed
> topology (i.e. spider web search out) and utilise intelligent caching (so
> search the localised caches first) and you get a better thing, no?
>
> Why not do it like that? Or am I limited in my thinking about this?
>
> Julian
>
> On 02/03/2012, at 4:26 AM, David Barbour wrote:
>
