Hello again,

I have been reading the discussion about searching Freenet, and I think a different approach may be possible (bear with me; it may turn out to be impossible or extremely difficult to implement).  What keeps Freenet out of the mainstream file-swapping/publishing genre, I think, is the inability to conduct Google-style searches.

Since we are talking about a distributed network, it doesn't really make sense to me to have spiders in the typical sense, creating large central index files.  That seems unworkable, especially if you try to scale it up.

What about this idea:

- When individuals browse through Freenet, everything they see passes through their node, as does everything they relay.

- We add a spider function to the node, so that it builds a local index of the keywords, rankings, etc. of everything that passes through it, mapped to the matching keys.

- We then add a new protocol message that searches the local index for keywords.
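To make the idea concrete, here is a minimal sketch of what such a node-local index might look like.  The names (`LocalIndex`, `index_content`, `search`) and the use of SHA-256 as the shared keyword hash are my own assumptions for illustration, not anything in the existing Freenet code:

```python
import hashlib
from collections import defaultdict

def hash_keyword(word):
    # Assumed global hash function: every node hashes keywords the same
    # way, so the on-disk index never stores plaintext search terms.
    return hashlib.sha256(word.lower().encode("utf-8")).hexdigest()

class LocalIndex:
    """Per-node index of content that has passed through this node,
    whether viewed locally or merely relayed."""

    def __init__(self):
        # hashed keyword -> set of Freenet keys whose content contained it
        self.entries = defaultdict(set)

    def index_content(self, key, text):
        # The hypothetical spider hook: called for everything the node
        # sees or relays, mapping each keyword hash to the content's key.
        for word in text.split():
            self.entries[hash_keyword(word)].add(key)

    def search(self, hashed_keywords):
        # Handle an incoming search message: intersect the key sets for
        # every hashed keyword the message carries.
        results = None
        for h in hashed_keywords:
            matches = self.entries.get(h, set())
            results = matches if results is None else results & matches
        return results if results is not None else set()
```

A real implementation would also store rankings and expire old entries, but the shape is the same: content keys in, hashed keywords out.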

This new message type would pass through the network much like current messages, preserving anonymity.  And since the index covers everything viewed locally as well as everything relayed, the node operator still maintains plausible deniability.

The one problem I see with this idea is that the index could not simply be plaintext, as that would leave traces of what the local user was viewing.  To combat that, we could use a global hash function that hashes each search keyword individually before the search message is ever sent.  This would protect the contents of the search, though it would still be vulnerable to dictionary attacks against the hashes.
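Both halves of that point can be sketched in a few lines.  Assuming the same shared SHA-256 keyword hash as above (my assumption, not an existing mechanism), the sender hashes its query before transmitting it, and an observer mounts the dictionary attack by precomputing hashes of likely terms:

```python
import hashlib

def hash_keyword(word):
    # The assumed global hash function shared by all nodes.
    return hashlib.sha256(word.lower().encode("utf-8")).hexdigest()

# The sender hashes each keyword individually, so the search message
# itself carries no plaintext terms.
query = [hash_keyword(w) for w in ["freenet", "search"]]

# But anyone can run the same global function over a dictionary of
# likely terms and match the results against observed queries.
dictionary = ["music", "freenet", "video", "search"]
precomputed = {hash_keyword(w): w for w in dictionary}
recovered = [precomputed.get(h) for h in query]
# recovered is ["freenet", "search"]: the query terms are exposed.
```

So the hashing hides casual traces on disk and on the wire, but it is no stronger against a determined observer than the guessability of the search terms themselves.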

Let me know whether this idea is completely far-fetched.  I'm just brainstorming aloud here, but I doubt that searching can really succeed on a large scale without being an integral part of the protocol.

Jeremy
