On Friday 15 August 2003 14:03, Newsbite wrote:
> I don't know why some people claim it's not possible to create a true
> search engine; I've already got word that someone is working on something with
> Frost. And though I doubt it's going to be what amounts to a 'normal'
> search engine, I do remember reading a paper here about a Scalable Anonymous
> Freenet Search Engine. It was in PDF format, I believe. Maybe Ian can shed
> some light on this or paste the link/paper again.

I read that paper. Unfortunately, IIRC, the proposed architecture is 
fundamentally at odds with the primary goals of the Freenet design.

> so at least in theory, it's quite possible to create a search engine for
> non-centralised networks.

Almost certainly. But it is not an easy problem to solve.

> Besides, though some arguments are valid, I think some are too pessimistic.
> If one takes the index entries as a group of keywords and/or a short
> description plus the link (and thus not a duplication of the content), then
> every single entry in the database would stay under 5 kB. So, thousands
> could be placed in the database before the JS got too heavy for practical
> use.

1) An index that is not built from the full content of a page is nowhere 
near as useful as an index of all the terms on a page.

2) You seem to vastly overestimate the capabilities of JavaScript. Remember 
that this language was originally designed to deal with things like roll-over 
graphics. Now you are talking about creating an entire client-end application 
in it. While it may be possible, there are issues that may be difficult to 
overcome...
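To make the trade-off concrete: the kind of keyword-only index Newsbite describes could be sketched in JavaScript roughly as below. All entry data and function names here are hypothetical illustrations, not anything from the paper or from Frost; the point is that such an index can only ever match terms its author chose to include.

```javascript
// Hypothetical client-side index: each entry carries only keywords,
// a short description, and a key (link) -- not the page content itself,
// which is why each entry can stay small.
const index = [
  { key: "CHK@abc...", description: "Freenet FAQ",
    keywords: ["freenet", "faq", "help"] },
  { key: "CHK@def...", description: "Frost user guide",
    keywords: ["frost", "guide", "messaging"] },
];

// Naive linear scan over keywords and descriptions. Workable for a few
// thousand entries, but any term the index author omitted is unfindable,
// which is the weakness of not indexing full page content.
function search(query) {
  const q = query.toLowerCase();
  return index.filter(entry =>
    entry.keywords.some(k => k.includes(q)) ||
    entry.description.toLowerCase().includes(q)
  );
}
```

For example, `search("frost")` would find the second entry, but a query for any word that appears only in the page body and not in the hand-picked keywords would return nothing.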

Gordan
_______________________________________________
devl mailing list
[EMAIL PROTECTED]
http://hawk.freenetproject.org:8080/cgi-bin/mailman/listinfo/devl