- - - - - - - - - - - - - - - - - - - - - - - - - - - -
Name: gek
Subject: performance issue 2

I have a question about performance. Suppose I'm going to use 
cache-mode to index my data. I have to index a huge amount of data, say ~ 
k*10^9 URLs.
Can this really be done, or is it utopian?
Are there any intelligent criteria or advice for reducing the amount of indexed 
data?
What about performance in this case?
Would it be helpful to distribute the word indexes among several servers?
I need the search engine to handle at least 10 queries per second! Is that 
possible?
I also want to ask:
Since I have a very large amount of data to index (and to search through), is 
it possible to use SEARCHD to distribute the word indexes among servers, as 
described in the documentation? How? I think this is the only way to get 
good performance with VERY LARGE data.
Can somebody answer me?
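For what it's worth, the distributed setup the documentation hints at is usually done by pointing the search front-end at several searchd daemons, each serving part of the index. A minimal sketch of the front-end configuration, assuming the searchd:// DBAddr scheme from the docs (the hostnames, port, and exact directive syntax here are placeholders and should be checked against the manual for your version):

```
# search.htm on the front-end machine:
# each DBAddr points at a searchd instance holding a portion of the index;
# the front-end queries them and merges the results
DBAddr searchd://index1.example.com:7003/
DBAddr searchd://index2.example.com:7003/
DBAddr searchd://index3.example.com:7003/
```

With a layout like this, each searchd only has to scan its own slice of the word index, which is what makes very large collections tractable at all.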
- - - - - - - - - - - - - - - - - - - - - - - - - - - -

Read the full topic here:
http://www.dataparksearch.org/cgi-bin/simpleforum.cgi?fid=02;post=
