Hi,

If you are looking for something fast with a GUI, have a look at Recoll 
(it uses a Xapian index database). This is my preference, as it is actively 
maintained and comes with a decent manual (the Beagle project appears to 
have been abandoned).

If you want something portable with a GUI, have a look at DocFetcher (it 
uses Lucene).

http://alternativeto.net/software/recoll/

There are numerous server-side solutions; a few are listed here, and a rough 
sketch of that approach follows the link.

http://alternativeto.net/software/elasticsearch/
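If you do go the server-side route, here is a rough sketch of what indexing 
and querying with Elasticsearch could look like, using its REST API from 
Python. This is untested; the index name "articles", the field names, and 
the localhost:9200 address are placeholder assumptions, not anything your 
setup prescribes:

    import requests

    ES = "http://localhost:9200"  # assumed local Elasticsearch node

    # Index one document per article. The "articles" index and the field
    # names are placeholders. (On recent Elasticsearch versions the URL
    # would be /articles/_doc/1 instead of the older typed form below.)
    doc = {
        "year": 1947,
        "title": "Example article",
        "body": "... full article text ...",
    }
    requests.put(ES + "/articles/article/1", json=doc)

    # Full-text search: "match" runs an analyzed query against the body
    # field. Elasticsearch accepts _search requests via POST.
    query = {"query": {"match": {"body": "going away"}}, "size": 10}
    r = requests.post(ES + "/articles/_search", json=query)
    for hit in r.json()["hits"]["hits"]:
        print(hit["_source"]["title"], hit["_score"])

With 50,000 documents of a few hundred KB each, a Lucene-backed index like 
this should return results well under the 5-second budget mentioned below, 
and Recoll's Xapian index gives comparable speed on the desktop.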

regards

On Wednesday, July 30, 2014 10:47:54 AM UTC+2, Kathir J wrote:
>
> Hi,
>
> I have already posted this question at 
> http://stackoverflow.com/questions/25030572/50-000-articles-parsing-html-text-xml-faster-search-results
>
> This is not a double post; I am asking here to check whether TiddlyWiki 
> is feasible and fast enough for this.
>
> I have a huge collection of articles which could be converted into 
> TiddlyWiki pages.
>
> Currently the article counts per year look like this, for example:
>
> 1947 - 100
> 1948 - 200
> ...
> 2014 - ...
>
> Each page contains a large amount of text, for example 250-500 KB in size.
>
> I tried the Zim portable edition and added all the pages and sub-pages, 
> but the search was really slow to return results, even though it was 
> reading from a local directory and indexing was enabled.
>
> I need to search and display the results in less than 5 seconds.
>
> Example search keywords:
>
>    1. going
>    2. going away
>    3. passing over
>
> Questions:
>
> 1. How fast is TiddlyWiki's search?
> 2. How can I get results within the expected time frame?
>
> Thanks.
>
