Hi,
in general you can implement everything described in this paper using
Lucene - you don't need to customize Lucene for this; just use its official APIs
and tokenizers.
You have to build your own Analyzer that builds trigrams and does *not*
tokenize on whitespace and so on. From me it
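The key point is that the trigrams slide across the *whole* input, whitespace included (in Lucene this is what NGramTokenizer over the raw character stream does, as opposed to tokenizing on whitespace first). As a plain-Java sketch of the tokens such an analyzer would emit - the helper name here is made up for illustration, not a Lucene API:

```java
import java.util.ArrayList;
import java.util.List;

public class TrigramDemo {
    // Emits the character trigrams a whitespace-agnostic trigram
    // analyzer would produce: a 3-char window slides over the raw
    // (lowercased) input, so whitespace stays inside the grams.
    static List<String> trigrams(String text) {
        List<String> out = new ArrayList<>();
        String s = text.toLowerCase();
        for (int i = 0; i + 3 <= s.length(); i++) {
            out.add(s.substring(i, i + 3));
        }
        return out;
    }

    public static void main(String[] args) {
        // Note that "o b" and " be" cross the word boundary.
        System.out.println(trigrams("to be"));
    }
}
```

In Lucene itself you would wire this up inside `Analyzer.createComponents`, e.g. with `new NGramTokenizer(3, 3)` as the source.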
Hi Erick,
this was a question about Lucene, so "&debug=true" won't help. It is also
about *Lucene's faceting*, not Solr's.
Uwe
-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de
> -Original Message-
> From: Erick Erickson [mailto
True, but Erick's questions are still valid :-). We need more info to
answer these questions. So Simona, the more info you can give us, the better
we'll be able to answer.
On Fri, Feb 26, 2016, 10:54 Uwe Schindler wrote:
> Hi Erick,
>
> this was a question about Lucene so "&debug=true" won't help
Hi Simona,
In addition to Erick's questions:
Are you talking about *search* time or facet-collection time? And how many
results are in your result set?
I have some experience with collecting facets from large result sets; these
are typically slow (as they have to retrieve all the relevant facet
OK, I won't do any caching; Lucene will cache if applicable:
https://issues.apache.org/jira/browse/LUCENE-6855
However, I think the usage in the Javadoc (
https://lucene.apache.org/core/5_5_0/core/org/apache/lucene/search/LRUQueryCache.html)
should be fixed anyway.
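For readers unfamiliar with what LRUQueryCache does under the hood: it keeps recently used query results and evicts the least-recently-used entry once the cache is full. That eviction policy (not Lucene's actual implementation) can be sketched in plain Java with an access-ordered LinkedHashMap:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal sketch of the LRU eviction policy behind a query cache:
// keep at most maxSize entries, dropping the least-recently-accessed
// entry when the limit is exceeded.
class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxSize;

    LruCache(int maxSize) {
        super(16, 0.75f, true); // access-order: get() refreshes recency
        this.maxSize = maxSize;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxSize;
    }
}
```

With a capacity of 2, inserting a third entry evicts whichever of the first two was touched least recently.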
On Thu, Feb 25, 2016 at 7:07 PM