Hi Furkan,
Actually, I have a set of candidate resumes and comments indicating whether
each resume was selected or rejected. Now I have to make the machine learn
from this history, so that the next time such a resume (or a similar one)
comes in, it can decide which bag it should go into: selected or rejected.
Thanks
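The selected/rejected idea above is a standard text-classification problem. (Lucene 4.x itself ships a classification module, org.apache.lucene.classification, with a SimpleNaiveBayesClassifier that works directly against an index.) As a minimal standalone sketch of the technique — not the poster's actual system — here is a multinomial Naive Bayes over whitespace tokens; all class and method names are illustrative:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Minimal multinomial Naive Bayes sketch for the "learn from
// selected/rejected history" idea. Illustrative only, not production code.
class ResumeClassifier {
    private final Map<String, Map<String, Integer>> wordCounts = new HashMap<>();
    private final Map<String, Integer> docCounts = new HashMap<>();
    private final Set<String> vocab = new HashSet<>();
    private int totalDocs = 0;

    // Record one labeled resume ("selected" or "rejected") in the counts.
    void train(String label, String text) {
        totalDocs++;
        docCounts.merge(label, 1, Integer::sum);
        Map<String, Integer> counts =
                wordCounts.computeIfAbsent(label, k -> new HashMap<>());
        for (String w : text.toLowerCase().split("\\s+")) {
            counts.merge(w, 1, Integer::sum);
            vocab.add(w);
        }
    }

    // Return the label with the highest smoothed log-probability.
    String classify(String text) {
        String best = null;
        double bestScore = Double.NEGATIVE_INFINITY;
        for (String label : docCounts.keySet()) {
            // log prior for this label
            double score = Math.log(docCounts.get(label) / (double) totalDocs);
            Map<String, Integer> counts = wordCounts.get(label);
            int totalWords = 0;
            for (int c : counts.values()) totalWords += c;
            for (String w : text.toLowerCase().split("\\s+")) {
                // Laplace-smoothed log likelihood of each token
                int c = counts.getOrDefault(w, 0);
                score += Math.log((c + 1.0) / (totalWords + vocab.size()));
            }
            if (score > bestScore) { bestScore = score; best = label; }
        }
        return best;
    }
}
```

Trained on even a handful of labeled resumes, `classify` will route new text toward whichever label's vocabulary it resembles most.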
Search is one very different thing; ML is another.
But I want to use the ML results to improve Solr's results, for example
using behavioral signals such as "bought more" or "viewed more".
--
View this message in context:
http://lucene.472066.n3.nabble.com/How-to-add-machine-learning-to-Apache-lucene-tp4135052p4138216.html
Sent from the Lucene - Java Users mailing list archive at Nabble.com.
For 1000+, use Solr; Lucene is faster.
--
View this message in context:
http://lucene.472066.n3.nabble.com/NewBie-To-Lucene-Perfect-configuration-on-a-64-bit-server-tp4136871p4138215.html
Sent from the Lucene - Java Users mailing list archive at Nabble.com.
--
You don't need to worry about the 1024 maxBooleanClauses limit; just use a
TermsFilter.
https://lucene.apache.org/core/4_8_0/queries/org/apache/lucene/queries/TermsFilter.html
I use it for a similar scenario, where we have a data structure that
determines a subset of 1.5 million documents from outside the index.
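A sketch of that pattern against the Lucene 4.8 API: restrict a search to an externally supplied set of IDs with TermsFilter, which builds a document-ID set directly and is not subject to the BooleanQuery clause limit. The field name "id" and the searcher setup are assumptions for illustration:

```java
import java.util.ArrayList;
import java.util.List;

import org.apache.lucene.index.Term;
import org.apache.lucene.queries.TermsFilter;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TopDocs;

// Illustrative sketch (Lucene 4.8): filter a query down to a subset of
// documents chosen outside the index, with no 1024-clause limit.
class TermsFilterExample {
    static TopDocs searchSubset(IndexSearcher searcher, Query query,
                                List<String> allowedIds) throws Exception {
        List<Term> terms = new ArrayList<>();
        for (String id : allowedIds) {
            terms.add(new Term("id", id)); // one term per allowed document
        }
        // The filter is applied alongside the query; only documents whose
        // "id" field matches one of the terms can appear in the results.
        return searcher.search(query, new TermsFilter(terms), 10);
    }
}
```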
bq: We don't want to search on the complete document store
Why not? Alexandre's comment is spot on. For 500 docs you could easily
form a filter query like
&fq=id:(id1 OR id2 OR id3) (Solr-style, but easily done in Lucene). You
get these IDs from the DB
search. This will still be MUCH faster than in
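Assembling that filter query from the IDs returned by the DB is mechanical; a sketch, where the field name "id" is an assumption:

```java
import java.util.List;

// Sketch of the suggestion above: turn the ~500 IDs fetched from the DB
// into a single Solr filter query string, fq=id:(id1 OR id2 OR ...).
// The field name "id" is an assumption.
class FilterQueryBuilder {
    static String buildFq(List<String> ids) {
        // One fq clause for the whole subset; Solr caches filter queries,
        // so repeated searches over the same 500 docs are cheap.
        return "id:(" + String.join(" OR ", ids) + ")";
    }
}
```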
On 26/05/2014 05:40, Shruthi wrote:
Hi All,
Thanks for the suggestions, but there is a slight difference in the
requirements.
1. We don't index/search 10 million documents for a keyword; instead we do it
on only 500 documents, because we are supposed to get the final result only
from the 500 set of documents.
2. We have already f