Hi,
I am trying to use spellcheck in Solr with the config below, but it throws an
error when I run a spellcheck build or reload.
It works fine otherwise for indexed search. Can someone please help me
implement spellcheck correctly?
schema.xml:
<!-- fieldType declaration -->
<fieldType name="textSpell" ...
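For reference, a typical spellcheck field type declaration looks roughly like the sketch below; the analyzer chain here is an assumption, not the original poster's config:

```xml
<!-- Sketch of a common textSpell declaration; the exact analyzer chain is an assumption -->
<fieldType name="textSpell" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```

The spellcheck dictionary field is then usually populated via copyField from the searchable fields.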
It's a null pointer exception. Either something is not defined
correctly or you are hitting an unexpected corner case.
Which version of Solr is it?
Regards,
Alex.
Personal website: http://www.outerthoughts.com/
Current project: http://www.solr-start.com/ - Accelerating your Solr proficiency
Hi,
Now we have a more informative error :
org.apache.solr.handler.dataimport.DataImportHandlerException:
java.lang.OutOfMemoryError: Java heap space
Caused by: org.apache.solr.handler.dataimport.DataImportHandlerException:
java.lang.OutOfMemoryError: Java heap space
at
It's Solr 4.6.0.
Hi All,
I want Solr to suggest the correct phrase if a typo is made while searching, and
then search for it using the eDismax parser (pf, pf2, pf3); if no typo is made,
then search using the eDismax parser alone.
Is there a way I can combine these two components? I have seen examples
for eDismax and also for
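In principle the two can be combined in solrconfig.xml by attaching the SpellCheckComponent to an edismax request handler as a last-component. A rough sketch, where the handler name, field names, and dictionary name are all assumptions:

```xml
<requestHandler name="/select" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="defType">edismax</str>
    <str name="qf">title description</str>
    <str name="pf">title description</str>
    <str name="pf2">title description</str>
    <str name="pf3">title description</str>
    <str name="spellcheck">true</str>
    <str name="spellcheck.collate">true</str>
    <str name="spellcheck.dictionary">default</str>
  </lst>
  <arr name="last-components">
    <str>spellcheck</str>
  </arr>
</requestHandler>
```

With spellcheck.collate=true, Solr returns a re-queryable corrected phrase alongside the edismax results.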
One of the Lucene guys is going to need to address this question.
I do know that Trie fields index additional values to support fast range
queries, so maybe you are merely seeing some of those generated values, and
if you look further you should see your actual indexed value. What exactly
are
There is one commercial solution
http://www.sematext.com/products/dym-researcher/index.html
On Saturday, April 5, 2014 4:07 PM, S.L simpleliving...@gmail.com wrote:
Hi All,
I want Solr to suggest the correct phrase if a typo is made while searching, and
then search for it using eDismax
You can use faceting to see the human-readable values.
On Saturday, April 5, 2014 7:08 PM, Jack Krupansky j...@basetechnology.com
wrote:
One of the Lucene guys is going to need to address this question.
I do know that Trie fields index additional values to support fast range
queries, so maybe you
Hi,
Did you restart Solr and re-index after the schema change?
On Saturday, April 5, 2014 2:39 AM, Vijay Kokatnur kokatnur.vi...@gmail.com
wrote:
I had already tested with omitTermFreqAndPositions=false. I still got the
same error.
Is there something that I am overlooking?
On Fri, Apr 4,
Shawn,
I suppose e yields a syntax error. Therefore, this case doesn't prove
anything yet.
Haven't you tried sqrt(-1) or log(-1) ?
On Sat, Apr 5, 2014 at 1:47 AM, Shawn Heisey s...@elyograg.org wrote:
On 4/4/2014 3:13 PM, Mikhail Khludnev wrote:
I suppose
Is the URL for the Solr request absolutely 100% identical in both cases?
By not getting a response, do you mean it hangs and times out, or that the
response is empty?
-- Jack Krupansky
-Original Message-
From: EXTERNAL Taminidi Ravi (ETI, Automotive-Service-Solutions)
Sent: Friday,
Set the q.op parameter to OR and set mm=10% or something like that. The idea is
not to restrict excessively which documents will match, but to weight the
matched results based on how many word pairs and triples do match.
In addition, use the pf parameter to provide extra weight when the full
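The suggestion above boils down to a parameter set along these lines (a sketch; the field names title/description are assumptions):

```
defType=edismax
q.op=OR
mm=10%
qf=title description
pf=title description
pf2=title description
pf3=title description
```

mm=10% keeps recall high, while pf/pf2/pf3 boost documents where the full phrase, word pairs, and word triples appear.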
The LucidWorks Search query parser lets you use the all pseudo-field to
search across all fields.
See:
http://docs.lucidworks.com/display/lweug/Field+Queries
For example:
q = all:some_word
-- Jack Krupansky
-Original Message-
From: Ahmet Arslan
Sent: Friday, April 4, 2014 8:13 AM
On 4/5/2014 1:21 PM, Mikhail Khludnev wrote:
I suppose e yields a syntax error. Therefore, this case doesn't prove
anything yet.
Haven't you tried sqrt(-1) or log(-1) ?
Using boost=sqrt(-1) is error-free whether I include the sort parameter
or not. That seems like a bug.
Thanks,
Shawn
Hi Dmitry;
I think that this kind of hacking may reduce the search speed. Shouldn't it
be done with a boundary scanner instead? I think bs.type=LINE is what I am
looking for. There is one more point: I want to do that for the
Turkish language, and I think I should customize it, or if
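For context, the LINE boundary scanner mentioned above is configured in solrconfig.xml roughly as follows (a sketch; the tr/TR locale values for Turkish are assumptions):

```xml
<boundaryScanner name="breakIterator" class="solr.highlight.BreakIteratorBoundaryScanner">
  <lst name="defaults">
    <str name="hl.bs.type">LINE</str>
    <str name="hl.bs.language">tr</str>
    <str name="hl.bs.country">TR</str>
  </lst>
</boundaryScanner>
```

It is then selected per request with hl.boundaryScanner=breakIterator.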
Yes, I saw that earlier in one of your other postings. Is it the case that we
cannot use the SpellChecker with a parser like edismax by making a
configuration change, without having to go through this commercial product?
Sent from my HTC
- Reply message -
From: Ahmet Arslan
One technique is to add a copyField directive to your schema, which can use
a wildcard to copy a bunch of fields to a single, combined field that you
can query directly, such as rullAll:key.
Or, consider using a multivalued field.
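A copyField sketch along those lines, where the wildcard pattern and the destination field name are assumptions:

```xml
<field name="allText" type="text_general" indexed="true" stored="false" multiValued="true"/>
<copyField source="*_txt" dest="allText"/>
```

Queries can then target the combined field directly, e.g. q=allText:key.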
-- Jack Krupansky
-Original Message-
From:
As we all know, maxDistErr=0.09 is approx 1 meter.
If I increase it to maxDistErr=0.9 then it would be 10 meters. Still
really good for most usages (finding a house, etc).
What would be the index size improvement on a million rows? And what would
the anticipated performance gain be? In
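For comparison, the stock Solr 4.x example schema declares the spatial field with maxDistErr in degrees, and 0.000009 degrees is the value it documents as roughly a meter (the field type name below is otherwise an assumption):

```xml
<fieldType name="location_rpt" class="solr.SpatialRecursivePrefixTreeFieldType"
           geo="true" distErrPct="0.025" maxDistErr="0.000009" units="degrees"/>
```

Loosening maxDistErr by an order of magnitude reduces the number of prefix-tree cells indexed per point, which is where the index-size saving would come from.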
Thoughts on getting together for breakfast? a little Solr meet up?
--
Bill Bell
billnb...@gmail.com
cell 720-256-8076
I'll be there. I'd love to meet up. Let me know!
Sent from my Windows Phone
From: William Bell
Sent: 4/5/2014 10:40 PM
To: solr-user@lucene.apache.org
Subject: Anyone going to ApacheCon in Denver next week?
Thoughts on getting together for breakfast? a little Solr meet up?
--
Bill Bell
Healthgrades is also hiring for a Linux/SOLR Admin. Ability to:
- Manage production and development SOLR machines using Debian Linux
- Knowledge of Jetty, Java 7
- 1+ years Solr experience
Downtown Denver, CO location.
Contact me or see me at ApacheCon... Or ghay...@healthgrades.com
--