What release of Solr?
4.8.1.
Do you have autoGeneratePhraseQueries=true on the field?
No, the config I've provided is the exact one.
And when you said "But any of these does", did you mean "But NONE of these
does"?
Whoops, yes, fixed that.
From this page: http://wiki.apache.org/solr/SchemaXml
autoGeneratePhraseQueries=true|false (in schema version 1.4 and later
this now defaults to false)
Just checked: I have schema name="sunspot" version="1.0", so this may be true
by default?
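For reference, the attribute is set per field type in schema.xml; a sketch (the field type name and analyzer chain here are illustrative, not from the original config):

```xml
<!-- autoGeneratePhraseQueries is a per-fieldType attribute; with
     schema version="1.0" it effectively defaults to true, while with
     version 1.4 and later it defaults to false -->
<fieldType name="text_general" class="solr.TextField"
           autoGeneratePhraseQueries="false" positionIncrementGap="100">
  <analyzer>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```

So with an old schema version, the behavior can change silently just by bumping the version attribute.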
Hello,
Yes, with schema version 1.5 all those examples that didn't work do work
now. But results also include records that match by com, twitter, etc,
which is not desirable.
It seems we do need autoGeneratePhraseQueries=true but also need to ignore
blacklisted words. Is that somehow possible?
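One approach, sketched below, is to keep autoGeneratePhraseQueries="true" on the field type but strip the unwanted tokens with a stop filter. The field type name and the stopword file name are assumptions for illustration:

```xml
<!-- Sketch: phrase queries stay enabled, but blacklisted tokens
     (e.g. com, twitter) listed in url_blacklist.txt are dropped
     at both index and query time -->
<fieldType name="text_url" class="solr.TextField"
           autoGeneratePhraseQueries="true" positionIncrementGap="100">
  <analyzer>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.StopFilterFactory" words="url_blacklist.txt"
            ignoreCase="true"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```

Whether this gives the exact match semantics you want depends on where those tokens appear in the phrase, so it's worth checking the Analysis screen first.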
Hello,
I have Solr 4.9.0 and I'm getting the above error when I try to index a PDF
document via the Solr web interface.
Here are my schema and solrconfig. Am I missing something?
<?xml version="1.0" encoding="UTF-8" ?>
<schema name="simple" version="1.1">
<types>
On 8/19/2014 7:23 PM, S.L wrote:
I get No Live SolrServers available to handle this request error
intermittently while indexing in a SolrCloud cluster with 3 shards and
replication factor of 2.
I am using Solr 4.7.0.
Please see the stack trace below.
There's pretty much zero information to
Because mine is the 7th suggestion down the list, it is going to need more than
30 tries to figure out the one that can give some hits. You can increase
maxCollationTries if you're willing to endure the performance penalty of
trying so many replacement queries. This case actually highlights why
Solr 4.8.1
Correct value: Wardell F E B Dr
Just wondering if anyone can see an issue with my spellchecker settings. There
is no collation value and I'm hoping that someone can explain why.
<lst name="spellchecker">
<str name="classname">org.apache.solr.spelling.DirectSolrSpellChecker</str>
I'm working with business names, which are sometimes even people's names such as
"Wardell F E B Dr". I suspect I need to change my logic to not try to rely on
spellchecking so much, as you suggest.
Thanks.
Corey
-Original Message-
From: Dyer, James [mailto:james.d...@ingramcontent.com]
New to Solr and looking at an Endeca to Solr/hybris implementation. Is
there anything available about migrating existing rules from Endeca to
Solr/hybris? So far I haven't seen anything.
Thank you!
From: Erick Erickson erickerick...@gmail.com
To: solr-user@lucene.apache.org
Date:
I'm going to reply to my own question. After recalling a previous email from
James Dyer, I know the answer.
-Original Message-
From: Corey Gerhardt [mailto:corey.gerha...@directwest.com]
Sent: August-20-14 9:54 AM
To: Solr User List
Subject: Business Name Collation
Solr 4.8.1
You need to change the handler to /update/extract - the handler that accepts
“rich documents”, whereas /update only handles the types it mentions in the
error message.
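For reference, the extracting handler is the stock mapping in solrconfig.xml; a sketch (the defaults shown are illustrative):

```xml
<!-- Rich documents such as PDFs go to this handler, which runs them
     through Tika before indexing -->
<requestHandler name="/update/extract" startup="lazy"
                class="solr.extraction.ExtractingRequestHandler">
  <lst name="defaults">
    <!-- map Tika's extracted body into a schema field -->
    <str name="fmap.content">text</str>
    <str name="lowernames">true</str>
  </lst>
</requestHandler>
```

This requires the extraction contrib jars (Solr Cell / Tika) to be on the classpath via lib directives.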
Erik
On Aug 20, 2014, at 9:34 AM, Croci Francesco Luigi (ID SWS) fcr...@id.ethz.ch
wrote:
Hello,
I have solr
Greetings,
We are glad to announce immediate availability of YourKit Java Profiler 2014.
Download: http://www.yourkit.com/download/
Changes: http://www.yourkit.com/changes/
==
MOST NOTABLE CHANGES AND NEW FEATURES:
==
NEW
Here’s the repo:
https://github.com/whitepages/solrcloud_manager
Comments/Issues/Patches welcome.
On 8/18/14, 11:28 AM, Greg Solovyev g...@zimbra.com wrote:
Thanks Jeff, I'd be interested in taking a look at the code for this
tool. My github ID is grishick.
Thanks,
Greg
- Original
Hi Alex,
I guess a spatial tutorial might be helpful, but there isn’t one. There is
a sample at the Lucene-spatial layer but not up at Solr. You need to use
WKT syntax for lines and polygons, and you may do so as well for other
shapes. And in the schema use location_rpt copied from Solr’s
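A sketch of what that looks like in a 4.x schema, assuming the JTS jar is on the classpath (field names are illustrative):

```xml
<fieldType name="location_rpt" class="solr.SpatialRecursivePrefixTreeFieldType"
    spatialContextFactory="com.spatial4j.core.context.jts.JtsSpatialContextFactory"
    geo="true" distErrPct="0.025" maxDistErr="0.000009" units="degrees"/>
<field name="geo" type="location_rpt" indexed="true" stored="true"
       multiValued="true"/>
```

You would then index WKT strings as field values, e.g. POLYGON((30 10, 40 40, 20 40, 10 20, 30 10)).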
Hi all, I have a question about dynamically loading a core properties
file with the new core discovery method of defining cores. The concept
is that I can have a dev.properties file and a prod.properties file, and
specify which one to load with -Dsolr.env=dev. This way I can have one
file
Hmmm, I was going to make a code change to do this, but Chris
Hostetter saved me from the madness that ensues. Here's his comment on
the JIRA that I did open (but then closed), does this handle your
problem?
I don't think we want to make the name of core.properties be variable
... that way leads
Hello,
I've been working with Solr together with JTS and use location_rpt class for
the geometry field for a while now. (However, I must say that the index grew a
lot when I used this class instead of the geohash for simple points, so use it
only if you really need to index polylines and/or
Ok Great, I'm just going to dive in and see if I can index my data. Does
spatial reference matter?
Alex Bostic
GIS Developer
URS Corporation
12420 Milestone Center Drive, Suite 150
Germantown, MD 20876
direct line: 301-820-3287
cell line: 301-213-2639
-Original Message-
From: Pires,
Thanks Erick, that mirrors my thoughts exactly. If core.properties had
property expansion it would work for this, but I agree with not
supporting that for the complexities it introduces, and I'm not sure
it's the right way to solve it anyway. So, it doesn't really handle my
problem.
I
I added a JIRA issue here: https://issues.apache.org/jira/browse/SOLR-6399
On Thu, May 22, 2014 at 4:16 PM, Erick Erickson erickerick...@gmail.com
wrote:
Age out in this context is just implementing a LRU cache for open
cores. When the cache limit is exceeded, the oldest core is closed
OK, not quite sure if this would work, but
In each core.properties file, put in a line similar to what Chris suggested:
properties=${env}/custom.properties
You might be able to now define your sys var like
-Drelative_or_absolute_path_to_dev_custom.properties file.
or
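Concretely, the per-core file plus an environment override might look like this (file names and property values are assumptions for illustration):

```properties
# core.properties (identical across environments)
name=mycore
properties=${env}/custom.properties

# dev/custom.properties (selected via -Denv=dev)
solr.data.dir=/tmp/solr-dev-data
```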
Hello everybody,
I had a requirement to store complicated JSON documents in Solr.
I have modified the JsonLoader to accept complicated JSON documents with
arrays/objects as values.
It stores the object/array, then flattens it and indexes the fields.
E.g., a basic example document:
{
The core discovery process is dependent on the presence of a core.properties
file in the particular directory.
You can have a script which will traverse the directory structure of the core
base directory and, depending on env/host name, either restore
core.properties or rename it to a different file.
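A minimal sketch of such a script, assuming a convention of parking the file as core.properties.disabled (the function name and file suffix are my own invention):

```shell
# Sketch only: park/restore core.properties per environment so core
# discovery skips cores not meant for this env.
toggle_core() {
  core_dir="$1"
  env="$2"
  if [ "$env" = "prod" ]; then
    # restore a previously parked descriptor
    if [ -f "$core_dir/core.properties.disabled" ]; then
      mv "$core_dir/core.properties.disabled" "$core_dir/core.properties"
    fi
  else
    # hide the descriptor so discovery ignores this core
    if [ -f "$core_dir/core.properties" ]; then
      mv "$core_dir/core.properties" "$core_dir/core.properties.disabled"
    fi
  fi
}
```

You would call it per core directory before starting Solr, e.g. toggle_core /var/solr/core1 dev.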
Or you could use system properties to control that.
For example if you are using logback, then
JAVA_OPTS=$JAVA_OPTS
-Dlogback.configurationFile=$CATALINA_BASE/conf/logback.xml will do it
On 20 August 2014 03:15, Aman Tandon amantandon...@gmail.com wrote:
As you are using tomcat you can
The performance of wildcard queries, and especially leading-wildcard queries,
can be quite slow.
http://lucene.apache.org/core/4_9_0/core/org/apache/lucene/search/WildcardQuery.html
Also, you won't be able to time them out.
Take a look at ReversedWildcardFilter
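For reference, ReversedWildcardFilterFactory is applied at index time only, so leading wildcards can be matched against the reversed tokens; a sketch along the lines of the stock example schema:

```xml
<fieldType name="text_rev" class="solr.TextField" positionIncrementGap="100">
  <analyzer type="index">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <!-- also index each token reversed, so *foo becomes a fast
         prefix query on the reversed form -->
    <filter class="solr.ReversedWildcardFilterFactory" withOriginal="true"
            maxPosAsterisk="3" maxPosQuestion="2"
            maxFractionAsterisk="0.33"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```

The trade-off is a larger index, since tokens are stored both forwards and reversed.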
Grouping supports group by queries.
https://cwiki.apache.org/confluence/display/solr/Result+Grouping
However you will need to form the group queries before hand.
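For example, each group.query clause produces its own group in the response (field names and values here are illustrative):

```text
/select?q=*:*&group=true
       &group.query=category:electronics
       &group.query=(category:books AND inStock:true)
```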
On 18 August 2014 12:47, deniz denizdurmu...@gmail.com wrote:
is it possible to have multiple filters/criteria on grouping?
Field Collapsing has a limitation: currently it will not allow you to get a
different number of results from each group.
You can plug in a custom AnalyticsQuery, which can do exactly what you want
after seeing a matching document.