I have a dismax request handler with a default fq parameter.
<requestHandler name="dismax" class="solr.DisMaxRequestHandler">
  <lst name="defaults">
    <str name="echoParams">explicit</str>
    <float name="tie">0.01</float>
    <str name="qf">
      sku^9.0 upc^9.1 searchKeyword^1.9 series^2.8 productTitle^1.2 productID^9.0
Think I answered my own question... I need to use an appends list
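For the record, an appends section (a minimal sketch against the handler above; the fq value here is a placeholder, not from the original message) makes Solr add the filter to every request instead of letting a client-supplied fq replace it:

```xml
<requestHandler name="dismax" class="solr.DisMaxRequestHandler">
  <lst name="appends">
    <!-- appended to every request; a client fq adds to this, it does not override it -->
    <str name="fq">someField:someValue</str>
  </lst>
</requestHandler>
```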
--
View this message in context:
http://lucene.472066.n3.nabble.com/default-fq-in-dismax-request-handler-being-overridden-tp3768735p3768817.html
Sent from the Solr - User mailing list archive at Nabble.com.
Adding autoGeneratePhraseQueries="true" to my field definitions has solved
the problem.
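For anyone hitting the same thing: the attribute belongs on the fieldType in schema.xml. A minimal sketch (the type name and analyzer chain are illustrative, not from the original message):

```xml
<fieldType name="text_general" class="solr.TextField"
           autoGeneratePhraseQueries="true" positionIncrementGap="100">
  <analyzer>
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```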
--
http://lucene.472066.n3.nabble.com/Solr-dismax-not-returning-expected-results-tp3891346p3891594.html
I am trying to import data from my database, but I have dynamic fields whose
names I don't always know in advance. Can someone tell me why something like
this doesn't work?
<entity name="dynamicfield" processor="CachedSqlEntityProcessor"
        dataSource="database" query="select optionname, datatype, optionvalue FROM
I have created a custom transformer for dynamic fields, but it doesn't seem to
be working correctly, and I'm not sure how to debug it against a live running
Solr instance.
Here is my transformer:
package org.build.com.solr;
import org.apache.solr.handler.dataimport.Context;
import
Also, here is my schema:
<dynamicField name="*_string" type="facetstring" indexed="true"
    stored="false" multiValued="true"/>
<dynamicField name="*_numeric" type="tfloat" indexed="true"
    stored="false" multiValued="true"/>
<dynamicField name="*_boolean" type="boolean" indexed="true"
    stored="false"/>
Thank you for your input. With your help I was able to solve my problem.
Although I could find no good example online of how to handle multi-valued
fields with a custom transformer, your comments helped me to find a solution.
Here is the code that handles both multi-valued and single-valued fields.
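The code itself did not survive in the archive. As a rough sketch of the pattern (all names are illustrative; a real DIH transformer would extend org.apache.solr.handler.dataimport.Transformer and receive the row map from the entity processor), the multi-valued handling boils down to accumulating values into a List per generated field name, while single-valued fields are stored directly:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class DynamicFieldRowTransformer {
    // Maps a raw (optionname, datatype, optionvalue) row onto a dynamic field.
    // The field name follows the *_string / *_numeric / *_boolean dynamicField
    // patterns from the schema above. Multi-valued types accumulate values in
    // a List; the single-valued boolean type is stored directly.
    @SuppressWarnings("unchecked")
    public static Map<String, Object> transformRow(Map<String, Object> row) {
        String name = (String) row.get("optionname");
        String type = (String) row.get("datatype");   // "string", "numeric", or "boolean"
        Object value = row.get("optionvalue");
        String field = name + "_" + type;

        if ("boolean".equals(type)) {
            row.put(field, value);                    // single-valued field
        } else {
            List<Object> values = (List<Object>) row.get(field);
            if (values == null) {                     // first value for this field
                values = new ArrayList<>();
                row.put(field, values);
            }
            values.add(value);                        // multi-valued: append
        }
        return row;
    }
}
```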
I noticed that Magento is using the overwritePending commit directive, but I
can't find any documentation on it. Does the overwritePending directive
purge any docs added since the last commit? Any help would be appreciated.
Yes, I confirmed in the logs. I have also committed manually several times
using the update handler: /update?commit=true
--
http://lucene.472066.n3.nabble.com/Problems-with-documents-that-are-added-not-showing-up-in-index-Solr-3-5-tp4043539p4043716.html
Before I optimize (build my spellchecker index), my Solr instance running in
Tomcat uses about 2 GB of memory. As soon as I optimize, it jumps to about 5 GB:
http://d.pr/i/oUQI
That just doesn't seem right. Here is my configuration:
http://pastebin.com/6Cg7F0dK
Is there anything wrong with it?
When I dump
I'm having trouble with SolrJ generating a query like q=kohler%5C+k for the
search term "Kohler k".
I am using Solr 4.3 in cloud mode. When I remove the %5C, everything is fine.
I'm not sure why the %5C is being added when I call
solrQuery.setQuery("Kohler k");
Any help is appreciated.
solrQuery.setQuery(ClientUtils.escapeQueryChars(keyword));
It looks like the SolrJ ClientUtils.escapeQueryChars function is escaping any
spaces as %5C+ (a backslash-escaped space, once URL-encoded), which returns 0
results at search time.
I recently moved an index from 3.6 non-distributed to Solr Cloud 4.4 with
three shards. My company uses a boosting function with a value assigned to
each document. This boosting function no longer works dependably and I
believe the cause is that IDF is not distributed.
This seems like it should
I am indexing documents using the domain!id format, e.g. id = k-690kohler!670614.
This ensures that all k-690kohler documents are indexed to the same shard.
This does cause numDocs that are not perfectly distributed across shards,
probably even worse than with the default sharding algorithm.
Here is the
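For reference, the id format described above is SolrCloud's compositeId routing: everything before the "!" is the shard key, and all documents sharing a shard key hash to the same shard. A trivial helper for building such ids (illustrative only; the names are not from the original message):

```java
public class CompositeIdBuilder {
    // Builds a SolrCloud compositeId. The shard key before '!' determines
    // which shard the document hashes to, so all documents that share a
    // shard key are co-located on one shard.
    public static String compositeId(String shardKey, String docId) {
        return shardKey + "!" + docId;
    }
}
```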
I am running a simple query in a non-distributed search using grouping. I am
getting incorrect facet field counts, and I cannot figure out why.
Here is the query; you will notice that the facet field and facet query
counts are not the same. The facet query counts are correct. Any help is
appreciated. Here is my query string:
/solr/singleproductindex/productQuery?fq=siteid:82&q=categories_82_is:109124&facet=true&facet.query=HeatingArea_numeric:[0%20TO%20*]&facet.field=HeatingArea_numeric&debugQuery=true
Here is my schema for that field:
<dynamicField name="*_numeric" type="tfloat" indexed="true"
If I do group=false&group.facet=false, the counts are what they should be for
the ungrouped counts... it seems like group.facet isn't working correctly.
--
http://lucene.472066.n3.nabble.com/Solr-facet-field-counts-not-correct-tp4097305p4097314.html
Hoss created: https://issues.apache.org/jira/browse/SOLR-5383
--
http://lucene.472066.n3.nabble.com/Solr-facet-field-counts-not-correct-tp4097305p4097346.html
I am upgrading from 4.4 to 4.5.1.
I used to just upload my configurations to ZooKeeper and then install Solr
with no default core. Solr would give me an error that no cores were created
when I tried to access it, until I ran the Collections API CREATE command to
make a collection. However, now when I
Here is an example URL that gives the error:
I ran into an error with the CollapsingQParserPlugin when trying to use it in
tandem with tagging. I get the following error whenever I use {!tag} in the
same request as {!collapse field=groupid}:
Oct 31, 2013 6:43:56 PM org.apache.tomcat.util.http.Cookies processCookieHeader
INFO: Cookies:
:5");
params.add("facet", "true");
params.add("facet.field", "{!ex=test_ti}test_ti");
assertQ(req(params), "*[count(//doc)=1]",
    "//doc[./int[@name='test_ti']='5']");
On Thu, Oct 31, 2013 at 6:46 PM, dboychuck [via Lucene]
ml-node+s472066n4098710...@n3.nabble.com wrote:
Here is an example URL that gives
I've created the following tracker for the issue:
https://issues.apache.org/jira/browse/SOLR-5416
--
http://lucene.472066.n3.nabble.com/Error-with-CollapsingQParserPlugin-when-trying-to-use-tagging-tp4098709p4098862.html
I'm having the same issue with SolrJ 4.5.1.
If I use the escapeQueryChars() function on a string like "a b c", it escapes
it to "a\+b\+c", which returns 0 results using the edismax query parser.
However, "a b c" returns results.
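A common workaround for the escaped-space problem (sketched here with a hand-rolled escaper standing in for SolrJ's ClientUtils.escapeQueryChars, so the snippet stays self-contained; the class and method names are illustrative) is to escape each whitespace-separated term individually and rejoin them, so the spaces between terms stay unescaped and the query parser still sees separate terms:

```java
import java.util.Arrays;
import java.util.stream.Collectors;

public class QueryEscaper {
    // Minimal stand-in for ClientUtils.escapeQueryChars: backslash-escapes
    // Lucene special characters, including whitespace.
    static String escapeQueryChars(String s) {
        StringBuilder sb = new StringBuilder();
        for (char c : s.toCharArray()) {
            if ("\\+-!():^[]\"{}~*?|&;/ ".indexOf(c) >= 0) {
                sb.append('\\');
            }
            sb.append(c);
        }
        return sb.toString();
    }

    // Escape each term on its own, then rejoin with plain spaces so the
    // whitespace separating terms is never escaped.
    public static String escapePerTerm(String query) {
        return Arrays.stream(query.trim().split("\\s+"))
                .map(QueryEscaper::escapeQueryChars)
                .collect(Collectors.joining(" "));
    }
}
```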
Thanks Shawn! That makes sense now. I appreciate the response.
--
http://lucene.472066.n3.nabble.com/solr-4-3-solrj-generating-search-terms-that-return-no-results-tp4077137p4099615.html
Where did you add that directive? I am having the same problem.
--
http://lucene.472066.n3.nabble.com/Error-when-creating-collection-in-Solr-4-6-tp4103536p4106248.html
I am running a data import, and it is throwing all kinds of errors. I am
upgrading to 4.6 from 4.5.1 with the exact same schema, solrconfig, and DIH
configs.
Here is the error I am getting:
org.apache.solr.common.SolrException: ERROR: [doc=k-690kohler!670614] Error
adding field
Here is the output from the logs of the server running the import:
598413 [updateExecutor-1-thread-62] ERROR
org.apache.solr.update.StreamingSolrServers – error
org.apache.solr.common.SolrException: Bad Request
request:
And here are the logs of one of the replicas:
2286617 [Thread-146] WARN org.apache.solr.cloud.RecoveryStrategy –
Stopping recovery for zkNodeName=core_node2 core=productindex
2286627 [Thread-147] WARN org.apache.solr.cloud.RecoveryStrategy –
Stopping recovery for
I have created Jira issue here:
https://issues.apache.org/jira/browse/SOLR-5551
--
http://lucene.472066.n3.nabble.com/Solr-Cloud-error-with-shard-update-tp4106260p4106448.html
https://issues.apache.org/jira/browse/SOLR-5773
I am having trouble with CollapsingQParserPlugin showing duplicate groups when
the search results contain a member of a grouped document, but another member
of that group is defined in the elevate component. I have described the
issue in
are in the result set. What you are suggesting is that the elevated
document becomes the group head. We can discuss the best way to handle
this
on the new ticket.
Joel
Joel Bernstein
Search Engineer at Heliosearch
On Tue, Feb 25, 2014 at 1:29 PM, dboychuck [hidden email] wrote:
The documentation is very unclear (at least to me) around the Query Elevation
Component and filter queries (the fq parameter).
The documentation for Solr 4.9 states:
The fq Parameter
Query elevation respects the standard filter query (fq) parameter. That is,
if the query contains the fq parameter, all
I'm trying to index certain data from a table, along with documents located
on disk, using JDBC and Tika. I can derive the file locations from the table,
and using that data I want to also import the documents into Solr. However,
I'm having trouble with my configuration.
<dataConfig>
  <dataSource
Got it working with the updated config:
<dataConfig>
  <dataSource type="JdbcDataSource"
              name="db"
              jndiName="java:comp/env/jdbc/BuildDB"
  />
  <dataSource name="bin" type="BinFileDataSource" />
  <document>
    <entity
        name="productDocument"
        onError="skip"
        dataSource="db"
        query="SELECT
I am trying to figure out how to give weights to my suggestions, but I can
find no documentation on how to do this correctly.
Here is my configuration (solrconfig.xml):
<searchComponent name="suggest" class="solr.SuggestComponent">
  <lst name="suggester">
    <str name="name">mySuggester</str>
    <str name="dictionaryImpl">DocumentDictionaryFactory</str>
    <str name="lookupImpl">FuzzyLookupFactory</str>
    <str name="weightField">popularity</str>
    <str name="field">textSuggest</str>
  </lst>
</searchComponent>
OK, let me explain what I am trying to do first, since there may be a better
approach. Recently I have been trying to increase Solr's matching precision
by requiring that all of the words in a field match before allowing a match
on that field. I am using edismax as my query parser, and since it tokenizes
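One common knob for this kind of all-terms-must-match requirement (offered here as a sketch, not necessarily what the poster ultimately used; the handler name is illustrative) is edismax's mm (minimum-should-match) parameter:

```xml
<requestHandler name="/select" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="defType">edismax</str>
    <!-- require 100% of the query terms to match -->
    <str name="mm">100%</str>
  </lst>
</requestHandler>
```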