Hi,
I have some documents with the keyword egg, some with salad, and some
with egg salad.
When I search for egg salad, I expect to see egg results + salad results,
but I don't see them.
The egg and salad queries individually work fine.
I am using WhitespaceTokenizer.
Not sure if I am missing something.
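In Solr this usually comes down to the default operator (q.op, or the mm parameter with dismax/edismax). One way to sidestep it is to build an explicit OR query on the client; a minimal sketch, assuming a hypothetical field name:

```python
def or_query(field, text):
    """Build an explicit OR query so a multi-term search such as
    'egg salad' matches documents containing either term,
    regardless of the schema's default operator."""
    terms = text.split()
    return field + ":(" + " OR ".join(terms) + ")"

print(or_query("keywords", "egg salad"))  # keywords:(egg OR salad)
```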
Is coreNodeName exposed via the Collections API?
--
View this message in context:
http://lucene.472066.n3.nabble.com/custom-names-for-replicas-in-solrcloud-tp4086205p4086628.html
Sent from the Solr - User mailing list archive at Nabble.com.
Here is the keywords field for 3 docs:
Simply Asia products,Simply Asia,Sesame Chicken Egg Drop Soup,Soy Ginger
Shrimp and Noodle Salad,Sesame Teriyaki Noodle Bowl
Eggs,AllWhites,Better'n Eggs,Foods,AllWhites or Better'n Eggs
DOLE Salad Blend Salad Kit,Salad Kit,Salad,DOLE,produce
Here is my debug output. I am not issuing a phrase query, so I am not sure
why one shows up in the parsedquery.
<lst name="responseHeader">
  <int name="status">0</int>
  <int name="QTime">3</int>
  <lst name="params">
    <str name="debugQuery">true</str>
    <str name="indent">true</str>
    <str name="q">egg salad</str>
    <str name="_">1377569284170</str>
Hi,
I am using Solr 4.3 with 3 Solr hosts and an external ZooKeeper
ensemble of 3 servers, and just 1 shard currently.
When I create collections using the Collections API, it creates cores with
the names
collection1_shard1_replica1, collection1_shard1_replica2,
collection1_shard1_replica3.
I have the following situation when using Solr 4.3.
My documents contain entities, for example peanut butter. I have a list
of such entities. These are items that go together and are not to be treated
as two individual words. During indexing, I want Solr to recognize this and
treat peanut butter as a single token.
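Inside Solr this kind of thing is usually handled in the analysis chain (e.g. a synonym or shingle filter); as an alternative, the same idea can be sketched client-side by joining known entities before indexing. The entity list here is a made-up example:

```python
ENTITIES = ["peanut butter", "egg salad"]  # hypothetical entity list

def join_entities(text, entities=ENTITIES):
    """Rewrite each known multiword entity as one underscore-joined
    token so a whitespace tokenizer keeps it as a single term."""
    # Longest phrases first, so overlapping entities don't clash.
    for phrase in sorted(entities, key=len, reverse=True):
        text = text.replace(phrase, phrase.replace(" ", "_"))
    return text

print(join_entities("spread peanut butter on toast"))
# spread peanut_butter on toast
```

The same rewrite would have to be applied to queries so indexed and searched tokens agree.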
I have set up SolrCloud, and when I try to access documents I get this error:
<lst name="error"><str name="msg">no servers hosting shard: </str><int
name="code">503</int></lst>
However, if I add the shards=shard1 param, it works.
Hi,
I am trying to create a debian package for Solr 4.3 (default installation
with Jetty).
Is there anything already available?
Also, I need 3 different cores, so I plan to create corresponding packages
for each of them and create each Solr core using the admin/cores or
Collections API.
I also want to
I have 2 collections, let's say coll1 and coll2.
I configured solr.DirectSolrSpellChecker in coll1's solrconfig.xml and it
works fine.
Now, I want to configure coll2's solrconfig.xml to use the SAME spellcheck
dictionary index created above. (I do not want coll2 to prepare its own
dictionary index, but just
Hey,
Is there a way to do spellcheck and search (using suggestions returned from
spellcheck) in a single Solr request?
I am seeing that if my query is spelled correctly, I get results, but if
misspelled, I just get suggestions.
Any pointers will be very helpful.
Thanks,
-Manasi
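The common pattern is two requests coordinated by the client: query once, and if nothing matched, retry with the spellchecker's collated suggestion. A sketch of that flow, where `fake_search` is a stand-in for the real Solr call and the response shape is an assumption, not the exact wire format:

```python
def search_with_fallback(search, query):
    """Issue a query; if it finds nothing, retry once with the
    spellchecker's collated suggestion (if any)."""
    resp = search(query)
    if resp["response"]["numFound"] > 0:
        return resp
    collations = resp.get("spellcheck", {}).get("collations", [])
    if collations:
        return search(collations[-1])
    return resp

# Tiny fake backend, just to show the flow.
def fake_search(q):
    if q == "eg salat":
        return {"response": {"numFound": 0},
                "spellcheck": {"collations": ["collation", "egg salad"]}}
    return {"response": {"numFound": 3}}

print(search_with_fallback(fake_search, "eg salat")["response"]["numFound"])
# 3
```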
I am trying to use the file-based and index-based spellcheckers together and
am getting this exception: "All checkers need to use the same StringDistance."
They work fine individually, as expected, but not together.
Any pointers?
-Manasi
I am exploring the various spellcheckers in Solr and have a few questions:
1. Which algorithm is used for generating suggestions when using
IndexBasedSpellChecker? I know it's Levenshtein (with edit distance=2 by
default) in DirectSolrSpellChecker.
2. If I have 2 indices, can I set up multiple
I am running Solr 4.3 with Tomcat 7 (non-SolrCloud) and have 4 Solr
cores running.
To switch to SolrCloud with Tomcat 7 and embedded ZooKeeper, I
updated JAVA_OPTS in the file tomcat7/bin/setenv.sh to the following:
JAVA_OPTS=-Djava.awt.headless=true -Xms2048m -Xmx4096m
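For reference, a hypothetical extension of that setenv.sh line using the Solr 4.x SolrCloud system properties (host names, ports, and paths here are placeholders, not from the original message):

```shell
# -DzkRun starts the embedded ZooKeeper; -DzkHost lists the ensemble;
# bootstrap_confdir/collection.configName upload the config on first start.
JAVA_OPTS="-Djava.awt.headless=true -Xms2048m -Xmx4096m \
  -DzkRun \
  -DzkHost=host1:9983,host2:9983,host3:9983 \
  -Dbootstrap_confdir=/opt/solr/collection1/conf \
  -Dcollection.configName=myconf \
  -DnumShards=1"
```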
to do it for expansion
reasons (to add replicas later on), then each one will need to have a
distinct collection.configName parameter, so that ZK knows to keep the
configs separate.
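A hypothetical Collections API call illustrating that point (server address and names are placeholders):

```shell
# Each CREATE passes its own collection.configName so the
# configs stay separate in ZooKeeper.
curl "http://localhost:8983/solr/admin/collections?action=CREATE&name=coll1&numShards=1&replicationFactor=3&collection.configName=coll1conf"
```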
On 17 July 2013 07:44, smanad <smanad@> wrote:
I am running solr 4.3 with tomcat 7 (with non
I am using Solr 4.3 and have 2 collections, coll1 and coll2.
After searching in coll1, I get field1 values as a comma-separated list
of strings like val1, val2, val3, ... valN.
How can I use that list to match field2 in coll2, with the values separated
by OR clauses?
So I want to return all
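There is no general cross-collection join in Solr 4.3, so one common workaround is to build the second query on the client from the first response. A sketch (field names follow the message; the quoting is a defensive assumption in case values contain spaces):

```python
def cross_collection_query(field, csv_values):
    """Turn the comma-separated field1 value from coll1 into an OR
    query against field2 in coll2."""
    values = [v.strip() for v in csv_values.split(",") if v.strip()]
    return field + ":(" + " OR ".join('"%s"' % v for v in values) + ")"

print(cross_collection_query("field2", "val1, val2, val3"))
# field2:("val1" OR "val2" OR "val3")
```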
Hi,
We have a need where we would want normalized scores, ranging
between 0 and 1 rather than over a free range.
I read about it at http://wiki.apache.org/lucene-java/ScoresAsPercentages and
it seems like that's not something that is recommended.
However, is there still a way to set some config
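Since doing this inside the engine is discouraged (per the wiki page above), one option is a purely presentational min-max rescale on the client; a minimal sketch:

```python
def normalize_scores(docs):
    """Min-max normalize Solr 'score' values into [0, 1] on the
    client side. Only the displayed score changes; ranking is
    unaffected."""
    scores = [d["score"] for d in docs]
    lo, hi = min(scores), max(scores)
    span = (hi - lo) or 1.0  # all-equal scores would divide by zero
    return [dict(d, score=(d["score"] - lo) / span) for d in docs]

docs = [{"id": "a", "score": 2.0}, {"id": "b", "score": 0.5}]
print(normalize_scores(docs))
# [{'id': 'a', 'score': 1.0}, {'id': 'b', 'score': 0.0}]
```

Note this still has the drawback the wiki describes: the numbers are not comparable across different queries.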
Gr8! thanks a lot!
On Wed, Jun 19, 2013 at 8:27 PM, smanad <smanad@> wrote:
Hi,
Is there a way to edit solr.xml as part of a debian package installation to
add new cores?
In my use case, there are 4 solr
place to put the atomic update options in such a simple text format.
-- Jack Krupansky
-----Original Message-----
From: smanad
Sent: Wednesday, June 19, 2013 8:30 PM
To:
solr-user@.apache
Subject: Partial update using solr 4.3 with csv input
I was going through this link
http
Hi,
Is there a way to edit solr.xml as part of a debian package installation to
add new cores?
In my use case, there are 4 Solr indexes, and they are managed/configured by
different teams.
The way I am thinking the packages will work is described below:
1. There will be a solr-base debian package
I was going through this link,
http://solr.pl/en/2012/07/09/solr-4-0-partial-documents-update/, and one of
the comments is about support for CSV.
Since the comment is almost a year old, I am just wondering if it is still
true that partial updates are possible only with XML and JSON input?
Thanks,
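For reference, the JSON form of a partial update uses field-level modifiers such as "set"; a small sketch that builds such a payload (the document id and field name are made-up examples):

```python
import json

def atomic_update(doc_id, field, value):
    """Build a JSON atomic-update payload; the 'set' modifier
    replaces one field without resending the whole document."""
    return json.dumps([{"id": doc_id, field: {"set": value}}])

print(atomic_update("doc1", "price", 9.99))
# [{"id": "doc1", "price": {"set": 9.99}}]
```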
Hi,
I am thinking of using Solr to implement search on our site. Here is my use
case:
1. We will have multiple (4-5) indexes based on different data
types/structures, and data will be indexed into these by several processes:
cron, on demand, through message-queue applications, etc.
2. A single
Thanks for the reply, Michael.
In some cases the schema is similar, but not in all of them. So let's go with
the assumption that the schemas are NOT similar.
I am not quite sure what you mean by "you're probably stuck coordinating the
results externally." Do you mean searching in each index and then somehow
merge
Is this a limitation of Solr/Lucene? Should I be considering other options,
like Elasticsearch (which is also based on Lucene)?
But I am sure that searching in multiple indexes is a fairly common problem.
Also, I was reading this post
In my case, different teams will be updating indexes at different intervals,
so having separate cores gives more control. However, I can still
update (add/edit/delete) data with conditions like a check on doc type.
It's just that using shards sounds much cleaner and more readable.
However, I am not yet
Thanks for the reply.
Regarding the second question, that's actually what I am looking for.
My use case is: my DIH runs against 2 HTTP data sources, api1 and api2, with
different TTLs returned. I was thinking of saving this in a file, something
like:
url:api1, timestamp:100, expires: 60
url:api2,
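A line in that proposed format could be parsed like this (a sketch; the file format is the one proposed above, not anything DIH defines):

```python
def parse_ttl_line(line):
    """Parse one line of the proposed expiry file, e.g.
    'url:api1, timestamp:100, expires: 60' -> dict."""
    entry = {}
    for part in line.split(","):
        if ":" in part:
            key, value = part.split(":", 1)
            entry[key.strip()] = value.strip()
    return entry

print(parse_ttl_line("url:api1, timestamp:100, expires: 60"))
# {'url': 'api1', 'timestamp': '100', 'expires': '60'}
```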
Hi,
I am new to Solr and recently started exploring it for the search/sort needs
of our webapp.
I have a couple of questions, as below (I am using Solr 4.2.1 with the
default core named collection1):
1. We have a use case where we would like to index data every 10 mins (avg).
What's the best way to