Hi,
We have an optional status text field in our Solr documents.
The search query status:!Closed is returning documents with no status as well.
How do we get only documents that have a status and where it is not Closed?
One way is status:* AND status:!Closed. Is there any other way? Thanks
Regards,
Anil
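For reference, the usual ways to express "field is present and not a given value" with the standard/Lucene query parser are:

```text
status:* AND -status:Closed
+status:[* TO *] -status:Closed
```

Both forms require some status value to be present and exclude documents whose status is Closed; the field and value names here are taken from the question above.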
I want to migrate my SolrCloud from Windows to CentOS. Because I am new to
CentOS, I am not familiar with how to install Solr on it, and since I did a lot
of configuration in my SolrCloud on Windows, I used FTP to upload the solr-5.4.1
and zookeeper-3.4.6 folders to 3 different servers running CentOS. (They are
In my lab, under a Windows PC:
2 SolrCloud nodes, 1 collection named cugna, with numShards=1 and
replicationFactor=2, with the index grown to 90GB.
After it worked, I migrated them to CentOS (1 node per machine), but I want to
add a 3rd node on a 3rd machine. I think there's only 1 shard and
You did mention that you won't need to sort and that you are using
multiValued=true. On the off chance you do change something like
multiValued=false, docValues=false, then this will come into play:
https://issues.apache.org/jira/browse/SOLR-7495
This has been a rather large pain to deal with.
There's nothing saying you have
to highlight fields you search on. So you
can specify hl.fl to be the "normal" (perhaps
stored-only) fields and still search on the
uber-field.
Best,
Erick
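As a concrete sketch of what Erick describes (the field names here are made up for illustration):

```text
q=uber_field:(solr cloud)&hl=true&hl.fl=title,summary
```

That is, search against the catch-all indexed field, but ask highlighting (hl.fl) for only the "normal" stored fields.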
On Thu, May 26, 2016 at 2:08 PM, kostali hassan
wrote:
> I did it , I copied all
Should be fine. When the location field is
re-indexed (as it is with Atomic Updates)
the two fields will be filled back in.
Best,
Erick
On Thu, May 26, 2016 at 4:45 PM, Zheng Lin Edwin Yeo
wrote:
> Thanks Erick for your reply.
>
> It works when I remove the 'stored="true"
Thanks Erick for your reply.
It works when I remove the 'stored="true" ' from the gps_0_coordinate and
gps_1_coordinate.
But will this affect the search functions of the gps coordinates in the
future?
Yes, I am referring to Atomic Updates.
Regards,
Edwin
On 27 May 2016 at 02:02, Erick
I did it: I copied all my dynamic fields into the text field and it works great.
Just one question: even though I copied text into content and the inverse to
get highlighting, that does not work. Is there another way to get highlighting?
Thank you, Erick
2016-05-26 18:28 GMT+01:00 Erick Erickson
public abstract class ResultContext {
  // here are all the results
  public abstract DocList getDocList();
  public abstract ReturnFields getReturnFields();
  public abstract SolrIndexSearcher getSearcher();
  public abstract Query getQuery();
  public abstract SolrQueryRequest getRequest();
On
Hi Mikhail,
Is there really? If I look at ResultContext, I see it is an abstract
class, extended by BasicResultContext. I don't see any context method
there. I can see a getContext() on SolrQueryRequest which just returns a
hashmap. Will I find the response in there? Is that what you are
Hi Erick,
Thank you for the reply. What I meant was suppose I have the config:
2 shards each with 1 replica.
Hence, on both servers I have
1. shard1_replica1
2 . shard2_replica1
Suppose I have 50 documents then,
shard1_replica1 + shard2_replica1 = 50 ?
or shard2_replica1 = 50 &&
all good ideas and recs, guys. erick, i'd thought of much the same after
reading through the SolrJ post and beginning to get a bit anxious at the
idea of implementation (not a java dev here lol). we're already doing some
processing before the import, taking a few million records, rolling them up
/
I always prefer ints to strings; they can't help but take
up less memory, and comparing two ints is much faster than
comparing two strings, etc. Although Lucene can play some tricks
to make that less noticeable.
That said, if these are just a few values, it'll be hard to
actually measure the perf difference.
And
Having more carefully read Erick's post - I see that is essentially what he
said in a much more straightforward way.
I will also second Erick's suggestion of hammering on the SQL. We found
that fruitful many times at the same gig. I develop, but am not a SQL
master. In a similar situation I'll
Note that <3> is actually not hard at all, the "ant package" target
does it all for you. You do need to install ant and a Java JDK, but
the rest is pretty automatic, just apply the patch and execute the
above target.
Details are here in case you get desperate ;)...
On 5/25/2016 10:16 PM, scott.chu wrote:
> Thanks! I thought I have to tune solrconfig.xml.
>
> scott.chu,scott@udngroup.com
> 2016/5/26 (週四)
> - Original Message -
> From: Jay Potharaju
> To: solr-user ; scott(自己)
> CC:
> Date: 2016/5/26 (週四) 11:31
> Subject: Re: How to save index
It may or may not be helpful, but there's a similar class of problem that
is frequently solved either by stored procedures or by running the query on
a time-frame and storing the results... Doesn't matter if the end-point
for the data is Solr or somewhere else.
The problem is long running
Q1: Not quite sure what you mean. Let's say I have 2 shards with 3
replicas each and 16 docs on each. I _think_ you're
talking about the "core selector", which shows the docs on that
particular core: 16 in our case, not 48.
Q2: Yes, that's how SolrCloud is designed. It has to be for HA/DR.
Every replica in
Thanks Erick, option 4 is my favorite so far :)
On Thu, May 26, 2016 at 2:15 PM, Erick Erickson
wrote:
> There is no plan to release 5.5.2, development has moved to trunk and
> 6.x. Also, while there
> is a patch for that JIRA it hasn't been committed even in trunk/6.0.
Thanks Chris --
The two projects I'm aware of are:
https://github.com/healthonnet/hon-lucene-synonyms
and the one referenced from the Lucidworks page here:
https://lucidworks.com/blog/2014/07/12/solution-for-multi-term-synonyms-in-lucenesolr-using-the-auto-phrasing-tokenfilter/
... which is
Forgot to add... sometimes really hammering at the SQL query in DIH
can be fruitful, can you make a huge, monster query that's faster than
the sub-queries?
I've also seen people run processes on the DB that move all the
data into a temporary place making use of all of the nifty stuff you
can do
There is no plan to release 5.5.2, development has moved to trunk and
6.x. Also, while there
is a patch for that JIRA it hasn't been committed even in trunk/6.0.
So I think your choices are:
1> find a work-around
2> see about moving to Solr 6.0.1 (in release process now),
assuming that it
Don't mess with distrib=true|false to start. What you're
seeing is that when a query comes in to SolrCloud, a
sub-query is being sent to one replica of every shard. That
sub-query has distrib=false set. Then when preliminary
results are returned and collated by the distributor, another
request is
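A sketch of what such a sub-query looks like if you issue it by hand (host, port, and core name are placeholders):

```text
http://localhost:8983/solr/collection1_shard1_replica1/select?q=*:*&distrib=false
```

With distrib=false the core answers only from its own index instead of fanning the request out to the other shards.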
Is there an anticipated release date for 5.5.2? I know 5.5.1 was released just
a while ago, and although it fixes the faceting performance issue
(SOLR-8096), distributed grouping is broken (SOLR-8940).
I just need a solid 5.x release that is stable and with all core
functionality working.
Thanks
On 5/26/2016 2:37 AM, Nuhaa All Bakry wrote:
> Wondering if versioning is built-in in Solr? Say I have deployed a working
> SolrCloud (v1.0) and there are applications consuming the REST APIs. Is there
> a way to deploy the next v1.1 without removing v1.0? The reason I ask is
> because we dont
oo gotcha. cool, will make sure to check it out and bounce any related
questions through here.
thanks!
best,
--
*John Blythe*
Product Manager & Lead Developer
251.605.3071 | j...@curvolabs.com
www.curvolabs.com
58 Adams Ave
Evansville, IN 47713
On Thu, May 26, 2016 at 1:45 PM, Erick
Try removing the 'stored="true" ' from the gps_0_coordinate and
gps_1_coordinate.
When you say "...tried to do an update on any other fields", I'm assuming you're
talking about Atomic Updates, which require that the destinations of
copyFields are single valued. Under the covers the location type
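To make that concrete, a minimal sketch of the LatLonType pattern under discussion; the field names follow the thread, but the attribute values are assumptions to check against your own schema:

```xml
<field name="gps" type="location" indexed="true" stored="true"/>
<fieldType name="location" class="solr.LatLonType" subFieldSuffix="_coordinate"/>
<dynamicField name="*_coordinate" type="tdouble" indexed="true" stored="false"/>
```

With stored="false" on the *_coordinate subfields, an Atomic Update can regenerate them from the stored gps value when the document is re-indexed.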
Chris Morley here, from Wayfair. (Depahelix = my domain)
Suyash Sonawane and I have worked on multiple word synonyms at Wayfair. We
worked mostly off of Ted Sullivan's work and also off of some suggestions from
Koorosh Vakhshoori. We have gotten to a point where we have a more
fixing typo:
http://wiki.apache.org/solr/QueryParser (search the page for
synonym_edismax)
On Thu, May 26, 2016 at 11:50 AM, John Bickerstaff wrote:
> Hey Jeff (or anyone interested in multi-word synonyms) here are some
> potentially interesting links...
>
>
Hey Jeff (or anyone interested in multi-word synonyms) here are some
potentially interesting links...
http://wiki.apache.org/solr/QueryParser (search the page for
synonum_edismax)
https://nolanlawson.com/2012/10/31/better-synonym-handling-in-solr/ (blog
post about what became the
Oh, interesting. I’ve certainly encountered issues with multi-word synonyms,
but I hadn’t come across this. If you end up using it with a recent Solr
version, I’d be glad to hear about your experience.
I haven’t used it, but I am aware of one other project in this vein that you
might be interested
Solr commits aren't the issue I'd guess. All the time is
probably being spent getting the data from MySQL.
I've had some luck writing to Solr from a DB through a
SolrJ program, here's a place to get started:
searchhub.org/2012/02/14/indexing-with-solrj/
you can peel out the Tika bits pretty
And, you can copy all of the fields into an "uber field" using the
copyField directive and just search the "uber field".
Best,
Erick
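A minimal sketch of that copyField setup, with made-up field names:

```xml
<field name="uber_field" type="text_general" indexed="true" stored="false"
       multiValued="true"/>
<copyField source="*" dest="uber_field"/>
```

The catch-all destination must be multiValued (several sources feed into it) and usually need not be stored.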
On Thu, May 26, 2016 at 7:35 AM, kostali hassan
wrote:
> thank you, it makes sense.
> have a good day
>
> 2016-05-26 15:31 GMT+01:00
Hello,
There is a protected ResultContext field named context.
On Thu, May 26, 2016 at 5:31 PM, Upayavira wrote:
> Looking at the code for a sample DocTransformer, it seems that a
> DocTransformer only has access to the document itself, not to the whole
> results. Because of
Hi,
Are you firing both trailing and leading wildcard queries?
Or did you just put the stars in for emphasis?
Please consider using normal queries, since you are already using a tokenized
field.
By the way, what is 'tollc soon'?
Ahmet
On Thursday, May 26, 2016 4:33 PM, Preeti Bhat
Ahh - for question #3 I may have spoken too soon. This line from the
GitHub repository README suggests a way.
Update: We have tested running with the jar in $SOLR_HOME/lib as well, and
it works (Jetty).
I'll try that and only respond back if that doesn't work.
Questions 1 and 2 still stand of
Hi all,
I'm creating a Solr Cloud that will index and search medical text.
Multi-word synonyms are a pretty important factor.
I find that there are some challenges around multi-word synonyms and I also
found on the wiki that there is a recommended 3rd-party parser
(synonym_edismax parser)
hi all,
i've got layered entities in my solr import. it's calling on some
transactional data from a MySQL instance. there are two fields that are
used to then lookup other information from other tables via their related
UIDs, one of which has its own child entity w yet another select statement
to
thank you, it makes sense.
have a good day
2016-05-26 15:31 GMT+01:00 Siddhartha Singh Sandhu :
> The schema.xml/managed_schema defines the default search field as `text`.
>
> You can make all fields that you want searchable type `text`.
>
> On Thu, May 26, 2016 at 10:23 AM,
The schema.xml/managed_schema defines the default search field as `text`.
You can make all fields that you want searchable type `text`.
On Thu, May 26, 2016 at 10:23 AM, kostali hassan
wrote:
> I import data from sql databases with DIH . I am looking for serch term
Reranking is done after the collapse. So you'll get the original score in
cscore()
Joel Bernstein
http://joelsolr.blogspot.com/
On Thu, May 26, 2016 at 12:56 PM, aanilpala wrote:
> with cscore() in collapse, will I get the similarity score from lucene or
> the
> reranked
Looking at the code for a sample DocTransformer, it seems that a
DocTransformer only has access to the document itself, not to the whole
results. Because of this, it isn't possible to use a DocTransformer to
merge, for example, the highlighting results into the main document.
Am I missing
I import data from SQL databases with DIH. I am looking to search for a term in
all fields, not field by field.
Hi,
I am using Solr 6.0 on Ubuntu 14.04.
I am ending up with loads of junk in the text body. It starts like,
The JSON entry output of a search result shows the indexed text starting
with...
body_txt_en: " stream_size 36499 X-Parsed-By
org.apache.tika.parser.DefaultParser X-Parsed-By"
Hi Ahmet & Sid,
Thanks for the reply
I have the below requirement
1) If I search with, say, company_nm:*llc*, then we should not return any
results, or only a few results where llc is embedded in other words, like
tollc soon. So I had implemented the stopwords.
2) But If I search with say
Hi Preeti,
You can use the analysis tool in the Solr console to see how your queries
are being tokenized. Based on your results you might need to make changes
in "strings_ci".
Also, If you want to be able to search on stopwords you might want to
remove solr.StopFilterFactory from indexing and
Hi,
Thanks for the feedback. The queries I run are very basic filter queries
with some sorting.
q=*:*&fq=(dt1:[date1 TO *] && dt2:[* TO NOW/DAY+1]) && fieldA:abc &&
fieldB:(123 OR 456)&sort=dt1 asc,field2 asc,fieldC desc
I noticed that the date fields(dt1,dt2) are using date instead of tdate
fields &
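For reference, the stock schema.xml definitions of the two types; the trie-based one indexes extra precision terms that speed up range filters like the dt1/dt2 ranges above:

```xml
<fieldType name="date"  class="solr.TrieDateField" precisionStep="0" positionIncrementGap="0"/>
<fieldType name="tdate" class="solr.TrieDateField" precisionStep="6" positionIncrementGap="0"/>
```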
Not able to get the DocTransformer [explain] to work in Solr 5. I'm sure I'm
doing something wrong. But I'm following the example in the documentation.
https://cwiki.apache.org/confluence/display/solr/Transforming+Result+Documents
Other transformers ( [docid] and [shard] ) are working as
Thank you for your answer, but I'm not sure I've understood: document.type
is not in the same core as annotations, so how can I facet on that field?
Il giorno gio 26 mag 2016 alle ore 14:06 Upayavira ha
scritto:
>
>
> On Thu, 26 May 2016, at 01:02 PM, Zaccheo Bagnati wrote:
> > Hi
On Thu, 26 May 2016, at 01:02 PM, Zaccheo Bagnati wrote:
> Hi all,
> I have a SOLR core containing documents:
> document (id, type, text)
> and a core containing annotations (each document has 0 or more
> annotations):
> annotation (id, document_id, user, text)
>
> I can filter annotations on
Hi all,
I have a SOLR core containing documents:
document (id, type, text)
and a core containing annotations (each document has 0 or more annotations):
annotation (id, document_id, user, text)
I can filter annotations on document fields using JoinQueryParser but how
can I create a faceting?
with cscore() in collapse, will I get the similarity score from Lucene or the
reranked score from the reranker if I am using a plugin that reranks the
results? I guess the answer depends on which of fq or rq is applied first.
--
View this message in context:
Also, if you're using the min/max param within a collapse you can use the
cscore() function, which is much more efficient than the query() function.
But cscore() is only available within the context of a collapse, to select
the group head. Outside of the collapse, query() is the approach.
Joel
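A sketch of the two forms being contrasted (user_id and $qq are made-up names):

```text
fq={!collapse field=user_id max=cscore()}
fq={!collapse field=user_id max=query($qq)}&qq=text:solr
```

The first selects the group head by the score of the document being collapsed; the second re-evaluates a subquery via query(), which is the form you would use outside a collapse.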
Hi,
Probably, using the 'query' function query, which returns the score of a given
query.
https://cwiki.apache.org/confluence/display/solr/Function+Queries#FunctionQueries-UsingFunctionQuery
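A sketch of how that could replace the sort in the original question ($qq is a made-up parameter name):

```text
q=*:*&qq=title:solr&sort=product(2,query($qq)) desc
```

query($qq) yields each document's score against the referenced query, and a function over it can appear in a function sort, where the bare score keyword cannot be used as a function argument.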
On Thursday, May 26, 2016 1:59 PM, aanilpala wrote:
is it allowed to provide a
No need for a new thread.
Yes, there can only be one ranking collector. I believe the effect of
having two rq's is that one would simply be ignored as you mentioned.
Joel Bernstein
http://joelsolr.blogspot.com/
On Thu, May 26, 2016 at 11:21 AM, aanilpala wrote:
> thanks,
Hi Bhat,
What do you mean by multi term search?
In your first e-mail, your example uses quotes, which means phrase/proximity
search.
ahmet
On Thursday, May 26, 2016 11:49 AM, Preeti Bhat
wrote:
HI All,
Sorry for asking the same question again, but could someone
Is it allowed to provide a sort function (sortspec) that uses the similarity
score? For example, something like the following:
sort=product(2,score) desc
It seems that it won't work. Is there an alternative way to achieve this?
Using Solr 6.
thanks in advance.
thanks, it indeed works that way.
I was curious whether the same would work with rq, but it seems not (from the
results I can at least say that one reranker is ignored). Is there a way to
combine two rq components?
PS: I know this is now another question, should I start a new thread?
Could you use two filter queries?
fq=is_valid:true&fq={!collapse field=user_id}
This syntax should work fine. It first filters the results based on
is_valid:true and then collapses them.
Joel Bernstein
http://joelsolr.blogspot.com/
On Thu, May 26, 2016 at 10:41 AM, aanilpala
Also, tests sometimes fail with:
org.apache.solr.common.SolrException: No registered leader was found after
waiting for 1ms , collection: collection1 slice: shard1
Despite having waitForThingsToLevelOut(45);
If anyone has a suggestion for this as well, it would be much appreciated :)
hi there,
I can't seem to find a way to collapse results on a filter query. For
example, imagine that I have a query with filter is_valid:true. Now, if I
want to collapse the results on another field than is_valid (i.e user_id),
neither of the following works:
fq=is_valid:true AND {!collapse
Hi,
We have a bunch of tests extending AbstractFullDistribZkTestBase on 6.0 and our
builds sometimes fail with the following message:
org.apache.solr.common.SolrException: Could not load collection from ZK:
collection1
at io.
Caused by:
Hi All,
Sorry for asking the same question again, but could someone please advise me on
this.
Thanks and Regards,
Preeti Bhat
From: Preeti Bhat
Sent: Wednesday, May 25, 2016 2:22 PM
To: solr-user@lucene.apache.org
Subject: how can we use multi term search along with stop words
Hi,
I am
Does anyone have any solutions to this problem?
I tried to remove the gps_0_coordinate and gps_1_coordinate, but I will get
the following error during indexing.
ERROR: [doc=id1] unknown field 'gps_0_coordinate'
Regards,
Edwin
On 25 May 2016 at 11:37, Zheng Lin Edwin Yeo wrote:
Hello,
Wondering if versioning is built into Solr? Say I have deployed a working
SolrCloud (v1.0) and there are applications consuming the REST APIs. Is there a
way to deploy the next v1.1 without removing v1.0? The reason I ask is because
we don't want the deployment of Solr to be tightly