Cross-posted / addressed (both me), here.
https://stackoverflow.com/questions/65620642/solr-query-with-space-only-q-20-stalls/65638561#65638561
--
Sent from: https://lucene.472066.n3.nabble.com/Solr-User-f472068.html
>> Is there a way to return the data value if it exists, otherwise an empty
>> string? I'm integrating this with a 3rd-party app which I can't change.
>> When the field is null it isn't showing up in the output.
>>
>> -Original Message-
>> From: Erick Erickson
>> Sent: Wednesday, July 29, 2020 12:49 PM
>> To: solr-user@lucene.apache.org
>> Subject: Re: solr query returns items with spaces removed
The “def” function goes after the _indexed_ value, so that’s what you’re
getting back. Try just specifying “fl=INSTRUCTIONS”, and if the value is stored
that should return the original field value before any analysis is done.
Why are you using the def function? If the field is absent from the document,
def() just returns the default you specify.
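A sketch of the two approaches (field name INSTRUCTIONS is from the thread; the "instr" pseudo-field alias is illustrative): plain fl returns the stored value, while def() operates on the indexed value and falls back to a default when the field is absent.

```python
# Plain fl fetches the stored value; def() supplies '' when the field is missing.
stored = {"q": "*:*", "fl": "INSTRUCTIONS"}
defaulted = {"q": "*:*", "fl": "instr:def(INSTRUCTIONS,'')"}
print(defaulted["fl"])  # instr:def(INSTRUCTIONS,'')
```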
Hi Swetha,
The given URL is encoded, so you can decode it before analyzing it. The plus
character stands for whitespace when you encode a URL, and a minus sign
represents a negative query in Solr.
Kind Regards,
Furkan KAMACI
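The encoding rules above can be checked quickly with Python's standard library (a quick illustration, not Solr-specific):

```python
from urllib.parse import quote_plus, unquote_plus

# '+' stands for a space in URL form-encoding; '%20' is the literal escape,
# while a bare leading '-' on a term is Solr's negative-query operator.
assert unquote_plus("%20") == " "
assert quote_plus("a b") == "a+b"
print(unquote_plus("kids+-toys"))  # kids -toys
```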
On Tue, Jul 7, 2020 at 9:16 PM swetha vemula
wrote:
> Hi,
>
> I have an URL and
First of all, if you’re really using pre- and postfix wildcards and those
asterisks are not just bold formatting, those are very expensive operations.
I’d suggest you investigate alternatives (like ngramming) or other ways of
analyzing your input (both at index and query time).
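A hedged sketch of what an ngram-based alternative could look like in the schema (the type name and analyzer choices are illustrative, not from the thread):

```xml
<!-- Index ngrams so plain terms at query time can match infixes,
     instead of expensive *term* double-wildcard queries. -->
<fieldType name="text_ngram" class="solr.TextField">
  <analyzer type="index">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.NGramFilterFactory" minGramSize="3" maxGramSize="5"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```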
Multiple replicas of the same shard will execute their autocommits at
different wall-clock times.
Thus there may be a _temporary_ window when a newly-indexed document is
found by a query that happens to be served by replica1 but not by replica2.
If you have a timestamp in the doc, and
a soft commit
Hello.
_text_=kids is not a query syntax Solr supports. Last time I looked into it,
mixing doc blocks and child-free docs was not supported. Anyway,
debugQuery=true usually helps to understand puzzling results.
On Thu, Aug 29, 2019 at 2:19 AM craftlogan wrote:
> So in Solr I have a data
If you’re literally including the quotes, i.e. q=“one two”, then you’re doing
phrase searches which are more complex and will take longer. q=field:one AND
field:two is a straight boolean query. Also, what query parser are you using?
If it’s edismax, then you’re searching across multiple fields.
I’m talking about the filterCache. You said you were using this in a “filter
query”, which I assume is an “fq” clause, which is automatically cached in the
filterCache.
It would help a lot if you told us two things:
1> what the clause looks like. You say 1,500 strings. How are you assembling
Tim/Eric,
@Tim: We do have a category field in the existing Solr configuration. The
existing functionality is that we query based on category, get the results
from Solr, and display them on PCAT.
But per a new requirement, we need to invoke a third-party service to fetch
the personalized
What version of Solr? Recent versions automatically use the terms query parser
for large, simple OR clauses. Do look into using it anyway. And I'd set
cache=false because I doubt you'll ever get a cache hit...
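A sketch of the terms-query-parser form (the field name "product_id" is illustrative) in place of a 1500-clause boolean OR:

```python
# One local-params prefix plus a comma-separated value list replaces
# hundreds of OR clauses; cache=false skips the filterCache entry.
ids = ["p1", "p2", "p3"]  # stands in for ~1500 values
fq = "{!terms f=product_id cache=false}" + ",".join(ids)
print(fq)  # {!terms f=product_id cache=false}p1,p2,p3
```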
On Thu, May 30, 2019, 16:21 Venkateswarlu Bommineni
wrote:
> Hello Team,
>
> I have got a
Venkat,
There is another way to do this. If you have a category of "thing" you are
attempting to filter over, then you create a query and tag the documents
with this category. So, create a 'categories' field and append 'thing' to
the field updating the field if need be. (Be wary of over
On 5/30/2019 4:13 PM, Venkateswarlu Bommineni wrote:
Thank you guys for the quick response.
I was able to query Solr by sending 1500 products using SolrJ with an HTTP
POST.
But I had to change maxBooleanClauses to 4096 from the default 1024.
I wanted to check with you: will there be any performance
issues from raising maxBooleanClauses
On 5/30/2019 2:20 PM, Venkateswarlu Bommineni wrote:
I have got a requirement to send many strings (~1500) in the filter query
param to Solr.
Can you please provide any suggestions/precautions we need to take in
this particular scenario?
You'll probably want to send that as a POST,
You can use POST instead of GET.
But you may also want to see if you can refactor those 1500 strings somehow.
If you don't use it already, maybe Terms query parser could be useful:
https://lucene.apache.org/solr/guide/7_7/other-parsers.html#terms-query-parser
Also, if at least some of those
On 5/13/2019 2:51 AM, vishal patel wrote:
Executing an identical query again will likely satisfy the query from Solr's
caches. Solr won't need to talk to the actual index, and it will be REALLY
fast. Even a massively complex query, if it is cached, will be fast.
All caches are disabled in
Oh, and you can freely set docValues=true _and_ have indexed=true on the same
field, Solr will use the right structure for the operations it needs. HOWEVER:
if you change that definition you _must_ re-index the entire collection.
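A hedged schema sketch (field name and type are illustrative): the same field can serve both search and sorting/faceting.

```xml
<!-- indexed=true for searching, docValues=true for sorting/faceting/functions;
     changing docValues on an existing field requires a full re-index -->
<field name="price" type="plong" indexed="true" stored="true" docValues="true"/>
```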
> On May 13, 2019, at 1:22 AM, Bernd Fehling
> wrote:
>
> Your
That indicates you’re hitting the queryResultCache, which is also supported by
your statement about how fast queries return after they’re run once. Look
at admin UI>>select core>>stats/plugins>>cache>>queryResultCache and you’ll
probably see a very high hit ratio, approaching 1.
You also have
If I do not create a separate field, is there any performance issue when the
same field is used both for searching in a query and for sorting?
Sent from Outlook<http://aka.ms/weboutlook>
From: Bernd Fehling
Sent: Monday, May 13, 2019 11:52 AM
To: solr-user@lucene.apache.org
Subject: Re: Solr qu
hema file because our indexing and
searching ratio is high in our live environment.
From: Shawn Heisey
Sent: Friday, May 10, 2019 9:32 PM
To: solr-user@lucene.apache.org
Subject: Re: Solr query takes a too mu
Your "sort" parameter has "sort=id+desc,id+desc".
1. It doesn't make sense to sort on "id" in descending order twice.
2. Be aware that the id field has the highest cardinality.
3. To speed up sorting, have a separate field with docValues=true for sorting.
E.g.:
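A sketch of such a dedicated sort field (field and type names are illustrative):

```xml
<!-- docValues-only copy of id, used exclusively for sorting -->
<field name="id_sort" type="string" indexed="false" stored="false" docValues="true"/>
<copyField source="id" dest="id_sort"/>
```

Then sort with sort=id_sort desc.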
Regards
Bernd
On 5/10/2019 7:32 AM, vishal patel wrote:
We have 2 shards and 2 replicas in the live environment. We have multiple
collections.
Sometimes a query takes a long time (QTime=52552), while so many other
documents are indexed and searched within milliseconds.
There could be any number of causes of
My first inclination is that your index is cold.
On Fri, May 10, 2019 at 9:32 AM vishal patel
wrote:
> We have 2 shards and 2 replicas in Live environment. we have multiple
> collections.
> Some times some query takes much time(QTime=52552). There are so many
> documents indexing and searching within
Thanks Saurabh and Prince, works perfectly.
Basically, you need to boost some documents low.
For this, you can either use Solr’s Boost Query (bq) or Boost Function (bf)
parameter.
For example, in your case: if you want the documents with countries A and B
to show last in the results, you can use:
bq=( country:A OR country:B )^-1
Note
fq=country:(c1 OR c2 OR c3)&sort=if(termfreq(country,c2),0,1) desc
Correcting query.
On Sun 14 Apr, 2019, 3:36 PM Saurabh Sharma,
wrote:
> I would suggest to sort on the basis of condition. First find all the
> records and then sort on the basis of condition where you will be putting
> spcific
I would suggest sorting on the basis of a condition. First find all the
records, then sort on the condition, putting specific countries below the
others.
fq=country:(c1 OR c2 OR c3)&sort=if(termfreq(country,c2),1,0) desc
Here we are putting c2 below c1 and c3.
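Spelled out as full request parameters, the suggestion might look like this (field and values from the thread; a sketch, not a definitive form):

```python
# Docs matching c2 get sort key 1; ascending order puts them after the 0s.
params = {
    "q": "*:*",
    "fq": "country:(c1 OR c2 OR c3)",
    "sort": "if(termfreq(country,c2),1,0) asc",
}
print(params["sort"])  # if(termfreq(country,c2),1,0) asc
```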
You can also try
On 3/22/2019 7:52 AM, Rajdeep Sahoo wrote:
My Solr query sometimes takes more than 60 sec to return a response.
Is there any way I can check why it is taking so much time?
Please let me know if there is any way to analyse this issue (high
response time). Thanks
With the information
Hi Shilpa,
I am assuming you know the functionality of synonym.
Synonyms in Solr can be applied to the tokens being indexed/queried for
a field. In order to apply synonyms to a field you need to update the
configuration file schema.xml, where you also define a file (synonym.txt is
the default,
Hi,
You have to check whether both setups are using the same configuration,
and whether the production Solr server has other programs running.
Also, query performance might be affected if there is indexing going on
at the same time.
Regards,
Edwin
On Thu, 10 Jan 2019 at 06:50, Dasarathi Minjur
Hi Rajdeep,
For production deployment at my company, we are using the Prometheus exporter,
https://github.com/noony/prometheus-solr-exporter.
You can start the exporter along with solr server and the exporter will
collect important metrics from solr.
By the way, you need to install and configure
Rajdeep,
Not an external tool, but there is the option of using the "debug"
parameter in the Solr query that can be used at least as a starting point
for looking at the query timing.
https://lucene.apache.org/solr/guide/6_6/common-query-parameters.html#CommonQueryParameters-ThedebugParameter
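For example (the collection name "techproducts" is illustrative): debug=timing limits the debug section to per-component timings, which keeps the response small.

```python
# Ask Solr to report where the query time went.
url = ("http://localhost:8983/solr/techproducts/select"
       "?q=ipod&debug=timing")
print(url)
```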
>>> Hi Emir,
>>>
>>> If using OR-ed conditions for different years then the query will be very
>>> long if I got 100 years and I think this is not practical.
>>> You have any other idea?
>>>
>>> Regards,
>>> Albert
>>>
birthdate_year, birthdate_month,
> birthdate_day.
> Is this practical adding so much additional fields?
>
> Albert
> From: Stefan Matheis
> Sent: Thursday, March 15, 2018 3:05 PM
> To: solr-user@lucene.apache.org
> Subject: RE: solr query
>
> > You have any other idea?
>
From: Stefan Matheis
Sent: Thursday, March 15, 2018 3:05 PM
To: solr-user@lucene.apache.org
Subject: RE: solr query
> You have any other idea?
Yes, we go back to start and discuss again why you're not adding a separate
field for that. It's the simplest thing possible and avoids all those
workarounds.
Hi Emir,
If using OR-ed conditions for different years, the query will be very long
if I have 100 years, and I think this is not practical.
Do you have any other idea?
Regards,
Albert
From: Gus Heck
Sent: Thursday, March 15, 2018 12:43 AM
To: solr-user@lucene.apache.org
Subject: Re: solr query
> From: Emir Arnautović
> Sent: Wednesday, March 14, 2018 5:38 PM
> To: solr-user@lucene.apache.org
> Subject: Re: solr query
>
> Hi Albert,
> The simplest solution is to index month/year as separate fields. Alternative
> is to index it as timestamp and do function query to do some m
I don’t want to add separate fields since I have many dates to index. How do
I index it as a timestamp and do a function query; any example or documentation?
Regards,
Albert
From: Emir Arnautović
Sent: Wednesday, March 14, 2018 5:38 PM
To: solr-user@lucene.apache.org
Subject: Re: solr query
Hi
NOW/MONTH and NOW/YEAR get the start of the month/year, but how can I get the
current month regardless of year? Like the use case: people whose birthdate is
this month?
Regard,
Albert
From: Emir Arnautović
Sent: Wednesday, March 14, 2018 5:26 PM
To: solr-user@lucene.apache.org
Subject: Re: solr
Hi Albert,
It does - you can use NOW/MONTH and NOW/YEAR to get the start of month/year.
Here is reference to date math:
https://lucene.apache.org/solr/guide/6_6/working-with-dates.html#WorkingwithDates-DateMathSyntax
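A sketch of both ideas from this thread (field names birthdate and birthdate_month are the ones discussed): date math handles calendar ranges, while a separate month field answers "this month, any year".

```python
# Date math: everything in the current calendar year.
this_year = "birthdate:[NOW/YEAR TO NOW/YEAR+1YEAR]"
# Separate month field: birthdays this month, regardless of year.
this_month_any_year = "birthdate_month:3"  # 3 = March, illustrative
print(this_year)
```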
On 10/17/2017 5:53 PM, Phillip Wu wrote:
> I've indexed a lot of documents (*.docx & *.vsd).
>
> When I run a query from the website it returns only a small proportion of the
> data in the index:
> {
>   "responseHeader":{
>     "status":0,
>     "QTime":66,
>     "params":{
>       "q":"NS Finance 9.2",
>
bq: Is this expected behavior where it returns only a subset of the
documents it has found?
No. But there is _so_ much you're leaving out here that it's totally
impossible to say much.
bq: I've indexed a lot of documents (*.docx & *.vsd).
how? Tika? ExtractingRequestHandler? Some custom code?
You can add a ~3 to the query to allow the order to be reversed, but you
will get extra hits. Maybe it is a ~4; I can never remember on phrases and
reversals. I usually just try it.
Alternatively, you can create a custom query field for what you need from
dates. For example, if you want to
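The proximity suggestion in query form (the terms are illustrative): a slop of 3 lets the phrase terms appear reordered or a few positions apart.

```python
# Phrase query with slop: matches "apple ipod" and nearby reorderings.
q = '"apple ipod"~3'
print(q)  # "apple ipod"~3
```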
What field types are you using for your dates?
Have a look at:
https://cwiki.apache.org/confluence/display/solr/Working+with+Dates
On Thu, Aug 17, 2017 at 10:08 AM, Nawab Zada Asad Iqbal
wrote:
> Hi Krishna
>
> I haven't used date range queries myself. But if Solr only
Hi Krishna
I haven't used date range queries myself. But if Solr only supports a
particular date format, you can write a thin client for queries, which
converts the date to Solr's format and queries Solr.
Nawab
On Thu, Aug 17, 2017 at 7:36 AM, chiru s wrote:
> Hello
: I could have sworn I was paraphrasing _your_ presentation Hoss. I
: guess I did not learn my lesson well enough.
:
: Thank you for the correction.
Trust but verify! ... we're both wrong.
Boolean functions (like lt(), gt(), etc...) behave just like sum() -- they
"exist" for a document if and
Bother,
I could have sworn I was paraphrasing _your_ presentation Hoss. I
guess I did not learn my lesson well enough.
Thank you for the correction.
Regards,
Alex.
http://www.solr-start.com/ - Resources for Solr users, new and experienced
On 1 June 2017 at 15:26, Chris Hostetter
: Because the value of the function will be treated as a relevance value
: and relevance value of 0 (and less?) will cause the record to be
: filtered out.
I don't believe that's true? ... IIRC 'fq' doesn't care what the scores
are as long as the query is a "match" and a 'func' query will match
Function queries:
https://cwiki.apache.org/confluence/display/solr/Function+Queries
The function would be sub
Then you want its result mapped to an fq. It could probably be as simple
as fq={!func}sub(value,cost).
Because the value of the function will be treated as a relevance value
and relevance
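As corrected elsewhere in this thread, an fq function query matches regardless of score; frange is the explicit way to keep only documents where the function is non-negative (field names value and cost are from the thread):

```python
# Keep only documents where value - cost >= 0.
fq = "{!frange l=0}sub(value,cost)"
print(fq)  # {!frange l=0}sub(value,cost)
```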
About adding fields, consider adding a custom DocumentTransformer
instead, that's much less invasive.
Best,
Erick
On Wed, May 31, 2017 at 5:36 AM, Susheel Kumar wrote:
> Some of these like restricting user to not query some fields (based on
> their authorization) etc. we
Some of these, like restricting users from querying certain fields (based on
their authorization), we do in our service layer. The service layer is
what is exposed to consumers, and this service connects to Solr using SolrJ to
execute queries and get back results (in binary format).
This is one
On 5/10/2017 12:33 AM, Adnan Shaikh wrote:
> Thanks Alexandre for the update.
>
> Please help me to understand the other part of the query as well , if there
> is any limit to how many values we can pass for a key.
The limit is not the number of values, but the size of the request in bytes.
A
How many values are you trying to pass in? And in which format? And
what issues are you facing? There are too many variables here to give
a generic advice.
Regards,
Alex.
http://www.solr-start.com/ - Resources for Solr users, new and experienced
On 10 May 2017 at 02:33, Adnan Shaikh
Hello,
Thanks Alexandre for the update.
Please help me to understand the other part of the query as well, i.e. whether
there is any limit to how many values we can pass for a key.
Thanks,
Mohammad Adnan Shaikh
On May 9, 2017, at 8:05 PM, Alexandre Rafalovitch
wrote:
I am not aware
I am not aware of any limits in Solr itself. However, if you are using
a GET request to do the query, you may be running into browser
limitations regarding URL length.
It may be useful to know that Solr can accept the query parameters in
the POST body as well.
Regards,
Alex.
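A hedged sketch of moving the parameters into a POST body (the endpoint and values are illustrative): the body carries the same form-encoded parameters a GET URL would, without the URL-length limit.

```python
from urllib.parse import urlencode

# An OR list long enough to overflow typical GET URL limits.
ids = " OR ".join(f"doc{i}" for i in range(1500))
body = urlencode({"q": f"id:({ids})", "rows": "10"})
# POST it as application/x-www-form-urlencoded, e.g.:
# urllib.request.urlopen("http://localhost:8983/solr/coll/select",
#                        data=body.encode())
print(len(body) > 8000)  # True
```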
Thanks everyone for taking time to respond to my email. I think you are
correct in that the query results might be coming from main memory as I
only had around 7k queries.
However it is still not clear to me, given that everything was being
served from main memory, why it is that I am not able to
On 4/28/2017 12:43 PM, Toke Eskildsen wrote:
> Shawn Heisey wrote:
>> Adding more shards as Toke suggested *might* help,[...]
> I seem to have phrased my suggestion poorly. What I meant to suggest
> was a switch to a single shard (with 4 replicas) setup, instead of the
>
Beautiful, thank you.
-Original Message-
From: Walter Underwood [mailto:wun...@wunderwood.org]
Sent: Friday, April 28, 2017 3:07 PM
To: solr-user@lucene.apache.org
Subject: Re: Solr Query Performance benchmarking
I use the JMeter plugins. They’ve been reorganized recently, so
Walter,
If you can share a pointer to that JMeter add-on, I'd love it.
-Original Message-
From: Walter Underwood [mailto:wun...@wunderwood.org]
Sent: Friday, April 28, 2017 2:53 PM
To: solr-user@lucene.apache.org
Subject: Re: Solr Query Performance benchmarking
I use production logs
I use production logs to get a mix of common and long-tail queries. It is very
hard to get a realistic distribution with synthetic queries.
A benchmark run goes like this, with a big shell script driving it.
1. Reload the collection to clear caches.
2. Split the log into a cache warming set
Shawn Heisey wrote:
> Adding more shards as Toke suggested *might* help,[...]
I seem to have phrased my suggestion poorly. What I meant to suggest was a
switch to a single shard (with 4 replicas) setup, instead of the current 2
shards (with 2 replicas).
- Toke
Well, the best way to get no cache hits is to set the cache sizes to
zero ;). That provides worst-case scenarios and tells you exactly how
much you're relying on caches. I'm not talking the lower-level Lucene
caches here.
One thing I've done is use the TermsComponent to generate a list of
terms
(aside: Using Gatling or Jmeter?)
Question: How can you easily randomize something in the query so you get no
cache hits? I think there are several levels of caching.
--
Sorry for being brief. Alternate email is rickleir at yahoo dot com
re: the q vs. fq question. My claim (not verified) is that the fastest
of all would be q=*:*&fq={!cache=false}... That would bypass the scoring
that putting it in the "q" clause would entail, as well as bypass the
filter cache.
But I have to agree with Walter, this is very suspicious IMO. Here's
what
More “unrealistic” than “amazing”. I bet the set of test queries is smaller
than the query result cache size.
Results from cache are about 2 ms, but network communication to the shards
would add enough overhead to reach 40 ms.
wunder
Walter Underwood
wun...@wunderwood.org
On 4/27/2017 5:20 PM, Suresh Pendap wrote:
> Max throughput that I get: 12000 to 12500 reqs/sec
> 95 percentile query latency: 30 to 40 msec
These numbers are *amazing* ... far better than I would have expected to
see on a 27GB index, even in a situation where it fits entirely into
available
On Thu, 2017-04-27 at 23:20 +, Suresh Pendap wrote:
> Number of Solr Nodes: 4
> Number of shards: 2
> replication-factor: 2
> Index size: 55 GB
> Shard/Core size: 27.7 GB
> maxConnsPerHost: 1000
The overhead of sharding is not trivial. Your overall index size is
fairly small, relative to
Hi Emir,
Grouping is exactly what I wanted to achieve. Thanks!!
Thank you,
Vrinda Davda
--
View this message in context:
http://lucene.472066.n3.nabble.com/Solr-Query-Suggestion-tp4323180p4323743.html
Sent from the Solr - User mailing list archive at Nabble.com.
Hi Vrinda,
You should use field collapsing
(https://cwiki.apache.org/confluence/display/solr/Collapse+and+Expand+Results)
or if you cannot live with its limitations, you can use results grouping
(https://cwiki.apache.org/confluence/display/solr/Result+Grouping)
HTH,
Emir
On 03.03.2017
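A minimal sketch of each suggestion (the field name "product_group" is assumed): collapsing keeps one document per group in a flat result list, while grouping returns a grouped response structure.

```python
# Field collapsing: one doc per group, expressed as a filter query.
collapse_fq = "{!collapse field=product_group}"
# Result grouping: grouped response via request parameters.
grouping = {"group": "true", "group.field": "product_group"}
print(collapse_fq)  # {!collapse field=product_group}
```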
On 2/2/2017 6:16 AM, deepak.gha...@mediawide.com wrote:
> I am writting query for getting response from specific index content first.
> eg.
> http://192.168.200.14:8983/solr/mypgmee/select?q=*blood*&fq=id:(*/939/* OR
> **)&fl=id&wt=json&indent=true
>
> In the above query I am getting a response. Means, suppose I get
Hmmm, I have to check something...
It seems that it's not an error.
There are some zip files which are indexed, and on the admin page
all fields are fetched, including the contents ... and the zip
document has a really big content :O
Quoting sn0...@ulysses-erp.com:
Hello - an hour ago,
Hi Kshitij,
Query time depends on query parameters, number of docs matched,
collection size, index size on disk, resources available and caches.
Number of fields per doc will results in index being bigger on disk, but
assuming there are enough resources - mainly RAM for OS caches - that
Thanks Shawn for your insight!
On Fri, Jul 22, 2016 at 6:32 PM, Shawn Heisey wrote:
> On 7/22/2016 12:41 AM, Shyam R wrote:
> > I see that SOLR returns status value as 0 for successful searches
> > org.apache.solr.core.SolrCore; [users_shadow_shard1_replica1]
> >
On 7/22/2016 12:41 AM, Shyam R wrote:
> I see that SOLR returns status value as 0 for successful searches
> org.apache.solr.core.SolrCore; [users_shadow_shard1_replica1]
> webapp=/solr path=/user/ping params={} status=0 QTime=0 I do see that
> the status comes back as 400 whenever the search is
Perfect - got it.
Now I know where I stand when I'm writing the bq, bf, etc...
Thank you very much.
On Thu, Jun 23, 2016 at 1:35 PM, Erick Erickson
wrote:
> bq:
>
> My big question however is whether I can
> trust that the Boost Functions and
> Boost Query are running
bq:
My big question however is whether I can
trust that the Boost Functions and
Boost Query are running against the entire index
In a word "no". The bf's are intended to alter
the score of the documents found by the primary query
by multiplying the function results by the raw score.
If a
Oh - gotcha... Thanks for taking the time to reply. My use of the phrase
"sub query" is probably misleading...
Here's the XML (below). I'm calling the Boost Query and Boost Function
statements "sub queries"...
The thing I was referencing was this -- where I create an "alias" for the
query
John:
I'm not objecting to the XML, but to the very presence of "more than
one query in a request handler". Request handlers don't have, AFAIK,
"query chains". They have a list of defaults for the _single_ query
being sent at a time to that handler. So having
blah blah
is something I've never
Hi Erick -
I was trying to simplify and not waste anyone's time parsing my
requestHandler... That is, as you imply, bogus xml.
The basic question is: If I have two "sub queries" in a single
requestHandler, do they both run independently against the entire index?
Alternatively, is there some
Where are you seeing that this does anything? It wouldn't be the first time
new functionality happened that I totally missed, but I've never seen that
config.
You might get some mileage out of ReRankingQParserPlugin though, that runs
the top N queries from one query through another.
Best,
Erick
Wild card and fuzzy queries are in general expensive to compute for the
simple reason that the number of query combinations that solr has to check
against increases.
So the fewer combinations Solr has to try, the faster it'll be.
I believe that this is what you're seeing.
Additionally,
Thanks Binoy, these links help. The explain/debug log really helped me, and
after a few experiments and some debugging, I conclude that moving wildcard
queries (marked with *) to the right improves performance. I haven't
been able to find a reference in the documentation, but does this statement
There is another resource to help analyze your queries: splainer.io
As for query tuning, that is a really vast topic and there is no
straightforward answer. You'll have to experiment and find the settings
that suit you best.
Here's a few resources to help you get started:
Thank you Binoy. Is there any pointer available to tune similar queries, as
it is taking a huge amount of time?
Shahzad
On Mon, Feb 15, 2016 at 10:18 AM, Binoy Dalal
wrote:
> Append debugQuery=true to your query.
> It isn't exactly like a SQL execution plan but will give you the
Append debugQuery=true to your query.
It isn't exactly like a SQL execution plan but will give you the details of
how the query was parsed, scored and how much time was taken by each module
used by the request handler.
On Mon, 15 Feb 2016, 10:42 Shahzad Masud <
shahzad.ma...@northbaysolutions.net> wrote:
I suppose that /get is the query-by-id API. I wonder if it's reasonable to
expect it to be smart in SolrCloud usage.
On Thursday, January 14, 2016, Doug Turnbull <
dturnb...@opensourceconnections.com> wrote:
> Stupid thought/question. Is there a query by id API that understands
> SolrCloud
On 1/14/2016 5:20 PM, Shivaji Dutta wrote:
> I am working with a customer that has about a billion documents on 20 shards.
> The documents are extremely small about 100 characters each.
> The insert rate is pretty good, but they are trying to fetch the document by
> using SolrJ SolrQuery
>
>
Stupid thought/question. Is there a query by id API that understands
SolrCloud routing and can simply fwd the query to the shard that would hold
said document? Barring that, can one use SolrJ's routing brains to see what
shard a given id would be routed to and only query that shard?
-Doug
On