We have the following setup: Solr 7.7.2 with 1 TLOG leader & 1 TLOG
replica with a single shard. We have about 34.5 million documents with an
approximate index size of 600GB. I have noticed degraded query
performance whenever the replica is trying to (guessing here) sync or
perform ac
at improve our query
performance?
With the information available, the only suggestion I have currently is
to replace "q=*" with "q=*:*" -- assuming that the intent is to match
all documents with the main query. According to what you attached
(which I am very surprised to se
On 7/8/2019 3:08 AM, Midas A wrote:
I have enabled docValues on the facet field but the query is still taking time.
How can I improve the query time?
docValues="true" multiValued="true" termVectors="true" />
*Query: *
There's very little information here -- only a single field definition
and th
Hi
How can I know whether docValues are being used or not?
Please help me here.
On Mon, Jul 8, 2019 at 2:38 PM Midas A wrote:
> Hi ,
>
> I have enabled docValues on the facet field but the query is still taking time.
>
> How can I improve the query time?
> docValues="true" multiValued="true" termV
Hi ,
I have enabled docValues on the facet field but the query is still taking time.
How can I improve the query time?
*Query: *
http://X.X.X.X:
/solr/search/select?df=ttl&ps=0&hl=true&fl=id,upt&f.ind.mincount=1&hl.usePhraseHighlighter=true&f.pref.mincount=1&q.op=OR&fq=NOT+hemp:(%22xgidx29760%22+
FYI
https://issues.apache.org/jira/browse/SOLR-11437
https://issues.apache.org/jira/browse/SOLR-12488
On Thu, Apr 18, 2019 at 7:24 AM Shawn Heisey wrote:
> On 4/17/2019 11:49 PM, John Davis wrote:
> > I did a few tests with our instance solr-7.4.0 and field:* vs field:[* TO
> > *] doesn't seem m
On 4/17/2019 11:49 PM, John Davis wrote:
I did a few tests with our instance solr-7.4.0 and field:* vs field:[* TO
*] doesn't seem materially different compared to has_field:1. If no one
knows why Lucene optimizes one but not another, it's not clear whether it
even optimizes one to be sure.
Que
I did a few tests with our instance solr-7.4.0 and field:* vs field:[* TO
*] doesn't seem materially different compared to has_field:1. If no one
knows why Lucene optimizes one but not another, it's not clear whether it
even optimizes one to be sure.
On Wed, Apr 17, 2019 at 4:27 PM Shawn Heisey w
On 4/17/2019 1:21 PM, John Davis wrote:
If what you describe is the case for the range query [* TO *], why would Lucene
not optimize field:* in a similar way?
I don't know. Low level lucene operation is a mystery to me.
I have seen first-hand that the range query is MUCH faster than the
wildcard quer
If what you describe is the case for the range query [* TO *], why would Lucene
not optimize field:* in a similar way?
On Wed, Apr 17, 2019 at 10:36 AM Shawn Heisey wrote:
> On 4/17/2019 10:51 AM, John Davis wrote:
> > Can you clarify why field:[* TO *] is a lot more efficient than field:*
>
> It's a range
On 4/17/2019 10:51 AM, John Davis wrote:
Can you clarify why field:[* TO *] is a lot more efficient than field:*
It's a range query. For every document, Lucene just has to answer two
questions -- is the value more than any possible value and is the value
less than any possible value. The answ
Can you clarify why field:[* TO *] is a lot more efficient than field:*
On Sun, Apr 14, 2019 at 12:14 PM Shawn Heisey wrote:
> On 4/13/2019 12:58 PM, John Davis wrote:
> > We noticed a sizable performance degradation when we add certain fq
> filters
> > to the query even though the result set does
On 4/13/2019 12:58 PM, John Davis wrote:
We noticed a sizable performance degradation when we add certain fq filters
to the query even though the result set does not change between the two
queries. I would've expected solr to optimize internally by picking the
most constrained fq filter first, bu
Patches welcome, but how would that be done? There’s no fixed schema at the
Lucene level. It’s even possible that no two documents in the index have any
fields in common. Given the structure of an inverted index, answering the
question “for document X does it have any value?” is rather “interes
> field1:* is slow in general for indexed fields because all terms for the
> field need to be iterated (e.g. does term1 match doc1, does term2 match
> doc1, etc)
This feels like something could be optimized internally by tracking
existence of the field in a doc instead of making users index yet an
Also note that field1:* does not necessarily match all documents. A document
without that field will not match. So it really can’t be optimized the way you
might expect since, as Yonik says, all the terms have to be enumerated….
Best,
Erick
> On Apr 13, 2019, at 12:30 PM, Yonik Seeley wrote:
More constrained but matching the same set of documents just guarantees
that there is more information to evaluate per document matched.
For your specific case, you can optimize fq = 'field1:* AND field2:value'
to &fq=field1:*&fq=field2:value
This will at least cause field1:* to be cached and reuse
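Yonik's point above can be made concrete: sending each filter as its own fq parameter lets Solr cache field1:* and field2:value independently in the filterCache, instead of caching the compound clause as one entry. A minimal sketch of building such a request string (field names are from the example above; everything else is illustrative):

```python
from urllib.parse import urlencode

def build_params(q, fqs):
    """Build a Solr query string with one fq parameter per filter,
    so each filter is cached (and reused) independently in the
    filterCache rather than as one compound entry."""
    params = [("q", q)]
    params.extend(("fq", fq) for fq in fqs)
    return urlencode(params)

# Split fq=field1:* AND field2:value into two independent filters.
query_string = build_params("*:*", ["field1:*", "field2:value"])
print(query_string)
```

Other queries that share either filter can then reuse the corresponding cache entry.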
Hi there,
We noticed a sizable performance degradation when we add certain fq filters
to the query even though the result set does not change between the two
queries. I would've expected solr to optimize internally by picking the
most constrained fq filter first, but maybe my understanding is wron
Hi all,
We would like to perform a benchmark of
https://issues.apache.org/jira/browse/SOLR-11831
The patch improves the performance of grouped queries asking only for one
result per group (aka. group.limit=1).
I remember seeing a page showing a benchmark of the query performance on
Wikipedia
Thanks everyone for taking time to respond to my email. I think you are
correct in that the query results might be coming from main memory as I
only had around 7k queries.
However it is still not clear to me, given that everything was being
served from main memory, why it is that I am not able to push
On 4/28/2017 12:43 PM, Toke Eskildsen wrote:
> Shawn Heisey wrote:
>> Adding more shards as Toke suggested *might* help,[...]
> I seem to have phrased my suggestion poorly. What I meant to suggest
> was a switch to a single shard (with 4 replicas) setup, instead of the
> current 2 shards (with 2
Beautiful, thank you.
-Original Message-
From: Walter Underwood [mailto:wun...@wunderwood.org]
Sent: Friday, April 28, 2017 3:07 PM
To: solr-user@lucene.apache.org
Subject: Re: Solr Query Performance benchmarking
I use the JMeter plugins. They’ve been reorganized recently, so they
niel (NIH/NLM) [C]
> wrote:
>
> Walter,
>
> If you can share a pointer to that JMeter add-on, I'd love it.
>
> -Original Message-
> From: Walter Underwood [mailto:wun...@wunderwood.org]
> Sent: Friday, April 28, 2017 2:53 PM
> To: solr-user@lucene
Walter,
If you can share a pointer to that JMeter add-on, I'd love it.
-Original Message-
From: Walter Underwood [mailto:wun...@wunderwood.org]
Sent: Friday, April 28, 2017 2:53 PM
To: solr-user@lucene.apache.org
Subject: Re: Solr Query Performance benchmarking
I use production
I use production logs to get a mix of common and long-tail queries. It is very
hard to get a realistic distribution with synthetic queries.
A benchmark run goes like this, with a big shell script driving it.
1. Reload the collection to clear caches.
2. Split the log into a cache warming set (usu
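The driver script Walter outlines could be sketched roughly like this (a toy split function; the warm-up fraction and log format are assumptions):

```python
def split_log(queries, warm_fraction=0.2):
    """Split a production query log into a cache-warming set and a
    measurement set, per the benchmark procedure sketched above."""
    cut = int(len(queries) * warm_fraction)
    return queries[:cut], queries[cut:]

# Toy log standing in for real production queries.
log = [f"q=term{i}" for i in range(100)]
warm, measure = split_log(log)
print(len(warm), len(measure))  # 20 80
```

The warm set is replayed once to populate caches after the reload; only the measurement set is timed.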
Shawn Heisey wrote:
> Adding more shards as Toke suggested *might* help,[...]
I seem to have phrased my suggestion poorly. What I meant to suggest was a
switch to a single shard (with 4 replicas) setup, instead of the current 2
shards (with 2 replicas).
- Toke
Well, the best way to get no cache hits is to set the cache sizes to
zero ;). That provides worst-case scenarios and tells you exactly how
much you're relying on caches. I'm not talking the lower-level Lucene
caches here.
One thing I've done is use the TermsComponent to generate a list of
terms ac
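One way to act on Erick's TermsComponent idea is to draw random terms from the dumped list when generating benchmark queries, so back-to-back requests don't simply hit the queryResultCache. A small sketch (the field name body and the term list are made up):

```python
import random

def randomized_queries(terms, n, seed=42):
    """Build n term queries drawn pseudo-randomly from a term list
    (e.g. one dumped via the TermsComponent), so repeated benchmark
    requests don't all hit the queryResultCache."""
    rng = random.Random(seed)
    return [f"q=body:{rng.choice(terms)}" for _ in range(n)]

terms = ["solr", "lucene", "facet", "shard", "replica"]
queries = randomized_queries(terms, 3)
print(queries)
```

Fixing the seed keeps runs reproducible while still spreading queries over the term space.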
(aside: Using Gatling or Jmeter?)
Question: How can you easily randomize something in the query so you get no
cache hits? I think there are several levels of caching.
--
Sorry for being brief. Alternate email is rickleir at yahoo dot com
re: the q vs. fq question. My claim (not verified) is that the fastest
of all would be q=*:*&fq={!cache=false}. That would bypass the scoring
that putting it in the "q" clause would entail as well as bypass the
filter cache.
But I have to agree with Walter, this is very suspicious IMO. Here's
what
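For benchmarking, the parameter set Erick describes might be assembled like this (the filter value is a made-up placeholder; {!cache=false} is the local param that keeps the filter out of the filterCache):

```python
def bench_params(filter_query):
    """Params for timing a raw filter: q=*:* skips scoring work for
    the clause under test, and the {!cache=false} local param keeps
    it out of the filterCache so every run pays full evaluation cost."""
    return {"q": "*:*", "fq": "{!cache=false}" + filter_query, "rows": "0"}

params = bench_params("category:books")
print(params["fq"])  # {!cache=false}category:books
```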
More “unrealistic” than “amazing”. I bet the set of test queries is smaller
than the query result cache size.
Results from cache are about 2 ms, but network communication to the shards
would add enough overhead to reach 40 ms.
wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunder
On 4/27/2017 5:20 PM, Suresh Pendap wrote:
> Max throughput that I get: 12000 to 12500 reqs/sec
> 95 percentile query latency: 30 to 40 msec
These numbers are *amazing* ... far better than I would have expected to
see on a 27GB index, even in a situation where it fits entirely into
available memor
On Thu, 2017-04-27 at 23:20 +, Suresh Pendap wrote:
> Number of Solr Nodes: 4
> Number of shards: 2
> replication-factor: 2
> Index size: 55 GB
> Shard/Core size: 27.7 GB
> maxConnsPerHost: 1000
The overhead of sharding is not trivial. Your overall index size is
fairly small, relative to your
Hi,
I am trying to perform Solr query performance benchmarking and trying to
measure the maximum throughput and latency that I can get from a given Solr
cluster.
Following are my configurations
Number of Solr Nodes: 4
Number of shards: 2
replication-factor: 2
Index size: 55 GB
Shard/Core size
Thanks a lot Shawn.
Regards,
Prateek Jain
-Original Message-
From: Shawn Heisey [mailto:apa...@elyograg.org]
Sent: 23 December 2016 01:36 PM
To: solr-user@lucene.apache.org
Subject: Re: DataImportHandler | Query | performance
On 12/23/2016 5:15 AM, Prateek Jain J wrote:
> We n
On 12/23/2016 5:15 AM, Prateek Jain J wrote:
> We need some advice/views on the way we push our documents in SOLR (4.8.1).
> So, here are the requirements:
>
> 1. Document could be from 5 to 100 KB in size.
>
> 2. 10-50 users actively querying solr with different sort of data.
>
> 3.
Hi All,
We need some advice/views on the way we push our documents in SOLR (4.8.1). So,
here are the requirements:
1. Document could be from 5 to 100 KB in size.
2. 10-50 users actively querying solr with different sort of data.
3. Data will be available frequently to be pu
On Mon, 2016-11-14 at 11:36 +0530, Midas A wrote:
> How to improve facet query performance
1) Don't shard unless you really need to. Replicas are fine.
2) If the problem is the first facet call, then enable DocValues and
re-index.
3) Keep facet.limit <= 100, especially if you shard
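As an illustration of tip 2, a docValues-enabled facet field in the schema might look like the fragment below (field name and type are placeholders; exact attributes depend on your schema version):

```xml
<!-- Hypothetical facet field: with docValues enabled, faceting reads
     the on-disk docValues structure instead of un-inverting the field
     on the first facet request after each commit. -->
<field name="category" type="string" indexed="true" stored="false"
       docValues="true" multiValued="true"/>
```

Remember that changing docValues on an existing field requires a full re-index.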
How to improve facet query performance
Good tip Rick,
I'll dig in and make sure everything is set up correctly.
Thanks!
-D
Dave Seltzer
Chief Systems Architect
TVEyes
(203) 254-3600 x222
On Wed, Nov 2, 2016 at 9:05 PM, Rick Leir wrote:
> Here is a wild guess. Whenever I see a 5 second delay in networking, I
> think DNS timeouts.
Here is a wild guess. Whenever I see a 5 second delay in networking, I
think DNS timeouts. YMMV, good luck.
cheers -- Rick
On 2016-11-01 04:18 PM, Dave Seltzer wrote:
Hello!
I'm trying to utilize Solr Cloud to help with a hash search problem. The
record set has only 4,300 documents.
When I r
Hello!
I'm trying to utilize Solr Cloud to help with a hash search problem. The
record set has only 4,300 documents.
When I run my search against a single core I get results on the order of
10ms. When I run the same search against Solr Cloud results take about
5,000 ms.
Is there something about
Hi
I have a few filter queries that use cross-core joins to filter
documents. After I inverted those joins they became slower. So it looks
something like this:
I used to query "product" core with query that contains fq={!join to=tags
from=preferred_tags fromIndex=user}(country:US AND
...)&fq=
> Last week, I tried to re-index the whole collection from scratch, using
source data. Query performance on the resulting re-index proved to be abysmal,
I could get barely 10% of my previous query throughput, and even that was at
latencies that were orders of magnitude higher than what I had
he whole collection from scratch, using source
data. Query performance on the resulting re-index proved to be abysmal, I could
get barely 10% of my previous query throughput, and even that was at latencies
that were orders of magnitude higher than what I had in production.
I hooked up some CPU profil
I want to search document from
>> all shards, it will slow down and take too long time.
>>
>> I know in case of solr Cloud, it will query all shard node and then return
>> result. Is there any way to search document in all shard with best
>> performance(qp
>
>
> --
> View this message in context:
> http://lucene.472066.n3.nabble.com/SolrCloud-Query-performance-degrades-with-multiple-servers-tp4024660p4287763.html
> Sent from the Solr - User mailing list archive at Nabble.com.
>
there any way to search document in all shard with best
performance(qps)
ly 10K records in one shard. What's your index/document size?
>
> Thanks,
> Susheel
>
> On Mon, Jul 18, 2016 at 2:08 AM, kasimjinwala
> wrote:
>
>> currently I am using solrCloud 5.0 and I am facing query performance issue
>> while using 3 implicit shar
.
>
> please provide comment or suggestion to solve above issue
>
>
>
>
currently I am using solrCloud 5.0 and I am facing query performance issue
while using 3 implicit shards, each shard contain around 10K records.
when I specify the shards parameter (*shards=shard1*) in the query it gives
30K-35K qps, but when removing the shards parameter from the query it gives
*1000
>> Best,
>> Erick
>>
>> On Mon, Apr 25, 2016 at 3:48 PM, Jay Potharaju
>> wrote:
>> > Hi,
>> > I am trying to measure how well our queries are performing, i.e. how long are
>> they
>> > taking. In order to measure query speed I am u
> Hi,
> > I am trying to measure how well our queries are performing, i.e. how long are
> they
> > taking. In order to measure query speed I am using solrmeter with 50k
> > unique filter queries. And then checking if any of the queries are slower
> > than 50ms. Is this a goo
with 50k
> unique filter queries. And then checking if any of the queries are slower
> than 50ms. Is this a good approach to measure query performance?
>
> Are there any guidelines on how to measure if a given instance can handle a
> given number of qps(query per sec)? For example if
Hi,
I am trying to measure how well our queries are performing, i.e. how long they are
taking. In order to measure query speed I am using solrmeter with 50k
unique filter queries. And then checking if any of the queries are slower
than 50ms. Is this a good approach to measure query performance?
Are
On 4/18/2016 5:06 AM, Mugeesh Husain wrote:
> 1.)solr normal query(q=*:*) vs facet query(facet.query="abc") ?
> 2.)solr normal query(q=*:*) vs facet
> search(facet=true&facet.field=column_name) ?
> 3.)solr filter query(q=Column:some value) vs facet query(facet.query="abc")
> ?
> 4.)solr normal que
cet.query="abc")
?
4.)solr normal query(q=*:*) vs filter query(q=column:some value) ?
Also provide some good tutorial for above these things.
Thanks
some environments."
Thanks & Regards,
Bhaumik Joshi
From: billnb...@gmail.com
Sent: Monday, April 11, 2016 7:07 AM
To: solr-user@lucene.apache.org
Subject: Re: Soft commit does not affecting query performance
Why do you think it would ?
Bill Bell
Se
Why do you think it would ?
Bill Bell
Sent from mobile
> On Apr 11, 2016, at 7:48 AM, Bhaumik Joshi wrote:
>
> Hi All,
>
> We are doing query performance test with different soft commit intervals. In
> the test with 1sec of soft commit interval and 1min of soft commit int
Hi All,
We are doing query performance tests with different soft commit intervals. In
the tests with a 1 sec soft commit interval and a 1 min soft commit interval we
didn't notice any improvement in query timings.
We did test with SolrMeter (Standalone java tool for stress tests with
out.
Thank
ively, there can be a lot of different queries. If I still want to
> > take the advantage of the filterCache, can I limit the size of the three
> > caches so that the RAM usage will be under control?
> >
> > Thanks
> >
> >
> >
ed on fresh index data.
>
> Cumulatively, there can be a lot of different queries. If I still want to
> take the advantage of the filterCache, can I limit the size of the three
> caches so that the RAM usage will be under control?
>
> Thanks
>
>
>
filterCache, can I limit the size of the three
caches so that the RAM usage will be under control?
Thanks
e the definition of docValues=true for integer type did not
> work with faceted search. There was a time I accidentally used filter
> query
> with the string type property and I found the query performance degraded
> quite a lot.
>
> Is it generally true that fq works better with int
ype for two properties: DateDep and
Duration since the definition of docValues=true for integer type did not
work with faceted search. There was a time I accidentally used filter query
with the string type property and I found the query performance degraded
quite a lot.
Is it generally true that fq wo
; filter query cache.
>
> To load up low level lucene cache without creating filtercache/document
> cache etc, can I turn off the three cache and send a lot of queries to Solr
> before I start to test the performance of each individual queries?
>
> Thanks
>
>
>
>
>
t of queries to Solr
before I start to test the performance of each individual queries?
Thanks
ith or without the
> two facets after indexing the data (to take advantage of cache warming).
>
> Thanks
>
>
>
after indexing the data (to take advantage of cache warming).
Thanks
e"
> size="4096"
> initialSize="1024"
> autowarmCount="32"/>
>
>class="solr.LRUCache"
> size="512"
> initialSize="512"
> autowarmCount="32"/>
>
> class="solr.LRUCache"
> size="1"
> initialSize="256"
> autowarmCount="0"/>
>
>
> Thanks
>
>
>
>
ilter queries. This is due to the limited values for some filter
queries.
Thanks
, it will help you a lot !
>
> Cheers
>
> 2015-07-21 16:49 GMT+01:00 Nagasharath :
>
>> Any recommended tool to test the query performance would be of great help.
>>
>> Thanks
>
>
>
> --
> --
>
> Benedetti A
SolrMeter mate,
http://code.google.com/p/solrmeter/
Take a look, it will help you a lot !
Cheers
2015-07-21 16:49 GMT+01:00 Nagasharath :
> Any recommended tool to test the query performance would be of great help.
>
> Thanks
>
--
--
Benedetti Alessan
Any recommended tool to test the query performance would be of great help.
Thanks
Shawn, thank you very much for that explanation. It helps a lot.
Cheers, Ryan
On Wed, May 20, 2015 at 5:07 PM, Shawn Heisey wrote:
> On 5/20/2015 5:57 PM, Ryan Cutter wrote:
> > GC is operating the way I think it should but I am lacking memory. I am
> > just surprised because indexing is perf
On 5/20/2015 5:57 PM, Ryan Cutter wrote:
> GC is operating the way I think it should but I am lacking memory. I am
> just surprised because indexing is performing fine (documents going in) but
> deletions are really bad (documents coming out).
>
> Is it possible these deletes are hitting many seg
GC is operating the way I think it should but I am lacking memory. I am
just surprised because indexing is performing fine (documents going in) but
deletions are really bad (documents coming out).
Is it possible these deletes are hitting many segments, each of which I
assume must be re-built? An
On 5/20/2015 5:41 PM, Ryan Cutter wrote:
> I have a collection with 1 billion documents and I want to delete 500 of
> them. The collection has a dozen shards and a couple replicas. Using Solr
> 4.4.
>
> Sent the delete query via HTTP:
>
> http://hostname:8983/solr/my_collection/update?stream.bo
I have a collection with 1 billion documents and I want to delete 500 of
them. The collection has a dozen shards and a couple replicas. Using Solr
4.4.
Sent the delete query via HTTP:
http://hostname:8983/solr/my_collection/update?stream.body=
source:foo
Took a couple minutes and several repli
Butler wrote:
> We currently have a SolrCloud cluster that contains two collections which
> we toggle between for querying and indexing. When bulk indexing to our
> “offline" collection, our query performance from the “online” collection
> suffers somewhat. When segment merges occur
We currently have a SolrCloud cluster that contains two collections which we
toggle between for querying and indexing. When bulk indexing to our “offline"
collection, our query performance from the “online” collection suffers
somewhat. When segment merges occur, it gets downright abysma
On 11/28/2013 3:01 AM, Ahmet Arslan wrote:
> Are you sure you are using the same exact parameters? I would include
> echoParams=all and compare parameters. Only the wt parameter would be different:
> wt=javabin for SolrJ
You can also look at the Solr log, which if you are logging at the
normal leve
Hi Parsi,
Are you sure you are using the same exact parameters? I would include
echoParams=all and compare parameters. Only the wt parameter would be different:
wt=javabin for SolrJ
On Thursday, November 28, 2013 11:42 AM, Prasi S wrote:
Hi,
We recently saw a behavior which I wanted to confirm
Hi,
We recently saw a behavior which I wanted to confirm. We are using SolrJ to
query Solr. From the code, we use HttpSolrServer to issue the query and
return the response.
1. When a sample query is issued using SolrJ, we get the QTime as 4 seconds.
The same query when we hit against solr in the browser,
Ah, got it now - thanks for the explanation.
On Sat, Sep 28, 2013 at 3:33 AM, Upayavira wrote:
> The thing here is to understand how a join works.
>
> Effectively, it does the inner query first, which results in a list of
> terms. It then effectively does a multi-term query with those values.
>
The thing here is to understand how a join works.
Effectively, it does the inner query first, which results in a list of
terms. It then effectively does a multi-term query with those values.
q=size:large {!join fromIndex=other from=someid
to=someotherid}type:shirt
Imagine the inner join returned
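The two-phase join Upayavira describes can be mimicked in miniature over plain Python dicts (field names follow the example query above; the documents are invented):

```python
def join_filter(from_docs, from_field, to_docs, to_field, inner_pred):
    """Two-phase join as described above: run the inner query first,
    collect the join-key terms it matches, then match outer documents
    whose to_field holds any of those terms."""
    keys = {d[from_field] for d in from_docs if inner_pred(d)}
    return [d for d in to_docs if d.get(to_field) in keys]

# Inner query type:shirt runs against the "other" core first.
other = [{"someid": 1, "type": "shirt"}, {"someid": 2, "type": "hat"}]
main = [{"someotherid": 1, "size": "large"}, {"someotherid": 2, "size": "large"}]
matched = join_filter(other, "someid", main, "someotherid",
                      lambda d: d["type"] == "shirt")
print(matched)  # [{'someotherid': 1, 'size': 'large'}]
```

This is why join cost tracks the number of distinct terms the inner query produces, not just its hit count.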
Hi Joel,
I tried this patch and it is quite a bit faster. Using the same query on a
larger index (500K docs), the 'join' QTime was 1500 msec, and the 'hjoin'
QTime was 100 msec! This was for true for large and small result sets.
A few notes: the patch didn't compile with 4.3 because of the
SolrCo
It looks like you are using int join keys so you may want to check out
SOLR-4787, specifically the hjoin and bjoin.
These perform well when you have a large number of results from the
fromIndex. If you have a small number of results in the fromIndex the
standard join will be faster.
On Wed, Sep
I forgot to mention - this is Solr 4.3
Peter
On Wed, Sep 25, 2013 at 3:38 PM, Peter Keegan wrote:
> I'm doing a cross-core join query and the join query is 30X slower than
> each of the 2 individual queries. Here are the queries:
>
> Main query: http://localhost:8983/solr/mainindex/select?q=ti
I'm doing a cross-core join query and the join query is 30X slower than
each of the 2 individual queries. Here are the queries:
Main query: http://localhost:8983/solr/mainindex/select?q=title:java
QTime: 5 msec
hit count: 1000
Sub query: http://localhost:8983/solr/subindex/select?q=+fld1:[0.1 TO
sing softCommit=true in update url and check if it
> gives us desired performance.
>
> Thanks for looking into this. Appreciate your help.
>
> -Original Message-
> From: Erick Erickson [mailto:erickerick...@gmail.com]
> Sent: Tuesday, August 13, 2013 8:12 AM
> To:
check if it
gives us desired performance.
Thanks for looking into this. Appreciate your help.
-Original Message-
From: Erick Erickson [mailto:erickerick...@gmail.com]
Sent: Tuesday, August 13, 2013 8:12 AM
To: solr-user@lucene.apache.org
Subject: Re: Solr4 update and query perfor
t and do
> auto hard commit every 10-15 minutes.
>
> 3. We're not seeing improved query performance compared to Solr3.
> Queries which took 3-5 seconds in Solr3 (300 mil docs) are taking 20
> seconds with Solr4. We think this could be due to frequent hard commits and
> s
not seeing improved query performance compared to Solr3. Queries
which took 3-5 seconds in Solr3 (300 mil docs) are taking 20 seconds with
Solr4. We think this could be due to frequent hard commits and searcher
refresh. Do you think when we change to soft commit and increase the batch
size, we w
:34 PM
To: solr-user@lucene.apache.org
Subject: Re: Query Performance
Actually I have to rewrite my question:
Query 1:
q=*:*&rows=row_count&sort=id asc&start=X
and
Query2:
q={X TO *}&rows=row_count&sort=id asc&start=0
2013/7/29 Jack Krupansky
The second query excludes do
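Furkan's Query 2 is essentially keyset paging: rather than a deep start offset, the last-seen sort key bounds the next page. A toy sketch of building successive page queries (the field name id is assumed, with an exclusive lower bound as in the example):

```python
def next_page_query(last_id, rows):
    """Keyset-style paging: filter past the last seen sort key instead
    of using a deep start offset, so Solr need not collect and skip
    start+rows documents on every page."""
    q = "*:*" if last_id is None else f"id:{{{last_id} TO *]"
    return f"q={q}&rows={rows}&sort=id asc&start=0"

print(next_page_query(None, 10))
print(next_page_query("doc0042", 10))  # q=id:{doc0042 TO *]&rows=10&sort=id asc&start=0
```

Later Solr releases offer cursorMark, which implements the same idea natively.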
> Sent: Sunday, July 28, 2013 5:06 PM
> To: solr-user@lucene.apache.org
> Subject: Query Performance
>
>
> What is the difference between:
>
> q=*:*&rows=row_count&sort=id asc
>
> and
>
> q={X TO *}&rows=row_count&sort=id asc
>
> Does the first one
-
From: Furkan KAMACI
Sent: Sunday, July 28, 2013 5:06 PM
To: solr-user@lucene.apache.org
Subject: Query Performance
What is the difference between:
q=*:*&rows=row_count&sort=id asc
and
q={X TO *}&rows=row_count&sort=id asc
> Does the first one try to get all the documents but c
What is the difference between:
q=*:*&rows=row_count&sort=id asc
and
q={X TO *}&rows=row_count&sort=id asc
Does the first one try to get all the documents but cut the result, or are they
the same, or...? What happens in the underlying process of Solr for those two
queries?
Hi,
Does that OR query need to be scored?
Does it repeat?
If answers are no and yes, you should use fq, not q.
Otis
--
Solr & ElasticSearch Support -- http://sematext.com/
Performance Monitoring -- http://sematext.com/spm
On Wed, Jul 3, 2013 at 12:07 PM, Kevin Osborn wrote:
> Also, what is th