I’d ignore the form of the query for the present, I think that’s a red herring.
Start by taking all your sort clauses off. Then add them back one by one (you
have
to restart Solr between these experiments). My bet: your problem is
“uninverting” and you’ll see your startup speed get worse the more sort clauses you add back.
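As a concrete sketch of that experiment (field names borrowed from the query quoted further down the thread), the idea is to start from the bare filtered query and reintroduce one sort clause per restart, timing how long the core takes to become responsive each time:

q=*:*&fq=PARENT_DOC_ID:100&fq=PHY_KEY1:"JACK"&rows=1000
q=*:*&fq=PARENT_DOC_ID:100&fq=PHY_KEY1:"JACK"&rows=1000&sort=MODIFY_TS desc
q=*:*&fq=PARENT_DOC_ID:100&fq=PHY_KEY1:"JACK"&rows=1000&sort=MODIFY_TS desc,LOGICAL_SECT_NAME asc

If uninverting is the culprit, each additional sort on a field without docValues should make the warm-up measurably worse.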
Hi Shawn,
It's a vague question and I haven't tried it out yet.
Can I instead share the query below:
Basically instead of
q=*:*&fq=PARENT_DOC_ID:100&fq=MODIFY_TS:[1970-01-01T00:00:00Z TO
*]&fq=PHY_KEY2:"HQ012206"&fq=PHY_KEY1:"BAMBOOROSE"&rows=1000&sort=MODIFY_TS
desc,LOGICAL_SECT_NAME asc,
Great, thanks Erick
On Mon, 8 Jun 2020 at 13:22, Erick Erickson wrote:
> It’s _bounded_ by MaxDoc/8 + (some overhead). The overhead is
> both the map overhead and the representation of the query.
>
> This is an upper bound, the full bitset is not stored if there
> are few entries that match the
It’s _bounded_ by MaxDoc/8 + (some overhead). The overhead is
both the map overhead and the representation of the query.
This is an upper bound, the full bitset is not stored if there
are few entries that match the filter, in that case the
doc IDs are stored. Consider if maxDoc is 1M and only 2 docs match: storing
two doc IDs is far cheaper than a full 1M-bit bitset.
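Applied to the ~497,767,038-document core discussed in this thread, the bound works out roughly like this (the 512-entry figure is just the stock example filterCache size, assumed here for illustration):

497,767,038 docs / 8 bits per byte ≈ 62,220,880 bytes ≈ 59 MiB per cached filter, worst case
512 entries × 59 MiB ≈ 30 GiB if every entry held a full bitset (most real entries are far smaller)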
Sorry to hijack this a little bit. Shawn, what's the calculation for the
size of the filter cache?
Is that 1 bit per document in the core / shard?
Thanks
On Fri, 5 Jun 2020 at 17:20, Shawn Heisey wrote:
> On 6/5/2020 12:17 AM, Srinivas Kashyap wrote:
> > q=*:*&fq=PARENT_DOC_ID:100&fq=MODIFY_TS:[
On 6/5/2020 12:17 AM, Srinivas Kashyap wrote:
q=*:*&fq=PARENT_DOC_ID:100&fq=MODIFY_TS:[1970-01-01T00:00:00Z TO
*]&fq=PHY_KEY2:"HQ012206"&fq=PHY_KEY1:"JACK"&rows=1000&sort=MODIFY_TS
desc,LOGICAL_SECT_NAME asc,TRACK_ID desc,TRACK_INTER_ID asc,PHY_KEY1 asc,PHY_KEY2 asc,PHY_KEY3 asc,PHY_KEY4 asc,PH
> Even for the simple query with the filter queries mentioned above, it is
> consuming JVM memory. So how much memory, or what configuration in
> solrconfig.xml, should I be setting to make it work?
>
> Thanks,
> Srinivas
>
> From: Jörn Franke
> Sent: 05 June 2020 12:30
>
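On the solrconfig.xml side of that question, a minimal sketch of capping the filterCache might look like the following (assuming Solr 8.x, where CaffeineCache is the shipped default; the size values are placeholders to tune, not recommendations from this thread):

<query>
  <!-- a smaller filterCache keeps the MaxDoc/8-sized entries from piling up on the heap -->
  <filterCache class="solr.CaffeineCache"
               size="64"
               initialSize="64"
               autowarmCount="0"/>
</query>

With roughly 500 million documents, each fully populated entry can approach the MaxDoc/8 bound discussed above, so a smaller cache trades hit rate for a tighter heap ceiling.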
Sent: 05 June 2020 12:30
To: solr-user@lucene.apache.org
Subject: Re: Solr takes time to warm up core with huge data
I think DIH is the wrong solution for this. If you do an external custom load
you will probably be much faster.
You have too much JVM memory from my point of view. Reduce it to eight or
similar.
I think DIH is the wrong solution for this. If you do an external custom load
you will probably be much faster.
You have too much JVM memory from my point of view. Reduce it to eight or
similar.
It seems you are just exporting data, so you are better off working with the
export handler.
Add docValues to the fields you export and sort on.
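A sketch of what that export-handler suggestion could look like as a request (core name and field list are placeholders; /export requires every fl and sort field to have docValues, and it streams the whole matching result set rather than paging with rows):

http://localhost:8983/solr/<core>/export?q=PARENT_DOC_ID:100&fq=PHY_KEY1:"JACK"&sort=MODIFY_TS desc&fl=PARENT_DOC_ID,MODIFY_TS,PHY_KEY1,PHY_KEY2

That makes it a better fit than rows=1000 on /select when the goal is simply to pull all matching rows out of the index.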
Thanks Shawn,
The filter queries are not complex. Below are the filter queries I’m running
for the corresponding schema entry:
q=*:*&fq=PARENT_DOC_ID:100&fq=MODIFY_TS:[1970-01-01T00:00:00Z TO
*]&fq=PHY_KEY2:"HQ012206"&fq=PHY_KEY1:"JACK"&rows=1000&sort=MODIFY_TS
desc,LOGICAL_SECT_NAME asc,TRACK
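An assumed sketch of docValues-enabled definitions for those sort fields (the field types are guesses, and enabling docValues on existing fields requires a full reindex):

<field name="MODIFY_TS"         type="pdate"  indexed="true" stored="true" docValues="true"/>
<field name="LOGICAL_SECT_NAME" type="string" indexed="true" stored="true" docValues="true"/>
<field name="PHY_KEY1"          type="string" indexed="true" stored="true" docValues="true"/>
<field name="PHY_KE2"           type="string" indexed="true" stored="true" docValues="true"/>

With docValues in place, sorting on these fields no longer has to uninvert them onto the heap, which is the cost Erick points to elsewhere in the thread.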
On 6/4/2020 9:51 PM, Srinivas Kashyap wrote:
We are on Solr 8.4.1 in standalone server mode. We have a core with
497,767,038 records indexed. It took around 32 hours to load the data through DIH.
The disk occupancy is shown below:
82G /var/solr/data//data/index
When I restarted the Solr instance and went to this core to query, it took a long time for the core to warm up.
Hello,
We are on Solr 8.4.1 in standalone server mode. We have a core with
497,767,038 records indexed. It took around 32 hours to load the data through DIH.
The disk occupancy is shown below:
82G /var/solr/data//data/index
When I restarted the Solr instance and went to this core to query, it took a long time for the core to warm up.
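On the warm-up symptom itself, one option (an assumption, not something suggested in this thread) is to pre-warm the first searcher with a representative sorted query in solrconfig.xml, so the cost is paid at startup rather than by the first user query; with roughly 500 million documents this will make startup correspondingly slower:

<!-- inside the <query> section of solrconfig.xml -->
<listener event="firstSearcher" class="solr.QuerySenderListener">
  <arr name="queries">
    <lst>
      <str name="q">*:*</str>
      <str name="fq">PARENT_DOC_ID:100</str>
      <str name="sort">MODIFY_TS desc,LOGICAL_SECT_NAME asc</str>
      <str name="rows">0</str>
    </lst>
  </arr>
</listener>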