We are using Solr 5.4.0 in the production environment and are planning to
migrate to Solr 8.5.
We have observed that in Solr 8.5, if we keep the `sow` (split on whitespace)
parameter as false (the default), the query is parsed as field-centric, and if
`sow` is set to true, the query is parsed as term-centric.
Our
Hi Community members,
I tried the following approaches, but none of them worked for my use case.
1. To achieve an exact match in Solr we have kept sow='false' (Solr will
use the field-centric matching mode) and grouped multiple similar fields into
one copyField. It does solve the problem of
In our index, we have a few fields defined with the `ExternalFileField` field
type. We decided to use docValues for such fields. Here is the field type
definition:
OLD => (ExternalFileField)
NEW => (docValues)
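The actual definitions did not survive above; a minimal sketch of what such a change might look like (the field and type names here are hypothetical, not the real schema) is:

```xml
<!-- OLD: external file field, values read from external_<fieldname> files -->
<fieldType name="extfile" class="solr.ExternalFileField" valType="float"/>
<field name="popularity" type="extfile"/>

<!-- NEW: docValues-backed numeric field -->
<fieldType name="pfloat" class="solr.FloatPointField" docValues="true"/>
<field name="popularity" type="pfloat" indexed="false" stored="false" docValues="true"/>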
After this modification we started getting the following timeout warning
messages:
```
The request took too long to iterate over doc values. Timeout: ...
```
Hey Erick,
In the cases for which we are getting this warning, I'm not able to extract the
exact Solr query. Instead, the logger is logging the `parsedquery` for such cases.
Here is one example:
2020-09-29 13:09:41.279 WARN (qtp926837661-82461) [c:mycollection
s:shard1_0 r:core_node5
raj.yadav wrote
> In cases for which we are getting this warning, I'm not able to extract
> the
> `exact solr query`. Instead logger is logging `parsedquery ` for such
> cases.
> Here is one example:
>
>
> 2020-09-29 13:09:41.279 WARN (qtp926837661-82461) [c:m
harjags wrote
> Below errors are very common in 7.6 and we have solr nodes failing with
> tanking memory.
>
> The request took too long to iterate over terms. Timeout: timeoutAt:
> 162874656583645 (System.nanoTime(): 162874701942020),
>
as ExternalFileField for functional query?
2. Why did I get warning messages when the system was under load, but not when
there was no load?
When we were performing a load test (with the same load scale) with the
ExternalFileField type, we were not getting any warning messages in our logs.
raj.yadav wrote
> Hey Erick,
>
>
Hi,
I went through the other queries for which we are getting the `The request took
too long to iterate over doc values` warning. As pointed out by Erick, I have
cross-checked all the fields being used in the query, and there is no field we
are searching against that has indexed=false and
Erik Hatcher-4 wrote
> Wouldn’t a “string” field be as good, if not better, for this use case?
What is the rationale behind this type change to 'string'? How will it speed
up search/filtering? Will it not increase the index size? In general, the
string type takes more storage space than int (not
Erick Erickson wrote
> Also, the default pint type is not as efficient for single-value searches
> like this, the trie fields are better. Trie support will be kept until
> there’s a good alternative for the single-value lookup with pint.
>
> So for what you’re doing, I’d change to TrieInt,
Hi Chris,
Chris Hostetter-3 wrote
> ...ExternalFileField is "special" and as noted in its docs it is not
> searchable -- it doesn't actually care what the indexed (or "stored")
> properties are ... but the default values of those properties as assigned
> by the schema defaults are still
I have a use case where none of the documents in my Solr index are changing,
but I still want to open a new searcher through the curl API.
On executing the below curl command
curl
"XXX.XX.XX.XXX:9744/solr/mycollection/update?openSearcher=true&commit=true"
it doesn't open a new searcher.
Below is what I
Erick Erickson wrote
> Ah, ok. That makes sense. I wonder if your use-case would be better
> served, though, by “in place updates”, see:
> https://lucene.apache.org/solr/guide/8_1/updating-parts-of-documents.html
> This has been around since Solr 6.5…
As per the documentation, `in place update` is
Shawn Heisey-2 wrote
> Atomic updates are nearly identical to simple indexing, except that the
> existing document is read from the index to populate a new document
> along with whatever updates were requested, then the new document is
> indexed and the old one is deleted.
As per the above
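Shawn's description of atomic updates can be sketched in a few lines of Python. This is only an illustration of the read-merge-reindex semantics he describes, not Solr's actual implementation:

```python
# Simulate Solr's atomic-update semantics: the existing document is read
# from the index, the requested updates are applied to a fresh copy, and
# the new document replaces the old one.
def atomic_update(index, doc_id, updates):
    existing = index[doc_id]          # read the stored document from the index
    new_doc = dict(existing)         # populate a new document from the old one
    for field, op in updates.items():
        if "set" in op:              # replace the field value
            new_doc[field] = op["set"]
        elif "inc" in op:            # increment a numeric field
            new_doc[field] = new_doc.get(field, 0) + op["inc"]
    index[doc_id] = new_doc          # index the new doc; the old one is gone
    return new_doc

index = {"1": {"id": "1", "title": "solr", "popularity": 4}}
atomic_update(index, "1", {"popularity": {"inc": 1}})
# "popularity" is incremented while "title" is preserved from the old document
```

Note that the whole document is rewritten even though only one field changed, which is exactly why in-place updates on docValues-only fields can be cheaper.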
Vadim Ivanov wrote
> Hello, Raj
>
> I've just checked my Schema page for the external file field.
>
> Solr version 8.3.1 gives only these parameters for ExternalFileField:
>
> Field: fff
> Field-Type: org.apache.solr.schema.ExternalFileField
> Flags: UnInvertible, Omit Term
Chris Hostetter-3 wrote
> : <fieldType ... class="solr.ExternalFileField" valType="float"/>
> : ...
> ...
> : I was expecting that for field "fieldA" indexed would be marked as false
> : and it would not be part of the index. But the Solr admin "SCHEMA page"
> : (we get this option after
matthew sporleder wrote
> Are you stuck in iowait during that commit?
I am not sure how to determine that; could you help me here?
--
Sent from: https://lucene.472066.n3.nabble.com/Solr-User-f472068.html
matthew sporleder wrote
> On unix the top command will tell you. On windows you need to find
> the disk latency stuff.
Will check this and report here
matthew sporleder wrote
> Are you on a spinning disk or on a (good) SSD?
We are using SSDs.
matthew sporleder wrote
> Anyway, my theory is
Hi Everyone,
matthew sporleder wrote
> Are you stuck in iowait during that commit?
During the commit operation, there is no iowait.
In fact, most of the time the CPU utilization percentage is very low.
As I mentioned in my previous post, we are getting `SolrCmdDistributor
Hey Karl,
Can you elaborate more about your system? How many shards does your
collection have, and what is the replica type? Are you using an external
ZooKeeper? It looks like (from the logs) that you are running Solr in
SolrCloud mode.
matthew sporleder wrote
> Is zookeeper on the solr hosts or on its own? Have you tried
> opensearcher=false (soft commit?)
1. We are using ZooKeeper in ensemble mode. It is hosted on 3 separate nodes.
2. Soft commit (openSearcher=false) is working fine. All the shards are
getting the commit request.
Hi Folks,
Do let me know if any more information is required to debug this.
Regards,
Raj
Hey All,
We have updated our system from Solr 5.4 to Solr 8.5.2, and we are suddenly
seeing a lot of the below errors in our logs.
HttpChannelState org.eclipse.jetty.io.EofException: Reset
cancel_stream_error
Is this related to some system-level or Solr-level config?
How do I find the cause of
matthew sporleder wrote
> I would stick to soft commits and schedule hard-commits as
> spaced-out-as-possible in regular maintenance windows until you can
> find the culprit of the timeout.
>
> This way you will have very focused windows for intense monitoring
> during the hard-commit runs.
Hi All,
I tried debugging but was unable to find any solution. Do let me know in case
the details/logs shared by me are not sufficient/clear.
Regards,
Raj
Hi everyone,
As per the suggestions in the previous post (by Erick and Shawn), we made the
following changes.
OLD
NEW
*Reduced JVM heap size from 30GB to 26GB*
GC setting:
GC_TUNE=" \
-XX:+UseG1GC \
-XX:+PerfDisableSharedMem \
-XX:+ParallelRefProcEnabled \
-XX:G1HeapRegionSize=8m \
Hi All,
As I mentioned in my previous post, reloading/refreshing of the external
file is consuming most of the time during a commit operation.
In order to nullify the impact of the external files, I deleted the external
files from all the shards and issued a commit through the curl command. Commit
Hi All,
While we investigate this issue further, can anyone please share what other
ways we can issue a commit, or point me to existing documentation that has a
relevant example?
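For reference, a few common ways to trigger a commit; the host, port, and collection name here are placeholders, not the actual setup:

```
# Explicit hard commit via the update handler
curl "http://localhost:8983/solr/mycollection/update?commit=true"

# Soft commit (makes changes visible without flushing segments to disk)
curl "http://localhost:8983/solr/mycollection/update?softCommit=true"

# Commit via a JSON update body
curl -X POST -H 'Content-Type: application/json' \
  "http://localhost:8983/solr/mycollection/update" -d '{ "commit": {} }'
```

Automatic commits can also be configured in solrconfig.xml via the `<autoCommit>` and `<autoSoftCommit>` settings.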
Regards,
Raj
elivis wrote
> See:
> https://lucene.472066.n3.nabble.com/SolrServerException-Timeout-occured-while-waiting-response-from-server-tc4464632.html
>
> Maybe this will help somebody. I was dealing with exact same problem. We
> are
> running on VMs, and all of our timeout problems went away after we
>
Solr Setup: (running in SolrCloud mode)
It has 6 shards, and each shard has only one replica (which is also the
leader); the replica type is NRT.
Each shard is hosted on a separate physical host.
Zookeeper => We are using an external ZooKeeper ensemble (a 3-node cluster).
Shard and Host
Hi All,
For further investigation, I have raised a JIRA ticket.
https://issues.apache.org/jira/browse/SOLR-15045
In case, anyone has any information to share, feel free to mention it here.
Regards,
Raj
Thanks, Shawn and Erick.
We are step by step trying out the changes suggested in your post.
Will get back once we have some numbers.
Recently we modified the `noCFSRatio` parameter of our merge policy.
8
5
50.0
4000
0.0
This is our current merge policy. Earlier, `noCFSRatio` was set to `0.1`.
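The XML wrapping of the values above did not survive; a hypothetical sketch of how the `noCFSRatio` change might appear in a solrconfig.xml merge-policy section (the factory class here is an assumption) is:

```xml
<mergePolicyFactory class="org.apache.solr.index.TieredMergePolicyFactory">
  <!-- was 0.1; 0.0 disables compound-file segments entirely -->
  <double name="noCFSRatio">0.0</double>
</mergePolicyFactory>
```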
Generally, to reflect any changes to solrconfig, we reload the collection. But
we stop
Hey Scott,
We have also recently migrated to Solr 8.5.2 and are facing a similar issue.
Were you able to resolve this?
Hi everyone,
We have two parallel systems: one is Solr 8.5.2 and the other is Solr 5.4.
In Solr 5.4, the commit time with openSearcher=true is 10 to 12 minutes, while
in Solr 8 it's around 25 minutes.
This is our current caching policy of Solr 8.
In Solr 5, we are using FastLRUCache
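The cache definitions themselves did not come through above; for comparison, a sketch of typical filterCache entries in each version (the sizes here are placeholders, not the actual values) would be:

```xml
<!-- Solr 5.x style -->
<filterCache class="solr.FastLRUCache" size="512" initialSize="512" autowarmCount="0"/>

<!-- Solr 8.x default implementation -->
<filterCache class="solr.CaffeineCache" size="512" initialSize="512" autowarmCount="0"/>
```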
Hi Everyone,
We are using Solr 8.5.2 (SolrCloud mode) with an external ZooKeeper ensemble
(hosted on separate nodes).
All of a sudden we are seeing a spike in CPU, but at the same time neither is
any heavy indexing being performed nor is there any sudden increase in request
rate.
Collection info: