The solrconfig.xml settings below resolved the TIMED_WAIT in
ConcurrentMergeScheduler.doStall(). Thanks to Shawn and Erick for their
pointers.
...
30
100
30.0
18
6
300
...
${solr.autoCommit.maxTime:3}
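The element names were stripped from the pasted config above, so the exact mapping of those numbers to settings is lost. As a hedged sketch only (the element choices and values here are my assumptions, not the poster's confirmed config), a ConcurrentMergeScheduler / autoCommit stanza typically looks like:

```xml
<config>
  <indexConfig>
    <mergeScheduler class="org.apache.lucene.index.ConcurrentMergeScheduler">
      <!-- doStall() pauses indexing threads once pending merges exceed
           maxMergeCount, so keep it comfortably above maxThreadCount -->
      <int name="maxMergeCount">18</int>
      <int name="maxThreadCount">6</int>
    </mergeScheduler>
  </indexConfig>
  <updateHandler class="solr.DirectUpdateHandler2">
    <autoCommit>
      <!-- illustrative value; the thread's actual setting was elided -->
      <maxTime>${solr.autoCommit.maxTime:10000}</maxTime>
      <openSearcher>false</openSearcher>
    </autoCommit>
  </updateHandler>
</config>
```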
I didn’t have time to look at the full thread dump, but I noticed one thing
in the pasted stack trace:
AddSchemaFieldsUpdateProcessor.processAdd
Is it possible that you are making a lot of changes to your schema?
Emir
--
Monitoring - Log Management - Alerting - Anomaly Detection
Solr & Elasticsearch
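For context on Emir's question: AddSchemaFieldsUpdateProcessor is the processor that schemaless ("add unknown fields") mode runs on every update, and each document carrying a previously unseen field mutates the managed schema. A typical chain looks like the sketch below (the chain name and defaultFieldType are the stock example values, not taken from this thread):

```xml
<updateRequestProcessorChain name="add-unknown-fields-to-the-schema">
  <!-- guesses a field type for unknown fields and adds them to the schema;
       heavy ingest with many new field names means many schema mutations -->
  <processor class="solr.AddSchemaFieldsUpdateProcessorFactory">
    <str name="defaultFieldType">strings</str>
  </processor>
  <processor class="solr.LogUpdateProcessorFactory"/>
  <processor class="solr.RunUpdateProcessorFactory"/>
</updateRequestProcessorChain>
```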
> https://github.com/mohsinbeg/datadump/tree/master/solr58f449cec94a2c75_core_256
I had uploaded the output at the above link.
The OS has no swap configured. There are other processes on the host, but
they use <1GB of RAM and <5% CPU cumulatively, and `top` shows none of them
running inside the Docker container. Solr JVM heap is at
On 2/10/2018 11:58 PM, mmb1234 wrote:
The only `top` available on Photon OS is
https://github.com/vmware/photon/blob/1.0/SPECS/procps-ng/procps-ng.spec.
Those screenshots are attached.
Attachments rarely make it to the mailing list. I don't see any
attachments.
This is an example of what I am
Hi Shawn, Erik
> updates should slow down but not deadlock.
The net effect is the same. As the CLOSE_WAITs increase, the JVM ultimately
stops accepting new socket requests, at which point `kill ` is the
only option.
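One way to watch the CLOSE_WAIT build-up from inside a minimal container (no netstat or ss needed) is to parse /proc/net/tcp directly; this Linux-only sketch counts sockets in kernel state 08, which is CLOSE_WAIT:

```python
def count_close_wait(path="/proc/net/tcp"):
    """Count TCP sockets in CLOSE_WAIT (kernel state code 08) on Linux.

    /proc/net/tcp has one header line, then one socket per line with the
    connection state as the 4th whitespace-separated field.
    """
    with open(path) as f:
        next(f)  # skip the header line
        return sum(1 for line in f if line.split()[3] == "08")
```

Polling this alongside the thread dumps would show whether the socket build-up precedes or follows the stalled merges.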
This means that if the replication handler is invoked, which sets the
deletion policy, the
If you flood the system with updates, eventually you run out of merge
threads, and then updates block until a merge completes. It may be that at
this point you get into what looks like a deadlocked state; updates should
slow down but not deadlock.
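Erick's description can be sketched as a toy model (this is not Solr's code, just an illustration of the mechanism): merges occupy a bounded pool of slots, and once all slots are taken the indexing thread stalls until one frees, which is backpressure rather than a true deadlock:

```python
import threading

class MergeBackpressure:
    """Toy model of ConcurrentMergeScheduler-style stalling (illustrative only)."""

    def __init__(self, max_merge_count=6):
        # at most max_merge_count merges may be in flight or pending
        self.slots = threading.Semaphore(max_merge_count)

    def start_merge(self):
        # blocks the caller (i.e. stalls indexing) when all slots are taken
        self.slots.acquire()

    def finish_merge(self):
        self.slots.release()
```

When the pool is exhausted, start_merge() simply waits; nothing holds two locks at once, so progress resumes as soon as any merge finishes.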
How are you updating? Are you stringing together a zillion
On 2/9/2018 8:20 PM, mmb1234 wrote:
Ran /solr/58f449cec94a2c75-core-248/admin/luke at 7:05pm PST
It showed "lastModified: 2018-02-10T02:25:08.231Z", indicating the commit
had been blocked for about 41 minutes.
Hard commit is set to 10 seconds in solrconfig.xml.
Other cores are also now blocked.
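The "about 41 minutes" figure can be checked mechanically from the Luke output: compare lastModified against the wall clock minus the hard-commit budget. A sketch (the 10 s budget mirrors the maxTime mentioned above):

```python
from datetime import datetime, timedelta, timezone

def commit_lag(last_modified_iso, now, max_time_ms=10_000):
    """How far past the hard-commit budget the last commit is.

    last_modified_iso is Luke's 'lastModified' value; max_time_ms mirrors
    <autoCommit><maxTime> (10 s in this thread).
    """
    last = datetime.strptime(
        last_modified_iso, "%Y-%m-%dT%H:%M:%S.%fZ"
    ).replace(tzinfo=timezone.utc)
    return (now - last) - timedelta(milliseconds=max_time_ms)

# 7:05pm PST is 03:05 UTC, so the lag here works out to close to 40 minutes:
lag = commit_lag(
    "2018-02-10T02:25:08.231Z",
    datetime(2018, 2, 10, 3, 5, tzinfo=timezone.utc),
)
```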
https://jstack.review analysis of the thread dump says
On 2/8/2018 5:50 PM, mmb1234 wrote:
I then removed my custom element from my solrconfig.xml, and both the
hard commit and /solr/admin/core hang issues seemed to go away for a
couple of hours.
The mergeScheduler config you had is not likely to be causing any
issues. It's a good config. But
Shawn, Erick,
Were you able to look at the thread dump?
https://github.com/mohsinbeg/datadump/blob/master/threadDump-7pjql_1.zip
Or is there additional data I can provide?
--
Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html
> Setting openSearcher to false on autoSoftCommit makes no sense.
That was my mistake in my solrconfig.xml. Thank you for identifying it. I
have corrected it.
On 2/7/2018 9:01 PM, mmb1234 wrote:
> I am seeing that after some time hard commits in all my solr cores stop and
> each one's searcher has an "opened at" date to be hours ago even though they
> are continuing to ingest data successfully (index size increasing
> continuously).
>
>
> If you issue a manual commit
> (http://blah/solr/core/update?commit=true) what happens?
That call never returned to the client browser.
So I also tried a core reload and captured it in the thread dump. That too
never returned.
"qtp310656974-1022" #1022 prio=5 os_prio=0
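A manual commit that never returns can at least be bounded with a client-side timeout, so the hang surfaces as an error instead of a browser that waits forever. A hedged sketch; the host, port, and core name below are assumptions, not values from this thread:

```python
from urllib.request import urlopen

def try_commit(base="http://localhost:8983/solr/mycore", timeout=30):
    """Issue an explicit hard commit, but give up after `timeout` seconds."""
    url = base + "/update?commit=true&waitSearcher=true"
    try:
        with urlopen(url, timeout=timeout) as resp:
            return resp.status  # 200 if the commit actually completed
    except OSError as exc:  # covers URLError, socket.timeout, refused conns
        return f"commit did not complete: {exc}"
```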
This is very odd. Do you by any chance have custom code in place
that's not closing searchers properly? If you take a heap dump, how
many searchers do you have open?
If you issue a manual commit
(http://blah/solr/core/update?commit=true) what happens?
Best,
Erick
On Wed, Feb 7, 2018 at 8:01 PM,