[
https://issues.apache.org/jira/browse/SOLR-11297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16176719#comment-16176719
]
Nawab Zada Asad iqbal commented on SOLR-11297:
----------------------------------------------
[~erickerickson] I would have loved to test it, but my test cluster, which
naturally hit this issue, has been reclaimed, and I won't have access to
another for another week. I am looking forward to Solr 7.0, as we haven't
rolled Solr 6 to production yet.
Locally, I can mock the repeated pings with Luiz's script, but if you have
already tested with it, that wouldn't add much.
> Message "Lock held by this virtual machine" during startup. Solr is trying
> to start some cores twice
> -----------------------------------------------------------------------------------------------------
>
> Key: SOLR-11297
> URL: https://issues.apache.org/jira/browse/SOLR-11297
> Project: Solr
> Issue Type: Bug
> Security Level: Public(Default Security Level. Issues are Public)
> Affects Versions: 6.6
> Reporter: Shawn Heisey
> Assignee: Erick Erickson
> Attachments: SOLR-11297.patch, SOLR-11297.patch, SOLR-11297.sh,
> solr6_6-startup.log
>
>
> Sometimes when Solr is restarted, I get some "lock held by this virtual
> machine" messages in the log, and the admin UI has messages about a failure
> to open a new searcher. It doesn't happen on all cores, and the list of
> cores that have the problem changes on subsequent restarts. The cores that
> exhibit the problem are working just fine -- the first core load is
> successful, and the failure to open a new searcher occurs on a second core
> load attempt for the same core, which fails.
> None of the cores in the system are sharing an instanceDir or dataDir. This
> has been verified several times.
> The index is sharded manually, and the servers are not running in cloud mode.
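The "lock held by this virtual machine" wording points at the same process, not a second Solr instance: the JVM itself refuses an overlapping file lock that it already holds, which is consistent with one process attempting to load the same core twice. As a minimal sketch of that semantic (using plain `java.nio` file locks as an analogy, not Solr's actual `NativeFSLockFactory` code; the class and method names here are hypothetical), a second `tryLock()` on the same file from within one JVM fails immediately:

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
import java.nio.channels.OverlappingFileLockException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class DoubleLockDemo {

    // Hypothetical demo: returns true if a second lock attempt on the same
    // file, from the same JVM, is rejected while the first lock is held.
    public static boolean secondLockFailsInSameJvm() throws IOException {
        Path lockFile = Files.createTempFile("write", ".lock");
        boolean rejected = false;
        try (FileChannel first = FileChannel.open(lockFile, StandardOpenOption.WRITE)) {
            FileLock held = first.tryLock();       // first "core load": lock acquired
            try (FileChannel second = FileChannel.open(lockFile, StandardOpenOption.WRITE)) {
                second.tryLock();                  // same JVM tries to lock it again
            } catch (OverlappingFileLockException e) {
                rejected = true;                   // the JVM itself rejects the overlap
            }
            held.release();
        } finally {
            Files.deleteIfExists(lockFile);
        }
        return rejected;
    }

    public static void main(String[] args) throws IOException {
        if (secondLockFailsInSameJvm()) {
            System.out.println("second lock attempt rejected within the same JVM");
        }
    }
}
```

A second Solr *process* contending for the lock would instead block or fail at the OS level, so seeing the "this virtual machine" variant is what suggests a duplicate load inside one startup rather than a stray instance.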
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)