[ https://issues.apache.org/jira/browse/SOLR-11297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16172292#comment-16172292 ]
Erick Erickson commented on SOLR-11297:
---------------------------------------
Luiz:
Nice sleuthing! This looks quite promising; I can see where this would be a
problem. I looked through the code and I think I see some other places this
could happen, so I'm going to put up some changes.
In particular, I want to look at the other createCoreFromDescriptor calls and
see whether the locking via waitAddPendingCoreOps can be moved there, although
on a quick glance I'm not sure.
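For illustration, the guard pattern in question looks roughly like the sketch
below. The names mirror the CoreContainer methods mentioned above
(waitAddPendingCoreOps, createCoreFromDescriptor); the bodies and the
removeFromPendingOps helper are simplified assumptions for this sketch, not
the actual Solr code:

    import java.util.HashSet;
    import java.util.Set;

    // Sketch: a shared "pending ops" set ensures only one thread creates a
    // given core at a time; a concurrent second load waits instead of
    // double-opening the index directory.
    public class PendingCoreOpsSketch {
      private final Set<String> pendingCoreOps = new HashSet<>();

      // Block until no other thread is loading this core, then claim the slot.
      void waitAddPendingCoreOps(String name) throws InterruptedException {
        synchronized (pendingCoreOps) {
          while (pendingCoreOps.contains(name)) {
            pendingCoreOps.wait();
          }
          pendingCoreOps.add(name);
        }
      }

      // Release the slot and wake any waiting threads.
      void removeFromPendingOps(String name) {
        synchronized (pendingCoreOps) {
          pendingCoreOps.remove(name);
          pendingCoreOps.notifyAll();
        }
      }

      // Each createCoreFromDescriptor-style call site would take the guard
      // first, so a duplicate load of the same core waits rather than racing;
      // the finally block guarantees the slot is always released.
      void loadCore(String name) throws InterruptedException {
        waitAddPendingCoreOps(name);
        try {
          System.out.println("creating core " + name); // core creation goes here
        } finally {
          removeFromPendingOps(name);
        }
      }
    }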
More soon.
> Message "Lock held by this virtual machine" during startup. Solr is trying
> to start some cores twice
> -----------------------------------------------------------------------------------------------------
>
> Key: SOLR-11297
> URL: https://issues.apache.org/jira/browse/SOLR-11297
> Project: Solr
> Issue Type: Bug
> Security Level: Public (Default Security Level. Issues are Public)
> Affects Versions: 6.6
> Reporter: Shawn Heisey
> Assignee: Erick Erickson
> Attachments: SOLR-11297.patch, SOLR-11297.sh, solr6_6-startup.log
>
>
> Sometimes when Solr is restarted, I get some "lock held by this virtual
> machine" messages in the log, and the admin UI has messages about a failure
> to open a new searcher. It doesn't happen on all cores, and the list of
> cores that have the problem changes on subsequent restarts. The cores that
> exhibit the problem are otherwise working just fine: the first core load
> succeeds, and the failure to open a new searcher occurs on a second, failed
> attempt to load the same core.
> None of the cores in the system are sharing an instanceDir or dataDir. This
> has been verified several times.
> The index is sharded manually, and the servers are not running in cloud mode.
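For reference, the quoted "Lock held by this virtual machine" message comes
from Lucene's NativeFSLockFactory when a second writer on the same index is
opened from within the same JVM. A minimal reproduction sketch against the
Lucene 6.x API (the path and class name here are illustrative):

    import java.nio.file.Paths;
    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.index.IndexWriter;
    import org.apache.lucene.index.IndexWriterConfig;
    import org.apache.lucene.store.FSDirectory;
    import org.apache.lucene.store.LockObtainFailedException;

    public class DoubleOpenDemo {
      public static void main(String[] args) throws Exception {
        // The first writer acquires write.lock for the index directory.
        try (FSDirectory dir = FSDirectory.open(Paths.get("/tmp/demo-index"));
             IndexWriter first =
                 new IndexWriter(dir, new IndexWriterConfig(new StandardAnalyzer()));
             FSDirectory dir2 = FSDirectory.open(Paths.get("/tmp/demo-index"))) {
          // A second writer on the same path in the same JVM is refused, and
          // NativeFSLockFactory reports the in-process conflict explicitly.
          new IndexWriter(dir2, new IndexWriterConfig(new StandardAnalyzer()));
        } catch (LockObtainFailedException e) {
          // Prints something like:
          // Lock held by this virtual machine: /tmp/demo-index/write.lock
          System.out.println(e.getMessage());
        }
      }
    }

This is the same failure mode as two concurrent loads of one core: the second
load cannot take the write lock, which is why it surfaces as a failure to open
a new searcher.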