I've created a JIRA ticket now:
https://issues.apache.org/jira/browse/SOLR-14969
I'd be really glad if a Solr developer could help or comment on the issue.
Thank you,
Andreas
--
Sent from: https://lucene.472066.n3.nabble.com/Solr-User-f472068.html
be called
subsequently for the same core. The second call then fails to create an
IndexWriter (LockObtainFailedException), and this causes a call to
SolrCores#removeCoreDescriptor [3]. This means the second call removes the
CoreDescriptor for the SolrCore created with the first call.
Hi,
we're running tests on a stand-alone Solr instance, which create Solr
cores from multiple applications using CoreAdmin (via SolrJ).
Recently we upgraded from 8.4.1 to 8.6.3, and sometimes we now see a
LockObtainFailedException for a lock held by the same JVM, after which
Solr is broken.
Hmmm, 6.4 was considerably before the refactoring that this patch
addresses, so it's not a surprise that it doesn't apply.
On Thu, Sep 21, 2017 at 10:28 PM, Shashank Pedamallu wrote:
> Hi Luiz,
>
> Unfortunately, I’m on version Solr-6.4.2 and the patch does not apply straight away.
>
Hi Luiz,
Unfortunately, I’m on version Solr-6.4.2 and the patch does not apply straight
away.
Thanks,
Shashank
On 9/21/17, 8:35 PM, "Luiz Armesto" wrote:
Hi Shashank,
There is an open issue about this exception [1]. Can you take a look and
test the patch to see if it works in your case?
Hi Shashank,
There is an open issue about this exception [1]. Can you take a look and
test the patch to see if it works in your case?
[1] https://issues.apache.org/jira/browse/SOLR-11297
On Sep 21, 2017 10:19 PM, "Shashank Pedamallu" wrote:
Hi,
I’m seeing the following exception in Solr that gets automatically resolved eventually.
Hi,
I’m seeing the following exception in Solr that gets automatically resolved
eventually.
2017-09-22 00:18:17.243 ERROR (qtp1702660825-17) [ x:spedamallu1-core-1]
o.a.s.c.CoreContainer Error creating core [spedamallu1-core-1]: Error opening
new searcher
Hmmm. Oddly, another poster was seeing this due to permissions issues,
although I don't know why that would clear up after a while. But it's
something to check.
Erick
On Wed, Aug 30, 2017 at 3:24 PM, Sundeep T wrote:
> Hello,
>
> Occasionally we are seeing errors opening new searcher for certain solr
> cores.
Hello,
Occasionally we are seeing errors opening new searcher for certain solr
cores. Whenever this happens, we are unable to query or ingest new data
into these cores. It seems to clear up after some time though. The root
cause seems to be "org.apache.lucene.store.LockObtainFailedException:
Hi,
I have a SolrCloud (4.1) setup with an embedded Jetty server.
I use the command below to start and stop the server.
start server : nohup java -DSTOP.PORT=8085 -DSTOP.KEY=key -DnumShards=2
-Dbootstrap_confdir=./solr/nlp/conf -Dcollection.configName=myconf
Hi,
I have a searcher server replicating the index from a master server. Recently
I noticed a huge difference in index size between the master and the slave,
followed by a LockObtainFailedException in the catalina.out log. When I
debugged the searcher's index folder, I could see more than 100 segments_N
Will check later to use different data dirs for the core on
each instance.
But because each Solr sits in its own openvz instance (virtual
server respectively) they should be totally separated. At least
from my point of understanding virtualization.
Will check and get back here...
Thanks.
OK, I think I have found it. When starting the 4 Solr instances via
start.jar, I always provided the data directory property via
-Dsolr.data.dir=/home/myuser/data
After removing this it worked fine. What is weird is that all 4 instances
are totally separated, so instance-2 should never
On 6/14/2012 2:05 AM, Daniel Brügge wrote:
Will check later to use different data dirs for the core on
each instance.
But because each Solr sits in its own openvz instance (virtual
server respectively) they should be totally separated. At least
from my point of understanding virtualization.
Aha, OK. That was new to me. Will check this. Thanks.
On Thu, Jun 14, 2012 at 3:52 PM, Yury Kats yuryk...@yahoo.com wrote:
On 6/14/2012 2:05 AM, Daniel Brügge wrote:
Will check later to use different data dirs for the core on
each instance.
But because each Solr sits in its own openvz
Hi,
I am struggling with creating multiple collections on a 4-instance SolrCloud
setup:
I have 4 virtual OpenVZ instances, where I have installed SolrCloud on each,
and on one a standalone Zookeeper is also running.
Loading the Solr configuration into ZK works fine.
Then I start up the 4
BTW: I am running the Solr instances using -Xms512M -Xmx1024M,
so not so little memory.
Daniel
On Wed, Jun 13, 2012 at 4:28 PM, Daniel Brügge
daniel.brue...@googlemail.com wrote:
Hi,
I am struggling with creating multiple collections on a 4-instance SolrCloud
setup:
I have 4
That's an interesting data dir location:
NativeFSLock@/home/myuser/data/index/write.lock
Where are the other data dirs located? Are you sharing one drive or
something? It looks like something already has a writer lock - are you sure
another solr instance is not running somehow?
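Lucene's NativeFSLock is essentially an OS-level file lock, so the "something already has a writer lock" situation above can be sketched with the util-linux `flock` tool in a throwaway directory (a hedged illustration only; the paths, timings, and names are made up, not the poster's setup):

```shell
# Simulate two Solr "instances" competing for the same index write lock.
demo=$(mktemp -d)
lock="$demo/write.lock"

# First instance: hold an exclusive lock on write.lock for 2 seconds.
flock "$lock" sleep 2 &
holder=$!
sleep 0.2   # give the holder time to acquire the lock

# Second instance: non-blocking acquire attempt, like Lucene's lock probe.
if flock -n "$lock" true; then
  result="lock acquired"
else
  result="lock busy"   # the situation behind LockObtainFailedException
fi
echo "$result"
wait "$holder"
```

If the second attempt succeeds, no other process holds the lock, and a leftover write.lock file is likely stale rather than held by a running instance.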
What command are you using to create the cores?
I had this sort of problem, and it was because I'd accidentally created
two cores with the same instanceDir within the same SOLR process. Make
sure you don't have that kind of collision. The easiest way is to
specify an explicit instanceDir and
Hi Peter,
I found the issue.
Actually we were getting this exception because of JVM heap space. I allocated
-Xms512m and -Xmx1024m, and finally increased the time limit for the write lock
to 20 secs. Things are working fine, but still it did not help ...
On closer analysis of the docs which we were
Hi,
When you get this exception with no other error or explanation in
the logs, this is almost always because the JVM has run out of memory.
Have you checked/profiled your mem usage/GC during the stream operation?
On Thu, Aug 11, 2011 at 3:18 AM, Naveen Gupta nkgiit...@gmail.com wrote:
Hi,
Yes, this was happening because of the JVM heap size.
But the real issue is that as our index size grows (very large),
indexing time is getting very long (using streaming).
Earlier, indexing 15,000 docs at a time (commit after 15,000 docs)
was taking 3 mins 20 secs;
after deleting
Optimizing indexing time is a very different question.
I'm guessing the 3+ min time you refer to is the commit time.
There are a whole host of things to take into account regarding
indexing, like: number of segments, schema, how many fields, storing
fields, omitting norms, caching, autowarming,
Hi,
We are doing streaming updates to Solr for multiple users.
We are getting:
Aug 10, 2011 11:56:55 AM org.apache.solr.common.SolrException log
SEVERE: org.apache.lucene.store.LockObtainFailedException: Lock obtain timed
out: NativeFSLock@/var/lib/solr/data/index/write.lock
at
Hi,
we are running Solr 3.2.0 on Jetty for a web application. Since we just
went online and are still in beta tests, we don't have very much load on
our servers (indeed, they're currently much oversized for the current
usage), and our index size on the file system is just 1.1 MB.
We have one
Will anyone help me understand why and how?
org.apache.lucene.store.LockObtainFailedException: Lock obtain timed out:
SimpleFSLock@/usr/local/searchengine/apache-solr-1.2.0/fr_companies/solr/data/index/write.lock
at org.apache.lucene.store.Lock.obtain(Lock.java:70)
at
Quick fix:
look for a Lucene lock file in your tmp directory and delete it, then
restart Solr; it should start.
I am an idiot though, so be careful; in fact, I'm worse than an
idiot, I know a little
:-)
You got a lock file somewhere though; deleting that will help you
out. For me it was in
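As a hedged sketch of that cleanup (run here against a throwaway directory, since the real lock location depends on your configured dataDir): first locate the lock file, confirm no Solr process is still running, and only then delete it:

```shell
# Stand-in for a Solr data dir; substitute your actual solr.data.dir path.
demo=$(mktemp -d)
mkdir -p "$demo/index"
touch "$demo/index/write.lock"

# 1. Locate any write.lock files under the data dir.
found=$(find "$demo" -name write.lock)
echo "$found"

# 2. ONLY after confirming Solr is stopped, remove them, then restart Solr.
find "$demo" -name write.lock -delete
```

Deleting the lock while a Solr process is still writing can corrupt the index, which is why checking for a running instance comes first.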
In solrconfig.xml,
<useCompoundFile>false</useCompoundFile>
<mergeFactor>10</mergeFactor>
<maxBufferedDocs>25000</maxBufferedDocs>
<maxMergeDocs>1400</maxMergeDocs>
<maxFieldLength>500</maxFieldLength>
<writeLockTimeout>1000</writeLockTimeout>
<commitLockTimeout>1</commitLockTimeout>
Does