Re: write.lock file after unloading core

2020-11-30 Thread Erick Erickson
I’m a little confused here. Are you unloading/copying/creating the core on 
master?
I’ll assume so since I can’t really think of how doing this on one of the other
cores would make sense…..

I’m having a hard time wrapping my head around the use-case. You’re 
“delivering a new index”, which I take to mean you’re building a completely new
index somewhere else.

But you’re also updating the target index. What’s the relationship between the
index you’re “delivering” and the update sent while the core is unloaded? Are
the updates _already_ in the index you’re delivering or would you expect them
to be in the new index? Or are they just lost? Or does the indexing program
resend them after the core is created?

The unloaded core should not have any open index writers though. What I’m 
guessing is that updates are coming in before the unload is complete. Instead
of a sleep, have you tried specifying the async parameter and waiting until
REQUESTSTATUS tells you the unload is complete?
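A rough sketch of that sequence against the Core Admin API (host, core name, and async request id
below are hypothetical placeholders, and the completion check is a crude grep on the status response):

  HOST="http://localhost:8983"   # assumed host/port
  CORE="mycore"                  # hypothetical core name
  REQ="unload-${CORE}-1"         # arbitrary async request id

  # 1. unload the core, deleting its data dir, as an async request
  curl -s "${HOST}/solr/admin/cores?action=UNLOAD&core=${CORE}&deleteDataDir=true&async=${REQ}"

  # 2. wait until REQUESTSTATUS reports the unload as completed
  until curl -s "${HOST}/solr/admin/cores?action=REQUESTSTATUS&requestid=${REQ}" | grep -q completed; do
    sleep 1
  done

  # 3. only now rebuild the data dir, copy the new index in, and recreate the core
  #    (assumes the instanceDir and its conf already exist)
  curl -s "${HOST}/solr/admin/cores?action=CREATE&name=${CORE}&instanceDir=${CORE}"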

Best,
Erick

> On Nov 30, 2020, at 7:41 AM, elisabeth benoit  
> wrote:
> 
> Hello all,
> 
> We are using solr 7.3.1, with master and slave config.
> 
> When we deliver a new index we unload the core, with option delete data dir
> = true, then recreate the data folder and copy the new index files into
> that folder before sending solr a command to recreate the core (with the
> same name).
> 
> But we have, at the same time, some batches indexing non stop the core we
> just unloaded, and it happens quite frequently that we have an error at
> this point, the copy cannot be done, and I guess it is because of a
> write.lock file created by a solr index writer in the index directory.
> 
> Is it possible, when unloading the core, to stop / kill index writer? I've
> tried including a sleep after the unload and before recreation of the index
> folder, it seems to work but I was wondering if a better solution exists.
> 
> Best regards,
> Elisabeth



write.lock file after unloading core

2020-11-30 Thread elisabeth benoit
Hello all,

We are using solr 7.3.1, with master and slave config.

When we deliver a new index we unload the core, with option delete data dir
= true, then recreate the data folder and copy the new index files into
that folder before sending solr a command to recreate the core (with the
same name).

But we have, at the same time, some batches indexing non stop the core we
just unloaded, and it happens quite frequently that we have an error at
this point, the copy cannot be done, and I guess it is because of a
write.lock file created by a solr index writer in the index directory.

Is it possible, when unloading the core, to stop / kill index writer? I've
tried including a sleep after the unload and before recreation of the index
folder, it seems to work but I was wondering if a better solution exists.

Best regards,
Elisabeth


Re: An exception when running Solr on HDFS,why a solr server can not recognize the write.lock file is created by itself before?

2018-08-27 Thread zhenyuan wei
@Shawn Heisey  Yeah, deleting the "write.lock" files manually fixed it in the end.
@Walter Underwood  Do you have any recent performance comparisons of Solr on HDFS vs.
a local filesystem?
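
For reference, the manual removal meant here is just deleting the leftover lock file from the
core's index directory on HDFS while the affected Solr node is stopped (the path below is the one
from the stack trace; only do this if no other node uses that index):

  hdfs dfs -rm /solr/collection002/core_node17/data/index/write.lock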

Shawn Heisey  wrote on Tue, Aug 28, 2018 at 4:10 AM:

> On 8/26/2018 7:47 PM, zhenyuan wei wrote:
> >  I found an exception when running Solr on HDFS. The details:
> > Solr was running on HDFS with document updates running continuously;
> > then the Solr JVM was killed with kill -9 (or the Linux OS was
> > rebooted/shut down), and everything was restarted.
>
> If you use "kill -9" to stop a Solr instance, the lockfile will get left
> behind and you may have difficulty starting Solr back up on ANY kind of
> filesystem until you delete the file in each core's data directory.  The
> filename defaults to "write.lock" if you don't change it.
>
> Thanks,
> Shawn
>
>


Re: An exception when running Solr on HDFS,why a solr server can not recognize the write.lock file is created by itself before?

2018-08-27 Thread Shawn Heisey

On 8/26/2018 7:47 PM, zhenyuan wei wrote:

 I found an exception when running Solr on HDFS. The details:
Solr was running on HDFS with document updates running continuously;
then the Solr JVM was killed with kill -9 (or the Linux OS was rebooted/shut down), and everything was restarted.


If you use "kill -9" to stop a Solr instance, the lockfile will get left 
behind and you may have difficulty starting Solr back up on ANY kind of 
filesystem until you delete the file in each core's data directory.  The 
filename defaults to "write.lock" if you don't change it.
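
On a plain local-filesystem install, a cleanup along these lines works (run it only with Solr
fully stopped; the SOLR_HOME path is an assumption, adjust to your install):

  find /var/solr/data -type f -name write.lock -print -delete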


Thanks,
Shawn



Re: An exception when running Solr on HDFS,why a solr server can not recognize the write.lock file is created by itself before?

2018-08-27 Thread Walter Underwood
I accidentally put my Solr indexes on NFS once about ten years ago.
It was 100X slower. I would not recommend that.

wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/  (my blog)

> On Aug 27, 2018, at 1:39 AM, zhenyuan wei  wrote:
> 
> Thanks for your answer! @Erick Erickson 
> So it's not recommended to run Solr on NFS (or similar filesystems like HDFS) now? Maybe
> because of crash errors or performance problems?
> I had a look at SOLR-8335; there is no good solution for this
> yet, so maybe manual removal is the best option?
> 
> 
> Erick Erickson  wrote on Mon, Aug 27, 2018 at 11:41 AM:
> 
>> Because HDFS doesn't follow the file semantics that Solr expects.
>> 
>> There's quite a bit of background here:
>> https://issues.apache.org/jira/browse/SOLR-8335
>> 
>> Best,
>> Erick
>> On Sun, Aug 26, 2018 at 6:47 PM zhenyuan wei  wrote:
>>> 
>>> Hi all,
>>> I found an exception when running Solr on HDFS. The details:
>>> Solr was running on HDFS with document updates running continuously;
>>> then the Solr JVM was killed with kill -9 (or the Linux OS was
>>> rebooted/shut down), and everything was restarted.
>>> The exception  appears like:
>>> 
>>> 2018-08-26 22:23:12.529 ERROR
>>> 
>> (coreContainerWorkExecutor-2-thread-1-processing-n:cluster-node001:8983_solr)
>>> [   ] o.a.s.c.CoreContainer Error waiting for SolrCore to be loaded on
>>> startup
>>> org.apache.solr.common.SolrException: Unable to create core
>>> [collection002_shard56_replica_n110]
>>>at
>>> 
>> org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1061)
>>>at
>>> org.apache.solr.core.CoreContainer.lambda$load$13(CoreContainer.java:640)
>>>at
>>> 
>> com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:197)
>>>at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>>>at
>>> 
>> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:188)
>>>at
>>> 
>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1147)
>>>at
>>> 
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:622)
>>>at java.lang.Thread.run(Thread.java:834)
>>> Caused by: org.apache.solr.common.SolrException: Index dir
>>> 'hdfs://hdfs-cluster/solr/collection002/core_node113/data/index/' of core
>>> 'collection002_shard56_replica_n110' is already locked. The most likely
>>> cause is another Solr server (or another solr core in this server) also
>>> configured to use this directory; other possible causes may be specific
>> to
>>> lockType: hdfs
>>>at org.apache.solr.core.SolrCore.(SolrCore.java:1009)
>>>at org.apache.solr.core.SolrCore.(SolrCore.java:864)
>>>at
>>> 
>> org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1040)
>>>... 7 more
>>> Caused by: org.apache.lucene.store.LockObtainFailedException: Index dir
>>> 'hdfs://hdfs-cluster/solr/collection002/core_node113/data/index/' of core
>>> 'collection002_shard56_replica_n110' is already locked. The most likely
>>> cause is another Solr server (or another solr core in this server) also
>>> configured to use this directory; other possible causes may be specific
>> to
>>> lockType: hdfs
>>>at org.apache.solr.core.SolrCore.initIndex(SolrCore.java:746)
>>>at org.apache.solr.core.SolrCore.(SolrCore.java:955)
>>>... 9 more
>>> 
>>> 
>>> In fact, printing out the HDFS-API-level exception stack, it reports:
>>> 
>>> Caused by: org.apache.hadoop.fs.FileAlreadyExistsException:
>>> /solr/collection002/core_node17/data/index/write.lock for client
>>> 192.168.0.12 already exists
>>>at
>>> 
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:2563)
>>>at
>>> 
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2450)
>>>at
>>> 
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2334)
>>>at
>>> 
>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:623)
>>>at
>>> 
>> org.apache.hadoop.hdfs.protocolPB.C

Re: An exception when running Solr on HDFS,why a solr server can not recognize the write.lock file is created by itself before?

2018-08-27 Thread zhenyuan wei
Thanks for your answer! @Erick Erickson 
So it's not recommended to run Solr on NFS (or similar filesystems like HDFS) now? Maybe
because of crash errors or performance problems?
I had a look at SOLR-8335; there is no good solution for this
yet, so maybe manual removal is the best option?


Erick Erickson  wrote on Mon, Aug 27, 2018 at 11:41 AM:

> Because HDFS doesn't follow the file semantics that Solr expects.
>
> There's quite a bit of background here:
> https://issues.apache.org/jira/browse/SOLR-8335
>
> Best,
> Erick
> On Sun, Aug 26, 2018 at 6:47 PM zhenyuan wei  wrote:
> >
> > Hi all,
> > I found an exception when running Solr on HDFS. The details:
> > Solr was running on HDFS with document updates running continuously;
> > then the Solr JVM was killed with kill -9 (or the Linux OS was
> > rebooted/shut down), and everything was restarted.
> > The exception  appears like:
> >
> > 2018-08-26 22:23:12.529 ERROR
> >
> (coreContainerWorkExecutor-2-thread-1-processing-n:cluster-node001:8983_solr)
> > [   ] o.a.s.c.CoreContainer Error waiting for SolrCore to be loaded on
> > startup
> > org.apache.solr.common.SolrException: Unable to create core
> > [collection002_shard56_replica_n110]
> > at
> >
> org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1061)
> > at
> > org.apache.solr.core.CoreContainer.lambda$load$13(CoreContainer.java:640)
> > at
> >
> com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:197)
> > at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> > at
> >
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:188)
> > at
> >
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1147)
> > at
> >
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:622)
> > at java.lang.Thread.run(Thread.java:834)
> > Caused by: org.apache.solr.common.SolrException: Index dir
> > 'hdfs://hdfs-cluster/solr/collection002/core_node113/data/index/' of core
> > 'collection002_shard56_replica_n110' is already locked. The most likely
> > cause is another Solr server (or another solr core in this server) also
> > configured to use this directory; other possible causes may be specific
> to
> > lockType: hdfs
> > at org.apache.solr.core.SolrCore.(SolrCore.java:1009)
> > at org.apache.solr.core.SolrCore.(SolrCore.java:864)
> > at
> >
> org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1040)
> > ... 7 more
> > Caused by: org.apache.lucene.store.LockObtainFailedException: Index dir
> > 'hdfs://hdfs-cluster/solr/collection002/core_node113/data/index/' of core
> > 'collection002_shard56_replica_n110' is already locked. The most likely
> > cause is another Solr server (or another solr core in this server) also
> > configured to use this directory; other possible causes may be specific
> to
> > lockType: hdfs
> > at org.apache.solr.core.SolrCore.initIndex(SolrCore.java:746)
> > at org.apache.solr.core.SolrCore.(SolrCore.java:955)
> > ... 9 more
> >
> >
> > In fact, printing out the HDFS-API-level exception stack, it reports:
> >
> > Caused by: org.apache.hadoop.fs.FileAlreadyExistsException:
> > /solr/collection002/core_node17/data/index/write.lock for client
> > 192.168.0.12 already exists
> > at
> >
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:2563)
> > at
> >
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2450)
> > at
> >
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2334)
> > at
> >
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:623)
> > at
> >
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:397)
> > at
> >
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> > at
> >
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
> > at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
> > at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
> > at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:

Re: An exception when running Solr on HDFS,why a solr server can not recognize the write.lock file is created by itself before?

2018-08-26 Thread Erick Erickson
Because HDFS doesn't follow the file semantics that Solr expects.

There's quite a bit of background here:
https://issues.apache.org/jira/browse/SOLR-8335

Best,
Erick
On Sun, Aug 26, 2018 at 6:47 PM zhenyuan wei  wrote:
>
> Hi all,
> I found an exception when running Solr on HDFS. The details:
> Solr was running on HDFS with document updates running continuously;
> then the Solr JVM was killed with kill -9 (or the Linux OS was rebooted/shut down), and everything was restarted.
> The exception  appears like:
>
> 2018-08-26 22:23:12.529 ERROR
> (coreContainerWorkExecutor-2-thread-1-processing-n:cluster-node001:8983_solr)
> [   ] o.a.s.c.CoreContainer Error waiting for SolrCore to be loaded on
> startup
> org.apache.solr.common.SolrException: Unable to create core
> [collection002_shard56_replica_n110]
> at
> org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1061)
> at
> org.apache.solr.core.CoreContainer.lambda$load$13(CoreContainer.java:640)
> at
> com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:197)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:188)
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1147)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:622)
> at java.lang.Thread.run(Thread.java:834)
> Caused by: org.apache.solr.common.SolrException: Index dir
> 'hdfs://hdfs-cluster/solr/collection002/core_node113/data/index/' of core
> 'collection002_shard56_replica_n110' is already locked. The most likely
> cause is another Solr server (or another solr core in this server) also
> configured to use this directory; other possible causes may be specific to
> lockType: hdfs
> at org.apache.solr.core.SolrCore.(SolrCore.java:1009)
> at org.apache.solr.core.SolrCore.(SolrCore.java:864)
> at
> org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1040)
> ... 7 more
> Caused by: org.apache.lucene.store.LockObtainFailedException: Index dir
> 'hdfs://hdfs-cluster/solr/collection002/core_node113/data/index/' of core
> 'collection002_shard56_replica_n110' is already locked. The most likely
> cause is another Solr server (or another solr core in this server) also
> configured to use this directory; other possible causes may be specific to
> lockType: hdfs
> at org.apache.solr.core.SolrCore.initIndex(SolrCore.java:746)
> at org.apache.solr.core.SolrCore.(SolrCore.java:955)
> ... 9 more
>
>
> In fact, printing out the HDFS-API-level exception stack, it reports:
>
> Caused by: org.apache.hadoop.fs.FileAlreadyExistsException:
> /solr/collection002/core_node17/data/index/write.lock for client
> 192.168.0.12 already exists
> at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:2563)
> at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2450)
> at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2334)
> at
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:623)
> at
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:397)
> at
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1727)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2045)
>
> at sun.reflect.GeneratedConstructorAccessor140.newInstance(Unknown
> Source)
> at
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
> at
>

An exception when running Solr on HDFS,why a solr server can not recognize the write.lock file is created by itself before?

2018-08-26 Thread zhenyuan wei
Hi all,
I found an exception when running Solr on HDFS. The details:
Solr was running on HDFS with document updates running continuously;
then the Solr JVM was killed with kill -9 (or the Linux OS was rebooted/shut down), and everything was restarted.
The exception appears like:

2018-08-26 22:23:12.529 ERROR
(coreContainerWorkExecutor-2-thread-1-processing-n:cluster-node001:8983_solr)
[   ] o.a.s.c.CoreContainer Error waiting for SolrCore to be loaded on
startup
org.apache.solr.common.SolrException: Unable to create core
[collection002_shard56_replica_n110]
at
org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1061)
at
org.apache.solr.core.CoreContainer.lambda$load$13(CoreContainer.java:640)
at
com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:197)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:188)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1147)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:622)
at java.lang.Thread.run(Thread.java:834)
Caused by: org.apache.solr.common.SolrException: Index dir
'hdfs://hdfs-cluster/solr/collection002/core_node113/data/index/' of core
'collection002_shard56_replica_n110' is already locked. The most likely
cause is another Solr server (or another solr core in this server) also
configured to use this directory; other possible causes may be specific to
lockType: hdfs
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:1009)
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:864)
at
org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1040)
... 7 more
Caused by: org.apache.lucene.store.LockObtainFailedException: Index dir
'hdfs://hdfs-cluster/solr/collection002/core_node113/data/index/' of core
'collection002_shard56_replica_n110' is already locked. The most likely
cause is another Solr server (or another solr core in this server) also
configured to use this directory; other possible causes may be specific to
lockType: hdfs
at org.apache.solr.core.SolrCore.initIndex(SolrCore.java:746)
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:955)
... 9 more


In fact, printing out the HDFS-API-level exception stack, it reports:

Caused by: org.apache.hadoop.fs.FileAlreadyExistsException:
/solr/collection002/core_node17/data/index/write.lock for client
192.168.0.12 already exists
at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:2563)
at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2450)
at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2334)
at
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:623)
at
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:397)
at
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1727)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2045)

at sun.reflect.GeneratedConstructorAccessor140.newInstance(Unknown
Source)
at
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at
org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
at
org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
at
org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:1839)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1689)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1624)
at
org.apache.hadoop.hdfs.DistributedFileSystem$7.doCall(DistributedFileSystem.java:448)
at
org.apache.hadoop.hdfs.DistributedFileSystem$7.doCall(DistributedFileSystem.java:444)
at
org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at
org.apache.hadoop.hdfs.DistributedFileSystem.create

Re: write.lock file appears and solr wont open

2017-09-06 Thread Erick Erickson
Or only catch the specific exception and only swallow that? But yeah,
this is something that should change as I see this "in the field" and
a more specific error message would short-circuit a lot of unnecessary
pain.

see: LUCENE-7959

Erick

On Wed, Sep 6, 2017 at 5:49 AM, Shawn Heisey  wrote:
> On 9/4/2017 5:53 PM, Erick Erickson wrote:
>> Gah, thanks for letting us know. I can't tell you how often
>> permissions issues have tripped me up. You're right, it does seem like
>> there could be a better error message though.
>
> I see this code in NativeFSLockFactory, code that completely ignores any
> problems creating the lockfile, right before the point in the
> obtainFSLock method where Phil's exception came from:
>
> try {
>   Files.createFile(lockFile);
> } catch (IOException ignore) {
>   // we must create the file to have a truly canonical path.
>   // if it's already created, we don't care. if it cant be created,
> it will fail below.
> }
>
> I think that if we replaced that code with the following code, the
> *reason* for ignoring the creation problem (file already exists) will be
> preserved.  Any creation problem (like permissions) would throw a
> (hopefully understandable) standard Java exception that propagates up
> into what Solr logs:
>
> // If the lockfile already exists, we're going to do nothing.
> // If there are problems with that lockfile, they will be caught later.
> // If we *do* create the file here, exceptions will propagate upward.
> if (Files.notExists(lockFile))
> {
>   Files.createFile(lockFile);
> }
>
> The method signature already includes IOException, so this doesn't
> represent an API change.
>
> Thanks,
> Shawn
>


Re: write.lock file appears and solr wont open

2017-09-06 Thread Shawn Heisey
On 9/4/2017 5:53 PM, Erick Erickson wrote:
> Gah, thanks for letting us know. I can't tell you how often
> permissions issues have tripped me up. You're right, it does seem like
> there could be a better error message though.

I see this code in NativeFSLockFactory, code that completely ignores any
problems creating the lockfile, right before the point in the
obtainFSLock method where Phil's exception came from:

    try {
  Files.createFile(lockFile);
    } catch (IOException ignore) {
  // we must create the file to have a truly canonical path.
  // if it's already created, we don't care. if it cant be created,
it will fail below.
    }

I think that if we replaced that code with the following code, the
*reason* for ignoring the creation problem (file already exists) will be
preserved.  Any creation problem (like permissions) would throw a
(hopefully understandable) standard Java exception that propagates up
into what Solr logs:

    // If the lockfile already exists, we're going to do nothing.
    // If there are problems with that lockfile, they will be caught later.
    // If we *do* create the file here, exceptions will propagate upward.
    if (Files.notExists(lockFile))
    {
  Files.createFile(lockFile);
    }

The method signature already includes IOException, so this doesn't
represent an API change.

Thanks,
Shawn



Re: write.lock file appears and solr wont open

2017-09-04 Thread Erick Erickson
Gah, thanks for letting us know. I can't tell you how often
permissions issues have tripped me up. You're right, it does seem like
there could be a better error message though.

Erick

On Mon, Sep 4, 2017 at 1:55 PM, Phil Scadden <p.scad...@gns.cri.nz> wrote:
> We finally got a resolution to this - trivial but related to trying to do 
> things by remote control. The solr process did not have the permissions to 
> write to the core that was imported. When it tried to create the lock file it 
> failed. The Solr code obviously assumes that file create failure means file 
> already exists rather than perhaps insufficient permissions. Checking for 
> file existence would result in a more informative message but I am guessing 
> the test/production setup when developers are not allowed access to the 
> servers is reasonably unique (I hope so anyway because it sucks).
>
> -Original Message-
> From: Erick Erickson [mailto:erickerick...@gmail.com]
> Sent: Saturday, 26 August 2017 9:15 a.m.
> To: solr-user <solr-user@lucene.apache.org>
> Subject: Re: write.lock file appears and solr wont open
>
> Odd. The way core discovery works, it starts at SOLR_HOME and recursively 
> descends the directories. Whenever the recursion finds a "core.properties" 
> file it says "Aha, this must be a core". From there it assumes the data 
> directory is immediately below where it found the core.properties file in the 
> absence of any dataDir overrides.
>
> So how the write.lock file is getting preserved across Solr restarts is a 
> mystery to me. Doing a "kill -9" is one way to make that happen if it is done 
> at just the wrong time, but that's unlikely in what you're describing.
>
> Are you totally sure that there were no old Solr processes running?
> And there have been some issues in the past where the log display of the 
> admin UI holds on to errors and displays them after the problem has been 
> fixed. I'm assuming you can't query the new core, is that correct? Because if 
> you can query the core then _something_ has the index open. I'm grasping at 
> straws here mind you.
>
> Best,
> Erick
>
> On Thu, Aug 24, 2017 at 9:02 PM, Phil Scadden <p.scad...@gns.cri.nz> wrote:
>> SOLR_HOME is /var/www/solr/data
>> The zip was actually the entire data directory which also included 
>> configsets. And yes core.properties is in var/www/solr/data/prindex (just 
>> has single line name=prindex, in it). No other cores are present.
>> The data directory should have been unzipped before the solr instance was 
>> started (I can't actually touch the machine, so I'm communicating via a deployment 
>> document, but the operator usually follows every step to the letter).
>> The sequence was:
>> mkdir /var/www/solr
>> sudo bash ./install_solr_service.sh solr-6.5.1.tgz -i /opt/local -d
>> /var/www/solr edit /etc/default/solr.in.sh to set various items. (esp
>> SOLR_HOME and to set SOLR_PID_DIR to /var/www/solr) unzip the data
>> directory service solr start.
>>
>> No other instance of solr installed.
>>


RE: write.lock file appears and solr wont open

2017-09-04 Thread Phil Scadden
We finally got a resolution to this - trivial but related to trying to do 
things by remote control. The solr process did not have the permissions to 
write to the core that was imported. When it tried to create the lock file it 
failed. The Solr code obviously assumes that file create failure means file 
already exists rather than perhaps insufficient permissions. Checking for file 
existence would result in a more informative message but I am guessing the 
test/production setup when developers are not allowed access to the servers is 
reasonably unique (I hope so anyway because it sucks).
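
For anyone who hits the same thing, a quick check along these lines would have shown the mismatch
(the service user name, pid file name, and paths are assumptions based on the install layout
described later in this thread):

  # which user does Solr run as, and who owns the imported core?
  ps -o user= -p "$(cat /var/www/solr/solr-8983.pid)"
  ls -ld /var/www/solr/data/prindex/data/index

  # if the core was unzipped as root, hand it back to the solr service user
  sudo chown -R solr:solr /var/www/solr/data/prindex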

-Original Message-
From: Erick Erickson [mailto:erickerick...@gmail.com]
Sent: Saturday, 26 August 2017 9:15 a.m.
To: solr-user <solr-user@lucene.apache.org>
Subject: Re: write.lock file appears and solr wont open

Odd. The way core discovery works, it starts at SOLR_HOME and recursively 
descends the directories. Whenever the recursion finds a "core.properties" file 
it says "Aha, this must be a core". From there it assumes the data directory is 
immediately below where it found the core.properties file in the absence of any 
dataDir overrides.

So how the write.lock file is getting preserved across Solr restarts is a 
mystery to me. Doing a "kill -9" is one way to make that happen if it is done 
at just the wrong time, but that's unlikely in what you're describing.

Are you totally sure that there were no old Solr processes running?
And there have been some issues in the past where the log display of the admin 
UI holds on to errors and displays them after the problem has been fixed. I'm 
assuming you can't query the new core, is that correct? Because if you can 
query the core then _something_ has the index open. I'm grasping at straws here 
mind you.

Best,
Erick

On Thu, Aug 24, 2017 at 9:02 PM, Phil Scadden <p.scad...@gns.cri.nz> wrote:
> SOLR_HOME is /var/www/solr/data
> The zip was actually the entire data directory which also included 
> configsets. And yes core.properties is in var/www/solr/data/prindex (just has 
> single line name=prindex, in it). No other cores are present.
> The data directory should have been unzipped before the solr instance was 
> started (I can't actually touch the machine, so I'm communicating via a deployment 
> document, but the operator usually follows every step to the letter).
> The sequence was:
> mkdir /var/www/solr
> sudo bash ./install_solr_service.sh solr-6.5.1.tgz -i /opt/local -d
> /var/www/solr edit /etc/default/solr.in.sh to set various items. (esp
> SOLR_HOME and to set SOLR_PID_DIR to /var/www/solr) unzip the data
> directory service solr start.
>
> No other instance of solr installed.
>


Re: write.lock file appears and solr wont open

2017-08-25 Thread Erick Erickson
Odd. The way core discovery works, it starts at SOLR_HOME and
recursively descends the directories. Whenever the recursion finds a
"core.properties" file it says "Aha, this must be a core". From there
it assumes the data directory is immediately below where it found the
core.properties file in the absence of any dataDir overrides.
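
In other words, discovery expects a layout roughly like this (paths are the ones from this
thread, shown purely for illustration):

  /var/www/solr/data/              <- SOLR_HOME
      prindex/
          core.properties          <- single line: name=prindex
          conf/                    <- solrconfig.xml, schema (unless a configset is used)
          data/
              index/               <- Lucene index files; write.lock appears here while a writer holds the lock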

So how the write.lock file is getting preserved across Solr restarts
is a mystery to me. Doing a "kill -9" is one way to make that happen
if it is done at just the wrong time, but that's unlikely in what
you're describing.

Are you totally sure that there were no old Solr processes running?
And there have been some issues in the past where the log display of
the admin UI holds on to errors and displays them after the problem has
been fixed. I'm assuming you can't query the new core, is that
correct? Because if you can query the core then _something_ has the
index open. I'm grasping at straws here mind you.

Best,
Erick

On Thu, Aug 24, 2017 at 9:02 PM, Phil Scadden <p.scad...@gns.cri.nz> wrote:
> SOLR_HOME is /var/www/solr/data
> The zip was actually the entire data directory which also included 
> configsets. And yes core.properties is in var/www/solr/data/prindex (just has 
> single line name=prindex, in it). No other cores are present.
> The data directory should have been unzipped before the solr instance was 
> started (I can't actually touch the machine, so I'm communicating via a deployment 
> document, but the operator usually follows every step to the letter).
> The sequence was:
> mkdir /var/www/solr
> sudo bash ./install_solr_service.sh solr-6.5.1.tgz -i /opt/local -d 
> /var/www/solr
> edit /etc/default/solr.in.sh to set various items. (esp SOLR_HOME and to set 
> SOLR_PID_DIR to /var/www/solr)
> unzip the data directory
> service solr start.
>
> No other instance of solr installed.
>


RE: write.lock file appears and solr wont open

2017-08-24 Thread Phil Scadden
SOLR_HOME is /var/www/solr/data
The zip was actually the entire data directory which also included configsets. 
And yes core.properties is in var/www/solr/data/prindex (just has single line 
name=prindex, in it). No other cores are present.
The data directory should have been unzipped before the solr instance was 
started (I can't actually touch the machine, so I'm communicating via a deployment 
document, but the operator usually follows every step to the letter).
The sequence was:
mkdir /var/www/solr
sudo bash ./install_solr_service.sh solr-6.5.1.tgz -i /opt/local -d 
/var/www/solr
edit /etc/default/solr.in.sh to set various items. (esp SOLR_HOME and to set 
SOLR_PID_DIR to /var/www/solr)
unzip the data directory
service solr start.

No other instance of solr installed.



Re: write.lock file appears and solr wont open

2017-08-24 Thread Erick Erickson
It's certainly possible to move a core like this. You say you moved
the core. Did you move the core.properties file as well? And did it
point to the _same_ directory as the original (dataDir property)? The
whole purpose of write.lock is to keep two cores from being able to
update the same index at the same time.

If you have a dataDir in the core.properties file _and_ it points to
an absolute directory then this is proper behavior. And also look at
the paths printed out. "/var/www/solr/data/prindex/data/index/" is
where the write.lock file is. This implies that somehow you have two
cores pointing to the same place.

Of course I may be way off base here

Oh, and I'd shut down the Solr instance before copying anything.

Oh2, I'd have all my Solr instances shut down before copying anything.

Oh3, _really_ examine the core.properties file(s) and be sure they
have different names assigned to each core
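
Something like this makes duplicate names or shared dataDirs easy to spot (the SOLR_HOME path
is assumed from earlier in the thread):

  find /var/www/solr/data -name core.properties -exec grep -H . {} \;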

Best,
Erick

On Thu, Aug 24, 2017 at 6:52 PM, Phil Scadden <p.scad...@gns.cri.nz> wrote:
> I am slowing moving 6.5.1 from development to production. After installing
> solr on the final test machine, I tried to supply a core by zipping up the
> data directory on development and unzipping on test.
>
> When I go to admin I get:
>
> Write.lock obviously causing a block. So deleted the write.lock and
> restarted. I get the same error! What gives? Is it not possible to move a
> core like this?
>
>
>
> Looking at the log I see:
>
> java.util.concurrent.ExecutionException:
> org.apache.solr.common.SolrException: Unable to create core [prindex]
>
>  at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>
>  at java.util.concurrent.FutureTask.get(FutureTask.java:192)
>
>  at
> org.apache.solr.core.CoreContainer.lambda$load$6(CoreContainer.java:581)
>
>  at
> org.apache.solr.core.CoreContainer$$Lambda$107/390689829.run(Unknown Source)
>
>  at
> com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:176)
>
>  at
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>
>  at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>
>  at
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
>
>  at
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$$Lambda$106/671471369.run(Unknown
> Source)
>
>  at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>
>  at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>
>  at java.lang.Thread.run(Thread.java:745)
>
> Caused by: org.apache.solr.common.SolrException: Unable to create core
> [prindex]
>
>  at
> org.apache.solr.core.CoreContainer.create(CoreContainer.java:933)
>
>  at
> org.apache.solr.core.CoreContainer.lambda$load$5(CoreContainer.java:553)
>
>  at
> org.apache.solr.core.CoreContainer$$Lambda$105/1138410383.call(Unknown
> Source)
>
>  at
> com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:197)
>
>  ... 6 more
>
> Caused by: org.apache.solr.common.SolrException:
> /var/www/solr/data/prindex/data/index/write.lock
>
>  at org.apache.solr.core.SolrCore.(SolrCore.java:965)
>
>  at org.apache.solr.core.SolrCore.(SolrCore.java:831)
>
>  at
> org.apache.solr.core.CoreContainer.create(CoreContainer.java:918)
>
>  ... 9 more
>
> Caused by: java.nio.file.NoSuchFileException:
> /var/www/solr/data/prindex/data/index/write.lock
>
>  at
> sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
>
>  at
> sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
>
>  at
> sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
>
>  at sun.nio.fs.UnixPath.toRealPath(UnixPath.java:837)
>
>  at
> org.apache.lucene.store.NativeFSLockFactory.obtainFSLock(NativeFSLockFactory.java:104)
>
>  at
> org.apache.lucene.store.FSLockFactory.obtainLock(FSLockFactory.java:41)
>
>  at
> org.apache.lucene.store.BaseDirectory.obtainLock(BaseDirectory.java:45)
>
>  at
> org.apache.lucene.store.FilterDirectory.obtainLock(FilterDirectory.java:104)
>
>  at
> org.apache.lucene.index.IndexWriter.isLocked(IndexWriter.java:4773)
>
>  at org.apache.solr.core.SolrCore.initIndex(SolrCore.java:710)
>
>  at org.apache.solr.core.SolrCore.(SolrCore.java:912)
>
>
>


write.lock file appears and solr wont open

2017-08-24 Thread Phil Scadden
I am slowing moving 6.5.1 from development to production. After installing solr 
on the final test machine, I tried to supply a core by zipping up the data 
directory on development and unzipping on test.
When I go to admin I get:
[inline screenshot of the admin UI error omitted]
Write.lock obviously causing a block. So deleted the write.lock and restarted. 
I get the same error! What gives? Is it not possible to move a core like this?

Looking at the log I see:
java.util.concurrent.ExecutionException: org.apache.solr.common.SolrException: 
Unable to create core [prindex]
 at java.util.concurrent.FutureTask.report(FutureTask.java:122)
 at java.util.concurrent.FutureTask.get(FutureTask.java:192)
 at 
org.apache.solr.core.CoreContainer.lambda$load$6(CoreContainer.java:581)
 at 
org.apache.solr.core.CoreContainer$$Lambda$107/390689829.run(Unknown Source)
 at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:176)
 at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
 at java.util.concurrent.FutureTask.run(FutureTask.java:266)
 at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
 at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$$Lambda$106/671471369.run(Unknown
 Source)
 at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
 at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.solr.common.SolrException: Unable to create core [prindex]
 at org.apache.solr.core.CoreContainer.create(CoreContainer.java:933)
 at 
org.apache.solr.core.CoreContainer.lambda$load$5(CoreContainer.java:553)
 at 
org.apache.solr.core.CoreContainer$$Lambda$105/1138410383.call(Unknown Source)
 at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:197)
 ... 6 more
Caused by: org.apache.solr.common.SolrException: 
/var/www/solr/data/prindex/data/index/write.lock
 at org.apache.solr.core.SolrCore.<init>(SolrCore.java:965)
 at org.apache.solr.core.SolrCore.<init>(SolrCore.java:831)
 at org.apache.solr.core.CoreContainer.create(CoreContainer.java:918)
 ... 9 more
Caused by: java.nio.file.NoSuchFileException: 
/var/www/solr/data/prindex/data/index/write.lock
 at 
sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
 at 
sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
 at 
sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
 at sun.nio.fs.UnixPath.toRealPath(UnixPath.java:837)
 at 
org.apache.lucene.store.NativeFSLockFactory.obtainFSLock(NativeFSLockFactory.java:104)
 at 
org.apache.lucene.store.FSLockFactory.obtainLock(FSLockFactory.java:41)
 at 
org.apache.lucene.store.BaseDirectory.obtainLock(BaseDirectory.java:45)
 at 
org.apache.lucene.store.FilterDirectory.obtainLock(FilterDirectory.java:104)
 at org.apache.lucene.index.IndexWriter.isLocked(IndexWriter.java:4773)
 at org.apache.solr.core.SolrCore.initIndex(SolrCore.java:710)
 at org.apache.solr.core.SolrCore.<init>(SolrCore.java:912)



Re: write.lock

2015-09-22 Thread Alessandro Benedetti
Hi Mark,
let's summarise a little bit:

First of all, you are using the IndexBasedSpellChecker, which is what is
usually called "based on the sidecar index".
Basically you are building a mini Lucene index to be used by the
spellcheck component.
It behaves as a classic Lucene index, so it needs commits for visibility
and needs to be updated with the terms coming from the main index.
From the official doc:

*" The spellcheckIndexDir defines the location of the directory that holds
the spellcheck index, while the field defines the source field (defined
in schema.xml) for spell check terms. When choosing a field for the
spellcheck index, it's best to avoid a heavily processed field to get more
accurate results. If the field has many word variations from processing
synonyms and/or stemming, the dictionary will be created with those
variations in addition to more valid spelling data."*
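
One way to trigger a (re)build of that sidecar index is a request along these lines (host and
handler path are taken from later in this thread; spellcheck.q is just an example term):

  curl "http://localhost:8983/solr/EventLog/ELspell?spellcheck=true&spellcheck.build=true&spellcheck.q=test"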

If you don't want to keep a parallel sidecar index, you can go with the
DirectSolrSpellChecker, which works on the main index (building an FSA in memory
to provide the misspelling suggestions).
From the documentation:
" The DirectSolrSpellChecker uses terms from the Solr index without
building a parallel index like the IndexBasedSpellChecker. This spell
checker has the benefit of not having to be built regularly, meaning that
the terms are always up-to-date with terms in the index"
I am not so sure about this honestly, because I think it needs to be built
from time to time, even if the data structures are in memory.
I will double check this.

Regarding the file-based one, it will be similar to the index-based one, and will
produce a Lucene index out of the original dictionary file.

I would honestly use 3 different directories, to avoid lock problems.
I would suggest putting them in the data directory (as siblings of the index
dir and tlog dir), or wherever you prefer.

Please give more details about the misspellings you are not getting suggestions for.
In that case, remember that the number of edits for the Levenshtein
distance has a well-defined maximum (it should be 2 edits for all the
spellchecker implementations).

Cheers




2015-09-22 14:19 GMT+01:00 Mark Fenbers :

> Mikhail,
>
> Yes, both the Index-based and File-based spell checkers reference the same
> index location.  My understanding is they were supposed to.  I didn't
> realize this was for writing indexes.  Rather, I thought this was for
> reading the main index.  So, I need to make 3 separate locations for
> indexes (main, index-based and File-based)?? Can I make them subdirs of the
> main index (in /localapps/dev/EventLog/index)?  Or would that mess up the
> main index?
>
> Thanks for raising my awareness of these errors!
> Mark
>
>
> On 9/21/2015 5:07 PM, Mikhail Khludnev wrote:
>
>> Both of these guys below try to write spell index into the same dir.
>> Don't they?
>>
>> To make it clear, it's not possible so far.
>>
>>   <str name="classname">solr.IndexBasedSpellChecker</str>
>>   <str name="spellcheckIndexDir">/localapps/dev/EventLog/index</str>
>>
>>
>>   <str name="classname">solr.FileBasedSpellChecker</str>
>>   <str name="spellcheckIndexDir">/localapps/dev/EventLog/index</str>
>>
>> Also, can you make sure that this path doesn't lead to main index dir.
>>
>>
>> On Mon, Sep 21, 2015 at 5:13 PM, Mark Fenbers 
>> wrote:
>>
>>
>>
>


-- 
--

Benedetti Alessandro
Visiting card - http://about.me/alessandro_benedetti
Blog - http://alexbenedetti.blogspot.co.uk

"Tyger, tyger burning bright
In the forests of the night,
What immortal hand or eye
Could frame thy fearful symmetry?"

William Blake - Songs of Experience -1794 England


Re: write.lock

2015-09-22 Thread Mark Fenbers

Mikhail,

Yes, both the Index-based and File-based spell checkers reference the 
same index location.  My understanding is they were supposed to.  I 
didn't realize this was for writing indexes.  Rather, I thought this was 
for reading the main index.  So, I need to make 3 separate locations for 
indexes (main, index-based and File-based)?? Can I make them subdirs of 
the main index (in /localapps/dev/EventLog/index)?  Or would that mess 
up the main index?


Thanks for raising my awareness of these errors!
Mark

On 9/21/2015 5:07 PM, Mikhail Khludnev wrote:

Both of these guys below try to write spell index into the same dir. Don't they?

To make it clear, it's not possible so far.

  
   <str name="classname">solr.IndexBasedSpellChecker</str>
   <str name="spellcheckIndexDir">/localapps/dev/EventLog/index</str>


  <str name="classname">solr.FileBasedSpellChecker</str>
  <str name="spellcheckIndexDir">/localapps/dev/EventLog/index</str>

Also, can you make sure that this path doesn't lead to main index dir.


On Mon, Sep 21, 2015 at 5:13 PM, Mark Fenbers  wrote:






Re: write.lock

2015-09-22 Thread Mark Fenbers
OK, I gave each of these spellcheckIndexDir entries a distinct location -- 
distinct from each other and from the main index.  This has resolved the 
write.lock problem when I attempt a spellcheck.build!  Thanks for the help!


I looked in the new spellcheckIndexDir location and the directory is 
populated with a few files.  So it seems "spellcheck.build" worked, but 
I am still not getting any hits when I purposefully misspell a word.  
But I'll post this problem with more details in a separate post.


Mark

On 9/21/2015 5:07 PM, Mikhail Khludnev wrote:

Both of these guys below try to write spell index into the same dir. Don't they?

To make it clear, it's not possible so far.

  
   <str name="classname">solr.IndexBasedSpellChecker</str>
   <str name="spellcheckIndexDir">/localapps/dev/EventLog/index</str>


  <str name="classname">solr.FileBasedSpellChecker</str>
  <str name="spellcheckIndexDir">/localapps/dev/EventLog/index</str>

Also, can you make sure that this path doesn't lead to main index dir.


On Mon, Sep 21, 2015 at 5:13 PM, Mark Fenbers <mark.fenb...@noaa.gov> wrote:


A snippet of my solrconfig.xml is attached.  The snippet only contains the
Spell checking sections (for brevity) which should be sufficient for you to
see all the pertinent info you seek.

Thanks!
Mark


On 9/19/2015 3:29 AM, Mikhail Khludnev wrote:


Mark,

What's your solrconfig.xml?

On Sat, Sep 19, 2015 at 12:34 AM, Mark Fenbers <mark.fenb...@noaa.gov>
wrote:

Greetings,

Whenever I try to build my spellcheck index
(params.set("spellcheck.build", true); or put a check in the
spellcheck.build box in the web interface) I get the following
stacktrace.
Removing the write.lock file does no good.  The message comes right back
anyway.  I read in a post that increasing writeLockTimeout would help.
It
did not help for me even increasing it to 20,000 msec.  If I don't build,
then my resultset count is always 0, i.e., empty results.  What could be
causing this?

Mark










Re: write.lock

2015-09-21 Thread Mark Fenbers
A snippet of my solrconfig.xml is attached.  The snippet only contains 
the Spell checking sections (for brevity) which should be sufficient for 
you to see all the pertinent info you seek.


Thanks!
Mark

On 9/19/2015 3:29 AM, Mikhail Khludnev wrote:

Mark,

What's your solrconfig.xml?

On Sat, Sep 19, 2015 at 12:34 AM, Mark Fenbers <mark.fenb...@noaa.gov>
wrote:


Greetings,

Whenever I try to build my spellcheck index
(params.set("spellcheck.build", true); or put a check in the
spellcheck.build box in the web interface) I get the following stacktrace.
Removing the write.lock file does no good.  The message comes right back
anyway.  I read in a post that increasing writeLockTimeout would help.  It
did not help for me even increasing it to 20,000 msec.  If I don't build,
then my resultset count is always 0, i.e., empty results.  What could be
causing this?

Mark






 
  

text_en





  index
  logtext
  
  solr.IndexBasedSpellChecker
  /localapps/dev/EventLog/index
  true  
  
  
  
  0.5
  
  2
  
  1
  
  5
  
  4
  
  0.01
  




  wordbreak
  solr.WordBreakSolrSpellChecker
  logtext
  true
  true
  10










   
 solr.FileBasedSpellChecker
logtext 
FileDict
 /usr/share/dict/words
 UTF-8
 /localapps/dev/EventLog/index
  

  
  0.5
  
  2
  
  1
  
  5
  
  4
  
  0.01
  
   
  
  

  
  

  


  FileDict

  on
  true
  10
  5
  5
  true
  true
  10
  5


  spellcheck

  




Re: write.lock

2015-09-21 Thread Mikhail Khludnev
Both of these guys below try to write spell index into the same dir. Don't they?

To make it clear, it's not possible so far.

 
  <str name="classname">solr.IndexBasedSpellChecker</str>
  <str name="spellcheckIndexDir">/localapps/dev/EventLog/index</str>


  <str name="classname">solr.FileBasedSpellChecker</str>
  <str name="spellcheckIndexDir">/localapps/dev/EventLog/index</str>

Also, can you make sure that this path doesn't lead to main index dir.


On Mon, Sep 21, 2015 at 5:13 PM, Mark Fenbers <mark.fenb...@noaa.gov> wrote:

> A snippet of my solrconfig.xml is attached.  The snippet only contains the
> Spell checking sections (for brevity) which should be sufficient for you to
> see all the pertinent info you seek.
>
> Thanks!
> Mark
>
>
> On 9/19/2015 3:29 AM, Mikhail Khludnev wrote:
>
>> Mark,
>>
>> What's your solrconfig.xml?
>>
>> On Sat, Sep 19, 2015 at 12:34 AM, Mark Fenbers <mark.fenb...@noaa.gov>
>> wrote:
>>
>> Greetings,
>>>
>>> Whenever I try to build my spellcheck index
>>> (params.set("spellcheck.build", true); or put a check in the
>>> spellcheck.build box in the web interface) I get the following
>>> stacktrace.
>>> Removing the write.lock file does no good.  The message comes right back
>>> anyway.  I read in a post that increasing writeLockTimeout would help.
>>> It
>>> did not help for me even increasing it to 20,000 msec.  If I don't build,
>>> then my resultset count is always 0, i.e., empty results.  What could be
>>> causing this?
>>>
>>> Mark
>>>
>>>
>>
>>
>


-- 
Sincerely yours
Mikhail Khludnev
Principal Engineer,
Grid Dynamics

<http://www.griddynamics.com>
<mkhlud...@griddynamics.com>


Re: write.lock

2015-09-19 Thread Mikhail Khludnev
Mark,

What's your solrconfig.xml?

On Sat, Sep 19, 2015 at 12:34 AM, Mark Fenbers <mark.fenb...@noaa.gov>
wrote:

> Greetings,
>
> Whenever I try to build my spellcheck index
> (params.set("spellcheck.build", true); or put a check in the
> spellcheck.build box in the web interface) I get the following stacktrace.
> Removing the write.lock file does no good.  The message comes right back
> anyway.  I read in a post that increasing writeLockTimeout would help.  It
> did not help for me even increasing it to 20,000 msec.  If I don't build,
> then my resultset count is always 0, i.e., empty results.  What could be
> causing this?
>
> Mark
>



-- 
Sincerely yours
Mikhail Khludnev
Principal Engineer,
Grid Dynamics

<http://www.griddynamics.com>
<mkhlud...@griddynamics.com>


write.lock

2015-09-18 Thread Mark Fenbers

Greetings,

Whenever I try to build my spellcheck index 
(params.set("spellcheck.build", true); or put a check in the 
spellcheck.build box in the web interface) I get the following 
stacktrace.  Removing the write.lock file does no good.  The message 
comes right back anyway.  I read in a post that increasing 
writeLockTimeout would help.  It did not help for me even increasing it 
to 20,000 msec.  If I don't build, then my resultset count is always 0, 
i.e., empty results.  What could be causing this?


Mark

http://localhost:8983/solr/EventLog/ELspell?df=logtext=xml=true=true=true=Sunday=true





  500
  42


  Lock held by this virtual machine: 
/localapps/dev/EventLog/index/write.lock
  org.apache.lucene.store.LockObtainFailedException: Lock 
held by this virtual machine: /localapps/dev/EventLog/index/write.lock
at 
org.apache.lucene.store.NativeFSLockFactory.obtainFSLock(NativeFSLockFactory.java:127)
at 
org.apache.lucene.store.FSLockFactory.obtainLock(FSLockFactory.java:41)
at 
org.apache.lucene.store.BaseDirectory.obtainLock(BaseDirectory.java:45)
at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:775)
at 
org.apache.lucene.search.spell.SpellChecker.clearIndex(SpellChecker.java:455)
at 
org.apache.solr.spelling.FileBasedSpellChecker.build(FileBasedSpellChecker.java:70)
at 
org.apache.solr.handler.component.SpellCheckComponent.prepare(SpellCheckComponent.java:124)
at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:251)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:2068)
at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:669)
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:462)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:210)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:179)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
at 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
at 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
at org.eclipse.jetty.server.Server.handle(Server.java:499)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
at 
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
at 
org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
at java.lang.Thread.run(Thread.java:745)

    </str>
    <int name="code">500</int>
  </lst>
</response>






Re: what does this write.lock does not exist mean??

2014-12-23 Thread brian4
Haven't seen this particular problem before, but it sounds like it could be a
problem with permissions or data size limits - it may be worth looking into.

The write.lock file is used when an index is being modified - it is how
lucene handles concurrent attempts to modify the index - a writer obtains
the lock on the index
(http://wiki.apache.org/lucene-java/LuceneFAQ#What_is_the_purpose_of_write.lock_file.2C_when_is_it_used.2C_and_by_which_classes.3F).
  

Did you check if the file is actually there (in the data/index folder of
your solr core)?  If it is, then maybe the app has permission to create the
file, but the created file does not have permissions to read and/or modify
it, so the app thinks it exists but cannot access it?  

If it's not there, maybe it is being deleted unexpectedly by some other
process/person, or alternatively, maybe it can't even create the file -
either it doesn't have permissions for that directory or there is no more
free space.

I've seen this issue several times before of running out of allowed disk
space preventing index files from being created. It's kind of like the index
is locked in its current state - or at least can't be updated all the way. 
Are you able to add a large number of documents to the index and then
confirm that they have actually been added (search for them by ID for
instance)?
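
For a quick sanity check of the permission and free-space theories, something like the
following can be run on the box hosting the core (plain java.io calls; the path is just a
placeholder taken from the error message in this thread):

import java.io.File;

public class WriteLockCheck {
    public static void main(String[] args) {
        // placeholder: point this at the core's data/index directory
        File indexDir = new File("/var/apache/my-solr-slave/solr/coreA/data/index");
        File lock = new File(indexDir, "write.lock");

        System.out.println("lock exists:    " + lock.exists());
        System.out.println("lock readable:  " + lock.canRead());
        System.out.println("lock writable:  " + lock.canWrite());
        System.out.println("dir writable:   " + indexDir.canWrite());
        // bytes still usable on the partition holding the index
        System.out.println("usable space:   " + indexDir.getUsableSpace());
    }
}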







--
View this message in context: 
http://lucene.472066.n3.nabble.com/what-does-this-write-lock-does-not-exist-mean-tp4175291p4175773.html
Sent from the Solr - User mailing list archive at Nabble.com.


what does this write.lock does not exist mean??

2014-12-19 Thread solr-user
I looked for messages on the following error but dont see anything in nabble. 
Does anyone know what this error means and how to correct it??

SEVERE: java.lang.IllegalArgumentException:
/var/apache/my-solr-slave/solr/coreA/data/index/write.lock does not exist

I also occasionally see error messages about specific index files such as
this:

SEVERE: null:java.lang.IllegalArgumentException:
/var/apache/my_solr-slave/solr/coreA/data/index/_md39_1.del does not exist

I am using Solr 4.0.0, with Java 1.7.0_11-b21 and tomcat 7.0.34, running on
a 12GB centos box; we have master/slave setup with multiple slave searchers
per indexer.

any thoughts on this would be appreciated



--
View this message in context: 
http://lucene.472066.n3.nabble.com/what-does-this-write-lock-does-not-exist-mean-tp4175291.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: BlendedInfixSuggester index write.lock failures on core reload

2014-08-19 Thread Varun Thacker
Hi,

Yes this indeed is a bug. I am currently trying to get a patch for it.

This is the Jira issue - https://issues.apache.org/jira/browse/SOLR-6246


On Thu, Aug 14, 2014 at 7:52 PM, Zisis Tachtsidis zist...@runbox.com
wrote:

 Hi all,

 I'm using Solr 4.9.0 and have setup a spellcheck component for returning
 suggestions. The  configuration inside my solr.SpellCheckComponent has as
 follows.

 <str name="classname">org.apache.solr.spelling.suggest.Suggester</str>
 <str name="lookupImpl">org.apache.solr.spelling.suggest.fst.BlendedInfixLookupFactory</str>
 along with a custom value for
 <str name="indexPath"/>

 The server is starting properly and data gets indexed but once i hit the
 'Reload' button from 'Core Admin' I get the following error.

 null:org.apache.solr.common.SolrException: Error handling 'reload' action
 at

 org.apache.solr.handler.admin.CoreAdminHandler.handleReloadAction(CoreAdminHandler.java:791)
 at

 org.apache.solr.handler.admin.CoreAdminHandler.handleRequestInternal(CoreAdminHandler.java:224)
 at

 org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:187)
 at

 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
 at

 org.apache.solr.servlet.SolrDispatchFilter.handleAdminRequest(SolrDispatchFilter.java:729)
 at

 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:258)
 at

 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:207)
 at

 com.caucho.server.dispatch.FilterFilterChain.doFilter(FilterFilterChain.java:89)
 at

 com.caucho.server.webapp.WebAppFilterChain.doFilter(WebAppFilterChain.java:156)
 at

 com.caucho.server.webapp.AccessLogFilterChain.doFilter(AccessLogFilterChain.java:95)
 at

 com.caucho.server.dispatch.ServletInvocation.service(ServletInvocation.java:289)
 at
 com.caucho.server.http.HttpRequest.handleRequest(HttpRequest.java:838)
 at

 com.caucho.network.listen.TcpSocketLink.dispatchRequest(TcpSocketLink.java:1345)
 at

 com.caucho.network.listen.TcpSocketLink.handleRequest(TcpSocketLink.java:1301)
 at

 com.caucho.network.listen.TcpSocketLink.handleRequestsImpl(TcpSocketLink.java:1285)
 at

 com.caucho.network.listen.TcpSocketLink.handleRequests(TcpSocketLink.java:1193)
 at

 com.caucho.network.listen.TcpSocketLink.handleAcceptTaskImpl(TcpSocketLink.java:992)
 at
 com.caucho.network.listen.ConnectionTask.runThread(ConnectionTask.java:117)
 at
 com.caucho.network.listen.ConnectionTask.run(ConnectionTask.java:93)
 at

 com.caucho.network.listen.SocketLinkThreadLauncher.handleTasks(SocketLinkThreadLauncher.java:169)
 at

 com.caucho.network.listen.TcpSocketAcceptThread.run(TcpSocketAcceptThread.java:61)
 at
 com.caucho.env.thread2.ResinThread2.runTasks(ResinThread2.java:173)
 at com.caucho.env.thread2.ResinThread2.run(ResinThread2.java:118)
 Caused by: org.apache.solr.common.SolrException: Unable to reload core:
 autocomplete
 at
 org.apache.solr.core.CoreContainer.recordAndThrow(CoreContainer.java:911)
 at
 org.apache.solr.core.CoreContainer.reload(CoreContainer.java:660)
 at

 org.apache.solr.handler.admin.CoreAdminHandler.handleReloadAction(CoreAdminHandler.java:789)
 ... 24 more
 Caused by: org.apache.solr.common.SolrException
 at org.apache.solr.core.SolrCore.init(SolrCore.java:868)
 at org.apache.solr.core.SolrCore.reload(SolrCore.java:426)
 at
 org.apache.solr.core.CoreContainer.reload(CoreContainer.java:650)
 ... 25 more
 Caused by: java.lang.RuntimeException
 at

 org.apache.solr.spelling.suggest.fst.BlendedInfixLookupFactory.create(BlendedInfixLookupFactory.java:102)
 at
 org.apache.solr.spelling.suggest.Suggester.init(Suggester.java:105)
 at

 org.apache.solr.handler.component.SpellCheckComponent.inform(SpellCheckComponent.java:636)
 at
 org.apache.solr.core.SolrResourceLoader.inform(SolrResourceLoader.java:651)
 at org.apache.solr.core.SolrCore.init(SolrCore.java:851)
 ... 27 more

 Debugging Solr code I found out that the original exception comes from the
 IndexWriter construction inside AnalyzingInfixSuggester.java ( more
 specifically org.apache.lucene.store.Lock:89). The exception is Lock
 obtain
 timed out: NativeFSLock@$indexPath/write.lock but seems to be hidden by
 the
 RuntimeException thrown by BlendedInfixLookupFactory.

 If I use the default indexPath I get another error (write lock related
 again) in the logs.
 org.apache.lucene.store.LockObtainFailedException: Lock obtain timed out:
 NativeFSLock@$indexPath/blendedInfixSuggesterIndexDir/write.lock
 at org.apache.lucene.store.Lock.obtain(Lock.java:89)
 at org.apache.lucene.index.IndexWriter.init(IndexWriter.java:724

BlendedInfixSuggester index write.lock failures on core reload

2014-08-14 Thread Zisis Tachtsidis
Hi all, 

I'm using Solr 4.9.0 and have setup a spellcheck component for returning
suggestions. The  configuration inside my solr.SpellCheckComponent has as
follows. 

<str name="classname">org.apache.solr.spelling.suggest.Suggester</str>
<str name="lookupImpl">org.apache.solr.spelling.suggest.fst.BlendedInfixLookupFactory</str>
along with a custom value for
<str name="indexPath"/>

The server is starting properly and data gets indexed but once i hit the
'Reload' button from 'Core Admin' I get the following error.

null:org.apache.solr.common.SolrException: Error handling 'reload' action
at
org.apache.solr.handler.admin.CoreAdminHandler.handleReloadAction(CoreAdminHandler.java:791)
at
org.apache.solr.handler.admin.CoreAdminHandler.handleRequestInternal(CoreAdminHandler.java:224)
at
org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:187)
at
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
at
org.apache.solr.servlet.SolrDispatchFilter.handleAdminRequest(SolrDispatchFilter.java:729)
at
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:258)
at
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:207)
at
com.caucho.server.dispatch.FilterFilterChain.doFilter(FilterFilterChain.java:89)
at
com.caucho.server.webapp.WebAppFilterChain.doFilter(WebAppFilterChain.java:156)
at
com.caucho.server.webapp.AccessLogFilterChain.doFilter(AccessLogFilterChain.java:95)
at
com.caucho.server.dispatch.ServletInvocation.service(ServletInvocation.java:289)
at 
com.caucho.server.http.HttpRequest.handleRequest(HttpRequest.java:838)
at
com.caucho.network.listen.TcpSocketLink.dispatchRequest(TcpSocketLink.java:1345)
at
com.caucho.network.listen.TcpSocketLink.handleRequest(TcpSocketLink.java:1301)
at
com.caucho.network.listen.TcpSocketLink.handleRequestsImpl(TcpSocketLink.java:1285)
at
com.caucho.network.listen.TcpSocketLink.handleRequests(TcpSocketLink.java:1193)
at
com.caucho.network.listen.TcpSocketLink.handleAcceptTaskImpl(TcpSocketLink.java:992)
at
com.caucho.network.listen.ConnectionTask.runThread(ConnectionTask.java:117)
at com.caucho.network.listen.ConnectionTask.run(ConnectionTask.java:93)
at
com.caucho.network.listen.SocketLinkThreadLauncher.handleTasks(SocketLinkThreadLauncher.java:169)
at
com.caucho.network.listen.TcpSocketAcceptThread.run(TcpSocketAcceptThread.java:61)
at com.caucho.env.thread2.ResinThread2.runTasks(ResinThread2.java:173)
at com.caucho.env.thread2.ResinThread2.run(ResinThread2.java:118)
Caused by: org.apache.solr.common.SolrException: Unable to reload core:
autocomplete
at
org.apache.solr.core.CoreContainer.recordAndThrow(CoreContainer.java:911)
at org.apache.solr.core.CoreContainer.reload(CoreContainer.java:660)
at
org.apache.solr.handler.admin.CoreAdminHandler.handleReloadAction(CoreAdminHandler.java:789)
... 24 more
Caused by: org.apache.solr.common.SolrException
at org.apache.solr.core.SolrCore.init(SolrCore.java:868)
at org.apache.solr.core.SolrCore.reload(SolrCore.java:426)
at org.apache.solr.core.CoreContainer.reload(CoreContainer.java:650)
... 25 more
Caused by: java.lang.RuntimeException
at
org.apache.solr.spelling.suggest.fst.BlendedInfixLookupFactory.create(BlendedInfixLookupFactory.java:102)
at org.apache.solr.spelling.suggest.Suggester.init(Suggester.java:105)
at
org.apache.solr.handler.component.SpellCheckComponent.inform(SpellCheckComponent.java:636)
at
org.apache.solr.core.SolrResourceLoader.inform(SolrResourceLoader.java:651)
at org.apache.solr.core.SolrCore.init(SolrCore.java:851)
... 27 more

Debugging Solr code I found out that the original exception comes from the
IndexWriter construction inside AnalyzingInfixSuggester.java ( more
specifically org.apache.lucene.store.Lock:89). The exception is Lock obtain
timed out: NativeFSLock@$indexPath/write.lock but seems to be hidden by the
RuntimeException thrown by BlendedInfixLookupFactory.

If I use the default indexPath I get another error (write lock related
again) in the logs.
org.apache.lucene.store.LockObtainFailedException: Lock obtain timed out:
NativeFSLock@$indexPath/blendedInfixSuggesterIndexDir/write.lock
at org.apache.lucene.store.Lock.obtain(Lock.java:89)
at org.apache.lucene.index.IndexWriter.init(IndexWriter.java:724)
at
org.apache.lucene.search.suggest.analyzing.AnalyzingInfixSuggester.build(AnalyzingInfixSuggester.java:222)
at org.apache.lucene.search.suggest.Lookup.build(Lookup.java:190)
at org.apache.solr.spelling.suggest.Suggester.build(Suggester.java:142)
at
org.apache.solr.handler.component.SpellCheckComponent$SpellCheckerListener.buildSpellIndex(SpellCheckComponent.java:737

I still can create an index when the write.lock exists

2014-02-28 Thread Chen Lion
Dear all,
I have a problem that I can't understand.

I use solr 4.6.1, and 2 nodes, one leader and one follower, both have the
write.lock file.

I did not think I could create an index since the write.lock file exists,
right?

But I could, why?

Jiahui Chen


Re: I still can create an index when the write.lock exists

2014-02-28 Thread Mark Miller
I’m pretty sure the default config will unlock on startup.

- Mark

http://about.me/markrmiller

On Feb 28, 2014, at 3:50 AM, Chen Lion chnlio...@gmail.com wrote:

 Dear all,
 I hava a problem i can't understand it.
 
 I use solr 4.6.1, and 2 nodes, one leader and one follower, both have the
 write.lock file.
 
 I did not think i could create index since the write.lock file exists,
 right?
 
 But I could, why?
 
 Jiahui Chen



Re: solr 4.3: write.lock is not removed

2013-05-30 Thread bbarani
How are you indexing the documents? Are you using indexing program?

The below post discusses the same issue..

http://lucene.472066.n3.nabble.com/removing-write-lock-file-in-solr-after-indexing-td3699356.html



--
View this message in context: 
http://lucene.472066.n3.nabble.com/solr-4-3-write-lock-is-not-removed-tp4066908p4067101.html
Sent from the Solr - User mailing list archive at Nabble.com.


RE: solr 4.3: write.lock is not removed

2013-05-30 Thread Zhang, Lisheng
Hi,

We just use CURL from PHP code to submit indexing request, like: 

/update?commit=true..

This worked well in solr 3.6.1. I saw the link you showed and really appreciate it
(if there is no other choice I will change the Java source code, but I hope there is a better 
way..)?

Thanks very much for helps, Lisheng

-Original Message-
From: bbarani [mailto:bbar...@gmail.com]
Sent: Thursday, May 30, 2013 9:45 AM
To: solr-user@lucene.apache.org
Subject: Re: solr 4.3: write.lock is not removed


How are you indexing the documents? Are you using indexing program?

The below post discusses the same issue..

http://lucene.472066.n3.nabble.com/removing-write-lock-file-in-solr-after-indexing-td3699356.html



--
View this message in context: 
http://lucene.472066.n3.nabble.com/solr-4-3-write-lock-is-not-removed-tp4066908p4067101.html
Sent from the Solr - User mailing list archive at Nabble.com.


RE: solr 4.3: write.lock is not removed

2013-05-30 Thread Zhang, Lisheng
I did more tests and got more info: the basic setting is that we created the core 
from the PHP CURL
API where we define:

schema
config
instanceDir=my_solr_home
dataDir=my_solr_home/data/new_collection_name

In solr 3.6.1 we do not need to define schema/config because the conf folder is not 
inside each
collection. 

1/ Indexing works OK but write.lock is not removed (we use 
/update?commit=true..)
2/ Shutdown tomcat, I saw write.lock is gone
3/ Restart Tomcat, indexed data was created at the instanceDir/data level, with 
some warning
   messages. It seems that in solr.xml, dataDir is not defined?

Thanks very much for helps, Lisheng




-Original Message-
From: Zhang, Lisheng [mailto:lisheng.zh...@broadvision.com]
Sent: Thursday, May 30, 2013 10:57 AM
To: solr-user@lucene.apache.org
Subject: RE: solr 4.3: write.lock is not removed


Hi,

We just use CURL from PHP code to submit indexing request, like: 

/update?commit=true..

This worked well in solr 3.6.1. I saw the link you showed and really appreciate
(if no other choice I will change java source code but hope there is a better 
way..)?

Thanks very much for helps, Lisheng

-Original Message-
From: bbarani [mailto:bbar...@gmail.com]
Sent: Thursday, May 30, 2013 9:45 AM
To: solr-user@lucene.apache.org
Subject: Re: solr 4.3: write.lock is not removed


How are you indexing the documents? Are you using indexing program?

The below post discusses the same issue..

http://lucene.472066.n3.nabble.com/removing-write-lock-file-in-solr-after-indexing-td3699356.html



--
View this message in context: 
http://lucene.472066.n3.nabble.com/solr-4-3-write-lock-is-not-removed-tp4066908p4067101.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: solr 4.3: write.lock is not removed

2013-05-30 Thread Chris Hostetter

: I recently upgraded solr from 3.6.1 to 4.3, it works well, but I noticed that 
after finishing
: indexing 
:  
: write.lock
:  
: is NOT removed. Later if I index again it still works OK. Only after I 
shutdown Tomcat 
: then write.lock is removed. This behavior caused some problem like I could 
not use luke
: to observe indexed data.

IIRC, this was an intentional change.  In older versions of Solr the 
IndexWriter was only opened if/when updates needed to be made, but that 
made it impossible to safely take advantage of some internal optimizations 
related to NRT IndexReader reloading, so the logic was modified to always 
keep the IndexWriter open as long as the SolrCore is loaded.

In general, your past behavior of pointing luke at a live solr index could 
have also produced problems if updates came into solr while luke had the 
write lock active.


-Hoss


RE: solr 4.3: write.lock is not removed

2013-05-30 Thread Zhang, Lisheng
Hi,

Thanks very much for the explanation! Could we configure it to get the old behavior back?
I ask about this option because our app has many small cores, so we prefer to 
create/close the writer on the fly (otherwise we may have memory issues quickly).

We also do not need NRT for now.

Thanks very much for helps, Lisheng

-Original Message-
From: Chris Hostetter [mailto:hossman_luc...@fucit.org]
Sent: Thursday, May 30, 2013 11:35 AM
To: solr-user@lucene.apache.org
Subject: Re: solr 4.3: write.lock is not removed



: I recently upgraded solr from 3.6.1 to 4.3, it works well, but I noticed that 
after finishing
: indexing 
:  
: write.lock
:  
: is NOT removed. Later if I index again it still works OK. Only after I 
shutdown Tomcat 
: then write.lock is removed. This behavior caused some problem like I could 
not use luke
: to observe indexed data.

IIRC, this was an intentional change.  In older versions of Solr the 
IndexWriter was only opened if/when updates needed to be made, but that 
made it impossible to safely take advantage of some internal optimizations 
related to NRT IndexReader reloading, so the logic was modified to always 
keep the IndexWriter open as long as the SolrCore is loaded.

In general, your past behavior of pointing luke at a live solr index could 
have also produced problems if updates came into solr while luke had the 
write lock active.


-Hoss


RE: solr 4.3: write.lock is not removed

2013-05-30 Thread Zhang, Lisheng
I did more tests and it seems that this is still a bug (previous issue 3/):

1/ Create a core by CURL command with dataDir=some_folder; the core is created OK
   and later indexing worked OK also.

2/ But in solr.xml, dataDir is not defined in the <core> element.

3/ After restarting Solr, the dataDir information is lost and Solr issues a WARN.

4/ If I manually add the dataDir attribute to the <core> element in solr.xml after the core
   is created, restarting Solr will be fine.

Thanks very much for helps, Lisheng


-Original Message-
From: Zhang, Lisheng 
Sent: Thursday, May 30, 2013 11:22 AM
To: 'solr-user@lucene.apache.org'
Subject: RE: solr 4.3: write.lock is not removed


I did more tests and get more info: the basic setting is that we created core 
from PHP CURl
API where we define:

schema
config
instanceDir=my_solr_home
dataDir=my_solr_home/data/new_collection_name

In solr 3.6.1 we donot need to define schema/config because conf folder is not 
inside each
collection. 

1/ Indexing works OK but write.lock is not removed (we use 
/update?commit=true..)
2/ Shutdown tomcat, I saw write.lock is gone
3/ Restart Tomcat, indexed data was created at the instanceDir/data level, with 
some warning
   messages. It seems that in solr.xml, dataDir is not defined?

Thanks very much for helps, Lisheng




-Original Message-
From: Zhang, Lisheng [mailto:lisheng.zh...@broadvision.com]
Sent: Thursday, May 30, 2013 10:57 AM
To: solr-user@lucene.apache.org
Subject: RE: solr 4.3: write.lock is not removed


Hi,

We just use CURL from PHP code to submit indexing request, like: 

/update?commit=true..

This worked well in solr 3.6.1. I saw the link you showed and really appreciate
(if no other choice I will change java source code but hope there is a better 
way..)?

Thanks very much for helps, Lisheng

-Original Message-
From: bbarani [mailto:bbar...@gmail.com]
Sent: Thursday, May 30, 2013 9:45 AM
To: solr-user@lucene.apache.org
Subject: Re: solr 4.3: write.lock is not removed


How are you indexing the documents? Are you using indexing program?

The below post discusses the same issue..

http://lucene.472066.n3.nabble.com/removing-write-lock-file-in-solr-after-indexing-td3699356.html



--
View this message in context: 
http://lucene.472066.n3.nabble.com/solr-4-3-write-lock-is-not-removed-tp4066908p4067101.html
Sent from the Solr - User mailing list archive at Nabble.com.


solr 4.3: write.lock is not removed

2013-05-29 Thread Zhang, Lisheng
Hi,
 
I recently upgraded solr from 3.6.1 to 4.3, it works well, but I noticed that 
after finishing
indexing 
 
write.lock
 
is NOT removed. Later if I index again it still works OK. Only after I shutdown 
Tomcat 
then write.lock is removed. This behavior caused some problem like I could not 
use luke
to observe indexed data.
 
I did not see any error/warning messages.
 
Is this the designed behavior? Can I have the old behavior (after commit 
write.lock is
removed) through configuration?
 
Thanks very much for helps, Lisheng


Solr 4.1.0 index leaving write.lock file

2013-02-01 Thread dm_tim
Howdy,
I've been using Solr 4.1.0 for a little while now and I just noticed that
when I index any core I have, the write.lock file doesn't go away until I
stop the server where solr is running. The data I'm indexing is fairly small
(16k rows in a db) so it shouldn't take much time at all though I have
waited upwards of 15 minutes for the lock files to clear. Is there something
I'm missing here?

Regards,

Tim



--
View this message in context: 
http://lucene.472066.n3.nabble.com/Solr-4-1-0-index-leaving-write-lock-file-tp4038046.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Solr 4.1.0 index leaving write.lock file

2013-02-01 Thread Yonik Seeley
On Fri, Feb 1, 2013 at 5:41 PM, dm_tim dm_...@yahoo.com wrote:
 I've been using Solr 4.1.0 for a little while now and I just noticed that
 when I index any core I have the write.lock file doesn't go away until I
 stop the server where solr is running.

Sounds like it's working as it should.  The write lock is just to keep
another IndexWriter from opening that index.
Solr keeps a single IndexWriter open (even after a commit) for better
performance.

-Yonik
http://lucidworks.com
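
For illustration, a bare-bones Lucene 4.x sketch of what that single-writer policy protects
against (the version constant and path are assumptions, not taken from this thread):

import java.io.File;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.util.Version;

public class WriteLockDemo {
    public static void main(String[] args) throws Exception {
        FSDirectory dir = FSDirectory.open(new File("/tmp/lockdemo"));
        IndexWriter first = new IndexWriter(dir,
            new IndexWriterConfig(Version.LUCENE_41, new StandardAnalyzer(Version.LUCENE_41)));
        // write.lock now exists in /tmp/lockdemo and stays there until first.close()

        // a second writer on the same directory fails with LockObtainFailedException
        IndexWriter second = new IndexWriter(dir,
            new IndexWriterConfig(Version.LUCENE_41, new StandardAnalyzer(Version.LUCENE_41)));
    }
}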


Re: Solr 4.1.0 index leaving write.lock file

2013-02-01 Thread dm_tim
Well that makes sense. The problem is that I am working in both Solr and
Lucene directly. I have some indexes that work great in Solr and now I want
to do the same thing in Java using the Lucene libs. So I'm writing to the
same index dir. I do testing by creating an index in Solr, look at it, and
then attempt to recreate it using Lucene. However the lock file was killing
me. Is there any way around this?

T



--
View this message in context: 
http://lucene.472066.n3.nabble.com/Solr-4-1-0-index-leaving-write-lock-file-tp4038046p4038055.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Solr 4.1.0 index leaving write.lock file

2013-02-01 Thread Mark Miller
You can use 'none' for the lock type in solrconfig.xml.

You risk corruption if two IW's try to modify the index at once though.

- Mark

On Feb 1, 2013, at 6:56 PM, dm_tim dm_...@yahoo.com wrote:

 Well that makes sense. The problem is that I am working in both Solr and
 Lucene directly. I have some indexes that work great in Solr and now I want
 to do the same thing in Java using the Lucene libs. So I'm writing to the
 same index dir. I do testing by creating an index in Solr, look at it, and
 then attempt to recreate it using Lucene. However the lock file was killing
 me. Is there any way around this?
 
 T
 
 
 
 --
 View this message in context: 
 http://lucene.472066.n3.nabble.com/Solr-4-1-0-index-leaving-write-lock-file-tp4038046p4038055.html
 Sent from the Solr - User mailing list archive at Nabble.com.



Re: Solr 4.1.0 index leaving write.lock file

2013-02-01 Thread dm_tim
Cool. I can use that setting while testing then set it back when I'm just
running Lucene. Many thanks folks!

Regards,

Tim



--
View this message in context: 
http://lucene.472066.n3.nabble.com/Solr-4-1-0-index-leaving-write-lock-file-tp4038046p4038060.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: write.lock

2012-06-21 Thread Dmitry Kan
Hi,

We are running exactly same solr version and have these issues relatively
frequently. The main cause in our case has usually been the out of memory
exceptions, as some of our shards are pretty fat. Allocating more RAM
usually helps for a while. The lock file needs to be manually removed
still, unfortunately.

There are also sometimes commit collisions, and we get max warming
searchers exceeded exceptions, but haven't yet figured out, if that may
cause the locking as well.

-- Dmitry

On Wed, Jun 20, 2012 at 7:45 PM, Christopher Gross cogr...@gmail.comwrote:

 I'm running Solr 3.4.  The past 2 months I've been getting a lot of
 write.lock errors.  I switched to the simple lockType (and made it
 clear the lock on restart), but my index is still locking up a few
 times a week.

 I can't seem to determine what is causing the locks -- does anyone out
 there have any ideas/experience as to what is causing the locks, and
 what config changes that I can make in order to prevent the lock?

 Any help would be very appreciated!

 -- Chris




-- 
Regards,

Dmitry Kan


write.lock

2012-06-20 Thread Christopher Gross
I'm running Solr 3.4.  The past 2 months I've been getting a lot of
write.lock errors.  I switched to the simple lockType (and made it
clear the lock on restart), but my index is still locking up a few
times a week.

I can't seem to determine what is causing the locks -- does anyone out
there have any ideas/experience as to what is causing the locks, and
what config changes that I can make in order to prevent the lock?

Any help would be very appreciated!

-- Chris


Re: removing write.lock file in solr after indexing

2012-01-31 Thread Erick Erickson
On Mon, Jan 30, 2012 at 2:42 AM, Shyam Bhaskaran
shyam.bhaska...@synopsys.com wrote:
 Hi,

 We are using Solr 4.0 and after indexing every time it is observed that the 
 write.lock remains without getting cleared and for the next indexing we have 
 to delete the file to get the indexing process running.

 We use SolrServer for our indexing and I do not see any  methods to close or 
 clear the indexes on completion of indexing.


 I have seen that adding the below lines into solrconfig.xml file avoids the 
 issue of physically removing the write.lock file when doing indexing.



 <indexDefaults>

   <lockType>simple</lockType>

   <unlockOnStartup>true</unlockOnStartup>

 </indexDefaults>


 But I am hesitant in adding this directive, as it might not be a good idea to 
 set this directive in production as it would defeat the purpose of locking 
 the index while another process writes into it.

 Let me know if we can do this programmatically, is there something like 
 close() which would remove the write.lock file after completion of indexing 
 using SolrServer?

 Thanks
 Shyam


Re: removing write.lock file in solr after indexing

2012-01-31 Thread Erick Erickson
Oops, fat fingers... Anyway, this is surprising. Can you provide
more details on how you do your indexing?

Best
Erick

On Tue, Jan 31, 2012 at 8:59 AM, Erick Erickson erickerick...@gmail.com wrote:
 On Mon, Jan 30, 2012 at 2:42 AM, Shyam Bhaskaran
 shyam.bhaska...@synopsys.com wrote:
 Hi,

 We are using Solr 4.0 and after indexing every time it is observed that the 
 write.lock remains without getting cleared and for the next indexing we have 
 to delete the file to get the indexing process running.

 We use SolrServer for our indexing and I do not see any  methods to close or 
 clear the indexes on completion of indexing.


 I have seen that adding the below lines into solrconfig.xml file avoids the 
 issue of physically removing the write.lock file when doing indexing.



  <indexDefaults>

    <lockType>simple</lockType>

    <unlockOnStartup>true</unlockOnStartup>

  </indexDefaults>


 But I am hesitant in adding this directive, as it might not be a good idea 
 to set this directive in production as it would defeat the purpose of 
 locking the index while another process writes into it.

 Let me know if we can do this programmatically, is there something like 
 close() which would remove the write.lock file after completion of indexing 
 using SolrServer?

 Thanks
 Shyam


RE: removing write.lock file in solr after indexing

2012-01-31 Thread Shyam Bhaskaran
Hi Erick,


Below is the sample flow.


String solrHome = "/opt/solr/home";

File solrXml = new File( solrHome, "solr.xml" );

container = new CoreContainer();

container.load(solrHome, solrXml);

SolrServer solr = new EmbeddedSolrServer(container, "core1");

solr.deleteByQuery("*:*");

SolrInputDocument doc1 = new SolrInputDocument();

doc1.addField( "id", "id1", 1.0f );

doc1.addField( "name", "doc1", 1.0f );

Collection<SolrInputDocument> docs = new ArrayList<SolrInputDocument>();

docs.add( doc1 );

solr.commit();

SolrCore curCore = container.getCore("core1");

curCore.close();



I have also seen that the EmbeddedSolrServer process is not terminating after 
completion of the indexing process; can this be a reason? But even after manual 
termination of the process, the 'write.lock' file stays in the index directory.


-Shyam

-Original Message-
From: Erick Erickson [mailto:erickerick...@gmail.com] 
Sent: Tuesday, January 31, 2012 7:30 PM
To: solr-user@lucene.apache.org
Subject: Re: removing write.lock file in solr after indexing

Oops, fat fingers... Anyway, this is surprising. Can you provide
more details on how you do your indexing?

Best
Erick

On Tue, Jan 31, 2012 at 8:59 AM, Erick Erickson erickerick...@gmail.com wrote:
 On Mon, Jan 30, 2012 at 2:42 AM, Shyam Bhaskaran
 shyam.bhaska...@synopsys.com wrote:
 Hi,

 We are using Solr 4.0 and after indexing every time it is observed that the 
 write.lock remains without getting cleared and for the next indexing we have 
 to delete the file to get the indexing process running.

 We use SolrServer for our indexing and I do not see any  methods to close or 
 clear the indexes on completion of indexing.


 I have seen that adding the below lines into solrconfig.xml file avoids the 
 issue of physically removing the write.lock file when doing indexing.



  <indexDefaults>

    <lockType>simple</lockType>

    <unlockOnStartup>true</unlockOnStartup>

  </indexDefaults>


 But I am hesitant in adding this directive, as it might not be a good idea 
 to set this directive in production as it would defeat the purpose of 
 locking the index while another process writes into it.

 Let me know if we can do this programmatically, is there something like 
 close() which would remove the write.lock file after completion of indexing 
 using SolrServer?

 Thanks
 Shyam


Re: Indexing leave behind write.lock file.

2012-01-31 Thread Koorosh Vakhshoori
Here is how I got SolrJ to delete the write.lock file. I switched to the
CoreContainer's remove() method. So the new code is:

...
SolrCore curCore = container.remove("core1");
curCore.close();

Now my understanding for why it is working. Based on Solr source code, the
issue had to do with the core's reference count not ending up at zero when
the close() method is called. The getCore() method increments the reference
count while remove() doesn't. The close() method decrements the count
first, and only when the count reaches zero does it unlock the core, i.e.
remove the write.lock.
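
A minimal sketch of the two patterns, assuming the same container/core names as the
snippet earlier in the thread:

// getCore() bumps the reference count, so each call needs its own close():
SolrCore core = container.getCore("core1");
try {
    // ... use the core ...
} finally {
    core.close();            // balances the getCore() above
}

// to shut the core down for good, detach it without bumping the count:
SolrCore removed = container.remove("core1");
removed.close();             // count reaches zero, write.lock is released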

Regards,

Koorosh 

--
View this message in context: 
http://lucene.472066.n3.nabble.com/Indexing-leave-behind-write-lock-file-tp3701915p3705554.html
Sent from the Solr - User mailing list archive at Nabble.com.


RE: removing write.lock file in solr after indexing

2012-01-31 Thread Shyam Bhaskaran
Hi Erick,

I was able to resolve the issue with 'write.lock' files.

Using container.remove("core1") or using container.shutdown() is helping to 
remove the 'write.lock' files.
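
For reference, the tail of the indexing flow then looks something like this (a sketch in the
spirit of the snippets earlier in the thread, not the exact production code):

solr.add(docs);
solr.commit();
container.shutdown();   // closes every registered core, which releases write.lock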

-Shyam



Indexing leave behind write.lock file.

2012-01-30 Thread Koorosh Vakhshoori
Hi,
 I am using SolrJ to reindex a core in a multiCore setup. The general
flow of my program is as follows (pseudo code):

String solrHome = "/opt/solr/home";
File solrXml = new File( solrHome, "solr.xml" );
container = new CoreContainer();
container.load(solrHome, solrXml);
SolrServer solr = new EmbeddedSolrServer(container, "core1");
solr.deleteByQuery("*:*");
SolrInputDocument doc1 = new SolrInputDocument();
doc1.addField( "id", "id1", 1.0f );
doc1.addField( "name", "doc1", 1.0f );
Collection<SolrInputDocument> docs = new ArrayList<SolrInputDocument>();
docs.add( doc1 );
solr.commit();
SolrCore curCore = container.getCore("core1");
curCore.close();

I thought for sure that by calling close(), I would also be releasing all
associated resources, including the lock on the core; that is,
I would be getting rid of the write.lock file.

I am using Solr 4.0 code from the development trunk which is about a month old.

Any suggestion here appreciated.

Regards,

Koorosh


removing write.lock file in solr after indexing

2012-01-29 Thread Shyam Bhaskaran
Hi,

We are using Solr 4.0, and after every indexing run it is observed that the 
write.lock remains without getting cleared; before the next indexing run we have to 
delete the file to get the indexing process running.

We use SolrServer for our indexing and I do not see any  methods to close or 
clear the indexes on completion of indexing.


I have seen that adding the below lines into the solrconfig.xml file avoids the 
issue of having to physically remove the write.lock file when doing indexing.



<indexDefaults>

  <lockType>simple</lockType>

  <unlockOnStartup>true</unlockOnStartup>

</indexDefaults>


But I am hesitant in adding this directive, as it might not be a good idea to 
set this directive in production as it would defeat the purpose of locking the 
index while another process writes into it.

Let me know if we can do this programmatically, is there something like close() 
which would remove the write.lock file after completion of indexing using 
SolrServer?

Thanks
Shyam


Re: org.apache.lucene.store.LockObtainFailedException: Lock obtain timed out : SingleInstanceLock: write.lock

2010-09-15 Thread Dennis Gearon
I saw something about having separate reader vs writer to an index. The email 
said that the reader had to do occasional (empty) commits to keep the cache 
warm and for another reason. Is this relevant?
Dennis Gearon

Signature Warning

EARTH has a Right To Life,
  otherwise we all die.

Read 'Hot, Flat, and Crowded'
Laugh at http://www.yert.com/film.php


--- On Tue, 9/14/10, Bharat Jain bharat.j...@gmail.com wrote:

 From: Bharat Jain bharat.j...@gmail.com
 Subject: Re: org.apache.lucene.store.LockObtainFailedException: Lock obtain 
 timed out : SingleInstanceLock: write.lock
 To: solr-user@lucene.apache.org
 Date: Tuesday, September 14, 2010, 7:26 PM
 Thanks Mark for taking time to reply.
 What else could cause this issue to
 happen so frequently. We have a master/slave configuration
 and only one
 update server that writes to index. We have plenty of disk
 space available.
 
 
 Thanks
 Bharat Jain
 
 
 On Fri, Sep 10, 2010 at 8:19 AM, Mark Miller markrmil...@gmail.com
 wrote:
 
  iwAccess is the reader lock to iwCommit's writer lock
 - so the scenario
  you bring up should be protected - the reader lock is
 used in only one
  place in the class (addDoc), while every other call to
 openWriter is
  protected by the writer lock.
 
  I'd worry more about the case where two add documents
 hit at the same
  time when the indexWriter is null - then, because both
 calls to
  openWriter are protected with the reader lock, you
 could race - but
  that's what the synchronized(this) protects against
 ;)
 
  It all looks good to me.
 
  - Mark
 
  On 9/9/10 4:17 AM, Bharat Jain wrote:
   Hi,
  
   We are using SOLR 1.3 and getting this error
 often. There is only one
   instance that index the data. I did some analysis
 which I have put below
  and
   scenario when this error can happen. Can you guys
 please validate the
  issue?
   thanks a lot in advance.
  
   SEVERE:
 org.apache.lucene.store.LockObtainFailedException: Lock
 obtain
  timed
   out
   : SingleInstanceLock: write.lock
           at
 org.apache.lucene.store.Lock.obtain(Lock.java:85)
           at
 
 org.apache.lucene.index.IndexWriter.init(IndexWriter.java:1140)
           at
 
 org.apache.lucene.index.IndexWriter.init(IndexWriter.java:938)
           at
  
 org.apache.solr.update.SolrIndexWriter.init(SolrIndexWriter.java:116)
           at
  
 org.apache.solr.update.UpdateHandler.createMainIndexWriter(UpdateHand
   ler.java:122)
           at
  
 org.apache.solr.update.DirectUpdateHandler2.openWriter(DirectUpdateHa
   ndler2.java:167)
           at
  
 org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandle
   r2.java:221)
  
   I think this error can happen in the following
 scneario
  
   1. Thread T1 enters the commit method and its
 actually an optimize
  request.
       a. T1 gets the iwCommit
 lock
       b. T1 enters the
 openWriter() method
       c. T1 does the (writer ==
 null check) - time x1
  
   2. Thread T2 enters the addDoc method
       a. T2 gets iwAccess lock
       b. T2 gets the mutex
 this
       c. T2 enters operWriter
 method
       d. T2 does the (writer ==
 null check) - time x1
  
   Now after the 1.c is done thread yields and 2.d
 gets execution and it
  also
   sees writer as null and so now both threads will
 try to create the
   indexwriter and it will fail. I have pasted the
 relevant portion of code
   here.
  
  
     // iwCommit protects internal
 data and open/close of the IndexWriter
  and
     // is a mutex. Any use of the
 index writer should be protected by
   iwAccess,
     // which admits multiple
 simultaneous acquisitions.  iwAccess is
     // mutually-exclusive with the
 iwCommit lock.
     protected final Lock iwAccess,
 iwCommit;
  
  
  
   // must only be called when iwCommit lock held
     protected void openWriter()
 throws IOException {
       if (writer==null) {
         writer =
 createMainIndexWriter(DirectUpdateHandler2, false);
       }
     }
  
   addDoc(...) {
   ...
  
   iwAccess.lock();
       try {
  
         // We can't use
 iwCommit to protect internal data here, since it
  would
         // block other
 addDoc calls.  Hence, we synchronize to protect
   internal
         // state. 
 This is safe as all other state-changing operations are
         // protected with
 iwCommit (which iwAccess excludes from this
  block).
         synchronized
 (this) {
           // adding
 document -- prep writer
       
    closeSearcher();
       
    openWriter();
       
    tracker.addedDocument();
         } // end
 synchronized block
  
         // this is the
 only unsynchronized code in the iwAccess block,
  which
         // should account
 for most of the time
  
   }
  
   commit() {
   ...
  
   iwCommit.lock();
       try {
         log.info(start
 +cmd);
  
         if (cmd.optimize)
 {
       
    closeSearcher();
       
    openWriter();
       
    writer.optimize(cmd.maxOptimizeSegments);
         }
  
         closeSearcher();
         closeWriter

Re: org.apache.lucene.store.LockObtainFailedException: Lock obtain timed out : SingleInstanceLock: write.lock

2010-09-14 Thread Bharat Jain
Thanks Mark for taking time to reply. What else could cause this issue to
happen so frequently. We have a master/slave configuration and only one
update server that writes to index. We have plenty of disk space available.


Thanks
Bharat Jain


On Fri, Sep 10, 2010 at 8:19 AM, Mark Miller markrmil...@gmail.com wrote:

 iwAccess is the reader lock to iwCommit's writer lock - so the scenario
 you bring up should be protected - the reader lock is used in only one
 place in the class (addDoc), while every other call to openWriter is
 protected by the writer lock.

 I'd worry more about the case where two add documents hit at the same
 time when the indexWriter is null - then, because both calls to
 openWriter are protected with the reader lock, you could race - but
 that's what the synchronized(this) protects against ;)

 It all looks good to me.

 - Mark

 On 9/9/10 4:17 AM, Bharat Jain wrote:
  Hi,
 
  We are using SOLR 1.3 and getting this error often. There is only one
  instance that index the data. I did some analysis which I have put below
 and
  scenario when this error can happen. Can you guys please validate the
 issue?
  thanks a lot in advance.
 
  SEVERE: org.apache.lucene.store.LockObtainFailedException: Lock obtain
 timed
  out
  : SingleInstanceLock: write.lock
  at org.apache.lucene.store.Lock.obtain(Lock.java:85)
  at
 org.apache.lucene.index.IndexWriter.init(IndexWriter.java:1140)
  at
 org.apache.lucene.index.IndexWriter.init(IndexWriter.java:938)
  at
  org.apache.solr.update.SolrIndexWriter.init(SolrIndexWriter.java:116)
  at
  org.apache.solr.update.UpdateHandler.createMainIndexWriter(UpdateHand
  ler.java:122)
  at
  org.apache.solr.update.DirectUpdateHandler2.openWriter(DirectUpdateHa
  ndler2.java:167)
  at
  org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandle
  r2.java:221)
 
  I think this error can happen in the following scneario
 
  1. Thread T1 enters the commit method and its actually an optimize
 request.
  a. T1 gets the iwCommit lock
  b. T1 enters the openWriter() method
  c. T1 does the (writer == null check) - time x1
 
  2. Thread T2 enters the addDoc method
  a. T2 gets iwAccess lock
  b. T2 gets the mutex this
  c. T2 enters operWriter method
  d. T2 does the (writer == null check) - time x1
 
  Now after the 1.c is done thread yields and 2.d gets execution and it
 also
  sees writer as null and so now both threads will try to create the
  indexwriter and it will fail. I have pasted the relevant portion of code
  here.
 
 
// iwCommit protects internal data and open/close of the IndexWriter
 and
// is a mutex. Any use of the index writer should be protected by
  iwAccess,
// which admits multiple simultaneous acquisitions.  iwAccess is
// mutually-exclusive with the iwCommit lock.
protected final Lock iwAccess, iwCommit;
 
 
 
  // must only be called when iwCommit lock held
protected void openWriter() throws IOException {
  if (writer==null) {
writer = createMainIndexWriter(DirectUpdateHandler2, false);
  }
}
 
  addDoc(...) {
  ...
 
  iwAccess.lock();
  try {
 
// We can't use iwCommit to protect internal data here, since it
 would
// block other addDoc calls.  Hence, we synchronize to protect
  internal
// state.  This is safe as all other state-changing operations are
// protected with iwCommit (which iwAccess excludes from this
 block).
synchronized (this) {
  // adding document -- prep writer
  closeSearcher();
  openWriter();
  tracker.addedDocument();
} // end synchronized block
 
// this is the only unsynchronized code in the iwAccess block,
 which
// should account for most of the time
 
  }
 
  commit() {
  ...
 
  iwCommit.lock();
  try {
log.info(start +cmd);
 
if (cmd.optimize) {
  closeSearcher();
  openWriter();
  writer.optimize(cmd.maxOptimizeSegments);
}
 
closeSearcher();
closeWriter();
 
callPostCommitCallbacks();
  }
 
  Thanks
  Bharat Jain
 




org.apache.lucene.store.LockObtainFailedException: Lock obtain timed out : SingleInstanceLock: write.lock

2010-09-09 Thread Bharat Jain
Hi,

We are using SOLR 1.3 and getting this error often. There is only one
instance that indexes the data. I did some analysis, which I have put below, and a
scenario when this error can happen. Can you guys please validate the issue?
Thanks a lot in advance.

SEVERE: org.apache.lucene.store.LockObtainFailedException: Lock obtain timed
out
: SingleInstanceLock: write.lock
at org.apache.lucene.store.Lock.obtain(Lock.java:85)
at org.apache.lucene.index.IndexWriter.init(IndexWriter.java:1140)
at org.apache.lucene.index.IndexWriter.init(IndexWriter.java:938)
at
org.apache.solr.update.SolrIndexWriter.init(SolrIndexWriter.java:116)
at
org.apache.solr.update.UpdateHandler.createMainIndexWriter(UpdateHand
ler.java:122)
at
org.apache.solr.update.DirectUpdateHandler2.openWriter(DirectUpdateHa
ndler2.java:167)
at
org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandle
r2.java:221)

I think this error can happen in the following scenario

1. Thread T1 enters the commit method and its actually an optimize request.
a. T1 gets the iwCommit lock
b. T1 enters the openWriter() method
c. T1 does the (writer == null check) - time x1

2. Thread T2 enters the addDoc method
a. T2 gets iwAccess lock
b. T2 gets the mutex this
c. T2 enters openWriter method
d. T2 does the (writer == null check) - time x1

Now after the 1.c is done thread yields and 2.d gets execution and it also
sees writer as null and so now both threads will try to create the
indexwriter and it will fail. I have pasted the relevant portion of code
here.


  // iwCommit protects internal data and open/close of the IndexWriter and
  // is a mutex. Any use of the index writer should be protected by
iwAccess,
  // which admits multiple simultaneous acquisitions.  iwAccess is
  // mutually-exclusive with the iwCommit lock.
  protected final Lock iwAccess, iwCommit;



// must only be called when iwCommit lock held
  protected void openWriter() throws IOException {
if (writer==null) {
  writer = createMainIndexWriter("DirectUpdateHandler2", false);
}
  }

addDoc(...) {
...

iwAccess.lock();
try {

  // We can't use iwCommit to protect internal data here, since it would
  // block other addDoc calls.  Hence, we synchronize to protect
internal
  // state.  This is safe as all other state-changing operations are
  // protected with iwCommit (which iwAccess excludes from this block).
  synchronized (this) {
// adding document -- prep writer
closeSearcher();
openWriter();
tracker.addedDocument();
  } // end synchronized block

  // this is the only unsynchronized code in the iwAccess block, which
  // should account for most of the time

}

commit() {
...

iwCommit.lock();
try {
  log.info("start "+cmd);

  if (cmd.optimize) {
closeSearcher();
openWriter();
writer.optimize(cmd.maxOptimizeSegments);
  }

  closeSearcher();
  closeWriter();

  callPostCommitCallbacks();
}

Thanks
Bharat Jain


Re: org.apache.lucene.store.LockObtainFailedException: Lock obtain timed out : SingleInstanceLock: write.lock

2010-09-09 Thread Lance Norskog

Hello-

There were a few bugs in this area that are fixed in Solr 1.4. There are 
many other bugs which were also fixed. We suggest everyone upgrade to 1.4.


There are different locking managers, and you may be able to use a 
different one. Also, if this is over NFS that can cause further problems.


Lance

Bharat Jain wrote:

Hi,

We are using SOLR 1.3 and getting this error often. There is only one
instance that index the data. I did some analysis which I have put below and
scenario when this error can happen. Can you guys please validate the issue?
thanks a lot in advance.

SEVERE: org.apache.lucene.store.LockObtainFailedException: Lock obtain timed
out
: SingleInstanceLock: write.lock
 at org.apache.lucene.store.Lock.obtain(Lock.java:85)
 at org.apache.lucene.index.IndexWriter.init(IndexWriter.java:1140)
 at org.apache.lucene.index.IndexWriter.init(IndexWriter.java:938)
 at
org.apache.solr.update.SolrIndexWriter.init(SolrIndexWriter.java:116)
 at
org.apache.solr.update.UpdateHandler.createMainIndexWriter(UpdateHand
ler.java:122)
 at
org.apache.solr.update.DirectUpdateHandler2.openWriter(DirectUpdateHa
ndler2.java:167)
 at
org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandle
r2.java:221)

I think this error can happen in the following scneario

1. Thread T1 enters the commit method and its actually an optimize request.
 a. T1 gets the iwCommit lock
 b. T1 enters the openWriter() method
 c. T1 does the (writer == null check) -  time x1

2. Thread T2 enters the addDoc method
 a. T2 gets iwAccess lock
 b. T2 gets the mutex this
 c. T2 enters operWriter method
 d. T2 does the (writer == null check) -  time x1

Now after the 1.c is done thread yields and 2.d gets execution and it also
sees writer as null and so now both threads will try to create the
indexwriter and it will fail. I have pasted the relevant portion of code
here.


   // iwCommit protects internal data and open/close of the IndexWriter and
   // is a mutex. Any use of the index writer should be protected by
iwAccess,
   // which admits multiple simultaneous acquisitions.  iwAccess is
   // mutually-exclusive with the iwCommit lock.
   protected final Lock iwAccess, iwCommit;



// must only be called when iwCommit lock held
   protected void openWriter() throws IOException {
 if (writer==null) {
   writer = createMainIndexWriter(DirectUpdateHandler2, false);
 }
   }

addDoc(...) {
...

iwAccess.lock();
 try {

   // We can't use iwCommit to protect internal data here, since it would
   // block other addDoc calls.  Hence, we synchronize to protect
internal
   // state.  This is safe as all other state-changing operations are
   // protected with iwCommit (which iwAccess excludes from this block).
   synchronized (this) {
 // adding document -- prep writer
 closeSearcher();
 openWriter();
 tracker.addedDocument();
   } // end synchronized block

   // this is the only unsynchronized code in the iwAccess block, which
   // should account for most of the time

}

commit() {
...

iwCommit.lock();
 try {
   log.info(start +cmd);

   if (cmd.optimize) {
 closeSearcher();
 openWriter();
 writer.optimize(cmd.maxOptimizeSegments);
   }

   closeSearcher();
   closeWriter();

   callPostCommitCallbacks();
}

Thanks
Bharat Jain

   


Re: org.apache.lucene.store.LockObtainFailedException: Lock obtain timed out : SingleInstanceLock: write.lock

2010-09-09 Thread Mark Miller
iwAccess is the reader lock to iwCommit's writer lock - so the scenario
you bring up should be protected - the reader lock is used in only one
place in the class (addDoc), while every other call to openWriter is
protected by the writer lock.

I'd worry more about the case where two add documents hit at the same
time when the indexWriter is null - then, because both calls to
openWriter are protected with the reader lock, you could race - but
that's what the synchronized(this) protects against ;)

It all looks good to me.
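
For reference, the pattern being described boils down to something like the sketch below
(names are illustrative; this is not the actual DirectUpdateHandler2 source):

import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

class WriterGuard {
    private final ReadWriteLock rw = new ReentrantReadWriteLock();
    private final Lock iwAccess = rw.readLock();   // shared: many addDoc() calls may hold it
    private final Lock iwCommit = rw.writeLock();  // exclusive: open/close/commit/optimize
    private Object writer;                         // stands in for the IndexWriter

    void addDoc() {
        iwAccess.lock();
        try {
            synchronized (this) {                  // guards the lazy writer == null check
                if (writer == null) writer = new Object();
            }
            // ... add the document ...
        } finally {
            iwAccess.unlock();
        }
    }

    void commit() {
        iwCommit.lock();                           // excludes all addDoc() calls
        try {
            if (writer == null) writer = new Object();
            // ... commit / optimize / close ...
        } finally {
            iwCommit.unlock();
        }
    }
}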

- Mark

On 9/9/10 4:17 AM, Bharat Jain wrote:
 Hi,
 
 We are using SOLR 1.3 and getting this error often. There is only one
 instance that index the data. I did some analysis which I have put below and
 scenario when this error can happen. Can you guys please validate the issue?
 thanks a lot in advance.
 
 SEVERE: org.apache.lucene.store.LockObtainFailedException: Lock obtain timed
 out
 : SingleInstanceLock: write.lock
 at org.apache.lucene.store.Lock.obtain(Lock.java:85)
 at org.apache.lucene.index.IndexWriter.init(IndexWriter.java:1140)
 at org.apache.lucene.index.IndexWriter.init(IndexWriter.java:938)
 at
 org.apache.solr.update.SolrIndexWriter.init(SolrIndexWriter.java:116)
 at
 org.apache.solr.update.UpdateHandler.createMainIndexWriter(UpdateHand
 ler.java:122)
 at
 org.apache.solr.update.DirectUpdateHandler2.openWriter(DirectUpdateHa
 ndler2.java:167)
 at
 org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandle
 r2.java:221)
 
 I think this error can happen in the following scneario
 
 1. Thread T1 enters the commit method and its actually an optimize request.
 a. T1 gets the iwCommit lock
 b. T1 enters the openWriter() method
 c. T1 does the (writer == null check) - time x1
 
 2. Thread T2 enters the addDoc method
 a. T2 gets iwAccess lock
 b. T2 gets the mutex this
 c. T2 enters operWriter method
 d. T2 does the (writer == null check) - time x1
 
 Now after the 1.c is done thread yields and 2.d gets execution and it also
 sees writer as null and so now both threads will try to create the
 indexwriter and it will fail. I have pasted the relevant portion of code
 here.
 
 
   // iwCommit protects internal data and open/close of the IndexWriter and
   // is a mutex. Any use of the index writer should be protected by
 iwAccess,
   // which admits multiple simultaneous acquisitions.  iwAccess is
   // mutually-exclusive with the iwCommit lock.
   protected final Lock iwAccess, iwCommit;
 
 
 
 // must only be called when iwCommit lock held
   protected void openWriter() throws IOException {
 if (writer==null) {
   writer = createMainIndexWriter(DirectUpdateHandler2, false);
 }
   }
 
 addDoc(...) {
 ...
 
 iwAccess.lock();
 try {
 
   // We can't use iwCommit to protect internal data here, since it would
   // block other addDoc calls.  Hence, we synchronize to protect
 internal
   // state.  This is safe as all other state-changing operations are
   // protected with iwCommit (which iwAccess excludes from this block).
   synchronized (this) {
 // adding document -- prep writer
 closeSearcher();
 openWriter();
 tracker.addedDocument();
   } // end synchronized block
 
   // this is the only unsynchronized code in the iwAccess block, which
   // should account for most of the time
 
 }
 
 commit() {
 ...
 
 iwCommit.lock();
 try {
   log.info(start +cmd);
 
   if (cmd.optimize) {
 closeSearcher();
 openWriter();
 writer.optimize(cmd.maxOptimizeSegments);
   }
 
   closeSearcher();
   closeWriter();
 
   callPostCommitCallbacks();
 }
 
 Thanks
 Bharat Jain
 



Re: replication of lucene-write.lock file

2009-05-15 Thread Noble Paul നോബിള്‍ नोब्ळ्
the replication relies on lucene API to know what are the files
associated with an index version. If it returns the lock file also it
is replicated too.

I guess we must ignore the .lock file if it is returned in the list of files.
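
Something along these lines on the master side would do it (an illustrative sketch, not the
actual patch):

import java.util.Iterator;
import java.util.List;
import org.apache.lucene.index.IndexWriter;

class ReplicationFileFilter {
    // drop the lock file from the list of files reported for a commit point so it
    // is never shipped to slaves; IndexWriter.WRITE_LOCK_NAME is "write.lock"
    static void stripLockFile(List<String> commitFiles) {
        for (Iterator<String> it = commitFiles.iterator(); it.hasNext(); ) {
            if (IndexWriter.WRITE_LOCK_NAME.equals(it.next())) {
                it.remove();
            }
        }
    }
}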

you can raise an issue and we can fix it.
--Noble

On Fri, May 15, 2009 at 12:38 AM, Bryan Talbot btal...@aeriagames.com wrote:

 When using solr 1.4 replication, I see that the lucene-write.lock file is
 being replicated to slaves.  I'm importing data from a db every 5 minutes
 using cron to trigger a DIH delta-import.  Replication polls every 60
 seconds and the master is configured to take a snapshot (replicateAfter)
 commit.

 Why should the lock file be replicated to slaves?

 The lock file isn't stale on the master and is absent unless the
 delta-import is in process.  I've not tried it yet, but with the lock file
 replicated, it seems like promotion of a slave to a master in a failure
 recovery scenario requires the manual removal of the lock file.



 -Bryan








-- 
-
Noble Paul | Principal Engineer| AOL | http://aol.com


Re: replication of lucene-write.lock file

2009-05-15 Thread Bryan Talbot


https://issues.apache.org/jira/browse/SOLR-1170


-Bryan




On May 15, 2009, at May 15, 12:24 AM, Noble Paul നോബിള്‍  
नोब्ळ् wrote:



the replication relies on lucene API to know what are the files
associated with an index version. If it returns the lock file also it
is replicated too.

I guess we must ignore the .lock file if it is returned in the list  
of files.


you can raise an issue and we can fix it.
--Noble

On Fri, May 15, 2009 at 12:38 AM, Bryan Talbot  
btal...@aeriagames.com wrote:


When using solr 1.4 replication, I see that the lucene-write.lock  
file is
being replicated to slaves.  I'm importing data from a db every 5  
minutes

using cron to trigger a DIH delta-import.  Replication polls every 60
seconds and the master is configured to take a snapshot  
(replicateAfter)

commit.

Why should the lock file be replicated to slaves?

The lock file isn't stale on the master and is absent unless the
delta-import is in process.  I've not tried it yet, but with the  
lock file
replicated, it seems like promotion of a slave to a master in a  
failure

recovery scenario requires the manual removal of the lock file.



-Bryan









--
-
Noble Paul | Principal Engineer| AOL | http://aol.com




replication of lucene-write.lock file

2009-05-14 Thread Bryan Talbot


When using solr 1.4 replication, I see that the lucene-write.lock file  
is being replicated to slaves.  I'm importing data from a db every 5  
minutes using cron to trigger a DIH delta-import.  Replication polls  
every 60 seconds and the master is configured to take a snapshot  
(replicateAfter) commit.


Why should the lock file be replicated to slaves?

The lock file isn't stale on the master and is absent unless the delta- 
import is in process.  I've not tried it yet, but with the lock file  
replicated, it seems like promotion of a slave to a master in a  
failure recovery scenario requires the manual removal of the lock file.
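
If it comes to that, here is a minimal sketch of a hypothetical helper for that
manual step (assuming the default "write.lock" name inside the index directory,
and that no other process is writing to the index):

import java.io.File;

// Hypothetical helper: delete a leftover write.lock before promoting a
// slave to master. Only safe when nothing else is writing the index.
class StaleLockCleaner {
  static boolean clearLock(File indexDir) {
    File lock = new File(indexDir, "write.lock");
    return !lock.exists() || lock.delete();
  }
}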




-Bryan






Re: SingleInstanceLock: write.lock

2009-04-01 Thread Otis Gospodnetic

Hi,

Are you sure there really is/was enough free space?  Were you monitoring the 
disk space when this error happened?


Otis
--
Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch



- Original Message 
 From: davidb dav...@mate1inc.com
 To: solr-user@lucene.apache.org
 Sent: Tuesday, March 31, 2009 5:31:42 PM
 Subject: SingleInstanceLock: write.lock
 
 Hi,
 
 I am new to Solr and am having an issue with the following 
 SingleInstanceLock: 
 write.lock. We have solr 1.3 running under tomcat 1.6.0_11.  We have an index 
 of 
 users that are online at any given time (Usually around 4000 users).  The 
 records from solr are deleted and repopulated at around 30 second intervals 
 (we 
 would like this to be as fast as possible).  The server runs fine for a 
 period 
 of time, then we get the following:
 
 SEVERE: auto commit error...
 java.io.FileNotFoundException: /data/solr-oni/data/index/_3w.prx (No space left on device)
at java.io.RandomAccessFile.open(Native Method)
at java.io.RandomAccessFile.<init>(Unknown Source)
at 
 org.apache.lucene.store.FSDirectory$FSIndexOutput.<init>(FSDirectory.java:639)
at 
 org.apache.lucene.store.FSDirectory.createOutput(FSDirectory.java:442)
at 
 org.apache.lucene.index.FreqProxTermsWriter.flush(FreqProxTermsWriter.java:104)
at org.apache.lucene.index.TermsHash.flush(TermsHash.java:145)
at org.apache.lucene.index.DocInverter.flush(DocInverter.java:74)
at 
 org.apache.lucene.index.DocFieldConsumers.flush(DocFieldConsumers.java:75)
at 
 org.apache.lucene.index.DocFieldProcessor.flush(DocFieldProcessor.java:60)
at 
 org.apache.lucene.index.DocumentsWriter.flush(DocumentsWriter.java:574)
at org.apache.lucene.index.IndexWriter.doFlush(IndexWriter.java:3615)
at org.apache.lucene.index.IndexWriter.flush(IndexWriter.java:3524)
at 
 org.apache.lucene.index.IndexWriter.closeInternal(IndexWriter.java:1709)
at org.apache.lucene.index.IndexWriter.close(IndexWriter.java:1674)
at org.apache.lucene.index.IndexWriter.close(IndexWriter.java:1648)
at 
 org.apache.solr.update.SolrIndexWriter.close(SolrIndexWriter.java:153)
at 
 org.apache.solr.update.DirectUpdateHandler2.closeWriter(DirectUpdateHandler2.java:175)
at 
 org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:359)
at 
 org.apache.solr.update.DirectUpdateHandler2$CommitTracker.run(DirectUpdateHandler2.java:515)
at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
at java.util.concurrent.FutureTask$Sync.innerRun(Unknown Source)
at java.util.concurrent.FutureTask.run(Unknown Source)
at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(Unknown
  
 Source)
at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(Unknown
  
 Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown 
 Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
 26-Mar-2009 6:19:50 PM org.apache.solr.common.SolrException log
 SEVERE: org.apache.lucene.store.LockObtainFailedException: Lock obtain timed 
 out: SingleInstanceLock: write.lock
at org.apache.lucene.store.Lock.obtain(Lock.java:85)
at org.apache.lucene.index.IndexWriter.init(IndexWriter.java:1140)
at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:938)
at 
 org.apache.solr.update.SolrIndexWriter.<init>(SolrIndexWriter.java:116)
at 
 org.apache.solr.update.UpdateHandler.createMainIndexWriter(UpdateHandler.java:122)
at 
 org.apache.solr.update.DirectUpdateHandler2.openWriter(DirectUpdateHandler2.java:167)
at 
 org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:221)
at 
 org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:59)
at 
 com.pjaol.search.solr.update.LocalUpdaterProcessor.processAdd(LocalUpdateProcessorFactory.java:148)
at 
 org.apache.solr.handler.XmlUpdateRequestHandler.processUpdate(XmlUpdateRequestHandler.java:196)
at 
 org.apache.solr.handler.XmlUpdateRequestHandler.handleRequestBody(XmlUpdateRequestHandler.java:123)
at 
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:131)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1204)
at 
 org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:303)
at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:232)
at 
 org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
at 
 org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at 
 org.apache.catalina.core.StandardWrapperValve.invoke

SingleInstanceLock: write.lock

2009-03-31 Thread davidb

Hi,

I am new to Solr and am having an issue with the following 
SingleInstanceLock: write.lock. We have solr 1.3 running under tomcat 
1.6.0_11.  We have an index of users that are online at any given time 
(Usually around 4000 users).  The records from solr are deleted and 
repopulated at around 30 second intervals (we would like this to be as 
fast as possible).  The server runs fine for a period of time, then we 
get the following:


SEVERE: auto commit error...
java.io.FileNotFoundException: /data/solr-oni/data/index/_3w.prx (No 
space left on device)

   at java.io.RandomAccessFile.open(Native Method)
   at java.io.RandomAccessFile.init(Unknown Source)
   at 
org.apache.lucene.store.FSDirectory$FSIndexOutput.init(FSDirectory.java:639)
   at 
org.apache.lucene.store.FSDirectory.createOutput(FSDirectory.java:442)
   at 
org.apache.lucene.index.FreqProxTermsWriter.flush(FreqProxTermsWriter.java:104)

   at org.apache.lucene.index.TermsHash.flush(TermsHash.java:145)
   at org.apache.lucene.index.DocInverter.flush(DocInverter.java:74)
   at 
org.apache.lucene.index.DocFieldConsumers.flush(DocFieldConsumers.java:75)
   at 
org.apache.lucene.index.DocFieldProcessor.flush(DocFieldProcessor.java:60)
   at 
org.apache.lucene.index.DocumentsWriter.flush(DocumentsWriter.java:574)
   at 
org.apache.lucene.index.IndexWriter.doFlush(IndexWriter.java:3615)

   at org.apache.lucene.index.IndexWriter.flush(IndexWriter.java:3524)
   at 
org.apache.lucene.index.IndexWriter.closeInternal(IndexWriter.java:1709)

   at org.apache.lucene.index.IndexWriter.close(IndexWriter.java:1674)
   at org.apache.lucene.index.IndexWriter.close(IndexWriter.java:1648)
   at 
org.apache.solr.update.SolrIndexWriter.close(SolrIndexWriter.java:153)
   at 
org.apache.solr.update.DirectUpdateHandler2.closeWriter(DirectUpdateHandler2.java:175)
   at 
org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:359)
   at 
org.apache.solr.update.DirectUpdateHandler2$CommitTracker.run(DirectUpdateHandler2.java:515)
   at java.util.concurrent.Executors$RunnableAdapter.call(Unknown 
Source)

   at java.util.concurrent.FutureTask$Sync.innerRun(Unknown Source)
   at java.util.concurrent.FutureTask.run(Unknown Source)
   at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(Unknown 
Source)
   at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(Unknown 
Source)
   at 
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown Source)
   at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown 
Source)

   at java.lang.Thread.run(Unknown Source)
26-Mar-2009 6:19:50 PM org.apache.solr.common.SolrException log
SEVERE: org.apache.lucene.store.LockObtainFailedException: Lock obtain 
timed out: SingleInstanceLock: write.lock

   at org.apache.lucene.store.Lock.obtain(Lock.java:85)
   at org.apache.lucene.index.IndexWriter.init(IndexWriter.java:1140)
   at org.apache.lucene.index.IndexWriter.init(IndexWriter.java:938)
   at 
org.apache.solr.update.SolrIndexWriter.init(SolrIndexWriter.java:116)
   at 
org.apache.solr.update.UpdateHandler.createMainIndexWriter(UpdateHandler.java:122)
   at 
org.apache.solr.update.DirectUpdateHandler2.openWriter(DirectUpdateHandler2.java:167)
   at 
org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:221)
   at 
org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:59)
   at 
com.pjaol.search.solr.update.LocalUpdaterProcessor.processAdd(LocalUpdateProcessorFactory.java:148)
   at 
org.apache.solr.handler.XmlUpdateRequestHandler.processUpdate(XmlUpdateRequestHandler.java:196)
   at 
org.apache.solr.handler.XmlUpdateRequestHandler.handleRequestBody(XmlUpdateRequestHandler.java:123)
   at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:131)

   at org.apache.solr.core.SolrCore.execute(SolrCore.java:1204)
   at 
org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:303)
   at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:232)
   at 
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
   at 
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
   at 
org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
   at 
org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:175)
   at 
org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:128)
   at 
org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
   at 
org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109

clearing solr write.lock

2007-12-20 Thread Kasi Sankaralingam
I am running into a problem where residual lock files are left in the solr
data directory after a failed process. Can we programmatically/efficiently
remove this .lock file? Also, has anyone externalized the handling of lock
files (meaning keep the lock file, for example, in a database)?
Any plug-in available?

Thanks,

Kasi


Re: clearing solr write.lock

2007-12-20 Thread Mike Klaas

On 20-Dec-07, at 11:24 AM, Kasi Sankaralingam wrote:

I am running into a problem where previous residual lock files are  
left in solr data directory after
A failed process, can we programmatically/efficiently remove  
this .lock file. Also, has anyone
Externalized the handling of lock files (meaning keep the lock file  
for example in database?)

Any plug in available?


Well, the lock isn't really important at all for typical Solr  
operation.  It is recommended to use <lockType>simple</lockType>
(1.3) which avoids FS locks.  Otherwise, I would set
<mainIndex><unlockOnStartup>true</unlockOnStartup></mainIndex> and
not worry about it.


-Mike


RE: clearing solr write.lock

2007-12-20 Thread Kasi Sankaralingam
Hi Mike,

Thanks a lot. Where would this lock information go, and how do I
set the lock timeout?

Kasi

-Original Message-
From: Mike Klaas [mailto:[EMAIL PROTECTED]
Sent: Thursday, December 20, 2007 12:19 PM
To: solr-user@lucene.apache.org
Subject: Re: clearing solr write.lock

On 20-Dec-07, at 11:24 AM, Kasi Sankaralingam wrote:

 I am running into a problem where previous residual lock files are
 left in solr data directory after
 A failed process, can we programmatically/efficiently remove
 this .lock file. Also, has anyone
 Externalized the handling of lock files (meaning keep the lock file
 for example in database?)
 Any plug in available?

Well, the lock isn't really important at all for typical Solr
operation.  It is recommended to use <lockType>simple</lockType>
(1.3) which avoids FS locks.  Otherwise, I would set
<mainIndex><unlockOnStartup>true</unlockOnStartup></mainIndex> and
not worry about it.

-Mike


Re: clearing solr write.lock

2007-12-20 Thread Mike Klaas

Hey Kasi,

Take a look at the solr config file for the included example
(example/solr/conf/solrconfig.xml).  It is the canonical documentation.


cheers,
-Mike

On 20-Dec-07, at 1:46 PM, Kasi Sankaralingam wrote:


Hi Mike,

Thanks a lot, where would this lock information go and also
How do I set the lock timeout?

Kasi

-Original Message-
From: Mike Klaas [mailto:[EMAIL PROTECTED]
Sent: Thursday, December 20, 2007 12:19 PM
To: solr-user@lucene.apache.org
Subject: Re: clearing solr write.lock

On 20-Dec-07, at 11:24 AM, Kasi Sankaralingam wrote:


I am running into a problem where previous residual lock files are
left in solr data directory after
A failed process, can we programmatically/efficiently remove
this .lock file. Also, has anyone
Externalized the handling of lock files (meaning keep the lock file
for example in database?)
Any plug in available?


Well, the lock isn't really important at all for typical Solr
operation.  It is recommended to use <lockType>simple</lockType>
(1.3) which avoids FS locks.  Otherwise, I would set
<mainIndex><unlockOnStartup>true</unlockOnStartup></mainIndex> and
not worry about it.

-Mike




Re: DirectSolrConnection, write.lock and Too Many Open Files

2007-09-10 Thread Adrian Sutton
We use DirectSolrConnection via JNI in a couple of client apps that  
sometimes have 100s of thousands of new docs as fast as Solr will  
have them. It would crash relentlessly if I didn't force all calls  
to update or query to be on the same thread using objc's  
@synchronized and a message queue. I never narrowed down if this  
was a solr issue or a JNI one.


That doesn't sound promising. I'll throw in synchronization around  
the update code and see what happens. That doesn't seem good for  
performance though. Can Solr as a web app handle multiple updates at  
once or does it synchronize to avoid it?


Thanks,

Adrian Sutton
http://www.symphonious.net


Re: DirectSolrConnection, write.lock and Too Many Open Files

2007-09-10 Thread Mike Klaas

On 10-Sep-07, at 1:50 PM, Adrian Sutton wrote:

We use DirectSolrConnection via JNI in a couple of client apps  
that sometimes have 100s of thousands of new docs as fast as Solr  
will have them. It would crash relentlessly if I didn't force all  
calls to update or query to be on the same thread using objc's  
@synchronized and a message queue. I never narrowed down if this  
was a solr issue or a JNI one.


That doesn't sound promising. I'll throw in synchronization around  
the update code and see what happens. That's doesn't seem good for  
performance though. Can Solr as a web app handle multiple updates  
at once or does it synchronize to avoid it?


Solr can handle multiple simultaneous updates.  The entire request  
processing is concurrent, as is the document analysis.  Only the  
final write is synchronized (this includes lucene segment merging).


In the future, segment merging will occur in a separate thread,  
further improving concurrency.
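
As a schematic illustration of that pattern (not Solr's actual update path;
the class and method names below are made up): the expensive per-document work
runs on each request thread, and only the final write is done under a lock:

// Schematic sketch: analysis runs concurrently on many threads; only the
// final write into the shared index structure is serialized.
class UpdatePipeline {
  private final Object writeLock = new Object();
  private final StringBuilder index = new StringBuilder();

  void add(String rawDoc) {
    // Expensive, thread-safe work (parsing, tokenizing) happens concurrently.
    String analyzed = rawDoc.trim().toLowerCase();

    // Only the final write is synchronized.
    synchronized (writeLock) {
      index.append(analyzed).append('\n');
    }
  }
}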


-Mike


Re: DirectSolrConnection, write.lock and Too Many Open Files

2007-09-10 Thread Brian Whitman


On Sep 10, 2007, at 5:00 PM, Mike Klaas wrote:


On 10-Sep-07, at 1:50 PM, Adrian Sutton wrote:

We use DirectSolrConnection via JNI in a couple of client apps  
that sometimes have 100s of thousands of new docs as fast as Solr  
will have them. It would crash relentlessly if I didn't force all  
calls to update or query to be on the same thread using objc's  
@synchronized and a message queue. I never narrowed down if this  
was a solr issue or a JNI one.


That doesn't sound promising. I'll throw in synchronization around  
the update code and see what happens. That's doesn't seem good for  
performance though. Can Solr as a web app handle multiple updates  
at once or does it synchronize to avoid it?


Solr can handle multiple simultaneous updates.  The entire request  
processing is concurrent, as is the document analysis.  Only the  
final write is synchronized (this includes lucene segment merging).





Yes, i do want to disclaim that it's very likely my thread problems  
are an implementation detail w/ JNI, nothing to do w/ DSC.


-b




Re: DirectSolrConnection, write.lock and Too Many Open Files

2007-09-10 Thread Ryan McKinley


The other problem is that after some time we get a Too Many Open Files 
error when autocommit fires. 


Have you checked your ulimit settings?

http://wiki.apache.org/lucene-java/LuceneFAQ#head-48921635adf2c968f7936dc07d51dfb40d638b82

ulimit -n <number>.

As mike mentioned, you may also want to use 'single' as the lockType. 
In solrconfig set:


<indexDefaults>
  ...
  <lockType>single</lockType>
</indexDefaults>




I could of course switch to using the Solr webapp since we're running in 
Tomcat anyway, however I really like the ability to have a single WAR 
file that contains everything and also not have to worry about actually 
making HTTP requests and the complexity that adds.




This sounds like a good candidate to try solrj:
 http://wiki.apache.org/solr/Solrj

This way you write your app independently of how you connect to solr.  It 
also takes care of the XML parsing for you and lets you work with 
objects rather than strings.
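
For example, a minimal SolrJ sketch (assuming a server at localhost:8983 and
the SolrJ API of that era; error handling omitted):

import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.common.SolrInputDocument;

// Minimal sketch: add one document and commit over HTTP, without
// hand-building XML requests or parsing responses as strings.
public class SolrjExample {
  public static void main(String[] args) throws Exception {
    SolrServer server = new CommonsHttpSolrServer("http://localhost:8983/solr");

    SolrInputDocument doc = new SolrInputDocument();
    doc.addField("id", "example-1");
    doc.addField("name", "Example document");

    server.add(doc);
    server.commit();
  }
}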


ryan


Re: DirectSolrConnection, write.lock and Too Many Open Files

2007-09-10 Thread Ryan McKinley

Adrian Sutton wrote:

On 11/09/2007, at 7:21 AM, Ryan McKinley wrote:

The other problem is that after some time we get a Too Many Open 
Files error when autocommit fires.


Have you checked your ulimit settings?

http://wiki.apache.org/lucene-java/LuceneFAQ#head-48921635adf2c968f7936dc07d51dfb40d638b82 



ulimit -n number.


Yeah I'm aware of the ulimit, I'm just keen to identify what's causing 
it to happen before starting to increase limits. Given the write.lock 
errors as well I'm particularly suspicious of it. That said, most likely 
it happens whenever a search and a write are happening at the same time 
and two sets of the files get opened which is enough to kick it over the 
limit. The fact that it fixes itself is a good indication that it's not 
a file handle leak.


lucene opens a lot of files.  It can easily get beyond 1024. (I think 
the default).  I'm no expert on how the file handling works, but I think 
more files are open if you are searching and writing at the same time.


If you can't increase the limit you can try:
 <useCompoundFile>true</useCompoundFile>

It is slower, but if you are unable to change the ulimit on the deployed 
machines





As mike mentioned, you may also want to use 'single' as the lockType. 
In solrconfig set:


<indexDefaults>
  ...
  <lockType>single</lockType>
</indexDefaults>


I'll give that a go. Looks like it didn't make it into Solr 1.2 so I'll 
try upgrading to the nightly build.




If you need to use this in production soon, I'd suggest sticking with 
1.2 for a while.  There has been a LOT of action in trunk and it may be 
good to let it settle before upgrading a production system.


You should not need to upgrade to fix the write.lock and Too Many Open 
Files problem.  Try increasing ulimit or using a compoundfile before 
upgrading.




Just when you think you know everything on the wiki 


someone finally updates it!




Re: DirectSolrConnection, write.lock and Too Many Open Files

2007-09-10 Thread Adrian Sutton

On 11/09/2007, at 8:46 AM, Ryan McKinley wrote:
lucene opens a lot of files.  It can easily get beyond 1024. (I  
think the default).  I'm no expert on how the file handling works,  
but I think more files are open if you are searching and writing at  
the same time.


If you can't increase the limit you can try:
 <useCompoundFile>true</useCompoundFile>

It is slower, but if you are unable to change the ulimit on the  
deployed machines


I've done a bit of poking on the server and ulimit doesn't seem to be  
the problem:

e2wiki:~$ ulimit
unlimited
e2wiki:~$ cat /proc/sys/fs/file-max
170355

So there's either something going on behind my back (quite possible,  
it's a VM) or lucene is opening a really insane number of files. I  
did check that those values were the same for the tomcat55 user that  
Tomcat actually runs as.  An lsof -p on the Tomcat process always  
shows 40 files in use, the total open files sits around 1000-1500  
even when reindexing all the content. I'll watch it a bit more over  
time and see what happens.


I notice that Confluence recommends at least 20 for the max file  
limit, at least before they switched to compound indexing so it's  
possible that the 170355 limit could be reached, but it seems  
unlikely with our load.


If you need to use this in production soon, I'd suggest sticking  
with 1.2 for a while.  There has been a LOT of action in trunk and  
it may be good to let it settle before upgrading a production system.


You should not need to upgrade to fix the write.lock and Too Many  
Open Files problem.  Try increasing ulimit or using a compoundfile  
before upgrading.


We're quite a way off of real production, it's just internal use at  
the moment (on the real product server, but we're a small company so  
we can handle having some problems). I'll try out the current nightly  
build and see how it goes, as much as anything out of interest but  
probably won't pull new builds very often.


Thanks again,

Adrian Sutton
http://www.symphonious.net


Re: DirectSolrConnection, write.lock and Too Many Open Files

2007-09-10 Thread Ryan McKinley


I've done a bit of poking on the server and ulimit doesn't seem to be 
the problem:

e2wiki:~$ ulimit
unlimited
e2wiki:~$ cat /proc/sys/fs/file-max
170355


try: ulimit -n

ulimit on its own is something else.  On my machine I get:

[EMAIL PROTECTED]:~$ ulimit
unlimited
[EMAIL PROTECTED]:~$ cat /proc/sys/fs/file-max
364770
[EMAIL PROTECTED]:~$ ulimit -n
1024


I have to run:
ulimit -n 2

to get lucene to run w/ a large index...


Re: DirectSolrConnection, write.lock and Too Many Open Files

2007-09-10 Thread Adrian Sutton

On 11/09/2007, at 9:48 AM, Ryan McKinley wrote:

try: ulimit -n

ulimit on its own is something else.  On my machine I get:

[EMAIL PROTECTED]:~$ ulimit
unlimited
[EMAIL PROTECTED]:~$ cat /proc/sys/fs/file-max
364770
[EMAIL PROTECTED]:~$ ulimit -n
1024


I have to run:
ulimit -n 2

to get lucene to run w/ a large index...


Bingo, I'm an idiot - or rather, I now know *why* I'm an idiot. :)   
I'll give it a go.


Also, this is likely to be the cause of my write.lock problems - the  
Too many files exception just occurred and the write.lock file gets  
left around (should have seen that one coming too).


Thanks for your help, I'm anticipating that this will solve our  
problems.


Regards,

Adrian Sutton
http://www.symphonious.net