Hi Alex,

Thanks for your reply. The ES instance is running on a Linux machine, and I 
have only one instance, with 1 shard and 1 replica.
I think it is solved now by adding this line:
index.store.fs.lock: none

Since I added that parameter, the problem no longer occurs.
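For anyone finding this later, a minimal sketch of where that line could go, assuming a package install with the config at /etc/elasticsearch/elasticsearch.yml (this is the pre-2.x flat settings syntax):

```yaml
# /etc/elasticsearch/elasticsearch.yml  (assumed path for a package install)
# Disable the native filesystem lock on index directories.
# Caution: with locking set to "none", nothing prevents a second process
# from writing to the same shard directory concurrently, which can corrupt
# the index -- only use this when you are sure a single node owns the data path.
index.store.fs.lock: none
```

A restart of the node is needed for the setting to take effect. If the lock error was caused by a lingering JVM (as Alex suggests checking with jps), killing that process is the safer fix than disabling locking.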


On Monday, June 16, 2014 2:42:18 PM UTC+3, Alexander Reelsen wrote:
>
> Hey,
>
> what elasticsearch version are you using? Judging from the directory I 
> don't think you are using NFS, right? Are you running multiple instances 
> locally?
> Have you shut down elasticsearch properly, so that no other instance is 
> lingering around? (You can use jps to check.)
>
>
> --Alex
>
>
> On Mon, Jun 2, 2014 at 1:16 PM, Fatih Karatana <[email protected]> wrote:
>
>> I tried to create an index, and within a couple of seconds I got this:
>> [2014-06-02 14:10:14,414][WARN ][index.engine.internal    ] [shardicaprio
>> ] [myindex][0] Could not lock IndexWriter isLocked [false]
>>
>>
>> And here is full stack trace:
>>
>> org.apache.lucene.store.LockObtainFailedException: Lock obtain timed out: 
>> NativeFSLock@/var/lib/elasticsearch/data/shardicaprio/nodes/0/indices/
>> myindex/0/index/write.lock
>>         at org.apache.lucene.store.Lock.obtain(Lock.java:84)
>>         at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:
>> 702)
>>         at org.elasticsearch.index.engine.internal.InternalEngine.
>> createWriter(InternalEngine.java:1388)
>>         at org.elasticsearch.index.engine.internal.InternalEngine.start(
>> InternalEngine.java:256)
>>         at org.elasticsearch.index.shard.service.InternalIndexShard.
>> postRecovery(InternalIndexShard.java:684)
>>         at org.elasticsearch.index.gateway.local.LocalIndexShardGateway.
>> recover(LocalIndexShardGateway.java:158)
>>         at org.elasticsearch.index.gateway.IndexShardGatewayService$1.run
>> (IndexShardGatewayService.java:189)
>>         at java.util.concurrent.ThreadPoolExecutor.runWorker(
>> ThreadPoolExecutor.java:1145)
>>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(
>> ThreadPoolExecutor.java:615)
>>         at java.lang.Thread.run(Thread.java:745)
>> [2014-06-02 14:10:14,533][WARN ][indices.cluster          ] [shardicaprio
>> ] [myindex][0] failed to start shard
>> org.elasticsearch.index.gateway.IndexShardGatewayRecoveryException: [
>> myindex][0] failed recovery
>>         at org.elasticsearch.index.gateway.IndexShardGatewayService$1.run
>> (IndexShardGatewayService.java:248)
>>         at java.util.concurrent.ThreadPoolExecutor.runWorker(
>> ThreadPoolExecutor.java:1145)
>>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(
>> ThreadPoolExecutor.java:615)
>>         at java.lang.Thread.run(Thread.java:745)
>> Caused by: org.elasticsearch.index.engine.EngineCreationFailureException: 
>> [myindex][0] failed to create engine
>>         at org.elasticsearch.index.engine.internal.InternalEngine.start(
>> InternalEngine.java:258)
>>         at org.elasticsearch.index.shard.service.InternalIndexShard.
>> postRecovery(InternalIndexShard.java:684)
>>         at org.elasticsearch.index.gateway.local.LocalIndexShardGateway.
>> recover(LocalIndexShardGateway.java:158)
>>         at org.elasticsearch.index.gateway.IndexShardGatewayService$1.run
>> (IndexShardGatewayService.java:189)
>>         ... 3 more
>> Caused by: org.apache.lucene.store.LockObtainFailedException: Lock 
>> obtain timed out: NativeFSLock@/var/lib/elasticsearch/data/shardicaprio/
>> nodes/0/indices/myindex/0/index/write.lock
>>         at org.apache.lucene.store.Lock.obtain(Lock.java:84)
>>         at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:
>> 702)
>>         at org.elasticsearch.index.engine.internal.InternalEngine.
>> createWriter(InternalEngine.java:1388)
>>         at org.elasticsearch.index.engine.internal.InternalEngine.start(
>> InternalEngine.java:256)
>>         ... 6 more
>> [2014-06-02 14:10:14,536][WARN ][cluster.action.shard     ] [shardicaprio
>> ] [myindex][0] sending failed shard for [myindex][0], node[
>> kHOedr2wQpa3DSZj81ep_A], [P], s[INITIALIZING], indexUUID [29Uf2hH4S2-
>> FJf1LnNrM0A], reason [Failed to start shard, message [
>> IndexShardGatewayRecoveryException[[myindex][0] failed recovery]; nested: 
>> EngineCreationFailureException[[myindex][0] failed to create engine]; 
>> nested: LockObtainFailedException[Lock obtain timed out: NativeFSLock@/
>> var/lib/elasticsearch/data/shardicaprio/nodes/0/indices/myindex/0/index/
>> write.lock]; ]]
>> [2014-06-02 14:10:14,536][WARN ][cluster.action.shard     ] [shardicaprio
>> ] [myindex][0] received shard failed for [myindex][0], node[
>> kHOedr2wQpa3DSZj81ep_A], [P], s[INITIALIZING], indexUUID [29Uf2hH4S2-
>> FJf1LnNrM0A], reason [Failed to start shard, message [
>> IndexShardGatewayRecoveryException[[myindex][0] failed recovery]; nested: 
>> EngineCreationFailureException[[myindex][0] failed to create engine]; 
>> nested: LockObtainFailedException[Lock obtain timed out: NativeFSLock@/
>> var/lib/elasticsearch/data/shardicaprio/nodes/0/indices/myindex/0/index/
>> write.lock]; ]]
>>
>> There is no memory pressure and my heap size is fine, but the CPU gets 
>> overloaded, even above 100% usage. I tried recovering the index, deleting 
>> it, and recreating it, but it reports the same error every time. I could 
>> not figure out what is causing this.
>>
>> Any idea?
>>
>>
>
>

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/fab544f6-ca26-4e40-8b20-d9083fa98bb3%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
