You could send a wait-for-yellow cluster health request just after your index creation:

curl -XGET 'http://localhost:9200/_cluster/health?wait_for_status=yellow'
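
If you are creating the index through the Java API, the equivalent would be
something along these lines (a rough sketch against the 0.90-era Java client;
"indexName" and the already-connected client are placeholders, so adapt it to
your setup):

import org.elasticsearch.action.admin.cluster.health.ClusterHealthResponse;
import org.elasticsearch.client.Client;
import org.elasticsearch.common.unit.TimeValue;

// Block until the index's primaries are allocated (yellow = all primaries active).
ClusterHealthResponse health = client.admin().cluster()
        .prepareHealth("indexName")
        .setWaitForYellowStatus()
        .setTimeout(TimeValue.timeValueSeconds(30))
        .execute().actionGet();
if (health.isTimedOut()) {
    // Primaries did not come up within the timeout; don't start indexing yet.
}
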
HTH

--
David ;-)
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs


On 26 Dec 2013, at 07:19, Tarang Dawer <[email protected]> wrote:

While browsing for this issue, I came across a comment from spinscale at 
https://github.com/elasticsearch/elasticsearch/issues/2922
It says that after creating an index, it takes some time for the index to 
become fully functional. What I think I am facing is that if the node gets 
killed during that window, the index gets corrupted and indexing stops with an 
UnavailableShardsException being thrown.
Is there some setting so that the index-creation call returns only after the 
index is fully functional? That would solve my problem: if the index is 
created, it would be fully functional; if not, the code would go and create 
the index again, roughly as sketched below.
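
A sketch of the flow I have in mind, using the 0.90-era Java client 
("indexName" is a placeholder, and the client is assumed to be connected 
already):

import org.elasticsearch.action.admin.cluster.health.ClusterHealthResponse;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.indices.IndexAlreadyExistsException;

// Create the index, then block until its primaries are allocated;
// if the wait times out, loop and start over.
boolean ready = false;
while (!ready) {
    try {
        client.admin().indices().prepareCreate("indexName").execute().actionGet();
    } catch (IndexAlreadyExistsException e) {
        // A previous attempt already created it; fall through and re-check health.
    }
    ClusterHealthResponse health = client.admin().cluster()
            .prepareHealth("indexName")
            .setWaitForYellowStatus()
            .setTimeout(TimeValue.timeValueSeconds(30))
            .execute().actionGet();
    ready = !health.isTimedOut();
}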


> On Thu, Dec 26, 2013 at 10:49 AM, Tarang Dawer <[email protected]> wrote:
> Tried once again. This time the status showed 3 successful shards, but the 
> indexing got stuck, and after a while an exception was thrown:
> 
> Caused by: org.elasticsearch.action.UnavailableShardsException: 
> [indexName][4] [2] shardIt, [0] active : Timeout waiting for [1m], request: 
> index {[indexName][indexName][ID], source["Record":"Details"]}
>     at 
> org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction.raiseTimeoutFailure(TransportShardReplicationOperationAction.java:548)
>     at 
> org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction$3.onTimeout(TransportShardReplicationOperationAction.java:538)
>     at 
> org.elasticsearch.cluster.service.InternalClusterService$NotifyTimeout.run(InternalClusterService.java:483)
>     ... 3 more
> 
> The cluster state is still red, with Elasticsearch unable to recover the 
> shards after the restart.
> 
> elasticsearch-head (node "Conquer Lord") shows cluster health: red (1, 3)
> 
> 
>> On Thu, Dec 26, 2013 at 10:32 AM, Tarang Dawer <[email protected]> 
>> wrote:
>> Tried this on 0.90.9 also. This time, after the restart, the index status 
>> comes back as:
>> 
>> http://localhost:9200/indexName/_status
>> {"ok":true,"_shards":{"total":10,"successful":0,"failed":0},"indices":{}}
>> 
>> In the elasticsearch head plugin, it shows the cluster in a red state, with 
>> no shards being allocated to the node even after the restart of 
>> Elasticsearch.
>> 
>> elasticsearch-head (node "Squirrel Girl") shows cluster health: red (1, 0)
>> 
>>> On Thu, Dec 26, 2013 at 10:06 AM, Tarang Dawer <[email protected]> 
>>> wrote:
>>> I have reliably recreated this many times. It happens while creating an 
>>> index on a single node (default 5 shards). I have set 
>>> "action.auto_create_index: false", "discovery.zen.ping.multicast.enabled: 
>>> false" and "node.master: true", and I am creating the indices via the Java 
>>> API. I kill (kill -9) the Elasticsearch process immediately after the 
>>> index is created. When I restart Elasticsearch, out of the 5 primary 
>>> shards, it shows 3 or 4 shards in a corrupt state, with a "503" status 
>>> code.
>>> 
>>> 
>>>> On Thu, Dec 26, 2013 at 3:58 AM, Alexander Reelsen <[email protected]> 
>>>> wrote:
>>>> Hey,
>>>> 
>>>> Can you reliably recreate this, and try to create a gist? Preferably 
>>>> using Elasticsearch 0.90.9. When you create an index, you usually create 
>>>> 5 shards, so how can 3/4 shards be corrupt? Did you change anything and 
>>>> not use the defaults (are you changing the defaults somewhere else as 
>>>> well)? It would be great if you could provide a reproducible example 
>>>> using a gist, as mentioned at 
>>>> http://www.elasticsearch.org/help
>>>> 
>>>> 
>>>> --Alex
>>>> 
>>>> 
>>>>> On Wed, Dec 25, 2013 at 4:35 PM, Tarang Dawer <[email protected]> 
>>>>> wrote:
>>>>> Hi all,
>>>>> I am facing an issue of corrupt index creation whenever the ES node is 
>>>>> killed just after an index is created. When the node is restarted, the 
>>>>> index shows 3 or 4 shards corrupt, with a 503 status; they never 
>>>>> recover, and as a result my indexing gets stuck. I am doing this on a 
>>>>> single node, with ES version 0.90.1. Please help me out.
>>>>> Thanks,
>>>>> Tarang Dawer

