Hi,

We're upgrading our staging cluster from 1.2.1 to 1.3.0 one box at a time.
We stopped Elasticsearch on the first box, removed the Groovy plugin we were
using with 1.2.1, and deployed 1.3.0 with Chef. The new box reports itself as
1.3.0, but when it rejoins the two-box cluster (the other box is still running
1.2.1) the logs fill with:

[2014-07-28 09:40:57,121][WARN ][cluster.action.shard     ] [stg-elastic-1] [development][3] sending failed shard for [development][3], node[ob-VHwcSR3KWEJcaS8GyVA], [R], s[INITIALIZING], indexUUID [_na_], reason [Failed to start shard, message [IllegalArgumentException[No enum constant org.apache.lucene.util.Version.4.3.1]]]
[2014-07-28 09:40:57,133][WARN ][index.engine.internal    ] [stg-elastic-1] [development][4] failed engine [corrupted preexisting index]
[2014-07-28 09:40:57,134][WARN ][indices.cluster          ] [stg-elastic-1] [development][4] failed to start shard
java.lang.IllegalArgumentException: No enum constant org.apache.lucene.util.Version.4.3.1
    at java.lang.Enum.valueOf(Enum.java:236)
    at org.apache.lucene.util.Version.valueOf(Version.java:32)
    at org.apache.lucene.util.Version.parseLeniently(Version.java:250)
    at org.elasticsearch.index.store.Store$MetadataSnapshot.buildMetadata(Store.java:451)
    at org.elasticsearch.index.store.Store$MetadataSnapshot.<init>(Store.java:433)
    at org.elasticsearch.index.store.Store.getMetadata(Store.java:144)
    at org.elasticsearch.indices.cluster.IndicesClusterStateService.applyInitializingShard(IndicesClusterStateService.java:724)
    at org.elasticsearch.indices.cluster.IndicesClusterStateService.applyNewOrUpdatedShards(IndicesClusterStateService.java:576)
    at org.elasticsearch.indices.cluster.IndicesClusterStateService.clusterChanged(IndicesClusterStateService.java:183)
    at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:444)
    at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:153)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:744)
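
(If it helps to reproduce the version check: we're reading the node version off
the root endpoint, something like the following; host and port are whatever
your setup uses:)

    curl -XGET 'http://localhost:9200/'   # the upgraded box returns "version": { "number": "1.3.0", ... }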

Most of the indexes have both primary and replica shards allocated, but the
index in question (development) has replicas repeatedly going in and out of
the newly upgraded node.
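
(For reference, the cycling is visible via the cat shards API; something along
these lines, assuming the default HTTP port, lists the state of each copy of
the development index:)

    curl -XGET 'http://localhost:9200/_cat/shards/development?v'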

The documentation suggests that a rolling upgrade from 1.2.1 should be
possible and that the two versions should coexist reasonably happily, but that
does not seem to be the case here: the cluster never goes green. We're also
slightly concerned by the words 'corrupted preexisting index'; might some
damage have been done?
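
('Green' here is the status reported by the cluster health endpoint, e.g.:)

    curl -XGET 'http://localhost:9200/_cluster/health?pretty'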

We've stopped Elasticsearch on the 1.3.0 box whilst we seek your advice.

Thanks,

Ollie
