I've been running into this issue with the elasticsearch image
(8b05ccbe-0890-11e5-8d30-57386d1d5482) for a couple of weeks now and haven't
been able to get it off the ground because of it.
I've even removed OpenJDK and pointed the elasticsearch service's java path
("/opt/local/java/openjdk7/bin/java") at Oracle's Java binary; the
elasticsearch service then starts without a problem.
Even so, I still receive the error messages below:
Got error to send bulk of actions: blocked by: [SERVICE_UNAVAILABLE/1/state
not recovered / initialized];[SERVICE_UNAVAILABLE/2/no master];
{:level=>:error, :file=>"logstash/outputs/elasticsearch.rb", :line=>"568",
:method=>"flush"}
Failed to flush outgoing items {:outgoing_count=>54,
:exception=>"Java::OrgElasticsearchClusterBlock::ClusterBlockException",
:backtrace=>["org.elasticsearch.cluster.block.ClusterBlocks.globalBlockedException(org/elasticsearch/cluster/block/ClusterBlocks.java:151)",
"org.elasticsearch.cluster.block.ClusterBlocks.globalBlockedRaiseException(org/elasticsearch/cluster/block/ClusterBlocks.java:141)",
"org.elasticsearch.action.bulk.TransportBulkAction.executeBulk(org/elasticsearch/action/bulk/TransportBulkAction.java:215)",
"org.elasticsearch.action.bulk.TransportBulkAction.access$000(org/elasticsearch/action/bulk/TransportBulkAction.java:67)",
"org.elasticsearch.action.bulk.TransportBulkAction$1.onFailure(org/elasticsearch/action/bulk/TransportBulkAction.java:153)",
"org.elasticsearch.action.support.TransportAction$ThreadedActionListener$2.run(org/elasticsearch/action/support/TransportAction.java:137)",
"java.util.concurrent.ThreadPoolExecutor.runWorker(java/util/concurrent/ThreadPoolExecutor.java:1142)",
"java.util.concurrent.ThreadPoolExecutor$Worker.run(java/util/concurrent/ThreadPoolExecutor.java:617)",
"java.lang.Thread.run(java/lang/Thread.java:745)"], :level=>:warn,
:file=>"stud/buffer.rb", :line=>"231", :method=>"buffer_flush"}
^CSIGINT received. Shutting down the pipeline. {:level=>:warn,
:file=>"logstash/agent.rb", :line=>"126", :method=>"execute"}
Sending shutdown signal to input thread {:thread=>#<Thread:0x3f24d196
sleep>, :level=>:info, :file=>"logstash/pipeline.rb", :line=>"260",
:method=>"shutdown"}
Sending shutdown signal to input thread {:thread=>#<Thread:0x148c481b
sleep>, :level=>:info, :file=>"logstash/pipeline.rb", :line=>"260",
:method=>"shutdown"}
caller requested sincedb write () {:level=>:debug,
:file=>"filewatch/tail.rb", :line=>"204", :method=>"sincedb_write"}
caller requested sincedb write () {:level=>:debug,
:file=>"filewatch/tail.rb", :line=>"204", :method=>"sincedb_write"}
^CSIGINT received. Terminating immediately.. {:level=>:fatal,
:file=>"logstash/agent.rb", :line=>"123", :method=>"execute"}
I finally gave up on the image today and instead used the base64 image
(62f148f8-6e84-11e4-82c5-efca60348b9f) with Oracle Java 8 and a stock
Elasticsearch download (https://www.elastic.co/downloads/elasticsearch), and
everything runs.
My index shows up as created below, whereas with
8b05ccbe-0890-11e5-8d30-57386d1d5482 only ".kibana" ever appeared:
[root@elasticsearch0 ~]# curl localhost:9200/_cat/indices?v
health status index            pri rep docs.count docs.deleted store.size pri.store.size
yellow open   .kibana            1   1          2            0      9.3kb          9.3kb
yellow open   logstash-monthly   5   1     130063            0     72.4mb         72.4mb
8b05ccbe-0890-11e5-8d30-57386d1d5482 elasticsearch 15.1.1 smartos 2015-06-01T19:00:59Z
-------------------------------------------
smartos-discuss
Archives: https://www.listbox.com/member/archive/184463/=now
RSS Feed: https://www.listbox.com/member/archive/rss/184463/25769125-55cfbc00