Hello Matt,

That is something you experience when you install ES on 1 node instead of the 
3+ nodes recommended by the guide. There are two things to check: ES health 
and shards status:
curl http://mfoley-metron-1.openstacklocal:9200/_cat/shards?pretty
If the health is critical and the shards show as NA, then you have it badly 
configured, as mentioned by Justin.
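
For the health half of those two checks, here is a minimal sketch (assuming the same host and port as the shards URL above; `_cluster/health` is the standard ES endpoint for overall cluster state):

```shell
# Health check, companion to the _cat/shards call above
# (host/port assumed to match the shards URL; adjust for your node).
curl -s 'http://mfoley-metron-1.openstacklocal:9200/_cluster/health?pretty' || true

# What to look for: on a misconfigured single node, "status" is typically
# "red" with a non-zero "unassigned_shards". An illustrative healthy
# response (sample data, not captured from the cluster in this thread):
sample='{"cluster_name":"metron","status":"green","unassigned_shards":0}'
echo "$sample" | grep -o '"status":"[a-z]*"'
```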

In the guide (that we are discussing in the user@ thread) here:
you can find the ES yml config for a single node:
cluster.name: metron
network.host: ["_eth0:ipv4_","_local:ipv4_"]
discovery.zen.ping.unicast.hosts: [ <single_node_domain> ]
path.data: /opt/lmm/es_data
index.number_of_replicas: 0
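
One caveat worth adding (a general Elasticsearch behavior, not something stated in the guide): `index.number_of_replicas` in elasticsearch.yml only applies to indices created after the change. Indices that already exist keep their replica count and stay unassigned on a single node, so they would need an explicit settings update; a sketch, assuming the host from this thread:

```shell
# Settings body for dropping replicas to zero on existing indices.
body='{"index":{"number_of_replicas":0}}'
echo "$body"

# Applied to all existing indices via the standard _settings API
# (hypothetical invocation against the host in this thread; || true so
# a dry run without a reachable cluster does not abort the script):
curl -s -XPUT 'http://mfoley-metron-1.openstacklocal:9200/_all/_settings' -d "$body" || true
```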

Once you have updated /etc/elasticsearch/elasticsearch.yml (remove everything 
you have there first), restart ES through:
service elasticsearch restart
and NOT through Ambari (which will revert the config to its initial conflicting 
state).
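After the restart, it can help to confirm ES answers again before retrying Kibana; a bounded wait loop, assuming the root URL already used elsewhere in this thread:

```shell
# Poll the ES root endpoint a few times, one second apart
# (host assumed from the thread; adjust to your node).
up=no
for i in 1 2 3; do
  if curl -s 'http://mfoley-metron-1.openstacklocal:9200/' >/dev/null 2>&1; then
    up=yes
    break
  fi
  sleep 1
done
echo "elasticsearch reachable: $up"
```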

Please let me know if you have any questions.
Thank you.

- Dima

On 12/01/2016 04:19 AM, Matt Foley wrote:

Apropos of the discussion of documenting the install procedure, I was 
experiencing a lot of instability with the vagrant quick-dev-platform on my Mac 
due to memory pressure, so I fired up a 16GB cloud vm and installed a 
single-node “cluster” for Metron, using the Ambari metron-mpack and rpms built on 
my Mac.  I followed the instructions at 
 and just installed all services on the single available node.  The node is 
Centos 7, the browser is Firefox, and I managed to get Python 2.7.11 on it.

It all worked just fine, and all services including Metron are green in Ambari.

However, when I access localhost:5000, Kibana complains that 
“plugin:elasticsearch” is “Service Unavailable”. (Red)

In Kibana config, kibana_es_url is http://mfoley-metron-1.openstacklocal:9200

In Elasticsearch config, http_port is “9200-9300” and transport_tcp_port is 

If I just point the browser at http://mfoley-metron-1.openstacklocal:9200 , it 
returns:

  "name" : "mfoley-metron-1.openstacklocal",

  "cluster_name" : "metron",

  "version" : {

    "number" : "2.3.3",

    "build_hash" : "218bdf10790eef486ff2c41a3df5cfa32dadcfde",

    "build_timestamp" : "2016-05-17T15:40:04Z",

    "build_snapshot" : false,

    "lucene_version" : "5.5.0"


  "tagline" : "You Know, for Search"


so Elasticsearch is listening on that port.
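
As a side note on why 9300 did not work in the browser (general ES behavior, not specific to this install): 9200 serves the HTTP REST API, while 9300 speaks the binary transport protocol used for node-to-node and native-client traffic, so browsers, curl, and Kibana can only use 9200. A quick sketch:

```shell
# 9200: HTTP REST API; the only port Kibana's elasticsearch plugin
# (or a browser) can talk to.
# 9300: binary transport protocol; HTTP clients get no usable response,
# so "9300 doesn't work in browser" is expected, not part of the bug.
http_port=9200
transport_port=9300
echo "point kibana_es_url at port ${http_port}, never ${transport_port}"
```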

Has anyone had experience how to resolve this problem with the Kibana 
elasticsearch plugin?

I’ve tried starting and stopping both elasticsearch and kibana; tried the URL 
with and without a slash at the end; tried port 9300 as well as 9200 (9300 
doesn’t work in browser, nor in kibana either); and tried un-commenting the set 
of elasticsearch-related timeouts in kibana/config/kibana.yml.  Nothing helped. 
 Even with logging.verbose=true, log only contains:

… "name":"plugin:elasticsearch"}

… "Status changed from uninitialized to yellow - Waiting for Elasticsearch" …

… "Status changed from yellow to red - Service 
Unavailable","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}

which looks like it did NOT wait, even though I left the timeouts set to many 
seconds:
# Time in milliseconds to wait for elasticsearch to respond to pings, defaults to the
# request_timeout setting
elasticsearch.pingTimeout: 1500

# Time in milliseconds to wait for responses from the back end or elasticsearch.
# This must be > 0
elasticsearch.requestTimeout: 30000

# Time in milliseconds for Elasticsearch to wait for responses from shards.
# Set to 0 to disable.
elasticsearch.shardTimeout: 0

# Time in milliseconds to wait for Elasticsearch at Kibana startup before retrying
elasticsearch.startupTimeout: 5000
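
For digging further than the log, Kibana 4.2+ builds also expose a status API (assuming this build has it, and using the port 5000 from the setup above; stock Kibana defaults to 5601):

```shell
# If Kibana answers, the per-plugin state appears as JSON here
# (|| true so the sketch does not abort when no Kibana is reachable):
curl -s 'http://localhost:5000/api/status' || true

# Illustrative fragment of what the elasticsearch plugin entry can look
# like once things recover (sample data, not captured output):
sample='{"name":"plugin:elasticsearch","state":"green"}'
echo "$sample" | grep -o '"state":"[a-z]*"'
```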

Thanks in advance for any help,

