Dima, thanks so much for your excellent answer, and for sharing your knowledge. 
 
All the other components of a traditional Hadoop stack are designed to run on a 
single node for testing purposes, so I’m glad ES does too.

Best,
--Matt

On 12/1/16, 2:08 AM, "Dima Kovalyov" <dima.koval...@sstech.us> wrote:

    Hello Matt,
    
That is something you experience when you install ES on 1 node instead of 
the 3+ nodes recommended by the guide. There are two things to check: ES health 
and shard status:
    curl http://mfoley-metron-1.openstacklocal:9200/_cat/health?v
    curl http://mfoley-metron-1.openstacklocal:9200/_cat/shards?pretty
    If health is red and shards show NA (unassigned), then you have it badly 
configured, as mentioned by Justin.
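    
    If you want to watch the node settle, here is a minimal shell sketch 
(assuming only curl and grep; it polls the standard _cluster/health endpoint, 
with the host name taken from your mail) that retries until the status leaves 
red:
    
    # Minimal sketch: poll cluster health every 5s, up to 12 tries.
    ES=http://mfoley-metron-1.openstacklocal:9200
    for i in $(seq 1 12); do
        status=$(curl -s "$ES/_cluster/health" | grep -o '"status":"[a-z]*"')
        echo "try $i: ${status:-no response}"
        case "$status" in *green*|*yellow*) break ;; esac
        sleep 5
    done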
    
    In the guide (that we are discussing in the user@ thread) here:
    https://goo.gl/HWGwpj
    You can find ES yml config for single node:
    cluster.name: metron
    network.host: ["_eth0:ipv4_","_local:ipv4_"]
    discovery.zen.ping.unicast.hosts: [ <single_node_domain> ]
    path.data: /opt/lmm/es_data
    index.number_of_replicas: 0
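    
    The important line for a single node is index.number_of_replicas: 0. With 
only one node, replica shards can never be allocated, so they stay unassigned 
and the cluster cannot go green. Note that this setting only applies to newly 
created indices; for indices that already exist, you can zero the replicas in 
place through the standard index settings API (host name from your mail again):
    
    # Drop replicas to 0 on all existing indices (ES index settings API)
    curl -XPUT 'http://mfoley-metron-1.openstacklocal:9200/_settings' \
        -d '{"index":{"number_of_replicas":0}}'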
    
    After you update it in /etc/elasticsearch/elasticsearch.yml (remove 
everything you have there first), restart ES with:
    service elasticsearch restart
    And NOT through Ambari (which will revert the config to its initial, 
conflicting state).
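    
    Once ES is back up, re-running the health check from above should report 
status green (yellow at worst, while shards initialize):
    
    curl -s 'http://mfoley-metron-1.openstacklocal:9200/_cat/health?v'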
    
    Please let me know if you have any questions.
    Thank you.
    
    - Dima
    
    On 12/01/2016 04:19 AM, Matt Foley wrote:
    
    Apropos of the discussion of documenting the install procedure: I was 
experiencing a lot of instability with the vagrant quick-dev-platform on my Mac 
due to memory pressure, so I fired up a 16GB cloud VM and installed a 
single-node “cluster” for Metron, using the Ambari metron-mpack and RPMs built 
on my Mac.  I followed the instructions at 
https://community.hortonworks.com/content/kbentry/60805/deploying-a-fresh-metron-cluster-using-ambari-serv.html
 and just installed all services on the single available node.  The node is 
CentOS 7, the browser is Firefox, and I managed to get Python 2.7.11 on it.
    
    
    
    It all worked just fine, and all services including Metron are green in 
Ambari.
    
    
    
    However, when I access localhost:5000, Kibana complains that 
“plugin:elasticsearch” is “Service Unavailable”. (Red)
    
    In Kibana config, kibana_es_url is 
http://mfoley-metron-1.openstacklocal:9200
    
    In Elasticsearch config, http_port is “9200-9300” and transport_tcp_port is 
“9300-9400”
    
    
    
    If I just point the browser at http://mfoley-metron-1.openstacklocal:9200 , 
it responds:
    
    {
      "name" : "mfoley-metron-1.openstacklocal",
      "cluster_name" : "metron",
      "version" : {
        "number" : "2.3.3",
        "build_hash" : "218bdf10790eef486ff2c41a3df5cfa32dadcfde",
        "build_timestamp" : "2016-05-17T15:40:04Z",
        "build_snapshot" : false,
        "lucene_version" : "5.5.0"
      },
      "tagline" : "You Know, for Search"
    }
    
    so Elasticsearch is listening on that port.
    
    
    
    Has anyone had experience resolving this problem with the Kibana 
elasticsearch plugin?
    
    
    
    I’ve tried starting and stopping both elasticsearch and kibana; tried the 
URL with and without a trailing slash; tried port 9300 as well as 9200 (9300 
doesn’t work in the browser or in Kibana); and tried un-commenting the set of 
elasticsearch-related timeouts in kibana/config/kibana.yml.  Nothing helped.  
Even with logging.verbose=true, the log only contains:
    
    
    
    
{"type":"log","@timestamp":"2016-12-01T02:10:45+00:00","tags":["plugins","debug"],"pid":4863,"plugin":{"name":"elasticsearch","version":"1.0.0"},"message":"Initializing
 plugin elasticsearch"}
    
    
    
    
{"type":"log","@timestamp":"2016-12-01T02:10:45+00:00","tags":["status","plugin:elasticsearch","info"],"pid":4863,"name":"plugin:elasticsearch","state":"yellow","message":"Status
 changed from uninitialized to yellow - Waiting for 
Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
    
    
    
    
{"type":"log","@timestamp":"2016-12-01T02:10:45+00:00","tags":["status","plugin:elasticsearch","error"],"pid":4863,"name":"plugin:elasticsearch","state":"red","message":"Status
 changed from yellow to red - Service 
Unavailable","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
    
    
    
    which looks like it did NOT wait, even though I left the timeouts set to 
many seconds:
    
    
    
    # Time in milliseconds to wait for elasticsearch to respond to pings, 
    # defaults to request_timeout setting
    elasticsearch.pingTimeout: 1500
    
    # Time in milliseconds to wait for responses from the back end or 
    # elasticsearch. This must be > 0
    elasticsearch.requestTimeout: 30000
    
    # Time in milliseconds for Elasticsearch to wait for responses from shards.
    # Set to 0 to disable.
    elasticsearch.shardTimeout: 0
    
    # Time in milliseconds to wait for Elasticsearch at Kibana startup before 
    # retrying
    elasticsearch.startupTimeout: 5000
    
    
    
    Thanks in advance for any help,
    
    --Matt