Hi,

I'm trying to run a server with Elasticsearch and Logstash. I configured minimal settings, but I still can't get it running:

In the ES log I see:

[transport.netty          ] [lekNo1] Message not fully read (request) for [30] and action [], resetting


The elasticsearch.yml contains:

cluster.name: "elasticqa"
network.host: 0.0.0.0
node.data: true
node.master: true
node.name: "lekNo1"
path.data: /home/elasticsearch/data
path.logs: /home/elasticsearch/logs
path.work: /home/elasticsearch/data/temp


root@szl:~# curl -s http://10.13.201.103:9200/_status?pretty=true
{
  "_shards" : {
    "total" : 0,
    "successful" : 0,
    "failed" : 0
  },
  "indices" : { }
}
root@szl:~# curl 'localhost:9200/_nodes/jvm?pretty'
{
  "cluster_name" : "elasticqa",
  "nodes" : {
    "6yPHl-6ETL-XI0ht9ieFFA" : {
      "name" : "lekNo1",
      "transport_address" : "inet[/10.13.201.103:9300]",
      "host" : "szl",
      "ip" : "10.13.201.103",
      "version" : "1.0.1",
      "build" : "5c03844",
      "http_address" : "inet[/10.13.201.103:9200]",
      "attributes" : {
        "master" : "true"
      },
      "jvm" : {
        "pid" : 2636,
        "version" : "1.6.0_27",
        "vm_name" : "OpenJDK 64-Bit Server VM",
        "vm_version" : "20.0-b12",
        "vm_vendor" : "Sun Microsystems Inc.",
        "start_time" : 1394139699953,
        "mem" : {
          "heap_init_in_bytes" : 268435456,
          "heap_max_in_bytes" : 1071579136,
          "non_heap_init_in_bytes" : 24313856,
          "non_heap_max_in_bytes" : 224395264,
          "direct_max_in_bytes" : 1071579136
        },
        "gc_collectors" : [ "Copy", "ConcurrentMarkSweep" ],
        "memory_pools" : [ "Code Cache", "Eden Space", "Survivor Space", 
"CMS Old Gen", "CMS Perm Gen" ]
      }
    }
  }
}


The Logstash config file contains:

output {
        elasticsearch {
                host => "localhost"
#               cluster => "elasticqa"
#                port => 9300
#               node_name => "lekNo1"
                protocol => "transport"
        }

        # debugging
        file {
                path => "/root/test.log"
        }
}

I start Logstash as follows:

/usr/bin/java -jar /usr/share/logstash/bin/logstash-1.3.3-flatjar.jar agent -f /etc/logstash.d/elasticsearch/
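For reference, this is the transport variant I'd expect to work (a sketch, assuming the cluster option has to match cluster.name from elasticsearch.yml and that 9300 is the default transport port):

output {
        elasticsearch {
                host => "localhost"
                port => 9300
                cluster => "elasticqa"
                protocol => "transport"
        }
}

With cluster and port commented out as above, the client may be falling back to defaults that don't match my node.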

-------------

When I switched the protocol from transport to node:

output {
        elasticsearch {

        }

}


It looks like discovery is failing:


ES:
java.io.IOException: No transport address mapped to [22369]
        at org.elasticsearch.common.transport.TransportAddressSerializers.addressFromStream(TransportAddressSerializers.java:71)
        at org.elasticsearch.cluster.node.DiscoveryNode.readFrom(DiscoveryNode.java:267)
        at org.elasticsearch.cluster.node.DiscoveryNode.readNode(DiscoveryNode.java:257)
        at org.elasticsearch.discovery.zen.ping.multicast.MulticastZenPing$Receiver.run(MulticastZenPing.java:410)
        at java.lang.Thread.run(Thread.java:679)

LS:
{:timestamp=>"2014-03-06T22:22:58.537000+0100", :message=>"Failed to flush outgoing items", :outgoing_count=>4, :exception=>org.elasticsearch.discovery.MasterNotDiscoveredException: waited for [30s], :backtrace=>[
  "org.elasticsearch.action.support.master.TransportMasterNodeOperationAction$3.onTimeout(TransportMasterNodeOperationAction.java:180)",
  "org.elasticsearch.cluster.service.InternalClusterService$NotifyTimeout.run(InternalClusterService.java:483)",
  "java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)",
  "java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)",
  "java.lang.Thread.run(Thread.java:679)"], :level=>:warn}
