Hi David,
This is what I've been wondering about:
I have copied elasticsearch 0.90.9 from the virtual machine I was testing
with a month ago, but no luck (it was installed
with the https://github.com/valentinogagliardi/ansible-logstash ansible play).
That machine runs logstash 1.3.2; I will copy it too and check again.
The current elasticsearch.yml is:

cluster.name: "elasticqa"
node.master: true
node.name: "lekNo1"
path.data: /home/elasticsearch/data
path.logs: /home/elasticsearch/logs
path.work: /home/elasticsearch/data/temp

and the logstash output section is:

output {
  elasticsearch {}
}
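Since this elasticsearch output uses the transport (or node) protocol, the ES client embedded in Logstash has to match the server's version. A minimal sketch of that check, where both version strings are assumptions (the server version normally comes from `curl -s localhost:9200`, and 0.90.9 is what logstash 1.3.2 is believed to embed):

```shell
#!/bin/sh
# Sketch: compare the ES server version with the ES client version
# embedded in Logstash. The transport/node protocols fail with errors
# like "Message not fully read" or MasterNotDiscoveredException when
# these disagree.
SERVER_ES="0.90.9"   # assumption; in practice: curl -s localhost:9200
CLIENT_ES="0.90.9"   # assumption; client bundled in logstash-1.3.2
if [ "$SERVER_ES" = "$CLIENT_ES" ]; then
  echo "versions match; the transport protocol should be able to connect"
else
  echo "version mismatch: server=$SERVER_ES client=$CLIENT_ES"
fi
```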
ES LOG
[2014-03-06 23:35:21,215][INFO ][node ] [lekNo1]
stopping ...
[2014-03-06 23:35:21,234][INFO ][node ] [lekNo1] stopped
[2014-03-06 23:35:21,234][INFO ][node ] [lekNo1]
closing ...
[2014-03-06 23:35:21,239][INFO ][node ] [lekNo1] closed
[2014-03-06 23:36:31,040][INFO ][node ] [lekNo1]
version[0.90.9], pid[4186], build[a968646/2013-12-23T10:35:28Z]
[2014-03-06 23:36:31,040][INFO ][node ] [lekNo1]
initializing ...
[2014-03-06 23:36:31,048][INFO ][plugins ] [lekNo1] loaded
[], sites []
[2014-03-06 23:36:33,600][INFO ][node ] [lekNo1]
initialized
[2014-03-06 23:36:33,600][INFO ][node ] [lekNo1]
starting ...
[2014-03-06 23:36:33,730][INFO ][transport ] [lekNo1]
bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address
{inet[/10.13.201.103:9300]}
[2014-03-06 23:36:36,768][INFO ][cluster.service ] [lekNo1]
new_master
[lekNo1][SgnO9OG-RP23lftg5h5E4w][inet[/10.13.201.103:9300]]{master=true},
reason: zen-disco-join (elected_as_master)
[2014-03-06 23:36:36,799][INFO ][discovery ] [lekNo1]
elasticqa/SgnO9OG-RP23lftg5h5E4w
[2014-03-06 23:36:36,836][INFO ][http ] [lekNo1]
bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address
{inet[/10.13.201.103:9200]}
[2014-03-06 23:36:36,837][INFO ][node ] [lekNo1] started
[2014-03-06 23:36:36,857][INFO ][gateway ] [lekNo1]
recovered [0] indices into cluster_state
LS
{:timestamp=>"2014-03-06T23:53:36.961000+0100", :message=>"Failed to flush
outgoing items", :outgoing_count=>1,
:exception=>org.elasticsearch.discovery.MasterNotDiscoveredException:
waited for [30s],
:backtrace=>["org.elasticsearch.action.support.master.TransportMasterNodeOperationAction$3.onTimeout(TransportMasterNodeOperationAction.java:180)",
"org.elasticsearch.cluster.service.InternalClusterService$NotifyTimeout.run(InternalClusterService.java:483)",
"java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)",
"java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)",
"java.lang.Thread.run(Thread.java:679)"], :level=>:warn}
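The MasterNotDiscoveredException after [30s] means the embedded node client never found the master, which with the node protocol usually points at failing multicast discovery (or, again, mismatched versions). One thing worth trying, sketched here as an assumption rather than a confirmed fix, is switching elasticsearch.yml to unicast discovery using the publish address from the log above:

```yaml
# Sketch (assumption): disable multicast and list the master explicitly,
# so the Logstash node client can join over unicast.
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["10.13.201.103:9300"]
```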
On Thursday, 6 March 2014 at 22:51:28 UTC+1, David Pilato wrote:
>
> I think this logstash version is not compatible with elasticsearch 1.0.1.
> You should try with another elasticsearch version I think.
>
> My 2 cents
>
> --
> David ;-)
> Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs
>
>
> On 6 March 2014 at 22:48, sirkubax <[email protected]> wrote:
>
> Hi
>
> I'm trying to run a server with elasticsearch and logstash.
> I configured minimal settings, but I still can't get it running.
>
> In the ES log I see:
> [transport.netty ] [lekNo1] Message not fully read (request) for
> [30] and action [], resetting
>
>
> The elasticsearch.yml contains:
>
> cluster.name: "elasticqa"
> network.host: 0.0.0.0
> node.data: true
> node.master: true
> node.name: "lekNo1"
> path.data: /home/elasticsearch/data
> path.logs: /home/elasticsearch/logs
> path.work: /home/elasticsearch/data/temp
>
>
> root@szl:~# curl -s http://10.13.201.103:9200/_status?pretty=true
> {
> "_shards" : {
> "total" : 0,
> "successful" : 0,
> "failed" : 0
> },
> "indices" : { }
> }
> root@szl:~# curl 'localhost:9200/_nodes/jvm?pretty'
> {
> "cluster_name" : "elasticqa",
> "nodes" : {
> "6yPHl-6ETL-XI0ht9ieFFA" : {
> "name" : "lekNo1",
> "transport_address" : "inet[/10.13.201.103:9300]",
> "host" : "szl",
> "ip" : "10.13.201.103",
> "version" : "1.0.1",
> "build" : "5c03844",
> "http_address" : "inet[/10.13.201.103:9200]",
> "attributes" : {
> "master" : "true"
> },
> "jvm" : {
> "pid" : 2636,
> "version" : "1.6.0_27",
> "vm_name" : "OpenJDK 64-Bit Server VM",
> "vm_version" : "20.0-b12",
> "vm_vendor" : "Sun Microsystems Inc.",
> "start_time" : 1394139699953,
> "mem" : {
> "heap_init_in_bytes" : 268435456,
> "heap_max_in_bytes" : 1071579136,
> "non_heap_init_in_bytes" : 24313856,
> "non_heap_max_in_bytes" : 224395264,
> "direct_max_in_bytes" : 1071579136
> },
> "gc_collectors" : [ "Copy", "ConcurrentMarkSweep" ],
> "memory_pools" : [ "Code Cache", "Eden Space", "Survivor Space",
> "CMS Old Gen", "CMS Perm Gen" ]
> }
> }
> }
> }
>
>
> The logstash config file contains:
>
> output {
>   elasticsearch {
>     host => "localhost"
>     # cluster => "elasticqa"
>     # port => 9300
>     # node_name => "lekNo1"
>     protocol => "transport"
>   }
>
>   # debugging
>   file {
>     path => "/root/test.log"
>   }
> }
>
> I start logstash as follows:
> /usr/bin/java -jar /usr/share/logstash/bin/logstash-1.3.3-flatjar.jar
> agent -f /etc/logstash.d/elasticsearch/
>
> -------------
>
> When I switched the protocol from transport to node
>
> output {
>   elasticsearch {
>   }
> }
>
>
> It looks like discovery is failing:
>
>
> ES
> java.io.IOException: No transport address mapped to [22369]
> at
> org.elasticsearch.common.transport.TransportAddressSerializers.addressFromStream(TransportAddressSerializers.java:71)
> at
> org.elasticsearch.cluster.node.DiscoveryNode.readFrom(DiscoveryNode.java:267)
> at
> org.elasticsearch.cluster.node.DiscoveryNode.readNode(DiscoveryNode.java:257)
> at
> org.elasticsearch.discovery.zen.ping.multicast.MulticastZenPing$Receiver.run(MulticastZenPing.java:410)
> at java.lang.Thread.run(Thread.java:679)
>
> LS
> {:timestamp=>"2014-03-06T22:22:58.537000+0100", :message=>"Failed to flush
> outgoing items", :outgoing_count=>4,
> :exception=>org.elasticsearch.discovery.MasterNotDiscoveredException:
> waited for [30s],
> :backtrace=>["org.elasticsearch.action.support.master.TransportMasterNodeOperationAction$3.onTimeout(TransportMasterNodeOperationAction.java:180)",
> "org.elasticsearch.cluster.service.InternalClusterService$NotifyTimeout.run(InternalClusterService.java:483)",
> "java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)",
> "java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)",
> "java.lang.Thread.run(Thread.java:679)"], :level=>:warn}
>
>
--
You received this message because you are subscribed to the Google Groups
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email
to [email protected].
To view this discussion on the web visit
https://groups.google.com/d/msgid/elasticsearch/67747cab-32cf-439a-af44-e8a351a9bd51%40googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.