Bump... Anyone???

On Friday, August 29, 2014 11:28:01 AM UTC-4, Brian Callanan wrote:
>
>
> Hi, I need a little help. I'm using OpenStack Ceilometer and have configured 
> it to push metered data over UDP to a host:port. I installed Logstash and 
> configured it to receive the UDP data from Ceilometer using the msgpack 
> codec.
> This works great! Really! Now I'm trying to stuff the output data into 
> Elasticsearch, and it hits an exception every time data is pushed. 
> Elasticsearch logs the following:
>
> [2014-08-29 11:05:08,646][WARN ][http.netty               ] [Amphibian] 
> Caught exception while handling client http traffic, closing connection 
> [id: 0x7d45e4d7, /127.0.0.1:53745 => /127.0.0.1:9200]
> java.lang.IllegalArgumentException: invalid version format: 
> LOGSTASH-LINUX-CAL-13046-2010L9O160SXTFILI-RJ6DDVLG LINUX-CAL       
> 10.2.3.23
>         at 
> org.elasticsearch.common.netty.handler.codec.http.HttpVersion.<init>(HttpVersion.java:102)
>         at 
> org.elasticsearch.common.netty.handler.codec.http.HttpVersion.valueOf(HttpVersion.java:62)
>         at 
> org.elasticsearch.common.netty.handler.codec.http.HttpRequestDecoder.createMessage(HttpRequestDecoder.java:75)
>         at 
> org.elasticsearch.common.netty.handler.codec.http.HttpMessageDecoder.decode(HttpMessageDecoder.java:189)
>         at 
> org.elasticsearch.common.netty.handler.codec.http.HttpMessageDecoder.decode(HttpMessageDecoder.java:101)
>         at 
> org.elasticsearch.common.netty.handler.codec.replay.ReplayingDecoder.callDecode(ReplayingDecoder.java:500)
>         at 
> org.elasticsearch.common.netty.handler.codec.replay.ReplayingDecoder.messageReceived(ReplayingDecoder.java:435)
>                   ...
>
> *Can anyone shed any light on why the exception is being thrown?*
>
> My Elasticsearch version:
>
> brian.callanan@linux-cal 143 % ./elasticsearch -v
> Version: 1.3.2, Build: dee175d/2014-08-13T14:29:30Z, JVM: 1.7.0_40
>
> My Logstash version:
>
> brian.callanan@linux-cal 159 % logstash -V
> logstash 1.4.2
>
> My Logstash conf:
>
> input {
>   udp {
>     codec => msgpack        # codec (optional), default: "plain"
>     port  => 40001          # number (required)
>     type  => "ceilometer"   # string (optional)
>   }
> }
> output {
>   elasticsearch {
>     host  => "localhost"
>     port  => 9200
>     codec => json
>   }
>   stdout { codec => rubydebug }
> }
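>
> One hunch, and it's only a guess: if I remember right, under JRuby the 
> Logstash 1.4.2 elasticsearch output defaults to protocol => "node", so 
> pointing it at port 9200 would send the binary node/transport handshake to 
> Elasticsearch's HTTP port, which could explain the "invalid version format" 
> from http.netty. The name in the error (LOGSTASH-LINUX-CAL-...) also looks 
> like a Logstash node name, which is what made me suspect this. A variant I'm 
> considering that forces the HTTP client instead (untested sketch, not what I 
> currently run):
>
> output {
>   elasticsearch {
>     host     => "localhost"
>     port     => 9200
>     protocol => "http"   # talk HTTP on 9200 instead of the default node protocol
>   }
>   stdout { codec => rubydebug }
> }
>
> The alternative would presumably be to drop the port override (or point it 
> at 9300) and let the node protocol do its thing. Can anyone confirm whether 
> that's a plausible cause, or am I off base?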
>
> A sample of the data (rubydebug output):
> {
>          "counter_name" => "network.incoming.bytes.rate",
>           "resource_id" => 
> "instance-00000017-bec82aeb-b06a-4569-8b91-fcb6acd491e0-tap06349b1b-2d",
>             "timestamp" => "2014-08-29T13:49:12Z",
>        "counter_volume" => 8285.777777777777,
>               "user_id" => "cbf803c4aeb6415eb492c04ed8debe2c",
>     "message_signature" => 
> "e96ade5e06e1ec903e459f4c8a383413d1058bda0c1f7546dea62800e5f289f8",
>     "resource_metadata" => {
>                  "name" => "tap06349b1b-2d",
>            "parameters" => {},
>                  "fref" => nil,
>           "instance_id" => "bec82aeb-b06a-4569-8b91-fcb6acd491e0",
>         "instance_type" => "3422a1d6-d61c-4577-9d38-47e1b25e8ad3",
>                   "mac" => "fa:16:3e:a5:82:09"
>     },
>                "source" => "openstack",
>          "counter_unit" => "B/s",
>            "project_id" => "e7a434ef0aa549c9824d963029a02454",
>            "message_id" => "4210ce68-2f83-11e4-9f59-f01fafe5cc22",
>          "counter_type" => "gauge",
>              "@version" => "1",
>            "@timestamp" => "2014-08-29T13:49:12.410Z",
>                  "tags" => [],
>                  "type" => "ceilometer",
>                  "host" => "10.2.24.7"
> }
>
>
