Responses inline, below.

On 11/16/2015 06:23 AM, web user wrote:
Hi Group,

Long-time Go user. I'm trying to set up a logging and monitoring server
and was hoping I could get some advice from members here. I'm currently
only looking at tools written in golang.

1. For file logging you have:

filebeat
heka

2. For system level metrics:

topbeat
scollector http://bosun.org/scollector/ (from Stack Exchange)

3. For network packet captures:

packetbeat

4. For metric aggregation:

http://prometheus.io/
influxdb

5. For text search on indexes:

an app built on bleve.

Questions:

a. For 2 above, which is easier to get working with heka:
topbeat/scollector or some other tool written in golang?

Not sure, really. Either would work. It's not immediately clear, but it looks 
like they're both using HTTP POST requests to push their data, which would work 
w/ Heka's HttpListenInput. On Linux, *some* of the system data you're looking 
to gather can be processed using Heka itself, w/ the FilePollingInput reading 
data directly out of /proc. Heka ships w/ some decoders that know how to parse 
the contents of these files (http://is.gd/r2Fk8j, http://is.gd/vvbFDy, 
http://is.gd/wMR0W3, http://is.gd/XQXzxZ), and some filters that know how to 
process the output from those decoders (http://is.gd/P7tBqc, 
http://is.gd/d9zKPB, http://is.gd/DQGOYU, http://is.gd/A0ymX8). These probably 
don't cover everything you want to know, however, so you might need one of the 
external tools anyway, in which case you might choose to use the other tool for 
consistency.
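To make the /proc approach concrete, here's a sketch of what wiring a FilePollingInput to one of those decoders might look like in a Heka TOML config. Section names are arbitrary, and the interval is just an example, so treat this as a starting point rather than a tested config:

```toml
# Poll /proc/loadavg every 5 seconds and hand each read to the
# load-average decoder that ships with Heka.
[LoadAvgInput]
type = "FilePollingInput"
ticker_interval = 5
file_path = "/proc/loadavg"
decoder = "LoadAvgDecoder"

# Lua sandbox decoder that parses the /proc/loadavg format.
[LoadAvgDecoder]
type = "SandboxDecoder"
filename = "lua_decoders/linux_loadavg.lua"
```

You'd repeat the same pattern (one input, one decoder) for the other /proc files you care about.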

Anyway, getting the data into Heka is probably the easier part. What can be 
trickier is parsing that data to extract the information you need. Since I 
don't know much about the formats that the tools in question generate, I can't 
speak to which would be easier to deal with.

b. For 1 above, if I'm using heka, can I add a token to each log entry
sent to the heka aggregation server which identifies which customer/user
this is going to be coming from?

Yes, you can do arbitrary transformation of any input data, using a variety of 
strategies. The best way to achieve this varies from use case to use case, but 
one likely solution is to use a ScribbleDecoder (or to scribble the data from 
within a SandboxDecoder that is doing other parsing).
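For the ScribbleDecoder case, the config might look something like the following. The field name and token value here are made up for illustration; you'd pick whatever identifies your customer:

```toml
# Statically "scribble" a customer token onto every message
# that passes through this decoder.
[CustomerTokenDecoder]
type = "ScribbleDecoder"

  [CustomerTokenDecoder.message_fields]
  CustomerToken = "acme-customer-42"
```

In practice you'd typically run one Heka instance (or one decoder chain) per customer, each configured with its own token, and chain this with whatever parsing decoder you're already using.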

c. For 4 above, is a push to Prometheus supported?

Heka doesn't yet have any built-in Prometheus support, but there are folks out 
there in the community using Heka w/ Prometheus:

- https://github.com/davidbirdsong/heka-prometheus
- https://github.com/docker-infra/heka-prometheus

I've used neither and can't speak to how well they work, or will fit your use 
cases.

As a final note, it should be mentioned that Heka is not a pure Go project. 
While most of it is in Go, a lot of what makes Heka powerful is the way it 
makes use of the Lua sandbox. The Lua sandbox itself is written in C 
(https://github.com/mozilla-services/lua_sandbox), and the use of said sandbox, 
which is the recommended strategy for tackling many of the problems for which 
Heka is intended, of course involves using Lua. The sandbox is the core of the 
greater Heka ecosystem, and there are other wrappers around the sandbox, such 
as Hindsight (https://github.com/trink/hindsight), which is written in C.
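For a feel of what that Lua involves: a sandbox filter is just a Lua file that implements a couple of well-known entry points. A minimal (untested) sketch, counting messages per Logger and emitting the tallies on each timer tick:

```lua
-- Minimal Heka sandbox filter sketch. process_message and
-- timer_event are the entry points Heka calls; read_message and
-- inject_payload are part of the sandbox API.
local counts = {}

function process_message()
    local logger = read_message("Logger") or "unknown"
    counts[logger] = (counts[logger] or 0) + 1
    return 0  -- 0 signals success
end

function timer_event(ns)
    local out = {}
    for k, v in pairs(counts) do
        out[#out + 1] = string.format("%s\t%d", k, v)
    end
    inject_payload("txt", "logger_counts", table.concat(out, "\n"))
end
```

The point being: even with Heka's Go core, this is the layer where most of your custom logic ends up living.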

Which is all to say that if you're strongly attached to a pure Go 
infrastructure, then Heka might not be the choice for you.

-r


_______________________________________________
Heka mailing list
[email protected]
https://mail.mozilla.org/listinfo/heka
