Thanks. Everything is coming in as GELF regardless of its actual
source. I am going to have a crack at configuring multiple Grok patterns
on the one input and we'll see what happens. :)
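For anyone following along, here is a rough sketch of the sort of Grok patterns (from the standard Grok pattern library) that could be attached as separate extractors on that one input. This is illustrative only, not tested against this particular setup:

```text
# syslog-style lines (e.g. /var/log/syslog, auth.log)
%{SYSLOGBASE} %{GREEDYDATA:syslog_message}

# nginx/Apache access logs in combined log format
%{COMBINEDAPACHELOG}
```

Each extractor would then be constrained to only run on matching messages (e.g. via a condition on source_file), rather than every pattern running against every message.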
On Friday, October 23, 2015 at 2:06:25 AM UTC+13, Arie wrote:
>
> I've had this answer from Kay:
>
> Hi!
>
> Generally speaking:
>
> If your log senders need special treatment (i.e. if you need to set up
> different extractors), then use different inputs.
> If you send GELF directly, you are generally ok with one input.
> Syslog-like inputs often need special extractors, so in those cases
> you have special "plain text" inputs with extractors, Cisco "syslog"
> is like that in many cases. Or ESXi.
>
> Another case is if you want to tag sources in a special way, using a
> "static field". Those are per input, so you would configure different
> inputs. Use-cases could be different applications deployed across many
> servers, where you don't really care about which server actually
> handled the request, or rather at some level you don't care. This
> often includes GELF sent from applications, where you cannot
> differentiate between messages because they all look similar. By using
> different target addresses, you can tag them.
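To make that tagging idea concrete, here is a hypothetical collector config fragment with two outputs pointed at two differently-tagged inputs. The hostname, port numbers, and output names are illustrative; the static fields themselves would be configured on each input in the Graylog UI:

```
outputs {
  gelf-app {
    type = "gelf"
    host = "graylog.example.com"   # illustrative address
    port = 12201                   # input tagged with static field app=web
  }
  gelf-syslog {
    type = "gelf"
    host = "graylog.example.com"
    port = 12202                   # input tagged with static field app=syslog
  }
}
```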
>
> Other than that standard network considerations apply, e.g. load
> balancing or firewalls.
>
> HTH,
> Kay
>
>
>
> Op woensdag 21 oktober 2015 09:27:54 UTC+2 schreef Patrick Brennan:
>>
>> Hi all,
>>
>> We have just stood up a Proof-of-Concept Graylog cluster and we are
>> ingesting log data from around 50 nodes. The Graylog cluster itself is
>> working fine and is stable ingesting at something like 8000 msgs/sec. Now
>> it's time to try to do something useful with that data. And herein lies
>> the crux of my question.
>>
>> At the moment I have a single input configured of type "GELF TCP"
>> listening on port TCP/12201. This is behind a load balancer and all logs
>> are being forwarded with the Graylog Collector. A subset of the collector
>> configuration is below:
>>
>> inputs {
>>   syslog {
>>     type = "file"
>>     path-glob-root = "/var/log"
>>     path-glob-pattern = "{syslog,auth.log,dpkg.log,kern.log}"
>>   }
>>   nginx-logs {
>>     type = "file"
>>     path-glob-root = "/var/log/nginx"
>>     path-glob-pattern = "*log"
>>   }
>>   app-logs {
>>     type = "file"
>>     path = "/var/log/application.json"
>>   }
>> }
>>
>> outputs {
>>   gelf-tcp {
>>     type = "gelf"
>>     host = "server"
>>     port = 12201
>>     [ ... SNIP ... ]
>>   }
>> }
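For context on what that output actually puts on the wire, here is a minimal sketch of a GELF 1.1 TCP frame: a JSON payload where any additional fields are underscore-prefixed, terminated by a null byte. The host name, field names, and server address are illustrative assumptions:

```python
import json
import socket

def gelf_frame(host, short_message, **extra):
    """Build a null-terminated GELF 1.1 TCP frame.

    Additional (non-standard) fields must be prefixed with an
    underscore in the GELF payload.
    """
    msg = {"version": "1.1", "host": host, "short_message": short_message}
    for key, value in extra.items():
        msg["_" + key] = value  # e.g. _source_file, _app
    return json.dumps(msg).encode("utf-8") + b"\0"

# Hypothetical usage: send one frame to a Graylog GELF TCP input.
# with socket.create_connection(("graylog.example.com", 12201)) as s:
#     s.sendall(gelf_frame("web01", "GET /index.html 200", app="nginx"))
```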
>>
>>
>> Obviously we are sending logs of many different formats to the same
>> Graylog input. Some are JSON, some are syslog, some are HTTP combined
>> format, and there are many others as well.
>>
>> I am curious what others do in this situation. I imported the nginx
>> content pack and it created an input (on a different port) for nginx access
>> logs and another (again on a different port) for nginx error logs. Is this
>> best practice? It doesn't seem overly desirable to me as it pushes the
>> classification of logs into the collector which I was trying to avoid. The
>> alternative would seem to be to have all extractors running on my single
>> input, but I can't see any easy way to keep this under control. Both from
>> a number of extractors perspective, but also to constrain particular
>> extractors to particular message types (for example based on a regex
>> against source_file).
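The per-message-type gating described above (constraining an extractor via a regex against source_file, as Graylog's "only attempt extraction if field matches regular expression" condition does) can be sketched roughly like this. The extractor names and paths are illustrative assumptions, not part of any real configuration:

```python
import re

# Hypothetical mapping: which extractor(s) should run for a message,
# decided by a regex condition against its source_file field.
EXTRACTOR_CONDITIONS = [
    (re.compile(r"/var/log/nginx/.*log$"), "nginx-combined-grok"),
    (re.compile(r"/var/log/(syslog|auth\.log|kern\.log)$"), "syslog-grok"),
    (re.compile(r"/var/log/application\.json$"), "json-extractor"),
]

def extractors_for(source_file):
    """Return the names of extractors whose condition matches."""
    return [name for rx, name in EXTRACTOR_CONDITIONS if rx.search(source_file)]
```

This keeps all extractors on a single input while ensuring each one only fires for the message types it understands.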
>>
>> I would appreciate anyone else's thoughts or experiences.
>>
>> Thanks!
>> Patrick
>>
>> BTW, having used an ELK-based stack previously, I am really liking
>> Graylog thus far. Kudos to the developers for actually starting out by
>> designing an architecture. :)
>>
>
--
You received this message because you are subscribed to the Google Groups
"Graylog Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email
to [email protected].
To view this discussion on the web visit
https://groups.google.com/d/msgid/graylog2/71c2ce93-cbe3-4ad5-bb3c-968ea6cf0f09%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.