While our system currently isn't that large, I'm trying to determine the 
best way to configure Graylog so that future updates and extensions are 
simple to manage.

Where I'm struggling is with the performance impact of configuring things 
in certain ways.

So, for example, we have data being sourced from several different types of 
logs:

   - IIS logs
   - nginx logs
   - Windows event logs
   - PHP error logs
   - Custom application logs
   - syslog from various devices and servers
   - Tomcat/Java logs

Each of these types has its own requirements in terms of the extractors and 
processing we apply to give us useful fields for searching.
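
For context, most of these end up as Grok or regex extractors on the message 
field. As a rough illustration (the pattern is simplified and the field names 
are only examples), an nginx access log extractor might look something like:

   Extractor type: Grok pattern
   Source field:   message
   Grok pattern:   %{IPORHOST:client_ip} %{USER:ident} %{USER:auth} \[%{HTTPDATE:timestamp}\] "%{WORD:verb} %{NOTSPACE:request} HTTP/%{NUMBER:http_version}" %{NUMBER:response_code} %{NUMBER:bytes}
   Condition:      always attempted (no string/regex condition), so it runs against every message on the input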

The options as I see them are:

   1. Create a small number of inputs that handle all the messages, with a 
   large set of extractors on each input to deal with all the different 
   message types that come through it.
   2. Create an input for each type of message source, with only the 
   extractors needed for that type of message (sketched below).
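
As a concrete sketch of option 2 (the ports and input types are just 
illustrative placeholders):

   Syslog UDP input, port 5140        -> extractors for the devices and servers
   GELF TCP input, port 12201         -> extractors for the custom application logs
   Raw/Plaintext TCP input, port 5555 -> IIS log extractors
   Raw/Plaintext TCP input, port 5556 -> nginx log extractors
   (and so on for the other source types)

Option 1 would instead funnel all of those into one or two inputs, and each 
extractor would then need a condition (string or regex match) so it only runs 
against the message types it actually applies to.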

To me, option 2 seems more sensible in terms of future management and even 
initial setup, but I'm unsure of the impact of having more inputs versus 
fewer inputs with more extractors on each.

I'd appreciate any insight or advice on this (or pointers to documentation 
that I may have missed).

Cheers,
Michael
