Hey Martian,

I think I solved the problem: it was 'output_batch_size = 25', so the 
output buffer was always full and spilling to disk, and it never managed to 
write all logs into Elasticsearch. I also raised the 'net.core.rmem_max' 
limit, because on Debian it is only about 200k by default (thanks Arie).
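For anyone hitting the same thing: the limit can be raised persistently via sysctl. The file name and value below are only examples; size the buffer for your own UDP burst rate:

```
# /etc/sysctl.d/99-udp-buffers.conf  (file name is just an example)
net.core.rmem_max = 8388608
```

Apply it without a reboot with `sudo sysctl --system`, or for a one-off test `sudo sysctl -w net.core.rmem_max=8388608`.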

I tried your tool and it works great compared to my netcat solution, which 
is losing around 7-10% of my messages for some reason (invalid JSON?). But 
it has a few downsides for us:

When Graylog is down, the daemon dies:

udpShutdown: Cannot send gelf msg: write udp 10.107.60.45:12202: connection 
refused

and if you restart the daemon, it parses all entries in the log file again 
and re-sends them to Graylog.

Cheers,
Razvan


On Friday, December 5, 2014 5:18:32 PM UTC+1, Martin Schütte wrote:
>
> But you might want to take a look at https://github.com/DECK36/go-log2gelf 
> I use it to forward JSON log files via GELF. Local log files cause some I/O 
> overhead (unless they are on a memory-backed file system), but at the same 
> time serve as a buffer, so the added reliability is often worth it. 
>
> -- 
> Martin 
>
>

-- 
You received this message because you are subscribed to the Google Groups 
"graylog2" group.
