I actually set message_cache_off_heap = false and increased my Java heap size 
to 16G.  The box is fairly large, so this should be fine.
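For anyone following along, the two changes I made look roughly like this (the startup variable name is from my setup and may differ in yours):

```shell
# graylog2.conf: keep the message cache on the JVM heap
message_cache_off_heap = false

# startup environment: raise the JVM heap to 16 GB
GRAYLOG2_SERVER_JAVA_OPTS="-Xmx16g"
```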

On Thursday, December 4, 2014 10:42:02 AM UTC-5, Jochen Schalanda wrote:
>
> Hi Chris,
>
> On Wednesday, 3 December 2014 19:06:42 UTC+1, Chris Tresco wrote:
>>
>> I am wondering why this file gets so large and what I can do to keep the 
>> size down.  It being that big, it seems to me it would be a problem with 
>> feeding messages to elasticsearch for indexing but I am not sure how to 
>> troubleshoot.
>>
>
> The spool files are currently only compacted (meaning old messages are 
> physically removed rather than just marked as deleted) when Graylog2 
> starts. There is no "on-line" compaction at the moment.
>
> The output cache should usually be empty if Elasticsearch can index 
> messages fast enough. If you're using a queued input (e.g. an AMQP or 
> Kafka input), you might want to disable the output cache (possible since 
> Graylog2 0.92.0, see 
> https://github.com/Graylog2/graylog2-server/blob/0.92/misc/graylog2.conf#L343-347) 
> to generate proper back-pressure. Otherwise messages will just be written 
> into the output cache, even if the actual backend is not able to keep up 
> with indexing.
>
>
> Cheers,
> Jochen
>
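To illustrate the back-pressure point: with a bounded buffer, a producer blocks as soon as the consumer (here, a stand-in for the Elasticsearch indexer) falls behind, instead of buffering messages without limit the way the output cache does. This is just an illustrative sketch, not Graylog2 code:

```python
import queue
import threading

# Bounded queue: put() blocks once maxsize is reached, so a slow
# consumer automatically throttles the producer (back-pressure).
buffer = queue.Queue(maxsize=100)

def producer(n):
    for i in range(n):
        buffer.put(i)  # blocks when the consumer can't keep up
    buffer.put(None)   # sentinel: no more messages

indexed = []

def consumer():
    while True:
        msg = buffer.get()
        if msg is None:
            break
        indexed.append(msg)  # stand-in for indexing a message

t1 = threading.Thread(target=producer, args=(1000,))
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
print(len(indexed))  # all 1000 messages delivered, none dropped
```

With an unbounded cache the producer never blocks, so a slow backend just makes the spool grow, which matches the large spool file Chris is seeing.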

-- 
You received this message because you are subscribed to the Google Groups 
"graylog2" group.