Thanks! The new journal directory helped us as well. Do you know how to fix and 
process those old journal files?

On Monday, March 2, 2015 at 6:10:49 PM UTC+2, Ed Totman wrote:
>
> I deleted the journal and re-enabled it, and also changed 
> the index.refresh_interval as recommended by Tristan.
>
> On Monday, March 2, 2015 at 3:05:10 AM UTC-8, Bernd Ahlers wrote:
>>
>> Ed, 
>>
>> if you want to delete all of the journal, stop the server, delete the 
>> journal dir (see "message_journal_dir" setting in graylog.conf) and 
>> start the server again. 
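>>
[Editor's note: in shell terms, Bernd's stop/delete/start sequence might look like the sketch below. The `graylog-ctl` wrapper and the journal path are assumptions for the OVA appliance; check the `message_journal_dir` value in your own graylog.conf before deleting anything.]

```shell
# Stop Graylog first so nothing writes to the journal while it is removed.
# On the OVA appliance this is typically (assumption):
#   sudo graylog-ctl stop

# Assumed path -- substitute your message_journal_dir from graylog.conf:
JOURNAL_DIR="/var/opt/graylog/data/journal"

# Removes all queued-but-unprocessed messages; they are gone for good.
rm -rf "$JOURNAL_DIR"

# Start Graylog again; it recreates an empty journal on startup:
#   sudo graylog-ctl start
```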
>>
>> Bernd 
>>
>> On 26 February 2015 at 16:13, Ed Totman <[email protected]> wrote: 
>> > Thanks for the reply.  How do I clear the journal of old messages 
>> > before I restart it? 
>> > 
>> > On Wednesday, February 25, 2015 at 10:54:42 PM UTC-8, Bernd Ahlers wrote: 
>> >> 
>> >> Ed, 
>> >> 
>> >> as Tristan already said, if you are constantly sending in more 
>> >> messages than Graylog or Elasticsearch can process, you will always 
>> >> fill up your journal. 
>> >> Disabling the journal does not really fix the problem, because then 
>> >> you will simply lose messages. 
>> >> 
>> >> Please check the node details page (System -> Nodes -> click on the 
>> >> node name) and check the disk journal stats.  If you are writing more 
>> >> into the journal than you are reading from it, you have a problem 
>> >> with processing throughput. 
>> >> 
>> >> Regards, 
>> >> Bernd 
>> >> 
>> >> On 26 February 2015 at 00:50, Tristan Rhodes <[email protected]> wrote: 
>> >> > Ed, 
>> >> > 
>> >> > I had this same problem.  However, increasing the journal size will 
>> >> > only help if your rate of messages periodically decreases below 
>> >> > what your system can process.  (For example, you will grow the 
>> >> > journal during peak hours of the day, and drain the journal when 
>> >> > fewer logs are being sent to Graylog.) 
>> >> > 
>> >> > If you are always sending more messages than your Elasticsearch 
>> >> > cluster can ingest, the journal will not help.  I increased my 
>> >> > Elasticsearch indexing performance by changing this setting in 
>> >> > elasticsearch.yml: 
>> >> > 
>> >> > index.refresh_interval: 30s 
>> >> > 
>> >> > You can read more about this setting here: 
>> >> > 
>> >> > http://blog.sematext.com/2013/07/08/elasticsearch-refresh-interval-vs-indexing-performance/
>> >> > http://www.elasticsearch.org/blog/performance-considerations-elasticsearch-indexing/
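>> >> > 
[Editor's note: for reference, the elasticsearch.yml fragment behind Tristan's change is shown below. The 30s value is his choice; Elasticsearch's default is 1s.]

```yaml
# elasticsearch.yml
# How often Elasticsearch makes newly indexed documents visible to search.
# Raising it from the 1s default batches refresh work and speeds up
# indexing, at the cost of search results lagging by up to this interval.
index.refresh_interval: 30s
```

The same setting can usually also be changed per index at runtime through the index settings API, which lets you experiment without restarting the cluster.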
>> >> > 
>> >> > Disclaimer: I am new to Graylog + Elasticsearch and barely know 
>> >> > what I am doing.  :) 
>> >> > 
>> >> > Cheers! 
>> >> > 
>> >> > Tristan 
>> >> > 
>> >> > On Mon, Feb 23, 2015 at 10:41 AM, Ed Totman <[email protected]> wrote: 
>> >> >> 
>> >> >> I deployed the latest appliance from the OVA file.  Graylog2 worked 
>> >> >> fine for several days, but then the journal files grew to 5GB, 
>> >> >> which is the default limit, and search returned no current 
>> >> >> results.  On the System page this error appeared: 
>> >> >> 
>> >> >> Journal utilization is too high (a few seconds ago) 
>> >> >> Journal utilization is too high and may go over the limit soon. 
>> >> >> Please verify that your Elasticsearch cluster is healthy and fast 
>> >> >> enough.  You may also want to review your Graylog journal settings 
>> >> >> and set a higher limit. 
>> >> >> (Node: 43a9cc82-dc5a-4492-936b-418e1bc98f5e, journal utilization: 96.0%) 
>> >> >> 
>> >> >> I increased the journal limit to 10GB but this did not fix the 
>> >> >> problem.  I restarted all services and checked the logs, but could 
>> >> >> not find any obvious problem.  The VM is running on very fast 
>> >> >> storage with lots of CPU and memory.  I set 
>> >> >> "message_journal_enabled = false", which seems to have temporarily 
>> >> >> resolved the problem. 
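>> >> >> 
[Editor's note: the journal limit Ed raised is controlled in graylog.conf. A sketch of the relevant settings is shown below; the option names are from the Graylog 1.x configuration, the values and path are illustrative.]

```
message_journal_enabled = true
message_journal_dir = /var/opt/graylog/data/journal
# Maximum disk space the journal may use before old segments are dropped:
message_journal_max_size = 10gb
```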
>> >> >> 
>> >> >> How do I troubleshoot the journal?  All of the other components are 
>> >> >> working fine. 
>> >> >> 
>> >> >> -- 
>> >> >> You received this message because you are subscribed to the Google 
>> >> >> Groups 
>> >> >> "graylog2" group. 
>> >> >> To unsubscribe from this group and stop receiving emails from it, 
>> send 
>> >> >> an 
>> >> >> email to [email protected]. 
>> >> >> For more options, visit https://groups.google.com/d/optout. 
>> >> > 
>> >> > 
>> >> > 
>> >> > 
>> >> > -- 
>> >> > Tristan Rhodes 
>> >> > 
>> >> > -- 
>> >> > You received this message because you are subscribed to the Google 
>> >> > Groups 
>> >> > "graylog2" group. 
>> >> > To unsubscribe from this group and stop receiving emails from it, 
>> send 
>> >> > an 
>> >> > email to [email protected]. 
>> >> > For more options, visit https://groups.google.com/d/optout. 
>> >> 
>> >> 
>> >> 
>> >> -- 
>> >> Developer 
>> >> 
>> >> Tel.: +49 (0)40 609 452 077 
>> >> Fax.: +49 (0)40 609 452 078 
>> >> 
>> >> TORCH GmbH - A Graylog company 
>> >> Steckelhörn 11 
>> >> 20457 Hamburg 
>> >> Germany 
>> >> 
>> >> Commercial Reg. (Registergericht): Amtsgericht Hamburg, HRB 125175 
>> >> Geschäftsführer: Lennart Koopmann (CEO) 
>> > 
>> > -- 
>> > You received this message because you are subscribed to the Google 
>> Groups 
>> > "graylog2" group. 
>> > To unsubscribe from this group and stop receiving emails from it, send 
>> an 
>> > email to [email protected]. 
>> > For more options, visit https://groups.google.com/d/optout. 
>>
>>
>>
>> -- 
>> Developer 
>>
>> Tel.: +49 (0)40 609 452 077 
>> Fax.: +49 (0)40 609 452 078 
>>
>> TORCH GmbH - A Graylog company 
>> Steckelhörn 11 
>> 20457 Hamburg 
>> Germany 
>>
>> Commercial Reg. (Registergericht): Amtsgericht Hamburg, HRB 125175 
>> Geschäftsführer: Lennart Koopmann (CEO) 
>>
>

-- 
You received this message because you are subscribed to the Google Groups 
"Graylog Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To view this discussion on the web visit 
https://groups.google.com/d/msgid/graylog2/154e0d25-b4ec-4fd5-bbaf-1a218b7db93a%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
