On 16/10/15 09:17, Andrei S wrote:
> So, it seems the issue was caused by a corrupt journal file. It was a
> coincidence that I upgraded to the new version just a day before.

Perhaps a naive question, but shouldn't there be a way to detect corrupt
files and skip them? I've been hit by similar issues in the past: running
out of disk space, too much incoming data for the backend, etc. - tonnes
of ways such problems can occur. To fix them I've always had to go in
and delete the "corrupt" journal files before Graylog (Elasticsearch?)
would start working again. Surely it should be able to self-heal in these
situations? I mean, a corrupt file is useless - so why tolerate it?
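To illustrate the kind of self-healing I mean: if each journal record carried a
checksum, the reader could skip records that fail verification instead of
refusing to start. The record layout below (length + CRC32 + payload) is purely
hypothetical - Graylog's actual journal uses Kafka's segment format, and this
is only a sketch of the skip-on-corruption idea, not its real on-disk layout:

```python
import struct
import zlib

# Hypothetical record layout: 4-byte big-endian length, 4-byte CRC32 of the
# payload, then the payload itself. NOT Graylog's real journal format - just
# a minimal sketch of checksum-and-skip recovery.

def write_record(buf: bytearray, payload: bytes) -> None:
    """Append one length/CRC32/payload record to the buffer."""
    buf.extend(struct.pack(">II", len(payload), zlib.crc32(payload)))
    buf.extend(payload)

def read_valid_records(data: bytes):
    """Yield payloads whose CRC32 checks out; skip corrupt records
    (and stop at a truncated tail) instead of failing the whole file."""
    offset = 0
    while offset + 8 <= len(data):
        length, crc = struct.unpack_from(">II", data, offset)
        start = offset + 8
        end = start + length
        if end > len(data):
            break  # truncated tail, e.g. the disk filled up mid-write
        payload = data[start:end]
        if zlib.crc32(payload) == crc:
            yield payload
        # Corrupt or not, advance past the record rather than aborting.
        offset = end
```

With that scheme a single flipped byte costs you one record, not the whole
journal - which is the "skip them" behaviour I'd hope for.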

-- 
Cheers

Jason Haar
Corporate Information Security Manager, Trimble Navigation Ltd.
Phone: +1 408 481 8171
PGP Fingerprint: 7A2E 0407 C9A6 CAF6 2B9F 8422 C063 5EBB FE1D 66D1
