ore. You should
> investigate in that direction.
>
> Cheers,
> Jochen
>
> On Thursday, 10 November 2016 09:47:39 UTC+1, Jimmy Chen wrote:
>>
>> Sorry, I forgot to mention that I increased the VM total memory to 8G and
>> the heap to 4G. We are storing 2 messages I bel
On Wednesday, November 9, 2016 at 11:41:01 PM UTC-8, Jochen Schalanda wrote:
>
> Hi Jimmy,
>
> On Wednesday, 9 November 2016 19:41:50 UTC+1, Jimmy Chen wrote:
>>
>> I bumped the memory to 4G for both Xms and Xmx.
>>
>
> Using 4 GiB of heap memory on a system with only 4 GiB of main memory
> leaves nothing for the operating system and other processes.
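(For reference: on Graylog 2.x package installs the heap is usually set via
GRAYLOG_SERVER_JAVA_OPTS, e.g. in /etc/default/graylog-server on Debian-based
systems; the exact path and the remaining default JVM flags depend on the
install method. A sketch of a more conservative setting for a 4 GiB host:

    # /etc/default/graylog-server -- leave RAM for the OS, the page cache,
    # and anything else running on the box
    GRAYLOG_SERVER_JAVA_OPTS="-Xms1g -Xmx1g"

Keeping Xms and Xmx equal avoids heap resizing at runtime.)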
e server logs.
On Wednesday, November 9, 2016 at 5:23:42 AM UTC-8, Jochen Schalanda wrote:
>
> Hi Jimmy,
>
> On Wednesday, 9 November 2016 11:00:13 UTC+1, Jimmy Chen wrote:
>>
>> Graylog is on 2.1.2, which was just updated yesterday. Elasticsearch is
>> 2.3.5.
>
> - Which version of Graylog and Elasticsearch are you running?
> - Which hardware are you using to run these?
> - Are there other error or warning messages in the logs of your
> Graylog and Elasticsearch nodes?
>
>
> Cheers,
> Jochen
>
> On Tuesday, 8 November 2016
I am having some major problems with our production Graylog server
processing incoming logs. We are seeing a large number of errors like the
following. It is not clear to me what the source is, or which message is
causing Graylog to choke.
2016-11-08 12:56:16,728 ERROR:
We have recently started seeing this on our Graylog collector server. I've
searched through threads from others reporting this issue, but none of them
seems to apply. Namely, the NTP service is running on all nodes and synced to
the same local server. I also bumped up the resources for the collector
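(A quick way to double-check that the nodes really are in sync is to query
the NTP daemon on each Graylog/ES node, or use timedatectl on systemd hosts:

    # show peers and the current offset of the local ntpd
    ntpq -p

    # on systemd-based systems
    timedatectl status

Large clock offsets between nodes are the usual suspect in the threads
mentioned above.)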
We currently have a cluster of ES 1.7 nodes and Graylog 1.3 servers, and we
are looking to upgrade all of it to the latest version while retaining all
the data. I have looked at the documentation for upgrading both. Although
the Elasticsearch 2.3 upgrade seems pretty straightforward, it looks
> You can use the Mapper Size plugin (
> https://www.elastic.co/guide/en/elasticsearch/plugins/2.3/mapper-size.html)
> for this.
>
> Cheers,
> Jochen
>
> On Thursday, 2 June 2016 19:11:07 UTC+2, Jimmy Chen wrote:
>>
>> Thanks for the reply. Is there a way to see how big the messages are t
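(A minimal sketch of what that can look like with the Mapper Size plugin
linked above, assuming the plugin is installed on every ES 2.3 node, an index
named graylog_0, and Graylog's default message type; note that _size is only
recorded for documents indexed after it is enabled:

    # enable the _size field on the index's message mapping
    curl -XPUT 'http://localhost:9200/graylog_0/_mapping/message' -d '{
      "_size": { "enabled": true }
    }'

    # find messages whose _source is larger than ~32 KB, biggest first
    curl -XGET 'http://localhost:9200/graylog_0/_search?pretty' -d '{
      "query": { "range": { "_size": { "gt": 32768 } } },
      "sort":  [ { "_size": { "order": "desc" } } ]
    }'

The index name is a placeholder; Graylog writes to whatever index the
deflector currently points at.)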
Currently we have a cluster of Graylog/ES nodes that is strictly taking UDP
GELF log messages as input. We are noticing a high volume of large log
messages being ingested into the data nodes and would like to track down
which of the messages are unusually large. My search for a solution first
user_id)
> 3a) Populate user_id with value from user.id
> 4a) remove old field (user.id)
> 3) Logstash pushes new index data to new ES cluster
>
> I'm sure I've left out something crucial here. Seems to be par for the
> course, but I'm hopeful. :)
>
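(For what it's worth, the field rename in steps 3a/4a can live in the
Logstash filter stage of the copy job itself. A rough sketch, with
placeholder hosts and index names, and with the caveat that how Logstash
resolves a literal dotted key depends on the Logstash version:

    input {
      elasticsearch {
        hosts => ["old-es:9200"]   # ES 1.7 source cluster (placeholder)
        index => "graylog_0"
      }
    }
    filter {
      mutate {
        # ES 2.x rejects field names that contain dots, so rename first
        rename => { "user.id" => "user_id" }
      }
    }
    output {
      elasticsearch {
        hosts => ["new-es:9200"]   # ES 2.3 destination cluster (placeholder)
        index => "graylog_0"
      }
    }
)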
On Wednesday, 1 June 2016 20:33:28 UTC+2, Jimmy Chen wrote:
>>
>> Is there a way to configure the max log message size in Graylog 2.0.1? Our
>> input is limited to UDP GELF only.
>>
>
Did this work for you? I am going to be looking into upgrading our existing
cluster to 2.x too.
On Tuesday, May 31, 2016 at 5:08:05 PM UTC-7, Robert Hough wrote:
>
> Came across this: https://gist.github.com/markwalkom/8a7201e3f6ea4354ae06
>
> third time's the charm? :)
>
>
> On Friday, May
Is there a way to configure the max log message size in Graylog 2.0.1? Our
input is limited to UDP GELF only.
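(As far as I know there is no per-input size limit for GELF UDP in Graylog
2.0, but the GELF spec itself caps a chunked message at 128 chunks. With the
common 1420-byte chunk size and a 12-byte chunk header, that works out to at
most 128 x (1420 - 12) = 180,224 bytes, i.e. roughly 176 KiB of payload per
message.)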
We currently have a Graylog cluster set up for our production environment,
and we are noticing a sudden increase in data storage on our
Elasticsearch servers. I am trying to nail down what is sending large
messages to Graylog/ES. I am wondering if there is a way to list the
messages being
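(A low-effort way to see where the growth is going, assuming direct access
to one of the ES nodes, is the cat indices API, which works on both ES 1.x
and 2.x:

    # per-index document count and on-disk size
    curl -XGET 'http://localhost:9200/_cat/indices?v&h=index,docs.count,store.size'

Comparing the output day over day shows which indices are ballooning; the
Mapper Size approach above can then narrow it down to individual messages.)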