[graylog2] Re: mongod process using over 100% CPU slowing down graylog

2016-07-25 Thread Ariel Godinez
Hello Jochen,

I am using WiredTiger and am not seeing any unusual messages in the 
mongod.log file, even when the mongod CPU usage spikes. Below are the top 
five collections in the graylog db; the sizes (in bytes) don't seem out of 
this world (to me at least).

  
[
  {
    "name" : "alarmcallbackhistory",
    "count" : 486,
    "size" : 249533
  },
  {
    "name" : "alerts",
    "count" : 495,
    "size" : 187138
  },
  {
    "name" : "sessions",
    "count" : 40,
    "size" : 31200
  },
  {
    "name" : "inputs",
    "count" : 8,
    "size" : 20421
  },
  {
    "name" : "collector_configurations",
    "count" : 1,
    "size" : 18462
  }
]
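
For reference, a query along these lines in the mongo shell should reproduce 
the list above, and also confirms the storage engine (a sketch, assuming the 
legacy mongo shell on the Graylog host and the default database name graylog):

  # Confirm the active storage engine (expect "wiredTiger" here).
  mongo --quiet --eval 'printjson(db.serverStatus().storageEngine)'

  # List the five largest collections in the graylog db by data size (bytes).
  mongo graylog --quiet --eval '
    var stats = db.getCollectionNames().map(function (name) {
      var s = db.getCollection(name).stats();   // per-collection stats document
      return { name: name, count: s.count, size: s.size };
    });
    stats.sort(function (a, b) { return b.size - a.size; });
    printjson(stats.slice(0, 5));                // top five by size
  '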

Let me know what you think.

Thanks for the help,
Ari


On Monday, July 25, 2016 at 11:20:01 AM UTC-5, Jochen Schalanda wrote:
>
> Hi Ariel,
>
> MongoDB shouldn't need much processing power when being used by Graylog.
>
> Are there any error messages in the logs of your MongoDB nodes? Are there 
> any unusually large collections in the MongoDB database used by Graylog?
>
> Which MongoDB storage engine (MMAPv1, WiredTiger) are you using?
>
>
> Cheers,
> Jochen
>
> On Tuesday, 19 July 2016 21:09:47 UTC+2, Ariel Godinez wrote:
>>
>> Hello,
>>
>> I am running the single node setup below:
>>
>> Graylog 2.0.3
>> MongoDB 3.2.7
>> Elasticsearch 2.3.3 
>> Red Hat Enterprise Linux Server 6.5
>> Java 8 
>> NXlog and Graylog Collector Sidecar for reading from local logs 
>>
>> On average graylog is reading about 50 logs per second. MongoDB is not 
>> being used by any services other than graylog. Yet, occasionally I 
>> notice that the system is hanging and proceed to run *top*, where I 
>> see that the mongod process is consuming well over 100% CPU. I'm wondering 
>> if the load is just too heavy or if there is something wrong with my setup 
>> that is causing mongod to overload. 
>>
>> I am not seeing any warnings or errors in the graylog server logs or in 
>> the mongod.log file when I look after a slowdown has occurred. Any advice 
>> on how to further investigate would be much appreciated. 
>>
>> Thanks,
>> Ari
>>
>>
>>
>



[graylog2] mongod process using over 100% CPU slowing down graylog

2016-07-19 Thread Ariel Godinez
Hello,

I am running the single node setup below:

Graylog 2.0.3
MongoDB 3.2.7
Elasticsearch 2.3.3 
Red Hat Enterprise Linux Server 6.5
Java 8 
NXlog and Graylog Collector Sidecar for reading from local logs 

On average graylog is reading about 50 logs per second. MongoDB is not 
being used by any services other than graylog. Yet, occasionally I 
notice that the system is hanging and proceed to run *top*, where I see 
that the mongod process is consuming well over 100% CPU. I'm wondering if 
the load is just too heavy or if there is something wrong with my setup that 
is causing mongod to overload. 

I am not seeing any warnings or errors in the graylog server logs or in the 
mongod.log file when I look after a slowdown has occurred. Any advice on 
how to further investigate would be much appreciated. 
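
For the next slowdown I plan to capture what mongod is actually doing while 
it spikes, with something like this (a sketch; mongostat ships with MongoDB, 
and the query assumes the legacy mongo shell):

  # Watch throughput, queued readers/writers, and page faults during the spike.
  mongostat 5

  # Dump the operations mongod is currently busy with.
  mongo graylog --quiet --eval 'printjson(db.currentOp({ active: true }))'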

Thanks,
Ari

   



[graylog2] Re: Graylog IO Exception Error

2016-07-11 Thread Ariel Godinez
Increasing the heap size on ES and Graylog respectively fixed the issue. 
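
For anyone who finds this thread later, the settings live roughly here on an 
RPM-based install (a sketch; the paths match the stock packages, but the 
sizes are just examples, not recommendations):

  # Elasticsearch 2.x heap: /etc/sysconfig/elasticsearch
  ES_HEAP_SIZE=4g

  # Graylog 2.x heap: /etc/sysconfig/graylog-server
  # (the stock file carries extra GC flags after these; keep -Xms and -Xmx equal)
  GRAYLOG_SERVER_JAVA_OPTS="-Xms2g -Xmx2g"

  # Restart both services so the new heap sizes take effect (RHEL 6 sysvinit).
  service elasticsearch restart
  service graylog-server restart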

On Friday, July 8, 2016 at 11:07:46 AM UTC-5, Ariel Godinez wrote:
>
> After further investigation I think this was due to elasticsearch and 
> graylog being overloaded. I have increased their heap sizes accordingly and 
> will see how the system performs.
>
> Ariel
>
> On Wednesday, July 6, 2016 at 12:21:11 PM UTC-5, Ariel Godinez wrote:
>>
>> Hello,
>>
>> I've been using graylog for a couple weeks now and started to notice some 
>> unusual behavior today. I am currently running a single node setup.
>>
>> The Issue:
>>
>> Every once in a while I start to notice that graylog is dragging 
>> quite a bit (the loading spinner persists much longer than usual), so I 
>> go check the logs and find the following error message. 
>>
>> ERROR [ServerRuntime$Responder] An I/O error has occurred while writing a 
>> response message entity to the container output stream.
>> org.glassfish.jersey.server.internal.process.MappableException: java.io.IOException: Connection closed
>> [stack trace snipped; reproduced in full in the original message below]

[graylog2] Re: Graylog IO Exception Error

2016-07-08 Thread Ariel Godinez
After further investigation I think this was due to elasticsearch and 
graylog being overloaded. I have increased their heap sizes accordingly and 
will see how the system performs.

Ariel

On Wednesday, July 6, 2016 at 12:21:11 PM UTC-5, Ariel Godinez wrote:
>
> Hello,
>
> I've been using graylog for a couple weeks now and started to notice some 
> unusual behavior today. I am currently running a single node setup.
>
> The Issue:
>
> Every once in a while I start to notice that graylog is dragging quite 
> a bit (the loading spinner persists much longer than usual), so I go 
> check the logs and find the following error message. 
>
> ERROR [ServerRuntime$Responder] An I/O error has occurred while writing a 
> response message entity to the container output stream.
> org.glassfish.jersey.server.internal.process.MappableException: java.io.IOException: Connection closed
> [stack trace snipped; reproduced in full in the original message below]

[graylog2] Graylog IO Exception Error

2016-07-06 Thread Ariel Godinez
Hello,

I've been using graylog for a couple weeks now and started to notice some 
unusual behavior today. I am currently running a single node setup.

The Issue:

Every once in a while I start to notice that graylog is dragging quite 
a bit (the loading spinner persists much longer than usual), so I go 
check the logs and find the following error message. 

ERROR [ServerRuntime$Responder] An I/O error has occurred while writing a response message entity to the container output stream.
org.glassfish.jersey.server.internal.process.MappableException: java.io.IOException: Connection closed
    at org.glassfish.jersey.server.internal.MappableExceptionWrapperInterceptor.aroundWriteTo(MappableExceptionWrapperInterceptor.java:92) ~[graylog.jar:?]
    at org.glassfish.jersey.message.internal.WriterInterceptorExecutor.proceed(WriterInterceptorExecutor.java:162) ~[graylog.jar:?]
    at org.glassfish.jersey.message.internal.MessageBodyFactory.writeTo(MessageBodyFactory.java:1130) ~[graylog.jar:?]
    at org.glassfish.jersey.server.ServerRuntime$Responder.writeResponse(ServerRuntime.java:711) [graylog.jar:?]
    at org.glassfish.jersey.server.ServerRuntime$Responder.processResponse(ServerRuntime.java:444) [graylog.jar:?]
    at org.glassfish.jersey.server.ServerRuntime$Responder.process(ServerRuntime.java:434) [graylog.jar:?]
    at org.glassfish.jersey.server.ServerRuntime$2.run(ServerRuntime.java:329) [graylog.jar:?]
    at org.glassfish.jersey.internal.Errors$1.call(Errors.java:271) [graylog.jar:?]
    at org.glassfish.jersey.internal.Errors$1.call(Errors.java:267) [graylog.jar:?]
    at org.glassfish.jersey.internal.Errors.process(Errors.java:315) [graylog.jar:?]
    at org.glassfish.jersey.internal.Errors.process(Errors.java:297) [graylog.jar:?]
    at org.glassfish.jersey.internal.Errors.process(Errors.java:267) [graylog.jar:?]
    at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:317) [graylog.jar:?]
    at org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:305) [graylog.jar:?]
    at org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:1154) [graylog.jar:?]
    at org.glassfish.jersey.grizzly2.httpserver.GrizzlyHttpContainer.service(GrizzlyHttpContainer.java:384) [graylog.jar:?]
    at org.glassfish.grizzly.http.server.HttpHandler$1.run(HttpHandler.java:224) [graylog.jar:?]
    at com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:176) [graylog.jar:?]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_91]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_91]
    at java.lang.Thread.run(Thread.java:745) [?:1.8.0_91]
Caused by: java.io.IOException: Connection closed
    at org.glassfish.grizzly.asyncqueue.TaskQueue.onClose(TaskQueue.java:317) ~[graylog.jar:?]
    at org.glassfish.grizzly.nio.AbstractNIOAsyncQueueWriter.onClose(AbstractNIOAsyncQueueWriter.java:501) ~[graylog.jar:?]
    at org.glassfish.grizzly.nio.transport.TCPNIOTransport.closeConnection(TCPNIOTransport.java:412) ~[graylog.jar:?]
    at org.glassfish.grizzly.nio.NIOConnection.doClose(NIOConnection.java:604) ~[graylog.jar:?]
    at org.glassfish.grizzly.nio.NIOConnection$5.run(NIOConnection.java:570) ~[graylog.jar:?]
    at org.glassfish.grizzly.nio.DefaultSelectorHandler.execute(DefaultSelectorHandler.java:235) ~[graylog.jar:?]
    at org.glassfish.grizzly.nio.NIOConnection.terminate0(NIOConnection.java:564) ~[graylog.jar:?]
    at org.glassfish.grizzly.nio.transport.TCPNIOConnection.terminate0(TCPNIOConnection.java:291) ~[graylog.jar:?]
    at org.glassfish.grizzly.nio.transport.TCPNIOAsyncQueueWriter.writeCompositeRecord(TCPNIOAsyncQueueWriter.java:197) ~[graylog.jar:?]
    at org.glassfish.grizzly.nio.transport.TCPNIOAsyncQueueWriter.write0(TCPNIOAsyncQueueWriter.java:92) ~[graylog.jar:?]
    at org.glassfish.grizzly.nio.AbstractNIOAsyncQueueWriter.processAsync(AbstractNIOAsyncQueueWriter.java:344) ~[graylog.jar:?]
    at org.glassfish.grizzly.filterchain.DefaultFilterChain.process(DefaultFilterChain.java:107) ~[graylog.jar:?]
    at org.glassfish.grizzly.ProcessorExecutor.execute(ProcessorExecutor.java:77) ~[graylog.jar:?]
    at org.glassfish.grizzly.nio.transport.TCPNIOTransport.fireIOEvent(TCPNIOTransport.java:536) ~[graylog.jar:?]
    at org.glassfish.grizzly.strategies.AbstractIOStrategy.fireIOEvent(AbstractIOStrategy.java:112) ~[graylog.jar:?]
    at org.glassfish.grizzly.strategies.SameThreadIOStrategy.executeIoEvent(SameThreadIOStrategy.java:103) ~[graylog.jar:?]
    at org.glassfish.grizzly.strategies.AbstractIOStrategy.executeIoEvent(AbstractIOStrategy.java:89)

[graylog2] Re: graylog server warning every 5-30 minutes

2016-06-21 Thread Ariel Godinez
Hello Jochen,

Thanks for the response and the paraphrased explanation; it helped me make 
more sense of what was going on. I took another look at my NTP 
configuration and, as it turns out, the system clock wasn't syncing as it 
should have been. I fixed that, and the warnings from graylog stopped. 
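
For reference, the checks amounted to something like this on RHEL 6 with 
ntpd (a sketch; adjust for your distribution):

  ntpstat               # prints "synchronised to NTP server ..." when the clock is syncing
  ntpq -p               # lists peers; a '*' in the first column marks the active sync source
  service ntpd status   # confirm the daemon is actually running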

Thanks again,
Ariel

On Tuesday, June 21, 2016 at 4:59:22 AM UTC-5, Jochen Schalanda wrote:
>
> Hi Ariel,
>
> just for reference, I'll paraphrase the explanation from IRC:
>
> Each Graylog node "registers" itself (node id, URI to the Graylog REST 
>> API, timestamp of the last heartbeat) in MongoDB (see the nodes 
>> collection). The timeout/cleanup interval is quite aggressive (2s, see 
>> stale_master_timeout 
>> <https://github.com/Graylog2/graylog2-server/blob/2.0.3/misc/graylog.conf#L371-L372>),
>>  
>> so if your system clock is off by a minute or so, the information in 
>> MongoDB will be considered stale and the node will try to re-register 
>> itself.
>
>
>
> Cheers,
> Jochen
>
> On Monday, 20 June 2016 18:13:52 UTC+2, Ariel Godinez wrote:
>>
>> Hello,
>>
>> I am getting the following related warnings from the graylog server every 
>> 5 to 30 minutes. 
>>
>> Warning (from the graylog system messages page): *Notification condition 
>> [NO_MASTER] has been fixed.*
>> Warning (from the graylog server logs): *WARN : 
>> org.graylog2.periodical.NodePingThread - Did not find meta info of this 
>> node. Re-registering.*
>>
>> Upon googling these warnings I saw that multiple people were able to get 
>> these warnings to stop after installing NTP and synchronizing their 
>> system(s).
>>
>> I am running a single-node configuration (*is_master = true* in my graylog 
>> server.conf) and have installed and configured NTP. Graylog is 
>> working as expected, but I just wanted to see if anyone had an idea as to 
>> what might be causing these annoying warnings and how I could get them to 
>> stop. 
>>
>> Any input would be much appreciated.
>>
>> System:
>> Oracle Linux Server release 6.5
>> Red Hat Enterprise Linux Server release 6.5 (Santiago)
>>
>> Thanks,
>> Ariel Godinez
>>
>>
>>
>>



[graylog2] graylog server warning every 5-30 minutes

2016-06-20 Thread Ariel Godinez
Hello,

I am getting the following related warnings from the graylog server every 5 
to 30 minutes. 

Warning (from the graylog system messages page): *Notification condition 
[NO_MASTER] has been fixed.*
Warning (from the graylog server logs): *WARN : 
org.graylog2.periodical.NodePingThread - Did not find meta info of this 
node. Re-registering.*

Upon googling these warnings I saw that multiple people were able to get 
these warnings to stop after installing NTP and synchronizing their 
system(s).

I am running a single-node configuration (*is_master = true* in my graylog 
server.conf) and have installed and configured NTP. Graylog is 
working as expected, but I just wanted to see if anyone had an idea as to 
what might be causing these annoying warnings and how I could get them to 
stop. 
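
Googling also turned up that Graylog registers each node, with a heartbeat 
timestamp, in the MongoDB nodes collection, so if it helps I can dump that 
document and compare it against the system clock, along these lines (a 
sketch; the exact field names may differ by version):

  # Show the node registration row(s) Graylog maintains in the graylog db.
  mongo graylog --quiet --eval 'db.nodes.find().forEach(printjson)'

  # Compare the last-heartbeat timestamp above against the system clock (UTC).
  date -u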

Any input would be much appreciated.

System:
Oracle Linux Server release 6.5
Red Hat Enterprise Linux Server release 6.5 (Santiago)

Thanks,
Ariel Godinez


