If each Input could support an optional ES prefix, it would make
multitenancy on a single Graylog host much easier. This would require each
input to support its own retention strategy and max size. Stream matching
could still be used to grant granular access to users. Having a prefix
allows easier
Given the recommendations here, I think the omnibus scripts should try to
handle the ~30.5 GB heap-size boundary related to ES:
https://www.elastic.co/guide/en/elasticsearch/guide/current/heap-sizing.html#compressed_oops
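For reference, Elasticsearch 1.x packaging typically pins the heap via the
ES_HEAP_SIZE environment variable; a hypothetical override staying under the
compressed-oops boundary might look like the following (whether and where the
omnibus scripts expose this is an assumption):

```
# Hypothetical override -- the omnibus scripts may manage this file themselves.
# Keep the heap at or below ~30.5 GB so the JVM can keep using compressed oops.
ES_HEAP_SIZE=30g
```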
--
You received this message because you are subscribed to the Google Groups
"G
Jochen,
This points me in the right direction. Thanks!
On Sunday, October 25, 2015 at 2:31:26 AM UTC-6, Jochen Schalanda wrote:
>
> Hi Jesse,
>
> there are several possibilities to write plugin-specific data into MongoDB
> (none of which are documented, sorry for that).
>
>- If you can live
In one instance running 1.2.1 we have 3.8 TB of data, which holds roughly 30
days of messages. When I do a simple "*" query across the last 14 days, the ES
query finishes in about 6 seconds. Notice what these 14-day queries returned:
Found *1,111,506,619 messages* in 5,869 ms, searched in 987 indices
+1 - same issue here
--
You received this message because you are subscribed to the Google Groups
"Graylog Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email
to graylog2+unsubscr...@googlegroups.com.
To view this discussion on the web visit
https://gr
Hello all.
Has anyone written, or have advice pertaining to, a Graylog plugin that
writes to the embedded MongoDB? We're trying to keep a metadata catalog -
tracking distinct values, keeping counters for certain things, etc. I'd
like to avoid having to define my own configuration in the plug
Thanks Marius! Exactly what I was looking for.
>
When running graylog-ctl reconfigure, it seems to query etcd to get the
list of known servers. How can I manage this list and remove servers that
no longer exist?
", e.getMessage());
}
}
}
So now I can do queries like:
timestamp_day_week:(Sunday Saturday) OR (timestamp_day_week:(Monday Tuesday
Wednesday Thursday Friday) AND (timestamp_hour:[17 TO 23] OR
timestamp_hour:[0 TO 9]))
Which should find all events occurring outside of M-F 9am-5pm
Not t
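For cross-checking, the same window can be expressed as a plain Java predicate
(a sketch using java.time; note that Lucene's [0 TO 9] range is inclusive, so
the query above also flags the 09:00 hour as "outside" — [0 TO 8] would match
9am-5pm exactly):

```java
import java.time.DayOfWeek;
import java.time.LocalDateTime;

public class OutsideBusinessHours {
    /** True when the timestamp falls outside Mon-Fri 09:00-16:59. */
    static boolean isOutside(LocalDateTime ts) {
        DayOfWeek d = ts.getDayOfWeek();
        if (d == DayOfWeek.SATURDAY || d == DayOfWeek.SUNDAY) {
            return true;
        }
        int hour = ts.getHour();
        return hour < 9 || hour >= 17; // before 9am, or from 5pm onward
    }

    public static void main(String[] args) {
        // Oct 24 2015 was a Saturday, Oct 26 a Monday (see dates in this thread)
        System.out.println(isOutside(LocalDateTime.of(2015, 10, 24, 12, 0)));
        System.out.println(isOutside(LocalDateTime.of(2015, 10, 26, 12, 0)));
    }
}
```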
Hello everyone,
Is there a way to do a search for all records with a timestamp that is
outside normal business hours? I can't seem to do ranges on timestamps,
ignoring the date.
> https://github.com/Graylog2/graylog2-server/issues/1465 to get updates on
> the issue.
>
>
> Cheers,
> Jochen
>
>
> On Monday, 5 October 2015 17:10:43 UTC+2, Jesse Skrivseth wrote:
>>
>> Hey Jochen,
>>
>> Yes, I had to manually trigger the index
> should be calculated and stored (in contrast to Graylog 1.1.x
> and earlier, which most of the time calculated all index ranges).
>
> Cheers,
> Jochen
>
> On Saturday, 3 October 2015 19:11:15 UTC+2, Jesse Skrivseth wrote:
>>
>> A few examples of index recalc performan
server vs 4 smaller
servers.
On Friday, October 2, 2015 at 9:29:37 AM UTC-6, Jesse Skrivseth wrote:
>
> Note that for the 1800 indices we have in this instance, the ~5 second
> delay between each index during range calculation adds up to ~150 minutes
> of additional delay when
We have a 4 node cluster in AWS that looks like this:
1 x m4.2xlarge - runs all Graylog roles, processes incoming messages
3 x m4.xlarge - runs as "backend" roles - ES, graylog-server, etcd, mongo
All nodes have a 2.4TB EBS-backed data volume. We store about 4TB (2.5
billion messages, 1800 ind
Note that for the 1800 indices we have in this instance, the ~5 second
delay between each index during range calculation adds up to ~150 minutes
of additional delay when calculating ranges. My "> 20 minutes" comment
should be more like "> 2.5 hours", plus actual time spent calculating
ranges.
Marcel,
This is brilliant and seems to be the ideal solution to the problem. Thank
you for sharing this!
On Friday, October 2, 2015 at 3:21:49 AM UTC-6, Marcel Manz wrote:
>
> Have a look at:
> https://www.elastic.co/guide/en/elasticsearch/reference/1.6/index-modules-allocation.html
>
> You mi
> Hi Jesse,
>
> do you see any corresponding errors in the logs of the Graylog server node?
>
>
> Cheers,
> Jochen
>
> On Friday, 2 October 2015 01:49:40 UTC+2, Jesse Skrivseth wrote:
>>
>> Since upgrading from 1.1.6 to 1.2.0, the System->Indices page - which
Since upgrading from 1.1.6 to 1.2.0, the System->Indices page - which used
to load in 2-5 seconds - now takes several minutes before nginx times out
with a 504 Gateway Timeout.
In /var/log/graylog/web/current, I see API calls failing to get index info.
There are about 15 of these out of 1800 act
I tried this in a lab environment and ended up with a split-brain cluster.
You've been warned. ;)
On Wednesday, September 30, 2015 at 10:36:37 AM UTC-6, Jesse Skrivseth
wrote:
>
> This may be off-topic in this forum, but I wanted to focus on the
> omnibus-provided configurati
db.org/manual/core/replication-introduction/ for details.
>
>
> Cheers,
> Jochen
>
> On Tuesday, 29 September 2015 23:22:50 UTC+2, Jesse Skrivseth wrote:
>>
>> Hello all!
>>
>> I have three nodes, a master and two backend nodes. On the backend nodes,
>> it s
This may be off-topic in this forum, but I wanted to focus on the
omnibus-provided configuration provided in Graylog. We have an instance
with 1 large node - 6TB storage - and we're now breaking this out into 3 x
2TB smaller nodes. I've joined two of the 2TB nodes to the cluster and ES
has dist
Hello all!
I have three nodes, a master and two backend nodes. On the backend nodes,
it seems that mongo is not replicated, since I don't see the graylog DB on
either node. Doesn't that make the master node a single point of failure? If
each of the three nodes should be able to assume the role of maste
STRING, and not GEO POINT, so I cannot use the map. Very curious to see
> how using an index template can help me ensure the field is processed as
> GEO POINT and not STRING.
>
> On Tuesday, June 23, 2015 at 1:17:04 PM UTC-5, Jesse Skrivseth wrote:
>>
>> Hi Kay! Thanks for
Yesterday the build worked fine, but today I've made no changes and I'm
getting npm issues:
npm WARN package.json graylog-web-interface@1.3.0-SNAPSHOT No repository
field.
npm WARN package.json graylog-web-interface@1.3.0-SNAPSHOT No README data
npm http GET https://registry.npmjs.org/npm/3.3.4
We ran into an issue with the provided AWS images, which were created from
version 1.1.4. The image uses MBR partitioning instead of GPT, which rules
out volumes larger than 2 TB. Scaling out will increase costs much more
than resizing the volume.
Any suggestions on the best way to resiz
Counts
are correct in the UI.
I do think it is a bug that the Graylog UI showed "found XXX messages in
1,234ms searched in YYY indices", when in fact it silently failed to search
all those indices.
On Friday, August 14, 2015 at 11:48:23 AM UTC-6, Jesse Skrivseth wrote:
>
> Obvi
Obviously they should change. ;)
But the problem is that they are all over the place. If I do an all-time
search for something simple, like source:xxx, then do any type of
histogram, every time that histogram refreshes the whole graph changes,
even messages from days/weeks ago, by huge magnitude
Perhaps I'll need drools rules for this, but I want to run a key=value
tokenizer extractor on messages from a source matching a regex. Is this
possible? It seems in the UI the only option is extracting when the field
you are extracting from matches something.
Server is an m4.4xlarge AWS machine. Built from the Graylog-provided AMIs.
I routinely see this:
2015-08-08_18:11:48.95572 WARN [AbstractNioSelector] Unexpected exception
in the selector loop.
2015-08-08_18:11:48.95574 java.lang.OutOfMemoryError: Direct buffer memory
2015-08-08_18:11:48.95575
Perhaps this is well known to those familiar with Chef, Ruby, and the
plethora of tools used in the Graylog pipeline, but every time I do a
`reconfigure` - usually for an upgrade - my graylog.conf and
elasticsearch.yml get pushed back, as expected, into the form Chef wants
them in.
Probably the
landa wrote:
>
> Hi Jesse,
>
> do you see any error messages in the logs of your Graylog node(s)?
>
>
> Cheers,
> Jochen
>
> On Thursday, 6 August 2015 20:02:35 UTC+2, Jesse Skrivseth wrote:
>>
>> Hello all. I upgraded from 1.1.4 to 1.1.6. There were/are a
Hello all. I upgraded from 1.1.4 to 1.1.6. There were/are about 100k
messages in the journal at the time. The upgrade went smoothly, but after
reconfigure, the server started but isn't processing any messages
from /var/opt/graylog/data/journal. Any ideas?
I'm wondering if anyone out there has a good way to approach this; a point
of contact, or a paid support option. We need a feature implemented in
Graylog and realize the feature may not be valuable to everyone. Apparently
we can't wait for the feature to be implemented through traditional means.
Happy Tuesday, Graylog community
From time to time, I find that the web interface cannot contact a Graylog
server. This occurs in both clustered and non-clustered environments. To
simplify things, I'm focusing only on the all-in-one instances for now.
This is a Graylog 1.1.4 instance running i
hat's currently not possible with the query language of
> Graylog/Lucene. Feel free to add this as a feature request in our product
> portal (https://www.graylog.org/product-ideas/).
>
>
> Cheers,
> Jochen
>
> On Monday, 27 July 2015 01:22:11 UTC+2, Jesse Skrivseth wr
Hello all. I'm wondering how to export the list of all distinct values for
a given field. The list of the top 50 from Quick Values won't work. Our
lists will be several hundred entries long.
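One workaround is to ask Elasticsearch directly with a terms aggregation — in
ES 1.x, "size": 0 returns all buckets rather than a top-N (a sketch; the index
pattern and field name are placeholders):

```
curl -XGET 'http://localhost:9200/graylog2_*/_search?search_type=count' -d '
{
  "aggs": {
    "distinct_values": {
      "terms": { "field": "source", "size": 0 }
    }
  }
}'
```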
--
You received this message because you are subscribed to the Google Groups
"graylog2" group.
To unsubscribe
+49 (0)40 609 452 078
>
> TORCH GmbH - A Graylog company
> Steckelhörn 11
> 20457 Hamburg
> Germany
>
> Commercial Reg. (Registergericht): Amtsgericht Hamburg, HRB 125175
> Geschäftsführer: Lennart Koopmann (CEO)
>
> > On 06.07.2015, at 18:02, Jesse Skrivseth > w
I have a stream with one defined regex rule, a simple:
'source' must match regular expression '(1\.2\.3\.4|9\.8\.7\.6)'
kind of thing. There are 6 IP addresses in this particular inclusive list.
I don't think the regex is performing slowly enough to stop the stream
processing, but perhaps th
Hi Kay! Thanks for the detailed response. Using templates is the route we
took and it works great. One shortcoming is that you must know the names of
the fields to define them in the template. If you're coding a plugin that
dynamically adds fields back to the message, and you can't know the name
The Message class has several field types that can be explicitly declared
when adding fields to messages. It seems to support:
Double
Long
String
If I want to attach a field as a custom elastic type such as "geo_point",
how can I declare this custom type? Without a custom type, my current
form
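The route that came up in the earlier geo_point discussion above is an
Elasticsearch index template, which pins the field's mapping regardless of
what type dynamic mapping would have guessed (a sketch for the ES 1.x template
API; template name, index pattern, and field name are placeholders):

```
PUT /_template/graylog-custom-mapping
{
  "template": "graylog2_*",
  "mappings": {
    "message": {
      "properties": {
        "my_geo_field": { "type": "geo_point" }
      }
    }
  }
}
```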
This may be out of place to ask here, but I am writing a MessageFilter
plugin for Graylog and I'd like to know how to change/set the log level
(info, debug, trace, etc.), as you can do with the Graylog server from the
UI. I'm using:
private static final Logger log =
LoggerFactory.getLogger(MyFi
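Worth noting: SLF4J itself deliberately exposes no API for changing levels;
that has to go through whichever logging backend is bound at runtime. As a
self-contained illustration of the idea using java.util.logging (not the
backend Graylog actually binds; the class name is made up):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class LogLevelDemo {
    private static final Logger log = Logger.getLogger(LogLevelDemo.class.getName());

    public static void main(String[] args) {
        log.setLevel(Level.FINE); // FINE is roughly SLF4J's "debug"
        System.out.println(log.isLoggable(Level.FINE));
    }
}
```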
ain.send.b @
jquery-2.1.1.min.js:4
Please forgive my ignorance, but where should I file the
bug? https://github.com/Graylog2/graylog2-web-interface/issues ?
On Thursday, June 18, 2015 at 1:14:04 AM UTC-6, Kay Röpke wrote:
>
>
> On 17 Jun 2015, at 20:26, Jesse Skrivseth > wrote:
>
The UI doesn't show any users in the list, though the users still exist in
Mongo and can log in. Is this a known issue?
, but I
can't think of a reason that would matter.
On Monday, June 1, 2015 at 8:30:40 AM UTC-6, Jesse Skrivseth wrote:
>
> Thanks to everyone for continuing to pursue this odd issue.
>
> Arie - We are using nxlog-ce version 2.9.1347
>
> Kay - I can't seem to rec
Thanks to everyone for continuing to pursue this odd issue.
Arie - We are using nxlog-ce version 2.9.1347
Kay - I can't seem to recreate the problem (yet) in a test environment,
whether 1.0.2 or 1.1.0. There are some (possibly irrelevant) differences
between test and production, but I'll menti
I'm not sure why, but suddenly the extractors are working today without any
further action on my part. There seems to be a very long delay between when
an extractor is configured and when it is in effect, at least in this
environment.
Another thing to note is that the data on this input is TLS
Much appreciated!
On Thursday, May 28, 2015 at 3:25:05 PM UTC-6, Kay Röpke wrote:
>
> I'm not an expert on the OVAs so I would recommend simply setting up a
> test instance to check this. Or you can wait until I get to it in the (my)
> morning ;)
>
I hear the upgrade path is still in the works, but is there a way to
upgrade in-place or at least without data loss?
On Thursday, May 28, 2015 at 3:18:06 PM UTC-6, Kay Röpke wrote:
>
> Many thanks!
>
> I will have a look in the morning.
> In the meantime it would be helpful if you could give 1.1
Many hours later, I'm no closer to a solution. It seems to be completely
unpredictable.
I have a grok extractor named "XTM515_firewall". It looks like this:
%{NOTSPACE:SerialNumber} %{SYSLOGPROG:MessageType}:
msg_id=%{QUOTEDSTRING:MessageId} %{NOTSPACE:Action}
%{NOTSPACE:SourceInterface} %{NOT
Very odd..
On Thursday, May 28, 2015 at 8:37:15 AM UTC-6, Jesse Skrivseth wrote:
>
> Jochen,
>
> After the extractor is created, I expected the fields to be available on
> the message itself. I look at all messages in the last 5 minutes, visually
> find a message that follows th
Jochen,
After the extractor is created, I expected the fields to be available on the
message itself. I look at all messages in the last 5 minutes, visually find a
message that follows this structure, click on it to show the field list, but
none of the supposedly extracted fields show in the fie
So I have a collection of Grok patterns, things like:
...
# Syslog Dates: Month Day HH:MM:SS
SYSLOGTIMESTAMP %{MONTH} +%{MONTHDAY} %{TIME}
PROG (?:[\w._/%-]+)
SYSLOGPROG %{PROG:program}(?:\[%{POSINT:pid}\])?
SYSLOGHOST %{IPORHOST}
SYSLOGFACILITY <%{NONNEGINT:facility}.%{NONNEGINT:priority}>
HTTPD
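To sanity-check one of these outside Graylog, a grok pattern can be expanded
by hand into a plain Java regex — here SYSLOGPROG with %{PROG} and %{POSINT}
substituted in, using named groups in place of grok's field captures (the
sample log fragment is made up):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class SyslogProgCheck {
    // SYSLOGPROG = %{PROG:program}(?:\[%{POSINT:pid}\])?
    // with PROG = (?:[\w._/%-]+) and POSINT = [1-9][0-9]*
    static final Pattern SYSLOGPROG =
            Pattern.compile("(?<program>[\\w._/%-]+)(?:\\[(?<pid>[1-9][0-9]*)\\])?");

    public static void main(String[] args) {
        Matcher m = SYSLOGPROG.matcher("sshd[2812]");
        if (m.matches()) {
            System.out.println(m.group("program") + " pid=" + m.group("pid"));
        }
    }
}
```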
So I know input encryption exists, using TLS, but what about output
encryption? If I want to make a chain of graylog instances to forward and
aggregate through, I will need encryption over the wire. It seems there is
no way to provide a cert for graylog outputs. Hopefully I'm missing
something.
I have 6 VMs: one web interface, one backend (as master), two datanodes, and
two servers. When I try to log in, sometimes it is successful and other
times it rejects valid credentials. I tried making sure the password_secret
is the same on all nodes, just to see if it was a password salt issue. A
sages.
>
> Cheers,
> Jochen
>
> On Tuesday, 14 April 2015 18:04:56 UTC+2, Jesse Skrivseth wrote:
>>
>> Hello world. I have just started working with graylog2. I have it running
>> in Docker and I'm capturing Windows Event Logs as Syslog UDP. It works very
>
Hello world. I have just started working with graylog2. I have it running
in Docker and I'm capturing Windows Event Logs as Syslog UDP. It works very
well so far!
I have a few questions about visibility and scoping. Imagine you want to
capture log data from numerous tenants and you don't want