Hello Tom,
For now I cannot reach my Metron setup because of COVID-19, but as far as I
remember there is nothing special here: the login / password fields are
available to configure your credentials in the Elasticsearch definition.
What issue are you facing?
From: Yerex, Tom [mailto:tom.y
Hello,
Here is a piece of configuration:
action(type="omkafka" name="" broker=[list of kafka brokers]
partitions.auto="on" topic="your topic"
confParam=["security.protocol=SASL_PLAINTEXT",
"sasl.mechan
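For reference, a fuller sketch of such an omkafka action, assuming a kerberized cluster (GSSAPI); broker addresses, topic, keytab path, and principal are illustrative:

```
action(type="omkafka"
       broker=["kafka1.example.com:6667", "kafka2.example.com:6667"]
       topic="your_topic"
       partitions.auto="on"
       confParam=["security.protocol=SASL_PLAINTEXT",
                  "sasl.mechanisms=GSSAPI",
                  "sasl.kerberos.service.name=kafka",
                  "sasl.kerberos.keytab=/etc/security/keytabs/rsyslog.keytab",
                  "sasl.kerberos.principal=rsyslog@EXAMPLE.COM"])
```

The `confParam` entries are passed straight through to librdkafka, so any of its `sasl.*` properties can be set this way.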
Thanks Simon, saving as a Hive table is also what I had in mind; it's so easy
to do with Spark.
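A minimal sketch of that conversion, assuming a Spark session with Hive support; the HDFS path and table name are illustrative, not Metron defaults:

```python
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("metron-json-to-parquet")
         .enableHiveSupport()
         .getOrCreate())

# Read the batch-indexed JSON files written by the HDFS indexer
df = spark.read.json("hdfs:///apps/metron/indexing/indexed/your_sensor/")

# Save them as a Parquet-backed Hive table
df.write.mode("overwrite").format("parquet").saveAsTable("metron.your_sensor_parquet")
```

Run as a daily job, this gives the "do it the day after" post-processing conversion discussed in this thread.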
Stéphane
From: Simon Elliston Ball [mailto:si...@simonellistonball.com]
Sent: Monday, July 15, 2019 17:43
To: user@metron.apache.org
Subject: Re: batch indexing in JSON format
Most users will have
Hello all,
Thanks for your useful answers, it all makes sense to me now. So we will
probably go with post-processing file conversion.
Have a good day,
Stéphane
From: Otto Fowler [mailto:ottobackwa...@gmail.com]
Sent: Monday, July 15, 2019 16:19
To: user@metron.apache.org
Subject: Re
Hello all,
I have a question regarding batch indexing. As far as I can see, data is stored
in JSON format in HDFS. However, this uses a lot of storage because of
JSON verbosity, enrichments, etc. Is there any way to use Parquet, for example? I
guess it's possible to do it the day after, I mean you
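As a rough, self-contained illustration of the storage cost being described (the event below is made up; real Metron records repeat field names like this in every document):

```python
import gzip
import json

# A small, hypothetical enriched event
event = {
    "source.type": "bro",
    "ip_src_addr": "192.168.0.1",
    "ip_dst_addr": "10.0.0.1",
    "timestamp": 1563202980000,
    "enrichments.geo.ip_dst_addr.country": "US",
}

# 1000 newline-delimited JSON records, as the HDFS indexer writes them
raw = ("\n".join(json.dumps(event) for _ in range(1000))).encode("utf-8")
packed = gzip.compress(raw)

print(len(raw), len(packed))
# Repeated keys compress well, but a columnar format like Parquet
# avoids storing them per-record in the first place.
```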
On my side I’ve worked with this :
https://docs.hortonworks.com/HDPDocuments/HCP1/HCP-1.9.1/installation.html and
it works. My HDP is 2.6.5, because it seems that HCP isn’t currently ready for
HDP 3
From: Michael Miklavcic [mailto:michael.miklav...@gmail.com]
Sent: Thursday, May 23, 2019 17:
Hello all,
I'm going through the MaaS documentation and I see that the example is based on
the Python / Flask REST service. I was wondering what was used in a production
context by you all. Is Python / Flask a good choice in case of heavy load? Do
some of you use other frameworks, like Scala P
Yes, this is what I'm currently playing with to find the best batch size. Thanks
for pointing out the link
Stéphane
From: Michael Miklavcic [mailto:michael.miklav...@gmail.com]
Sent: Tuesday, May 21, 2019 16:12
To: user@metron.apache.org
Subject: Re: Very low throughput on topologies
Also take a look a
Thanks Nick.
From: Nick Allen [mailto:n...@nickallen.org]
Sent: Tuesday, May 21, 2019 14:15
To: user@metron.apache.org
Subject: Re: Very low throughput on topologies
> In the link you mention below, it is said that in case batchTimeout is not
> set, it will fall down to a fraction of topology.me
Hello Nick,
You are right, it was related to the batchSize and batchTimeout settings, but I
was confused about where they were set; I was tweaking the indexing ones. But
now I've understood these settings a little better and I can see their
effects.
By the way, I still have one question: i
Hello Nick,
I have 4 good physical servers with 32 GB of RAM each, SSD drives,… and no
activity on these servers. Actually, at the beginning of my tests, I didn't
face this kind of issue. It seems to be related to the fact that I've enabled
Kerberos; I'm currently reverting back to no Kerberos.
Hello Simon,
If you talk about this:
https://github.com/apache/metron/tree/master/metron-platform/metron-indexing#sensor-indexing-configuration
My settings are (I’ve tried many changes here):
{
  "hdfs": {
    "batchSize": 10,
    "batchTimeout": 1,
    "enabled": tru
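For reference, a complete version of that sensor indexing block might look like the following; the values shown are illustrative, not tuning recommendations:

```json
{
  "hdfs": {
    "batchSize": 1000,
    "batchTimeout": 5,
    "enabled": true,
    "index": "your_sensor"
  },
  "elasticsearch": {
    "batchSize": 1000,
    "batchTimeout": 5,
    "enabled": true,
    "index": "your_sensor"
  }
}
```

Note that each writer (hdfs, elasticsearch) carries its own batchSize / batchTimeout, which is exactly the confusion described later in this thread.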
Hello Simon,
It is what it looks like, yes, but I've set topology.flush.tuple.freq.millis to
10 with no change. Moreover, even if I send, say, 2 lines, it still takes a
long time for them to be fully processed.
From: Simon Elliston Ball [mailto:si...@simonellistonball.com]
Sent: Thursday, May 16
Hello Michael,
So, using curl and the API, I’ve been able to collect some statistics.
Currently, it is a test platform with nearly no activity. I've set up a basic
parser, with the following topology:
- 6 ackers (I’ve 6 kafka partitions per topic)
- Spout // = 6
- Sp
Hello Nick,
Thanks for your answer. By the way, the problem already happens before
indexing, at the parser level. It takes a long time to go from the sensor topic
to the “enrichments” topic, and again many seconds to go from the “enrichments”
topic to the “indexing” topic.
I’ve tried the recommendations describ
Hello happy metron users,
I've a Metron cluster based on Hortonworks CP, and I've set up Kerberos on
top of it, as you all probably have done since we deal with security :)
It seems that everything is working fine (Kerberos, Ranger,...) but I'm facing
an issue regarding the overall throughput.
Hello Nick,
Just to confirm that it works perfectly ☺
Stéphane
From: Nick Allen [mailto:n...@nickallen.org]
Sent: Thursday, April 25, 2019 14:29
To: user@metron.apache.org
Subject: Re: Various questions around profiler
Try querying for that record in the REPL with PROFILE_GET and then usin
OK, I finally found the problem when pasting the whole error stack into the mail:
Caused by: java.lang.RuntimeException: Unexpected version format: 11.0.3
The first Java in my path was Java 11. When I switched to Java 8, it worked
correctly.
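That failure is consistent with version-string parsing written for the pre-Java-9 numbering scheme. A hypothetical sketch of the difference (the real parser in the failing library may differ):

```python
def java_major(version: str) -> int:
    """Extract the major Java version from a version string.

    Pre-Java-9 strings look like "1.8.0_191" (major is the second
    field); Java 9+ strings look like "11.0.3" (major is the first).
    """
    parts = version.split(".")
    return int(parts[1]) if parts[0] == "1" else int(parts[0])

print(java_major("1.8.0_191"))  # 8
print(java_major("11.0.3"))     # 11
```

Code that hard-codes the old "1.x" assumption chokes on "11.0.3", which is why dropping back to Java 8 works.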
Stéphane
From: Nick Allen [mailto:n...@nickallen.org]
Sen
Hello,
Actually, I want to keep only the _source part. The full story is that these
data are a dump from another Elasticsearch cluster. After reading this:
https://metron.apache.org/current-book/metron-platform/metron-parsers/ParserChaining.html,
I thought I could do the same with JSON. In this
Hello,
I'm trying to load some JSON data which has the following structure (this is a
sample):
{
  "_index": "indexing",
  "_type": "Event",
  "_id": "AWAkTAefYn0uCUpkHmCy",
  "_score": 1,
  "_source": {
    "dst": "127.0.0.1",
    "devTimeEpoch": "151243734",
    "dstPort": "0",
    "srcPor
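One way to strip the Elasticsearch envelope before feeding Metron is a small pre-processing step. A sketch using only the Python standard library, with field names taken from the sample above:

```python
import json

# One line of the Elasticsearch dump (abbreviated sample)
dump_line = json.dumps({
    "_index": "indexing",
    "_type": "Event",
    "_id": "AWAkTAefYn0uCUpkHmCy",
    "_score": 1,
    "_source": {
        "dst": "127.0.0.1",
        "devTimeEpoch": "151243734",
        "dstPort": "0",
    },
})

doc = json.loads(dump_line)
source_only = json.dumps(doc["_source"])  # keep only the original event
print(source_only)
```

The same logic could live in a NiFi processor or a script between the dump and the Kafka topic, so Metron's JSON parser only ever sees the `_source` payload.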
Well, it seems that I have another issue right now:
[Stellar]>>> PROFILE_GET('simple_count','22.0.35.5', PROFILE_FIXED(30,
'MINUTES'))
[!] Unable to parse: PROFILE_GET('simple_count','22.0.35.5', PROFILE_FIXED(30,
'MINUTES')) due to: Unable to access table: profiler
It looks like a permission i
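If it is indeed a permission issue, granting the querying user access to the profiler table from the HBase shell is one thing to try; the user and table names below are illustrative:

```shell
# as an HBase admin, inside `hbase shell`
grant 'metron', 'RWX', 'profiler'
```

On a Ranger-managed cluster the equivalent policy would be added through Ranger instead of the shell.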
I realize that I’ve missed a part of the story regarding shards. A good size
for a shard is around 40-50 GB. So, if your index grows to 200 or 300 GB, you
of course need to increase the number of shards to come back to around this size.
This is also why I’d suggest having .MM.dd in the “Elas
Hello all,
As we heavily use Elasticsearch in our company, with some support from the
Elastic company, I'd like to share with you some thoughts about indexes and
templates. Here is the starting template I use:
{
  "": {
    "template": "_index_*",
    "settings": {
      "index": {
        "number_of_shards": "1",
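A fuller sketch of such a template, using the ES 5.x `_template` syntax; the template name, index pattern, and values are illustrative:

```json
{
  "my_template": {
    "template": "my_index_*",
    "settings": {
      "index": {
        "number_of_shards": "1",
        "number_of_replicas": "1",
        "refresh_interval": "30s"
      }
    }
  }
}
```

With daily indices the shard count in the template can then be raised as the per-day volume grows, per the 40-50 GB per-shard guideline mentioned in this thread.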
Anil,
Do you have any examples you can share about the use of profiler jars in your
Java code?
Thanks,
Stéphane
From: Anil Donthireddy [mailto:anil.donthire...@sstech.us]
Sent: Wednesday, April 24, 2019 19:25
To: DAVY Stephane OBS/CSO
Cc: user@metron.apache.org
Subject: RE: Various questions
Hello Nick,
Thanks for your answer. Well, I don't know what the issue was, but after a
restart the profiling data went to HBase. With the configuration below ("result":
"{'count': count, 'sum_rcvd_bytes': sum_rcvd_bytes}"), I get something like
this in HBase:
value=\x01\x00java.util.HashMa\xF0\x
Hello Anil,
Thanks for your feedback. By the way, I need to dig a little deeper into MaaS
usage to understand what can be done.
Have a nice day
Stéphane
From: Anil Donthireddy [mailto:anil.donthire...@sstech.us]
Sent: Wednesday, April 24, 2019 19:25
To: DAVY Stephane OBS/CSO
Cc: user@metron.a
Hello everybody,
I've been playing with Metron for a few weeks now; it is a really exciting
project and I'd first like to thank all the contributors. I'm currently
investigating the use of the profiler. I've tested it with the basic example
of counting IP addresses as explained in the do
Hello Stefan,
Thanks for your email. Actually, I don't know exactly what was wrong with
Kafka, but I removed it, cleaned up ZooKeeper, reinstalled Kafka and it worked.
Stéphane
From: Stefan Kupstaitis-Dunkler [mailto:stefan@gmail.com]
Sent: Wednesday, April 24, 2019 07:06
To: user@metron.
Hello Anil,
I'm not very familiar with all of this, but if you check the following URL:
https://metron.apache.org/current-book/metron-platform/metron-data-management/index.html,
there is a section called "Loading utilities" which describes the use of TAXII
for loading threat intel data
Stéphane
Thanks Simon,
This solves my issue ☺
From: Simon Elliston Ball [mailto:si...@simonellistonball.com]
Sent: Wednesday, April 10, 2019 09:39
To: user@metron.apache.org
Subject: Re: Question about "parser_invalid"
Timestamps in Metron are always Unix epoch values, to avoid things like timezone issues.
In
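Assuming the timestamp field is epoch milliseconds (as Metron's indexed documents typically use), converting for display is straightforward:

```python
from datetime import datetime, timezone

ts_millis = 1554900000000  # illustrative value from an indexed event
dt = datetime.fromtimestamp(ts_millis / 1000, tz=timezone.utc)
print(dt.isoformat())  # 2019-04-10T12:40:00+00:00
```

Going the other way (string to epoch) at parse time is what Stellar's timestamp functions handle inside Metron itself.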
Hello everybody,
Don't worry, I won't ask you to debug my Grok statement :)
By the way, I'm facing the following situation: I have in my "error_index"
Elastic index some documents with a raw_message field showing that the
original message was parsed (see screenshot) and contains in addition an
Hello,
I haven't sorted out this issue yet, but I think I've narrowed it down. Actually,
after many tests with the Kafka console consumer and basic Python scripts, I
realized that I can only consume messages when I specify the partition number
and not the group.id. This is of course not what Storm tries
Hello Hema,
Unless I'm wrong, this must be set up in MySQL, the database you use for Metron
REST.
From: Hema malini [mailto:nhemamalin...@gmail.com]
Sent: Tuesday, April 09, 2019 09:42
To: user@metron.apache.org
Subject: Re: Snort logs flow issue
Hi Michael,
Sorry, I just noticed the error in met
Well, I realize that the console consumer works with the --zookeeper option,
which is the "old consumer", while it doesn't work when I specify
--bootstrap-server, which is the "new consumer" way. So, it looks like a Kafka
issue…
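For reference, the two invocations being compared; the broker and ZooKeeper addresses are illustrative, and on a kerberized cluster the new consumer also needs a client security config:

```shell
# old consumer (deprecated): offsets tracked via ZooKeeper
kafka-console-consumer.sh --zookeeper zk1:2181 --topic your_topic

# new consumer: offsets tracked via the Kafka group coordinator
kafka-console-consumer.sh --bootstrap-server broker1:6667 --topic your_topic \
    --consumer.config /etc/kafka/client_security.properties
```

Storm's Kafka spout uses the new consumer API, which is why it fails in the same way as the --bootstrap-server invocation.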
From: DAVY Stephane OBS/CSO
Sent: Monday, April 08, 2019 16:45
To: '
Hello Simon,
I send just one line at a time, and the line has been validated in the Metron
UI. I see no message in the topology logs. I switched to DEBUG mode, and I can
see the following sequence again and again:
2019-04-08 16:35:50.463 o.a.k.c.c.i.AbstractCoordinator
Thread-14-kafkaSpout-exe
Hello Nick,
Thanks for your answer. I went through this post and saw that all my events
should go to Elastic, which is what I want, but it isn't what I get ☹
I have the following basic setup:
- New telemetry with a Grok parser (validated in the UI with a sample) and a
kafka topic
Hello all,
There is one point that isn't clear to me. When sending data into Metron,
are all events indexed and sent to Elastic and/or HDFS, or only the
events that trigger a triage rule?
For now I'm trying to send some FW logs into Metron. I feed a Kafka topic with
NiFi, and I can see tha
Hi Simon,
Thanks for your answer
I faced the request issue during my install, and so I installed the
correct version. The Metron GUI is not working, but actually I realized that even
if it is reported as stopped, the process was still there, in a bad shape. I
killed the process, started it
Hello all,
I installed Metron last week and everything was working correctly. I'm
currently playing with it and trying to understand how it works. After a few hours
spent on the Management GUI, I started to have some disconnections and finally
I'm no longer able to log in. I can see that actuall
Hello,
How many ES data nodes do you have? Given the following setting:
gateway:
  recover_after_data_nodes: 3
you must have at least 3 living data nodes to have a working ES cluster. I
faced this issue last week after my install.
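A fuller sketch of that gateway block; the values are illustrative, but all three settings are real Elasticsearch options:

```yaml
gateway:
  recover_after_data_nodes: 3   # start recovery once 3 data nodes are up
  expected_data_nodes: 4        # start immediately if all 4 have joined
  recover_after_time: 5m        # otherwise wait up to 5 minutes
```

With fewer nodes than `recover_after_data_nodes`, the cluster simply waits, which looks like a dead cluster from the outside.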
Stéphane
From: Meenakshi.S [mailto:meenakshi.subraman...@insp
Hello Mike,
Thanks for your reply. By the way, do you mean that I just have to copy / paste
my Logstash “filter” configuration and it would work?
Stéphane
From: Michael Miklavcic [mailto:michael.miklav...@gmail.com]
Sent: Thursday, March 28, 2019 19:14
To: user@metron.apache.org
Subject: Re: L
Hello all,
I'm new to Metron; my installation was finished this morning, and I must
admit that it looks very exciting. I have a question regarding parsers. When I
add a new telemetry source, the "parser" list is longer than what is
documented. More precisely, there is a "logstash" parser t