I have attached the output of this dump:

/usr/metron/0.4.1/bin/zk_load_configs.sh -z node1:2181 -m DUMP



On Thu, Nov 9, 2017 at 12:06 AM, zeo...@gmail.com <zeo...@gmail.com> wrote:

> What is the output of:
>
> /usr/metron/0.4.1/bin/zk_load_configs.sh -z node1:2181 -m DUMP
>
> ?
>
> Jon
>
> On Wed, Nov 8, 2017 at 1:49 PM Syed Hammad Tahir <mscs16...@itu.edu.pk>
> wrote:
>
>> This is the script/command I used:
>>
>> sudo cat snort.out | 
>> /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>> --broker-list node1:6667 --topic snort
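>>
>> For reference, what actually landed on the topic can be double-checked
>> with something like this (on older Kafka versions the consumer may need
>> --zookeeper node1:2181 instead of --bootstrap-server):
>>
>> /usr/hdp/current/kafka-broker/bin/kafka-console-consumer.sh \
>>   --bootstrap-server node1:6667 --topic snort --from-beginning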
>>
>> On Wed, Nov 8, 2017 at 11:18 PM, Syed Hammad Tahir <mscs16...@itu.edu.pk>
>> wrote:
>>
>>> sudo cat snort.out | 
>>> /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>> --broker-list node1:6667 --topic snort
>>>
>>> On Wed, Nov 8, 2017 at 11:14 PM, Otto Fowler <ottobackwa...@gmail.com>
>>> wrote:
>>>
>>>> What topic?  What are the parameters you are calling the script with?
>>>>
>>>>
>>>>
>>>> On November 8, 2017 at 13:12:56, Syed Hammad Tahir (
>>>> mscs16...@itu.edu.pk) wrote:
>>>>
>>>> The metron installation I have (single-node vm install) comes with
>>>> sensor stubs. I assume that everything has already been done for those
>>>> stub sensors to push the canned data. I am doing a similar thing,
>>>> directly pushing the preformatted canned data to the kafka topic. I can
>>>> see the logs in the kibana dashboard when I start the stub sensor from
>>>> monit, but when I push the same logs myself, the errors I showed
>>>> earlier pop up.
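>>>>
>>>> As I understand it, the stub is essentially doing the same push on a
>>>> timer. A rough sketch of the idea (not the actual stub script, and the
>>>> snort.out path here is a placeholder):
>>>>
>>>> while true; do
>>>>   head -n 10 /path/to/snort.out | \
>>>>     /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh \
>>>>     --broker-list node1:6667 --topic snort
>>>>   sleep 2
>>>> done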
>>>>
>>>> On Wed, Nov 8, 2017 at 11:08 PM, Casey Stella <ceste...@gmail.com>
>>>> wrote:
>>>>
>>>>> How did you start the snort parser topology and what's the parser
>>>>> config (in zookeeper)?
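>>>>>
>>>>> For reference, the usual way would have been something like (paths
>>>>> assuming the 0.4.1 install):
>>>>>
>>>>> /usr/metron/0.4.1/bin/start_parser_topology.sh -z node1:2181 \
>>>>>   -k node1:6667 -s snort
>>>>>
>>>>> and the parser config currently in zookeeper can be dumped with:
>>>>>
>>>>> /usr/metron/0.4.1/bin/zk_load_configs.sh -z node1:2181 -m DUMP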
>>>>>
>>>>> On Wed, Nov 8, 2017 at 1:06 PM, Syed Hammad Tahir <
>>>>> mscs16...@itu.edu.pk> wrote:
>>>>>
>>>>>> This is what I am doing
>>>>>>
>>>>>> sudo cat snort.out | 
>>>>>> /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>> --broker-list node1:6667 --topic snort
>>>>>>
>>>>>>
>>>>>> On Wed, Nov 8, 2017 at 10:44 PM, Casey Stella <ceste...@gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>>> Are you directly writing to the "indexing" kafka topic from the
>>>>>>> parser or from some other source?  It looks like there are some
>>>>>>> records in kafka that are not JSON.  By the time it gets to the
>>>>>>> indexing kafka topic, it should be a JSON map.  The parser topology
>>>>>>> emits that JSON map, and then the enrichments topology enriches that
>>>>>>> map and emits the enriched map to the indexing topic.
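>>>>>>>
>>>>>>> A quick way to check is to pull a few records straight off the
>>>>>>> indexing topic and eyeball whether they are JSON, e.g. (assuming the
>>>>>>> HDP consumer script):
>>>>>>>
>>>>>>> /usr/hdp/current/kafka-broker/bin/kafka-console-consumer.sh \
>>>>>>>   --bootstrap-server node1:6667 --topic indexing \
>>>>>>>   --from-beginning --max-messages 10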
>>>>>>>
>>>>>>> On Wed, Nov 8, 2017 at 12:21 PM, Syed Hammad Tahir <
>>>>>>> mscs16...@itu.edu.pk> wrote:
>>>>>>>
>>>>>>>> No, I am no longer seeing the parser topology error. Here is the
>>>>>>>> full stack trace:
>>>>>>>>
>>>>>>>> from the hdfsindexingbolt in the indexing topology:
>>>>>>>>
>>>>>>>> [image: Inline image 1]
>>>>>>>>
>>>>>>>> from the indexingbolt in the indexing topology:
>>>>>>>>
>>>>>>>> [image: Inline image 2]
>>>>>>>>
>>>>>>>> On Wed, Nov 8, 2017 at 10:08 PM, Otto Fowler <
>>>>>>>> ottobackwa...@gmail.com> wrote:
>>>>>>>>
>>>>>>>>> What Casey said.  We need the whole stack trace.
>>>>>>>>> Also, are you saying that you are no longer seeing the parser
>>>>>>>>> topology error?
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On November 8, 2017 at 11:39:06, Casey Stella (ceste...@gmail.com)
>>>>>>>>> wrote:
>>>>>>>>>
>>>>>>>>> If you click on the port (6704) there in those errors, what's the
>>>>>>>>> full stacktrace (that starts with the suggestion you file a JIRA)?
>>>>>>>>>
>>>>>>>>> What this means is that an exception is bleeding from the
>>>>>>>>> individual writer into the writer component (it should be handled
>>>>>>>>> in the writer itself).  The fact that it's happening for both HDFS
>>>>>>>>> and ES is telling as well, and I'm very interested in the full
>>>>>>>>> stacktrace there because it'll have the wrapped exception from the
>>>>>>>>> individual writer included.
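>>>>>>>>>
>>>>>>>>> If the Storm UI truncates it, the same trace should also be in the
>>>>>>>>> worker log for that port on the node, something like this (the
>>>>>>>>> exact path varies by Storm version, and <topology-id> is a
>>>>>>>>> placeholder):
>>>>>>>>>
>>>>>>>>> less /var/log/storm/workers-artifacts/<topology-id>/6704/worker.log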
>>>>>>>>>
>>>>>>>>> On Wed, Nov 8, 2017 at 11:24 AM, Syed Hammad Tahir <
>>>>>>>>> mscs16...@itu.edu.pk> wrote:
>>>>>>>>>
>>>>>>>>>> OK, I did what Zeolla said, cat snort.out | kafka producer ....,
>>>>>>>>>> and now the error at the storm parser topology is gone, but I am
>>>>>>>>>> now seeing this at the indexing topology:
>>>>>>>>>>
>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On Wed, Nov 8, 2017 at 8:25 PM, Syed Hammad Tahir <
>>>>>>>>>> mscs16...@itu.edu.pk> wrote:
>>>>>>>>>>
>>>>>>>>>>> This is a single line I am trying to push:
>>>>>>>>>>>
>>>>>>>>>>> 01/11/17-20:49:18.107168 ,1,999158,0,"'snort test alert'",TCP,192.168.66.1,49581,192.168.66.121,22,0A:00:27:00:00:00,08:00:27:E8:B0:7A,0x5A,***AP***,0x1E396BFC,0x56900BB6,,0x1000,64,10,23403,76,77824,,,,
>>>>>>>>>>>
>>>>>>>>>>> On Wed, Nov 8, 2017 at 5:30 PM, zeo...@gmail.com <
>>>>>>>>>>> zeo...@gmail.com> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> I would download the entire snort.out file and run cat
>>>>>>>>>>>> snort.out | kafka-console-producer.sh ... to make sure there
>>>>>>>>>>>> are no copy/paste problems.
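>>>>>>>>>>>>
>>>>>>>>>>>> i.e., something along the lines of:
>>>>>>>>>>>>
>>>>>>>>>>>> wget https://raw.githubusercontent.com/apache/metron/master/metron-deployment/roles/sensor-stubs/files/snort.out
>>>>>>>>>>>> cat snort.out | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh \
>>>>>>>>>>>>   --broker-list node1:6667 --topic snort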
>>>>>>>>>>>>
>>>>>>>>>>>> On Wed, Nov 8, 2017, 06:59 Otto Fowler <ottobackwa...@gmail.com>
>>>>>>>>>>>> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> The snort parser is coded to support dates in this format:
>>>>>>>>>>>>>
>>>>>>>>>>>>> private static String defaultDateFormat = "MM/dd/yy-HH:mm:ss.SSSSSS";
>>>>>>>>>>>>> private transient DateTimeFormatter dateTimeFormatter;
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> If your records are in dd/MM/yy- format, then you may see this
>>>>>>>>>>>>> error, I believe. For example, 01/11/17-20:49:18.107168 would
>>>>>>>>>>>>> parse as January 11 under MM/dd/yy, but presumably means
>>>>>>>>>>>>> November 1 under dd/MM/yy.
>>>>>>>>>>>>> Can you verify the timestamp field’s format?
>>>>>>>>>>>>>
>>>>>>>>>>>>> If this is the case, then you will need to modify the default
>>>>>>>>>>>>> log timestamp format for snort in the short term.
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> On November 8, 2017 at 06:09:11, Otto Fowler (
>>>>>>>>>>>>> ottobackwa...@gmail.com) wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>> Can you post what the value of the ‘timestamp’ field/column is
>>>>>>>>>>>>> for a piece of data that is failing?
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> On November 8, 2017 at 03:55:47, Syed Hammad Tahir (
>>>>>>>>>>>>> mscs16...@itu.edu.pk) wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>> Now I am pretty sure that the issue is the format of the logs
>>>>>>>>>>>>> I am trying to push
>>>>>>>>>>>>>
>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>
>>>>>>>>>>>>> Can someone tell me the location of the snort stub canned data
>>>>>>>>>>>>> file? Maybe I could look at its formatting and try to follow
>>>>>>>>>>>>> the same thing.
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 10:13 PM, Syed Hammad Tahir <
>>>>>>>>>>>>> mscs16...@itu.edu.pk> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> That's how I am pushing my logs to the kafka topic:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> After running this command, I copy-paste a few lines from here:
>>>>>>>>>>>>>> https://raw.githubusercontent.com/apache/metron/master/metron-deployment/roles/sensor-stubs/files/snort.out
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> like this
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> [image: Inline image 2]
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> I am not getting any error here. I can also see these lines
>>>>>>>>>>>>>> coming out via the kafka consumer on the snort topic.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> This is the mechanism I am using to push the logs.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 7:18 PM, Otto Fowler <
>>>>>>>>>>>>>> ottobackwa...@gmail.com> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> What I mean is this:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> I *think* you have tried two things: messages coming from
>>>>>>>>>>>>>>> snort through some setup (getting pushed to kafka), which I
>>>>>>>>>>>>>>> think of as live, and messages you have pushed manually,
>>>>>>>>>>>>>>> where you see this error. So what I am asking is whether you
>>>>>>>>>>>>>>> see the same errors for things that are automatically pushed
>>>>>>>>>>>>>>> to kafka as you do when you push them manually.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On November 7, 2017 at 08:51:41, Syed Hammad Tahir (
>>>>>>>>>>>>>>> mscs16...@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> "Yes, If the messages cannot be parsed then that would be a
>>>>>>>>>>>>>>> problem.  If you see this error with your ‘live’ messages as 
>>>>>>>>>>>>>>> well then that
>>>>>>>>>>>>>>> could be it.
>>>>>>>>>>>>>>> I wonder if the issue is with the date format?"
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> If by 'live' messages you mean the time I push them into the
>>>>>>>>>>>>>>> kafka topic, then no, I don't see any error at that time. If
>>>>>>>>>>>>>>> 'live' means something else here, then please tell me what it
>>>>>>>>>>>>>>> could be.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 5:07 PM, Otto Fowler <
>>>>>>>>>>>>>>> ottobackwa...@gmail.com> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Yes, if the messages cannot be parsed then that would be a
>>>>>>>>>>>>>>>> problem.  If you see this error with your ‘live’ messages as
>>>>>>>>>>>>>>>> well then that could be it.
>>>>>>>>>>>>>>>> I wonder if the issue is with the date format?
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> You need to confirm whether you see these same errors with
>>>>>>>>>>>>>>>> the live data or not.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Remember, the flow is like this
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> snort -> ??? -> Kafka -> Storm Parser Topology -> Kafka ->
>>>>>>>>>>>>>>>> Storm Enrichment Topology -> Kafka -> Storm Indexing
>>>>>>>>>>>>>>>> Topology -> HDFS | ElasticSearch
>>>>>>>>>>>>>>>> then
>>>>>>>>>>>>>>>> Kibana <-> ElasticSearch
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Any point in this chain could fail and result in Kibana not
>>>>>>>>>>>>>>>> seeing things.
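>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> One way to localize it is to consume a few records from each
>>>>>>>>>>>>>>>> topic in turn and see where the data stops looking right,
>>>>>>>>>>>>>>>> e.g. (assuming the default topic names):
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> for t in snort enrichments indexing; do
>>>>>>>>>>>>>>>>   echo "== $t =="
>>>>>>>>>>>>>>>>   /usr/hdp/current/kafka-broker/bin/kafka-console-consumer.sh \
>>>>>>>>>>>>>>>>     --bootstrap-server node1:6667 --topic $t \
>>>>>>>>>>>>>>>>     --from-beginning --max-messages 5
>>>>>>>>>>>>>>>> done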
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> On November 7, 2017 at 01:57:19, Syed Hammad Tahir (
>>>>>>>>>>>>>>>> mscs16...@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Could this be related to why I am unable to see logs in the
>>>>>>>>>>>>>>>> kibana dashboard?
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> I am copying a few lines from here:
>>>>>>>>>>>>>>>> https://raw.githubusercontent.com/apache/metron/master/metron-deployment/roles/sensor-stubs/files/snort.out
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> and then pushing them to the snort kafka topic.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> This is an error I am seeing in the Storm UI parser bolt in
>>>>>>>>>>>>>>>> the snort section:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 11:49 AM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>> mscs16...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> I guess I have hit a dead end. I am not able to get the
>>>>>>>>>>>>>>>>> snort logs into the kibana dashboard. Any help would be
>>>>>>>>>>>>>>>>> appreciated.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 1:24 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>> mscs16...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> I guess this (metron.log) in /var/log/elasticsearch/ is
>>>>>>>>>>>>>>>>>> also relevant
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 11:46 AM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>> mscs16...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> Cluster health by index shows this:
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> Looks like some shard is unassigned, and it is related to
>>>>>>>>>>>>>>>>>>> snort. Could it be the logs I was pushing to the kafka
>>>>>>>>>>>>>>>>>>> topic earlier?
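>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> The _cat API should show exactly which shards are
>>>>>>>>>>>>>>>>>>> unassigned, e.g. (assuming the ES HTTP endpoint is
>>>>>>>>>>>>>>>>>>> node1:9200):
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> curl -s 'node1:9200/_cat/shards?v' | grep -i unassigned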
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 10:47 AM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>> mscs16...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> This is what I see. What should I be looking at here?
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 10:33 AM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>> mscs16...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> Hi, I am back at work. Let's see if I can find
>>>>>>>>>>>>>>>>>>>>> something in the logs.
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> On Sat, Nov 4, 2017 at 6:38 PM, zeo...@gmail.com <
>>>>>>>>>>>>>>>>>>>>> zeo...@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> It looks like your ES cluster has a health of Red, so
>>>>>>>>>>>>>>>>>>>>>> there's your problem.  I would go look in 
>>>>>>>>>>>>>>>>>>>>>> /var/log/elasticsearch/ at some
>>>>>>>>>>>>>>>>>>>>>> logs.
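>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> For example (assuming the ES HTTP endpoint is
>>>>>>>>>>>>>>>>>>>>>> node1:9200, and the log file named after the
>>>>>>>>>>>>>>>>>>>>>> es.clustername):
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> curl -s 'node1:9200/_cluster/health?pretty'
>>>>>>>>>>>>>>>>>>>>>> tail -n 100 /var/log/elasticsearch/metron.log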
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> On Fri, Nov 3, 2017 at 12:19 PM Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>>>> mscs16...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> ---------- Forwarded message ----------
>>>>>>>>>>>>>>>>>>>>>>> From: Syed Hammad Tahir <mscs16...@itu.edu.pk>
>>>>>>>>>>>>>>>>>>>>>>> Date: Fri, Nov 3, 2017 at 5:07 PM
>>>>>>>>>>>>>>>>>>>>>>> Subject: Re: Snort Logs
>>>>>>>>>>>>>>>>>>>>>>> To: Otto Fowler <ottobackwa...@gmail.com>
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> NVM, I have installed elasticsearch-head. Now where
>>>>>>>>>>>>>>>>>>>>>>> do I go in this to find out why I can't see the snort
>>>>>>>>>>>>>>>>>>>>>>> logs, pushed to the snort topic via the kafka
>>>>>>>>>>>>>>>>>>>>>>> producer, in the kibana dashboard?
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> On Fri, Nov 3, 2017 at 5:03 PM, Otto Fowler <
>>>>>>>>>>>>>>>>>>>>>>> ottobackwa...@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> You can install it into the Chrome web browser from
>>>>>>>>>>>>>>>>>>>>>>>> the Chrome Web Store.
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> On November 3, 2017 at 07:47:47, Syed Hammad Tahir (
>>>>>>>>>>>>>>>>>>>>>>>> mscs16...@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> And how do I install elasticsearch-head on the
>>>>>>>>>>>>>>>>>>>>>>>> vagrant VM?
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>> --
>>>>>>>>>>>>
>>>>>>>>>>>> Jon
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>> --
>
> Jon
>
GLOBAL Config: global
{
  "es.clustername" : "metron",
  "es.ip" : "node1:9300",
  "es.date.format" : "yyyy.MM.dd.HH",
  "parser.error.topic" : "indexing",
  "update.hbase.table" : "metron_update",
  "update.hbase.cf" : "t",
  "profiler.client.period.duration" : "15",
  "profiler.client.period.duration.units" : "MINUTES",
  "geo.hdfs.file" : "/apps/metron/geo/default/GeoLite2-City.mmdb.gz"
}
PARSER Config: websphere
{
  "parserClassName":"org.apache.metron.parsers.websphere.GrokWebSphereParser",
  "sensorTopic":"websphere",
  "parserConfig":
  {
    "grokPath":"/patterns/websphere",
    "patternLabel":"WEBSPHERE",
    "timestampField":"timestamp_string",
    "dateFormat":"yyyy MMM dd HH:mm:ss"
  }
}
PARSER Config: jsonMap
{
  "parserClassName":"org.apache.metron.parsers.json.JSONMapParser",
  "sensorTopic":"jsonMap"
}
PARSER Config: squid
{
  "parserClassName": "org.apache.metron.parsers.GrokParser",
  "sensorTopic": "squid",
  "parserConfig": {
    "grokPath": "/patterns/squid",
    "patternLabel": "SQUID_DELIMITED",
    "timestampField": "timestamp"
  },
  "fieldTransformations" : [
    {
      "transformation" : "STELLAR"
    ,"output" : [ "full_hostname", "domain_without_subdomains" ]
    ,"config" : {
      "full_hostname" : "URL_TO_HOST(url)"
      ,"domain_without_subdomains" : "DOMAIN_REMOVE_SUBDOMAINS(full_hostname)"
                }
    }
                           ]
}
PARSER Config: snort
{
  "parserClassName":"org.apache.metron.parsers.snort.BasicSnortParser",
  "sensorTopic":"snort",
  "parserConfig": {}
}
PARSER Config: asa
{
  "parserClassName": "org.apache.metron.parsers.asa.BasicAsaParser",
  "sensorTopic": "asa",
  "parserConfig": {
    "deviceTimeZone": "UTC"
  }
}

PARSER Config: bro
{
  "parserClassName":"org.apache.metron.parsers.bro.BasicBroParser",
  "sensorTopic":"bro",
  "parserConfig": {}
}
PARSER Config: yaf
{
  "parserClassName":"org.apache.metron.parsers.GrokParser",
  "sensorTopic":"yaf",
  "fieldTransformations" : [
                    {
                      "input" : "protocol"
                     ,"transformation": "IP_PROTOCOL"
                    }
                    ],
  "parserConfig":
  {
    "grokPath":"/patterns/yaf",
    "patternLabel":"YAF_DELIMITED",
    "timestampField":"start_time",
    "timeFields": ["start_time", "end_time"],
    "dateFormat":"yyyy-MM-dd HH:mm:ss.S"
  }
}
INDEXING Config: websphere
{
  "hdfs" : {
    "index": "websphere",
    "batchSize": 5,
    "enabled" : true
  },
  "elasticsearch" : {
    "index": "websphere",
    "batchSize": 5,
    "enabled" : true
  },
  "solr" : {
    "index": "websphere",
    "batchSize": 5,
    "enabled" : true
  }
}


INDEXING Config: error
{
  "hdfs" : {
    "index": "error",
    "batchSize": 5,
    "enabled" : true
  },
  "elasticsearch" : {
    "index": "error",
    "batchSize": 5,
    "enabled" : true
  },
  "solr" : {
    "index": "error",
    "batchSize": 5,
    "enabled" : true
  }
}

INDEXING Config: snort
{
  "hdfs" : {
    "index": "snort",
    "batchSize": 1,
    "enabled" : true
  },
  "elasticsearch" : {
    "index": "snort",
    "batchSize": 1,
    "enabled" : true
  },
  "solr" : {
    "index": "snort",
    "batchSize": 1,
    "enabled" : true
  }
}

INDEXING Config: asa
{
  "hdfs" : {
    "index": "asa",
    "batchSize": 5,
    "enabled" : true
  },
  "elasticsearch" : {
    "index": "asa",
    "batchSize": 5,
    "enabled" : true
  },
  "solr" : {
    "index": "asa",
    "batchSize": 5,
    "enabled" : true
  }
}


INDEXING Config: bro
{
  "hdfs" : {
    "index": "bro",
    "batchSize": 5,
    "enabled" : true
  },
  "elasticsearch" : {
    "index": "bro",
    "batchSize": 5,
    "enabled" : true
  },
  "solr" : {
    "index": "bro",
    "batchSize": 5,
    "enabled" : true
  }
}

INDEXING Config: yaf
{
  "hdfs" : {
    "index": "yaf",
    "batchSize": 5,
    "enabled" : true
  },
  "elasticsearch" : {
    "index": "yaf",
    "batchSize": 5,
    "enabled" : true
  },
  "solr" : {
    "index": "yaf",
    "batchSize": 5,
    "enabled" : true
  }
}

ENRICHMENT Config: websphere
{
  "enrichment": {
    "fieldMap": {
      "geo": [
        "ip_src_addr"
      ],
      "host": [
        "ip_src_addr"
      ]
    },
  "fieldToTypeMap": {
      "ip_src_addr": [
        "playful_classification"
      ]
    }
  }
}


ENRICHMENT Config: snort
{
  "enrichment" : {
    "fieldMap":
      {
      "geo": ["ip_dst_addr", "ip_src_addr"],
      "host": ["host"]
    }
  },
  "threatIntel" : {
    "fieldMap":
      {
      "hbaseThreatIntel": ["ip_src_addr", "ip_dst_addr"]
    },
    "fieldToTypeMap":
      {
      "ip_src_addr" : ["malicious_ip"],
      "ip_dst_addr" : ["malicious_ip"]
    },
    "triageConfig" : {
      "riskLevelRules" : [
        {
          "rule" : "not(IN_SUBNET(ip_dst_addr, '192.168.0.0/24'))",
          "score" : 10
        }
      ],
      "aggregator" : "MAX"
    }
  }
}

ENRICHMENT Config: asa
{
    "enrichment" : {
        "fieldMap": {
            "geo": ["ip_dst_addr", "ip_src_addr"]
        }
    }
}


ENRICHMENT Config: bro
{
  "enrichment" : {
    "fieldMap": {
      "geo": ["ip_dst_addr", "ip_src_addr"],
      "host": ["host"]
    }
  },
  "threatIntel": {
    "fieldMap": {
      "hbaseThreatIntel": ["ip_src_addr", "ip_dst_addr"]
    },
    "fieldToTypeMap": {
      "ip_src_addr" : ["malicious_ip"],
      "ip_dst_addr" : ["malicious_ip"]
    }
  }
}


ENRICHMENT Config: yaf
{
  "enrichment" : {
    "fieldMap":
      {
      "geo": ["ip_dst_addr", "ip_src_addr"],
      "host": ["host"]
    }
  },
  "threatIntel": {
    "fieldMap":
      {
      "hbaseThreatIntel": ["ip_src_addr", "ip_dst_addr"]
    },
    "fieldToTypeMap":
      {
      "ip_src_addr" : ["malicious_ip"],
      "ip_dst_addr" : ["malicious_ip"]
    }
  }
}
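
For reference, if any of these configs need changing (e.g. the snort
parser config's date handling), the edited files can be pushed back to
zookeeper with something like:

/usr/metron/0.4.1/bin/zk_load_configs.sh -z node1:2181 -m PUSH -i /usr/metron/0.4.1/config/zookeeper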
