OK, so the JSONFromPosition MessageGetStrategy is choking on the CSV… I don’t know if there is a position index or it is 0, but whatever it is, it is getting a CSV message, not JSON.
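For illustration, here is a minimal sketch of why a JSON-oriented message get strategy chokes on such a record: a raw snort CSV line simply cannot be parsed as the JSON map the indexing topology expects. This is not Metron's actual MessageGetStrategy code, only json-simple used directly, and the JSON example with its field names is made up for the comparison.

import org.json.simple.JSONObject;
import org.json.simple.parser.JSONParser;
import org.json.simple.parser.ParseException;

public class JsonVsCsvCheck {
  public static void main(String[] args) {
    // A raw snort record as it appears on the "snort" topic (CSV, not JSON).
    String csv = "01/11/17-20:49:18.107168 ,1,999158,0,\"'snort test alert'\",TCP,"
        + "192.168.66.1,49581,192.168.66.121,22,...";
    // Roughly what enrichment should hand to indexing: a JSON map
    // (field names here are illustrative, not Metron's exact schema).
    String json = "{\"msg\":\"snort test alert\",\"ip_src_addr\":\"192.168.66.1\"}";

    JSONParser parser = new JSONParser();
    for (String value : new String[]{json, csv}) {
      try {
        JSONObject map = (JSONObject) parser.parse(value);
        System.out.println("parsed a JSON map with " + map.size() + " fields");
      } catch (ParseException e) {
        // A CSV record that never went through parsing/enrichment lands here,
        // which is the kind of failure the indexing bolts report.
        System.out.println("not JSON, cannot be indexed: " + e);
      }
    }
  }
}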
On November 13, 2017 at 15:49:57, Otto Fowler ([email protected]) wrote:

I guess I am wrong. But from looking at the output, it looks like it is the error topic stuff that is failing, doesn’t it?

On November 13, 2017 at 15:06:20, [email protected] ([email protected]) wrote:

Isn't sending indexing errors to the indexing topic intentional? I may need to refresh myself on the below conversation, but I recall it coming up on the mailing lists in the past.

https://github.com/apache/metron/blob/master/metron-platform/metron-elasticsearch/src/main/config/elasticsearch.properties#L33
https://lists.apache.org/thread.html/01e4ed416bda8d1057f09f7717809d2802ae1de3035dc42f001d7bbe@%3Cdev.metron.apache.org%3E

Jon

On Mon, Nov 13, 2017 at 2:59 PM Otto Fowler <[email protected]> wrote:

OK. I think you’re sending errors to your indexing topic instead of the error topic. I think you posted your config before, but I don’t remember off the top of my head where the error topic is configured.

If the error topic is the same as the indexing topic, and you ‘have errors’, I think you may see this.

On November 13, 2017 at 14:39:44, Syed Hammad Tahir ([email protected]) wrote:

Here we go. This is what I see when I run the kafka client on the indexing topic.

[image: Inline image 1]

On Tue, Nov 14, 2017 at 12:03 AM, Syed Hammad Tahir <[email protected]> wrote:

OK, I will try it again and report the results.

On Tue, Nov 14, 2017 at 12:00 AM, Otto Fowler <[email protected]> wrote:

You have to be seeing data in the indexing topic; you have errors in the indexing topology that reads from it.

On November 13, 2017 at 13:42:14, Syed Hammad Tahir ([email protected]) wrote:

"So you are saying:
* when you do the kafka client on the enrichment topic things are in json
* when you do the kafka client on the indexing topic they are csv"

1- Yes, the kafka client on the enrichment topic shows JSON.
2- No, I don't see anything in the kafka client on the indexing topic.

On Mon, Nov 13, 2017 at 11:26 PM, Otto Fowler <[email protected]> wrote:

So you are saying:
* when you do the kafka client on the enrichment topic things are in json
* when you do the kafka client on the indexing topic they are csv
???

On November 13, 2017 at 12:28:51, Syed Hammad Tahir ([email protected]) wrote:

From one of your earlier messages, this is what I have figured out so far.

[image: Inline image 1]

The issue is indicated by the red-marked portion of the flow.

On Mon, Nov 13, 2017 at 10:14 PM, Syed Hammad Tahir <[email protected]> wrote:

Which .java file is causing the issue in this hdfsindexbolt? I mean, which one should I look at, because there are so many listed here.

[image: Inline image 1]

On Mon, Nov 13, 2017 at 9:25 PM, Syed Hammad Tahir <[email protected]> wrote:

org.apache.metron.parsers.snort.BasicSnortParser: this one is parsing it correctly, since I am getting the error in the indexing bolt, not in the parser one.

On Mon, Nov 13, 2017 at 9:17 PM, Syed Hammad Tahir <[email protected]> wrote:

Does org.apache.metron.parsers.snort.BasicSnortParser parse the basic message and then convert it to JSON?
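As a concrete way to check what Otto is describing, the sketch below consumes from the indexing topic and reports which records parse as JSON and which do not. It is not part of Metron; it assumes a recent kafka-clients and json-simple on the classpath and reuses the node1:6667 broker address from the producer commands later in the thread.

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.json.simple.parser.JSONParser;
import org.json.simple.parser.ParseException;

public class IndexingTopicCheck {
  public static void main(String[] args) {
    Properties props = new Properties();
    props.put("bootstrap.servers", "node1:6667");   // broker address from the thread
    props.put("group.id", "indexing-topic-check");
    props.put("auto.offset.reset", "earliest");
    props.put("key.deserializer",
        "org.apache.kafka.common.serialization.StringDeserializer");
    props.put("value.deserializer",
        "org.apache.kafka.common.serialization.StringDeserializer");

    JSONParser parser = new JSONParser();
    try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
      consumer.subscribe(Collections.singletonList("indexing"));
      ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(10));
      for (ConsumerRecord<String, String> record : records) {
        try {
          parser.parse(record.value());
          System.out.println("JSON ok  : " + record.value());
        } catch (ParseException e) {
          // Records printed here are the non-JSON (e.g. raw CSV) messages
          // that the indexing bolts complain about.
          System.out.println("not JSON : " + record.value());
        }
      }
    }
  }
}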
On Mon, Nov 13, 2017 at 9:00 PM, Syed Hammad Tahir <[email protected]> wrote:

No, I am not seeing it under the indexing topic as JSON. I can only see JSON objects for the stub sensor logs, not for the ones I pushed via the kafka producer.

On Mon, Nov 13, 2017 at 5:17 PM, [email protected] <[email protected]> wrote:

Please use kafka-console-consumer.sh (same folder as the producer script) and pull from the indexing topic. Are you seeing it in JSON there?

Jon

On Mon, Nov 13, 2017 at 7:03 AM Syed Hammad Tahir <[email protected]> wrote:

Kindly explain the mechanism implemented in metron through which a line such as this

01/11/17-20:49:18.107168 ,1,999158,0,"'snort test alert'",TCP,192.168.66.1,49581,192.168.66.121,22,0A:00:27:00:00:00,08:00:27:E8:B0:7A,0x5A,***AP***,0x1E396BFC,0x56900BB6,,0x1000,64,10,23403,76,77824,,,,

is converted into a JSON object. Maybe what I am missing here is the formatting.

On Mon, Nov 13, 2017 at 3:21 PM, Syed Hammad Tahir <[email protected]> wrote:

Restarted snort; it is still giving me errors for the indexing topologies even though I haven't pushed any data to the snort topic yet. I have not run the kafka-producer command, but it is still giving an error for something.

[image: Inline image 1]

[image: Inline image 2]

On Mon, Nov 13, 2017 at 3:13 PM, Syed Hammad Tahir <[email protected]> wrote:

OK, doing it.

On Mon, Nov 13, 2017 at 3:07 PM, [email protected] <[email protected]> wrote:

Can you restart storm and give it another shot?

Jon

On Mon, Nov 13, 2017, 00:30 Syed Hammad Tahir <[email protected]> wrote:

Hi, this problem still persists, guys.

On Thu, Nov 9, 2017 at 11:13 PM, Syed Hammad Tahir <[email protected]> wrote:

Any solution to these issues, guys?

On Thu, Nov 9, 2017 at 6:01 AM, Syed Hammad Tahir <[email protected]> wrote:

I have attached the output of this dump:

/usr/metron/0.4.1/bin/zk_load_configs.sh -z node1:2181 -m DUMP

On Thu, Nov 9, 2017 at 12:06 AM, [email protected] <[email protected]> wrote:

What is the output of:

/usr/metron/0.4.1/bin/zk_load_configs.sh -z node1:2181 -m DUMP

?
Jon

On Wed, Nov 8, 2017 at 1:49 PM Syed Hammad Tahir <[email protected]> wrote:

This is the script/command I used:

sudo cat snort.out | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh --broker-list node1:6667 --topic snort

On Wed, Nov 8, 2017 at 11:18 PM, Syed Hammad Tahir <[email protected]> wrote:

sudo cat snort.out | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh --broker-list node1:6667 --topic snort

On Wed, Nov 8, 2017 at 11:14 PM, Otto Fowler <[email protected]> wrote:

What topic? What are the parameters you are calling the script with?

On November 8, 2017 at 13:12:56, Syed Hammad Tahir ([email protected]) wrote:

The metron installation I have (single-node VM-based install) comes with sensor stubs. I assume that everything has already been done for those stub sensors to push the canned data. I am doing a similar thing, directly pushing the preformatted canned data to the kafka topic. I can see the logs in the kibana dashboard when I start the stub sensor from monit, but when I push the same logs myself, the errors I have shown earlier pop up.

On Wed, Nov 8, 2017 at 11:08 PM, Casey Stella <[email protected]> wrote:

How did you start the snort parser topology and what's the parser config (in zookeeper)?

On Wed, Nov 8, 2017 at 1:06 PM, Syed Hammad Tahir <[email protected]> wrote:

This is what I am doing:

sudo cat snort.out | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh --broker-list node1:6667 --topic snort

On Wed, Nov 8, 2017 at 10:44 PM, Casey Stella <[email protected]> wrote:

Are you directly writing to the "indexing" kafka topic from the parser or from some other source? It looks like there are some records in kafka that are not JSON. By the time it gets to the indexing kafka topic, it should be a JSON map. The parser topology emits that JSON map, then the enrichments topology enriches that map and emits the enriched map to the indexing topic.
On Wed, Nov 8, 2017 at 12:21 PM, Syed Hammad Tahir <[email protected]> wrote:

No, I am no longer seeing the parsing topology error. Here is the full stack trace,

from hdfsindexingbolt in the indexing topology:

[image: Inline image 1]

from indexingbolt in the indexing topology:

[image: Inline image 2]

On Wed, Nov 8, 2017 at 10:08 PM, Otto Fowler <[email protected]> wrote:

What Casey said. We need the whole stack trace. Also, are you saying that you are no longer seeing the parser topology error?

On November 8, 2017 at 11:39:06, Casey Stella ([email protected]) wrote:

If you click on the port (6704) there in those errors, what's the full stacktrace (that starts with the suggestion you file a JIRA)?

What this means is that an exception is bleeding from the individual writer into the writer component (it should be handled in the writer itself). The fact that it's happening for both HDFS and ES is telling as well, and I'm very interested in the full stacktrace there because it'll have the wrapped exception from the individual writer included.

On Wed, Nov 8, 2017 at 11:24 AM, Syed Hammad Tahir <[email protected]> wrote:

OK, I did what Zeolla said (cat snort.out | kafka producer ....) and now the error at the storm parser topology is gone, but I am now seeing this at the indexing topology:

[image: Inline image 1]

On Wed, Nov 8, 2017 at 8:25 PM, Syed Hammad Tahir <[email protected]> wrote:

This is a single line I am trying to push:

01/11/17-20:49:18.107168 ,1,999158,0,"'snort test alert'",TCP,192.168.66.1,49581,192.168.66.121,22,0A:00:27:00:00:00,08:00:27:E8:B0:7A,0x5A,***AP***,0x1E396BFC,0x56900BB6,,0x1000,64,10,23403,76,77824,,,,

On Wed, Nov 8, 2017 at 5:30 PM, [email protected] <[email protected]> wrote:

I would download the entire snort.out file and run cat snort.out | kafka-console-producer.sh ... to make sure there are no copy-paste problems.

On Wed, Nov 8, 2017, 06:59 Otto Fowler <[email protected]> wrote:

The snort parser is coded to support dates in this format:

private static String defaultDateFormat = "MM/dd/yy-HH:mm:ss.SSSSSS";
private transient DateTimeFormatter dateTimeFormatter;

If your records are in dd/MM/yy- format, then you may see this error, I believe. Can you verify the timestamp field's format?

If this is the case, then you will need to modify the default log timestamp format for snort in the short term.

On November 8, 2017 at 06:09:11, Otto Fowler ([email protected]) wrote:

Can you post what the value of the 'timestamp' field/column is for a piece of data that is failing?

On November 8, 2017 at 03:55:47, Syed Hammad Tahir ([email protected]) wrote:

Now I am pretty sure that the issue is the format of the logs I am trying to push.

[image: Inline image 1]

Can someone tell me the location of the snort stub canned data file? Maybe I could see its formatting and try following the same thing.
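A quick way to sanity-check the timestamp question above: the sample line's first field parses under both a month-first and a day-first pattern, just to different dates, so a format mismatch would only surface as a parse error once the day-of-month exceeds 12. A small sketch, using java.time for illustration (the parser's own DateTimeFormatter may come from a different library):

import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

public class SnortTimestampCheck {
  public static void main(String[] args) {
    // First field of the sample record pushed to the snort topic.
    String timestamp = "01/11/17-20:49:18.107168";

    // Default format from the parser snippet above (MM/dd/yy, month first).
    DateTimeFormatter defaultFormat =
        DateTimeFormatter.ofPattern("MM/dd/yy-HH:mm:ss.SSSSSS");
    // The alternative Otto is asking about (dd/MM/yy, day first).
    DateTimeFormatter dayFirst =
        DateTimeFormatter.ofPattern("dd/MM/yy-HH:mm:ss.SSSSSS");

    // Both patterns accept "01/11/17..." but interpret it differently;
    // a record like "13/11/17-..." would only parse with the day-first pattern.
    System.out.println("month-first: " + LocalDateTime.parse(timestamp, defaultFormat));
    System.out.println("day-first  : " + LocalDateTime.parse(timestamp, dayFirst));
  }
}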
On Tue, Nov 7, 2017 at 10:13 PM, Syed Hammad Tahir <[email protected]> wrote:

That's how I am pushing my logs to the kafka topic:

[image: Inline image 1]

After running this command, I copy-paste a few lines from here:
https://raw.githubusercontent.com/apache/metron/master/metron-deployment/roles/sensor-stubs/files/snort.out

like this:

[image: Inline image 2]

I am not getting any error here. I can also see these lines pushed out via the kafka consumer under the snort topic.

This is the mechanism I am using to push the logs.

On Tue, Nov 7, 2017 at 7:18 PM, Otto Fowler <[email protected]> wrote:

What I mean is this:

I *think* you have tried messages coming from snort through some setup (getting pushed to kafka), which I think of as live. I also think you have manually pushed messages, where you see this error. So what I am asking is whether you see the same errors for things that are automatically pushed to kafka as you do when you manually push them.

On November 7, 2017 at 08:51:41, Syed Hammad Tahir ([email protected]) wrote:

"Yes, if the messages cannot be parsed then that would be a problem. If you see this error with your 'live' messages as well then that could be it. I wonder if the issue is with the date format?"

If by 'live' messages you mean the time when I push them into the kafka topic, then no, I don't see any error at that time. If 'live' means something else here, then please tell me what it could be.
On Tue, Nov 7, 2017 at 5:07 PM, Otto Fowler <[email protected]> wrote:

Yes, if the messages cannot be parsed then that would be a problem. If you see this error with your 'live' messages as well then that could be it. I wonder if the issue is with the date format?

You need to confirm whether or not you see these same errors with the live data.

Remember, the flow is like this:

snort -> ??? -> Kafka -> Storm Parser Topology -> Kafka -> Storm Enrichment Topology -> Kafka -> Storm Indexing Topology -> HDFS | ElasticSearch
then
Kibana <-> ElasticSearch

Any point in this chain could fail and result in Kibana not seeing things.

On November 7, 2017 at 01:57:19, Syed Hammad Tahir ([email protected]) wrote:

Could this be related to why I am unable to see logs in the kibana dashboard?

I am copying a few lines from here
https://raw.githubusercontent.com/apache/metron/master/metron-deployment/roles/sensor-stubs/files/snort.out
and then pushing them to the snort kafka topic.

This is an error I am seeing in the storm UI parser bolt in the snort section:

[image: Inline image 1]

On Tue, Nov 7, 2017 at 11:49 AM, Syed Hammad Tahir <[email protected]> wrote:

I guess I have hit a dead end. I am not able to get the snort logs into the kibana dashboard. Any help will be appreciated.
On Mon, Nov 6, 2017 at 1:24 PM, Syed Hammad Tahir <[email protected]> wrote:

I guess this (metron.log) in /var/log/elasticsearch/ is also relevant:

[image: Inline image 1]

On Mon, Nov 6, 2017 at 11:46 AM, Syed Hammad Tahir <[email protected]> wrote:

Cluster health by index shows this:

[image: Inline image 1]

It looks like some shard is unassigned, and it is related to snort. Could it be the logs I was pushing to the kafka topic earlier?

On Mon, Nov 6, 2017 at 10:47 AM, Syed Hammad Tahir <[email protected]> wrote:

This is what I see here. What should I be looking at?

[image: Inline image 1]

On Mon, Nov 6, 2017 at 10:33 AM, Syed Hammad Tahir <[email protected]> wrote:

Hi, I am back at work. Let's see if I can find something in the logs.

On Sat, Nov 4, 2017 at 6:38 PM, [email protected] <[email protected]> wrote:

It looks like your ES cluster has a health of Red, so there's your problem. I would go look in /var/log/elasticsearch/ at some logs.

Jon
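Along the lines of Jon's suggestion, a quick way to see why the cluster is red and which index or shard is unassigned is to query the Elasticsearch health and cat APIs. This is only a sketch; it assumes Java 11+, that Elasticsearch is reachable on node1 at its default port 9200, and that the installed ES version supports these _cat columns.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class EsHealthCheck {
  public static void main(String[] args) throws Exception {
    HttpClient client = HttpClient.newHttpClient();
    for (String path : new String[]{
        "/_cluster/health?pretty",                            // overall red/yellow/green
        "/_cat/indices?v",                                    // per-index health
        "/_cat/shards?v&h=index,shard,state,unassigned.reason"}) {
      HttpRequest request = HttpRequest.newBuilder()
          .uri(URI.create("http://node1:9200" + path))        // host/port are assumptions
          .GET()
          .build();
      HttpResponse<String> response =
          client.send(request, HttpResponse.BodyHandlers.ofString());
      System.out.println("== " + path + "\n" + response.body());
    }
  }
}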
