…set up, especially for "wide" data sources that may have 15-20 fields,
e.g. Active Directory.

More broadly speaking, I want to embrace the streaming data paradigm and
tried to avoid batch jobs. With the DNS example, you might imagine a future
where the enrichment data is streamed based on DHCP registrations, DNS
update events, etc. In principle this could reduce the window of time where
we might enrich a data source with out-of-date data.

    "writerClassName": "org.apache.metron.enrichment.writer.SimpleHbaseEnrichmentWriter",
    "sensorTopic": "dns",
    "parserConfig": {
        "shew.table": "dns",
        "shew.cf": "dns",
        "shew.keyColumns": "name",
        "shew.enrichmentType": "dns",
        "columns": {
            "name": 0,
            "type": 1,
            "data": 2
        }
    }

And... it seems to be working. At least, I have data in HBase which looks
more like the output of the flatfile loader.

Charlie

-----Original Message-----
From: Carolyn Duby [mailto:cd...@hortonworks.com]
Sent: 12 June 2018 20:33
To: dev@metron.apache.org
Subject: Re: Writing enrichment data directly from NiFi with PutHBaseJSON

I like the streaming enrichment solutions but it depends on how you are
getting the data in. If you get the data in a csv file just call the
flatfile loader…
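For readers of the archive: the "columns" map in the config above simply assigns CSV positions to field names, and "shew.keyColumns" picks which field becomes the HBase row key. A toy sketch of that mapping in plain Python (hypothetical helper, not Metron's actual parser code):

```python
# Illustrative only: how a "columns" mapping like the one above carves a
# CSV line into a keyed enrichment record. Not Metron source code.
columns = {"name": 0, "type": 1, "data": 2}
key_columns = ["name"]

def parse_enrichment_row(csv_line: str):
    """Split a CSV line and return (row key, enrichment value)."""
    fields = csv_line.split(",")
    record = {col: fields[idx] for col, idx in columns.items()}
    # The key columns become the row key; everything else is the value.
    key = ":".join(record[k] for k in key_columns)
    value = {k: v for k, v in record.items() if k not in key_columns}
    return key, value

key, value = parse_enrichment_row("example.com,A,10.0.0.1")
# key == "example.com", value == {"type": "A", "data": "10.0.0.1"}
```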
Having it in its own repo doesn't make it any less tied to Metron,
functionality-wise, but it allows a new release with NiFi-only changes to be
produced, or multiple streams of releases across NiFi versions
(1.7.x, 1.8.x).

On June 5, 2018 at 15:14:38, Casey Stella (ceste...@gmail.com) wrote:
Also, the bundle would be part of the metron project I expect, so the NiFi
release shouldn't matter much now that NiFi can version processors
independently.
Simon
On 5 Jun 2018, at 20:14, Casey Stella wrote:
I agree with Simon here, the benefit of providing NiFi tooling is to enable
NiFi to use our infrastructure (e.g. our parsers, MaaS, stellar
enrichments, etc). This would tie it to Metron pretty closely.
On Tue, Jun 5, 2018 at 3:12 PM Otto Fowler wrote:

NiFi releases more often than Metron does; that might be an issue.
On June 5, 2018 at 14:07:22, Simon Elliston Ball (
si...@simonellistonball.com) wrote:

To be honest, I would expect this to be heavily linked to the Metron
releases, since it's going to use other Metron classes and dependencies to
ensure compatibility. For example, a Stellar NiFi processor will be linked
to Metron's stellar-common, and the enrichment loader will depend on key
construction…
Similar to Bro, we may need to release out of cycle.
On June 5, 2018 at 13:17:55, Simon Elliston Ball (
si...@simonellistonball.com) wrote:
Do you mean in the sense of a separate module, or are you suggesting we go
as far as a sub-project?
On 5 June 2018 at 10:08, Otto Fowler wrote:
> If we do that, we should have it as a separate component maybe.
If we do that, we should have it as a separate component maybe.
On June 5, 2018 at 12:42:57, Simon Elliston Ball (
si...@simonellistonball.com) wrote:
@otto, well, of course we would use the record api... it's great.
@casey, I have actually written a stellar processor, which applies Stellar
to all FlowFile attributes, outputting the resulting Stellar variable space
to either attributes or as JSON in the content.
Is it worth us creating a nifi-m…
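For anyone skimming the archive, the behaviour Simon describes above (evaluate expressions over a FlowFile's attribute map, then emit the resulting variable space as attributes or as JSON content) can be sketched roughly as follows. Plain Python lambdas stand in for Stellar and a dict stands in for the FlowFile; this is a toy model, not the actual processor:

```python
# Toy model of the processor behaviour described above: evaluate named
# expressions against a flowfile's attributes, then emit the resulting
# variable space either as enriched attributes or as a JSON body.
import json

def apply_expressions(attributes, expressions, to_content=False):
    variables = dict(attributes)
    for name, expr in expressions.items():
        # Each result is added to the variable space, visible to later expressions.
        variables[name] = expr(variables)
    if to_content:
        return attributes, json.dumps(variables)  # unchanged attrs + JSON content
    return variables, None                        # enriched attributes, no content

attrs = {"ip_src_addr": "10.0.0.1", "bytes": "2048"}
exprs = {"is_internal": lambda v: v["ip_src_addr"].startswith("10."),
         "kilobytes": lambda v: int(v["bytes"]) // 1024}
enriched, _ = apply_expressions(attrs, exprs)
# enriched["is_internal"] is True, enriched["kilobytes"] is 2
```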
We have jiras about ‘diverting’ and reading from nifi flows already
On June 5, 2018 at 11:11:45, Casey Stella (ceste...@gmail.com) wrote:
PutMetronEnrichementRecords* ;)
I'd be in strong support of that, Simon. I think we should have some other
NiFi components in Metron to enable users to interact with our
infrastructure from NiFi (e.g. being able to transform via stellar, etc).
On Tue, Jun 5, 2018 at 10:32 AM Simon Elliston Ball <
si...@simonellistonball.com> wrote:
Do we, the community, think it would be a good idea to create a
PutMetronEnrichment NiFi processor for this use case? It seems a number of
people want to use NiFi to manage and schedule loading of enrichments for
example.
Simon
On 5 June 2018 at 06:56, Casey Stella wrote:
The problem, as you correctly diagnosed, is the key in HBase. We construct
the key very specifically in Metron, so it's unlikely to work out of the
box with the NiFi processor unfortunately. The key that we use is formed
here in the codebase:
https://github.com/cestella/incubator-metron/blob/mast
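For context on why the out-of-the-box PutHBaseJSON key doesn't line up: Metron builds the enrichment row key from a hash-based salt plus the enrichment type and indicator, not from the indicator alone. A rough Python sketch of that general pattern follows; the hash choice, prefix length, and separator here are assumptions for illustration, and the linked source is the authoritative layout:

```python
# Illustrative sketch of a salted enrichment row key, in the spirit of
# Metron's key construction. Every byte-level detail below (MD5, 8-byte
# prefix, NUL separator) is an assumption, not Metron's actual format.
import hashlib

def enrichment_row_key(enrichment_type: str, indicator: str) -> bytes:
    # A short hash prefix "salts" the key so writes spread across HBase
    # regions instead of hot-spotting on similar indicators.
    salt = hashlib.md5(indicator.encode("utf-8")).digest()[:8]
    return salt + enrichment_type.encode("utf-8") + b"\x00" + indicator.encode("utf-8")

key = enrichment_row_key("dns", "example.com")
# A PutHBaseJSON row key of just "example.com" would not match a salted
# key like this, which is why enrichment lookups miss rows written that way.
```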
Hi Charles -

I think your best bet is to create a csv file and use flatfile_loader.sh.
This will be easier and you won't have to worry if the format of HBase
storage changes:
https://github.com/apache/metron/tree/master/metron-platform/metron-data-management#loading-utilities

The flat file…
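For the archive: a flatfile_loader.sh run is driven by an extractor config. Something along these lines, adapted from the loading-utilities README linked above, would describe Charlie's three-column CSV; field names should be checked against your Metron version:

```json
{
  "config": {
    "columns": { "name": 0, "type": 1, "data": 2 },
    "indicator_column": "name",
    "type": "dns",
    "separator": ","
  },
  "extractor": "CSV"
}
```

It would then be invoked roughly as `flatfile_loader.sh -e extractor_config.json -t enrichment -c t -i dns.csv`; the exact flags are documented in that README, so treat this invocation as a sketch rather than a recipe.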
Hello,

I work as a Dev/Ops Data Engineer within the security team at a company in
London where we are in the process of implementing Metron. I have been tasked
with implementing feeds of network environment data into HBase so that this
data can be used as enrichment sources for our security events…