Re: When things change in hdfs, how do we know

2018-01-31 Thread Otto Fowler
It depends on whether the transaction id is the same per instance.  If it is, then
we can de-dupe as long as we put the tx id into the event json.
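A minimal sketch of that de-dupe idea, in Python with illustrative names (not Metron code): since HDFS edit-log transaction ids increase monotonically, a consumer can drop any event whose txid it has already seen, which makes running two or more listener instances safe.

```python
import json

class TxidDeduper:
    """Drops events whose HDFS transaction id has already been processed."""

    def __init__(self):
        self.last_txid = -1  # txids are non-negative and monotonically increasing

    def accept(self, event_json: str) -> bool:
        txid = json.loads(event_json)["txid"]
        if txid <= self.last_txid:
            return False  # duplicate, e.g. from a second listener instance
        self.last_txid = txid
        return True

dedupe = TxidDeduper()
# txids 2 and 1 repeat, as they would if two listeners both published
events = [json.dumps({"txid": t, "path": "/apps/metron/patterns/squid"})
          for t in (1, 2, 2, 3, 1)]
accepted = [e for e in events if dedupe.accept(e)]
# only the three unique txids survive
```

The same check works whether the duplicates come from a second HA instance or from a replayed event stream, as long as the txid is carried in the event json.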



On January 31, 2018 at 12:40:15, Simon Elliston Ball (
si...@simonellistonball.com) wrote:

I take it your service would just be a thin daemon along the lines of the
PoC you linked, which makes a lot of sense, delegating the actual
notification to the zookeeper bits we already have.

That makes sense to me. One other question would be around the availability
of that service (which is not exactly critical, but would be nice to be
able to run HA). As far as I can see it’s not likely to be stateful, and as
long as there is some sort of de-dupe you could have two or more running.
Is that worth chewing on, or do we just need one running and accept occasional
outages of a rarely firing, non-critical service?

Simon

> On 31 Jan 2018, at 17:24, Otto Fowler  wrote:
>
> No,
>
> I would propose a new Ambari Service that did the notify->zookeeper stuff.
> Did you not see my awesome ascii art diagram?
>
>
>
>
> On January 31, 2018 at 11:51:51, Casey Stella (ceste...@gmail.com) wrote:
>
> Well, it'll be one listener per worker and if you have a lot of workers,
> it's going to be a bad time probably.
>
> On Wed, Jan 31, 2018 at 11:50 AM, Otto Fowler 
> wrote:
>
>> I don’t think the Unstable means the implementation will crash. I think
>> it means it is a newish API, and there should be 1 listener.
>>
>> Having 1 listener shouldn’t be an issue.
>>
>>
>>
>> On January 31, 2018 at 11:45:54, Casey Stella (ceste...@gmail.com) wrote:
>>
>> Hmm, I have heard this feedback before. Perhaps a more low-key approach
>> would be either a static timer that checked or a timer bolt that sent a
>> periodic timer and the parser bolt reconfigured the parser (or indeed we
>> added a Reloadable interface with a 'reload' method). We could also be
>> smart and only set up the topology with the timer bolt if the parser
>> actually implemented the Reloadable interface. Just some thoughts that
>> might be easy and avoid instability.
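The Reloadable idea above can be sketched roughly as follows, in Python with hypothetical class names (Metron's actual parsers are Java; this only illustrates the opt-in wiring): parsers that can reload declare it via an interface, and the timer signal is only delivered to those that do.

```python
from abc import ABC, abstractmethod

class Reloadable(ABC):
    """Opt-in interface: parsers that can refresh external state implement this."""

    @abstractmethod
    def reload(self) -> None:
        """Re-read external state, e.g. a Grok pattern file stored in HDFS."""

class GrokParser(Reloadable):
    def __init__(self):
        self.reload_count = 0

    def reload(self) -> None:
        self.reload_count += 1  # a real impl would re-fetch the pattern file

class CsvParser:
    """No Reloadable: the timer signal is never wired to this parser."""

def on_timer_tick(parsers):
    # "be smart": only parsers implementing Reloadable are asked to reload
    for p in parsers:
        if isinstance(p, Reloadable):
            p.reload()

grok, csv = GrokParser(), CsvParser()
on_timer_tick([grok, csv])  # only grok reloads
```

The analogous topology-side decision would be to skip creating the timer bolt entirely when no parser in the topology implements the interface.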
>>
>> On Tue, Jan 30, 2018 at 3:42 PM, Otto Fowler 
>> wrote:
>>
>>> It is still @Unstable, but the jiras
>>> https://issues.apache.org/jira/browse/HDFS-8940?jql=project%20%3D%20HDFS%20AND%20status%20in%20(Open%2C%20%22In%20Progress%22)%20AND%20text%20~%20%22INotify%22
>>> that I see have been stalled since over the summer.
>>>
>>> They also seem geared to scaling or to changing the filter object, not the api.
>>>
>>>
>>>
>>> On January 30, 2018 at 14:19:56, JJ Meyer (jjmey...@gmail.com) wrote:
>>>
>>> Hello all,
>>>
>>> I had created a NiFi processor a long time back that used the INotify API.
>>> One thing I noticed while working with it is that it is marked with the
>>> `Unstable` annotation. It may be worth checking if any more work is going
>>> on with it and if it will impact this (if it hasn't already been looked into).
>>>
>>> Thanks,
>>> JJ
>>>
>>> On Mon, Jan 29, 2018 at 7:27 AM, Otto Fowler 
>>> wrote:
>>>
 I have updated the jira as well


 On January 29, 2018 at 08:22:34, Otto Fowler (ottobackwa...@gmail.com)
 wrote:

 https://github.com/ottobackwards/hdfs-inotify-zookeeper

>>>
>>
>>


Re: When things change in hdfs, how do we know

2018-01-29 Thread Otto Fowler
The POC is mostly complete; the only thing not done is having the inotify
listener do a tree cache to pick up new configurations.



On January 28, 2018 at 22:39:18, Otto Fowler (ottobackwa...@gmail.com)
wrote:

Btw: https://issues.apache.org/jira/browse/METRON-534



On January 26, 2018 at 15:03:11, Otto Fowler (ottobackwa...@gmail.com)
wrote:


https://github.com/ottobackwards/hdfs-inotify-zookeeper
Has the basics framed out, short of pushing to zookeeper, which I mocked
out at this time.
I’ll add pushing to zk and a cache notification listener to the test soon.


On January 26, 2018 at 08:48:54, Otto Fowler (ottobackwa...@gmail.com)
wrote:

In the future, when the ‘filter paths on the name node side for inotify’
feature lands in hdfs (there is a jira from the summer that is not making
progress), we can just use the paths to register.


On January 26, 2018 at 08:47:11, Otto Fowler (ottobackwa...@gmail.com)
wrote:

In the end, what I’m thinking is this:

We have an Ambari service that runs the notification -> zookeeper flow.
It reads the ‘registration area’ from zookeeper to get its state and what
to watch. Post 777, when parsers are installed and registered, it is
trivial to have my installer also register the files to watch.

The notification service also gets a notification from zookeeper for new
registrations.

On a notify event, the ‘notification node’ has its content set to the event
details and time, which the parser would pick up… causing the reload.
???
profit


This would work for the future script parser etc etc.
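The flow Otto describes above can be sketched end to end, here in Python with an in-memory stand-in for zookeeper (the real service would use Curator watches on the Metron side; all paths and field names below are illustrative): the notification service writes the event details and time into the notification node, and a watching parser reacts by reloading.

```python
import json
import time

class FakeZk:
    """Minimal znode store with watches, standing in for zookeeper/Curator."""

    def __init__(self):
        self.nodes = {}
        self.watchers = {}

    def watch(self, path, callback):
        self.watchers.setdefault(path, []).append(callback)

    def set(self, path, data):
        self.nodes[path] = data
        for cb in self.watchers.get(path, []):
            cb(data)  # real zookeeper fires the watch asynchronously

zk = FakeZk()
reloads = []

# parser side: watch the notification node it registered for
zk.watch("/metron/notify/squid", lambda data: reloads.append(json.loads(data)))

# notification-service side: on an INotify event for a watched HDFS path,
# set the node's content to the event details and time
zk.set("/metron/notify/squid",
       json.dumps({"path": "/patterns/squid",
                   "type": "CLOSE",
                   "time": time.time()}))
```

Because the parser only reacts to content changes on its own notification node, the same mechanism would cover the future script parser or any other reloadable artifact registered in the ‘registration area’.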


On January 26, 2018 at 08:30:32, Simon Elliston Ball (
si...@simonellistonball.com) wrote:

Interesting, so you have an INotify listener to filter events, and then on
given changes, propagate a notification to zookeeper, which then triggers
the reconfiguration event via the curator client in Metron. I kinda like it
given our existing zookeeper methods.

Simon

On 26 Jan 2018, at 13:27, Otto Fowler  wrote:

https://github.com/ottobackwards/hdfs-inotify-zookeeper

Working on a poc



On January 26, 2018 at 07:41:44, Simon Elliston Ball (
si...@simonellistonball.com) wrote:

Should we consider using the INotify interface to trigger reconfiguration,
in the same way we trigger config changes in curator? We also need to fix
caching and lifecycle in the Grok parser to make the zookeeper changes
propagate pattern changes while we’re at it.

Simon

> On 26 Jan 2018, at 03:16, Casey Stella  wrote:
>
> Right now you have to restart the parser topology.
>
> On Thu, Jan 25, 2018 at 10:15 PM, Otto Fowler 
> wrote:
>
>> At the moment, when a grok file or something changes in HDFS, how do we
>> know? Do we have to restart the parser topology to pick it up?
>> Just trying to clarify for myself.
>>
>> ottO
>>

