It depends on whether the transaction id is the same per instance. If it is, then
we can de-dupe as long as we put the tx id into the event json.
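A minimal sketch of that de-dupe idea, assuming the txid is carried in the event json as proposed. The class and method names (TxDeduper, markSeen) are illustrative, not from the PoC; a bounded LRU set keeps memory flat for a long-running listener.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative de-duplication keyed on the HDFS transaction id.
public class TxDeduper {
    private final int capacity;
    // Access-ordered LinkedHashMap acts as a bounded LRU "seen" set.
    private final Map<Long, Boolean> seen;

    public TxDeduper(int capacity) {
        this.capacity = capacity;
        this.seen = new LinkedHashMap<Long, Boolean>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<Long, Boolean> eldest) {
                // Evict the least-recently-seen txid once we exceed capacity.
                return size() > TxDeduper.this.capacity;
            }
        };
    }

    /** Returns true the first time a txid is seen, false for duplicates. */
    public synchronized boolean markSeen(long txId) {
        return seen.put(txId, Boolean.TRUE) == null;
    }
}
```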
On January 31, 2018 at 12:40:15, Simon Elliston Ball (
si...@simonellistonball.com) wrote:
I take it your service would just be a thin daemon along the lines of the PoC
you linked, which makes a lot of sense, delegating the actual notification to
the zookeeper bits we already have.
That makes sense to me. One other question would be around the availability of
that service (which is
No,
I would propose a new Ambari Service that did the notify->zookeeper stuff.
Did you not see my awesome ascii art diagram?
On January 31, 2018 at 11:51:51, Casey Stella (ceste...@gmail.com) wrote:
Well, it'll be one listener per worker and if you have a lot of workers,
it's going to be a bad time probably.
On Wed, Jan 31, 2018 at 11:50 AM, Otto Fowler
wrote:
I don’t think the Unstable means the implementation will crash. I think it
means it is a newish API, and there should be 1 listener.
Having 1 listener shouldn’t be an issue.
On January 31, 2018 at 11:45:54, Casey Stella (ceste...@gmail.com) wrote:
Hmm, I have heard this feedback before. Perhaps a more low-key approach
would be either a static timer that checked or a timer bolt that sent a
periodic timer and the parser bolt reconfigured the parser (or indeed we
added a Reloadable interface with a 'reload' method). We could be smart
also
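The 'Reloadable' idea above could look something like this. The interface name matches the proposal; the timer wrapper and everything else is an illustrative sketch, not Metron code.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Sketch: the parser bolt holds a Reloadable and a static timer calls
// reload() periodically, re-checking configuration on a fixed period
// rather than per-event. Names other than Reloadable are illustrative.
public class ReloadTimer {
    public interface Reloadable {
        void reload();
    }

    private final ScheduledExecutorService timer =
        Executors.newSingleThreadScheduledExecutor();

    public void schedule(Reloadable target, long periodMs) {
        timer.scheduleAtFixedRate(target::reload, periodMs, periodMs,
                                  TimeUnit.MILLISECONDS);
    }

    public void shutdown() {
        timer.shutdownNow();
    }
}
```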
It is still @unstable, but the jiras :
https://issues.apache.org/jira/browse/HDFS-8940?jql=project%20%3D%20HDFS%20AND%20status%20in%20(Open%2C%20%22In%20Progress%22)%20AND%20text%20~%20%22INotify%22
that I see are stalled from over the summer.
They also seem geared to scale or changing the filter
Hello all,
I had created a NiFi processor a long time back that used the inotify API.
One thing I noticed while working with it is that it is marked with the
`Unstable` annotation. It may be worth checking if any more work is going on
with it and if it will impact this (if it hasn't already been
I have updated the jira as well
On January 29, 2018 at 08:22:34, Otto Fowler (ottobackwa...@gmail.com)
wrote:
https://github.com/ottobackwards/hdfs-inotify-zookeeper
POC is kind of complete; the only thing not done is having the inotify listener
do a tree cache to pick up new configurations.
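For reference, the tree-cache piece would watch the registration area and grow the listener's watched-path set as new parser configs appear (Curator's TreeCache recipe does this against zookeeper). Below is a stdlib stand-in showing only the shape of that step; all names are illustrative, not from the PoC.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.BiConsumer;

// Stdlib stand-in for the tree-cache idea: a registry that records watched
// paths and fires a callback when a new registration appears, so the inotify
// listener can start filtering on it without a restart.
public class RegistrationCache {
    private final Map<String, String> entries = new ConcurrentHashMap<>();
    private final BiConsumer<String, String> onAdded;

    public RegistrationCache(BiConsumer<String, String> onAdded) {
        this.onAdded = onAdded;
    }

    /** Called when a new child appears under the registration znode. */
    public void register(String parserName, String hdfsPath) {
        // Fire the callback only on first registration, not on repeats.
        if (entries.putIfAbsent(parserName, hdfsPath) == null) {
            onAdded.accept(parserName, hdfsPath);
        }
    }
}
```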
On January 28, 2018 at 22:39:18, Otto Fowler (ottobackwa...@gmail.com)
wrote:
Btw: https://issues.apache.org/jira/browse/METRON-534
On January 26, 2018 at 15:03:11,
https://github.com/ottobackwards/hdfs-inotify-zookeeper
Has the basics framed out, short of pushing to zookeeper, which I mocked
out at this time.
I’ll add pushing to zk and a cache notification listener to the test soon.
On January 26, 2018 at 08:48:54, Otto Fowler (ottobackwa...@gmail.com)
In the future, when the ‘filter paths on the name node side for inotify’
lands in hdfs ( there is a jira from the summer that is not making progress
) we can
just use the paths to register.
On January 26, 2018 at 08:47:11, Otto Fowler (ottobackwa...@gmail.com)
wrote:
In the end, what I’m thinking is this:
We have an Ambari service that runs the notification -> zookeeper.
It reads the ‘registration area’ from zookeeper to get its state and what
to watch.
post 777 when parsers are installed and registered it is trivial to have my
installer also register the
Interesting, so you have an INotify listener to filter events, and then on
given changes, propagate a notification to zookeeper, which then triggers the
reconfiguration event via the curator client in Metron. I kinda like it given
our existing zookeeper methods.
Simon
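The filtering step in that flow could be sketched like this: the listener only propagates events whose paths fall under a registered prefix, and for matches it would touch the relevant znode so curator fires the reconfiguration. PathFilter and its method names are illustrative; the actual PoC is at ottobackwards/hdfs-inotify-zookeeper.

```java
import java.util.List;

// Sketch of the event-filtering step: watched HDFS path prefixes come from
// the zookeeper 'registration area'; only events under those prefixes should
// result in a zookeeper notification. Names here are illustrative.
public class PathFilter {
    private final List<String> watchedPrefixes;

    public PathFilter(List<String> watchedPrefixes) {
        this.watchedPrefixes = watchedPrefixes;
    }

    /** True when an event path falls under a registered prefix. */
    public boolean shouldNotify(String eventPath) {
        return watchedPrefixes.stream().anyMatch(eventPath::startsWith);
    }
}
```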
> On 26 Jan 2018, at
https://github.com/ottobackwards/hdfs-inotify-zookeeper
Working on a poc
On January 26, 2018 at 07:41:44, Simon Elliston Ball (
si...@simonellistonball.com) wrote:
Should we consider using the Inotify interface to trigger reconfiguration, in
the same way we trigger config changes in curator? We also need to fix caching and
lifecycle in the Grok parser to make the zookeeper changes propagate pattern
changes while we’re at it.
Simon
> On 26 Jan 2018, at
At the moment, when a grok file or something changes in HDFS, how do we
know? Do we have to restart the parser topology to pick it up?
Just trying to clarify for myself.
ottO