Re: [ANNOUNCE] New Apache NiFi Committer Marton Szasz

2020-08-03 Thread Sivaprasanna
Congrats Marton!

On Mon, Aug 3, 2020 at 9:43 PM Andy LoPresto  wrote:

> Congratulations Marton. Thanks for all the great contributions so far and
> looking forward to many more.
>
> Andy LoPresto
> alopre...@apache.org
> alopresto.apa...@gmail.com
> He/Him
> PGP Fingerprint: 70EC B3E5 98A6 5A3F D3C4  BACE 3C6E F65B 2F7D EF69
>
> > On Aug 3, 2020, at 7:37 AM, Joe Witt  wrote:
> >
> > Congrats - your efforts around MiNiFi and NiFi at large are
> > greatly appreciated.
> >
> > On Mon, Aug 3, 2020 at 7:35 AM Tony Kurc  wrote:
> >
> >> Congrats Marton!
> >>
> >> On Mon, Aug 3, 2020 at 4:02 AM Arpad Boda  wrote:
> >>
> >>> Apache NiFi community,
> >>>
> >>> On behalf of the Apache NiFi PMC, I am very pleased to announce that
> >>> Marton has accepted the PMC's invitation to become a committer on the
> >>> Apache NiFi project. We greatly appreciate all of Marton's hard work
> >>> and generous contributions to the project. We look forward to continued
> >>> involvement in the project.
> >>>
> >>> Marton has made more than 100 contributions to MiNiFi C++ this year in
> >>> various areas: from Windows-specific memory leaks to nice new features
> >>> and a lot of code reviews. He has also shown an active presence on the
> >>> mailing list, helping out the community.
> >>>
> >>> Welcome and congratulations!
> >>>
> >>
>
>


Re: [ANNOUNCE] New Apache NiFi Committer Peter Turcsanyi

2019-10-28 Thread Sivaprasanna
Congratulations, Peter.

On Mon, 28 Oct 2019 at 12:30 PM, Pierre Villard 
wrote:

> Congratulations Peter!
>
> Le lun. 28 oct. 2019 à 02:05, Aldrin Piri  a écrit :
>
> > Apache NiFi community,
> >
> > On behalf of the Apache NiFi PMC, I am very pleased to announce that
> > Peter has accepted the PMC's invitation to become a committer on the
> > Apache NiFi project. We greatly appreciate all of Peter's hard work and
> > generous contributions to the project. We look forward to continued
> > involvement in the project.
> >
> > Peter has provided several new extensions and improvements enhancing
> > NiFi's interoperability with cloud services. He has also found and
> > remedied several bugs, and is a regular participant in code reviews and
> > an active presence on the mailing lists, helping out the community.
> >
> > Welcome and congratulations!
> >
>


Re: [ANNOUNCE] New Apache NiFi Committer Dániel Bakai

2019-10-25 Thread Sivaprasanna
Congratulations, Daniel. All the very best.

On Sat, 26 Oct 2019 at 6:01 AM, Tony Kurc  wrote:

> Congratulations Dániel!
>
> On Fri, Oct 25, 2019, 8:21 PM Otto Fowler  wrote:
>
> > std::cout << "Congratulations" << std::endl;
> >
> >
> >
> >
> > On October 25, 2019 at 12:38:20, Aldrin Piri (ald...@apache.org) wrote:
> >
> > Apache NiFi community,
> >
> > On behalf of the Apache NiFi PMC, I am very pleased to announce that
> > Dániel has accepted the PMC's invitation to become a committer on the
> > Apache NiFi project. We greatly appreciate all of Dániel's hard work and
> > generous contributions to the project. We look forward to continued
> > involvement in the project.
> >
> > Dániel has provided numerous contributions to the MiNiFi C++ codebase,
> > discovering and providing fixes for bugs, adding new functionality, and
> > improving build processes. Dániel is also a staple in review processes
> > and approaches each interaction with great communication and
> > professionalism.
> >
> > Welcome and congratulations!
> > AP
> >
>


Re: [ANNOUNCE] New Apache NiFi Committer Kotaro Terada

2019-10-24 Thread Sivaprasanna
Congratulations, Kotaro!

On Fri, 25 Oct 2019 at 3:17 AM, Mike Thomsen  wrote:

> Congratulations, Kotaro!
>
> On Thu, Oct 24, 2019 at 2:49 PM Michael Moser  wrote:
>
> > Congrats on becoming a NiFi committer, Kotaro!
> >
> >
> >
> > On Thu, Oct 24, 2019 at 1:06 PM Otto Fowler 
> > wrote:
> >
> > > Congratulations!
> > >
> > >
> > >
> > >
> > > On October 24, 2019 at 09:50:17, Aldrin Piri (ald...@apache.org)
> wrote:
> > >
> > > Apache NiFi community,
> > >
> > > On behalf of the Apache NiFi PMC, I am very pleased to announce that
> > > Kotaro has accepted the PMC's invitation to become a committer on the
> > > Apache NiFi project. We greatly appreciate all of Kotaro's hard work
> > > and generous contributions to the project. We look forward to continued
> > > involvement in the project.
> > >
> > > Kotaro contributed to a breadth of areas in both NiFi and Registry, as
> > > well as being a regular reviewer of our releases. Kotaro's
> > > communication in Jira issues and responsiveness to the review processes
> > > highlighted great collaboration and embodied our community goals for
> > > the project.
> > >
> > > Welcome and congratulations!
> > >
> > > --ap
> > >
> >
>


Re: [ANNOUNCE] New Apache NiFi Committer Rob Fellows

2019-09-24 Thread Sivaprasanna
Congratulations, Rob.

On Wed, 25 Sep 2019 at 5:39 AM, Joe Witt  wrote:

> Congrats and Thank You!
>
> On Tue, Sep 24, 2019 at 7:58 PM Andy LoPresto 
> wrote:
>
> > Congratulations Rob. Well-earned and welcome to the extra work of
> > committing PRs.
> >
> > Andy LoPresto
> > alopre...@apache.org
> > alopresto.apa...@gmail.com
> > PGP Fingerprint: 70EC B3E5 98A6 5A3F D3C4  BACE 3C6E F65B 2F7D EF69
> >
> > > On Sep 24, 2019, at 4:56 PM, Tony Kurc  wrote:
> > >
> > > Apache NiFi community,
> > > On behalf of the Apache NiFi PMC, I am very pleased to announce that
> > > Rob has accepted the PMC's invitation to become a committer on the
> > > Apache NiFi project. We greatly appreciate all of Rob's hard work and
> > > generous contributions to the project. We look forward to his continued
> > > involvement in the project.
> > >
> > > What stood out with Rob are his regular code contributions and reviews
> > > on many parts of the project, including NiFi, NiFi FDS, and NiFi
> > > Registry, since early this year. Additionally, he's been doing the
> > > not-always-glamorous work of helping verify releases, which was a huge
> > > assist in getting NiFi 1.9.1, NiFi Registry 0.4.0 and 0.5.0, and NiFi
> > > FDS 0.2.0 out to the community.
> > >
> > > Welcome and congratulations!
> > > Tony
> >
> >
>


Re: [ANNOUNCE] New Apache NiFi PMC member Peter Wicks

2019-05-30 Thread Sivaprasanna
Congratulations, Peter!

On Fri, 31 May 2019 at 7:07 AM, Michael Moser  wrote:

> Great work, Peter.  Congrats!
>
>
> On Thu, May 30, 2019 at 8:05 PM Marc Parisi  wrote:
>
> > Congrats!
> >
> > On Thu, May 30, 2019, 2:58 PM Jeff  wrote:
> >
> > > Welcome to the PMC, Peter!  Congrats!
> > >
> > > On Thu, May 30, 2019 at 2:45 PM Tony Kurc  wrote:
> > >
> > > > Congratulations Peter!!
> > > >
> > > > On Thu, May 30, 2019 at 11:21 AM Aldrin Piri 
> > wrote:
> > > >
> > > > > NiFi Community,
> > > > >
> > > > > On behalf of the Apache NiFi PMC, I am pleased to announce that
> > > > > Peter Wicks has accepted the PMC's invitation to join the Apache
> > > > > NiFi PMC.
> > > > >
> > > > > Peter's contributions have been plentiful in code, community,
> > > > > reviews and discussion after becoming a committer in November 2017.
> > > > > His impact across NiFi has led to improvements surrounding
> > > > > Kerberos, GetFile, ListFile, Clustering, Node Offload, Recordset
> > > > > Writers, HDFS, and Database related processors, among others.
> > > > >
> > > > > Thank you for all your contributions and welcome to the PMC, Peter!
> > > > >
> > > > > --aldrin
> > > > >
> > > >
> > >
> >
>


Re: [ANNOUNCE] New Apache NiFi Committer Arpad Boda

2019-05-24 Thread Sivaprasanna
Congratulations, Arpad. Thank you so much for all the efforts you are
putting in!

-
Sivaprasanna

On Fri, 24 May 2019 at 6:03 PM, Kevin Doran  wrote:

> Congrats, Arpad! Thanks for all the contributions to MiNiFi!
>
> On Thu, May 23, 2019 at 3:59 PM Andrew Lim 
> wrote:
> >
> > Congrats Arpad!
> >
> > > On May 23, 2019, at 9:23 AM, Aldrin Piri  wrote:
> > >
> > > On behalf of the Apache NiFi PMC, I am very pleased to announce that
> > > Arpad has accepted the PMC's invitation to become a committer on the
> > > Apache NiFi project. We greatly appreciate all of Arpad's hard work and
> > > generous contributions to the project. We look forward to his continued
> > > involvement in the project.
> > >
> > > Arpad has been highly involved in the MiNiFi C++ codebase, providing
> > > contributions covering everything from code cleanup and tests to new
> > > features. Arpad has been an active reviewer, a contributor to Jiras,
> > > and has aided in verification of releases across the NiFi project.
> > > Thank you for all your efforts!
> > >
> > > Welcome and congratulations!
> > >
> > > --aldrin
> >
>


Re: NIFI conectivity with HIVE to load data into HANA tables

2019-05-16 Thread Sivaprasanna
It can be done. I was briefly working on something similar. In my case, it
was HANA which was the source and Hive was the destination. You can query
Hive using SelectHiveQL and then use PutSQL/PutDatabaseRecord to write to
HANA.
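
As an illustrative aside, here is a minimal JDBC sketch of the same
Hive-to-HANA movement done outside NiFi. The hostnames, ports, table and
column names, and credentials are placeholder assumptions, and the Hive and
HANA JDBC drivers are assumed to be on the classpath:

import java.sql.*;

public class HiveToHana {
    public static void main(String[] args) throws SQLException {
        try (// Hive via the hive2 JDBC driver (equivalent of SelectHiveQL)
             Connection hive = DriverManager.getConnection(
                     "jdbc:hive2://hive-host:10000/default");
             // HANA via SAP's JDBC driver (equivalent of PutDatabaseRecord)
             Connection hana = DriverManager.getConnection(
                     "jdbc:sap://hana-host:30015/", "user", "password");
             Statement select = hive.createStatement();
             ResultSet rs = select.executeQuery(
                     "SELECT id, name FROM source_table");
             PreparedStatement insert = hana.prepareStatement(
                     "INSERT INTO TARGET_TABLE (ID, NAME) VALUES (?, ?)")) {
            while (rs.next()) {
                insert.setInt(1, rs.getInt("id"));
                insert.setString(2, rs.getString("name"));
                insert.addBatch(); // accumulate rows for a batched insert
            }
            insert.executeBatch(); // flush the batch to HANA
        }
    }
}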

On Thu, May 16, 2019 at 5:13 PM Patan Umejaiba  wrote:

> Hi Team,
>
> My requirement is to load the Hive data files into HANA tables using NiFi.
> Let me know if that is possible, and if so, please provide the process to
> be followed.
>
> Thanks,
> Jaiba
>


Re: Contribution

2019-05-12 Thread Sivaprasanna
Hi Abdelwahid,

First place to start would be filing a Jira ticket at
https://issues.apache.org/jira/projects/NIFI

Then you can follow the usual review process: fork the project on GitHub,
commit your code, open a PR, and also attach the patch to the Jira that you
created.

Please take a look at this guide:
https://cwiki.apache.org/confluence/display/NIFI/Contributor+Guide. It helped
me a lot when I started off and was in exactly your position.

-
Sivaprasanna

On Sun, 12 May 2019 at 7:34 PM, Abdelwahid BENZERROUK 
wrote:

> Hi,
>
> I have developed a custom processor that connects Apache NiFi to Apache
> Ignite: it creates caches and tables, and writes and reads data from
> Ignite. I have seen many issues on the Apache NiFi forum for the PutIgnite
> processor.
>
> So I want to know how I can contribute my custom processor and make it
> open source for all users.
>
> Best,
> BENZERROUK,
> Abdelwahid
>


Re: [DISCUSS] NiFi and Java 11

2019-04-03 Thread Sivaprasanna
Yep. Adding to GitHub template is a great idea.

On Thu, 4 Apr 2019 at 5:27 AM, Andy LoPresto  wrote:

> We should add that to the Developer Guide [1], Contributor Guide [2], note
> it in the Migration Guide [3] when it takes effect, and include it in the
> GitHub PR template as well.
>
> [1] https://nifi.apache.org/docs/nifi-docs/html/developer-guide.html <
> https://nifi.apache.org/docs/nifi-docs/html/developer-guide.html>
> [2] https://cwiki.apache.org/confluence/display/NIFI/Contributor+Guide <
> https://cwiki.apache.org/confluence/display/NIFI/Contributor+Guide>
> [3] https://cwiki.apache.org/confluence/display/NIFI/Migration+Guidance <
> https://cwiki.apache.org/confluence/display/NIFI/Migration+Guidance>
>
>
> Andy LoPresto
> alopre...@apache.org
> alopresto.apa...@gmail.com
> PGP Fingerprint: 70EC B3E5 98A6 5A3F D3C4  BACE 3C6E F65B 2F7D EF69
>
> > On Apr 3, 2019, at 1:16 PM, Sivaprasanna 
> wrote:
> >
> > Sounds good to me.
> >
> > So contributors who bring in new changes are expected to write code
> > that is compatible with both Java 8 and Java 11, right?
> >
> > Do we see anything that we should have on our site that would help the
> > contributors/users with this change? Something like a document that
> > explains the best practices a developer should follow so that the change
> > they are bringing in runs fine on both 8 and 11.
> >
> > -
> > Sivaprasanna
> >
> > On Thu, 4 Apr 2019 at 12:18 AM, Pierre Villard <
> pierre.villard...@gmail.com>
> > wrote:
> >
> >> Sounds good to me as well. Given the latest news on Java 8, having the 2
> >> PRs merged in would be great!
> >>
> >> Thanks,
> >> Pierre
> >>
> >> Le mer. 3 avr. 2019 à 20:34, Joe Witt  a écrit :
> >>
> >>> Jeff
> >>>
> >>> This seems very reasonable and thorough to me.
> >>>
> >>> The only short-term implication that we'd have to buy into then is
> >>> that PRs going forward need to be able to build on both Java 8 and 11,
> >>> which seems a very fair way to bridge toward moving to Java 11 as the
> >>> base requirement in the next major release of NiFi (2.x).
> >>>
> >>> Thanks
> >>> Joe
> >>>
> >>> On Wed, Apr 3, 2019 at 2:19 PM Jeff  wrote:
> >>>
> >>>> I'm reaching out to the community today to propose a plan for moving
> >>>> forward with NiFi on Java 11.
> >>>>
> >>>> There are currently two PRs that deal with Java 11 compatibility. The
> >>>> first PR allows NiFi built on Java 8 to be run on Java 11 [1]. The
> >>>> second PR allows NiFi to be built on Java 11 and run on Java 11 [2].
> >>>> There are a lot of changes in the second PR, and it will require a
> >>>> lot of testing due to the breadth of changes from dependency
> >>>> upgrades. Please take a look at both of these PRs. The earlier we can
> >>>> get feedback, the sooner we can get Java 11 compatibility in an
> >>>> Apache NiFi release.
> >>>>
> >>>> I would like to discuss with the community the following plan for
> >>>> upcoming NiFi releases as they pertain to Java 11:
> >>>>
> >>>> For the 1.x release line, from either 1.10 or 1.11 (depending on when
> >>>> these two PRs are merged to master) and onward, NiFi will be
> >>>> compatible with both Java 8 and 11 for building and running NiFi:
> >>>>
> >>>>   - Build on Java 8, run on Java 8
> >>>>   - Build on Java 8, run on Java 11
> >>>>   - Build on Java 11, run on Java 11
> >>>>
> >>>> The 1.x release line will contain convenience builds for Java 8, and
> >>>> will NOT contain convenience builds created from Java 11. Users that
> >>>> want to run NiFi 1.1x.y on Java 11 with Java 11 bytecode will have to
> >>>> build from the released source code.
> >>>>
> >>>> Once the Java 11 compatibility features [1] [2] are merged to
> >>>> master, PR reviews will require building and testing with Java 8 and
> >>>> Java 11 before the PR is merged to master.

Re: [DISCUSS] NiFi and Java 11

2019-04-03 Thread Sivaprasanna
Sounds good to me.

So contributors who bring in new changes are expected to write code that is
compatible with both Java 8 and Java 11, right?

Do we see anything that we should have on our site that would help the
contributors/users with this change? Something like a document that explains
the best practices a developer should follow so that the change they are
bringing in runs fine on both 8 and 11.

-
Sivaprasanna

On Thu, 4 Apr 2019 at 12:18 AM, Pierre Villard 
wrote:

> Sounds good to me as well. Given the latest news on Java 8, having the 2
> PRs merged in would be great!
>
> Thanks,
> Pierre
>
> Le mer. 3 avr. 2019 à 20:34, Joe Witt  a écrit :
>
> > Jeff
> >
> > This seems very reasonable and thorough to me.
> >
> > The only short-term implication that we'd have to buy into then is that
> > PRs going forward need to be able to build on both Java 8 and 11, which
> > seems a very fair way to bridge toward moving to Java 11 as the base
> > requirement in the next major release of NiFi (2.x).
> >
> > Thanks
> > Joe
> >
> > On Wed, Apr 3, 2019 at 2:19 PM Jeff  wrote:
> >
> > > I'm reaching out to the community today to propose a plan for moving
> > > forward with NiFi on Java 11.
> > >
> > > There are currently two PRs that deal with Java 11 compatibility. The
> > > first PR allows NiFi built on Java 8 to be run on Java 11 [1]. The
> > > second PR allows NiFi to be built on Java 11 and run on Java 11 [2].
> > > There are a lot of changes in the second PR, and it will require a lot
> > > of testing due to the breadth of changes from dependency upgrades.
> > > Please take a look at both of these PRs. The earlier we can get
> > > feedback, the sooner we can get Java 11 compatibility in an Apache
> > > NiFi release.
> > >
> > > I would like to discuss with the community the following plan for
> > > upcoming NiFi releases as they pertain to Java 11:
> > >
> > > For the 1.x release line, from either 1.10 or 1.11 (depending on when
> > > these two PRs are merged to master) and onward, NiFi will be compatible
> > > with both Java 8 and 11 for building and running NiFi:
> > >
> > >    - Build on Java 8, run on Java 8
> > >    - Build on Java 8, run on Java 11
> > >    - Build on Java 11, run on Java 11
> > >
> > > The 1.x release line will contain convenience builds for Java 8, and
> > > will NOT contain convenience builds created from Java 11. Users that
> > > want to run NiFi 1.1x.y on Java 11 with Java 11 bytecode will have to
> > > build from the released source code.
> > >
> > > Once the Java 11 compatibility features [1] [2] are merged to master,
> > > PR reviews will require building and testing with Java 8 and Java 11
> > > before the PR is merged to master. We will add a new build to our
> > > Travis CI instance to cover building on Java 11, which should help with
> > > the PR process, since build status from Appveyor and Travis are shown
> > > on the PR. This will allow an easier transition to the NiFi 2.x release
> > > line.
> > >
> > > For the 2.x release line, the minimum Java version would be Java 11, at
> > > which point developers would be able to start using Java 11 language
> > > features. Java 12 is not designated to be an LTS release, with its
> > > support ending in September 2019. I would not recommend making our
> > > minimum Java version a non-LTS Java release unless we are prepared to
> > > do NiFi releases to update the minimum Java version before the support
> > > for that version of Java ends.
> > >
> > > Taking this approach means we can reasonably allow users to run on Java
> > > 11 in the 1.x release line, ensure that we do not add commits which are
> > > not Java 11 compatible, and make it easier to transition into the 2.x
> > > line for NiFi.
> > >
> > > - Jeff
> > >
> > > [1] https://github.com/apache/nifi/pull/3174
> > > [2] https://github.com/apache/nifi/pull/3404
> > >
> >
>


[DISCUSS] Deprecate processors who have Record oriented counterpart?

2019-02-23 Thread Sivaprasanna
Team,

Ever since the Record-based processors were first introduced, there has been
active development in improving the Record APIs and constant interest in
introducing new sets of Record-oriented processors. It has gone to a level
where almost all the processors that deal with mainstream tech have a
Record-based counterpart, such as the processors for MongoDB, Kafka, RDBMS,
HBase, etc. These Record-based processors have overcome the limitations of
the standard processors, letting us build flows which are concise and
efficient, especially when we are dealing with structured data. Moreover,
with the recent release of NiFi (1.9), we now have a new feature that offers
schema inference capability, which further simplifies the process of building
flows with such processors. Having said that, I'm wondering if this is the
right time to raise the question of deprecating processors which the
community believes have a much better Record-oriented counterpart covering
all the functionality currently offered by the standard processor.

There are a few things that have to be talked about, like how the deprecated
processor should be displayed in the UI, etc., but even before going down
that route, I want to understand the community's thoughts on this.

Thanks,
Sivaprasanna


Re: Which version to tag?

2019-02-19 Thread Sivaprasanna
I still don’t see a version tag for 1.10 on the Jira board.

On Wed, 20 Feb 2019 at 10:04 AM, Joe Witt  wrote:

> ok now master is at 1.10.0-SNAPSHOT.
>
> Thanks
>
> On Tue, Feb 19, 2019 at 10:37 PM Joe Witt  wrote:
>
> > You are free to merge to master on approved/reviewed items as per normal.
> > Current master is on 1.9.0-SNAPSHOT so your edit would need to be as
> > well. Once the RC completes (which is the case in 14 minutes at current
> > voting) I'll update the versions on master to 1.10.0-SNAPSHOT and do the
> > signed tag dance/etc.
> >
> > Thanks
> >
> > On Tue, Feb 19, 2019 at 6:35 AM Mike Thomsen 
> > wrote:
> >
> >> Probably best to just hold off until the RC2 voting is done because you
> >> might have to update it twice if it gets tagged 1.10 and 1.9RC2 fails.
> >>
> >> On Tue, Feb 19, 2019 at 6:08 AM Sivaprasanna  >
> >> wrote:
> >>
> >> > All,
> >> >
> >> > I have just now merged PR 3285 to master. The associated Jira is
> >> NIFI-5987
> >> > <https://issues.apache.org/jira/browse/NIFI-5987>. Now, since Apache
> >> NiFi
> >> > 1.9.0 rc-2 is in voting, I'm confused whether to tag this particular
> >> issue
> >> > with 1.9.0 version or not. I don't see the version tag for the next
> one
> >> > (1.10).
> >> >
> >> > -
> >> > Sivaprasanna
> >> >
> >>
> >
>


Which version to tag?

2019-02-19 Thread Sivaprasanna
All,

I have just now merged PR 3285 to master. The associated Jira is NIFI-5987
<https://issues.apache.org/jira/browse/NIFI-5987>. Now, since Apache NiFi
1.9.0 RC2 is in voting, I'm confused whether to tag this particular issue
with the 1.9.0 version or not. I don't see the version tag for the next one
(1.10).

-
Sivaprasanna


Re: Help needed in contributing to nifi

2019-01-24 Thread Sivaprasanna
Hi Khamar,

It’s great to know that you are interested in contributing to the Apache
NiFi project. First place to start would be
https://issues.apache.org/jira/projects/NIFI/
Search if there is a Jira created already for the task you are interested
in. If you don’t find any, feel free to create one.

If you're also interested in contributing to the code, please take a look
at this contributor guide:
https://cwiki.apache.org/confluence/display/NIFI/Contributor+Guide
It basically covers installing Git, cloning the project, and setting it up
locally. It also gives you a detailed overview of the code review process and
other useful tips.

Thanks,
Sivaprasanna

On Thu, 24 Jan 2019 at 11:08 AM, khamar shaikh 
wrote:

> Hi, I am a student and new to open source contribution. I want to add data
> pipeline capability with OrientDB. Can someone please help me with how to
> proceed?
>
> --
> Thank you and Regards,
>
> Khamar Ali Shaikh
>


Re: [ANNOUNCE] New Apache NiFi Committer Nathan Gough

2019-01-02 Thread Sivaprasanna
Congratulations, Nathan!

On Thu, 3 Jan 2019 at 8:12 AM, Joe Witt  wrote:

> Congrats and thanks Nathan!
>
> On Wed, Jan 2, 2019 at 9:30 PM Tony Kurc  wrote:
> >
> > On behalf of the Apache NiFi PMC, I am very pleased to announce that
> > Nathan has accepted the PMC's invitation to become a committer on the
> > Apache NiFi project. We greatly appreciate all of Nathan's hard work and
> > generous contributions to the project. We look forward to his continued
> > involvement in the project.
> >
> > What stood out for the PMC was Nathan's long history of code
> > contributions, especially in the area of security, and his always helpful
> > conduct on the mailing lists, the Jiras, reviews, and releases. Thanks
> > Nathan!
> >
> > Welcome and congratulations!
> >
> > - Tony
>


Re: [ANNOUNCE] New Apache NiFi Committer Ed Berezitsky

2019-01-02 Thread Sivaprasanna
Congratulations, Ed!

-
Sivaprasanna

On Thu, 3 Jan 2019 at 8:12 AM, Joe Witt  wrote:

> Congrats and thanks Ed!
>
> On Wed, Jan 2, 2019 at 9:36 PM Tony Kurc  wrote:
> >
> > On behalf of the Apache NiFi PMC, I am very pleased to announce that Ed
> > has accepted the PMC's invitation to become a committer on the Apache
> > NiFi project. We greatly appreciate all of Ed's hard work and generous
> > contributions to the project. We look forward to his continued
> > involvement in the project.
> >
> > Ed has been contributing code to the project through most of 2018, in
> > areas such as HBase and HDFS, and fixing some long-standing bugs. Also,
> > many of you have had the pleasure of interacting with him on the dev and
> > users mailing lists, epitomizing The Apache Way.
> >
> > Welcome and congratulations!
> >
> > Tony
>


Re: [DISCUSS] Early, voluntary relocation to GitBox

2018-12-07 Thread Sivaprasanna
+1 (non-binding)

I’m in. Thanks for doing it, Aldrin.


On Sat, 8 Dec 2018 at 7:32 AM, James Wing  wrote:

> +1, thanks for volunteering.
>
> > On Dec 7, 2018, at 13:39, Kevin Doran  wrote:
> >
> > +1
> >
> > On 12/7/18, 15:17, "Andy LoPresto"  wrote:
> >
> >+1.
> >
> >Andy LoPresto
> >alopre...@apache.org
> >alopresto.apa...@gmail.com
> >PGP Fingerprint: 70EC B3E5 98A6 5A3F D3C4  BACE 3C6E F65B 2F7D EF69
> >
> >> On Dec 7, 2018, at 11:11 AM, Aldrin Piri  wrote:
> >>
> >> Hey folks,
> >>
> >> Daniel Gruno sent an email to the dev list about the deprecation of our
> >> git-wip repositories [1] (these are the canonical repos that committers
> >> push changes to and are currently mirrored to GitHub) and transitioning
> >> to GitBox [2].
> >>
> >> As highlighted in that email, this will not be an optional change, only
> >> in when and how it happens. There was a previous discussion of this
> >> topic [3] where it was generally well received, but I think some of the
> >> logistics were never mapped out and thus the actual request to do so
> >> was never executed.
> >>
> >> I am proposing we volunteer for the early relocation. The process looks
> >> to be fairly straightforward and should ultimately only result in
> >> requiring our contributors to add/replace their original git-wip based
> >> remotes.
> >>
> >> If folks are in favor, I am happy to file the requisite INFRA ticket
> >> and provide the associated communication/docs to the community to make
> >> this happen. Again, this is only about making the choice to perform
> >> this migration now, in a coordinated manner, in lieu of the mass
> >> migration that would happen later.
> >>
> >> From a project management standpoint, I think it is a nice bit of
> >> functionality that lets us better curate our open PRs and, given my
> >> prior interest in the subject, I would like to see it happen.
> >>
> >> --aldrin
> >>
> >> [1]
> >>
> https://lists.apache.org/thread.html/8247bb17671d6131b1d7a646b4c018b21aac390c4f627d8fb1f421b2@%3Cdev.nifi.apache.org%3E
> >> [2] https://gitbox.apache.org/
> >> [3]
> >>
> https://lists.apache.org/thread.html/de5e103994f356b1b8a572410938eef44af8cb352210e35305c04bc9@%3Cdev.nifi.apache.org%3E
> >
> >
>


Re: [DISCUSS] Extension Registry

2018-11-13 Thread Sivaprasanna
One quick question. With the extension registry, my understanding is that we
would try to reduce the overall NiFi size by offloading certain existing NAR
bundles to the extension registry. So are we expecting the extension registry
to also come with an ability/mechanism that lets a user download these
bundles?

-
Sivaprasanna

On Tue, 13 Nov 2018 at 11:07 PM, Joe Witt  wrote:

> Bryan
>
> Very exciting to see this under way!!!  We desperately have to get our
> core nifi build size way down and make it far easier for people to
> contribute new processors.  We have a long line of extensions that
> could be really useful/valuable and this will help unlock that while
> improving the user experience tremendously.
>
> For the doc/concerns noted above, we should also add/mention the
> relationships between NARs (dependencies between NARs for APIs, controller
> services, parent NARs, etc.); we want to have a way to manage/show that, or
> a user experience for it. Maybe not a day-1 thing but important to call
> out.
>
> Thanks!
> Joe
> On Tue, Nov 13, 2018 at 12:22 PM Bryan Bende  wrote:
> >
> > All,
> >
> > We've needed the elusive extension registry for quite some time now,
> > and with NiFi Registry I think we are in a good place to make some
> > progress in this area.
> >
> > I've started looking into adding "extension bundles" to NiFi Registry
> > as the next type of versioned item, along side the existing versioned
> > flows, and I wanted to take a minute to outline how that approach
> > could work before getting too far into it.
> >
> > Also, I'd like to focus this discussion on the design and
> > functionality of the extension registry, and not on how the community
> > is going to use it. Topics like hosting a central registry, changing
> > the build process, restructuring the git repo, releasing NARs, etc,
> > are all important topics, but first we need an extension registry
> > before we can do any of that :)
> >
> > Here is a high-level description of what needs to be done...
> >
> > NiFi Registry
> >
> > - Add a new type of item called an extension bundle, where each bundle
> > can contain one or more extensions or APIs
> >
> > - Support bundles for traditional NiFi (aka NARs) and also bundles for
> > MiNiFi CPP
> >
> > - Ability to upload the binary artifact for a bundle and extract the
> > metadata about the bundle, and metadata about the extensions contained
> > in the bundle (more on this later)
> >
> > - Provide a pluggable storage provider for saving the content of each
> > extension bundle so that we can have different implementations like
> > local fileysystem, S3, and other object stores
> >
> > - Provide a REST API for listing and retrieving available bundles,
> > integrate this into the registry Java client and NiFi CLI
> >
> > NAR Maven Plugin
> >
> > - Generate a descriptor for each component in the NAR which will
> > provide information like the description, tags, restricted or not,
> > etc.
> >
> > - These descriptors will be parsed by NiFi Registry when a NAR is
> > being uploaded so that NiFi Registry will know about the extensions
> > contained with in the NAR
> >
> > NiFi
> >
> > - Provide some type of extension manager experience where users can
> > search, browse, & install bundles that are available in any of the
> > registry clients that have been defined
> >
> > - Introduce a new security policy to control which users are allowed
> > to access the extension manager
> >
> > - Installing a bundle should load the NAR and make the extensions
> > available leveraging the recent work done to auto-load new NARs
> >
> > - Importing versioned flows from registry should provide an easy way
> > to install bundles that are required by the flow, but missing from the
> > local NiFi instance
> >
> >
> > If anyone has any thoughts or concerns about this approach, please let
> me know.
> >
> > Thanks,
> >
> > Bryan
>


Re: Local development and testing w/ kerberos

2018-10-24 Thread Sivaprasanna
Can you share the authorizers.xml? I guess something wrong with the CN
that’s mentioned there.

-
Sivaprasanna
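
For context, the "Unable to locate initial admin ... to seed policies" error
quoted below comes from FileAccessPolicyProvider. A minimal sketch of the
relevant authorizers.xml entries follows; the identifiers, file paths, and
the admin identity are placeholders, not the actual values from this thread.
The initial admin identity must exactly match the authenticated identity
(for Kerberos, typically user@REALM, after any identity-mapping rules in
nifi.properties are applied), and it must also be listed as an initial user
identity in the user group provider:

<authorizers>
    <userGroupProvider>
        <identifier>file-user-group-provider</identifier>
        <class>org.apache.nifi.authorization.FileUserGroupProvider</class>
        <property name="Users File">./conf/users.xml</property>
        <property name="Initial User Identity 1">user@EXAMPLE.COM</property>
    </userGroupProvider>
    <accessPolicyProvider>
        <identifier>file-access-policy-provider</identifier>
        <class>org.apache.nifi.authorization.FileAccessPolicyProvider</class>
        <property name="User Group Provider">file-user-group-provider</property>
        <property name="Authorizations File">./conf/authorizations.xml</property>
        <property name="Initial Admin Identity">user@EXAMPLE.COM</property>
    </accessPolicyProvider>
</authorizers>

Note that if users.xml and authorizations.xml already exist from an earlier
run, the initial seeding is skipped until those files are removed.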

On Wed, 24 Oct 2018 at 8:48 PM, Mike Thomsen  wrote:

> Alright, I think I'm pretty close here. I followed all of those steps,
> except I changed bbende to mthomsen.
>
> * I can run kinit mthom...@nifi.apache.org and it works.
> * I can run klist and see the expected output.
>
> When I bring up NiFi, I get the following (trimmed for brevity):
>
> Caused by:
> org.apache.nifi.authorization.exception.AuthorizerCreationException:
> org.apache.nifi.authorization.exception.AuthorizerCreationException: Unable
> to locate initial admin mthom...@nifi.apache.org to seed policies
> at
>
> org.apache.nifi.authorization.FileAccessPolicyProvider.onConfigured(FileAccessPolicyProvider.java:263)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
>
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at
>
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at
>
> org.apache.nifi.authorization.AccessPolicyProviderInvocationHandler.invoke(AccessPolicyProviderInvocationHandler.java:54)
> at com.sun.proxy.$Proxy76.onConfigured(Unknown Source)
> at
>
> org.apache.nifi.authorization.AuthorizerFactoryBean.getObject(AuthorizerFactoryBean.java:152)
> at
>
> org.springframework.beans.factory.support.FactoryBeanRegistrySupport.doGetObjectFromFactoryBean(FactoryBeanRegistrySupport.java:178)
> ... 96 common frames omitted
> Caused by:
> org.apache.nifi.authorization.exception.AuthorizerCreationException: Unable
> to locate initial admin mthom...@nifi.apache.org to seed policies
> at
>
> org.apache.nifi.authorization.FileAccessPolicyProvider.populateInitialAdmin(FileAccessPolicyProvider.java:598)
> at
>
> org.apache.nifi.authorization.FileAccessPolicyProvider.load(FileAccessPolicyProvider.java:541)
> at
>
> org.apache.nifi.authorization.FileAccessPolicyProvider.onConfigured(FileAccessPolicyProvider.java:254)
> ... 104 common frames omitted
>
> I double-checked the paths to krb5.conf and the keytab and they're both
> pointing to /tmp/docker-kdc
>
> Any ideas?
>
> Thanks,
>
> Mike
>
>
> On Wed, Oct 24, 2018 at 10:28 AM Mike Thomsen 
> wrote:
>
> > Awesome, thanks Bryan! I'm halfway through that (got klist view) and it's
> > working great so far.
> >
> > On Wed, Oct 24, 2018 at 9:36 AM Bryan Bende  wrote:
> >
> >> There is a docker-kdc project that is easy to use:
> >>
> >>
> >>
> https://bryanbende.com/development/2016/08/31/apache-nifi-1.0.0-kerberos-authentication
> >>
> >> It was made before docker for mac was good/popular and it previously
> >> relied on boot2docker, but I made the following modification to not
> >> use boot2docker
> >>
> >> docker-kdc$ git diff
> >> diff --git a/kdc b/kdc
> >> index 9410fc5..0a887e1 100755
> >> --- a/kdc
> >> +++ b/kdc
> >> @@ -90,10 +90,10 @@ CONTROL_VM='VBoxManage controlvm boot2docker-vm'
> >>  GET_KDC_HOST="echo $KDC_NATHOST"
> >>
> >>  # Adjust container in case of OSX.
> >> -if [[ $OSTYPE =~ darwin.+ ]]; then
> >> -   CONTAINER='boot2docker'
> >> -   GET_KDC_HOST='boot2docker ip'
> >> -fi
> >> +#if [[ $OSTYPE =~ darwin.+ ]]; then
> >> +#  CONTAINER='boot2docker'
> >> +#  GET_KDC_HOST='boot2docker ip'
> >> +#fi
> >>
> >> On Wed, Oct 24, 2018 at 7:35 AM Mike Thomsen 
> >> wrote:
> >> >
> >> > Looking for suggestions on local development and testing with
> kerberos.
> >> We
> >> > have a kerberized cluster set up in an AWS instance, but it's more for
> >> UAT
> >> > than development. Anyone have any suggestions/experience, say, setting
> >> up a
> >> > Mac or Linux box for developing and testing like this?
> >> >
> >> > Thanks,
> >> >
> >> > Mike
> >>
> >
>


Re: Need help with Controller Service implementation

2018-10-22 Thread Sivaprasanna
Thanks Bryan, Mike, Matt. I had both things wrong. Now I have fixed it.
Thanks once again. :)

-
Sivaprasanna
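
For future readers, the two fixes discussed below are (1) the ServiceLoader
provider file location and (2) the NAR dependency wiring. A sketch of the
provider file, with a hypothetical implementation class name for
illustration:

src/main/resources/META-INF/services/org.apache.nifi.controller.ControllerService
    (content: one fully-qualified implementation class per line, e.g.
     org.apache.nifi.example.StandardExampleService)

In addition, the processor NAR and the service implementation NAR should each
declare a nar-type Maven dependency on the service API NAR, so that both are
loaded against the same API classes.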

On Mon, Oct 22, 2018 at 6:34 PM Bryan Bende  wrote:

> Usually this is due to a missing NAR dependency somewhere,
> double-check the dependencies between the processors NAR and service
> API NAR, and between the service impl NAR and service API NAR.
>
>
> https://cwiki.apache.org/confluence/display/NIFI/Maven+Projects+for+Extensions#MavenProjectsforExtensions-LinkingProcessorsandControllerServices
> On Mon, Oct 22, 2018 at 6:01 AM Mike Thomsen 
> wrote:
> >
> > What Matt said. META-INF.services might be how the IDE shows it (it does
> > that for docs in IntelliJ), but it is really META-INF/services on the
> disk.
> >
> > On Sun, Oct 21, 2018 at 2:16 PM Matt Burgess 
> wrote:
> >
> > > I’m not at my computer but I think your ControllerService file (for
> > > ServiceLoader) might need to be in META-INF/services instead of
> > > META-INF.services?
> > >
> > > Sent from my iPhone
> > >
> > > > On Oct 21, 2018, at 1:56 PM, Sivaprasanna  >
> > > wrote:
> > > >
> > > > Team,
> > > >
> > > > I'm working on a controller service implementation (NIFI-5621) and I
> > > have added new PropertyDescriptor to all the processors that would use
> this
> > > CS. This is the first time I'm writing a CS implementation so I went
> > > through the dev guide and implementation of other CS's across the NiFi
> > > project. The problem is, when I built the bundle and when I try to
> > > configure a processor with this new controller service, I'm getting "No
> > > controller service types found that are applicable for this property."
> I
> > > have attached the picture for reference. My code changes can be found
> at
> > > 7a31e122
> > > >
> > > > I know I'm missing something but where I'm missing, is something I
> don't
> > > know. Any help would be appreciated.
> > > >
> > > > -
> > > > Sivaprasanna
> > >
>


Re: [VOTE] Release Apache NiFi 1.8.0 (RC2)

2018-10-22 Thread Sivaprasanna
+1 (non-binding)

- Build passed
- Verified checksum
- Ran simple flows and everything seems to be stable

Thanks for RM'ing, Jeff. +1

-
Sivaprasanna

On Mon, Oct 22, 2018 at 12:54 PM Koji Kawamura 
wrote:

> +1 (binding).
>
> Build passed, confirmed few existing flows with a secure cluster.
> On Mon, Oct 22, 2018 at 12:01 PM James Wing  wrote:
> >
> > +1 (binding).  Thanks again, Jeff.
> >
> > On Sat, Oct 20, 2018 at 8:11 PM Jeff  wrote:
> >
> > > Hello,
> > >
> > > I am pleased to be calling this vote for the source release of Apache
> NiFi
> > > nifi-1.8.0.
> > >
> > > The source zip, including signatures, digests, etc. can be found at:
> > > https://repository.apache.org/content/repositories/orgapachenifi-1134
> > >
> > > The Git tag is nifi-1.8.0-RC2
> > > The Git commit ID is 19bdd375c32c97e2b7dfd41e5ffe65f5e1eb2435
> > >
> > > https://git-wip-us.apache.org/repos/asf?p=nifi.git;a=commit;h=19bdd375c32c97e2b7dfd41e5ffe65f5e1eb2435
> > >
> > > Checksums of nifi-1.8.0-source-release.zip:
> > > SHA256: 72dc2934f70f41e0c62e0aeb2bdc48e9feaa743dc06319cbed42da04bdc0f827
> > > SHA512: 012194f79d4bd5060032588e29f5e9c4240aa5e4758946a6cbcc89c0a1499de9db0c46d3f76e5ee694f0f9345c5f1bee3f3e315ef6fcc1194447958cb3f8b003
> > >
> > > Release artifacts are signed with the following key:
> > > https://people.apache.org/keys/committer/jstorck.asc
> > >
> > > KEYS file available here:
> > > https://dist.apache.org/repos/dist/release/nifi/KEYS
> > >
> > > 204 issues were closed/resolved for this release:
> > > https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12316020&version=12343482
> > >
> > > Release note highlights can be found here:
> > > https://cwiki.apache.org/confluence/display/NIFI/Release+Notes#ReleaseNotes-Version1.8.0
> > >
> > > The vote will be open for 96 hours.
> > > Please download the release candidate and evaluate the necessary items
> > > including checking hashes, signatures, build
> > > from source, and test. Then please vote:
> > >
> > > [ ] +1 Release this package as nifi-1.8.0
> > > [ ] +0 no opinion
> > > [ ] -1 Do not release this package because...
> > >
>


Need help with Controller Service implementation

2018-10-21 Thread Sivaprasanna
Team,

I'm working on a controller service implementation (NIFI-5621) and I have
added a new PropertyDescriptor to all the processors that would use this CS.
This is the first time I'm writing a CS implementation, so I went through the
dev guide and the implementations of other CSs across the NiFi project. The
problem is, when I build the bundle and try to configure a processor with
this new controller service, I get "No controller service types found that
are applicable for this property." I have attached a picture for reference.
My code changes can be found at 7a31e122
<https://github.com/zenfenan/nifi/commit/7a31e122e5b32bc708924c06f06f3c10f4291765>

I know I'm missing something, but where, is something I don't know. Any help
would be appreciated.

-
Sivaprasanna


Re: [DISCUSS] Closing in on a release of NiFi 1.8.0?

2018-10-15 Thread Sivaprasanna
Great. Thanks. :)

-
Sivaprasanna

On Mon, Oct 15, 2018 at 7:09 AM Koji Kawamura 
wrote:

> Jeff, Sivaprasanna,
>
> NIFI-5698 (PR3073) Fixing DeleteAzureBlob bug is merged.
>
> Thanks,
> Koji
> On Mon, Oct 15, 2018 at 10:18 AM Koji Kawamura 
> wrote:
> >
> > Thank you for the fix Sivaprasanna,
> > I have Azure account. Reviewing it now.
> >
> > Koji
> > On Sun, Oct 14, 2018 at 11:21 PM Jeff  wrote:
> > >
> > > Sivaprasanna,
> > >
> > > Thanks for submitting a pull request for that issue!  Later today or
> > > tomorrow I'll have to check to see if I've already used up my free-tier
> > > access to Azure.  If I still have access, I can review your PR and
> we'll
> > > get it into 1.8.0.
> > >
> > > On Sun, Oct 14, 2018 at 4:30 AM Sivaprasanna <
> sivaprasanna...@gmail.com>
> > > wrote:
> > >
> > > > All - Just found one bug with the DeleteAzureBlobStorage processor.
> > > > It was shared by one user on StackOverflow [1] and I later confirmed
> > > > it. It looks to have been introduced by NIFI-4199. I have created a
> > > > Jira [2] and made the necessary changes (not huge, just a few lines)
> > > > and raised a PR [3]. I think, if we can spend a little time getting
> > > > it reviewed, we can mark it for 1.8.0. Thoughts?
> > > >
> > > > [1] -
> > > >
> > > >
> https://stackoverflow.com/questions/52766991/apache-nifi-deleteazureblobstorage-processor-is-throwing-an-error
> > > > [2] - https://issues.apache.org/jira/browse/NIFI-5698
> > > > [3] - https://github.com/apache/nifi/pull/3073
> > > >
> > > > -
> > > > Sivaprasanna
> > > >
> > > > On Fri, Oct 12, 2018 at 9:05 PM Mike Thomsen  >
> > > > wrote:
> > > >
> > > > > 4811 should be ready for review now. Rebased and cleaned it up
> with a
> > > > full
> > > > > listing of the Spring dependencies.
> > > > >
> > > > > On Fri, Oct 12, 2018 at 11:23 AM Joe Witt 
> wrote:
> > > > >
> > > > > > Jeff,
> > > > > >
> > > > > > I think for anything not tagged to 1.8.0 we just keep rolling.
> > > > > > For anything tagged 1.8.0 that should not be, we should remove it
> > > > > > until ready. For things tagged to 1.8.0 that cannot be moved, we
> > > > > > should resolve. For the tagged 1.8.0 section you had:
> > > > > >
> > > > > >- NIFI-4811 <https://issues.apache.org/jira/browse/NIFI-4811> -
> > > > > >Use a newer version of spring-data-redis
> > > > > >   - PR 2856 <https://github.com/apache/nifi/pull/2856>
> > > > > > *This needs to be resolved by either reverting the commit or
> > > > > > ensuring L&N accurately reflects all. We have to do this always
> > > > > > and for every nar. The process isn't easy or fun but it is
> > > > > > necessary to produce valid ASF releases. Landing commits which
> > > > > > change dependencies requires this due diligence. Now, we've put a
> > > > > > lot of energy into updating Spring dependencies because some older
> > > > > > Spring libs had vulnerabilities which, while we likely aren't
> > > > > > exposed to them, we want to fix in due course. So reverting may
> > > > > > require more analysis than if we just get L&N fixed with this new
> > > > > > change. I commented on the JIRA. But this needs to be resolved.
> > > > > >
> > > > > >
> > > > > >- NIFI-5426 <https://issues.apache.org/jira/browse/NIFI-5426> -
> > > > > >Use NIO.2 API for ListFile to avoid multiple disk reads
> > > > > >   - PR 2889 <https://github.com/apache/nifi/pull/2889>
> > > > > > *This just needed to be marked resolved. The commit went in the
> > > > > > day after we cut 1.7.1. So this one is sorted.
> > > > > >
> > > > > >- NIFI-5448 <https://issues.apache.org/jira/browse/NIFI-5448> -
> > > > > >Failed EL date parsing live-locks processors without a failure
> > > > > >relationship
> > > > > > * The commit needs to be reverted. I'm working on that now

Re: Overwriting attribute in GetMongo

2018-09-27 Thread Sivaprasanna
Rajesh, I don’t think the images were uploaded properly. If possible,
please export the flow as a template and attach it here.

-
Sivaprasanna

On Thu, 27 Sep 2018 at 7:31 PM, Rajesh Biswas 
wrote:

> Please find the flow,
>
> [flow image not included]
>
> Below is the configuration for ListenHTTP processor
>
> [image not included]
>
> Below is the configuration for PutMongo processor
>
> [image not included]
>
> Thanks and Regards,
>
> Rajesh Biswas | +91 9886433461 | www.bridgera.com
>
> -Original Message-
> From: Mike Thomsen [mailto:mikerthom...@gmail.com]
> Sent: Thursday, September 27, 2018 7:08 PM
> To: dev@nifi.apache.org
> Subject: Re: Overwriting attribute in GetMongo
>
> Where does GetMongo sit relative to the connected ListenHTTP and PutMongo
> processors? That's the critical part that's missing here in helping you.
>
> Thanks,
>
> Mike
>
> On Thu, Sep 27, 2018 at 9:20 AM Rajesh Biswas 
> wrote:
>
> > Hello NiFi Team,
> >
> > We are using the ListenHTTP processor to read HTTP input from a client,
> > and we have a PutMongo processor after that.
> >
> > We have seen that the attributes we get from ListenHTTP are overwritten
> > when we call the GetMongo collection.
> >
> > Would you please suggest how to preserve the existing request attributes
> > and add the document attributes fetched through the GetMongo processor.
> >
> > Thanks and Regards,
> >
> > Rajesh Biswas | +91 9886433461 | <http://www.bridgera.com/>
> > www.bridgera.com


Re: [ANNOUNCE] New NiFi PMC member Jeremy Dyer

2018-07-31 Thread Sivaprasanna
Congratulations, Jeremy. Well deserved.

On Tue, 31 Jul 2018 at 9:44 PM, Scott Aslan  wrote:

> Congrats Jeremy!
>
> On Tue, Jul 31, 2018 at 11:02 AM Michael Moser  wrote:
>
> >  Congrats, welcome, and thank you for your work.
> >
> >
> > On Tue, Jul 31, 2018 at 9:21 AM Otto Fowler 
> > wrote:
> >
> > > Congratulations!
> > >
> > >
> > > On July 31, 2018 at 08:36:48, Tony Kurc (tk...@apache.org) wrote:
> > >
> > > All,
> > >
> > > On behalf of the Apache NiFi PMC, I am pleased to announce that Jeremy
> > > Dyer has accepted the PMC's invitation to join the Apache NiFi PMC.
> > >
> > > Jeremy has been a long-time contributor to the project - across many
> > > different parts of the project, including NiFi and MiNiFi -
> > > contributing code, reviews, release verification, and help on the
> > > mailing lists. He's performed the challenging, detail-oriented work of
> > > acting as a release manager for both the Java and C++ versions of
> > > MiNiFi (0.5.0 of each).
> > >
> > > Congratulations Jeremy and well deserved!
> > >
> > > Tony
> > >
> >
>


Re: [ANNOUNCE] New NiFi PMC member Kevin Doran

2018-07-31 Thread Sivaprasanna
Congratulations, Kevin.

On Tue, 31 Jul 2018 at 9:43 PM, Scott Aslan  wrote:

> Congrats Kevin!
>
> On Tue, Jul 31, 2018 at 11:01 AM Michael Moser  wrote:
>
> > Congrats, welcome, and thank you for your work.
> >
> >
> > On Tue, Jul 31, 2018 at 9:21 AM Otto Fowler 
> > wrote:
> >
> > > Congratulations!
> > >
> > >
> > > On July 31, 2018 at 08:26:34, Tony Kurc (tk...@apache.org) wrote:
> > >
> > > NiFi Community,
> > >
> > > On behalf of the Apache NiFi PMC, I am pleased to announce that Kevin
> > Doran
> > > has accepted the PMC's invitation to join the Apache NiFi PMC.
> > >
> > > In addition to being a regular code contributor to the project for
> quite
> > > some time, Kevin has been hard to miss on the mailing lists, especially
> > on
> > > NiFi Registry threads. We all appreciate his hard work getting releases
> > > "out the door", helping verify releases and recently doing release
> > manager
> > > duty for the NiFi Registry 0.2.0.
> > >
> > > Please join us in congratulating and welcoming Kevin to the Apache NiFi
> > > PMC.
> > >
> > > Tony
> > >
> >
>


Re: [VOTE] Release Apache NiFi 1.7.0

2018-06-21 Thread Sivaprasanna
+1 (Non-binding)

I did:
* Checked the sums and they matched
* mvn clean install -Pcontrib-check,include-grpc
* Run the built assembly in Windows 10 x64 and CentOS 7 x64
* Tested some data flows that used AWS bundle and Hadoop bundle. Everything
was a success.

One weird thing I noticed: after a successful build, there was one leftover
directory in the unpackaged source directory named '${project.basedir}'.

Thanks.

On Wed, Jun 20, 2018 at 11:15 PM, Matt Burgess  wrote:

> +1 (binding)
>
> Verified all artifacts, full build with contrib-check, verified the
> Hive 3 NAR is not in the assembly unless the include-hive3 profile is
> activated, also ran through various flows to exercise Hive 3 and
> PutORC functionality (and their associated Record Readers, Writers,
> and intermediate processors).
> On Wed, Jun 20, 2018 at 3:16 AM Andy LoPresto 
> wrote:
> >
> > Hello,
> >
> > I am pleased to be calling this vote for the source release of Apache
> NiFi nifi-1.7.0.
> >
> > The source zip, including signatures, digests, etc. can be found at:
> > https://repository.apache.org/content/repositories/orgapachenifi-1127
> >
> > and
> >
> > https://dist.apache.org/repos/dist/dev/nifi/nifi-1.7.0
> >
> > The Git tag is nifi-1.7.0-RC1
> > The Git commit ID is 99bcd1f88dc826f857ae4ab33e842110bfc6ce21
> > https://git-wip-us.apache.org/repos/asf?p=nifi.git;a=commit;h=99bcd1f88dc826f857ae4ab33e842110bfc6ce21
> >
> > Checksums of nifi-1.7.0-source-release.zip:
> > SHA1: 11086ef532bb51462d7e1ac818f6308d4ac62f03
> > SHA256: b616f985d486af3d05c04e375f952a4a5678f486017a2211657d5ba03aaaf563
> > SHA512: d81e9c6eb7fc51905d6f6629b25151fc3d8af7a3cd7cbc3aa03be390c0561858d614b62d8379a90fdb736fcf5c1b4832f4e050fdcfcd786e9615a0b5cc1d563d
> >
> > Release artifacts are signed with the following key:
> > https://people.apache.org/keys/committer/alopresto.asc
> >
> > KEYS file available here:
> > https://dist.apache.org/repos/dist/release/nifi/KEYS
> >
> > 194 issues were closed/resolved for this release:
> > https://issues.apache.org/jira/secure/ReleaseNote.jspa?version=12342979&projectId=12316020
> >
> > Release note highlights can be found here:
> > https://cwiki.apache.org/confluence/display/NIFI/Release+Notes#ReleaseNotes-Version1.7.0
> >
> > The vote will be open for 72 hours.
> > Please download the release candidate and evaluate the necessary items
> including checking hashes, signatures, build
> > from source, and test. Then please vote:
> >
> > [ ] +1 Release this package as nifi-1.7.0
> > [ ] +0 no opinion
> > [ ] -1 Do not release this package because…
> >
> > Andy LoPresto
> > alopre...@apache.org
> > alopresto.apa...@gmail.com
> > PGP Fingerprint: 70EC B3E5 98A6 5A3F D3C4  BACE 3C6E F65B 2F7D EF69
> >
>


Re: Adding new data anonymization processor bundle

2018-06-20 Thread Sivaprasanna
Wow, I didn't realize there was a Jira already. I'm interested and would be
happy to contribute my time and effort to this.

On Wed, Jun 20, 2018 at 10:34 PM, Matt Burgess  wrote:

> I think is a great idea, I filed a Jira [1] a while ago in case
> someone wanted to start working on it (or in case I got a chance). It
> mentions ARX but any Apache-friendly implementation is of course
> welcome. I think it should be in its own bundle as it is functionality
> separate from all our other bundles (and not ubiquitous enough to put
> in the standard NAR).
>
> Glad to hear you're interested in this, please feel free to reach out
> with any questions and I too would be happy to review any
> contributions.
>
> Thanks,
> Matt
>
> [1] https://issues.apache.org/jira/browse/NIFI-4492
>
> On Wed, Jun 20, 2018 at 12:57 PM Mike Thomsen 
> wrote:
> >
> > There's a framework called ARX that could be very useful for this. The
> > only question you have is how compliant it would be with different sets
> > of distinct legal requirements for privacy handling. In the absence of
> > strong legal guidance, I'd say err on the side of complying with health
> > care regulations, because that's where you're likely to find the
> > clearest guidance and established tools.
> >
> > Ping me on any PR you send.
> >
> > On Wed, Jun 20, 2018 at 12:49 PM Sivaprasanna  >
> > wrote:
> >
> > > With data becoming more critical and substantial to business
> development,
> > > new stringent regulations & law are getting introduced (GDPR being a
> recent
> > > example), I've been spending some time lately doing research on data
> > > anonymization and after some hefty thinking, I finally decided to go
> ahead
> > > with the creation of new processor bundle that has processors like
> > > 'AnonymizeRecord', 'DeanonymizeRecord' (not quite sure about the name
> > > though). Following are my questions:
> > >
> > >- What do you guys think about these proposed processors?
> > >- If the processors are okay to be introduced, are they "standard"
> > >enough to get them added to our 'nifi-standard-bundles' module or
> is it
> > >better to keep it separated much like others like AWS, Azure
> bundles,
> > > etc.
> > >
> > > Having said this, I'm very much in the beginning phase with my
> research and
> > > development efforts so all your inputs & feedback on this one are
> greatly
> > > appreciated.
> > >
> > > Thanks.
> > >
> > > -
> > > Sivaprasanna
> > >
>


Adding new data anonymization processor bundle

2018-06-20 Thread Sivaprasanna
With data becoming more critical and substantial to business development, and
new stringent regulations and laws being introduced (GDPR being a recent
example), I've been spending some time lately doing research on data
anonymization, and after some hefty thinking I finally decided to go ahead
with the creation of a new processor bundle that has processors like
'AnonymizeRecord' and 'DeanonymizeRecord' (not quite sure about the names
though). Following are my questions:

   - What do you guys think about these proposed processors?
   - If the processors are okay to be introduced, are they "standard" enough
   to get them added to our 'nifi-standard-bundles' module, or is it better
   to keep them separated, much like the AWS and Azure bundles, etc.?

Having said this, I'm very much in the beginning phase of my research and
development efforts, so all your inputs and feedback on this one are greatly
appreciated.

Thanks.

-
Sivaprasanna
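
To make the proposal above concrete, here is a minimal, hypothetical skeleton
of what an 'AnonymizeRecord'-style processor could look like against the
standard NiFi processor API. The class name, property, and placeholder logic
are assumptions for discussion, not an agreed design; a real implementation
would likely be Record-oriented and delegate masking to a library such as the
ARX framework mentioned above.

import java.util.Collections;
import java.util.List;
import java.util.Set;

import org.apache.nifi.components.PropertyDescriptor;
import org.apache.nifi.flowfile.FlowFile;
import org.apache.nifi.processor.AbstractProcessor;
import org.apache.nifi.processor.ProcessContext;
import org.apache.nifi.processor.ProcessSession;
import org.apache.nifi.processor.Relationship;
import org.apache.nifi.processor.exception.ProcessException;
import org.apache.nifi.processor.util.StandardValidators;

public class AnonymizeRecord extends AbstractProcessor {

    // Hypothetical property: which record fields to mask.
    static final PropertyDescriptor FIELDS = new PropertyDescriptor.Builder()
            .name("fields-to-anonymize")
            .displayName("Fields To Anonymize")
            .description("Comma-separated list of record fields to mask.")
            .required(true)
            .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
            .build();

    static final Relationship REL_SUCCESS = new Relationship.Builder()
            .name("success")
            .description("FlowFiles whose records were anonymized.")
            .build();

    @Override
    protected List<PropertyDescriptor> getSupportedPropertyDescriptors() {
        return Collections.singletonList(FIELDS);
    }

    @Override
    public Set<Relationship> getRelationships() {
        return Collections.singleton(REL_SUCCESS);
    }

    @Override
    public void onTrigger(final ProcessContext context, final ProcessSession session)
            throws ProcessException {
        FlowFile flowFile = session.get();
        if (flowFile == null) {
            return;
        }
        // Placeholder: a real version would read records with a RecordReader,
        // mask the configured fields, and write them back with a RecordSetWriter.
        session.transfer(flowFile, REL_SUCCESS);
    }
}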


Setting values as System properties

2018-06-19 Thread Sivaprasanna
Team,

As part of NIFI-5133, I started doing a bit of research on the Google Cloud
Pub/Sub service and its support for proxy configuration, and came to learn
that the service uses gRPC, so the proxy configuration is expected to be set
at the system-properties level. I know this approach is not good and believe
it should be avoided, but I wanted to know the community's thoughts on this.

Thanks,
Sivaprasanna
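
For readers unfamiliar with the concern: configuring a proxy for a gRPC-based
client typically means mutating JVM-wide state, which affects every other
component running in the same process. A minimal sketch of what that looks
like, assuming the standard JVM proxy property names (an assumption for
illustration; check the gRPC version in use for the exact keys it honors):

public class ProxySetup {
    public static void main(String[] args) {
        // JVM-wide proxy settings: every HTTP/gRPC client in this JVM sees
        // these, which is exactly why this approach is risky inside a
        // shared process like NiFi.
        System.setProperty("https.proxyHost", "proxy.example.com");
        System.setProperty("https.proxyPort", "8080");
    }
}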


Re: Unable to install Apachi nifi on my unix box

2018-06-12 Thread Sivaprasanna
What error do you get when you run it? Please share it over here. BTW,
which version of Apache NiFi are you using?

Thanks,
Sivaprasanna

On Tue, Jun 12, 2018 at 7:24 PM, Srinivasa Rao Potla <
srinivasarao.po...@infosys.com> wrote:

> Hi Team,
>
>
>
> I am unable to run the step below. I have downloaded NiFi as per the GitHub
> steps, but I am unable to start Apache NiFi with:
>
> ./bin/nifi.sh start
>
>
>
>
>
>
>
> Regards,
>
> Srinivasarao Potla
>
> +91 8886210
>
> www.infosys.com
>
>
>


Re: Why "ExecuteSQL" processor serving DELETE and UPDATE SQL queries.

2018-06-11 Thread Sivaprasanna
I understand that. I linked that Jira to initiate a discussion (so that the
team can pitch in their thoughts) on to have necessary changes done on
ExecuteSQL to enable support for both SELECT and DELETE operations and make
the necessary documentation changes, explaining that this processor
supports SELECT and DELETE.

For your case of restricting to accept "SELECT" statement alone:
IMHO, this sounds like a case specific requirement and it may not be
possible or seem logical to enforce this restriction for the whole user
base so you can have two things:

   - If you have authorization management tools like Apache Ranger, you can
   have the restriction by having policies that allows only certain people to
   do DELETE operations so even if someone using NiFi tries to delete and that
   person doesn't have the necessary privileges, it will fail. This is kinda
   complex solution and the changes are externalized to NiFi
   - The simple solution is to customize the ExecuteSQL to accept only
   SELECT statements.

Thanks,
Sivaprasanna

On Mon, Jun 11, 2018 at 12:19 PM, Brajendra Mishra <
brajendra_mis...@persistent.com> wrote:

> Hi Sivaprasanna,
>
> Thanks for the prompt response.
> The mentioned Jira defect (https://issues.apache.org/jira/browse/NIFI-4843)
> talks about supporting delete queries through the ExecuteSQL processor, but
> in our case it works fine for 'Delete' as well as 'Update' queries.
>
> The documentation mentions only catering for 'Select' SQL queries:
>
> "SQL select query: The SQL select query to execute. The query can be
> empty, a constant value, or built from attributes using Expression
> Language. If this property is specified, it will be used regardless of the
> content of incoming flowfiles. If this property is empty, the content of
> the incoming flow file is expected to contain a valid SQL select query, to
> be issued by the processor to the database. Note that Expression Language
> is not evaluated for flow file contents.
> Supports Expression Language: true"
>
> Is there a way to restrict users to only select statements/queries
> through expression language?
>
> Brajendra Mishra
> Persistent Systems Ltd.
>
> -Original Message-
> From: Sivaprasanna 
> Sent: Monday, June 11, 2018 11:55 AM
> To: dev@nifi.apache.org
> Subject: Re: Why "ExecuteSQL" processor serving DELETE and UPDATE SQL
> queries.
>
> Brajendra,
>
> As you have said, even though the documentation for ExecuteSQL mentions,
> "It is meant to execute SELECT", it ultimately accepts and executes DML
> commands. Looking at the code, there is no way of restricting it to accept
> & execute SELECT queries only. There is this Jira NIFI-4843 <
> https://issues.apache.org/jira/browse/NIFI-4843> which mentions something
> similar. One thing we could do: make ExecuteSQL support two types of
> operation, "SELECT" and "DELETE", expose the operation as a property, and
> in the code do the check and perform the execution. Thoughts?
>
> Thanks,
> Sivaprasanna
>
> On Mon, Jun 11, 2018 at 11:43 AM, Brajendra Mishra <
> brajendra_mis...@persistent.com> wrote:
>
> > Hi Team,
> >
> >
> >
> > We are currently using ExecuteSQL processor in our flow.
> >
> > As per the documentation, the ExecuteSQL processor should only work
> > with SELECT queries, but it is working for other SQL commands as well,
> > such as DELETE and UPDATE queries.
> >
> >
> >
> > In our current flow implementation we want to restrict users to executing
> > only SELECT queries. How can we achieve this requirement?
> >
> >
> >
> > For your reference, here I have attached screen prints of ‘ExecuteSQL’
> > processor:
> >
> >
> >
> > Brajendra Mishra
> >
> > Persistent Systems Ltd.
> >
> >


Re: Why "ExecuteSQL" processor serving DELETE and UPDATE SQL queries.

2018-06-11 Thread Sivaprasanna
Brajendra,

As you have said, even though the documentation for ExecuteSQL mentions,
"It is meant to execute SELECT", it ultimately accepts and executes DML
commands. Looking at the code, there is no way of restricting it to accept
& execute SELECT queries only. There is this Jira NIFI-4843
<https://issues.apache.org/jira/browse/NIFI-4843> which mentions something
similar. One thing we could do: make ExecuteSQL support two types of
operation, "SELECT" and "DELETE", expose the operation as a property, and
in the code do the check and perform the execution. Thoughts?

Thanks,
Sivaprasanna

On Mon, Jun 11, 2018 at 11:43 AM, Brajendra Mishra <
brajendra_mis...@persistent.com> wrote:

> Hi Team,
>
>
>
> We are currently using ExecuteSQL processor in our flow.
>
> As per the documentation, the ExecuteSQL processor should only work with
> SELECT queries, but it is working for other SQL commands as well, such as
> DELETE and UPDATE queries.
>
>
>
> In our current flow implementation we want to restrict users to executing
> only SELECT queries. How can we achieve this requirement?
>
>
>
> For your reference, here I have attached screen prints of ‘ExecuteSQL’
> processor:
>
>
>
> Brajendra Mishra
>
> Persistent Systems Ltd.
>
>


Re: Should the elastic search client service impl get renamed before 1.7?

2018-06-10 Thread Sivaprasanna
I feel that if the upgrade from the 5.x to the 6.x client doesn't introduce
any breaking changes, we can continue with the name we have now.

-
Sivaprasanna

On Sun, Jun 10, 2018 at 5:16 PM, Mike Thomsen 
wrote:

> The current implementation uses the last stable release of the 5.X client
> from Elastic, and 6.X is already mature so we'll need to be able to have
> room to create a new implementation copy that uses that client if there are
> things we have to change between them. So does it make sense to throw in a
> new ticket to rename the service to something that indicates that this
> implementation is officially for 5.X? As of 1.6 I think only one processor,
> JsonQueryElasticSearch, uses it so not many uses would likely be impacted
> yet.
>
> Thanks,
>
> Mike
>


Re: Java 10

2018-06-07 Thread Sivaprasanna
Nope. Not yet. AFAIK, there is a Jira to support Java 9.

On Thu, Jun 7, 2018 at 7:09 PM, kirilzilla  wrote:

> does nifi support java 10 ? or even java 9 ?
>
>
>
> --
> Sent from: http://apache-nifi-developer-list.39713.n7.nabble.com/
>


Question on Docker module

2018-06-07 Thread Sivaprasanna
Team,

In the project's nifi-docker
<https://github.com/apache/nifi/tree/master/nifi-docker> module, we have
two things: dockerhub and dockermaven. Are they one and the same, or are
they there for different purposes? I have to warn you that I don't know much
about the 'dockerfile-maven-plugin', so some light on this might help me
understand the differences (if there are any).

Thanks,
Sivaprasanna


Re: [ANNOUNCE] New Apache NiFi Committer Sivaprasanna Sethuraman

2018-06-06 Thread Sivaprasanna
Thanks everyone :)

On Thu, Jun 7, 2018 at 2:01 AM, Jorge Machado  wrote:

> Congrats !
>
> Jorge
>
>
>
>
>
> > On 6 Jun 2018, at 18:44, Otto Fowler  wrote:
> >
> > Congratulations!
> >
> >
> > On June 5, 2018 at 10:09:28, Tony Kurc (tk...@apache.org) wrote:
> >
> > On behalf of the Apache NiFI PMC, I am very pleased to announce that
> > Sivaprasanna has accepted the PMC's invitation to become a committer on
> the
> > Apache NiFi project. We greatly appreciate all of Sivaprasanna's hard
> work
> > and generous contributions to the project. We look forward to continued
> > involvement in the project.
> >
> > Sivaprasanna has been working with the community on the mailing lists,
> and
> > has a big mix of code and feature contributions to include features and
> > improvements to cloud service integrations like Azure, AWS, and Google
> > Cloud.
> >
> > Welcome and congratulations!
>
>


Re: [ANNOUNCE] New NiFi PMC member Mike Thomsen

2018-06-06 Thread Sivaprasanna
Congratulations, Mike! :)

On Thu, Jun 7, 2018 at 4:54 AM, Otto Fowler  wrote:

> Congratulations!
>
>
> On June 6, 2018 at 18:22:32, Tony Kurc (tk...@apache.org) wrote:
>
> NiFi community,
> On behalf of the Apache NiFi PMC, I am pleased to announce that Mike
> Thomsen has accepted the PMC's invitation to join the Apache NiFi PMC. We
> greatly appreciate all of Mike's hard work and generous contributions to
> the project. We look forward to continued involvement in the project.
>
> Mike was announced as a committer a few months ago, and in that time I'm
> sure many have noticed Mike's increased involvement in the areas we look
> for in PMC members - building the community by helping people with issues,
> ensuring the project follows ASF best practices, participation in
> discussions, and helping others contribute (in addition to contributing his
> own code and documentation). He's been a great advocate of the project and
> of the Apache Way.
>
> Congratulations and welcome, Mike!
>
> Tony
>


Re: Commit to nifi-site

2018-06-05 Thread Sivaprasanna
To all,

If anyone has been given committer privileges but hasn't had their
name added to the NiFi project's people page, let me know if you want to
have your name added. The setup process is somewhat time consuming, and if
you don't want to go through the hassle of setting up a whole lot of things
which you may not actually be using, I can help, since I set them up
recently. (A rough sketch of the build-and-deploy steps is below.)

Thanks,
Sivaprasanna
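
For reference, a rough sketch of the deploy steps (the task names are
assumptions taken from the README Aldrin linked below; verify them there):

npm install      # install the site's build dependencies
grunt            # build the site locally
grunt deploy     # publish the generated site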

On Tue, Jun 5, 2018 at 11:07 PM, Sivaprasanna 
wrote:

> Thank you so much, Aldrin. I was able to deploy it. Having one's name on
> an Apache project's site is an honor!
>
> Thanks once again.
>
> -
> Sivaprasanna
>
> On Tue, Jun 5, 2018 at 10:00 PM, Sivaprasanna 
> wrote:
>
>> Aldrin,
>>
>> Thanks for the quick response. I'm going to try that now. I'll update
>> this thread if I'm stuck somewhere.
>>
>> Thanks,
>> Sivaprasanna
>>
>> On Tue, Jun 5, 2018 at 9:41 PM, Aldrin Piri  wrote:
>>
>>> Hi Sivaprasanna,
>>>
>>> You will need to perform a deploy via grunt as outlined here:
>>> https://github.com/apache/nifi-site#grunt-tasks
>>>
>>> You can configure your environment via
>>> https://github.com/apache/nifi-site#setting-up-build-environment.
>>>
>>> Let us know if you hit any issues.
>>>
>>> On Tue, Jun 5, 2018 at 12:04 PM, Sivaprasanna >> >
>>> wrote:
>>>
>>> > Hi
>>> >
>>> > I cloned http://git-wip-us.apache.org/repos/asf/nifi-site
>>> > <http://git-wip-us.apache.org/repos/asf/nifi-site/repo> and made a
>>> change
>>> > and committed it (#8bd32db0
>>> > <https://git1-us-west.apache.org/repos/asf?p=nifi-site.git;
>>> > a=commitdiff;h=8bd32db0>).
>>> > Is that the correct way to do it? The reason why I'm asking this is
>>> because
>>> > I see this (https://svn.apache.org/viewvc/nifi/site/trunk/) doesn't
>>> > include
>>> > the commit I made. I'm sorry if I did it the wrong way. Appreciate your
>>> > inputs.
>>> >
>>> > -
>>> > Sivaprasanna
>>> >
>>>
>>
>>
>


Re: Commit to nifi-site

2018-06-05 Thread Sivaprasanna
Thank you so much, Aldrin. I was able to deploy it. Having one's name on an
Apache project's site is an honor!

Thanks once again.

-
Sivaprasanna

On Tue, Jun 5, 2018 at 10:00 PM, Sivaprasanna 
wrote:

> Aldrin,
>
> Thanks for the quick response. I'm going to try that now. I'll update this
> thread if I'm stuck somewhere.
>
> Thanks,
> Sivaprasanna
>
> On Tue, Jun 5, 2018 at 9:41 PM, Aldrin Piri  wrote:
>
>> Hi Sivaprasanna,
>>
>> You will need to perform a deploy via grunt as outlined here:
>> https://github.com/apache/nifi-site#grunt-tasks
>>
>> You can configure your environment via
>> https://github.com/apache/nifi-site#setting-up-build-environment.
>>
>> Let us know if you hit any issues.
>>
>> On Tue, Jun 5, 2018 at 12:04 PM, Sivaprasanna 
>> wrote:
>>
>> > Hi
>> >
>> > I cloned http://git-wip-us.apache.org/repos/asf/nifi-site
>> > <http://git-wip-us.apache.org/repos/asf/nifi-site/repo> and made a
>> change
>> > and committed it (#8bd32db0
>> > <https://git1-us-west.apache.org/repos/asf?p=nifi-site.git;
>> > a=commitdiff;h=8bd32db0>).
>> > Is that the correct way to do it? The reason why I'm asking this is
>> because
>> > I see this (https://svn.apache.org/viewvc/nifi/site/trunk/) doesn't
>> > include
>> > the commit I made. I'm sorry if I did it the wrong way. Appreciate your
>> > inputs.
>> >
>> > -
>> > Sivaprasanna
>> >
>>
>
>


Re: Commit to nifi-site

2018-06-05 Thread Sivaprasanna
Aldrin,

Thanks for the quick response. I'm going to try that now. I'll update this
thread if I'm stuck somewhere.

Thanks,
Sivaprasanna

On Tue, Jun 5, 2018 at 9:41 PM, Aldrin Piri  wrote:

> Hi Sivaprasanna,
>
> You will need to perform a deploy via grunt as outlined here:
> https://github.com/apache/nifi-site#grunt-tasks
>
> You can configure your environment via
> https://github.com/apache/nifi-site#setting-up-build-environment.
>
> Let us know if you hit any issues.
>
> On Tue, Jun 5, 2018 at 12:04 PM, Sivaprasanna 
> wrote:
>
> > Hi
> >
> > I cloned http://git-wip-us.apache.org/repos/asf/nifi-site
> > <http://git-wip-us.apache.org/repos/asf/nifi-site/repo> and made a
> change
> > and committed it (#8bd32db0
> > <https://git1-us-west.apache.org/repos/asf?p=nifi-site.git;
> > a=commitdiff;h=8bd32db0>).
> > Is that the correct way to do it? The reason why I'm asking this is
> because
> > I see this (https://svn.apache.org/viewvc/nifi/site/trunk/) doesn't
> > include
> > the commit I made. I'm sorry if I did it the wrong way. Appreciate your
> > inputs.
> >
> > -
> > Sivaprasanna
> >
>


Commit to nifi-site

2018-06-05 Thread Sivaprasanna
Hi

I cloned http://git-wip-us.apache.org/repos/asf/nifi-site
<http://git-wip-us.apache.org/repos/asf/nifi-site/repo> and made a change
and committed it (#8bd32db0
<https://git1-us-west.apache.org/repos/asf?p=nifi-site.git;a=commitdiff;h=8bd32db0>).
Is that the correct way to do it? The reason why I'm asking this is because
I see this (https://svn.apache.org/viewvc/nifi/site/trunk/) doesn't include
the commit I made. I'm sorry if I did it the wrong way. Appreciate your
inputs.

-
Sivaprasanna


Re: KerberosProperties.validatePrincipalAndKeytab Error ?

2018-06-04 Thread Sivaprasanna
Jorge,

Both 'Kerberos Principal' and 'Kerberos Keytab' support NiFi expression
language, so ${principal} and ${keytab} are valid here. Can you check if the
property "nifi.kerberos.krb5.file" is set in the nifi.properties file? It
looks like it has to be set, according to the description of those
properties, for example:
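
# nifi.properties (the path is illustrative; point it at your krb5 config)
nifi.kerberos.krb5.file=/etc/krb5.conf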

-
Sivaprasanna

On Mon, Jun 4, 2018 at 1:27 PM, Jorge Machado  wrote:

> Hi Guys,
>
> I’m facing the issue that I cannot start the DeleteHDFS with the error:
> "Kerberos Principal must be provided when using a secure configuration”
>
> I’m able to reproduce this with this test:
>
> @Test
> public void testKerberosOptionsWithCredentialServices() throws Exception {
> SimpleHadoopProcessor processor = new 
> SimpleHadoopProcessor(kerberosProperties);
> TestRunner runner = TestRunners.newTestRunner(processor);
>
> // initialize the runner with EL for the kerberos properties
> 
> runner.setProperty(AbstractHadoopProcessor.HADOOP_CONFIGURATION_RESOURCES, 
> "${variableHadoopConfigResources}");
> runner.setProperty(kerberosProperties.getKerberosPrincipal(), 
> "${variablePrincipal}");
> runner.setProperty(kerberosProperties.getKerberosKeytab(), 
> "${variableKeytab}");
>
> // add variables for all the kerberos properties except for the keytab
> runner.setVariable("variableHadoopConfigResources", 
> "src/test/resources/core-site-security.xml");
> runner.assertValid();
> }
>
> In our case the ${principal} and ${keytab} is coming as a attribute on the
> incomming the Flowfile, the problem is that this validation of the
> attribute is happening before.
> Should this work like this? In all other places if we are using a variable
> it can be evaluate at run time...
>
> Jorge Machado
>
>
>
>
>
>


Re: Put data to Elastic with static settings or index template

2018-05-22 Thread Sivaprasanna
Bobby,

If I'm correct, this setting is done during index creation, and the
PutElasticsearch processors don't create indices. They primarily work on
the assumption that the configured index already exists (please correct me
if I'm wrong). If that's the case, there is no need to do anything on the
NiFi side. Rather, while creating the index through the ES APIs, you set the
"static" settings. Hope that helps.

-
Sivaprasanna

On Tue, May 22, 2018 at 8:21 AM, Bobby <bobbyhars...@gmail.com> wrote:

> Hi, when inserting data to elastic using nifi's processor (putElastic), i
> need to apply static setting for the index..like mentioned in
> https://www.elastic.co/guide/en/elasticsearch/reference/
> current/index-modules.html
> <https://www.elastic.co/guide/en/elasticsearch/reference/
> current/index-modules.html>
> , this must be applied in index creation..
>
> With the processor, will it be possible to use this utility? I need to do
> this in order to save the space...or in other word, changing the
> compression
> type...
>
> As for last resort, i might need to write custom processor extended from
> putElastic
>
>
> Any suggestion?
>
> Thank you
>
>
>
> --
> Sent from: http://apache-nifi-developer-list.39713.n7.nabble.com/
>


Re: NIFI-5133: Guidance & help with tackling dependency version issues

2018-05-05 Thread Sivaprasanna
I have created a Jira - NIFI-5156
<https://issues.apache.org/jira/browse/NIFI-5156> and made the necessary
changes to the current nifi-gcp-bundle. I have tested the components - GCS
Processors & GCP Credentials Service. They all work as expected. Anyone
want to give it a look and review the changes? PR #2680
<https://github.com/apache/nifi/pull/2680>

-
Sivaprasanna

On Thu, May 3, 2018 at 9:48 PM, Sivaprasanna <sivaprasanna...@gmail.com>
wrote:

> I'm interested although it might consume some time since I don't know how
> big it is going to be. And I suppose it is better to capture it in a
> separate Jira. The summary could be upgrade and refactor GCP processor
> codebase or something like that. We could then make NIFI-5133 a dependent
> of the new Jira. Thoughts?
>
> Meanwhile, anyone who has any other recommendations, feel free to share. :)
>
> -
> Sivaprasanna
>
> On Thu, May 3, 2018 at 9:43 PM, Joe Witt <joe.w...@gmail.com> wrote:
>
>> Sivaprasanna
>>
>> Ok makes sense.  Are you in a position of interest/expertise/time to
>> make those changes as well?
>>
>> Thanks
>>
>> On Thu, May 3, 2018 at 12:11 PM, Sivaprasanna <sivaprasanna...@gmail.com>
>> wrote:
>> > Hi. As I had mentioned, upgrading to the latest version of the library
>> is
>> > not as simple as I thought. Google Cloud team introduced many breaking
>> > changes. Many of the APIs (classes & methods) have been
>> > scrapped/replaced/modified/refactored/renamed.
>> >
>> > In short, a simple change of version may demand changes on the
>> processor's
>> > code, especially on the AbstractProcessors (AbstractGCS, AbstractGCP)
>> which
>> > may pose backward compatibility issues, I'm afraid.
>> >
>> > Thanks,
>> > Sivaprasanna
>> >
>> > On Thu, May 3, 2018 at 9:26 PM, Joe Witt <joe.w...@gmail.com> wrote:
>> >
>> >> Sivaprasanna
>> >>
>> >> I might not completely follow but is there a 3rd option to upgrade to
>> >> a more recent library and solve the use of the proper jars
>> >> problem(smaller nar)?
>> >>
>> >> Thanks
>> >>
>> >> On Thu, May 3, 2018 at 11:51 AM, Sivaprasanna <
>> sivaprasanna...@gmail.com>
>> >> wrote:
>> >> > Hi
>> >> >
>> >> > I've started the initial works on implementing Google Cloud Pub/Sub
>> >> > processors. The associated Jira ID is NIFI-5133
>> >> > <https://issues.apache.org/jira/browse/NIFI-5133>. This will go to
>> the
>> >> > existing GCP bundle which currently has only the storage processors.
>> Upon
>> >> > some inspection, I noticed the following:
>> >> >
>> >> >- As of now, the bundle uses google-cloud
>> >> ><http://mvnrepository.com/artifact/com.google.cloud/google-cloud>
>> as
>> >> its
>> >> >dependency which is like uber/fat jar that contains most of the
>> Google
>> >> >Cloud's client library SDKs including storage, bigquery, pubsub,
>> etc.
>> >> The
>> >> >main point is it is using a very older version (0.8.0)
>> >> >- I thought of using google-cloud-bom
>> >> ><http://mvnrepository.com/artifact/com.google.cloud/google-
>> cloud-bom>
>> >> in
>> >> >the bundle's POM
>> >> ><https://github.com/apache/nifi/blob/master/nifi-nar-
>> >> bundles/nifi-gcp-bundle/pom.xml>
>> >> >and then use the required artifacts in the processor's POM
>> >> ><https://github.com/apache/nifi/blob/master/nifi-nar-
>> >> bundles/nifi-gcp-bundle/nifi-gcp-processors/pom.xml>.
>> >> >The benefit is, it will help us reduce the overall size of the
>> NAR.
>> >> >
>> >> > When I tried to do #2, I realized this is not a simple version change
>> >> but a
>> >> > change that brings backward compatibility issues. Ex: Some APIs used
>> in
>> >> the
>> >> > older version i.e. 0.8.0 have now been entirely scrapped and moved to
>> >> > different library. We can do either two things:
>> >> >
>> >> >1. User the Pub/Sub APIs from the older version but the problem is
>> >> it's
>> >> >very old and the problem of upgrading would soon catchup with us.
>> >> >2. Or we can continue to use the older version of
>> google-cloud-storage
>> >> >only for the storage processors and introduce the #2 mentioned
>> above
>> >> but I
>> >> >don't think then the new processors can't properly extend the
>> existing
>> >> >AbstractGCPProcessor
>> >> ><https://github.com/apache/nifi/blob/master/nifi-nar-
>> >> bundles/nifi-gcp-bundle/nifi-gcp-processors/src/main/java/
>> >> org/apache/nifi/processors/gcp/AbstractGCPProcessor.java>.
>> >> >
>> >> >
>> >> > A quick glance on the processor code and the POM would help you
>> >> understand
>> >> > my concern.
>> >> >
>> >> > I'm stuck up here so any help & guidance in this regard is very much
>> >> > appreciated. :)
>> >> >
>> >> > Thanks,
>> >> >
>> >> > Sivaprasanna
>> >>
>>
>
>


Re: GetMongoDB : How to pass parameters as input to GetMongoDB processor

2018-05-04 Thread Sivaprasanna
As Bryan mentioned in the actual ‘GetSplunk’ thread, it is not available
but it certainly makes sense to have that feature. If you’re interested in
having that, please raise a Jira at https://issues.apache.org/jira

-
Sivaprasanna

On Fri, 4 May 2018 at 12:56 PM, Brajendra Mishra <
brajendra_mis...@persistent.com> wrote:

> Hi Mike,
>
> Thanks a lot for your valuable inputs.
> We tried GetMongoDB Template which you shared in previous mail with new
> version of Apache NiFi (NiFi-1.6) , Its working fine and we are able to get
> mongoDB data in desired format.
>
> One query: Is such input flow functionality available with the 'GetSplunk'
> processor too in NiFi 1.6.0? If yes, please share a sample template; if
> not, when can we expect such an implementation for the 'GetSplunk'
> processor?
>
> Brajendra Mishra
> Persistent Systems Ltd.
>
> -Original Message-
> From: Mike Thomsen <mikerthom...@gmail.com>
> Sent: Thursday, May 03, 2018 6:56 PM
> To: dev@nifi.apache.org
> Subject: Re: GetMongoDB : How to pass parameters as input to GetMongoDB
> processor
>
> Brajendra,
>
> I would recommend an update to 1.6.0. It'll make your life a lot easier on
> this. I did that patch to GetMongo because I had a client that had an
> explosion of GetMongos due to that inflexibility. With that said, *be aware
> of this bug* in 1.6.0 w/ PutMongo if you use it and upgrade. It is fixed in
> 1.7.0 (still in development):
>
> Migrating from 1.5.0 to 1.6.0
>
>- PutMongo can fail in insert mode. Will be fixed in next release. In
>the mean time you can set query keys for insert even though they'll be
>ignored it should workaround the validation bug.
>
>
> What it means is there is a validator function that is broken in PutMongo
> when one is using the "insert mode" instead of "update mode." You can do a
> work around by putting a dummy value in the "query key" field to make it
> happy.
>
> On Thu, May 3, 2018 at 8:26 AM Pierre Villard <pierre.villard...@gmail.com
> >
> wrote:
>
> > Hi,
> >
> > As Mike said: incoming relationship has been added for NiFi 1.6.0.
> > https://issues.apache.org/jira/browse/NIFI-4827
> >
> > Pierre
> >
> > 2018-05-03 14:09 GMT+02:00 Brajendra Mishra <
> > brajendra_mis...@persistent.com
> > >:
> >
> > > Hi Mike,
> > >
> > > I did attach the same in my previous mail. Well reattaching it again.
> > > Well error is at GetMongoDB Processor and error text is : "Upstream
> > > Connections is invalid because Processor does not allow upstream
> > > connections but currently has 1"
> > >
> > > Brajendra Mishra
> > > Persistent Systems Ltd.
> > >
> > > -Original Message-
> > > From: Mike Thomsen <mikerthom...@gmail.com>
> > > Sent: Thursday, May 03, 2018 5:20 PM
> > > To: dev@nifi.apache.org
> > > Subject: Re: GetMongoDB : How to pass parameters as input to
> > > GetMongoDB processor
> > >
> > > Brajendra,
> > >
> > > Looks like the image didn't make it.
> > >
> > > On Wed, May 2, 2018 at 11:36 PM Brajendra Mishra <
> > > brajendra_mis...@persistent.com> wrote:
> > >
> > > > Hi Mike,
> > > >
> > > > Thanks for responding.
> > > >
> > > > Here, I have attached missing image attachment.
> > > >
> > > >
> > > >
> > > > Brajendra Mishra
> > > >
> > > > Persistent Systems Ltd.
> > > >
> > > >
> > > >
> > > > *From:* Mike Thomsen <mikerthom...@gmail.com>
> > > > *Sent:* Wednesday, May 02, 2018 6:24 PM
> > > >
> > > >
> > > > *To:* dev@nifi.apache.org
> > > > *Subject:* Re: GetMongoDB : How to pass parameters as input to
> > > > GetMongoDB processor
> > > >
> > > >
> > > >
> > > > That might require 1.6.0. Also, your image didn't come through in
> > > > your response to Sivaprasanna so resend that too.
> > > >
> > > > On Wed, May 2, 2018 at 8:37 AM Brajendra Mishra <
> > > > brajendra_mis...@persistent.com> wrote:
> > > >
> > > > Hi Mike,
> > > >
> > > > Thanks a lot for responding.
> > > >
> > > > On your statement
> > > > "That is its new default behavior if you leave the query field
> > > > blank and have an incoming connection from another processor. That
> >

Re: NIFI-5133: Guidance & help with tackling dependency version issues

2018-05-03 Thread Sivaprasanna
I'm interested, although it might consume some time since I don't know how
big it is going to be. And I suppose it is better to capture it in a
separate Jira. The summary could be "upgrade and refactor the GCP processor
codebase" or something like that. We could then make NIFI-5133 a dependent
of the new Jira. Thoughts?

Meanwhile, anyone who has any other recommendations, feel free to share. :)

-
Sivaprasanna

On Thu, May 3, 2018 at 9:43 PM, Joe Witt <joe.w...@gmail.com> wrote:

> Sivaprasanna
>
> Ok makes sense.  Are you in a position of interest/expertise/time to
> make those changes as well?
>
> Thanks
>
> On Thu, May 3, 2018 at 12:11 PM, Sivaprasanna <sivaprasanna...@gmail.com>
> wrote:
> > Hi. As I had mentioned, upgrading to the latest version of the library is
> > not as simple as I thought. Google Cloud team introduced many breaking
> > changes. Many of the APIs (classes & methods) have been
> > scrapped/replaced/modified/refactored/renamed.
> >
> > In short, a simple change of version may demand changes on the
> processor's
> > code, especially on the AbstractProcessors (AbstractGCS, AbstractGCP)
> which
> > may pose backward compatibility issues, I'm afraid.
> >
> > Thanks,
> > Sivaprasanna
> >
> > On Thu, May 3, 2018 at 9:26 PM, Joe Witt <joe.w...@gmail.com> wrote:
> >
> >> Sivaprasanna
> >>
> >> I might not completely follow but is there a 3rd option to upgrade to
> >> a more recent library and solve the use of the proper jars
> >> problem(smaller nar)?
> >>
> >> Thanks
> >>
> >> On Thu, May 3, 2018 at 11:51 AM, Sivaprasanna <
> sivaprasanna...@gmail.com>
> >> wrote:
> >> > Hi
> >> >
> >> > I've started the initial works on implementing Google Cloud Pub/Sub
> >> > processors. The associated Jira ID is NIFI-5133
> >> > <https://issues.apache.org/jira/browse/NIFI-5133>. This will go to
> the
> >> > existing GCP bundle which currently has only the storage processors.
> Upon
> >> > some inspection, I noticed the following:
> >> >
> >> >- As of now, the bundle uses google-cloud
> >> ><http://mvnrepository.com/artifact/com.google.cloud/google-cloud>
> as
> >> its
> >> >dependency which is like uber/fat jar that contains most of the
> Google
> >> >Cloud's client library SDKs including storage, bigquery, pubsub,
> etc.
> >> The
> >> >main point is it is using a very older version (0.8.0)
> >> >- I thought of using google-cloud-bom
> >> ><http://mvnrepository.com/artifact/com.google.cloud/
> google-cloud-bom>
> >> in
> >> >the bundle's POM
> >> ><https://github.com/apache/nifi/blob/master/nifi-nar-
> >> bundles/nifi-gcp-bundle/pom.xml>
> >> >and then use the required artifacts in the processor's POM
> >> ><https://github.com/apache/nifi/blob/master/nifi-nar-
> >> bundles/nifi-gcp-bundle/nifi-gcp-processors/pom.xml>.
> >> >The benefit is, it will help us reduce the overall size of the NAR.
> >> >
> >> > When I tried to do #2, I realized this is not a simple version change
> >> but a
> >> > change that brings backward compatibility issues. Ex: Some APIs used
> in
> >> the
> >> > older version i.e. 0.8.0 have now been entirely scrapped and moved to
> >> > different library. We can do either two things:
> >> >
> >> >1. User the Pub/Sub APIs from the older version but the problem is
> >> it's
> >> >very old and the problem of upgrading would soon catchup with us.
> >> >2. Or we can continue to use the older version of
> google-cloud-storage
> >> >only for the storage processors and introduce the #2 mentioned
> above
> >> but I
> >> >don't think then the new processors can't properly extend the
> existing
> >> >AbstractGCPProcessor
> >> ><https://github.com/apache/nifi/blob/master/nifi-nar-
> >> bundles/nifi-gcp-bundle/nifi-gcp-processors/src/main/java/
> >> org/apache/nifi/processors/gcp/AbstractGCPProcessor.java>.
> >> >
> >> >
> >> > A quick glance on the processor code and the POM would help you
> >> understand
> >> > my concern.
> >> >
> >> > I'm stuck up here so any help & guidance in this regard is very much
> >> > appreciated. :)
> >> >
> >> > Thanks,
> >> >
> >> > Sivaprasanna
> >>
>


Re: NIFI-5133: Guidance & help with tackling dependency version issues

2018-05-03 Thread Sivaprasanna
Hi. As I had mentioned, upgrading to the latest version of the library is
not as simple as I thought. Google Cloud team introduced many breaking
changes. Many of the APIs (classes & methods) have been
scrapped/replaced/modified/refactored/renamed.

In short, a simple change of version may demand changes on the processor's
code, especially on the AbstractProcessors (AbstractGCS, AbstractGCP) which
may pose backward compatibility issues, I'm afraid.

Thanks,
Sivaprasanna

On Thu, May 3, 2018 at 9:26 PM, Joe Witt <joe.w...@gmail.com> wrote:

> Sivaprasanna
>
> I might not completely follow but is there a 3rd option to upgrade to
> a more recent library and solve the use of the proper jars
> problem(smaller nar)?
>
> Thanks
>
> On Thu, May 3, 2018 at 11:51 AM, Sivaprasanna <sivaprasanna...@gmail.com>
> wrote:
> > Hi
> >
> > I've started the initial works on implementing Google Cloud Pub/Sub
> > processors. The associated Jira ID is NIFI-5133
> > <https://issues.apache.org/jira/browse/NIFI-5133>. This will go to the
> > existing GCP bundle which currently has only the storage processors. Upon
> > some inspection, I noticed the following:
> >
> >- As of now, the bundle uses google-cloud
> ><http://mvnrepository.com/artifact/com.google.cloud/google-cloud> as
> its
> >dependency which is like uber/fat jar that contains most of the Google
> >Cloud's client library SDKs including storage, bigquery, pubsub, etc.
> The
> >main point is it is using a very older version (0.8.0)
> >- I thought of using google-cloud-bom
> ><http://mvnrepository.com/artifact/com.google.cloud/google-cloud-bom>
> in
> >the bundle's POM
> ><https://github.com/apache/nifi/blob/master/nifi-nar-
> bundles/nifi-gcp-bundle/pom.xml>
> >and then use the required artifacts in the processor's POM
> ><https://github.com/apache/nifi/blob/master/nifi-nar-
> bundles/nifi-gcp-bundle/nifi-gcp-processors/pom.xml>.
> >The benefit is, it will help us reduce the overall size of the NAR.
> >
> > When I tried to do #2, I realized this is not a simple version change
> but a
> > change that brings backward compatibility issues. Ex: Some APIs used in
> the
> > older version i.e. 0.8.0 have now been entirely scrapped and moved to
> > different library. We can do either two things:
> >
> >1. User the Pub/Sub APIs from the older version but the problem is
> it's
> >very old and the problem of upgrading would soon catchup with us.
> >2. Or we can continue to use the older version of google-cloud-storage
> >only for the storage processors and introduce the #2 mentioned above
> but I
> >don't think then the new processors can't properly extend the existing
> >AbstractGCPProcessor
> ><https://github.com/apache/nifi/blob/master/nifi-nar-
> bundles/nifi-gcp-bundle/nifi-gcp-processors/src/main/java/
> org/apache/nifi/processors/gcp/AbstractGCPProcessor.java>.
> >
> >
> > A quick glance on the processor code and the POM would help you
> understand
> > my concern.
> >
> > I'm stuck up here so any help & guidance in this regard is very much
> > appreciated. :)
> >
> > Thanks,
> >
> > Sivaprasanna
>


NIFI-5133: Guidance & help with tackling dependency version issues

2018-05-03 Thread Sivaprasanna
Hi

I've started the initial works on implementing Google Cloud Pub/Sub
processors. The associated Jira ID is NIFI-5133
<https://issues.apache.org/jira/browse/NIFI-5133>. This will go to the
existing GCP bundle which currently has only the storage processors. Upon
some inspection, I noticed the following:

   - As of now, the bundle uses google-cloud
   <http://mvnrepository.com/artifact/com.google.cloud/google-cloud> as its
   dependency, which is an uber/fat JAR that contains most of the Google
   Cloud client library SDKs, including storage, bigquery, pubsub, etc. The
   main point is that it is using a very old version (0.8.0).
   - I thought of using google-cloud-bom
   <http://mvnrepository.com/artifact/com.google.cloud/google-cloud-bom> in
   the bundle's POM
   <https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-gcp-bundle/pom.xml>
   and then using only the required artifacts in the processors' POM
   <https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-gcp-bundle/nifi-gcp-processors/pom.xml>
   (a sketch follows this list). The benefit is that it will help us reduce
   the overall size of the NAR.
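
A minimal sketch of the BOM import described in the second bullet (the
version shown is illustrative):

<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>com.google.cloud</groupId>
            <artifactId>google-cloud-bom</artifactId>
            <version>0.47.0-alpha</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>

Individual artifacts such as google-cloud-storage could then be declared in
the processors' POM without a version.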

When I tried to do #2, I realized this is not a simple version change but a
change that brings backward compatibility issues. Ex: some APIs used in the
older version, i.e. 0.8.0, have now been entirely scrapped and moved to a
different library. We can do one of two things:

   1. Use the Pub/Sub APIs from the older version, but the problem is it's
   very old and the problem of upgrading would soon catch up with us.
   2. Or we can continue to use the older version of google-cloud-storage
   only for the storage processors and introduce the #2 mentioned above, but
   then I don't think the new processors can properly extend the existing
   AbstractGCPProcessor
   
<https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-gcp-bundle/nifi-gcp-processors/src/main/java/org/apache/nifi/processors/gcp/AbstractGCPProcessor.java>.


A quick glance at the processor code and the POM would help you understand
my concern.

I'm stuck here, so any help & guidance in this regard is very much
appreciated. :)

Thanks,

Sivaprasanna


Re: GetMongoDB : How to pass parameters as input to GetMongoDB processor

2018-05-02 Thread Sivaprasanna
Since I'm not so sure about your exact use case, I have just created a
rough template based on the simple example flow that I had posted earlier
which is GenerateFlowfile -> UpdateAttribute -> GetMongo. I have attached
the template here.

-
Sivaprasanna

On Wed, May 2, 2018 at 2:55 PM, Brajendra Mishra <
brajendra_mis...@persistent.com> wrote:

> Hi Sivaprasanna,
>
> Could you please provide me the sample template for the same, where I can
> pass parameters (and get those parameters' value to process further) to
> GetMongoDB processor?
> It would be a great help for us.
>
> Brajendra Mishra
> Persistent Systems Ltd.
>
> -----Original Message-
> From: Sivaprasanna <sivaprasanna...@gmail.com>
> Sent: Wednesday, May 02, 2018 2:28 PM
> To: dev@nifi.apache.org
> Subject: Re: GetMongoDB : How to pass parameters as input to GetMongoDB
> processor
>
> Hi.
>
> GetMongo can take input. So technically you can use a processor before and
> then connect it to GetMongo.
>
> A simple example :
> GenerateFlowfile -> UpdateAttribute -> GetMongo
>
> In the UpdateAttribute, you can add attributes for the database and
> collection and then use them in GetMongo using NiFi Expression Language.
>
> Let me know, if that doesn’t help.
>
> -
> Sivaprasanna
>
> On Wed, 2 May 2018 at 1:26 PM, Brajendra Mishra <
> brajendra_mis...@persistent.com> wrote:
>
> > Hi Team,
> > We have found there is only the 'GetMongoDB' processor to connect and
> > query MongoDB in Apache NiFi.
> > However, this processor does not take any input.
> >
> > Is there another type of Apache NiFi processor which can take
> > parameters as input (details of MongoDB, query, instance, etc.) from
> > another processor?
> > If not, then please suggest when such a type of processor can be expected
> > in an upcoming release?
> >
> > Brajendra Mishra
> > Persistent Systems Ltd.
> >



[Attached NiFi flow template "Hello Mongo" (id
f6620488-0162-1000-acf2-cb751ab6e617); the template XML did not survive the
archive. Per the message above, it wires GenerateFlowFile ->
UpdateAttribute -> GetMongo over "success" relationships.]

Re: GetMongoDB : How to pass parameters as input to GetMongoDB processor

2018-05-02 Thread Sivaprasanna
Hi.

GetMongo can take input, so technically you can use a processor before it
and then connect it to GetMongo.

A simple example:
GenerateFlowfile -> UpdateAttribute -> GetMongo

In the UpdateAttribute, you can add attributes for the database and
collection and then use them in GetMongo using NiFi Expression Language,
for example:
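
A minimal sketch of that configuration (the attribute names are ones I made
up for the example; the property names are assumed to be GetMongo's database
and collection properties):

UpdateAttribute:  mongo.db         = my_database
                  mongo.collection = my_collection
GetMongo:         Mongo Database Name   = ${mongo.db}
                  Mongo Collection Name = ${mongo.collection}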

Let me know, if that doesn’t help.

-
Sivaprasanna

On Wed, 2 May 2018 at 1:26 PM, Brajendra Mishra <
brajendra_mis...@persistent.com> wrote:

> Hi Team,
> We have found there is only the 'GetMongoDB' processor to connect and query
> MongoDB in Apache NiFi.
> However, this processor does not take any input.
>
> Is there another type of Apache NiFi processor which can take parameters
> as input (details of MongoDB, query, instance, etc.) from another processor?
> If not, then please suggest when such a type of processor can be expected in
> an upcoming release?
>
> Brajendra Mishra
> Persistent Systems Ltd.
>


Re: RAT errors after pull of master

2018-04-30 Thread Sivaprasanna
This is the Maven RAT check tripping over the .iml and metadata files that
IntelliJ IDEA generates. Pierre or someone has already shared a way to solve
this earlier in a dev thread. I'll try to find it, but you can just search
"rat checks" in the dev threads and you'll find it. One quick way to clear
the leftovers is sketched below.

-
Sivaprasanna
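
A minimal sketch, assuming the flagged files are just IDE/build leftovers
that aren't tracked by git (preview first, then delete):

git clean -xdn   # dry run: list untracked and ignored files that would be removed
git clean -xdf   # actually remove them, then re-run the Maven build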

On Mon, 30 Apr 2018 at 7:10 PM, Otto Fowler <ottobackwa...@gmail.com> wrote:

> Anyone else seeing:
>
> [WARNING] Files with unapproved licenses:
>
>
> /Users/ottofowler/src/apache/forks/nifi/nifi-nar-bundles/nifi-standard-services/nifi-mongodb-client-service-api/target/maven-status/maven-compiler-plugin/compile/default-compile/inputFiles.lst
>
>
> /Users/ottofowler/src/apache/forks/nifi/nifi-nar-bundles/nifi-standard-services/nifi-mongodb-client-service-api/target/maven-status/maven-compiler-plugin/compile/default-compile/createdFiles.lst
>
>
> /Users/ottofowler/src/apache/forks/nifi/nifi-nar-bundles/nifi-standard-services/nifi-mongodb-client-service-api/target/maven-archiver/pom.properties
>
>
> /Users/ottofowler/src/apache/forks/nifi/nifi-nar-bundles/nifi-standard-services/nifi-mongodb-client-service-api/target/.plxarc
>
>
> /Users/ottofowler/src/apache/forks/nifi/nifi-nar-bundles/nifi-standard-services/nifi-mongodb-client-service-api/nifi-mongodb-client-service-api.iml
>
>
> /Users/ottofowler/src/apache/forks/nifi/nifi-nar-bundles/nifi-standard-services/nifi-mongodb-services-bundle/nifi-mongodb-services/target/maven-status/maven-compiler-plugin/testCompile/groovy-tests/inputFiles.lst
>
>
> /Users/ottofowler/src/apache/forks/nifi/nifi-nar-bundles/nifi-standard-services/nifi-mongodb-services-bundle/nifi-mongodb-services/target/maven-status/maven-compiler-plugin/testCompile/groovy-tests/createdFiles.lst
>
>
> /Users/ottofowler/src/apache/forks/nifi/nifi-nar-bundles/nifi-standard-services/nifi-mongodb-services-bundle/nifi-mongodb-services/target/maven-status/maven-compiler-plugin/testCompile/default-testCompile/inputFiles.lst
>
>
> /Users/ottofowler/src/apache/forks/nifi/nifi-nar-bundles/nifi-standard-services/nifi-mongodb-services-bundle/nifi-mongodb-services/target/maven-status/maven-compiler-plugin/testCompile/default-testCompile/createdFiles.lst
>
>
> /Users/ottofowler/src/apache/forks/nifi/nifi-nar-bundles/nifi-standard-services/nifi-mongodb-services-bundle/nifi-mongodb-services/target/maven-status/maven-compiler-plugin/compile/default-compile/inputFiles.lst
>
>
> /Users/ottofowler/src/apache/forks/nifi/nifi-nar-bundles/nifi-standard-services/nifi-mongodb-services-bundle/nifi-mongodb-services/target/maven-status/maven-compiler-plugin/compile/default-compile/createdFiles.lst
>
>
> /Users/ottofowler/src/apache/forks/nifi/nifi-nar-bundles/nifi-standard-services/nifi-mongodb-services-bundle/nifi-mongodb-services/target/maven-archiver/pom.properties
>
>
> /Users/ottofowler/src/apache/forks/nifi/nifi-nar-bundles/nifi-standard-services/nifi-mongodb-services-bundle/nifi-mongodb-services/target/.plxarc
>
>
> /Users/ottofowler/src/apache/forks/nifi/nifi-nar-bundles/nifi-standard-services/nifi-mongodb-services-bundle/nifi-mongodb-services/nifi-mongodb-services.iml
>
>
> /Users/ottofowler/src/apache/forks/nifi/nifi-nar-bundles/nifi-standard-services/nifi-mongodb-services-bundle/target/.plxarc
>
>
> /Users/ottofowler/src/apache/forks/nifi/nifi-nar-bundles/nifi-standard-services/nifi-mongodb-services-bundle/nifi-mongodb-services-bundle.iml
>
>
> /Users/ottofowler/src/apache/forks/nifi/nifi-nar-bundles/nifi-standard-services/nifi-mongodb-services-bundle/nifi-mongodb-services-nar/target/.plxarc
>
>
> /Users/ottofowler/src/apache/forks/nifi/nifi-nar-bundles/nifi-standard-services/nifi-mongodb-services-bundle/nifi-mongodb-services-nar/nifi-mongodb-services-nar.iml
>
>
> /Users/ottofowler/src/apache/forks/nifi/nifi-nar-bundles/nifi-standard-services/nifi-elasticsearch-client-service-api/target/maven-status/maven-compiler-plugin/compile/default-compile/inputFiles.lst
>
>
> /Users/ottofowler/src/apache/forks/nifi/nifi-nar-bundles/nifi-standard-services/nifi-elasticsearch-client-service-api/target/maven-status/maven-compiler-plugin/compile/default-compile/createdFiles.lst
>
>
> /Users/ottofowler/src/apache/forks/nifi/nifi-nar-bundles/nifi-standard-services/nifi-elasticsearch-client-service-api/target/maven-archiver/pom.properties
>
>
> /Users/ottofowler/src/apache/forks/nifi/nifi-nar-bundles/nifi-standard-services/nifi-elasticsearch-client-service-api/target/.plxarc
>
>
> /Users/ottofowler/src/apache/forks/nifi/nifi-nar-bundles/nifi-standard-services/nifi-elasticsearch-client-service-api/nifi-elasticsearch-client-service-api.iml
> [INFO] 
>


Re: [DISCUSS] Support for accessing sensitive values safely

2018-04-26 Thread Sivaprasanna
Andy, that's exactly what I had in mind, but without a separate textbox
popping up. What I had originally thought: there is a checkbox instead of a
button, named "Use Variables". If it is checked, whatever the user types,
AJAX comes into play and suggests (auto-completion) variables that the
user has access to. If the box is not checked, the value is going to be a
literal password. But in this case there is the normal textbox that pops up
when a property is edited; because it is more of a "textarea" than a
textbox, the suggestion dropdown would feel awkward there. I haven't worked
extensively on the UI, so I'm not sure how difficult it is going to be. The
UI folks would definitely pitch in much better ideas.

Thanks,
Sivaprasanna

On Thu, Apr 26, 2018 at 7:25 PM, Andy LoPresto <alopre...@apache.org> wrote:

> Yes, I think a dynamically-populated dropdown is unnecessary, and I think
> there might be a way to do this without requiring complex EL parsing.
>
> For sensitive properties, I imagine a text field as we have now, with a
> button to the right that says “Use Variable”. I do not anticipate having to
> support a combination of variable interpolation and literal text (i.e. no
> one’s password should be “${db.password}12345”), so we only have to support
> the use cases of *literal password entry* OR *variable population*.
>
> If the user clicks “Use Variable”, a dialog would appear which has a text
> field and allows the user to type, and uses an AJAX function to call a
> query API endpoint, which allows them to filter the available variables as
> they type. For example, if the user types “d”, the following list might
> appear:
>
> data.endpoint
> db.url
> db.username
> db.password
>
> As the user continues to type “db.p”, only “db.password” is available. The
> user selects this (thus not allowing true freeform input to the final
> value, only a predetermined list of valid values). Obviously this API would
> need to be protected with proper authorization to ensure that only the
> variables available to that user were exposed, but this would be the case
> on validation as well to ensure the user didn’t manually type in a variable
> reference they were not allowed to use.
>
> I’ve included a couple mockups of how I envision this. I’ve implemented
> the same thing in a Rails app a few years ago, so I know it’s possible, and
> with modern libraries, should not be too difficult. This should also
> demonstrate that people like Rob, Matt, and Scott who actually design the
> UI/UX that goes into NiFi are invaluable.
>
> [Inline mockup screenshots: the current sensitive property entry field;
> the proposed "Use Variable" button; the "Use Variable" dialog with an
> empty text field; the user typing, triggering the AJAX query; and the user
> narrowing down the result set and selecting a variable.]
>
> Andy LoPresto
> alopre...@apache.org
> *alopresto.apa...@gmail.com <alopresto.apa...@gmail.com>*
> PGP Fingerprint: 70EC B3E5 98A6 5A3F D3C4  BACE 3C6E F65B 2F7D EF69
>
> On Apr 26, 2018, at 9:05 AM, Bryan Bende <bbe...@gmail.com> wrote:
>
> That is a fair point about the list of variables potentially being
> long, we would probably want to get some UI/UX recommendations from
> the folks that have worked in that are the most.
>
> In practice I wonder if it would really be an issue though...
>
> The variables that would be selectable would only be the sensitive
> variables the user has access to, not all variables. I would imagine
> out of hundreds of variables only a handful are sensitive variables.
> It is similar to processor properties in that there are tons of
> properties across all processors, but only a small batch of sensitive
> properties (I just counted 63 references to sensitive(true) in the
> codebase).
>
> The other factor is that it would only be the sensitive variables that
> are in scope to where you are in the data flow, using the same logic
> of how variables are resolved in a hierarchical order. So you would
> only see sensitive variables that are in the current process group or
> a parent group.
>
> As far as knowing if the enter plain password vs. EL, that was the
> reason I suggested selecting a variable from a list. That is the
> mechanism that tells us that a variable was used. The hope was that
> you could enter free form text or select a variable, any time free
> form text was used it would function the way it does today where the
> value is cleared out when saved to registry. If a variable was
> selected then it could keep the variable name.
>
> If we didn't do a list of variables then we would need a checkbox or
> something for the user to indicate whether the sensitive value is a
> literal or expression language.
>
>
> On Thu, Apr 26, 2018 at 5:24 AM, S

Re: [DISCUSS] Support for accessing sensitive values safely

2018-04-26 Thread Sivaprasanna
Initially, when I thought of this, I imagined that we could still leverage
`PropertyDescriptor` and just add new method(s) to handle the sensitive
variables. Some thought has to be put into this.

*"It would provide a list of variables that are readable to the current
user and one can be selected"*
I think this might lead to bad UX. When the number of variables grows,
rendering the variables in a drop-down list may not be that good, IMHO.
Maybe we can still stick with a textbox and do the following:

   - Once a user enters a sensitive-variable expression, the
   framework does a background check on whether the user is authorized to
   use this variable. If the user doesn't have access, render that and make
   the component invalid, thereby preventing it from being enabled/started.

The only catch is how we specify whether what the user entered is a plain
password or EL. Either we can go with the assumption that all sensitive
values have to be taken from the Variable Registry and only the EL for those
variables will have to be provided as the sensitive property value, but this
would restrict the developers' freedom to choose what approach they want. Or
we can tweak the UI and present the option better, like having a checkbox
which, when checked, will treat the value as EL and evaluate it from the
variable registry, and when not checked, assume that the entered password is
a plain password and no evaluation needs to happen.

-

Sivaprasanna

On Thu, Apr 26, 2018 at 12:19 AM, Bryan Bende <bbe...@gmail.com> wrote:

> The policy model would need more thought, but the point would be that
> a user can select variable references they have been given permission
> to.
>
> In order to configure the processor that is referencing the variable,
> they already need write permissions to that processor, or some parent
> in the hierarchy if no specific policy exists.
>
>
>
> On Wed, Apr 25, 2018 at 2:42 PM, Otto Fowler <ottobackwa...@gmail.com>
> wrote:
> >
> > "It would provide a list of variables that are readable to the current
> user
> > and one can be selected, just like allowable values or controller
> services.”
> >
> > A person may have rights to configure nifi without knowing the “value” of
> > the secure db password ( for example ), but that doesn’t mean they
> > don’t have there rights to reference it.
> >
> >
> >
> > On April 25, 2018 at 14:15:16, Bryan Bende (bbe...@gmail.com) wrote:
> >
> > There is definitely room for improvement here.
> >
> > Keep in mind that often the sensitive information is specific to a
> > given environment. For example you build a flow in dev with your
> > db.password. You don't actually want your dev db password to be
> > propagated to the next environment, but you do want to be able to set
> > a variable placeholder like ${db.password} and leave that placeholder
> > so you can just set that variable in the next environment. So to me
> > the goal here is how to handle secure variables.
> >
> > Andy highlighted many of the issues, my proposal would be the
> following...
> >
> > First, we can introduce a concept of a sensitive variable. This would
> > be something in the UI where a user can indicate a variable is
> > sensitive, maybe a checkbox, and then the framework can store these
> > values encrypted (currently all variable values are stored in plain
> > text because they aren't meant to be sensitive).
> >
> > Second, we can introduce policies on sensitive variables so that we
> > can restrict who can read them elsewhere, just like policies on
> > controller services that determine which controller services show up
> > in the drop down of a processor.
> >
> > Third, we introduce a new kind of PropertyDescriptor that allows
> > selecting a variable from the variable registry rather than free-form
> > expression language. It would provide a list of variables that are
> > readable to the current user and one can be selected, just like
> > allowable values or controller services. Ideally we can have a way to
> > still allow free form values for people who don't want to use
> > variables.
> >
> > Fourth, anytime variables are evaluated from expression language we
> > would prevent evaluating any of these new sensitive variables since we
> > have no way of knowing if a user should have access to it from
> > free-form EL, so they can only be used from the special
> > PropertyDescriptors above.
> >
> > If we put all this in place then when we save flows to the registry,
> > we can leave the variable place-holders in the sensitive properties,
> > and then when you import to the next environment you only need to edit
> > the var

Support for accessing sensitive values safely

2018-04-25 Thread Sivaprasanna
Hi

Since flowfile attributes and the VariableRegistry are not suitable (not
safe, to be specific), developers have to rely on manually configuring the
sensitive values on the components (Processors & ControllerServices). And
during CI/CD (using the flow registry), the sensitive information is
dropped, and once the flow is imported to the next environment (QA or Prod),
the user is expected to configure the sensitive information again, although
only the first time. How about we introduce a sort of 'vault' that holds
sensitive values, which could possibly avoid this unnecessary step
completely?

-
Sivaprasanna


Re: Custom Controller Service

2018-04-25 Thread Sivaprasanna
Okay, but two questions:


   1. We are passing the attribute 'db.id'; does that mean we'll be using
   'UpdateAttribute' to add that attribute to the flowfile?
   2. If we are to use 'UpdateAttribute' to set the value for 'db.id', we
   need to know the value beforehand, right?

-

Sivaprasanna
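
(For illustration, a minimal sketch of the "DelegatingDBCPService" idea
Bryan describes below, assuming the getConnection(Map) variant proposed in
NIFI-5121; the delegate registration is omitted and all names are
illustrative, not existing NiFi code:)

import java.sql.Connection;
import java.util.HashMap;
import java.util.Map;
import org.apache.nifi.controller.AbstractControllerService;
import org.apache.nifi.dbcp.DBCPService;
import org.apache.nifi.processor.exception.ProcessException;

public class DelegatingDBCPService extends AbstractControllerService implements DBCPService {

    // db.id -> delegate pool; a real service would populate this from
    // dynamic properties that reference other DBCPService instances.
    private final Map<String, DBCPService> delegates = new HashMap<>();

    @Override
    public Connection getConnection() throws ProcessException {
        throw new ProcessException("A db.id attribute is required to pick a pool");
    }

    // The attribute-aware variant proposed in NIFI-5121.
    public Connection getConnection(final Map<String, String> attributes) throws ProcessException {
        final DBCPService delegate = delegates.get(attributes.get("db.id"));
        if (delegate == null) {
            throw new ProcessException("No pool registered for db.id=" + attributes.get("db.id"));
        }
        return delegate.getConnection();
    }
}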

On Wed, Apr 25, 2018 at 8:38 PM, Bryan Bende <bbe...@gmail.com> wrote:

> Charlie,
>
> That is a really nice solution, thanks for sharing.
>
> If we make the API changes in that JIRA I just sent, I could see
> having a new implementation of DBCPService that does something very
> similar.
>
> Basically there could be "DelegatingDBCPService" which still
> implemented the same DBCPService interface but followed a convention
> where it always looked in the attribute map for an attribute called
> "db.id", and then it itself has dynamic properties to define many
> DBCPServices where the property name was the db.id, and it would just
> return a Connection from the one with the given id.
>
> There are definitely other options for how to implement the dynamic
> connection service, but this would be a good one to have.
>
> -Bryan
>
>
> On Wed, Apr 25, 2018 at 10:58 AM, Charlie Meyer
> <charlie.me...@civitaslearning.com> wrote:
> > Chiming in a bit late on this, but we faced this same issue and got
> around
> > it by implementing a custom controller service which acts as a "router"
> to
> > different dbcp services. It exposes a method which given a uuid, returns
> > back the DBCPservice that corresponds with that uuid if it exists using
> >
> >  DBCPService dbcpService =
> > (DBCPService)
> > getControllerServiceLookup().getControllerService(uuid);
> >
> > From there, we created processors we needed based on the stock ones which
> > relied on our "router" service rather than a single DBCP. Our custom
> > processors read an attribute from incoming flow files, then send that to
> > the router which returns back the connection pool.
> >
> > On Wed, Apr 25, 2018 at 9:48 AM, Bryan Bende <bbe...@gmail.com> wrote:
> >
> >> Here is a proposal for how to modify the existing API to support both
> >> scenarios:
> >>
> >> https://issues.apache.org/jira/browse/NIFI-5121
> >>
> >> The scope of that ticket would be to make the interface change, and
> >> then update all of NiFi's DB processors to pass in the attribute map.
> >>
> >> Then a separate effort to provide a new service implementation that
> >> used the attribute map to somehow manage multiple connection pools, or
> >> create connections on the fly, or whatever the desired behavior is.
> >>
> >> On Wed, Apr 25, 2018 at 9:34 AM, Bryan Bende <bbe...@gmail.com> wrote:
> >> > To Otto's question...
> >> >
> >> > For simplicity sake, there is a new implementation of
> >> > DBCPConnectionPool that behind the scenes has two connection pools,
> >> > one for DB A and one for DB B, it doesn't matter how these are
> >> > configured.
> >> >
> >> > Now a flow file comes into the ExecuteSQL and it goes to
> >> > connectionPoolService.getConnection()...
> >> >
> >> > How does it know which DB to return a connection for?
> >> >
> >> >
> >> > On Wed, Apr 25, 2018 at 9:01 AM, Sivaprasanna <
> sivaprasanna...@gmail.com>
> >> wrote:
> >> >> Options 2 and 3 seem to be the probable approach. However, creating a
> >> >> varying number of connections based on *each* flowfile still sounds
> >> >> suboptimal. If the requirement still demands taking that road, then it's
> >> >> better to do some prep work: take the list of probable connections that
> >> >> are required, create connection pools for them up front, and then, based
> >> >> on each flowfile (which connection it needs), use the relevant one.
> >> >>
> >> >> Thanks,
> >> >> Sivaprasanna
> >> >>
> >> >> On Wed, 25 Apr 2018 at 6:07 PM, Bryan Bende <bbe...@gmail.com>
> wrote:
> >> >>
> >> >>> The issue here is more about the service API and not the
> >> implementations.
> >> >>>
> >> >>> The current API has no way to pass information between the processor
> >> and
> >> >>> service.
> >> >>>
> >> >>> The options boil down to...
> 
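
[Editor's note: a rough sketch of the "DelegatingDBCPService" idea Bryan
describes above, assuming the NIFI-5121 API change that lets a processor
pass the flowfile attribute map to the service. The class name, dynamic
property handling, and the "db.id" convention are illustrative only, not an
actual NiFi implementation.]

import java.sql.Connection;
import java.util.Map;

import org.apache.nifi.annotation.lifecycle.OnEnabled;
import org.apache.nifi.components.PropertyDescriptor;
import org.apache.nifi.controller.AbstractControllerService;
import org.apache.nifi.controller.ConfigurationContext;
import org.apache.nifi.dbcp.DBCPService;
import org.apache.nifi.processor.exception.ProcessException;
import org.apache.nifi.processor.util.StandardValidators;

public class DelegatingDBCPService extends AbstractControllerService implements DBCPService {

    private volatile ConfigurationContext context;

    // Each dynamic property maps a db.id value to the identifier of another
    // DBCPService configured elsewhere on the canvas.
    @Override
    protected PropertyDescriptor getSupportedDynamicPropertyDescriptor(final String name) {
        return new PropertyDescriptor.Builder()
                .name(name)
                .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
                .dynamic(true)
                .build();
    }

    @OnEnabled
    public void onEnabled(final ConfigurationContext context) {
        this.context = context;
    }

    @Override
    public Connection getConnection() throws ProcessException {
        throw new ProcessException("A db.id attribute is required to select a pool");
    }

    // NIFI-5121-style variant: pick the delegate pool named by the db.id attribute.
    public Connection getConnection(final Map<String, String> attributes) throws ProcessException {
        final String dbId = attributes.get("db.id");
        for (final Map.Entry<PropertyDescriptor, String> entry : context.getProperties().entrySet()) {
            if (entry.getKey().getName().equals(dbId)) {
                final DBCPService delegate = (DBCPService)
                        getControllerServiceLookup().getControllerService(entry.getValue());
                return delegate.getConnection();
            }
        }
        throw new ProcessException("No connection pool registered for db.id: " + dbId);
    }
}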

Re: Is there a configuration to limit the size of nifi's flowfile repository

2018-04-25 Thread Sivaprasanna
No, he actually mentioned “like content repository”. The answer is that
there aren’t any properties that support this, AFAIK. Pierre’s response
pretty much sums up why there aren’t any.

Thanks,
Sivaprasanna

On Wed, 25 Apr 2018 at 7:10 PM, Mike Thomsen <mikerthom...@gmail.com> wrote:

> I have a feeling that what Ben meant was how to limit the content
> repository size.
>
> On Wed, Apr 25, 2018 at 8:26 AM Pierre Villard <
> pierre.villard...@gmail.com>
> wrote:
>
> > Hi Ben,
> >
> > Since the flow file repository contains the information of the flow files
> > currently being processed by NiFi, you don't want to limit that
> repository
> > in size since it would prevent the workflows to create new flow files.
> >
> > Besides this repository is very lightweight, I'm not sure it'd need to be
> > limited in size.
> > Do you have a specific use case in mind?
> >
> > Pierre
> >
> >
> > 2018-04-25 9:15 GMT+02:00 尹文才 <batman...@gmail.com>:
> >
> > > Hi guys, I checked NIFI's system administrator guide trying to find a
> > > configuration item so that the size of the flowfile repository could be
> > > limited similar to the other repositories(e.g. content repository),
> but I
> > > didn't find such configuration items, is there currently any
> > configuration
> > > to limit the flowfile repository's size? thanks.
> > >
> > > Regards,
> > > Ben
> > >
> >
>
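
[Editor's note: for readers looking for the content repository limits Mike
alludes to, these are the relevant nifi.properties entries; the values shown
are the shipped defaults at the time, so confirm against your version's
admin guide.]

# content repository archive limits in nifi.properties
nifi.content.repository.archive.max.retention.period=12 hours
nifi.content.repository.archive.max.usage.percentage=50%
nifi.content.repository.archive.enabled=true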


Re: Custom Controller Service

2018-04-25 Thread Sivaprasanna
Options 2 and 3 seem to be the probable approach. However, creating a
varying number of connections based on *each* flowfile still sounds
suboptimal. If the requirement still demands taking that road, then it's
better to do some prep work: take the list of probable connections that are
required, create connection pools for them up front, and then, based on
each flowfile (which connection it needs), use the relevant one.

Thanks,
Sivaprasanna

On Wed, 25 Apr 2018 at 6:07 PM, Bryan Bende <bbe...@gmail.com> wrote:

> The issue here is more about the service API and not the implementations.
>
> The current API has no way to pass information between the processor and
> service.
>
> The options boil down to...
>
> - Make a new API, but then you need all new processors that use the new API
>
> - Modify the current API to have a new method, but then we are combining two
> concepts into one API and some impls may not implement both
>
> - Modify the processors to use two different service APIs, but enforce that
> only one can be used at a time, so it can have either the original
> connection pool service or can have some new dynamic connection factory,
>  but not both, and then modify all DB processors to have logic to determine
> which service to use.
>
> On Wed, Apr 25, 2018 at 8:28 AM Otto Fowler <ottobackwa...@gmail.com>
> wrote:
>
> > Or you could just call every time you needed properties more likely.
> > This would still be custom unless integrated….
> >
> >
> > On April 25, 2018 at 08:26:57, Otto Fowler (ottobackwa...@gmail.com)
> > wrote:
> >
> > Can services work with other controller services?
> > Maybe a PropertiesControllerService, FilePropertiesControllerService
> could
> > work with your service?
> > the PCS could fire events on property changes etc.
> >
> >
> >
> > On April 25, 2018 at 08:05:27, Mike Thomsen (mikerthom...@gmail.com)
> > wrote:
> >
> > Shot in the dark here, but what you could try to do is create a custom
> > connection pool service that uses dynamic properties to build a "pool of
> > connection pools." You could then use the property names as hints for
> > where to send the queries.
> >
> > On Wed, Apr 25, 2018 at 6:19 AM Rishab Prasad <rishabprasad...@gmail.com
> >
> > wrote:
> >
> > > Hi,
> > >
> > > Basically, there are 'n' number of databases that we are dealing with.
> We
> > > need to fetch the data from the source database into HDFS. Now since we
> > are
> > > dealing with many databases, the source database is not static and
> > changes
> > > every now and then. And every time the source database changes we
> > manually
> > > need to change the value for the connection parameters in
> > > DBCPConnectionPool. Now, people suggest that for 'n' databases create
> 'n'
> > > connections for each database, but that is not possible because 'n' is
> a
> > > big number and creating that many connections in DBCPConnectionPool is
> > not
> > > possible. So we were looking for a way where we can specify all the
> > > connection parameters in a file present in our local system and then
> make
> > > the DBCPConnectionPool controller service to read the values from the
> > file.
> > > In that way we can simply change the value in the file present in the
> > local
> > > system. No need to alter anything in the dataflow. But it turns out
> that
> > > FlowFile attributes are not available to the controller services as the
> > > expression language is evaluated at the time of service enable.
> > >
> > > So can you suggest a way where I can achieve my requirement (except
> > > 'variable.registry' ) ? I am looking to develop a custom controller
> > service
> > > that can serve the requirement but how do I make the flowfile
> attributes
> > > available to the service?
> > >
> >
> --
> Sent from Gmail Mobile
>


Re: Nifi Custom Processor Import Error

2018-04-20 Thread Sivaprasanna
You have to build it locally after downloading or cloning.

-
Sivaprasanna

On Fri, 20 Apr 2018 at 4:16 PM, rishabprasad005 <rishabprasad...@gmail.com>
wrote:

> How can we download a Maven dependency for NiFi having version
> "1.7.0-SNAPSHOT"? I tried doing this but get an error since the latest
> released version is 1.6.0.
>
>
>
> --
> Sent from: http://apache-nifi-developer-list.39713.n7.nabble.com/
>
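
[Editor's note: in other words, something like the following, assuming Git
and Maven are installed. This installs the 1.7.0-SNAPSHOT artifacts into the
local ~/.m2 repository so a custom processor build can resolve them.]

git clone https://github.com/apache/nifi.git
cd nifi
mvn clean install -DskipTests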


Re: No controller service types found that are applicable for this property

2018-04-19 Thread Sivaprasanna
Hi,

Have you added ‘nifi-dbcp-service-api’ as a dependency in the processor’s
POM?

-
Sivaprasanna

On Thu, 19 Apr 2018 at 8:14 PM, Rishab Prasad <rishabprasad...@gmail.com>
wrote:

>
> Hi,
>
> I am new to Apache NiFi. I have created a new custom processor by copying
> the source code of the *ListDatabaseTable* processor and built it using Maven.
>
> Then I copied the NAR file to the lib folder and restarted NiFi. When I try
> to configure the property for Database Connection Pooling Service and choose
> the **create new service** option, it throws the following error: "*No
> controller service types found that are applicable for this property*".
> Please help me resolve this.
>
>


Re: [EXT] Suggestion: Apache NiFi component enhancement

2018-04-18 Thread Sivaprasanna
Mike,

Thanks for the response and those are valid concerns.. a couple of them I
didn't even think about in the first place. Here are my thoughts:

*   - When searching by component type, will I be able to find components
by both class name and annotation name?*
Yes. That way it adds more value.


*   - Will the flow.xml contain the class name or the annotation name or
both?  If just one, might people look through the flow.xml for the other
 name and not find anything?*
   Since my intention was to limit the effects to just the UI, I didn't
think of writing the annotation anywhere other than using it in the DTO, so
my assumption is that the flow.xml won't have the annotation. Feel free to
share your ideas on this.


*   - Would the auto-generated documentation contain both class name and
annotation name?*
Yes. It should.


*- Would this have any impact to the Registry?  Perhaps not yet, but
does it fit with the desire to include component extensions in the
Registry?*
   I don't have a clear idea on this yet.


*- Would we have to modify the Logger messages to include the
annotation name instead of the class name in the logs?*
   I think that would be neat: having the annotation name in the
logger messages.

Having said that, I'm fairly new to contributing to Apache NiFi, so I might
be wrong about some things (like my assumption above), so feel free to
correct me; and others, if this interests you, please pitch in your
thoughts.


*-*
Sivaprasanna

On Fri, Apr 13, 2018 at 9:52 PM, Michael Moser <moser...@gmail.com> wrote:

> Hi Sivaprasanna,
>
> After reading your first email, I thought it would be a lot of work for
> little benefit, because I think you would have needed to touch many parts
> of the framework.
>
> After reading your clarification, the scope seems more limited.  It's
> definitely an interesting idea, and here are my thoughts
>
>- When searching by component type, will I be able to find components by
>both class name and annotation name?
>- Will the flow.xml contain the class name or the annotation name or
>both?  If just one, might people look through the flow.xml for the other
>name and not find anything?
>- Would the auto-generated documentation contain both class name and
>annotation name?
>- Would this have any impact to the Registry?  Perhaps not yet, but does
>it fit with the desire to include component extensions in the Registry?
>- Would we have to modify the Logger messages to include the annotation
>name instead of the class name in the logs?
>
> Thank you for your engagement with NiFi and for thinking of ways to improve
> it!
>
> -- Mike
>
>
>
> On Fri, Apr 13, 2018 at 11:50 AM, Sivaprasanna <sivaprasanna...@gmail.com>
> wrote:
>
> > Busy week, eh?
> >
> > Anybody having any concerns or suggestions? Any input is appreciated :)
> >
> > Cheers,
> > Sivaprasanna
> >
> > On Thu, Apr 12, 2018 at 10:14 PM, Sivaprasanna <
> sivaprasanna...@gmail.com>
> > wrote:
> >
> > > No, my suggestion was a simpler approach. It affects only the UI
> > > aspect, as my intention is just to override how the 'type' gets
> > > rendered in the UI. For example, a processor's type is set to its
> > > canonical class name (DtoFactory.java#createProcessorDto
> > > <https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-api/src/main/java/org/apache/nifi/web/api/dto/DtoFactory.java#L2783>),
> > > but rather than getting the canonical class name, we'd get it from
> > > some other method that checks whether the new annotation is present:
> > > if it is, use the value provided in the annotation as the type; if
> > > not, use the canonical class name just like now. Again, my intention
> > > is to affect only the UI, to avoid introducing unnecessary
> > > complications that could pose backwards-compatibility issues.
> > >
> > > -
> > > Sivaprasanna
> > >
> > > On Thu, Apr 12, 2018 at 1:35 PM, Peter Wicks (pwicks) <
> pwi...@micron.com
> > >
> > > wrote:
> > >
> > >> I think this is a good idea. But based on your example I think you
> > >> would want to provide a primary Type along with a list of Alias types.
> > >> If NiFi starts and it can no longer find a processor by the Type name
> > >> it had in the flow.xml, it can check the annotations/aliases to see if
> > >> it's been renamed. This would allow for easy renames.
> > >>
> > >
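
[Editor's note: one possible shape for the annotation under discussion,
folding in Peter's alias idea. This is a hypothetical sketch, not an
existing NiFi API.]

import java.lang.annotation.Documented;
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

@Documented
@Target(ElementType.TYPE)
@Retention(RetentionPolicy.RUNTIME)
public @interface ComponentType {
    /** Display name rendered in the UI instead of the canonical class name. */
    String value();

    /** Previous names, so flows referencing an old type can still resolve. */
    String[] aliases() default {};
}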

Re: The import org.apache.nifi.dbcp.DBCPService cannot be resolved

2018-04-17 Thread Sivaprasanna
*not the implementation bundle itself

On Tue, Apr 17, 2018 at 6:33 PM, Sivaprasanna <sivaprasanna...@gmail.com>
wrote:

> Yep. That's correct. You have to use the api and the implementation bundle
> itself. Add the following to your processor's POM:
>
> <dependency>
>     <groupId>org.apache.nifi</groupId>
>     <artifactId>nifi-dbcp-service-api</artifactId>
>     <version>1.6.0</version>
> </dependency>
>
> Change the version if you're trying to deploy your custom processor bundle
> on a different NiFi version.
>
> -
> Sivaprasanna
>
> On Tue, Apr 17, 2018 at 5:31 PM, Mike Thomsen <mikerthom...@gmail.com>
> wrote:
>
>> Did you declare a dependency on nifi-dbcp-service-api? If not, you have to
>> do that and set the scope to "provided.
>>
>> On Tue, Apr 17, 2018 at 7:05 AM Rishab Prasad <rishabprasad...@gmail.com>
>> wrote:
>>
>> > Hi,
>> >
>> > I am trying to create a custom processor equivalent to that of
>> > ListDatabaseTable. I copied and pasted the source code of
>> > ListDatabaseTable and used the same for my custom processor. But I am
>> > getting the following error:
>> >
>> > [image: enter image description here] <https://i.stack.imgur.com/370N8.png>
>> >
>> > The error says *The import org.apache.nifi.dbcp.DBCPService cannot be
>> > resolved*. I tried downloading this jar file
>> > <https://mvnrepository.com/artifact/org.apache.nifi/nifi-dbcp-service/1.5.0>
>> > for org.apache.nifi.dbcp and added it as an external jar in my build
>> > path. The error still can't be resolved. How can I resolve this error?
>> >
>>
>
>


Re: The import org.apache.nifi.dbcp.DBCPService cannot be resolved

2018-04-17 Thread Sivaprasanna
Yep. That's correct. You have to use the api and the implementation bundle
itself. Add the following to your processor's POM:


<dependency>
    <groupId>org.apache.nifi</groupId>
    <artifactId>nifi-dbcp-service-api</artifactId>
    <version>1.6.0</version>
</dependency>

Change the version if you're trying to deploy your custom processor bundle
on a different NiFi version.

-
Sivaprasanna

On Tue, Apr 17, 2018 at 5:31 PM, Mike Thomsen <mikerthom...@gmail.com>
wrote:

> Did you declare a dependency on nifi-dbcp-service-api? If not, you have to
> do that and set the scope to "provided.
>
> On Tue, Apr 17, 2018 at 7:05 AM Rishab Prasad <rishabprasad...@gmail.com>
> wrote:
>
> > Hi,
> >
> > I am trying to create a custom processor equivalent to that of
> > ListDatabaseTable. I copied and pasted the source code of
> > ListDatabaseTable and used the same for my custom processor. But I am
> > getting the following error:
> >
> > [image: enter image description here] <https://i.stack.imgur.com/370N8.png>
> >
> > The error says *The import org.apache.nifi.dbcp.DBCPService cannot be
> > resolved*. I tried downloading this jar file
> > <https://mvnrepository.com/artifact/org.apache.nifi/nifi-dbcp-service/1.5.0>
> > for org.apache.nifi.dbcp and added it as an external jar in my build
> > path. The error still can't be resolved. How can I resolve this error?
> >
>


Re: [EXT] Suggestion: Apache NiFi component enhancement

2018-04-13 Thread Sivaprasanna
Busy week, eh?

Anybody having any concerns or suggestions? Any input is appreciated :)

Cheers,
Sivaprasanna

On Thu, Apr 12, 2018 at 10:14 PM, Sivaprasanna <sivaprasanna...@gmail.com>
wrote:

> No, my suggestion was a simpler approach. It affects only the UI aspect,
> as my intention is just to override how the 'type' gets rendered in the
> UI. For example, a processor's type is set to its canonical class name (
> DtoFactory.java#createProcessorDto
> <https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-api/src/main/java/org/apache/nifi/web/api/dto/DtoFactory.java#L2783>),
> but rather than getting the canonical class name, we'd get it from some
> other method that checks whether the new annotation is present: if it is,
> use the value provided in the annotation as the type; if not, use the
> canonical class name just like now. Again, my intention is to affect only
> the UI, to avoid introducing unnecessary complications that could pose
> backwards-compatibility issues.
>
> -
> Sivaprasanna
>
> On Thu, Apr 12, 2018 at 1:35 PM, Peter Wicks (pwicks) <pwi...@micron.com>
> wrote:
>
>> I think this is a good idea. But based on your example I think you would
>> want to provide a primary Type along with a list of Alias types.
>> If NiFi starts and it can no longer find a processor by the Type name it
>> had in the flow.xml, it can check the annotations/aliases to see if it's
>> been renamed. This would allow for easy renames.
>>
>> Example 1: NiFi can no longer find AzureDocumentDBProcessor. Developer
>> renamed it to CosmosDBProcessor. In this case we don't really want the
>> type to still say "DocumentDB", that's just confusing. Also, we might not
>> want the type named CosmosDBProcessor. So we make the Type be something
>> nice, like "Azure Cosmos DB", then add Aliases for "AzureDocumentDBProcessor" and
>> "CosmosDBProcessor".
>>
>> Next year when Microsoft renames it "CelestialDB" we can rename the
>> processor and add another alias.
>>
>> Something like that?
>>
>> -Original Message-
>> From: Sivaprasanna [mailto:sivaprasanna...@gmail.com]
>> Sent: Wednesday, April 11, 2018 23:37
>> To: dev@nifi.apache.org
>> Subject: [EXT] Suggestion: Apache NiFi component enhancement
>>
>> All,
>>
>> Currently the "type" of a component is actually the component's canonical
>> class name which gets rendered in the UI as the class name with the
>> component version. This is good. However I'm thinking it is better to have
>> an annotation which a developer can use to override the component type.
>>
>> How is it used?
>> I think an annotation can be sufficient. The framework checks if the
>> annotation is present or not, if it is present, it uses the name provided
>> there or else it uses the class name like how it is happening.
>>
>> Why and where is it needed?
>>
>>- In scenarios where we devise a new naming convention and want to
>> apply
>>it to older components without breaking backward compatibility
>>- A developer had created a component class with a name but later down
>>the line, the developer or someone else wants to change it to something
>>else, the reason could again be naming convention or just that the new
>> name
>>makes more sense
>>- A component that has been built to work with third party tech, like
>>Azure, MongoDB, S3, Druid processors but the later versions of that
>> tech
>>has been changed to something else by the original creators. (Something
>>similar has happened to Azure's DocumentDB which got later rebranded as
>>Azure CosmosDB). In such cases, without deprecating or rebuilding a new
>>processor, this can be used.
>>
>> Before creating a JIRA, I wanted to get the community's thoughts. Feel
>> free to share your thoughts, concerns. If everything seems fine, I'll start
>> working on the implementation.
>>
>> -
>>
>> Sivaprasanna
>>
>
>


Re: PutHDFS error

2018-04-13 Thread Sivaprasanna
Glad that you have solved it. If possible, can you share how you resolved
it? It might help others who face this issue in the future.

-
Sivaprasanna

On Fri, 13 Apr 2018 at 7:05 PM, hemamoger <hemamo...@gmail.com> wrote:

> thanks for your suggestions.
> I have got the answer.
>
>
>
> --
> Sent from: http://apache-nifi-developer-list.39713.n7.nabble.com/
>


Re: [EXT] Suggestion: Apache NiFi component enhancement

2018-04-12 Thread Sivaprasanna
No, my suggestion was a simpler approach. It affects only the UI aspect, as
my intention is just to override how the 'type' gets rendered in the UI.
For example, a processor's type is set to its canonical class name (
DtoFactory.java#createProcessorDto
<https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-api/src/main/java/org/apache/nifi/web/api/dto/DtoFactory.java#L2783>),
but rather than getting the canonical class name, we'd get it from some
other method that checks whether the new annotation is present: if it is,
use the value provided in the annotation as the type; if not, use the
canonical class name just like now. Again, my intention is to affect only
the UI, to avoid introducing unnecessary complications that could pose
backwards-compatibility issues.

-
Sivaprasanna

On Thu, Apr 12, 2018 at 1:35 PM, Peter Wicks (pwicks) <pwi...@micron.com>
wrote:

> I think this is a good idea. But based on your example I think you would
> want to provide a primary Type along with a list of Alias types.
> If NiFi starts and it can no longer find a processor by the Type name it
> had in the flow.xml, it can check the annotations/aliases to see if it's
> been renamed. This would allow for easy renames.
>
> Example 1: NiFi can no longer find AzureDocumentDBProcessor. Developer
> renamed it to CosmosDBProcessor. In this case we don't really want the type
> to still say "DocumentDB", that's just confusing. Also, we might not want
> the type named CosmosDBProcessor. So we make the Type be something nice,
> like "Azure Cosmos DB", then add Aliases for "AzureDocumentDBProcessor" and
> "CosmosDBProcessor".
>
> Next year when Microsoft renames it "CelestialDB" we can rename the
> processor and add another alias.
>
> Something like that?
>
> -Original Message-
> From: Sivaprasanna [mailto:sivaprasanna...@gmail.com]
> Sent: Wednesday, April 11, 2018 23:37
> To: dev@nifi.apache.org
> Subject: [EXT] Suggestion: Apache NiFi component enhancement
>
> All,
>
> Currently the "type" of a component is actually the component's canonical
> class name which gets rendered in the UI as the class name with the
> component version. This is good. However I'm thinking it is better to have
> an annotation which a developer can use to override the component type.
>
> How is it used?
> I think an annotation can be sufficient. The framework checks if the
> annotation is present or not, if it is present, it uses the name provided
> there or else it uses the class name like how it is happening.
>
> Why and where is it needed?
>
>- In scenarios where we devise a new naming convention and want to apply
>it to older components without breaking backward compatibility
>- A developer had created a component class with a name but later down
>the line, the developer or someone else wants to change it to something
>else, the reason could again be naming convention or just that the new
> name
>makes more sense
>- A component that has been built to work with third party tech, like
>Azure, MongoDB, S3, Druid processors but the later versions of that tech
>has been changed to something else by the original creators. (Something
>similar has happened to Azure's DocumentDB which got later rebranded as
>Azure CosmosDB). In such cases, without deprecating or rebuilding a new
>processor, this can be used.
>
> Before creating a JIRA, I wanted to get the community's thoughts. Feel
> free to share your thoughts, concerns. If everything seems fine, I'll start
> working on the implementation.
>
> -
>
> Sivaprasanna
>


Re: NiFi to S3

2018-04-11 Thread Sivaprasanna
Adrian,

The current version of the S3 processors in Apache NiFi doesn’t have the
object tagging feature. I think it was recently added to the AWS S3 SDK,
owing to the new GDPR regulation announced by the EU. Please open a JIRA to
have this feature added to the S3 processors.

-
Sivaprasanna

On Thu, 12 Apr 2018 at 6:00 AM, Adrian D <adnov...@gmail.com> wrote:

> Sir/Ma'am
>
> Is it possible to assign S3 object tags in NiFi upon writing? I am trying
> to implement object tagging as the PutS3Object processor is running. The
> closest reference to object tagging I found:
>
> https://docs.aws.amazon.com/AmazonS3/latest/dev/object-tagging.html
>
> Please let me know. Thank you.
>
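
[Editor's note: in the meantime, tagging at write time with the AWS SDK for
Java (v1) directly looks roughly like this; the bucket, key, file, and tag
names are placeholders, and this is a sketch rather than NiFi code.]

import java.io.File;
import java.util.Arrays;

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.ObjectTagging;
import com.amazonaws.services.s3.model.PutObjectRequest;
import com.amazonaws.services.s3.model.Tag;

public class TaggedUpload {
    public static void main(String[] args) {
        final AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
        // Attach the object tags as part of the same PUT request.
        final PutObjectRequest request =
                new PutObjectRequest("my-bucket", "my-key", new File("data.csv"))
                        .withTagging(new ObjectTagging(Arrays.asList(
                                new Tag("classification", "pii"),
                                new Tag("retention", "30d"))));
        s3.putObject(request);
    }
}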


Suggestion: Apache NiFi component enhancement

2018-04-11 Thread Sivaprasanna
All,

Currently the "type" of a component is actually the component's canonical
class name which gets rendered in the UI as the class name with the
component version. This is good. However I'm thinking it is better to have
an annotation which a developer can use to override the component type.

How is it used?
I think an annotation can be sufficient. The framework checks whether the
annotation is present; if it is, it uses the name provided there, otherwise
it uses the class name as it does today.

Why and where is it needed?

   - In scenarios where we devise a new naming convention and want to apply
   it to older components without breaking backward compatibility
   - A developer created a component class with a name, but later down
   the line the developer or someone else wants to change it to something
   else; the reason could again be a naming convention, or just that the new
   name makes more sense
   - A component that has been built to work with third-party tech, like the
   Azure, MongoDB, S3, or Druid processors, but a later version of that tech
   has been renamed by the original creators. (Something similar happened to
   Azure's DocumentDB, which was later rebranded as Azure CosmosDB.) In such
   cases, without deprecating or rebuilding a new processor, this can be
   used.

Before creating a JIRA, I wanted to get the community's thoughts. Feel free
to share your thoughts, concerns. If everything seems fine, I'll start
working on the implementation.

-

Sivaprasanna


Re: PutHDFS error

2018-04-08 Thread Sivaprasanna
1. You have shared a screenshot for "hdfs dfs -ls", which will basically
list the contents under /user/root since you're executing it as root, but
you have provided "/root" in PutHDFS, so please share the list of contents
available under /root, i.e. hdfs dfs -ls /root
2. Have you tried Pierre's approach? That should work every time, since it
appends an epoch timestamp to the filename, which leaves no possibility of
duplicates.


On Mon, Apr 9, 2018 at 10:30 AM, hemamoger  wrote:

> please check the attached images: puthdfs_properties.png
> <http://apache-nifi-developer-list.39713.n7.nabble.com/file/t951/puthdfs_properties.png>
> getsftp_confgrn.png
> <http://apache-nifi-developer-list.39713.n7.nabble.com/file/t951/getsftp_confgrn.png>
> hadoop_properties.hadoop_properties
> <http://apache-nifi-developer-list.39713.n7.nabble.com/file/t951/hadoop_properties.hadoop_properties>
>
>
>
> --
> Sent from: http://apache-nifi-developer-list.39713.n7.nabble.com/
>


Re: PutHDFS error

2018-04-06 Thread Sivaprasanna
Did you try the approach that Pierre suggested? That should work in all
cases. Can you please share a screenshot of the processor’s configuration
menu?

Also please make sure whatever path value you have set for the
‘Directory’ property is prefixed with a ‘/‘. Otherwise it will look for the
relative path with respect to the user, i.e. /user/nifi/. If
that’s the case, you may have to check that directory to see if a file with
the same name already exists.

-
Sivaprasanna

On Fri, 6 Apr 2018 at 12:33 PM, hemamoger <hemamo...@gmail.com> wrote:

> Yes, I tried with replace and append; that throws an error saying Java is
> not supported, something like that.
> But I don't have that file in Hadoop which I am pushing to HDFS; still it's
> throwing that error.
>
>
>
> --
> Sent from: http://apache-nifi-developer-list.39713.n7.nabble.com/
>


Re: PutHDFS error

2018-04-06 Thread Sivaprasanna
You can do what Pierre suggested. However, if you don’t want to keep the
already existing file and instead want to overwrite it with the new one,
configure the PutHDFS processor and set ‘Conflict Resolution Strategy’ to
“replace” or “append” based on your requirement.

-
Sivaprasanna

On Fri, 6 Apr 2018 at 11:57 AM, Pierre Villard <pierre.villard...@gmail.com>
wrote:

> Hi,
>
> This means that the name of the file you're pushing in HDFS already exists
> in the target directory.
> Usually people use an UpdateAttribute processor before the PutHDFS to update
> the attribute 'filename' with a timestamped suffix.
> Something like: filename => ${filename}-${now()}
> To be changed according to your needs.
>
> Thanks,
> Pierre
>
>
> 2018-04-06 8:18 GMT+02:00 hemamoger <hemamo...@gmail.com>:
>
> > <http://apache-nifi-developer-list.39713.n7.nabble.com/file/t951/data1.png>
> >
> >
> >
> > Hi Team,
> >
> > I am getting an error like in the above image. I am not sure whether the
> > PutHDFS configuration is proper or not.
> > Can you please help with the error, and also what configuration do I need
> > to set up for the PutHDFS processor?
> >
> > Thanks in advance.
> >
> >
> >
> > --
> > Sent from: http://apache-nifi-developer-list.39713.n7.nabble.com/
> >
>


Re: Read processor property in init()

2018-03-29 Thread Sivaprasanna
Yep. That’s correct.

On Thu, 29 Mar 2018 at 6:45 PM, Jeff Zemerick <jzemer...@apache.org> wrote:

> Thanks! Just to confirm, each time the processor is started the
> @OnScheduled annotated method is executed, right?
>
> Jeff
>
>
> On Wed, Mar 28, 2018 at 9:07 AM, Sivaprasanna <sivaprasanna...@gmail.com>
> wrote:
>
> > Just to add on top of what Mike said: the @OnScheduled annotation
> > indicates that the method marked with it will run every time the
> > processor is started. So basically setup() will be called and executed
> > every time the processor is started from the UI.
> >
> > On Wed, 28 Mar 2018 at 6:34 PM, Jeff Zemerick <jzemer...@apache.org>
> > wrote:
> >
> > > I will give that a go. Thanks for the quick answer, Mike!
> > >
> > > On Wed, Mar 28, 2018 at 9:01 AM, Mike Thomsen <mikerthom...@gmail.com>
> > > wrote:
> > > > Just do...
> > > >
> > > > @OnScheduled
> > > > public void setup(ProcessContext context) {
> > > > //Read properties and do setup.
> > > > }
> > > >
> > > > On Wed, Mar 28, 2018 at 8:57 AM, Jeff Zemerick <jzemer...@apache.org
> >
> > > wrote:
> > > >
> > > >> Hi everyone,
> > > >>
> > > >> Is there a recommended method for making user-configurable property
> > > >> values available to a processor's init()? I would like to load a
> large
> > > >> index file but allow the user to specify the index's path. I am
> > > >> guessing that init() is executed too early to read user properties.
> > > >>
> > > >> Thanks for any suggestions.
> > > >>
> > > >> Jeff
> > > >>
> > >
> >
>
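
[Editor's note: putting the pieces together, a minimal sketch of the pattern
discussed above; the class name, property name, and index-loading logic are
illustrative.]

import org.apache.nifi.annotation.lifecycle.OnScheduled;
import org.apache.nifi.components.PropertyDescriptor;
import org.apache.nifi.processor.AbstractProcessor;
import org.apache.nifi.processor.ProcessContext;
import org.apache.nifi.processor.util.StandardValidators;

public abstract class IndexBackedProcessor extends AbstractProcessor {

    static final PropertyDescriptor INDEX_PATH = new PropertyDescriptor.Builder()
            .name("index-path")
            .displayName("Index Path")
            .description("Filesystem path of the index file to load when the processor starts.")
            .required(true)
            .addValidator(StandardValidators.FILE_EXISTS_VALIDATOR)
            .build();

    private volatile String indexPath;

    // Runs every time the processor is started, after properties are set,
    // so the expensive load happens once per start rather than per flowfile.
    @OnScheduled
    public void loadIndex(final ProcessContext context) {
        indexPath = context.getProperty(INDEX_PATH).getValue();
        // ... open and cache the large index here so onTrigger can reuse it ...
    }
}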


Re: [VOTE] Release Apache NiFi 1.6.0 (RC2)

2018-03-28 Thread Sivaprasanna
Thanks, Scott. That helped.

On Thu, 29 Mar 2018 at 10:09 AM, James Wing  wrote:

> +1 (binding).  Ran through the release helper, worked with a test flow.
> Thanks for putting this together.
>
> On Mon, Mar 26, 2018 at 8:34 PM, Joe Witt  wrote:
>
> > Hello,
> >
> > I am pleased to be calling this vote for the source release of Apache
> > NiFi nifi-1.6.0.
> >
> > The source zip, including signatures, digests, etc. can be found at:
> > https://repository.apache.org/content/repositories/orgapachenifi-1123
> >
> > The Git tag is nifi-1.6.0-RC2
> > The Git commit ID is b5935ec81a7cbc048820781ac62cd96bbea5b232
> > https://git-wip-us.apache.org/repos/asf?p=nifi.git;a=commit;h=
> > b5935ec81a7cbc048820781ac62cd96bbea5b232
> >
> > Checksums of nifi-1.6.0-source-release.zip:
> > SHA1: 009f1e2e3c17e38f21f27170b9c06228d11653c0
> > SHA256: 39941a5b25427e2b4cc5ba8206084ff92df58863f29ddd097d4ac1e85424beb9
> > SHA512: 1773417a48665e3cda22180ea7f401bc8190ebddbf3f7bc29831e46e7ab0
> > a07694c6e478d252fa573209d4a3c8132a522a8507b6a8784669ab7364847a07e234
> >
> > Release artifacts are signed with the following key:
> > https://people.apache.org/keys/committer/joewitt.asc
> >
> > KEYS file available here:
> > https://dist.apache.org/repos/dist/release/nifi/KEYS
> >
> > 146 issues were closed/resolved for this release:
> > https://issues.apache.org/jira/secure/ReleaseNote.jspa?
> > projectId=12316020=12342422
> >
> > Release note highlights can be found here:
> > https://cwiki.apache.org/confluence/display/NIFI/
> > Release+Notes#ReleaseNotes-Version1.6.0
> >
> > The vote will be open for 72 hours.
> > Please download the release candidate and evaluate the necessary items
> > including checking hashes, signatures, build
> > from source, and test.  The please vote:
> >
> > [ ] +1 Release this package as nifi-1.6.0
> > [ ] +0 no opinion
> > [ ] -1 Do not release this package because...
> >
>
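
[Editor's note: for reference, the verification steps from the release
helper boil down to something like the following, run in the directory
containing the download.]

# compare against the SHA-256 listed in the vote email
sha256sum nifi-1.6.0-source-release.zip

# import the signing keys, then verify the detached signature
wget https://dist.apache.org/repos/dist/release/nifi/KEYS
gpg --import KEYS
gpg --verify nifi-1.6.0-source-release.zip.asc nifi-1.6.0-source-release.zip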


Re: ListSFTP incoming relationship

2018-03-28 Thread Sivaprasanna
Should we really have optional state-saving functionality? If the user is
unaware of the implications and proceeds to store the state, then what
Andrew Grande mentioned will happen: the possibility of a never-ending
stream of state information being stored. If we still go with the optional
state management approach, the documentation has to be clear in explaining
the implications.

Sivaprasanna

On Thu, 29 Mar 2018 at 9:28 AM, scott <tcots8...@gmail.com> wrote:

> Okay. So, a new processor called "ScanSFTP", allow incoming relationship
> where the content of the flow file is replaced with the list of matching
> files from the remote directory, then the list is filtered by the usual
> regex parameters like today. Optional state information is kept to
> additionally filter the list of files older than the newest file
> observed during the last run. Does that sound okay to everyone? If so,
> what's the next step?
>
> Scott
>
>
> On 03/27/2018 06:21 PM, scott wrote:
> >
> > This is a great discussion, and appreciate the interest in my problem.
> > I think there are workarounds if you decide not to store state, but
> > I'd recommend keeping it. I think state should be kept optionally,
> > even turned off by default. Several times I've had issues where the
> > state has caused me to miss files, because files get moved into the
> > source folder out of order, and I've wished I could turn the state
> > feature off.
> >
> > In my current use-case, I would not be frequently, dynamically
> > changing the source directory, though I can see the use-cases where it
> > would be. In my current use-case, I want to use an external database
> > table to control the configuration of all my flows. I do this by first
> > reading the content of the table for this particular flow ID, then
> > assign the result as attributes to the flowfile, essentially creating
> > variables I can use throughout the flow to control its behavior. This
> > works great with flows that initiate with HTTP or SQL, but not
> > ListSFTP or ListFile.
> >
> > Scott
> >
> >
> > On 03/27/2018 02:05 PM, Andy LoPresto wrote:
> >> I think Bryan’s point is a good one and when I first saw this
> >> question (and thought of the previous times it’s been asked), my
> >> initial response is to propose a second processor.
> >>
> >> Something like “ScanSFTP”/“IndexSFTP”/“SnapshotSFTP” which operates
> >> differently from ListSFTP — it does not maintain state, and performs
> >> a one-time tabulation/chronicling of the state of that directory at
> >> the given point in time.
> >>
> >> The responsibility to maintain and compare state across time is no
> >> longer a requirement. There could even be a setting in the processor
> >> to allow for “individual flowfile output” (i.e. act the same as
> >> ListSFTP and output one flowfile per item listed) or “summary
> >> flowfile output” where a single flowfile is generated containing the
> >> directory listing information for all the items there. (Another
> >> option is to output both on two different relationships).
> >>
> >> I think this would enable the types of workflows that users have
> >> asked about in the past without compromising the mechanism by which
> >> List* processors work and adding undue complexity to those processors.
> >>
> >> Absolutely crystal clear documentation (and a standard verb for the
> >> new processor family) would be necessary (not only because these
> >> processor solve different problems, but to avoid a million variants
> >> of “I used ScanSFTP processor and it’s not tracking state”/“How do I
> >> provide a directory in an attribute to ListSFTP” mailing list
> >> questions).
> >>
> >>
> >> Andy LoPresto
> >> alopre...@apache.org <mailto:alopre...@apache.org>
> >> /alopresto.apa...@gmail.com <mailto:alopresto.apa...@gmail.com>/
> >> PGP Fingerprint: 70EC B3E5 98A6 5A3F D3C4  BACE 3C6E F65B 2F7D EF69
> >>
> >>> On Mar 27, 2018, at 8:33 AM, Andrew Grande <apere...@gmail.com
> >>> <mailto:apere...@gmail.com>> wrote:
> >>>
> >>> The key here is that ListXXX processor maintains state. A directory
> >>> is part
> >>> of such state. Allowing arbitrary directories via an expression would
> >>> create never ending stream of new entries in the state storage,
> >>> effectively
> >>> engineering a distributed DoS attack on the NiFi node or shared ZK
> >>> quorum

Re: [VOTE] Release Apache NiFi 1.6.0 (RC2)

2018-03-28 Thread Sivaprasanna
- Confirmed hashes and signature

Tried to build in a fresh Ubuntu VM that doesn't have NiFi installed or
configured previously. The build fails at the nifi-web-ui module. Going by
how it works for others, it could very well be an isolated problem on my
end; however, I still wanted to get this posted.

Error snippet:

[ERROR] npm ERR! Error: No dist in undefined package
[ERROR] npm ERR! at next
(/opt/tmp/nifi-1.6.0/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-ui/target/frontend-working-directory/node/node_modules/npm/lib/cache.js:746:26)
[ERROR] npm ERR! at
/opt/tmp/nifi-1.6.0/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-ui/target/frontend-working-directory/node/node_modules/npm/lib/cache.js:739:5
[ERROR] npm ERR! at RegClient.get_
(/opt/tmp/nifi-1.6.0/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-ui/target/frontend-working-directory/node/node_modules/npm/node_modules/npm-registry-client/lib/get.js:105:14)

Also attached the error message.

Environment:
Ubuntu 16.04 LTS x64

Thanks,
Sivaprasanna

On Wed, Mar 28, 2018 at 8:31 PM, Kevin Doran <kdo...@apache.org> wrote:

> +1
>
> I ran through the steps in the release helper guide and everything looks
> good.
>
> Thanks for handling RM duties, Joe.
>
> --Kevin
>
> On 3/26/18, 23:34, "Joe Witt" <joew...@apache.org> wrote:
>
> Hello,
>
> I am pleased to be calling this vote for the source release of Apache
> NiFi nifi-1.6.0.
>
> The source zip, including signatures, digests, etc. can be found at:
> https://repository.apache.org/content/repositories/orgapachenifi-1123
>
> The Git tag is nifi-1.6.0-RC2
> The Git commit ID is b5935ec81a7cbc048820781ac62cd96bbea5b232
> https://git-wip-us.apache.org/repos/asf?p=nifi.git;a=commit;h=
> b5935ec81a7cbc048820781ac62cd96bbea5b232
>
> Checksums of nifi-1.6.0-source-release.zip:
> SHA1: 009f1e2e3c17e38f21f27170b9c06228d11653c0
> SHA256: 39941a5b25427e2b4cc5ba8206084ff92df58863f29ddd097d4ac1e85424
> beb9
> SHA512: 1773417a48665e3cda22180ea7f401bc8190ebddbf3f7bc29831e46e7ab0
> a07694c6e478d252fa573209d4a3c8132a522a8507b6a8784669ab7364847a07e234
>
> Release artifacts are signed with the following key:
> https://people.apache.org/keys/committer/joewitt.asc
>
> KEYS file available here:
> https://dist.apache.org/repos/dist/release/nifi/KEYS
>
> 146 issues were closed/resolved for this release:
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?
> projectId=12316020=12342422
>
> Release note highlights can be found here:
> https://cwiki.apache.org/confluence/display/NIFI/
> Release+Notes#ReleaseNotes-Version1.6.0
>
> The vote will be open for 72 hours.
> Please download the release candidate and evaluate the necessary items
> including checking hashes, signatures, build
> from source, and test.  The please vote:
>
> [ ] +1 Release this package as nifi-1.6.0
> [ ] +0 no opinion
> [ ] -1 Do not release this package because...
>
>
>
>
[INFO] Installed node locally.
[INFO] Installing npm version 1.3.8
[INFO] Unpacking 
/root/.m2/repository/com/github/eirslett/npm/1.3.8/npm-1.3.8.tar.gz into 
/opt/tmp/nifi-1.6.0/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-ui/target/frontend-working-directory/node/node_modules
[INFO] Installed npm locally.
[INFO] 
[INFO] --- frontend-maven-plugin:1.1:npm (npm install) @ nifi-web-ui ---
[DEBUG] Configuring mojo com.github.eirslett:frontend-maven-plugin:1.1:npm from 
plugin realm ClassRealm[plugin>com.github.eirslett:frontend-maven-plugin:1.1, 
parent: sun.misc.Launcher$AppClassLoader@33909752]
[DEBUG] Configuring mojo 'com.github.eirslett:frontend-maven-plugin:1.1:npm' 
with basic configurator -->
[DEBUG]   (f) arguments = --cache-min Infinity install
[DEBUG]   (f) installDirectory = 
/opt/tmp/nifi-1.6.0/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-ui/target/frontend-working-directory
[DEBUG]   (f) npmInheritsProxyConfigFromMaven = true
[DEBUG]   (f) project = MavenProject: org.apache.nifi:nifi-web-ui:1.6.0 @ 
/opt/tmp/nifi-1.6.0/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-ui/pom.xml
[DEBUG]   (f) repositorySystemSession = 
org.eclipse.aether.DefaultRepositorySystemSession@3b0d3a63
[DEBUG]   (f) session = org.apache.maven.execution.MavenSession@5568c66f
[DEBUG]   (f) skip = false
[DEBUG]   (f) skipTests = false
[DEBUG]   (f) workingDirectory = 
/opt/tmp/nifi-1.6.0/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-ui/target/frontend-working-directory
[DEBUG]   (f) execution = com.github.eirslett:frontend-maven-plugin:1.1:npm 
{execution: npm install}
[DEBUG] -- end configuration --
[INFO] Running 'npm --cache

Re: Read processor property in init()

2018-03-28 Thread Sivaprasanna
Just to add on top of what Mike said: the @OnScheduled annotation indicates
that the method marked with it will run every time the processor is
started. So basically setup() will be called and executed every time the
processor is started from the UI.

On Wed, 28 Mar 2018 at 6:34 PM, Jeff Zemerick  wrote:

> I will give that a go. Thanks for the quick answer, Mike!
>
> On Wed, Mar 28, 2018 at 9:01 AM, Mike Thomsen 
> wrote:
> > Just do...
> >
> > @OnScheduled
> > public void setup(ProcessContext context) {
> > //Read properties and do setup.
> > }
> >
> > On Wed, Mar 28, 2018 at 8:57 AM, Jeff Zemerick 
> wrote:
> >
> >> Hi everyone,
> >>
> >> Is there a recommended method for making user-configurable property
> >> values available to a processor's init()? I would like to load a large
> >> index file but allow the user to specify the index's path. I am
> >> guessing that init() is executed too early to read user properties.
> >>
> >> Thanks for any suggestions.
> >>
> >> Jeff
> >>
>


Re: [VOTE] Release Apache NiFi 1.6.0 (RC2)

2018-03-28 Thread Sivaprasanna
I don't have my Linux machine on hand right now, so I tried building on
Windows 7 x64, and it fails.

[INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed:
0.132 s - in org.apache.nifi.wali.TestHashMapSnapshot
[INFO] Running org.apache.nifi.wali.TestLengthDelimitedJournal
[INFO] Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed:
0.056 s - in org.apache.nifi.wali.TestLengthDelimitedJournal
[INFO] Running org.apache.nifi.wali.TestSequentialAccessWriteAheadLog
[WARNING] Tests run: 5, Failures: 0, Errors: 0, Skipped: 1, Time elapsed:
0.184 s - in org.apache.nifi.wali.TestSequentialAccessWriteAheadLog
[INFO] Running org.wali.TestMinimalLockingWriteAheadLog
[ERROR] Tests run: 12, Failures: 0, Errors: 1, Skipped: 2, Time elapsed:
12.398 s <<< FAILURE! - in org.wali.TestMinimalLockingWriteAheadLog
[ERROR]
testRecoverFileThatHasTrailingNULBytesAndTruncation(org.wali.TestMinimalLockingWriteAheadLog)
Time elapsed: 0.032 s  <<< ERROR!
java.nio.channels.OverlappingFileLockException
at
org.wali.TestMinimalLockingWriteAheadLog.testRecoverFileThatHasTrailingNULBytesAndTruncation(TestMinimalLockingWriteAheadLog.java:503)

[INFO]
[INFO] Results:
[INFO]
[ERROR] Errors:
[ERROR]
TestMinimalLockingWriteAheadLog.testRecoverFileThatHasTrailingNULBytesAndTruncation:503
» OverlappingFileLock
[INFO]
[ERROR] Tests run: 30, Failures: 0, Errors: 1, Skipped: 3
[INFO]
[INFO]


Is this a known issue with building on Windows? I have attached the
surefire reports. I'll test the same on an Ubuntu setup and update.

-
Sivaprasanna

On Wed, Mar 28, 2018 at 4:21 PM, Marc Parisi <phroc...@apache.org> wrote:

> +1 binding
>
> Built and tested flow on osx and centos
> Worked through release helper's guide
> Validates Sig's and hashes.
>
>
>
> On Wed, Mar 28, 2018, 6:10 AM Pierre Villard <pierre.villard...@gmail.com>
> wrote:
>
> > +1 (binding)
> >
> > - went through the release helper guide
> > - full clean install on OS X
> > - test unsecured and secured cluster setups and communications with
> > Registry and MiNiFi Java
> > - various flows I have are running as expected
> > - confirmed the fix that blocked the previous RC
> >
> > Thanks Joe for taking care of the RM duties and thanks to everyone that
> > contributed in this release.
> >
> > Pierre
> >
> > 2018-03-28 11:46 GMT+02:00 Koji Kawamura <ijokaruma...@gmail.com>:
> >
> > > +1 (binding)
> > >
> > > - Confirmed hashes
> > > - Built with include-atlas profile
> > > - Confirmed various flows with 3 node secured cluster on Ubuntu
> > > - Tested integration with Hadoop environment and NiFi Registry
> > >
> > > Koji
> > >
> > > On Wed, Mar 28, 2018 at 12:27 PM, Andrew Lim <
> andrewlim.apa...@gmail.com
> > >
> > > wrote:
> > > > +1 (non-binding)
> > > >
> > > > -Ran full clean install on OS X (10.11.6)
> > > > -Tested integration with Secure NiFi Registry (1.5.0)
> > > > -Tested fine grained restricted component policies.  Verified two
> > issues
> > > discovered while testing RC1 have been fixed in RC2 [1, 2]
> > > > -Ran basic flows successfully
> > > > -Reviewed documentation
> > > >
> > > > Drew
> > > >
> > > > [1] https://issues.apache.org/jira/browse/NIFI-5008
> > > > [2] https://issues.apache.org/jira/browse/NIFI-5009
> > > >
> > > >
> > > >> On Mar 26, 2018, at 11:34 PM, Joe Witt <joew...@apache.org> wrote:
> > > >>
> > > >> Hello,
> > > >>
> > > >> I am pleased to be calling this vote for the source release of
> Apache
> > > >> NiFi nifi-1.6.0.
> > > >>
> > > >> The source zip, including signatures, digests, etc. can be found at:
> > > >> https://repository.apache.org/content/repositories/
> orgapachenifi-1123
> > > >>
> > > >> The Git tag is nifi-1.6.0-RC2
> > > >> The Git commit ID is b5935ec81a7cbc048820781ac62cd96bbea5b232
> > > >> https://git-wip-us.apache.org/repos/asf?p=nifi.git;a=commit;h=
> > > b5935ec81a7cbc048820781ac62cd96bbea5b232
> > > >>
> > > >> Checksums of nifi-1.6.0-source-release.zip:
> > > >> SHA1: 009f1e2e3c17e38f21f27170b9c06228d11653c0
> > > >> SHA256: 39941a5b25427e2b4cc5ba8206084f
> f92df58863f29ddd097d4ac1e85424
> > > beb9
> > > >> SHA512: 1773417a48665e3cda22180ea7f401
> bc8190ebddbf3f7bc29831e46e7ab0
> > >

Proposal: Specifying range for List and Get processors

2018-03-27 Thread Sivaprasanna
I have been thinking about this lately. Now that NiFi has a wider presence
in the big data stack, I feel it is better to have optional range
properties. Take ListHDFS, ListAzureBlob, or ListS3 for example: over time,
the files accumulated under a particular location keep growing. Having
property descriptors that specify an upper and lower bound for the files to
list, in this case a time range, say 1/31/2016 to 2/26/2016, would be a
nice-to-have feature. Thoughts?
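
[Editor's note: a sketch of what such optional bounds could look like as
property descriptors; the names are hypothetical, not existing properties
on ListHDFS/ListS3.]

import org.apache.nifi.components.PropertyDescriptor;
import org.apache.nifi.processor.util.StandardValidators;

public class ListingRangeProperties {
    // Optional lower bound: only list files modified on or after this time.
    public static final PropertyDescriptor LISTING_START = new PropertyDescriptor.Builder()
            .name("listing-start-time")
            .displayName("Listing Start Time")
            .description("Only list files modified on or after this time. Optional lower bound.")
            .required(false)
            .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
            .build();

    // Optional upper bound: only list files modified before this time.
    public static final PropertyDescriptor LISTING_END = new PropertyDescriptor.Builder()
            .name("listing-end-time")
            .displayName("Listing End Time")
            .description("Only list files modified before this time. Optional upper bound.")
            .required(false)
            .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
            .build();
}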


Re: [VOTE] Release Apache NiFi 1.6.0

2018-03-25 Thread Sivaprasanna
Is it not possible to review/verify the improvements made for the NIFI-4864
issue and then go ahead with the 1.6.0 release process? The new change is
relatively minor, so if possible we can verify that everything is intact
and that it won't introduce new problems. Maybe, if that's the case, we can
still go ahead? Having said that, I don't mean to suggest a rushed
review/release process.

Cheers,
Sivaprasanna

On Sun, 25 Mar 2018 at 7:43 PM, Joey Frazee <joey.fra...@icloud.com> wrote:

> -1
>
> Ran through the usual release helper stuff, but it seems like the
> fingerprint issue is going to cause problems, so not sure how useful
> putting 1.6.0 out there will be if 1.6.1 will have to be turned around
> immediately.
>
> Did you mean to say there's a nifi-1.6.0 -RC tag? It doesn't look like the
> tag got pushed.
>
> -joey
>
> On Mar 24, 2018, 12:38 AM -0500, Pierre Villard <
> pierre.villard...@gmail.com>, wrote:
> > -1 (binding)
> >
> > I confirm the issue mentioned by Bryan. That's actually what Matt and I
> > experienced when trying the PR about the S2S Metrics Reporting task [1].
> I
> > thought it was due to my change but it appears it's not the case.
> >
> > [1] https://github.com/apache/nifi/pull/2575
> >
> > 2018-03-23 22:53 GMT+01:00 Bryan Bende <bbe...@gmail.com>:
> >
> > > After voting I happened to be using the RC to test something else and
> > > came across a bug that I think warrants changing my vote to a -1.
> > >
> > > I created a simple two node cluster and made a standard convert record
> > > flow. When I ran the flow I got a schema not found exception, so I
> > > used the debugger which showed AvroSchemaRegistry had no schemas, even
> > > though there was one in the UI.
> > >
> > > I then used the debugger to make sure the onPropertyModified was
> > > getting when a schema was added, and it was which meant some after
> > > adding the schema but before running the flow, it was being removed.
> > >
> > > As far as I can tell, the issue is related to changes introduced in
> > > NIFI-4864... the intent here was for components with property
> > > descriptors that have "dynamically modifies classpath" to be able to
> > > smartly reload when they are started based on knowing if more
> > > classpath resources were added.
> > >
> > > The issue is that for components that don't have any property
> > > descriptors like this, they have a null fingerprint, and before
> > > starting it compares null to the fingerprint of empty string, and
> > > decides to reload [2].
> > >
> > > I think the fix should be fairly easy to just short-circuit at the
> > > beginning of that method and return immediately if
> > > additionalResourcesFingerprint is null, but will have to do some
> > > testing.
> > >
> > > [1] https://issues.apache.org/jira/browse/NIFI-4864
> > > [2] https://github.com/apache/nifi/blob/master/nifi-nar-
> > > bundles/nifi-framework-bundle/nifi-framework/nifi-framework-
> > > core-api/src/main/java/org/apache/nifi/controller/
> > > AbstractConfiguredComponent.java#L313-L314
> > >
> > >
> > > On Fri, Mar 23, 2018 at 4:20 PM, Matt Gilman <matt.c.gil...@gmail.com
> > > wrote:
> > > > +1 (binding) Release this package as nifi-1.6.0
> > > >
> > > > Executed the release helper and verified new granular restrictions
> with
> > > > regards to flow versioning.
> > > >
> > > > Thanks for RMing Joe!
> > > >
> > > > Matt
> > > >
> > > > On Fri, Mar 23, 2018 at 4:12 PM, Michael Moser <moser...@gmail.com
> > > wrote:
> > > >
> > > > > +1 (binding)
> > > > >
> > > > > Ran through release helper to verify the release and run NiFi on
> Ubuntu
> > > > > 16.04. It worked as expected with no new comments to add.
> > > > >
> > > > > -- Mike
> > > > >
> > > > >
> > > > > On Fri, Mar 23, 2018 at 4:02 PM, Scott Aslan <
> scottyas...@gmail.com
> > > > > wrote:
> > > > >
> > > > > > +1 (binding)
> > > > > >
> > > > > > - Ran through release helper
> > > > > > - Setup secure NiFi and verified a test flow
> > > > > >
> > > > > > On Fri, Mar 23, 2018 at 3:29 PM, Bryan Bende <bbe...@gmail.com
> > > wrote:
> > > > > >

Re: [VOTE] Release Apache NiFi 1.6.0

2018-03-23 Thread Sivaprasanna
Yeah. I tested and found the bug when using the Schema Registry service. I
have made the changes and raised PR #2581
<https://github.com/apache/nifi/pull/2581>

I have tested that both scenarios work as expected:

   1. The original reason the ticket was raised, i.e. reloading newly
   found resources
   2. The schema registry bug

@Bryan and others,

Please review and check it.

Thanks,

Sivaprasanna

On Sat, Mar 24, 2018 at 3:23 AM, Bryan Bende <bbe...@gmail.com> wrote:

> After voting I happened to be using the RC to test something else and
> came across a bug that I think warrants changing my vote to a -1.
>
> I created a simple two node cluster and made a standard convert record
> flow. When I ran the flow I got a schema not found exception, so I
> used the debugger which showed AvroSchemaRegistry had no schemas, even
> though there was one in the UI.
>
> I then used the debugger to make sure the onPropertyModified was
> getting when a schema was added, and it was which meant some after
> adding the schema but before running the flow, it was being removed.
>
> As far as I can tell, the issue is related to changes introduced in
> NIFI-4864... the intent here was for components with property
> descriptors that have "dynamically modifies classpath" to be able to
> smartly reload when they are started based on knowing if more
> classpath resources were added.
>
> The issue is that for components that don't have any property
> descriptors like this, they have a null fingerprint, and before
> starting it compares null to the fingerprint of empty string, and
> decides to reload [2].
>
> I think the fix should be fairly easy to just short-circuit at the
> beginning of that method and return immediately if
> additionalResourcesFingerprint is null, but will have to do some
> testing.
>
> [1] https://issues.apache.org/jira/browse/NIFI-4864
> [2] https://github.com/apache/nifi/blob/master/nifi-nar-
> bundles/nifi-framework-bundle/nifi-framework/nifi-framework-
> core-api/src/main/java/org/apache/nifi/controller/
> AbstractConfiguredComponent.java#L313-L314
>
>
> On Fri, Mar 23, 2018 at 4:20 PM, Matt Gilman <matt.c.gil...@gmail.com>
> wrote:
> > +1 (binding) Release this package as nifi-1.6.0
> >
> > Executed the release helper and verified new granular restrictions with
> > regards to flow versioning.
> >
> > Thanks for RMing Joe!
> >
> > Matt
> >
> > On Fri, Mar 23, 2018 at 4:12 PM, Michael Moser <moser...@gmail.com>
> wrote:
> >
> >> +1 (binding)
> >>
> >> Ran through release helper to verify the release and run NiFi on Ubuntu
> >> 16.04.  It worked as expected with no new comments to add.
> >>
> >> -- Mike
> >>
> >>
> >> On Fri, Mar 23, 2018 at 4:02 PM, Scott Aslan <scottyas...@gmail.com>
> >> wrote:
> >>
> >> > +1 (binding)
> >> >
> >> > - Ran through release helper
> >> > - Setup secure NiFi and verified a test flow
> >> >
> >> > On Fri, Mar 23, 2018 at 3:29 PM, Bryan Bende <bbe...@gmail.com>
> wrote:
> >> >
> >> > > +1 (binding)
> >> > >
> >> > > - Ran through release helper and everything checked out
> >> > > - Verified some test flows with the restricted components + keytab
> CS
> >> > >
> >> > >
> >> > >
> >> > > On Fri, Mar 23, 2018 at 2:42 PM, Mark Payne <marka...@hotmail.com>
> >> > wrote:
> >> > > > +1 (binding)
> >> > > >
> >> > > > Was able to verify hashes, build with contrib-check, and start up
> >> > > application. Performed some basic functionality tests and all
> worked as
> >> > > expected.
> >> > > >
> >> > > > Thanks!
> >> > > > -Mark
> >> > > >
> >> > > >
> >> > > >> On Mar 23, 2018, at 6:02 AM, Takanobu Asanuma <
> >> tasan...@yahoo-corp.jp
> >> > >
> >> > > wrote:
> >> > > >>
> >> > > >> Thanks for all your efforts, Joe.
> >> > > >>
> >> > > >> I have one question. The version of the generated package is
> >> > > nifi-1.7.0-SNAPSHOT. Is this correct at this stage? If it's ok,
> >> > > +1(non-binding).
> >> > > >>
> >> > > >> - Succeeded "mvn -T 2.0C clean install -DskipTests -Prpm"
> >> > > >

Stackoverflow question: Moving data from one RDB to another through NiFi

2018-03-21 Thread Sivaprasanna
I had a chance to attempt a question raised on Stack Overflow regarding
moving data from SQL Server to MySQL using NiFi. The user is using
GenerateTableFetch to read data from SQL Server and then tries to use the
LOAD DATA command in ExecuteSQL, but this involves writing the data read
from SQL Server to the filesystem and then loading it, which is a
performance hit. I suggested the user try PutDatabaseRecord, but I have
never tried that approach myself, and going by the docs I think it won't
show any performance benefit over LOAD DATA: LOAD DATA reads from a file
and inserts at high speed, while PutDatabaseRecord parses the content
according to the configured Record Reader and inserts the rows as a single
batch. Confused, I wanted to get the community's opinion/thoughts on this.
Please take a look at the questions if you have better suggestions.

Links:

   - https://stackoverflow.com/questions/49400447/bulk-load-sql-server-data-into-mysql-apache-nifi?noredirect=1#comment85843021_49400447
   - https://stackoverflow.com/questions/49380307/flowfile-absolute-path-nifi/49398500?noredirect=1#comment85805848_49398500

Thanks,

Sivaprasanna


Re: [RESULT][VOTE] Establish Fluid Design System, a sub-project of Apache NiFi

2018-03-21 Thread Sivaprasanna
Great news. I wish this new initiative the same success that NiFi has
achieved. With a community as good as ours, I have no doubt it will. All
the very best!

-
Sivaprasanna

On Wed, 21 Mar 2018 at 11:25 PM, Joe Witt <joe.w...@gmail.com> wrote:

> I've initiated the request for the nifi-fds.git repository in the ASF
> git-wip.  Should be ready in about an hour.
>
> I *think* it has github mirroring by default but if not we'll file a
> JIRA with asf-infra to enable that.
>
> Thanks
>
> On Mon, Mar 12, 2018 at 4:28 PM, Scott Aslan <scottyas...@gmail.com>
> wrote:
> > All,
> >
> > This vote has ended with:
> >
> > 15 binding +1s
> > 2 non-binding +1s
> > 0 <= 0
> >
> > The vote to establish Fluid Design System as a sub-project of the Apache
> > NiFi TLP
> > has passed.
> >
> > Thank you all for your participation in this important vote. I will
> > submit the necessary information to infra to get the ball rolling
> >
> > Thanks,
> >
> > Scott
>


Re: [ANNOUNCE] New Apache NiFi Committer Mike Thomsen

2018-03-21 Thread Sivaprasanna
Congratulations, Mike :)

On Wed, 21 Mar 2018 at 8:57 PM, Jeff  wrote:

> Congrats Mike!
>
> On Wed, Mar 21, 2018 at 11:20 AM Scott Aslan 
> wrote:
>
> > Congrats!
> >
> > On Wed, Mar 21, 2018 at 10:15 AM, Pierre Villard <
> > pierre.villard...@gmail.com> wrote:
> >
> > > Congrats Mike, well deserved!
> > >
> > > 2018-03-21 14:55 GMT+01:00 Mike Thomsen :
> > >
> > > > Thanks everyone!
> > > >
> > > > On Wed, Mar 21, 2018 at 9:55 AM, Joe Witt 
> wrote:
> > > >
> > > > > Mike
> > > > >
> > > > > Thanks for all the great contributions and reviews and discussions
> > and
> > > > > congratulations on the well deserved commit bit!
> > > > >
> > > > > Thanks
> > > > > Joe
> > > > >
> > > > > On Wed, Mar 21, 2018 at 1:53 PM, Kevin Doran 
> > > wrote:
> > > > > > Congrats, Mike!
> > > > > >
> > > > > > On 3/21/18, 09:42, "Tony Kurc"  wrote:
> > > > > >
> > > > > > On behalf of the Apache NiFI PMC, I am very pleased to
> announce
> > > > that
> > > > > Mike
> > > > > > has accepted the PMC's invitation to become a committer on
> the
> > > > > Apache NiFi
> > > > > > project. We greatly appreciate all of Mike's hard work and
> > > generous
> > > > > > contributions to the project. We look forward to his
> continued
> > > > > involvement
> > > > > > in the project.
> > > > > >
> > > > > > Mike has been contributing to the project for quite a while,
> > > > > contributing
> > > > > > features such as  enhancements to MongoDB and HBase
> processors
> > as
> > > > > well as
> > > > > > things like improvements to Expression Language and
> > > LookupService.
> > > > > I'm sure
> > > > > > many of you have interacted with Mike on the mailing lists
> > where
> > > he
> > > > > has
> > > > > > been giving great input on both the project and community. We
> > > also
> > > > > > appreciate his work reviewing and providing feedback on new
> > > > > contributions.
> > > > > >
> > > > > > Welcome and congratulations!
> > > > > > Tony
> > > > > >
> > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>


Re: onPropertyModified doesn't seem to work

2018-03-20 Thread Sivaprasanna
Thank you, Mark. That helped.

On Mon, Mar 19, 2018 at 10:44 PM, Mark Payne <marka...@hotmail.com> wrote:

> Sivaprasanna,
>
> What package is your processor class in? By default, the logging for the
> org.apache.nifi.processors package is set to WARN. I am wondering if you
> are simply not seeing the logging because of your logback configuration.
> I would recommend you instead add a breakpoint and attach to your running
> NiFi instance.
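>
> If you do want to see the log output instead, you could raise the level
> for that package in conf/logback.xml, for example (assuming your processor
> stays under the default org.apache.nifi.processors package):
>
> <logger name="org.apache.nifi.processors" level="INFO"/>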
>
> Thanks
> -Mark
>
>
>
> > On Mar 19, 2018, at 1:03 PM, Sivaprasanna <sivaprasanna...@gmail.com>
> wrote:
> >
> > Hi everyone,
> >
> > I'm facing problem getting onPropertyModified() working. Just to test it
> > and troubleshoot further, I created a custom processor, the usual
> > MyProcessor sample processor. I then added the following:
> >
> > @Override
> > public void onPropertyModified(PropertyDescriptor descriptor,
> >         final String oldValue, final String newValue) {
> >     LOG.info("Property modified for processor " + this.getIdentifier());
> >     super.onPropertyModified(descriptor, oldValue, newValue);
> > }
> >
> > And modified onTrigger to generate a string and write it as an output
> > flowfile. The processor seems to work. However, when I modify the value
> for
> > the property and restart it, I expect the above log statement to be
> > generated, but it is not. Any help is appreciated. Thanks.
> >
> > -
> > Sivaprasanna
>
>


Re: FlattenJson

2018-03-20 Thread Sivaprasanna
I like the idea that Otto suggested. RouteOnJSONPath makes more sense,
since writing the flattened JSON to attributes would then be restricted to
that processor alone.

On Tue, Mar 20, 2018 at 8:37 PM, Otto Fowler 
wrote:

> Why not create a new processor that does routeOnJSONPath and works on the
> flow file?
>
>
> On March 20, 2018 at 10:39:37, Jorge Machado (jom...@me.com) wrote:
>
> So that is what we are actually doing with EvaluateJsonPath. The problem
> with that is that it is hard to build something generic if we need to
> specify each property by its name; that's why this idea.
>
> Should I make a PR for this, or is this too business-specific?
>
>
> Jorge Machado
>
> > On 20 Mar 2018, at 15:30, Bryan Bende  wrote:
> >
> > Ok so I guess it depends whether you end up needing all 30 fields as
> > attributes to achieve the logic in your flow, or if you only need a
> > couple.
> >
> > If you only need a couple you could probably use EvaluateJsonPath
> > after FlattenJson to extract just the couple of fields you need into
> > attributes.
> >
> > If you need them all then I guess it makes sense to want the option to
> > flatten into attributes.
> >
> > On Tue, Mar 20, 2018 at 10:14 AM, Jorge Machado  wrote:
> >> From there on we use a lot of RouteOnAttribute and use those values in
> >> SQL queries to other tables, like select * from someTable where
> >> id=${myExtractedAttribute}.
> >> To be honest I tried JoltTransformJSON but I could not get it working :)
> >>
> >> Jorge Machado
> >>
> >>
> >>
> >>
> >>
> >>> On 20 Mar 2018, at 15:12, Matt Burgess  wrote:
> >>>
> >>> I think Bryan is asking about what happens AFTER this part of the
> >>> flow. For example, if you are doing routing you can use QueryRecord
> >>> (and you won't need the SplitJson), if you are doing transformations
> >>> you can use JoltTransformJSON (often without SplitJson as well), etc.
> >>>
> >>> Regards,
> >>> Matt
> >>>
> >>> On Tue, Mar 20, 2018 at 10:08 AM, Jorge Machado  wrote:
>  Hi Bryan,
> 
>  thanks for the help.
>  Our Flow: ExecuteSql -> convertToJSON -> SplitJson -> ExecuteScript
> with attachedcode 1.
> 
>  We are now writing a custom processor that does this, which is a copy
> of FlattenJson, but instead of putting the result into a flowfile we put
> it into the attributes.
>  That’s why I asked if it makes sense to contribute this back.
> 
> 
> 
>  Attached code 1:
> 
>  import org.apache.commons.io.IOUtils
>  import org.apache.nifi.processor.io.InputStreamCallback
>  import java.nio.charset.*
> 
>  def flowFile = session.get();
>  if (flowFile == null) {
>      return;
>  }
>  def slurper = new groovy.json.JsonSlurper()
>  def attrs = [:] as Map
>  // read the flowfile content and copy every non-empty top-level
>  // JSON field into the attribute map
>  session.read(flowFile,
>      { inputStream ->
>          def text = IOUtils.toString(inputStream, StandardCharsets.UTF_8)
>          def obj = slurper.parseText(text)
>          obj.each { k, v ->
>              if (v != null && v.toString() != "") {
>                  attrs[k] = v.toString()
>              }
>          }
>      } as InputStreamCallback)
>  flowFile = session.putAllAttributes(flowFile, attrs)
>  session.transfer(flowFile, REL_SUCCESS)
> 
>  some code removed
> 
> 
>  Jorge Machado
> 
> 
> 
> 
> 
> > On 20 Mar 2018, at 15:03, Bryan Bende  wrote:
> >
> > Ok it is still not clear what the reason for needing it in attributes
> > is though... Is there another processor you are using after this that
> > only works off attributes?
> >
> > Just trying to understand if there is another way to accomplish what
> > you want to do.
> >
> > On Tue, Mar 20, 2018 at 9:50 AM, Jorge Machado 
> wrote:
> >> We are using NiFi for workflow and we get columns from a database, like
> >> job_status and job_name, plus some nested JSON columns (30 columns in
> >> total). We need to put them into the flowfile's attributes, not the
> >> content. The first part (columns without JSON) is done by a Groovy
> >> script, but it would be nice to use this standard processor and, instead
> >> of writing the result to the flow content, write it to the attributes.
> >>
> >>
> >> Jorge Machado
> >>
> >>
> >>
> >>
> >>
> >>> On 20 Mar 2018, at 14:47, Bryan Bende  wrote:
> >>>
> >>> What would be the main use case for wanting all the flattened
> values
> >>> in attributes?
> >>>
> >>> If the reason was to keep the original content, we could probably
> just
> >>> added an original relationship.
> >>>
> >>> Also, I think FlattenJson supports flattening a flow file where the
> >>> root is an array of JSON documents (although I'm not totally sure),
> so
> >>> you'd have to consider what to do in that case.
> >>>
> >>> On Tue, Mar 20, 2018 at 5:26 AM, Pierre Villard
> >>>  wrote:
>  No I do see how this could be convenient in 

Re: Not able to add "Max Rows Per Flow File" property in SelectHiveQL

2018-03-20 Thread Sivaprasanna
If you download NiFi 1.5.0, you can copy 'nifi-hive-nar-1.5.0.nar' and
'nifi-hive-services-api-nar-1.5.0.nar' from its lib directory and paste
them into your NiFi 1.3.0 lib. There is no need to remove the older 1.3.0
Hive nars; NiFi supports having multiple versions of a bundle side by
side. Anyone please correct me if I'm wrong.

On Tue, Mar 20, 2018 at 1:14 PM, Rohit  wrote:

> Thanks..
>
> Is it possible to change only the source code in 1.3.0 and add the 1.5.0 code?
>
>
>
>
> --
> Sent from: http://apache-nifi-developer-list.39713.n7.nabble.com/
>


onPropertyModified doesn't seem to work

2018-03-19 Thread Sivaprasanna
Hi everyone,

I'm facing a problem getting onPropertyModified() to work. Just to test it
and troubleshoot further, I created a custom processor, the usual
MyProcessor sample processor. I then added the following:

@Override
public void onPropertyModified(PropertyDescriptor descriptor,
        final String oldValue, final String newValue) {
    LOG.info("Property modified for processor " + this.getIdentifier());
    super.onPropertyModified(descriptor, oldValue, newValue);
}

And modified onTrigger to generate a string and write it as an output
flowfile. The processor seems to work. However, when I modify the value for
the property and restart it, I expect the above log statement to be
generated, but it is not. Any help is appreciated. Thanks.

-
Sivaprasanna


Re: Not able to add "Max Rows Per Flow File" property in SelectHiveQL

2018-03-19 Thread Sivaprasanna
Max Rows Per Flow File is a property that was introduced fairly recently;
I think it was in NiFi 1.4.0/1.5.0. Since yours is NiFi 1.3.0, the
framework doesn't know about it. You can try downloading and using the
NiFi 1.5.0 version of the Hive bundle.

On Mon, 19 Mar 2018 at 5:27 PM, Rohit  wrote:

> Hi,
>
>   I am using SelectHiveQL 1.3.0 processor to read data from hive table and
> i
> am trying to add "Max Rows Per Flow File" property in  SelectHiveQL but its
> giving error "Max Rows Per Flow File" is not a supported property. As per
> Nifi notes this is valid property.
>
> SelectHiveQL.png
> <
> http://apache-nifi-developer-list.39713.n7.nabble.com/file/t927/SelectHiveQL.png
> >
> SelectHiveQL1.PNG
> <
> http://apache-nifi-developer-list.39713.n7.nabble.com/file/t927/SelectHiveQL1.PNG
> >
>
> Thanks,
> Rohit
>
>
>
> --
> Sent from: http://apache-nifi-developer-list.39713.n7.nabble.com/
>


Re: PutMongo 1.3.0 - Upsert issue

2018-03-16 Thread Sivaprasanna
It need not be _id. If _id is generated automatically, it will be tough to
keep track of. The update key can be any valid MongoDB filter query, if
I'm right. So the query should be:
{
    <update key field(s)>,
    "$set" : {
        "FieldA" : "New FieldA value"
    }
}


On Fri, 16 Mar 2018 at 5:07 PM, Mike Thomsen  wrote:

> I noticed that your update document didn't have an update key in it. The
> default lookup key is _id so you'd need something like this:
> {
> "_id": "ID_HERE",
> "$set": {
> "new_field": 1
> }
> }
>
> On Wed, Jan 10, 2018 at 6:05 AM, fabe_bdx  wrote:
>
> > Dear Pierre,
> >
> > First of all, many thanks for your help. We were able to use the copy you
> > provided us and restart NiFi with the 1.5.0 module.
> >
> > But it appears that it doesn't really work as expected.
> >
> > If we try to update some attributes with the $set command, it doesn't work.
> >
> > *Original MongoDb record :*
> >
> > {
> > "_id" : ObjectId("5a535bcea33116f026ef65f0"),
> > "type" : "Cc",
> > "nmess" : 3172,
> > "idgtr" : "C-0--00061--0-CMF-RMF",
> > "zone" : 1.0,
> > "idzone" : "NULL",
> > "cmf" : 24,
> > "rmf" : 24,
> > "carrefour" : 61,
> > "idcarrefour" : 2,
> > "last_update" : "2018-01-09T13:49:58",
> > "libelle" : " Henrique#Cebolas",
> > "geojson" :
> > "{\"features\":[{\"geometry\":{\"coordinates\":[\"-9.
> > 132289080874671\",\"38.708011\"],\"type\":\"
> > Point\"},\"type\":\"Feature\"}],\"type\":\"FeatureCollection\"}",
> > "positionOk" : true,
> > "orientation" : 0
> > }
> >
> > *Update request throw putmongo processor :*
> >
> > {
> > "$set": {
> > "type":"Cc",
> > "nmess":14488,
> > "idgtr":"C-0--00088--0-CMF-RMF",
> > "last_update":"2018-01-10T09:32:35",
> > "libelle":"Bonifacio_Estefan. "
> > }
> > }
> >
> >
> >
> >
> > --
> > Sent from: http://apache-nifi-developer-list.39713.n7.nabble.com/
> >
>


Re: [VOTE] Establish Fluid Design System, a sub-project of Apache NiFi

2018-03-10 Thread Sivaprasanna
+1

On Sat, Mar 10, 2018 at 4:23 AM, Aldrin Piri  wrote:

> +1
> On Fri, Mar 9, 2018 at 17:39 Yolanda Davis 
> wrote:
>
> > +1
> >
> > On Fri, Mar 9, 2018 at 3:22 PM, Pierre Villard <
> > pierre.villard...@gmail.com>
> > wrote:
> >
> > > +1
> > >
> > > Le ven. 9 mars 2018 à 21:16, Bryan Bende  a écrit :
> > >
> > > > +1
> > > >
> > > > On Fri, Mar 9, 2018 at 3:11 PM, Joe Witt  wrote:
> > > > > +1
> > > > >
> > > > > On Mar 9, 2018 3:10 PM, "Scott Aslan" 
> wrote:
> > > > >
> > > > > All,
> > > > >
> > > > > Following a solid discussion for the past couple of weeks [1]
> > regarding
> > > > the
> > > > > establishment of Fluid Design System as a sub-project of Apache
> NiFi,
> > > I'd
> > > > > like to
> > > > > call a formal vote to record this important community decision and
> > > > > establish consensus.
> > > > >
> > > > > The scope of this project is to define a theme-able set of high
> > quality
> > > > UI
> > > > > components and utilities for use across the various Apache NiFi web
> > > > > applications in order to provide a more consistent user experience.
> > > > >
> > > > > I am a +1 and looking forward to the future work in this area.
> > > > >
> > > > > The vote will be open for 72 hours and be a majority rule vote.
> > > > >
> > > > > [ ] +1 Establish Fluid Design System, a subproject of Apache NiFi
> > > > > [ ]   0 Do not care
> > > > > [ ]  -1 Do not establish Fluid Design System, a subproject of
> Apache
> > > NiFi
> > > > >
> > > > > Thanks,
> > > > >
> > > > > ScottyA
> > > > >
> > > > > [1] *http://mail-archives.apache.org/mod_mbox/nifi-dev/201802.
> mbox/%
> > > > > 3CCAKeSr4ibXX9xzGN1GhdVv5uTmWvfB3QULXF9orzw4FYD0n7taQ%
> > 40mail.gmail.com
> > > > %3E
> > > > >  > > > > 3CCAKeSr4ibXX9xzGN1GhdVv5uTmWvfB3QULXF9orzw4FYD0n7taQ%
> > 40mail.gmail.com
> > > > %3E>*
> > > >
> > >
> >
> >
> >
> > --
> > --
> > yolanda.m.da...@gmail.com
> > @YolandaMDavis
> >
>


Re: Custom controller service

2018-03-07 Thread Sivaprasanna
Hey Varun,

It would be helpful if you could share the error stacktrace. By the way,
you can use DBCP Connection Pool for connecting to databases.

On Thu, 8 Mar 2018 at 12:14 PM, varun yadav 
wrote:

> Hii,
>
> I am trying to create a custom controller service for connecting to a local
> database.
> I am running into many issues with it. If you can point me to a few documents
> or videos, it would be very helpful.
>
> Thanks Varun
>


Re: [DISCUSS] Proposal for an Apache NiFi sub-project - NiFi Fluid Design System

2018-03-01 Thread Sivaprasanna
A couple of names I came up with:

   1. chalk
   2. depth
   3. dew
   4. cyan
   5. mint


On Thu, Mar 1, 2018 at 8:37 PM, Scott Aslan <scottyas...@gmail.com> wrote:

> Hello MikeM/everyone,
>
> One of the questions brought up in this discussion was centered around what
> a design system is. I stumbled upon a really great article this morning on
> this exact topic. It also links to several other really great articles and
> will help us all to understand.
> https://uxplanet.org/design-systems-in-2016-5415a660b29
>
> Also, I was thinking about the name. What does everyone think about
> naming this sub-project NiFi Design System?
>
> -Scotty
>
> On Fri, Feb 23, 2018 at 10:20 AM, Scott Aslan <scottyas...@gmail.com>
> wrote:
>
> > TonyK,
> >
> > The intent is to use this NgModule in NiFi Registry and eventually
> > NiFi. However, this would be released under ASLv2 so yes other projects
> > could use it.
> >
> > On Thu, Feb 22, 2018 at 10:46 PM, Tony Kurc <tk...@apache.org> wrote:
> >
> >> Is some of the thinking that projects other than nifi projects would use
> >> this?
> >>
> >> On Feb 22, 2018 10:00 PM, "Scott Aslan" <scottyas...@gmail.com> wrote:
> >>
> >> > Sivaprasanna,
> >> >
> >> > I am not sure I follow exactly what you are saying...
> >> >
> >> > NiFi Registry would no longer continue to host a copy of the FDS
> >> NgModule.
> >> > Instead, NiFi Registry would just add the NiFi FDS sub-project as a
> >> client
> >> > side dependency in its package.json. This would be analogous to how
> NiFi
> >> > Registry depends on Angular Material, etc. npm supports the ability to
> >> > download published packages which are current with the latest stable
> >> > release of a package. npm *also* supports the ability to develop off
> >> > of the *master
> >> > branch (or any other branch really)* of the NiFi FDS. An example of
> this
> >> > can be found in the github.io demo here
> >> > <https://github.com/scottyaslan/fluid-design-system/blob/gh-
> >> pages/package.
> >> > json#L45>
> >> > . By placing that dependency in the package.json for the NiFi Registry
> >> each
> >> > subsequent clean build would automatically download the latest master
> >> > branch of the NiFi FDS sub-project and developers can leverage the
> >> latest
> >> > NiFi FDS components.
> >> >
> >> > This also brings up a good point about release management. I have also
> >> > included a prototype of one possible implementation of automating the
> >> > tagging of a branch and automatically updating release version numbers
> >> etc.
> >> > leveraging grunt-bump [1]. The FDS-0.0.1-RC.0 tag [2] was created with
> >> the
> >> > described grunt task.
> >> >
> >> > [1]
> >> > https://github.com/scottyaslan/fluid-design-system/blob/
> >> master/Gruntfile.
> >> > js#L47
> >> >
> >> > [2] https://github.com/scottyaslan/fluid-design-system/tree/FDS-
> >> 0.0.1-RC.0
> >> >
> >> > On Thu, Feb 22, 2018 at 12:39 PM, Sivaprasanna <
> >> sivaprasanna...@gmail.com>
> >> > wrote:
> >> >
> >> > > I agree with Matt. With clear documentation and guides,
> contributions
> >> on
> >> > > the sub-projects can be streamlined and be ensured that the
> necessary
> >> > > changes are already available on the core project i.e NiFi. One
> >> challenge
> >> > > is that the committer of the sub-project should have the courtesy to
> >> > check
> >> > > whether the supporting changes are made available to the core project
> >> and
> >> > > track its status but given how contributions are being handled in
> >> > > nifi-registry project, I don’t think it’s going to be that much of a
> >> > > headache.
> >> > >
> >> > > We could also add to the helper doc mentioning that if the
> >> contribution
> >> > is
> >> > > going to affect a core component, the contributor needs to add the
> >> JIRA
> >> > id
> >> > > of the core project’s supporting changes in the sub-projects’ issue
> >> > > description.
> >> > >
> >> > > On Thu, 22 Feb 2018 at 10:42 PM, Matt Gilman <
> matt.

Re: Re: A question to copy files to other folders

2018-02-28 Thread Sivaprasanna
Yep. And that’s what Matt suggested.

On Thu, 1 Mar 2018 at 8:16 AM,  wrote:

> I found that I can set the 'Directory' property of the 'PutFile' processor
> to B/${path} and C/${path} to solve this issue, so I think the patch is
> not needed.
>
>
>
>
> From:
> l...@china-inv.cn
> To:
>
> Cc:
> dev@nifi.apache.org
> Date:
> 2018/03/01 09:17
> Subject:
> Re: Re: A question to copy files to other folders
>
>
>
> I haven't tried it yet.
>
> I just found that PutFile doesn't meet my requirements.
>
> I'll create a patch for PutFile.
>
>
>
> From:
> Matt Burgess 
> To:
> dev@nifi.apache.org
> Date:
> 2018/02/28 21:10
> Subject:
> Re: A question to copy files to other folders
>
>
>
> Looking at the code, it appears that PutFile should support the
> creation of arbitrary directories if the Create Missing Directories
> property is set to true. With that and setting the Directory property
> to ${path}, I would think that would create the subdirectories
> properly. If not, what error are you getting?
>
> Regards,
> Matt
>
> On Wed, Feb 28, 2018 at 7:12 AM, Mike Thomsen 
> wrote:
> > That seems like a pretty easy thing to fix with PutFile. Could be done
> with
> > a patch to add an attribute that provides a relative path.
> >
> > On Wed, Feb 28, 2018 at 7:00 AM,  wrote:
> >
> >> Hi, team,
> >>
> >> I'm writing a data flow template to copy files from directory A to
> other
> >> two directories B and C
> >>
> >> There are sub directories under A and I need to copy all files under
> those
> >> sub directories to the same
> >> place under B and C e.g. copy a file A/foo/a.txt to B/foo/a.txt and
> >> C/foo/a.txt
> >>
> >> I tried processors 'GetFile' and 'PutFile', but PutFile doesn't support
> >> creating sub directories under B and C
> >> (i.e. B/foo and C/foo in above example).
> >>
> >> The 'GetFile' processor saves the relative path 'foo' in the 'PATH'
> >> attribute of the flowfile. But 'PutFile' doesn't use it
> >> to create the directories.
> >>
> >> I don't want to create those folders manually and create data flow by
> >> using 'GetFile' --> 'PutFile' for each folder because there
> >> will be too many data flows.
> >>
> >>
> >> Is there any processor that supports creating folders and copying files?
> >>
> >> Thanks
> >>
> >> Boying
> >>
> >>
> >>
> >>
> >>
> >>
> >> This email message may contain confidential and/or privileged
> information.
> >> If you are not the intended recipient, please do not read, save,
> forward,
> >> disclose or copy the contents of this email or open any file attached
> to
> >> this email. We will be grateful if you could advise the sender
> immediately
> >> by replying this email, and delete this email and any attachment or
> links
> >> to this email completely and immediately from your computer system.
> >>
> >>
> >>
> >>
>
>
>
>
>
>
>
>
> This email message may contain confidential and/or privileged information.
>
> If you are not the intended recipient, please do not read, save, forward,
> disclose or copy the contents of this email or open any file attached to
> this email. We will be grateful if you could advise the sender immediately
>
> by replying this email, and delete this email and any attachment or links
> to this email completely and immediately from your computer system.
>
>
>
>
>
>
>
>
>
>
>
> This email message may contain confidential and/or privileged information.
> If you are not the intended recipient, please do not read, save, forward,
> disclose or copy the contents of this email or open any file attached to
> this email. We will be grateful if you could advise the sender immediately
> by replying this email, and delete this email and any attachment or links
> to this email completely and immediately from your computer system.
>
>
>
>


Re: Implementation of ListFile's Primary Node only in a cluster

2018-02-24 Thread Sivaprasanna
I think it was a cache issue; it works as intended. It looks like removing
the executionNode === PRIMARY check from nf-processor-details.js
<https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-ui/src/main/webapp/js/nf/nf-processor-details.js#L220>
and nf-processor-configuration.js
<https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-ui/src/main/webapp/js/nf/canvas/nf-processor-configuration.js#L745>
alone is enough. However, I want to confirm here with the community
whether it is okay to remove that check.

On Fri, Feb 23, 2018 at 10:28 PM, Sivaprasanna <sivaprasanna...@gmail.com>
wrote:

> I have started working on an annotation implementation wherein the
> developer can use that annotation to indicate that the processor is supposed
> to run only on the 'Primary node'. The framework side of things works just
> fine. However, on the UI side there are a couple of questions and issues:
>
>1. nf-processor-details.js
> <https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-ui/src/main/webapp/js/nf/nf-processor-details.js#L220>
> and nf-processor-configuration.js
> <https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-ui/src/main/webapp/js/nf/canvas/nf-processor-configuration.js#L745>
> check whether 'isClustered' or 'executionNode === PRIMARY', which
> confuses me. Checking 'nfClusterSummary.isClustered()' alone is enough,
> right? The reason is, since we are also checking 'executionNode ===
> Primary', even for single instance NiFi i.e. non clustered setup, the
> 'execution-node-options' will be rendered for processors marked with this
> annotation.
>2. In order to avoid this, I made a change to the code and removed the
> 'executionNode === PRIMARY' condition check in the mentioned files. Even
> after that, 'execution-node-options' is being rendered. Am I missing
> something?
>
> I have pushed these changes to my remote repo. Here is the link:
> https://github.com/zenfenan/nifi/commit/e09e85960fb394eeef89d9cb6aa7ac
> dfc5d4dad3
>
> BTW, right now I have implemented it this way: if the annotation is
> present, at the time of processor creation/instantiation, the executionNode
> will be set to 'PRIMARY'. However, this can be changed later by configuring
> the processor from the UI. Should we think about disabling the 'Execution
> Node' configuration altogether (from the UI) for a processor marked with this
> annotation? That makes more sense to me, but it kind of seems to restrict
> the users' liberty to choose as they wish.
>
>
> On Sun, Feb 11, 2018 at 12:59 AM, Bryan Bende <bbe...@gmail.com> wrote:
>
>> Currently it means that the dataflow manager/developer is expected to
>> set the 'Execution Nodes' strategy to "Primary Node" at the time of
>> flow design.
>>
>> We don't have anything that restricts the scheduling strategy of a
>> processor, but we probably should consider having an annotation like
>> @PrimaryNodeOnly that you can put on a processor and then the
>> framework will enforce that it can only be scheduled on primary node.
>>
>> In the case of ListFile, I think the statement in the documentation is
>> only partially true...
>>
>> When "Input Directory Location" is set to local, there should be no
>> issue with scheduling the processor on all nodes in the cluster, as it
>> would be listing a local directory and storing state locally.
>>
>> When "Input Directory Location" is set to remote, it wouldn't make
>> sense to have all nodes listing the same remote directory and getting
>> the same results, and also the state is then stored in ZooKeeper under
>> a ZNode using the processor's UUID, and the processor has the same
>> UUID on each node so they would be overwriting each other's state in
>> ZK.
>>
>> So ListFile probably can't be restricted to primary node only, where
>> as something like ListHDFS probably could because it is always listing
>> a remote destination.
>>
>>
>> On Fri, Feb 9, 2018 at 10:55 PM, Sivaprasanna <sivaprasanna...@gmail.com>
>> wrote:
>> > I was going through ListFile processor's code and found out that in the
>> > documentation
>> > <https://github.com/apache/nifi/blob/master/nifi-nar-bundles
>> /nifi-standard-bundle/nifi-standard-processors/src/main/
>> java/org/apache/nifi/processors/standard/ListFile.java#L72-L76>,
>> > it is mentioned that "this processor is designed to run on Primary Node
>> > on

Re: Advice on ListS3 Processor

2018-02-23 Thread Sivaprasanna
Jim,

${filename} is NiFi Expression Language that is used to get the name of
the flowfile that is in the flow. For example, assume this simple pipeline:
GetFile -> UpdateAttribute -> PutFile. GetFile will read the files from the
input directory and each flowfile will have an attribute 'filename' which
is the actual name of the file. You can use ${filename} in the downstream
processors, in this case UpdateAttribute and/or PutFile, and manipulate it
however you want. It can't be used in ListS3 since that is a source
processor and it's not aware of what ${filename} contains, so the
expression won't resolve the way you expect and ListS3 will end up looking
for a literal prefix instead of matching the .txt files.

Having said that, for your use case, what you can do is: ListS3 (provide
the source *directory* alone in the configuration) -> RouteOnAttribute
(create a property with a name like 'txtFilesOnly' and its value as
${filename:endsWith('txt')}) -> any downstream processor you want, but
choose the 'txtFilesOnly' relationship while connecting to the downstream
processor.

How this works is: ListS3 just lists the files present in the S3 bucket's
provided path, one flowfile per file. These flowfiles will have an
attribute on them called 'filename', and you are using that attribute in
the RouteOnAttribute processor, saying that only the flowfiles whose
filename ends with .txt should be routed to the downstream processors.
BTW, ListS3 doesn't fetch the content along with the listing, so you
probably want to use FetchS3Object after RouteOnAttribute to actually read
the needed files, i.e. the .txt files, from the S3 bucket.

On Sat, 24 Feb 2018 at 4:45 AM, Jim Murphy  wrote:

> Hey all,
>
> I'm a bit of a newbie to NiFi, but I am hoping someone might have
> some advice for me here. I am using the ListS3 processor and the prefix
> field just doesn't seem to like anything but static filenames. I cannot
> reference directories or use the nifi expression language, where the docs
> say that I should be able to.
>
> I'd ideally like to reference files like this:
>
> /folder1/folder1a/${filename:endsWith('txt')}
>
> to find all files with a .txt extension 2 levels down in that specific
> path.
>
> Any advice?
>
> I totally know it's something stupid I am doing (or not doing). But
> research shows scant examples for this processor out there.
>
> Any help is appreciated greatly.
>
> Thanks,
>
> Jim
>


Re: Implementation of ListFile's Primary Node only in a cluster

2018-02-23 Thread Sivaprasanna
I have started working on an annotation implementation wherein the
developer can use that annotation to indicate that the processor is supposed
to run only on the 'Primary node'. The framework side of things works just
fine. However, on the UI side there are a couple of questions and issues:

   1. nf-processor-details.js
<https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-ui/src/main/webapp/js/nf/nf-processor-details.js#L220>
and nf-processor-configuration.js
<https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-ui/src/main/webapp/js/nf/canvas/nf-processor-configuration.js#L745>
check whether 'isClustered' or 'executionNode === PRIMARY', which
confuses me. Checking 'nfClusterSummary.isClustered()' alone is enough,
right? The reason is, since we are also checking 'executionNode ===
Primary', even for single instance NiFi i.e. non clustered setup, the
'execution-node-options' will be rendered for processors marked with this
annotation.
   2. In order to avoid this, I made a change to the code and removed the
'executionNode === PRIMARY' condition check in the mentioned files. Even
after that, 'execution-node-options' is being rendered. Am I missing
something?

I have pushed these changes to my remote repo. Here is the link:
https://github.com/zenfenan/nifi/commit/e09e85960fb394eeef89d9cb6aa7acdfc5d4dad3

BTW, right now I have implemented it this way: if the annotation is
present, at the time of processor creation/instantiation, the executionNode
will be set to 'PRIMARY'. However, this can be changed later by configuring
the processor from the UI. Should we think about disabling the 'Execution
Node' configuration altogether (from the UI) for a processor marked with this
annotation? That makes more sense to me, but it kind of seems to restrict
the users' liberty to choose as they wish.
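
For reference, here is a minimal sketch of what such a marker annotation
could look like (the name and details are my working assumption, not a
settled API):

import java.lang.annotation.Documented;
import java.lang.annotation.ElementType;
import java.lang.annotation.Inherited;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

/**
 * Hypothetical marker annotation: the framework would set the execution
 * node of any processor type carrying it to PRIMARY at instantiation time.
 */
@Documented
@Target(ElementType.TYPE)
@Retention(RetentionPolicy.RUNTIME)
@Inherited
public @interface PrimaryNodeOnly {
}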


On Sun, Feb 11, 2018 at 12:59 AM, Bryan Bende <bbe...@gmail.com> wrote:

> Currently it means that the dataflow manager/developer is expected to
> set the 'Execution Nodes' strategy to "Primary Node" at the time of
> flow design.
>
> We don't have anything that restricts the scheduling strategy of a
> processor, but we probably should consider having an annotation like
> @PrimaryNodeOnly that you can put on a processor and then the
> framework will enforce that it can only be scheduled on primary node.
>
> In the case of ListFile, I think the statement in the documentation is
> only partially true...
>
> When "Input Directory Location" is set to local, there should be no
> issue with scheduling the processor on all nodes in the cluster, as it
> would be listing a local directory and storing state locally.
>
> When "Input Directory Location" is set to remote, it wouldn't make
> sense to have all nodes listing the same remote directory and getting
> the same results, and also the state is then stored in ZooKeeper under
> a ZNode using the processor's UUID, and the processor has the same
> UUID on each node so they would be overwriting each other's state in
> ZK.
>
> So ListFile probably can't be restricted to primary node only, where
> as something like ListHDFS probably could because it is always listing
> a remote destination.
>
>
> On Fri, Feb 9, 2018 at 10:55 PM, Sivaprasanna <sivaprasanna...@gmail.com>
> wrote:
> > I was going through ListFile processor's code and found out that in the
> > documentation
> > <https://github.com/apache/nifi/blob/master/nifi-nar-
> bundles/nifi-standard-bundle/nifi-standard-processors/src/
> main/java/org/apache/nifi/processors/standard/ListFile.java#L72-L76>,
> > it is mentioned that "this processor is designed to run on Primary Node
> > only in a cluster". I want to understand what "designed" stands for here.
> > Does that mean the processor was built in a way that it only runs on the
> > Primary node regardless of the "Execution Nodes" strategy set to
> otherwise
> > or does it mean that dataflow manager/developer is expected to set the
> > 'Execution Nodes' strategy to "Primary Node" at the time of flow design?
> If
> > it is of the former case, how is it handled in the code? If it is
> handled,
> > it should be in the framework side but I don't see any annotation
> > indicating anything related to such mechanism in the processor code and
> > more over a related JIRA NIFI-543
> > <https://issues.apache.org/jira/browse/NIFI-543> is also open so I want
> > clear my doubt.
> >
> > -
> > Sivaprasanna
>


Re: [DISCUSS] Proposal for an Apache NiFi sub-project - NiFi Fluid Design System

2018-02-22 Thread Sivaprasanna
Scott,

I understand the vision. I was actually echoing what Matt said, i.e. if the
contributions being made to the sub-project have any supporting changes that
*should* exist in the core project, i.e. NiFi, or in any other projects that
use this sub-project, they have to be checked by the committer. But you have
made it clear that it's going to be an npm module, so I understand it better
now. Disregard my previous comment.

-
Sivaprasanna

On Fri, Feb 23, 2018 at 9:16 AM, Tony Kurc <tk...@apache.org> wrote:

> Is some of the thinking that projects other than nifi projects would use
> this?
>
> On Feb 22, 2018 10:00 PM, "Scott Aslan" <scottyas...@gmail.com> wrote:
>
> > Sivaprasanna,
> >
> > I am not sure I follow exactly what you are saying...
> >
> > NiFi Registry would no longer continue to host a copy of the FDS
> NgModule.
> > Instead, NiFi Registry would just add the NiFi FDS sub-project as a
> client
> > side dependency in its package.json. This would be analogous to how NiFi
> > Registry depends on Angular Material, etc. npm supports the ability to
> > download published packages which are current with the latest stable
> > release of a package. npm *also* supports the ability to develop off
> > of the *master
> > branch (or any other branch really)* of the NiFi FDS. An example of this
> > can be found in the github.io demo here
> > <https://github.com/scottyaslan/fluid-design-
> system/blob/gh-pages/package.
> > json#L45>
> > . By placing that dependency in the package.json for the NiFi Registry
> each
> > subsequent clean build would automatically download the latest master
> > branch of the NiFi FDS sub-project and developers can leverage the
> latest
> > NiFi FDS components.
> >
> > This also brings up a good point about release management. I have also
> > included a prototype of one possible implementation of automating the
> > tagging of a branch and automatically updating release version numbers
> etc.
> > leveraging grunt-bump [1]. The FDS-0.0.1-RC.0 tag [2] was created with
> the
> > described grunt task.
> >
> > [1]
> > https://github.com/scottyaslan/fluid-design-system/blob/master/Gruntfile
> .
> > js#L47
> >
> > [2] https://github.com/scottyaslan/fluid-design-
> system/tree/FDS-0.0.1-RC.0
> >
> > On Thu, Feb 22, 2018 at 12:39 PM, Sivaprasanna <
> sivaprasanna...@gmail.com>
> > wrote:
> >
> > > I agree with Matt. With clear documentation and guides, contributions
> on
> > > the sub-projects can be streamlined and be ensured that the necessary
> > > changes are already available on the core project i.e NiFi. One
> challenge
> > > is that the committer of the sub-project should have the courtesy to
> > check
> > > wether the supporting changes are made available to the core project
> and
> > > track its status but given how contributions are being handled in
> > > nifi-registry project, I don’t think it’s going to be that much of a
> > > headache.
> > >
> > > We could also add to the helper doc mentioning that if the contribution
> > is
> > > going to affect a core component, the contributor needs to add the JIRA
> > id
> > > of the core project’s supporting changes in the sub-projects’ issue
> > > description.
> > >
> > > On Thu, 22 Feb 2018 at 10:42 PM, Matt Gilman <matt.c.gil...@gmail.com>
> > > wrote:
> > >
> > > > Joe, Joe,
> > > >
> > > > Regarding the release process... I think it could be similar to how
> > folks
> > > > verified and validated the NiFi Registry release. Guidance was given
> > in a
> > > > helper guide regarding how to obtain/build a branch or PR that
> > references
> > > > the new components. For the Registry release, there was a PR for NiFi
> > > that
> > > > had the supporting changes already available.
> > > >
> > > > We may have this issue any time we release new versions that depend
> on
> > > > another (sub)project.
> > > >
> > > > Matt
> > > >
> > > > On Thu, Feb 22, 2018 at 11:39 AM, Joe Percivall <
> jperciv...@apache.org
> > >
> > > > wrote:
> > > >
> > > > > Scott,
> > > > >
> > > > > Definitely like the direction of standardizing NiFi and Registry
> > around
> > > > the
> > > > > same set of components, so the user has a common UX. Is there
> > precedent
> > > > for

Re: Re: Re: Re: Re: Re: Is there a REST API to run a dataflow on demand?

2018-02-22 Thread Sivaprasanna
I believe that even if this were made available as part of the NiFi REST
API, it would internally just call the start and stop APIs.
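
For anyone who wants to script it in the meantime, here is a minimal sketch
of those two calls (using java.net.http from Java 11+ purely for
illustration; the host, port, and process group id are placeholders):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RunFlowOnce {
    public static void main(String[] args) throws Exception {
        String pgId = "your-process-group-id"; // placeholder
        setState(pgId, "RUNNING");  // start the flow
        Thread.sleep(60_000);       // let it run for a minute
        setState(pgId, "STOPPED");  // stop it again
    }

    // Schedule or unschedule every component in the process group via
    // PUT /nifi-api/flow/process-groups/{id}
    static void setState(String pgId, String state) throws Exception {
        String body = "{\"id\":\"" + pgId + "\",\"state\":\"" + state + "\"}";
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/nifi-api/flow/process-groups/" + pgId))
                .header("Content-Type", "application/json")
                .PUT(HttpRequest.BodyPublishers.ofString(body))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(state + " -> HTTP " + response.statusCode());
    }
}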

On Fri, 23 Feb 2018 at 6:54 AM,  wrote:

> Yes, that is what I do currently.
>
> But I think it will be better if NiFi can support this feature natively.
>
>
>
> From:
> "Andrew Grande" 
> To:
> dev@nifi.apache.org
> Date:
> 2018/02/23 09:07
> Subject:
> Re: Re: Re: Re: Re: Is there a REST API to run a dataflow on demand?
>
>
>
> One could write a script and call it in 1 step. I don't believe there is
> anything available OOTB.
>
> Andrew
>
> On Thu, Feb 22, 2018, 7:58 PM  wrote:
>
> >  Thanks a lot for your help.
> >
> > Yes. that is what I do to trigger a dataflow on demand.
> > But I want to know if there is an API that I can do this in one step.
> >
> >
> >
> > From:
> > "Daniel Chaffelson" 
> > To:
> > dev@nifi.apache.org
> > Date:
> > 2018/02/23 04:46
> > Subject:
> > Re: Re: Re: Is there a REST API to run a dataflow on demand?
> >
> >
> >
> > Hi Boying,
> >
> > I have been working on a NiFi Python Client SDK that might help you
> here,
> > as the goal is to be able to replicate everyday actions taken in the
> NiFi
> > GUI as well as extending it for CICD/SDLC work.
> > For example with the following commands you would:
> >
> >1. get the reference object for a processor
> >2. stop it if it is running
> >3. change the scheduling period to 3s (or most other parameters)
> >4. start it again
> >
> >
> > import nipyapi
> > # 1. get the reference object for the processor
> > processor = nipyapi.canvas.get_processor('MyProcessor')
> > # 2. stop it if it is running
> > nipyapi.canvas.schedule_processor(processor, scheduled=False)
> > # 3. change the scheduling period to 3s
> > update = nipyapi.nifi.ProcessorConfigDTO(
> >     scheduling_period='3s'
> > )
> > processor = nipyapi.canvas.update_processor(processor, update)
> > # 4. start it again
> > nipyapi.canvas.schedule_processor(processor, scheduled=True)
> >
> > If you need a different set of steps then please let me know and perhaps
> I
> > can help.
> > Those commands are currently in the master branch awaiting release:
> > https://github.com/Chaffelson/nipyapi
> >
> > Thanks,
> > Dan
> >
> > On Thu, Feb 22, 2018 at 7:41 AM  wrote:
> >
> > > Thanks very much, I'll try your suggestions.
> > >
> > >
> > >
> > > From:
> > > James Wing 
> > > To:
> > > NiFi Dev List 
> > > Date:
> > > 2018/02/22 14:05
> > > Subject:
> > > Re: Re: Re: Is there a REST API to run a dataflow on demand?
> > >
> > >
> > >
> > > The NiFi API can be used to start and stop processors or process
> groups,
> > > and this might solve your use case.  But NiFi does not have an API to
> > run
> > > a
> > > processor only once, immediately, separate from its configured
> schedule.
> > I
> > > have solved similar problems in the past by creating two separate
> > upstream
> > > sources - one for scheduled operation, and one for ad-hoc operation.
> > > GenerateFlowFile, GetFile, or similar processors can be used to inject
> a
> > > flowfile where you need to kick off the flow.
> > >
> > > Thanks,
> > >
> > > James
> > >
> > > On Wed, Feb 21, 2018 at 5:57 PM,  wrote:
> > >
> > > > Thanks a lot.
> > > >
> > > > But I want to know if there is a REST API that triggers a dataflow
> on
> > > > demand?
> > > > I don't find the API in the page.
> > > >
> > > >
> > > >
> > > >
> > > > From:
> > > > Charlie Meyer 
> > > > To:
> > > > dev@nifi.apache.org
> > > > Date:
> > > > 2018/02/22 09:36
> > > > Subject:
> > > > Re: Is there a REST API to run a dataflow on demand?
> > > >
> > > >
> > > >
> > > > Yep, when you make the changes in the UI, open developer tools in
> your
> > > > browser and see what calls to the nifi api it is making then mimic
> > those
> > > > with code.
> > > >
> > > > The nifi team also kindly publishes
> > > > https://nifi.apache.org/docs/nifi-docs/rest-api/index.html which
> help
> > a
> > > > lot.
> > > >
> > > > Best of luck!
> > > >
> > > > -Charlie
> > > >
> > > > On Wed, Feb 21, 2018 at 7:34 PM,  wrote:
> > > >
> > > > > Hi, team,
> > > > >
> > > > > We set up several NiFi dataflows for data processing.
> > > > > These dataflows are configured to run once per day at
> midnight.
> > > > >
> > > > > But sometimes some dataflows fail, and I want to run the
> dataflow
> > > > again
> > > > > immediately after fixing the issue instead of waiting for running
> it
> > > in
> > > > > the midnight to
> > > > > make sure that the issue is really fixed.
> > > > >
> > > > > The only way I know to do this is to change the time of running
> the
> > > > > dataflow to, for example, 5 minutes from now
> > > > > and then change it back to midnight.
> > > > >
> > > > > It's a little inconvenient.
> > > > >
> > > > > Is there any REST API that I can use to trigger the dataflow on
> > demand
> > > > > i.e. without changing the time back and forth?
> > > > >
> > > > > Thanks
> 

Re: [DISCUSS] Proposal for an Apache NiFi sub-project - NiFi Fluid Design System

2018-02-22 Thread Sivaprasanna
I agree with Matt. With clear documentation and guides, contributions to
the sub-projects can be streamlined, and we can ensure that the necessary
changes are already available in the core project, i.e. NiFi. One challenge
is that the committer of the sub-project should take care to check
whether the supporting changes are made available to the core project and
track their status, but given how contributions are being handled in the
nifi-registry project, I don't think it's going to be that much of a
headache.

We could also add a note to the helper doc mentioning that if a contribution
is going to affect a core component, the contributor needs to add the JIRA id
of the core project's supporting changes to the sub-project's issue
description.

On Thu, 22 Feb 2018 at 10:42 PM, Matt Gilman 
wrote:

> Joe, Joe,
>
> Regarding the release process... I think it could be similar to how folks
> verified and validated the NiFi Registry release. Guidance was given in a
> helper guide regarding how to obtain/build a branch or PR that references
> the new components. For the Registry release, there was a PR for NiFi that
> had the supporting changes already available.
>
> We may have this issue any time we release new versions that depend on
> another (sub)project.
>
> Matt
>
> On Thu, Feb 22, 2018 at 11:39 AM, Joe Percivall 
> wrote:
>
> > Scott,
> >
> > Definitely like the direction of standardizing NiFi and Registry around
> the
> > same set of components, so the user has a common UX. Is there precedent
> for
> > creating a new sub-project just for common components/modules to be used
> by
> > the top-level, and it's other sub-projects? My concerns are similar to
> > Joe's. Along those lines, if the core problems we're trying to address is
> > the release process and distribution, is there a less "heavy-weight"
> > avenue?
> >
> > In the past, we've also talked about pulling out the core NiFi framework
> to
> > be shared between NiFi and MiNiFi-Java for similar reasons. How we go
> about
> > solving this for the UI could be used a model for the core framework as
> > well.
> >
> > - Joe
> >
> > On Thu, Feb 22, 2018 at 10:58 AM, Mike Thomsen 
> > wrote:
> >
> > > Also, what sort of framework is the UI being standardized on? Angular?
> > > React? Something else?
> > >
> > > On Thu, Feb 22, 2018 at 10:03 AM, Joe Witt  wrote:
> > >
> > > > Scott
> > > >
> > > > Ok so extract out the fluid design work you started with NiFi
> Registry
> > > > to its own codebase which can be rev'd and published to NPM making it
> > > > easier to consume/reuse across NiFi projects and offers better
> > > > consistency.  This sounds interesting.
> > > >
> > > > In thinking through the additional community effort or the effort
> > > > trade-off:
> > > > How often do you anticipate we'd be doing releases (and thus
> > > > validation/voting) for this?
> > > > How often would those differ from when we'd want to do a NiFi or NiFi
> > > > Registry release?
> > > > How do you envision the community would be able to help vet/validate
> > > > releases of these modules?
> > > >
> > > > Thanks
> > > > Joe
> > > >
> > > > On Thu, Feb 22, 2018 at 8:18 AM, Scott Aslan 
> > > > wrote:
> > > > > NiFi Community,
> > > > >
> > > > > I'd like to initiate a discussion around creating a sub-project of
> > NiFi
> > > > to
> > > > > encompass the Fluid Design System NgModule created during the
> > > development
> > > > > of the NiFi Registry. A possible name for this sub-project is
> simply
> > > > > "NiFi Fluid
> > > > > Design System". The idea would be to create a sub-project that
> > > > distributes
> > > > > an atomic set of high quality, reuse-able, theme-able, and testable
> > > UI/UX
> > > > > components, fonts, and other JS modules for use across the various
> > web
> > > > > applications throughout the NiFi universe (uNiFiverse???). Both
> NiFi
> > > and
> > > > > NiFi Registry web applications would eventually leverage this
> module
> > > via
> > > > > npm. This approach will enable us to provide our users with a
> > > consistent
> > > > > experience across web applications. Creating a sub-project would
> also
> > > > allow
> > > > > the FDS code to evolve independently of NiFi/NiFi registry and be
> > > > released
> > > > > on it's own timeline. In addition, it would make tracking
> issues/work
> > > > much
> > > > > clearer through a separate JIRA.
> > > > >
> > > > > Please discuss and provide and thoughts or feedback.
> > > > >
> > > > > Thanks,
> > > > >
> > > > > Scotty
> > > >
> > >
> >
> >
> >
> > --
> > *Joe Percivall*
> > linkedin.com/in/Percivall
> > e: jperciv...@apache.com
> >
>

