Re: nifi git commit: This closes #1047

2016-11-02 Thread Oleg Zhurakousky
Andre

Quick note for the future. . .
I just noticed how this commit appears in the Git logs

===
commit 4acc9ad288cc005127d1709adbb1134f6dab94c6
Author: Andre F de Miranda 
Date:   Thu Nov 3 02:29:11 2016 +1100

This closes #1047
===

As you can see, it has neither a description nor a reference to the corresponding JIRA. 
For consistency and to simplify traceability, it would be nice to have both, 
regardless of how trivial the actual issue is.
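For example, a message along these lines (the JIRA key here is made up for illustration) keeps the log self-explanatory:

===
NIFI-XXXX: Short description of the actual change

This closes #1047
===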

Cheers
Oleg 


> On Nov 2, 2016, at 11:30 AM, af...@apache.org wrote:
> 
> Repository: nifi
> Updated Branches:
>  refs/heads/master 511f2a0c8 -> 4acc9ad28
> 
> 
> This closes #1047
> 
> 
> Project: http://git-wip-us.apache.org/repos/asf/nifi/repo
> Commit: http://git-wip-us.apache.org/repos/asf/nifi/commit/4acc9ad2
> Tree: http://git-wip-us.apache.org/repos/asf/nifi/tree/4acc9ad2
> Diff: http://git-wip-us.apache.org/repos/asf/nifi/diff/4acc9ad2
> 
> Branch: refs/heads/master
> Commit: 4acc9ad288cc005127d1709adbb1134f6dab94c6
> Parents: 511f2a0
> Author: Andre F de Miranda 
> Authored: Thu Nov 3 02:29:11 2016 +1100
> Committer: Andre F de Miranda 
> Committed: Thu Nov 3 02:29:11 2016 +1100
> 



Re: [ANNOUNCE] New Apache NiFi Committer Scott Aslan

2016-11-04 Thread Oleg Zhurakousky
Congrats Scott! Well earned!

Oleg
> On Nov 4, 2016, at 11:25 AM, Tony Kurc  wrote:
> 
> On behalf of the Apache NiFI PMC, I am very pleased to announce that Scott
> Aslan has accepted the PMC's invitation to become a committer on the Apache
> NiFi project. We greatly appreciate all of Scott's hard work and generous
> contributions to the project. We look forward to his continued involvement
> in the project.
> 
> For those familiar with the UI improvements in 1.x, a lot of the
> implementation was contributed by Scott, so another set of very visible
> contributions! (Sorry for the pun, I couldn't help myself).
> 
> Welcome and congratulations!
> Tony



Re: Processor disabled state not maintained across template download/import

2016-11-07 Thread Oleg Zhurakousky
This is actually a very interesting discussion to be had. . .
At this point I believe that, similar to other component states (i.e., RUNNING), the 
DISABLED state constitutes runtime state and therefore should NOT be stored in, nor 
expected to be restored from, the template. A template, IMHO, should only store flow definitions.
That said, I do see how someone may treat DISABLED as a special state that does not 
represent runtime state, but rather the intention of the author at the time of 
template export.

So let’s duke it out here and summarize it in JIRA before we decide if any 
action needs to be taken.

Cheers
Oleg
> On Nov 7, 2016, at 10:20 AM, Joe Witt  wrote:
> 
> Chris,
> 
> I can see why we're not automatically starting processors when they're
> placed on the graph but I do share your view that disabled processor status
> should be honored and retained.  I think a JIRA for this is reasonable and
> at the very least will get some good discussion and/or doc changes.
> 
> Thanks
> JOe
> 
> On Mon, Nov 7, 2016 at 10:12 AM, McDermott, Chris Kevin (MSDU -
> STaTS/StorefrontRemote)  wrote:
> 
>> If I create a template from a flow that has some disabled components, when
>> I download and import that template into a different NiFi instance, the
>> disabled state of those components is lost (they are no longer disabled.)
>> I’m not sure when this information is being lost (is it saved in the
>> template?)
>> 
>> 
>> 
>> This makes using a template for deployment somewhat difficult.  Unless I’m
>> missing something, I am planning on entering a JIRA, but I wanted to check
>> with the community first.
>> 
>> Thanks,
>> 
>> Chris McDermott
>> Remote Business Analytics
>> STaTS/StoreFront Remote
>> HPE Storage
>> Hewlett Packard Enterprise
>> Mobile: +1 978-697-5315



Re: NiFi processor validation

2016-11-08 Thread Oleg Zhurakousky
Could it also be related to https://issues.apache.org/jira/browse/NIFI-1318?

Cheers
Oleg

On Nov 8, 2016, at 7:52 AM, Joe Witt <joe.w...@gmail.com> wrote:

+1 to both of those points:
1) Avoid validating that which it doesn't help (disabled and running)
2) Avoid using web/synchronous threading for any user code

On Tue, Nov 8, 2016 at 7:43 AM, Matt Gilman <matt.c.gil...@gmail.com> wrote:
I also agreed these changes make sense. In addition, another approach we
could consider that has been discussed in the past [1] is to perform
component validation asynchronously. This presents its own challenges but
would also be helpful. We should try to avoid calling into user code in any
web thread.

Matt

[1] https://issues.apache.org/jira/browse/NIFI-950

On Mon, Nov 7, 2016 at 6:15 PM, Matt Burgess <mattyb...@apache.org> wrote:

Agreed. We also validate processors on a timer-based strategy in FlowController
(apparently for snapshotting) and in the web server (via ControllerFacade);
those validations seem to happen 6-7 times per interval (roughly every 15-20
seconds). We also validate all processors on any change to the canvas (such as
moving a processor). Besides Mike's suggestion, perhaps we should look at a
purely event-driven strategy for validating processors, if possible?

Regards,
Matt

On Mon, Nov 7, 2016 at 6:06 PM, Joe Witt <joe.w...@gmail.com> wrote:
Makes good sense to me.

On Nov 7, 2016 5:39 PM, "Michael Moser" <moser...@gmail.com> wrote:

All,

I would like to propose a fundamental change to processor validation based on
observations in https://issues.apache.org/jira/browse/NIFI-2996: validate
processors only when they are in the STOPPED state.

The properties on a processor in the RUNNING state should always be valid;
otherwise you should not have been able to start the processor. A processor in
the DISABLED state doesn't show validation results, so it seems a waste to
validate its properties.
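In code terms, the guard would look roughly like this (a sketch only; ScheduledState
is NiFi's enum, but this is illustrative, not the actual framework code):

    public Collection<ValidationResult> getValidationErrors() {
        if (getScheduledState() != ScheduledState.STOPPED) {
            // RUNNING was already validated at start time; DISABLED never shows results
            return Collections.emptyList();
        }
        return validate(validationContext); // only STOPPED components pay the validation cost
    }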

The reason I'm proposing this change is that the NiFi UI slows down as you add
more processors and controller services to the graph. Beyond the common-sense
expectation that this would be true, it appears that processor validation is a
significant part of the 'cost' on the server when responding to REST API
requests.  Some details from my testing are in the JIRA ticket.

Thoughts?

Thanks,
-- Mike






Re: NiFi Spark Processor

2016-11-18 Thread Oleg Zhurakousky
Shankha

I know it may not always be all that obvious, but have you started the Output Port? 
Also, have you defined the ‘nifi.remote.input.socket.port’ property in nifi.properties?
Also, unless you’ve already discovered it, there is an excellent blog post 
(https://blogs.apache.org/nifi/entry/stream_processing_nifi_and_spark) from 
Mark on this which covers every little detail, so I hope that will help.
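For reference, the receiver setup from that post looks roughly like this (a sketch
from memory using the nifi-spark-receiver classes, where ssc is your
JavaStreamingContext; check the post for the authoritative version):

    // Site-to-Site client config pointing at the NiFi Output Port
    SiteToSiteClientConfig config = new SiteToSiteClient.Builder()
            .url("http://localhost:8080/nifi")
            .portName("Data For Spark") // must match your Output Port's name
            .buildConfig();

    // Stream NiFi data packets into Spark
    JavaReceiverInputDStream<NiFiDataPacket> packets =
            ssc.receiverStream(new NiFiReceiver(config, StorageLevel.MEMORY_ONLY()));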

Cheers
Oleg

On Nov 18, 2016, at 12:29 AM, shankhamajumdar <shankha.majum...@lexmark.com> wrote:

Hi joey,

I am trying the POC in different way now.

1. GetFile processor which is getting the files from the local drive.

2. Connecting the processor to the NiFi Output Port.

3. Written a Spark streaming Job which is connecting to the NiFi Output Port
to stream the data.

The Spark job is able to connect to the NiFi Output Port successfully, but it is
not able to get any data, even though in NiFi the data has been queued from the
GetFile processor. I have checked the NiFi Data Provenance of the Output Port,
but no data is showing there either, whereas the GetFile-to-Output-Port
connection shows the data has been queued.

Regards,
Shankha







Re: Hello

2016-12-13 Thread Oleg Zhurakousky
Hi

I am not sure I fully understand what ‘data’ you are referring to. Is this the 
usual log data? If so, it is still a somewhat unusual requirement to log something 
during one time frame and ignore it during others, which is why I am asking you 
to clarify.
In any event, you can also use a variety of Unix/Linux tools, as well as other OS 
and commercial applications, to aggregate, search, and manipulate log data. 
For example, let’s say you have a log like this:

07:11:41,118  INFO main server.Server:403 - Started @47542ms
07:17:48,596  INFO main server.JettyServer:832 - NiFi has started. The UI is 
available at the following URLs:
08:01:11,596  INFO main server.JettyServer:834 - 
http://fe80:0:0:0:c07e:7dff:fe92:3ddb%awdl0:8080/nifi
09:11:41,597  INFO main server.JettyServer:834 - http://192.168.1.114:8080/nifi
09:18:51,597  INFO main server.JettyServer:834 - 
http://fe80:0:0:0:0:0:0:1%lo0:8080/nifi
. . .

and you want to extract only the entries logged between 7 and 8 AM. Something like 
'grep "^07:" my-log.log' would do the trick, but it could be made more sophisticated.
Cheers
Oleg 

> On Dec 13, 2016, at 6:08 AM, Vidhyashreemurthy N 
>  wrote:
> 
> Hello,
> 
> Firstly, i thank you for an excellent tool! NiFi has helped me in a lot of
> ways.
> I have started using NiFi just a couple of hours ago. Im asked to perform
> the following task with the next 4 hours. I went through the documentation
> roughly, I didnt find the apt answer. Can you please help me with the
> following issue?
> 
> I have a question about the NiFi operations.
> I want to know how can i set a time frame for data to be logged into log
> file.
> Suppose, in one day, i want the data from 5:00pm to 6:00pm alone to be
> stored into log file. The rest 23 hrs of data should be ignored and not to
> be stored in log file.
> 
> How can i achieve this?



Re: NIFI 1.1.0 build error

2016-12-15 Thread Oleg Zhurakousky
Alessio

Any chance you have some network connectivity issues when you attempt to build, 
or maybe a proxy that restricts certain sites?
This error is indeed strange. I can’t recall ever seeing it, and I’ve tried to do 
the same (as you describe) on my Mac and everything builds fine. 

Also, just to compare the environments:
olegs-mac:~ foo$ mvn --version
Apache Maven 3.3.3 (7994120775791599e205a5524ec3e0dfe41d4a06; 
2015-04-22T07:57:37-04:00)
Maven home: /usr/local/Cellar/maven/3.3.3/libexec
Java version: 1.8.0_112, vendor: Oracle Corporation
Java home: /Library/Java/JavaVirtualMachines/jdk1.8.0_112.jdk/Contents/Home/jre
Default locale: en_US, platform encoding: UTF-8
OS name: "mac os x", version: "10.11.2", arch: "x86_64", family: "mac"

Cheers
Oleg

> On Dec 15, 2016, at 6:49 AM, Alessio Palma  
> wrote:
> 
> Hello all,
> 
> I downloaded the source code from
> 
> 
> http://apache.panu.it/nifi/1.1.0/nifi-1.1.0-source-release.zip
> 
> 
> then unpacked the archive and executed:
> 
> 
> mvn dependency:purge-local-repository
> 
> mvn clean install
> 
> 
> and I got this:
> 
> 
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-remote-resources-plugin:1.5:process (default) 
> on project nifi-standard-processors: Error resolving project artifact: Could 
> not transfer artifact com.martiansoftware:macnificent:pom:0.2.0 from/to 
> org.apche.nifi (http://https://mvnrepository.com/artifact/org.apache.nifi): 
> https: unknown error for project com.martiansoftware:macnificent:jar:0.2.0: 
> Unknown host https: unknown error ->
> 
> 
> 
> any pointers how to fix this ?



Re: NIFI 1.1.0 build error

2016-12-15 Thread Oleg Zhurakousky
Actually, I’ll take that back. I believe I never did the ‘purge’ step, just 
‘clean install’, which is why I thought I could not reproduce the ‘martiansoftware’ 
failure.
Now that I have tried the ‘purge’ step, I do see the same failure in the 
nifi-standard-processors bundle.

Anyway, I am not sure why it is complaining about ‘martiansoftware’ on the 
purge step, but ‘mvn clean install’ works fine. I guess if you really want to 
purge everything you can do it manually for now. Meanwhile, you can file a bug 
in JIRA and we’ll take a look.
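For what it’s worth, the "Unknown host https" in the error output points at a
repository or mirror URL with a doubled scheme (note the "http://https://..." in
the message). A hypothetical settings.xml entry that would produce exactly this
kind of failure looks like:

    <mirror>
      <id>org.apche.nifi</id>
      <url>http://https://mvnrepository.com/artifact/org.apache.nifi</url>
      <mirrorOf>*</mirrorOf>
    </mirror>

so checking ~/.m2/settings.xml for anything like this is a good first step.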

Cheers
Oleg

> On Dec 15, 2016, at 8:12 AM, Joe Witt  wrote:
> 
> Alessio
> 
> You should not need any sort of special settings.xml file for this to
> work.  Just grabbing a default maven install and following our dev
> guide should be sufficient.  If you do have one that is altered it
> might be changing important details.
> 
> Thanks
> Joe
> 
> On Thu, Dec 15, 2016 at 8:10 AM, Alessio Palma
>  wrote:
>> Can I have your settings.xml which is located in ~/.m2 ?
>> 
>> 
>> From: Oleg Zhurakousky 
>> Sent: Thursday, December 15, 2016 1:14:42 PM
>> To: dev@nifi.apache.org
>> Subject: Re: NIFI 1.1.0 build error
>> 
>> Alessio
>> 
>> Any chance you have some network connectivity issues when you attempt to 
>> build, or maybe a proxy that restricts certain sites?
>> This error is indeed strange. I can’t recall ever seeing it, and I’ve tried to 
>> do the same (as you describe) on my Mac and everything builds fine.
>> 
>> Also, just to compare the environments:
>> olegs-mac:~ foo$ mvn --version
>> Apache Maven 3.3.3 (7994120775791599e205a5524ec3e0dfe41d4a06; 
>> 2015-04-22T07:57:37-04:00)
>> Maven home: /usr/local/Cellar/maven/3.3.3/libexec
>> Java version: 1.8.0_112, vendor: Oracle Corporation
>> Java home: 
>> /Library/Java/JavaVirtualMachines/jdk1.8.0_112.jdk/Contents/Home/jre
>> Default locale: en_US, platform encoding: UTF-8
>> OS name: "mac os x", version: "10.11.2", arch: "x86_64", family: "mac"
>> 
>> Cheers
>> Oleg
>> 
>>> On Dec 15, 2016, at 6:49 AM, Alessio Palma 
>>>  wrote:
>>> 
>>> Hello all,
>>> 
>>> I downloaded the source code from
>>> 
>>> 
>>> http://apache.panu.it/nifi/1.1.0/nifi-1.1.0-source-release.zip
>>> 
>>> 
>>> then unpacked the archive and executed:
>>> 
>>> 
>>> mvn dependency:purge-local-repository
>>> 
>>> mvn clean install
>>> 
>>> 
>>> and I got this:
>>> 
>>> 
>>> [ERROR] Failed to execute goal 
>>> org.apache.maven.plugins:maven-remote-resources-plugin:1.5:process 
>>> (default) on project nifi-standard-processors: Error resolving project 
>>> artifact: Could not transfer artifact 
>>> com.martiansoftware:macnificent:pom:0.2.0 from/to org.apche.nifi 
>>> (http://https://mvnrepository.com/artifact/org.apache.nifi): https: unknown 
>>> error for project com.martiansoftware:macnificent:jar:0.2.0: Unknown host 
>>> https: unknown error ->
>>> 
>>> 
>>> 
>>> any pointers how to fix this ?
>> 
> 



Re: [VOTE] Release Apache NiFi 1.0.1 (RC1)

2016-12-19 Thread Oleg Zhurakousky
+1 (non-binding)
Used the Release Helper to run through the steps on OS X with Java 1.8.0_112. Created 
and deployed a few simple flows, played with templates, etc.; all is good.

Oleg

> On Dec 18, 2016, at 10:24 PM, Tony Kurc  wrote:
> 
> +1 (binding)
> 
> Ran through helper, built on ubuntu 14.04 with java 1.8, verified hashes
> and signature. ran simple test flow.
> 
> On Sun, Dec 18, 2016 at 9:15 PM, Bryan Rosander 
> wrote:
> 
>> +1 (non-binding)
>> 
>> - Ran through release helper
>> - Sent data to NiFi via S2S using MiNiFi without proxy, with proxy, with
>> authenticating proxy
>> 
>> On Sun, Dec 18, 2016 at 8:14 PM, Koji Kawamura 
>> wrote:
>> 
>>> +1 (non-binding).
>>> 
>>> - Ran through the release helper
>>> - Ran some basic data flows  (Java 1.8.0_111, Mac OS X 10.11.6)
>>> 
>>> 
>>> 
>>> On Sun, Dec 18, 2016 at 6:28 AM, James Wing  wrote:
 +1 (non-binding).  I ran through the release helper and did some basic
 testing of the built binary (Java 1.8.0_101, AWS Linux).
 
 Thanks for putting this together, Joe.
 
 
 James
 
 On Fri, Dec 16, 2016 at 7:28 AM, Joe Percivall <
 joeperciv...@yahoo.com.invalid> wrote:
 
> I apologize for the improperly formatted message. Hopefully the below
> message is better.
> 
> 
> 
> 
> Hello Apache NiFi Community,
> 
> I am pleased to be calling this vote for the source release of Apache
>>> NiFi,
> nifi-1.0.1.
> 
> The source zip, including signatures, digests, etc. can be found at:
> https://repository.apache.org/content/repositories/orgapachenifi-1096
> 
> The Git tag is nifi-1.0.1-RC1
> The Git commit hash is 1890f6c522514027ae46f86601f4771f62cadc6d
> * https://git-wip-us.apache.org/repos/asf?p=nifi.git;a=commit;h=
> 1890f6c522514027ae46f86601f4771f62cadc6d
> * https://github.com/apache/nifi/commit/
>> 1890f6c522514027ae46f86601f477
> 1f62cadc6d
> 
> Checksums of nifi-1.0.1-source-release.zip:
> MD5: cc7fea9a22c0b48f87dd7152ab83c28c
> SHA1: 88c35d5d3ff9d350473a742cdd8c38204628d343
> SHA256: d9d9628ced5bf3f0f3e0eae7729f4eb507120b072e883e287198fee80fbf
>>> 9d15
> 
> Release artifacts are signed with the following key:
> https://people.apache.org/keys/committer/jpercivall
> 
> KEYS file available here:
> https://dist.apache.org/repos/dist/release/nifi/KEYS
> 
> 6 issues were closed/resolved for this release:
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?
> projectId=12316020&version=12338865
> Release note highlights can be found here:
> https://cwiki.apache.org/confluence/display/NIFI/
> Release+Notes#ReleaseNotes-Version1.0.1
> 
> The vote will be open for 72 hours.
> Please download the release candidate and evaluate the necessary items
> including checking hashes, signatures, build from source, and test.
>> Then
> please vote:
> 
> [ ] +1 Release this package as nifi-1.0.1
> [ ] +0 no opinion
> [ ] -1 Do not release this package because...
> 
> Thanks!
>>> 
>> 



Re: [ANNOUNCE] New Apache NiFi Committer Jeremy Dyer

2016-12-20 Thread Oleg Zhurakousky
Congrats Jeremy! Well deserved!
> On Dec 20, 2016, at 8:46 AM, Yolanda Davis  wrote:
> 
> Congratulations Jeremy!
> 
> On Mon, Dec 19, 2016 at 8:23 PM, Aldrin Piri  wrote:
> 
>> On behalf of the Apache NiFI PMC, I am very pleased to announce that Jeremy
>> Dyer has accepted the PMC's invitation to become a committer on the
>> Apache NiFi project. We greatly appreciate all of Jeremy's hard work and
>> generous contributions and look forward to continued involvement in the
>> project.
>> 
>> Jeremy’s contributions include creating a suite of processors that
>> aid in parsing HTML, build improvements and testing for MiNiFi, as well as
>> many articles and presentations on using NiFi in new and novel ways.
>> 
>> Welcome Jeremy!
>> 
> 
> 
> 
> -- 
> --
> yolanda.m.da...@gmail.com
> @YolandaMDavis



Re: [VOTE] Release Apache NiFi 1.1.1 (RC1)

2016-12-20 Thread Oleg Zhurakousky
+1 (non-binding) 

Built and tested on OS X with contrib-check.
Ran several basic flows.
Validated template import/export (single-node and 3-node cluster).

> On Dec 20, 2016, at 1:49 PM, Joey Frazee  wrote:
> 
> +1 (non-binding)
> 
> - Verified commit hash, checksums and GPG signature
> - Checked root LICENSE and NOTICE
> - Checked version in pom files
> - Ran `mvn -T 2.0C clean install -Pcontrib-check`
> - Tested with PutElasticsearchHttp with/without connection failure (NIFI-3194)
> - Tested with ValidateCsv (NIFI-3175)
> - Tested CSV to Hive data flow with Avro, ORC conversions and PutHiveQL
> 
>> On Dec 20, 2016, at 12:29 PM, James Wing  wrote:
>> 
>> +1 (non-binding). I ran through the release helper -- verified hashes,
>> license/notice/readme files, full build, and tested the resulting binary
>> without issues on JDK 1.8.0_101, Amazon Linux.
>> 
>> Thanks,
>> 
>> James
>> 
>> On Mon, Dec 19, 2016 at 2:35 PM, Joe Percivall <
>> joeperciv...@yahoo.com.invalid> wrote:
>> 
>>> Hello Apache NiFi Community,
>>> 
>>> I am pleased to be calling this vote for the source release of Apache
>>> NiFi, nifi-1.1.1.
>>> 
>>> The source zip, including signatures, digests, etc. can be found at:
>>> https://repository.apache.org/content/repositories/orgapachenifi-1097
>>> 
>>> The Git tag is nifi-1.1.1-RC1
>>> The Git commit hash is a92f2e36ed6be695e4dc6f624f6b3a96e6d1a57c
>>> * https://git-wip-us.apache.org/repos/asf?p=nifi.git;a=commit;h=
>>> a92f2e36ed6be695e4dc6f624f6b3a96e6d1a57c
>>> * https://github.com/apache/nifi/commit/a92f2e36ed6be695e4dc6f624f6b3a
>>> 96e6d1a57c
>>> 
>>> Checksums of nifi-1.1.1-source-release.zip:
>>> MD5: 74955060d8ee295d77a23607ac644a6e
>>> SHA1: 82efc0dc3141d0fad0205b33539e5928da87ad17
>>> SHA256: 25fab8d7abfecf4c0ccef1ed9cd5f0849c829c0741142ed4074bc8dd0781f7d0
>>> 
>>> Release artifacts are signed with the following key:
>>> https://people.apache.org/keys/committer/jpercivall
>>> 
>>> KEYS file available here:
>>> https://dist.apache.org/repos/dist/release/nifi/KEYS
>>> 
>>> 16 issues were closed/resolved for this release:
>>> https://issues.apache.org/jira/secure/ReleaseNote.jspa?
>>> projectId=12316020&version=12338797
>>> Release note highlights can be found here:
>>> https://cwiki.apache.org/confluence/display/NIFI/
>>> Release+Notes#ReleaseNotes-Version1.1.1
>>> 
>>> The vote will be open for 72 hours.
>>> Please download the release candidate and evaluate the necessary items
>>> including checking hashes, signatures, build from source, and test. Then
>>> please vote:
>>> 
>>> [ ] +1 Release this package as nifi-1.1.1
>>> [ ] +0 no opinion
>>> [ ] -1 Do not release this package because...
>>> 
>>> Thanks!
> 
> 



Re: NiFi compilation error - master & 1.1.x branch

2016-12-22 Thread Oleg Zhurakousky
Andrew

I believe I’ve seen something similar before, and it persisted until I upgraded the 
JDK. Once on the latest (1.8.0_112) all was good, but after discussing/testing 
it a bit internally, even older updates would do (e.g., 1.8.0_65).
FWIW, I had the same 1.8.0_45 version until I started seeing this.

Cheers
Oleg

> On Dec 22, 2016, at 6:49 AM, Andrew Christianson 
>  wrote:
> 
>> What version of Java are using? Oracle or OpenJDK, 8 or 9, etc.? Also
> 
> $ java -version   
>  
> java version "1.8.0_45"   
>   
> 
> Java(TM) SE Runtime Environment (build 1.8.0_45-b14)  
>   
> 
> Java HotSpot(TM) 64-Bit Server VM (build 25.45-b02, mixed mode)
> 
>> which branch are you building from? It looks like it's building
> 
> Tried both of these:
> 
> $ git branch -v   
>  
> * master 44c9ea0 NIFI-3236 - SplitJson performance improvements   
>   
> 
>  support/nifi-1.1.x 986e716 NIFI-3188: Added unit test to test corner cases



Re: GetKafka and ConvertAvroToJson returns blank

2016-12-22 Thread Oleg Zhurakousky
Malini

Indeed, that is strange.
Would you mind looking at the logs to see if there are any exceptions/errors 
generated from ConvertAvroToJson? Meanwhile I’ll run some tests with your data.

Cheers
Oleg

On Dec 22, 2016, at 1:37 PM, Malini Shetty <malini.she...@gmail.com> wrote:

Hi,
The GetKafka processor is returning a success message and the data can be viewed 
in hex format. The next processor, ConvertAvroToJson, is returning a blank string, 
while Kafka Confluent is able to convert the Avro binary data into JSON format.

DB2 data is streamed through CDC to Kafka; the schema used in ConvertAvroToJson is:

{ "namespace": "mynamespace.com", "type": "record", "name": "TEST_DATA1", "fields": [ {"name": "ENAME", "type": "string"} ] }

Can you please help me understand why ConvertAvroToJson is returning a blank 
string?

I have used a PutFile processor after GetKafka. Attached is the data format; the 
string returned by ConvertAvroToJson is blank.

Regards
Malini Shetty

<10677-hprojectodsdevbinaryavro.zip>



Re: GetKafka and ConvertAvroToJson returns blank

2016-12-22 Thread Oleg Zhurakousky
OK, I see the issue after testing it with your data.

You probably did not specify the schema when configuring the ConvertAvroToJson 
processor and because of that you were most likely getting the following error:

[pool-1-thread-1] ERROR org.apache.nifi.processors.avro.ConvertAvroToJSON - 
ConvertAvroToJSON[id=84a6c68c-0064-40a1-9431-80ff35fbc7d3] Failed to convert 
FlowFile[0,147497938581519.mockFlowFile,14B] from Avro to JSON due to 
org.apache.nifi.processor.exception.ProcessException: java.io.IOException: Not 
a data file.; transferring to failure: 
org.apache.nifi.processor.exception.ProcessException: java.io.IOException: Not 
a data file.

So my first question; Do you see the same error in the logs? (just want to 
confirm that you do)

In any event, ConvertAvroToJson has the “Schema” configuration property. Simply 
pass your schema as the value of this property, e.g.:

{ "namespace": "mynamespace.com", "type": "record", "name": "TEST_DATA1", "fields": [ {"name": "ENAME", "type": "string"} ] }

Cheers
Oleg


On Dec 22, 2016, at 1:49 PM, Oleg Zhurakousky <ozhurakou...@hortonworks.com> wrote:

Malini

Indeed, that is strange.
Would you mind looking at the logs to see if there are any exceptions/errors 
generated from ConvertAvroToJson? Meanwhile I’ll run some tests with your data.

Cheers
Oleg

On Dec 22, 2016, at 1:37 PM, Malini Shetty <malini.she...@gmail.com> wrote:

Hi,
The GetKafka processor is returning a success message and the data can be viewed 
in hex format. The next processor, ConvertAvroToJson, is returning a blank string, 
while Kafka Confluent is able to convert the Avro binary data into JSON format.

DB2 data is streamed through CDC to Kafka; the schema used in ConvertAvroToJson is:

{ "namespace": "mynamespace.com", "type": "record", "name": "TEST_DATA1", "fields": [ {"name": "ENAME", "type": "string"} ] }

Can you please help me understand why ConvertAvroToJson is returning a blank 
string?

I have used a PutFile processor after GetKafka. Attached is the data format; the 
string returned by ConvertAvroToJson is blank.

Regards
Malini Shetty

<10677-hprojectodsdevbinaryavro.zip>




Re: [ANNOUNCE] New Apache NiFi PMC Member - Joe Skora

2016-12-29 Thread Oleg Zhurakousky
Congrats Joe!!! Well earned!
Oleg 

Sent from my iPhone

> On Dec 29, 2016, at 14:23, Joe Percivall  wrote:
> 
> Congrats Joe!
> 
>> On Thu, Dec 29, 2016 at 1:11 PM, Tony Kurc  wrote:
>> 
>> Glad to have you on the PMC, Joe!
>> 
>>> On Thu, Dec 29, 2016 at 1:02 PM, Aldrin Piri  wrote:
>>> 
>>> All,
>>> 
>>> On behalf of the Apache NiFi PMC, I am pleased to announce that Joe
>> Skora has
>>> accepted the PMC's invitation to join the Apache NiFi PMC.  Joe has been
>>> with NiFi for quite some time, even before its arrival in the ASF and
>>> became a committer in February.  We are most pleased he brought his
>>> knowledge and supported the community once open sourced and has
>>> provided continuous and excellent contributions in all facets of the
>>> community.  Of note, Joe was our first community member to carry out a
>>> release without being a PMC member for 0.7.1.
>>> 
>>> Please join us in congratulating and welcoming Joe to the Apache NiFi
>> PMC.
>>> 
>>> Again, congratulations Joe and well deserved!
> 
> 
> 
> -- 
> 
> - - - - - -
> *Joseph Percivall*
> e: joeperciv...@gmail.com


Re: [ANNOUNCE] New Apache NiFi Committer Joey Frazee

2017-01-03 Thread Oleg Zhurakousky
Congrats Joey!

Nice to have you on board!
Oleg

On Jan 3, 2017, at 2:42 PM, Aldrin Piri <ald...@apache.org> wrote:

On behalf of the Apache NiFI PMC, I am very pleased to announce that Joey 
Frazee has accepted the PMC's invitation to become a committer on the Apache 
NiFi project. We greatly appreciate all of Joey's hard work and generous 
contributions and look forward to continued involvement in the project.

Joey's contributions include support for HL7, JMS, and EventHub extensions.  
Joey can also be found assisting on the mailing lists, as well as writing articles 
and maintaining repositories for the NiFi community.

Congrats, Joey!



Re: NiFi installation issues

2017-01-09 Thread Oleg Zhurakousky
Pushkar

Any chance you can look at and provide the relevant logs? You can find them in the 
logs directory under your NiFi home.
Also, could you please be more specific as to which directions you followed? 
I just want to make sure that any issues with the documentation get corrected.

Cheers
Oleg

> On Jan 9, 2017, at 8:46 AM, Pushkara R  wrote:
> 
> Hi,
> 
> I'm installing NiFi on my arch linux machine following the instructions in
> https://github.com/apache/nifi.
> 
> After I built the project, I executed the NiFi startup script and tried
> connecting to the local server but I got a 404 error with the following
> message.
> 
> HTTP ERROR: 404
> 
> Problem accessing /nifi/. Reason:
> 
>Not Found
> 
> --
> Powered by Jetty:// 9.3.9.v20160517 
> 
> Could someone please help me debug this?
> 
> Thank You
> Pushkar



Re: NiFi installation issues

2017-01-09 Thread Oleg Zhurakousky
Pushkar

It appears that something is already running on port 8080, so Jetty cannot start. 
You can see the following at the bottom of the log:


2017-01-09 17:24:23,606 WARN [main] org.apache.nifi.web.server.JettyServer Failed to start web server... shutting down.
java.net.BindException: Address already in use
    at sun.nio.ch.Net.bind0(Native Method) ~[na:1.8.0_112]
    at sun.nio.ch.Net.bind(Net.java:433) ~[na:1.8.0_112]
    at sun.nio.ch.Net.bind(Net.java:425) ~[na:1.8.0_112]
    at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223) ~[na:1.8.0_112]
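If you cannot free up port 8080, you can also point NiFi at a different port by 
editing conf/nifi.properties before restarting, e.g.:

    nifi.web.http.port=8081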

On Jan 9, 2017, at 9:37 AM, Pushkara R <pushkar1...@gmail.com> wrote:

Oleg

I've pasted the app log here http://pastebin.com/YXkEvNZJ
and the bootstrap log here http://pastebin.com/iPJeJvwX.

I built the project using 'mvn clean install' (after setting
MAVEN_OPTS="-Xmx2048m -XX:MaxPermSize=128m" because of OutOfMemory
exceptions).
Then I extracted the nifi-1.2.0-snapshot-bin.tar.gz.
Followed by '$ ./bin/nifi.sh start'

I then try to connect to 'http://localhost:8080/nifi/' which gives me the
screen I pasted in my previous mail.

These were the documentation as available in the README.md in the github
clone of the project.

Thanks
Pushkar



Re: problems with custom controller services, and other comments

2017-01-10 Thread Oleg Zhurakousky
Michael

That is indeed strange. 
Quick question: is there any chance we can look at your code (the custom 
controller service)? Also, what version of NiFi are you using?
I will be looking into it, but any additional info is helpful.

Cheers
Oleg


> On Jan 10, 2017, at 1:14 PM, Knapp, Michael  
> wrote:
> 
> Devs,
> 
> For some reason NIFI is not working with some custom controller services I 
> have written.  I wrote new implementations of AWSCredentialsProviderService, 
> that aim to work with session tokens.  I am hoping to run NIFI from my local 
> machine and to be able to connect with AWS using session tokens.
> 
> For both implementations I tried, it fails when I try to create them from the 
> web UI.  I created a PutS3Object processor, and configured the “AWS 
> Credentials Provider Service” property.  From that property I tried to create 
> a new service and selected my custom implementation.  When I click “create” 
> the value for the credentials provider service is the ID of the controller 
> service, not its name.  While my controller services require several 
> properties to be set, the web UI is not letting me set them.  Usually I see 
> an arrow next to the property, which allows me to configure a controller 
> service, but I am not getting that now.  I looked in the nifi-app logs, and I 
> do not see any exception, I have even set the logging to TRACE for all 
> things, and still don’t see any problem in the logs.  The PutS3Object 
> processor is not validating because the controller service is found to be 
> invalid.  I tried creating a unit test, it seems to work for me in tests, but 
> I can’t use TestRunners because that is processor oriented, not meant for 
> controller services.  I have a suspicion that spring’s aspect oriented 
> programming is somehow fuddling with my service.
> 
> Does anybody know what I am doing wrong here?
> 
> Other unrelated comments:
> 
> 1.   The first time you unpack NIFI it takes super long for it to start 
> for me, like a half hour or more.  I think you should make it easy for people 
> to scale back their NIFI implementation.  Really I would like to start it 
> with just the minimum NAR files for it to start, and I can add others that I 
> need.  Maybe a sub-directory in lib for the essential nars could help people 
> separate the essential stuff from the optional nars.  The first time I tried 
> installing it, I thought it was broken when really it just was taking forever 
> (over 30 minutes).  I think that new users will probably abandon NIFI if they 
> can’t get it to start quickly out of the box.  Maybe split the optional nars 
> into an “extra-lib”, and people can move those into lib as necessary for 
> their goals.
> 
> 2.   Building NIFI from source takes over an hour for me, really I just 
> want to build the bare minimum things to get it to start.  I tried creating 
> maven profiles to have it build just the minimum pieces, but this proved to 
> be non-trivial as maven does not seem to respect the “modules” tag in 
> profiles, and the nifi-assembly project requires all of the optional nars to 
> also be built.  Creating this might be too complicated for me.  Has anybody 
> thought about supporting a quick/minimal build?
> 
> 3.   The “nifi-aws-processors” is challenging to use because in one 
> project they have defined the interfaces for controller services 
> (AWSCredentialsProviderService) and also included the services.  I tried 
> creating my own nar with an implementation of AWSCredentialsProviderService, 
> but since it depended on “nifi-aws-processors”, my nar was also re-hosting 
> all of the AWS processors.  I was facing a lot of classpath issues because of 
> this.  I worked around this by using maven shade to shade in the 
> “nifi-aws-processors” into my own jar, but excluding the services it 
> provided.  Then in my nar project I had to exclude the dependency on 
> “nifi-aws-processors”.  This was a lot of work on my part when all they 
> needed to do was split that project into api, api-nar, impl, and impl-nar.
> 
> 4.   I think it is very confusing how there is a “Controller Services” 
> for the entire NIFI canvas, and separate ones for individual processor 
> groups.  It seems that processors cannot use global controller services, and 
> I am still uncertain about why I would ever create a global one.  From Nifi 
> settings, I would like to also see controller services in processor groups, 
> and vice versa.  From a processor, I would like to assign controller services 
> that are global in scope, not limited to a processor group.  I think this is 
> something that will confuse and frustrate a lot of new developers, driving 
> them to consider competing products.
> 
> 5.   I think the developer guide needs some clarification on what jars 
> are provided and not.  New developers will be unsure if they should include 
> “nifi-api” as a provided or compile dependency, and same goes for 
> nifi-framework-core.
> 
> 6.   Perha

Re: [DISCUSS] Run Once scheduling

2017-01-12 Thread Oleg Zhurakousky
I was just about to suggest the same. 
Run-once would be a bit counterintuitive to flow processing as a concept. 
Think of it this way: a flow, or parts of it, has only two states, 
RUNNING or STOPPED. In the RUNNING state it processes data as it arrives 
(every second, every minute, every day, etc.). Indeed, there may be a concern 
that the processor will do a lot of ‘dry’ spins if no data is available, but 
fortunately NiFi lets you limit the impact of that by configuring the ‘yield 
duration’. By default it is set to 1 sec, but for your case you may want to set 
it to 1 hour or so, essentially controlling how often such a processor is scheduled 
between ‘dry’ spins.
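In code terms, a source processor typically calls context.yield() when it finds
nothing to do, which is what makes the configured yield duration kick in (a sketch;
dataAvailable() and the processing logic are placeholders):

    @Override
    public void onTrigger(ProcessContext context, ProcessSession session) throws ProcessException {
        if (!dataAvailable()) {  // placeholder for your "is there work?" check
            context.yield();     // back off for the configured Yield Duration
            return;
        }
        // ... create and transfer FlowFiles as usual ...
    }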

That said, and just to entertain the idea of Run Once: what do you think the 
processor state should be after it has run once? Let’s assume it ran and was somehow 
stopped. . . then what? Data arrives on the incoming queue, but nothing is 
processed until someone manually goes and restarts the processor. Right?
I mean, from a general workflow standpoint the concern is very valid, but from a 
flow-processing standpoint, the fact that NiFi does not support this is actually 
more of a feature than a lack of functionality.

Thoughts?

Cheers
Oleg

> On Jan 12, 2017, at 1:02 PM, Joe Witt  wrote:
> 
> Naz,
> 
> Why not just leave all the processes running?  If the data only
> arrives periodically that is ok, right?
> 
> Thanks
> Joe
> 
> On Thu, Jan 12, 2017 at 10:54 AM, Irizarry Jr., Nazario  
> wrote:
>> On a project that I am on we have been looking at using NiFi for 
>> orchestrations that are invoked infrequently.  For example, once a month a 
>> new data input product becomes available and then one wants to run it 
>> through a set of processing steps that can be nicely implemented using NiFi 
>> processors.  However, using the interval or cron scheduling for this purpose 
>> begins to get cumbersome after a while with the need to start and manually 
>> stop these occasional flows.
>> 
>> It would be fairly easy to add an additional scheduling option - “Run Once” 
>> for this use case.  The behavior would be that when a processor is set to 
>> run once it automatically stops after it has successfully processed one 
>> input.
>> 
>> What do people think?  We are willing to implement this small enhancement.
>> 
>> Cheers,
>> 
>> Naz Irizarry
>> MITRE Corp.
>> 617-893-0074
>> 
>> 
>> 
> 



Re: problems with custom controller services, and other comments

2017-01-12 Thread Oleg Zhurakousky
Michael

Sorry it took a while, but we just had one of our guys reporting the same symptoms, 
and as Bryan suggested it was actually a problem in the POM, specifically the 
POM file for the NAR module of your custom processor.
Basically, once your processor references a service, its NAR module has to 
declare a dependency on that service's NAR.
I am pasting an example POM for a NAR module from one of our guys who is 
referencing the DBCP service from his custom bundle:

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <parent>
        <groupId>org.apache.nifi</groupId>
        <artifactId>nifi-sql-bundle</artifactId>
        <version>1.1.0</version>
    </parent>

    <artifactId>nifi-sql-nar</artifactId>
    <packaging>nar</packaging>

    <dependencies>
        <dependency>
            <groupId>org.apache.nifi</groupId>
            <artifactId>nifi-dbcp-service-nar</artifactId>
            <type>nar</type>
        </dependency>
        <dependency>
            <groupId>org.apache.nifi</groupId>
            <artifactId>nifi-sql-processors</artifactId>
            <version>1.1.0</version>
        </dependency>
    </dependencies>
</project>
Note the first dependency (nifi-dbcp-service-nar).

Hope that helps

Cheers
Oleg
On Jan 10, 2017, at 1:29 PM, Oleg Zhurakousky <ozhurakou...@hortonworks.com> wrote:

Michael

That is indeed strange.
Quick question, is there any chance we can look at your code (the custom CS). 
Also, what version of NiFi you are using?
Will be looking into it, but any additional info is helpful

Cheers
Oleg


On Jan 10, 2017, at 1:14 PM, Knapp, Michael <michael.kn...@capitalone.com> wrote:

Devs,

For some reason NIFI is not working with some custom controller services I have 
written.  I wrote new implementations of AWSCredentialsProviderService, that 
aim to work with session tokens.  I am hoping to run NIFI from my local machine 
and to be able to connect with AWS using session tokens.

For both implementations I tried, it fails when I try to create them from the 
web UI.  I created a PutS3Object processor, and configured the “AWS Credentials 
Provider Service” property.  From that property I tried to create a new service 
and selected my custom implementation.  When I click “create” the value for the 
credentials provider service is the ID of the controller service, not its name. 
 While my controller services require several properties to be set, the web UI 
is not letting me set them.  Usually I see an arrow next to the property, which 
allows me to configure a controller service, but I am not getting that now.  I 
looked in the nifi-app logs, and I do not see any exception, I have even set 
the logging to TRACE for all things, and still don’t see any problem in the 
logs.  The PutS3Object processor is not validating because the controller 
service is found to be invalid.  I tried creating a unit test, it seems to work 
for me in tests, but I can’t use TestRunners because that is processor 
oriented, not meant for controller services.  I have a suspicion that spring’s 
aspect oriented programming is somehow fuddling with my service.

Does anybody know what I am doing wrong here?

Other unrelated comments:

1.   The first time you unpack NIFI it takes super long for it to start for 
me, like a half hour or more.  I think you should make it easy for people to 
scale back their NIFI implementation.  Really I would like to start it with 
just the minimum NAR files for it to start, and I can add others that I need.  
Maybe a sub-directory in lib for the essential nars could help people separate 
the essential stuff from the optional nars.  The first time I tried installing 
it, I thought it was broken when really it just was taking forever (over 30 
minutes).  I think that new users will probably abandon NIFI if they can’t get 
it to start quickly out of the box.  Maybe split the optional nars into an 
“extra-lib”, and people can move those into lib as necessary for their goals.

2.   Building NIFI from source takes over an hour for me, really I just 
want to build the bare minimum things to get it to start.  I tried creating 
maven profiles to have it build just the minimum pieces, but this proved to be 
non-trivial as maven does not seem to respect the “modules” tag in profiles, 
and the nifi-assembly project requires all of the optional nars to also be 
built.  Creating this might be too complicated for me.  Has anybody thought 
about supporting a quick/minimal build?

3.   The “nifi-aws-processors” is challenging to use because in one project 
they have defined the interfaces for controller services 
(AWSCredentialsProviderService) and also included the services.  I tried 
creating my own nar with an implementation of AWSCredentialsProviderService, 
but since it depended on “nifi-aws-processors”, my nar was also re-hosting all 
of the AWS processors.  I was facing a lot of classpath issues because of this. 
 I worked around this by using maven shade to shade in the 
“nifi-aws-processors” into my own jar, but excluding the services it provided.  
Then in my nar project I had to exclude the dependency on 
“nifi-aws-processors”.  This was a lot of work on my part when all they needed 
to do was split that project into api, api-nar, impl, and impl-nar.

4. 

Re: GenerateFlowFile.java

2017-02-02 Thread Oleg Zhurakousky
Alessio

The onScheduled() method is called only once, when the processor is started. Started 
means scheduled to execute, hence the name of the operation. When the processor is 
scheduled, its scheduler triggers the execution of the processor, and the same 
scheduler controls how often onTrigger() is called. For example, with the default 
timer-driven scheduling set to 1 sec, the onTrigger() operation will be invoked 
every second.

So, to answer your question: in short, your use case should be implemented in 
onTrigger(), while onScheduled() is where you could/should obtain the 
reference to your service.
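A minimal sketch of that shape (MY_SERVICE, MyService, and fetch() are placeholders
for your own property descriptor and service):

    @OnScheduled
    public void setup(ProcessContext context) {
        // resolve the controller service reference once, at start time
        this.service = context.getProperty(MY_SERVICE).asControllerService(MyService.class);
    }

    @Override
    public void onTrigger(ProcessContext context, ProcessSession session) throws ProcessException {
        byte[] payload = service.fetch(); // ask your service for the file content
        FlowFile flowFile = session.create();
        flowFile = session.write(flowFile, out -> out.write(payload));
        session.transfer(flowFile, REL_SUCCESS);
    }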

Does that clarify?

Cheers
Oleg
> On Feb 2, 2017, at 4:43 AM, Alessio Palma  
> wrote:
> 
> Hello  all,
> 
> I'm reading the GenerateFlowFile processor, which runs with no input files. 
> I need to write a processor very close to this one, which accepts no input, 
> but when scheduled has to output into its queue the file returned from 
> a service it calls.
> 
> I did not understand how the framework calls this kind of processor.
> There are both onScheduled and onTrigger methods.
> In which order are they called ?
> Do I need both?
> 
> AP



Re: [DISCUSS] Proposal for an Apache NiFi sub-project - NiFi Registry

2017-02-09 Thread Oleg Zhurakousky
Bryan

While I am a huge +1 for breaking NiFi up into a manageable structure, I am still 
unclear as to the scope of this project.

1. As discussed in [3], there are public repositories and services out there 
that have been designed for the exact problem we are facing in NiFi today. 
Those repositories and services (e.g., Bintray) have already been embraced by the 
larger community of developers and open source projects. So at the 
very least we need some answer/story about why we decided NOT to rely 
on them (if that is the decision).
2. On the flip side, it is perfectly valid to assume that someone or some 
organization will not find solutions such as Bintray attractive or usable in 
their environment and may prefer a custom solution. In that case having 
alternatives may help, but that in itself is not a solution, since all of the 
available alternatives may also NOT be sufficient. 
In other words, relying on any existing solution, or building multiple ones, may 
not solve the issue I (the customer) may have. This leaves only one option: a 
project that defines a set of strategies (interfaces) that would allow one either 
to integrate with an existing solution or to implement a new custom one 
(see the sketch below). We (the NiFi community) could then contribute integration 
modules to such a project based on those pre-defined Registry interfaces. IMHO 
such an approach will foster more community collaboration and acceptance by 
exposing such an integration model.
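To make the "strategies (interfaces)" idea concrete, here is a purely hypothetical
shape of such a contract (the names are made up; this is not a proposed API):

    public interface FlowRegistryClient {
        List<String> listFlows();
        VersionedFlow getFlow(String flowId, int version);
        void publishFlow(VersionedFlow flow);
    }

with one implementation backed by, say, Git and another by Bintray, both
contributed as integration modules.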

Also, regardless of the decision, the bigger effort will be overhauling the NiFi 
deployment/packaging model and UI interaction, since with external registries 
NARs, flows, etc. would need to be pulled/removed/updated while NiFi is running, 
and that I think is a much larger piece of work.

In the end I am +1 for the project and super excited that the idea is finally 
starting to get traction, but I wanted to contribute the above comments to 
ensure that the project's scope is clearly defined.

Cheers
Oleg
> On Feb 8, 2017, at 4:50 PM, Bryan Bende  wrote:
> 
> NiFi Community,
> 
> I'd like to initiate a discussion around creating a sub-project of
> NiFi to encompass the registry capabilities outlined in several of the
> feature proposals on the Wiki [1]. A possible name for this
> sub-project is simply "NiFi Registry".
> 
> Currently there are two feature proposals that call for NiFi to
> interact with an external registry:
> 
> Configuration Management of Flows [2]  - This feature proposal calls
> for a flow registry where versioned flows can be published and
> consumed, allowing flows to be easily migrated between environments .
> 
> Extension Registry [3] - This feature proposal calls for a place to
> publish NARs containing extensions, allowing NiFi to decouple itself
> from including all of the NARs in the main distribution, and allowing
> better discovery of available extensions.
> 
> The idea would be to create a NiFi Registry sub-project, with
> sub-modules for the various registries. These registries could then be
> packaged and distributed as a single artifact and run as a
> complimentary application to NiFi and MiNiFi. NiFi would not require
> the registry application, however, a given NiFi could be configured to
> know about one or more flow registries, or one or more extension
> registries.
> 
> Creating a sub-project would allow the registry code to evolve
> independently of NiFi and be released on it's own timeline. In
> addition, it would make tracking issues/work much clearer through a
> separate JIRA.
> 
> Please discuss and provide and thoughts or feedback.
> 
> Thanks,
> 
> Bryan
> 
> [1] https://cwiki.apache.org/confluence/display/NIFI/NiFi+Feature+Proposals
> [2] 
> https://cwiki.apache.org/confluence/display/NIFI/Configuration+Management+of+Flows
> [3] 
> https://cwiki.apache.org/confluence/display/NIFI/Extension+Repositories+%28aka+Extension+Registry%29+for+Dynamically-loaded+Extensions
> 



Re: [DISCUSS] Proposal for an Apache NiFi sub-project - NiFi Registry

2017-02-09 Thread Oleg Zhurakousky
Joe

Versioning is probably the second main driver for externalization as a 
concept. I believe in it so strongly that I would probably cast a -1 if 
versioning were not part of it ;)
And some of the existing ready-to-use solutions already provide versioning of 
artifacts (Bintray, GitHub, etc.), which sets the expectation, so not having it 
would be sad. . . so sad ;)

Cheers
Oleg
> On Feb 9, 2017, at 8:45 AM, Joe Skora  wrote:
> 
> +1 as well.  Glad to help if needed.
> 
> Do you think this will include support for versioned Processors and
> Controller Services, such that I could have SuperWidgetProcessor 1.0 and
> SuperWidgetProcessor 1.5 on the same flow?
> 
> On Thu, Feb 9, 2017 at 8:42 AM, Matt Gilman  wrote:
> 
>> +1. I really like this idea. Should ease deployments between instances and
>> facilitate a better UX throughout the lifecycle of a dataflow.
>> 
>> Matt
>> 
>> On Thu, Feb 9, 2017 at 7:52 AM, Koji Kawamura 
>> wrote:
>> 
>>> Huge +1! I was excited to read the design documents :)
>>> Agree with flow versioning at ProcessGroup level.
>>> 
>>> I don't know if this is helpful, but here is an experimental project
>>> of mine which tries to achieve the same goal, versioning ProcessGroup.
>>> https://github.com/ijokarumawak/nifi-deploy-process-group
>>> 
>>> It contains logics that will probably need to be implemented such as
>>> checking remaining flow files in the queues around ProcessGroup, or
>>> checking number of input/output ports from/to the ProcessGroup ... etc
>>> 
>>> Hope that helps in some way, and I'd like to help make this come true!
>>> 
>>> Thanks,
>>> Koji
>>> 
>>> On Thu, Feb 9, 2017 at 9:43 PM, Joe Gresock  wrote:
 +1, I've been waiting for this idea since NiFi went open source!
 
 On Thu, Feb 9, 2017 at 4:53 AM, Ricky Saltzer 
>>> wrote:
 
> I'm a big +1 to this proposal. It would solve a huge burden that is
>>> keeping
> NARs up to date in environments where there's alot of teams that share
>>> NARs
> but have separate NiFi deployments and repositories.
> 
> On Feb 8, 2017 7:09 PM, "Peter Wicks (pwicks)" 
>>> wrote:
> 
>> I think a lot of us are facing the same challenges, and this sounds
>>> like
> a
>> step in the right direction.
>> I had actually started to dig into a Flow Configuration plugin that
>>> would
>> use Git branches to copy/sync flows between instances/environments,
>>> and
>> keep them versioned; hadn't gotten very far.
>> 
>> -Original Message-
>> From: Jeremy Dyer [mailto:jdy...@gmail.com]
>> Sent: Wednesday, February 08, 2017 3:54 PM
>> To: dev@nifi.apache.org
>> Subject: Re: [DISCUSS] Proposal for an Apache NiFi sub-project -
>> NiFi
>> Registry
>> 
>> Bryan - I think this is a fantastic idea. I would also think this
>>> would
> be
>> a good place to add a "device registry" as well. It makes much more
>>> sense
>> in my mind to have these efforts in sub projects outside of the
> nifi/minifi
>> core.
>> 
>> On Wed, Feb 8, 2017 at 4:50 PM, Bryan Bende 
>> wrote:
>> 
>>> NiFi Community,
>>> 
>>> I'd like to initiate a discussion around creating a sub-project of
>>> NiFi to encompass the registry capabilities outlined in several of
>>> the
>>> feature proposals on the Wiki [1]. A possible name for this
>>> sub-project is simply "NiFi Registry".
>>> 
>>> Currently there are two feature proposals that call for NiFi to
>>> interact with an external registry:
>>> 
>>> Configuration Management of Flows [2]  - This feature proposal
>> calls
>>> for a flow registry where versioned flows can be published and
>>> consumed, allowing flows to be easily migrated between
>> environments
>>> .
>>> 
>>> Extension Registry [3] - This feature proposal calls for a place
>> to
>>> publish NARs containing extensions, allowing NiFi to decouple
>> itself
>>> from including all of the NARs in the main distribution, and
>>> allowing
>>> better discovery of available extensions.
>>> 
>>> The idea would be to create a NiFi Registry sub-project, with
>>> sub-modules for the various registries. These registries could
>> then
>>> be
>>> packaged and distributed as a single artifact and run as a
>>> complimentary application to NiFi and MiNiFi. NiFi would not
>> require
>>> the registry application, however, a given NiFi could be
>> configured
>>> to
>>> know about one or more flow registries, or one or more extension
>>> registries.
>>> 
>>> Creating a sub-project would allow the registry code to evolve
>>> independently of NiFi and be released on it's own timeline. In
>>> addition, it would make tracking issues/work much clearer through
>> a
>>> separate JIRA.
>>> 
>>> Please discuss and provide and thoughts or feedback.
>>> 
>>> Thanks,
>>> 
>>> Bryan
>>> 
>>> [1] https://cwiki.apache.org/conflue

Re: [VOTE] Establish Registry, a sub-project of Apache NiFi

2017-02-10 Thread Oleg Zhurakousky
+1 Here as well. We desperately need it. 

> On Feb 10, 2017, at 12:11 PM, Jeremy Dyer  wrote:
> 
> +1 non-binding. I like the separation and I see a lot of need for this in
> the community.
> 
> On Fri, Feb 10, 2017 at 12:03 PM, Matt Burgess  wrote:
> 
>> +1 binding
>> 
>> On Fri, Feb 10, 2017 at 11:40 AM, Bryan Bende  wrote:
>>> All,
>>> 
>>> Following a solid discussion for the past few days [1] regarding the
>>> establishment of Registry as a sub-project of Apache NiFi, I'd like to
>>> call a formal vote to record this important community decision and
>>> establish consensus.
>>> 
>>> The scope of this project is to define APIs for interacting with
>>> resources that one or more NiFi instances may be interested in, such
>>> as a flow registry for versioned flows, an extension registry for
>>> extensions, and possibly other configuration resources in the future.
>>> In addition, this project will provide reference implementations of
>>> these registries, with the goal of allowing the community to build a
>>> diverse set of implementations, such as a Git provider for versioned
>>> flows, or a bintray provider for an extension registry.
>>> 
>>> I am a +1 and looking forward to the future work in this area.
>>> 
>>> The vote will be open for 72 hours and be a majority rule vote.
>>> 
>>> [ ] +1 Establish Registry, a subproject of Apache NiFi
>>> [ ]   0 Do not care
>>> [ ]  -1 Do not establish Registry, a subproject of Apache NiFi
>>> 
>>> Thanks,
>>> 
>>> Bryan
>>> 
>>> [1] http://mail-archives.apache.org/mod_mbox/nifi-dev/201702.
>> mbox/%3CCALo_M19euo2LLy0PVWmE70FzeLhQRcCtX6TC%3DqoiBVfn4zFQMA%40mail.
>> gmail.com%3E
>> 



Re: Creating a custom DatabaseAdapter

2017-02-17 Thread Oleg Zhurakousky
Stanislav

Sorry to hear you’re having problems. 
Is there any way we can look at the code (e.g., via GitHub)? Also, when you say 
your DatabaseAdapter is not picked up, do you see any messages in 
the logs?

It is hard to determine the issue without more info, but symptoms like 
this usually point to a misconfiguration of some type and are usually very easy 
to fix once diagnosed. 
Let us know what you can do.

Also, a lot of times it helps to just copy another working bundle, rename 
the artifact(s), build, and make sure you can see it being deployed. Then you can 
modify the code any way you like, knowing that at the very least you don’t have 
any configuration issues.
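For background, those adapters are discovered via java.util.ServiceLoader, so one
quick sanity check is whether the loader can see your entry at all from the
classloader in question (illustrative sketch):

    ServiceLoader<DatabaseAdapter> loader = ServiceLoader.load(DatabaseAdapter.class);
    for (DatabaseAdapter adapter : loader) {
        System.out.println("Found adapter: " + adapter.getName());
    }

If your adapter does not show up there, the services file or NAR classloader
visibility is the thing to chase.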

Let us know
Cheers
Oleg
> On Feb 17, 2017, at 5:05 AM, Stanislav  wrote:
> 
> Hi,
> 
> I would like to create a custom 
> org.apache.nifi.processors.standard.db.DatabaseAdapter; is this possible?
> I have tried creating a new class that implements the interface, specifying 
> the full class name in a 
> META-INF/services/org.apache.nifi.processors.standard.db.DatabaseAdapter 
> file, and building a NAR.
> But this does not appear to be working: the custom processors I have in that 
> NAR are picked up, but the DatabaseAdapter isn't.
> Any tips on how to make it work?
> 
> Best regards,
> Stanislav.
> 



Re: Email processor test failure

2017-02-21 Thread Oleg Zhurakousky
Chris

ListenSMTP is essentially an email server: the processor allows you to send 
emails to it, so by running it you are starting a simple email server.
That said, something is obviously not going well on your end. Do you have any 
firewalls or other specific networking setup that may be causing this?
Anyway, you must have some stack traces to share.

Cheers
Oleg

> On Feb 21, 2017, at 1:57 PM, Chris Herrera  wrote:
> 
> Hi All,
> 
> Apologies for apparently going braindead…I’m sure I’m doing something silly…. 
> I am in the process of starting to work on some custom processors and 
> controller services, and I am running into an issue with a few tests in the 
> email processor.
> 
> Specifically org.apache.nifi.processors.email.TestListenSMTP is timing out on 
> validateSuccesfulInteraction… when stepping through it seems as if it is 
> timing out. However, I also see that there is a test smtp server that should 
> be stood up for the test…has anyone run into this before.
> 
> Thanks a lot!
> Chris Herrera
> 
> Tests run: 3, Failures: 3, Errors: 0, Skipped: 0, Time elapsed: 48.071 sec 
> <<< FAILURE! - in org.apache.nifi.processors.email.TestListenSMTP
> validateSuccessfulInteraction(org.apache.nifi.processors.email.TestListenSMTP)
>   Time elapsed: 15.04 sec  <<< FAILURE!
> java.lang.AssertionError: null
> at org.junit.Assert.fail(Assert.java:86)
> at org.junit.Assert.assertTrue(Assert.java:41)
> at org.junit.Assert.assertTrue(Assert.java:52)
> at 
> org.apache.nifi.processors.email.TestListenSMTP.validateSuccessfulInteraction(TestListenSMTP.java:91)
> 
> validateSuccessfulInteractionWithTls(org.apache.nifi.processors.email.TestListenSMTP)
>   Time elapsed: 16.518 sec  <<< FAILURE!
> java.lang.AssertionError: null
> at org.junit.Assert.fail(Assert.java:86)
> at org.junit.Assert.assertTrue(Assert.java:41)
> at org.junit.Assert.assertTrue(Assert.java:52)
> at 
> org.apache.nifi.processors.email.TestListenSMTP.validateSuccessfulInteractionWithTls(TestListenSMTP.java:157)
> 
> validateTooLargeMessage(org.apache.nifi.processors.email.TestListenSMTP)  
> Time elapsed: 16.513 sec  <<< FAILURE!
> java.lang.AssertionError: null
> at org.junit.Assert.fail(Assert.java:86)
> at org.junit.Assert.assertTrue(Assert.java:41)
> at org.junit.Assert.assertTrue(Assert.java:52)
> at 
> org.apache.nifi.processors.email.TestListenSMTP.validateTooLargeMessage(TestListenSMTP.java:203)
> 



Re: Email processor test failure

2017-02-21 Thread Oleg Zhurakousky
Chris

Also, looking through the various posts on Googlenet I am wondering if you 
tinkered with your /etc/hosts file. I know you stated that you've verified that 
"everything is normal", but I guess I am daring you to take a second look. It 
appears that your server starts successfully, so the @Before operation seems to 
succeed. So what I would do (if you can) is put a breakpoint at the beginning 
of any test (that would mean that the server is started), see what the port is, 
and try to telnet to it - 'telnet localhost <port>' - and see if you can connect 
to it (all that while in a debug session).
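
If telnet is not handy, the same probe takes only a couple of lines of Java in a 
scratch test (just a sketch; 'port' is whatever port the breakpoint or the log 
shows):

import java.io.IOException;
import java.net.Socket;

// Hypothetical connectivity check against the embedded SMTP server.
try (Socket socket = new Socket("localhost", port)) {
    System.out.println("Connected to " + socket.getRemoteSocketAddress());
} catch (IOException e) {
    System.out.println("Cannot connect: " + e.getMessage());
}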

Anyway, keep us posted. This is strange indeed.

Cheers
Oleg

> On Feb 21, 2017, at 8:46 PM, Koji Kawamura  wrote:
> 
> Hi Chris,
> 
> Are you running this test with multi-thread mode?
> The test has ScheduledExecutorService private instance field and it's
> replaced @Before each test method. If it runs with multi-threaded
> mode, it might be possible the port variable gets confused among each
> test.
> Each test starts different SMTP server with different port, and
> Runnable in each test supposed to use the same port with the
> corresponding SMTP server.
> I'm not an expert of how java lexical scope works, but the log looks so..
> 
> If you're using a mvn flag such as -T4, please try without that.
> 
> Thanks,
> Koji
> 
> On Wed, Feb 22, 2017 at 5:00 AM, Chris Herrera
>  wrote:
>> Thanks All!
>> 
>> Here is some additional information:
>> 
>> Interestingly enough it seems in the surefire report that the SMTP server is 
>> starting on a different port than the test is trying to connect to, unless 
>> I’m reading it wrong.
>> 
>> Nothing else strange on the networking side, verified my hosts file is 
>> normal, no weird firewalls, etc...
>> 
>> Env Info:
>> Apache Maven 3.3.9 (bb52d8502b132ec0a5a3f4c09453c07478323dc5; 
>> 2015-11-10T10:41:47-06:00)
>> Maven home: /usr/local/Cellar/maven/3.3.9/libexec
>> Java version: 1.8.0_121, vendor: Oracle Corporation
>> Java home: 
>> /Library/Java/JavaVirtualMachines/jdk1.8.0_121.jdk/Contents/Home/jre
>> Default locale: en_US, platform encoding: UTF-8
>> OS name: "mac os x", version: "10.12.3", arch: "x86_64", family: “mac"
>> 
>> Additional Surefire Report info:
>> [pool-15-thread-1] INFO org.subethamail.smtp.server.SMTPServer - SMTP server 
>> *:58738 starting
>> [org.subethamail.smtp.server.ServerThread *:58738] INFO 
>> org.subethamail.smtp.server.ServerThread - SMTP server *:58738 started
>> [pool-18-thread-1] INFO org.subethamail.smtp.server.SMTPServer - SMTP server 
>> *:58840 starting
>> [org.subethamail.smtp.server.ServerThread *:58840] INFO 
>> org.subethamail.smtp.server.ServerThread - SMTP server *:58840 started
>> org.apache.commons.mail.EmailException: Sending the email to the following 
>> server failed : localhost:58738
>> at org.apache.commons.mail.Email.sendMimeMessage(Email.java:1421)
>> at org.apache.commons.mail.Email.send(Email.java:1448)
>> at 
>> org.apache.nifi.processors.email.TestListenSMTP$1.run(TestListenSMTP.java:78)
>> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>> at 
>> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>> at 
>> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>> at 
>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>> at 
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>> at java.lang.Thread.run(Thread.java:745)
>> Caused by: javax.mail.MessagingException: [EOF]
>> at com.sun.mail.smtp.SMTPTransport.issueCommand(SMTPTransport.java:2074)
>> at com.sun.mail.smtp.SMTPTransport.helo(SMTPTransport.java:1469)
>> at com.sun.mail.smtp.SMTPTransport.protocolConnect(SMTPTransport.java:660)
>> at javax.mail.Service.connect(Service.java:295)
>> at javax.mail.Service.connect(Service.java:176)
>> at javax.mail.Service.connect(Service.java:125)
>> at javax.mail.Transport.send0(Transport.java:194)
>> at javax.mail.Transport.send(Transport.java:124)
>> at org.apache.commons.mail.Email.sendMimeMessage(Email.java:1411)
>> ... 9 more
>> [pool-21-thread-1] INFO org.subethamail.smtp.server.SMTPServer - SMTP server 
>> *:58923 starting
>> [org.subethamail.smtp.server.ServerThread *:58923] INFO 
>> org.subethamail.smtp.server.ServerThread - SMTP server *:58923 started
>> org.apache.commons.mail.EmailException: Sending the email to the following 
>> server failed : localhost:58840
>> at org.apache.commons.mail.Email.sendMimeMessage(Email.java:1421)
>> at org.apache.commons.mail.Email.send(Email.java:1448)
>> at 
>> org.apache.nifi.processors.email.TestListenSMTP$2.run(TestListenSMTP.java:144)
>> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>> at 
>> java.util.concurrent.ScheduledT

Re: [ANNOUNCE] New Apache NiFi Committer Jeff Storck

2017-02-22 Thread Oleg Zhurakousky
Congrats Jeff!
Well earned!

> On Feb 21, 2017, at 7:55 PM, Scott Aslan  wrote:
> 
> Congrats Jeff! Well deserved!
> 
> On Tue, Feb 21, 2017 at 6:36 PM, Koji Kawamura 
> wrote:
> 
>> Congratulations Jeff!
>> 
>> On Wed, Feb 22, 2017 at 7:52 AM, Andre  wrote:
>>> Welcome aboard Jeff! Well deserved
>>> 
>>> On Wed, Feb 22, 2017 at 6:41 AM, Aldrin Piri  wrote:
>>> 
 On behalf of the Apache NiFI PMC, I am very pleased to announce that
>> Jeff
 Storck has accepted the PMC's invitation to become a committer on the
 Apache NiFi project. We greatly appreciate all of Jeff's hard work and
 generous contributions and look forward to continued involvement in the
 project.
 
 Jeff's contributions include significant efforts toward upgrade and
 migration processes inclusive of flow layout when upgrading from 0.x to
>> 1.x
 and the ZooKeeper migration toolkit.
 
 Congrats, Jeff!
 
>> 
> 
> 
> 
> -- 
> *Scott Aslan = new WebDeveloper(*
> *{"location": {"city": "Saint Cloud","state": "FL",
>"zip": "34771"},"contact": {"email":
> "scottyas...@gmail.com ","linkedin":
> "http://www.linkedin.com/in/scottyaslan
> "}});*



Re: [ANNOUNCE] New Apache NiFi PMC Member - James Wing

2017-02-22 Thread Oleg Zhurakousky
Awesome! Well earned! Congrats!

> On Feb 22, 2017, at 10:09 AM, Joe Witt  wrote:
> 
> congrats james and thanks for your efforts!
> 
> On Wed, Feb 22, 2017 at 10:03 AM, Joe Skora  wrote:
>> Congrats James!
>> 
>> On Wed, Feb 22, 2017 at 9:58 AM, Aldrin Piri  wrote:
>> 
>>> Team,
>>> 
>>> On behalf of the Apache NiFi PMC, I am pleased to announce that James Wing
>>> has accepted the PMC's invitation to join the Apache NiFi PMC.  We
>>> greatly appreciate all of James's hard work and generous contributions to
>>> the project. We look forward to his continued involvement in the project.
>>> 
>>> James started out contributing in January 2016 with review and assistance
>>> on all things AWS. After receiving committer status, James embraced the
>>> role and continued to assist in fostering and growing the community. Beyond
>>> code contributions, James is active in the community lists, votes, reviews
>>> and external sites helping solve questions about NiFi.
>>> 
>>> Please join us in congratulating and welcoming James to the Apache NiFi
>>> PMC.
>>> 
>>> Congratulations and welcome, James!
>>> 
> 



Re: [DISCUSS] Scale-out/Object Storage - taming the diversity of processors

2017-02-22 Thread Oleg Zhurakousky
I’ll second Pierre

Yes, with the current deployment model the number of processors and the size of 
the NiFi distribution are a concern, simply because both grow with each release. 
But that should not be the driver to start jamming more functionality into 
existing processors which on the surface may look related (even if they are).
Basically, a processor should never be complex with regard to being understood 
by a non-technical end user, so "specialization" always takes precedence here, 
since it limits "configuration" and thus makes such a processor simpler. It also 
helps the developer with maintenance and management of such a processor. Also, 
having multiple related processors will promote healthy competition, where my 
MyPutHDFS may for certain cases be better/faster than YourPutHDFS - and why not 
have both?

The "artifact registry" (flow, extension, template, etc.) is the only answer 
here, since it removes the "proliferation" and the need for "taming" anything 
from the picture. With an "artifact registry", whether there is one processor or 
one million, the NiFi size/state will always remain constant and small.

Cheers
Oleg
> On Feb 22, 2017, at 6:05 AM, Pierre Villard  
> wrote:
> 
> Hey guys,
> 
> Thanks for the thread Andre.
> 
> +1 to James' answer.
> 
> I understand the interest that would provide a single processor to connect
> to all the back ends... and we could document/improve the PutHDFS to ease
> such use but I really don't think that it will benefit the user experience.
> That may be interesting in some cases for some users but I don't think that
> would be a majority.
> 
> I believe NiFi is great for one reason: you have a lot of specialized
> processors that are really easy to use and efficient for what they've been
> designed for.
> 
> Let's ask ourselves the question the other way: with the NiFi registry on
> its way, what is the problem having multiple processors for each back end?
> I don't really see the issue here. OK we have a lot of processors (but I
> believe this is a good point for NiFi, for user experience, for
> advertising, etc. - maybe we should improve the processor listing though,
> but again, this will be part of the NiFi Registry work), it generates a
> heavy NiFi binary (but that will be solved with the registry), but that's
> all, no?
> 
> Also agree on the positioning aspect: IMO NiFi should not be highly tied to
> the Hadoop ecosystem. There is a lot of users using NiFi with absolutely no
> relation to Hadoop. Not sure that would send the good "signal".
> 
> Pierre
> 
> 
> 
> 
> 2017-02-22 6:50 GMT+01:00 Andre :
> 
>> Andrew,
>> 
>> 
>> On Wed, Feb 22, 2017 at 11:21 AM, Andrew Grande 
>> wrote:
>> 
>>> I am observing one assumption in this thread. For some reason we are
>>> implying all these will be hadoop compatible file systems. They don't
>>> always have an HDFS plugin, nor should they as a mandatory requirement.
>>> 
>> 
>> You are partially correct.
>> 
>> There is a direct assumption in the availability of a HCFS (thanks Matt!)
>> implementation.
>> 
>> This is the case with:
>> 
>> * Windows Azure Blob Storage
>> * Google Cloud Storage Connector
>> * MapR FileSystem (currently done via NAR recompilation / mvn profile)
>> * Alluxio
>> * Isilon (via HDFS)
>> * others
>> 
>> But I would't say this will apply to every other use storage system and in
>> certain cases may not even be necessary (e.g. Isilon scale-out storage may
>> be reached using its native HDFS compatible interfaces).
>> 
>> 
>> Untie completely from the Hadoop nar. This allows for effective minifi
>>> interaction without the weight of hadoop libs for example. Massive size
>>> savings where it matters.
>>> 
>>> 
>> Are you suggesting a use case were MiNiFi agents interact directly with
>> cloud storage, without relying on NiFi hubs to do that?
>> 
>> 
>>> For the deployment, it's easy enough for an admin to either rely on a
>>> standard tar or rpm if the NAR modules are already available in the
>> distro
>>> (well, I won't talk registry till it arrives). Mounting a common
>> directory
>>> on every node or distributing additional jars everywhere, plus configs,
>> and
>>> then keeping it consistent across is something which can be avoided by
>>> simpler packaging.
>>> 
>> 
>> As long the NAR or RPM supports your use-case, which is not the case of
>> people running NiFi with MapR-FS for example. For those, a recompilation is
>> required anyway. A flexible processor may remove the need to recompile (I
>> am currently playing with the classpath implication to MapR users).
>> 
>> Cheers
>> 



Re: [DISCUSS] Scale-out/Object Storage - taming the diversity of processors

2017-02-22 Thread Oleg Zhurakousky
Adam

I 100% agree with your comment on "official/sanctioned". With an external 
artifact registry such as BinTray, for example, or GitHub, one cannot control 
what is there, only how to get it. The final decision is left to the end user.
Artifacts could be rated and/or Apache NiFi (and/or commercial distributions of 
NiFi) can “endorse” and/or “un-endorse” certain artifacts and IMHO that is 
perfectly fine. On top of that a future distribution of NiFi can have 
configuration to account for the “endorsed/supported” artifacts, yet it should 
not stop one from downloading and trying something new.

Cheers
Oleg

> On Feb 22, 2017, at 12:43 PM, Adam Lamar  wrote:
> 
> Hey all,
> 
> I can understand Andre's perspective - when I was building the ListS3
> processor, I mostly just copied the bits that made sense from ListHDFS and
> ListFile. That worked, but it's a poor way to ensure consistency across
> List* processors.
> 
> As a once-in-a-while contributor, I love the idea that community
> contributions are respected and we're not dropping them, because they solve
> real needs right now, and it isn't clear another approach would be better.
> 
> And I disagree slightly with the notion that an artifact registry will
> solve the problem - I think it could make it worse, at least from a
> consistency point of view. Taming _is_ important, which is one reason
> registry communities have official/sanctioned modules. Quality and
> interoperability can vary vastly.
> 
> By convention, it seems like NiFi already has a handful of well-understood
> patterns - List, Fetch, Get, Put, etc all mean something specific in
> processor terms. Is there a reason not to formalize those patterns in the
> code as well? That would help with processor consistency, and if done
> right, it may even be easier to write new processors, fix bugs, etc.
> 
> For example, ListS3 initially shipped with some bad session commit()
> behavior, which was obvious once identified, but a generalized
> AbstractListProcessor (higher level that the one that already exists) could
> make it easier to avoid this class of bug.
> 
> Admittedly this could be a lot of work.
> 
> Cheers,
> Adam
> 
> 
> 
> On Wed, Feb 22, 2017 at 8:38 AM, Oleg Zhurakousky <
> ozhurakou...@hortonworks.com> wrote:
> 
>> I’ll second Pierre
>> 
>> Yes, with the current deployment model the number of processors and the
>> size of the NiFi distribution are a concern, simply because both grow with
>> each release. But that should not be the driver to start jamming more
>> functionality into existing processors which on the surface may look
>> related (even if they are).
>> Basically, a processor should never be complex with regard to being
>> understood by a non-technical end user, so "specialization" always takes
>> precedence here, since it limits "configuration" and thus makes such a
>> processor simpler. It also helps the developer with maintenance and
>> management of such a processor. Also, having multiple related processors
>> will promote healthy competition, where my MyPutHDFS may for certain cases
>> be better/faster than YourPutHDFS - and why not have both?
>> 
>> The "artifact registry" (flow, extension, template, etc.) is the only
>> answer here, since it removes the "proliferation" and the need for
>> "taming" anything from the picture. With an "artifact registry", whether
>> there is one processor or one million, the NiFi size/state will always
>> remain constant and small.
>> 
>> Cheers
>> Oleg
>>> On Feb 22, 2017, at 6:05 AM, Pierre Villard 
>> wrote:
>>> 
>>> Hey guys,
>>> 
>>> Thanks for the thread Andre.
>>> 
>>> +1 to James' answer.
>>> 
>>> I understand the interest that would provide a single processor to
>> connect
>>> to all the back ends... and we could document/improve the PutHDFS to ease
>>> such use but I really don't think that it will benefit the user
>> experience.
>>> That may be interesting in some cases for some users but I don't think
>> that
>>> would be a majority.
>>> 
>>> I believe NiFi is great for one reason: you have a lot of specialized
>>> processors that are really easy to use and efficient for what they've
>> been
>>> designed for.
>>> 
>>> Let's ask ourselves the question the other way: with the NiFi registry on
>>> its way, what is the problem having multiple processors for each back
>> end?
>>> I don't really see the issue here. OK we have a lot of processors (but I
>>> believe this is a good point for NiFi, for user exper

Re: [DISCUSS] Scale-out/Object Storage - taming the diversity of processors

2017-02-22 Thread Oleg Zhurakousky
Just wanted to add one more point which is IMHO just as important. . .
Certain "artifacts" (i.e., NARs that depend on libraries which are not ASF 
friendly) may not fit the ASF licensing requirements of a genuine Apache NiFi 
distribution, yet add great value for the greater community of NiFi users, so 
having them NOT be part of the official NiFi distribution is a value in itself.

Cheers
Oleg

> On Feb 22, 2017, at 12:52 PM, Oleg Zhurakousky  
> wrote:
> 
> Adam
> 
> I 100% agree with your comment on "official/sanctioned". With an external 
> artifact registry such as BinTray, for example, or GitHub, one cannot control 
> what is there, only how to get it. The final decision is left to the end 
> user.
> Artifacts could be rated and/or Apache NiFi (and/or commercial distributions 
> of NiFi) can “endorse” and/or “un-endorse” certain artifacts and IMHO that is 
> perfectly fine. On top of that a future distribution of NiFi can have 
> configuration to account for the “endorsed/supported” artifacts, yet it 
> should not stop one from downloading and trying something new.
> 
> Cheers
> Oleg
> 
>> On Feb 22, 2017, at 12:43 PM, Adam Lamar  wrote:
>> 
>> Hey all,
>> 
>> I can understand Andre's perspective - when I was building the ListS3
>> processor, I mostly just copied the bits that made sense from ListHDFS and
>> ListFile. That worked, but it's a poor way to ensure consistency across
>> List* processors.
>> 
>> As a once-in-a-while contributor, I love the idea that community
>> contributions are respected and we're not dropping them, because they solve
>> real needs right now, and it isn't clear another approach would be better.
>> 
>> And I disagree slightly with the notion that an artifact registry will
>> solve the problem - I think it could make it worse, at least from a
>> consistency point of view. Taming _is_ important, which is one reason
>> registry communities have official/sanctioned modules. Quality and
>> interoperability can vary vastly.
>> 
>> By convention, it seems like NiFi already has a handful of well-understood
>> patterns - List, Fetch, Get, Put, etc all mean something specific in
>> processor terms. Is there a reason not to formalize those patterns in the
>> code as well? That would help with processor consistency, and if done
>> right, it may even be easier to write new processors, fix bugs, etc.
>> 
>> For example, ListS3 initially shipped with some bad session commit()
>> behavior, which was obvious once identified, but a generalized
>> AbstractListProcessor (higher level that the one that already exists) could
>> make it easier to avoid this class of bug.
>> 
>> Admittedly this could be a lot of work.
>> 
>> Cheers,
>> Adam
>> 
>> 
>> 
>> On Wed, Feb 22, 2017 at 8:38 AM, Oleg Zhurakousky <
>> ozhurakou...@hortonworks.com> wrote:
>> 
>>> I’ll second Pierre
>>> 
>>> Yes, with the current deployment model the number of processors and the
>>> size of the NiFi distribution are a concern, simply because both grow with
>>> each release. But that should not be the driver to start jamming more
>>> functionality into existing processors which on the surface may look
>>> related (even if they are).
>>> Basically, a processor should never be complex with regard to being
>>> understood by a non-technical end user, so "specialization" always takes
>>> precedence here, since it limits "configuration" and thus makes such a
>>> processor simpler. It also helps the developer with maintenance and
>>> management of such a processor. Also, having multiple related processors
>>> will promote healthy competition, where my MyPutHDFS may for certain cases
>>> be better/faster than YourPutHDFS - and why not have both?
>>> 
>>> The "artifact registry" (flow, extension, template, etc.) is the only
>>> answer here, since it removes the "proliferation" and the need for
>>> "taming" anything from the picture. With an "artifact registry", whether
>>> there is one processor or one million, the NiFi size/state will always
>>> remain constant and small.
>>> 
>>> Cheers
>>> Oleg
>>>> On Feb 22, 2017, at 6:05 AM, Pierre Villard 
>>> wrote:
>>>> 
>>>> Hey guys,
>>>> 
>>>> Thanks for the thread Andre.
>>>> 
>>>> +1 to James' answer.
>>>> 
>>>> I understand the interest that would provide a single processor to
>>> connect
>>>> to all the back ends... and we could document/improve the PutH

Re: session recover behaviour in nifi-jms-processor

2017-02-24 Thread Oleg Zhurakousky
Dominik

Great to hear from you!
As you can see from the inline comment in the code, the recover is there for a 
reason primarily to ensure or should I say limit the possibility of a message 
loads in the event of a processor and/or NiFi crash. 
As you may be aware, in NiFi we do prefer message duplication over message 
loss. That said, I do see several possibilities for improvement especially for 
the high traffic scenarios you are describing. One such improvement would be to 
create a listening container version of ConsumeJMS which has far more control 
over threading and session caching/sharing.
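
To give you an idea of what I mean (a sketch only, not actual processor code - 
the connectionFactory and the process(..) handoff are assumed to exist), such a 
listening container could look along these lines using Spring JMS, which the 
bundle already builds on:

import javax.jms.MessageListener;
import javax.jms.Session;
import org.springframework.jms.listener.DefaultMessageListenerContainer;

DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
container.setConnectionFactory(connectionFactory); // assumed to be configured elsewhere
container.setDestinationName("myTopic");
container.setPubSubDomain(true); // topic rather than queue
container.setSessionAcknowledgeMode(Session.CLIENT_ACKNOWLEDGE);
container.setConcurrentConsumers(4);
container.setMessageListener((MessageListener) message -> {
    try {
        process(message);       // hypothetical handoff to the flow
        message.acknowledge();  // ack only after a successful handoff
    } catch (Exception e) {
        // no ack -> the broker redelivers; duplication is preferred over loss
        throw new RuntimeException(e);
    }
});
container.afterPropertiesSet();
container.start();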

Would you mind raising a JIRA issue - 
https://issues.apache.org/jira/browse/NIFI/ describing everything you just did 
and we’ll handle it.

Cheers
Oleg

> On Feb 24, 2017, at 9:08 AM, Dominik Benz  wrote:
> 
> Hi,
> 
> we're currently using Nifi to consume a relatively high-traffic JMS topic
> (40-60 messages per second). 
> 
> Worked well in principle - however, we then noticed that the the outbound
> rate (i.e. the number of messages we fetched) of the topic was consistently
> slightly higher than its inbound rate (i.e. the actual number of messages
> sent to the topic). This puzzled me, because (being the only subscriber to
> the topic) I would expect inbound and outbound traffic to be identical
> (given we can consume fast enough, which we can).
> 
> Digging deeper, I found that in 
> 
>  org.apache.nifi.jms.processors.JMSConsumer
> 
> the method "consume" performs a session.recover:
> 
> 
> 
> session.recover (as written in the comment) basically stops message delivery
> and re-starts from the last non-acked message. However, I think this leads
> to the following issue in high-traffic contexts:
> 
> 1) several threads perform the JMS session callback in parallel
> 2) each callback performs a session.recover
> 3) during high traffic, the situation arises that the ACKs from another
> thread may not (yet) have arrived at the JMS server
> 4) this implies that the pointer of session.recover will reconsume the
> not-yet-acked message from another thread
> 
> For verification, I performed so far the following steps:
> 
> (a) manual implementation of a simplistic synchronous JMS topic consumer ->
> inbound/outbound identical as expected
> (b) patched nifi-jms-processors and commented out session.recover() -
> inbout/outbound identical as expected
> 
> Any thoughts on this? My current impression is that session.recover in its
> current usage doesn't play well together with the multi-threaded JMS
> consumers. Or do I have any misconception? 
> 
> Thanks & best regards,
>  Dominik
> 
> 
> 
> --
> View this message in context: 
> http://apache-nifi-developer-list.39713.n7.nabble.com/session-recover-behaviour-in-nifi-jms-processor-tp14940.html
> Sent from the Apache NiFi Developer List mailing list archive at Nabble.com.
> 



Re: session recover behaviour in nifi-jms-processor

2017-02-24 Thread Oleg Zhurakousky
Sorry, just noticed the typo. Instead of “limit the possibility of a message 
loads. . .” should be "limit the possibility of a message loss…"

Cheers
Oleg
> On Feb 24, 2017, at 9:28 AM, Oleg Zhurakousky  
> wrote:
> 
> Dominik
> 
> Great to hear from you!
> As you can see from the inline comment in the code, the recover is there for 
> a reason primarily to ensure or should I say limit the possibility of a 
> message loads in the event of a processor and/or NiFi crash. 
> As you may be aware, in NiFi we do prefer message duplication over message 
> loss. That said, I do see several possibilities for improvement especially 
> for the high traffic scenarios you are describing. One such improvement would 
> be to create a listening container version of ConsumeJMS which has far more 
> control over threading and session caching/sharing.
> 
> Would you mind raising a JIRA issue - 
> https://issues.apache.org/jira/browse/NIFI/ describing everything you just 
> did and we’ll handle it.
> 
> Cheers
> Oleg
> 
>> On Feb 24, 2017, at 9:08 AM, Dominik Benz  wrote:
>> 
>> Hi,
>> 
>> we're currently using Nifi to consume a relatively high-traffic JMS topic
>> (40-60 messages per second). 
>> 
>> Worked well in principle - however, we then noticed that the the outbound
>> rate (i.e. the number of messages we fetched) of the topic was consistently
>> slightly higher than its inbound rate (i.e. the actual number of messages
>> sent to the topic). This puzzled me, because (being the only subscriber to
>> the topic) I would expect inbound and outbound traffic to be identical
>> (given we can consume fast enough, which we can).
>> 
>> Digging deeper, I found that in 
>> 
>> org.apache.nifi.jms.processors.JMSConsumer
>> 
>> the method "consume" performs a session.recover:
>> 
>> 
>> 
>> session.recover (as written in the comment) basically stops message delivery
>> and re-starts from the last non-acked message. However, I think this leads
>> to the following issue in high-traffic contexts:
>> 
>> 1) several threads perform the JMS session callback in parallel
>> 2) each callback performs a session.recover
>> 3) during high traffic, the situation arises that the ACKs from another
>> thread may not (yet) have arrived at the JMS server
>> 4) this implies that the pointer of session.recover will reconsume the
>> not-yet-acked message from another thread
>> 
>> For verification, I performed so far the following steps:
>> 
>> (a) manual implementation of a simplistic synchronous JMS topic consumer ->
>> inbound/outbound identical as expected
>> (b) patched nifi-jms-processors and commented out session.recover() -
>> inbout/outbound identical as expected
>> 
>> Any thoughts on this? My current impression is that session.recover in its
>> current usage doesn't play well together with the multi-threaded JMS
>> consumers. Or do I have any misconception? 
>> 
>> Thanks & best regards,
>> Dominik
>> 
>> 
>> 
>> --
>> View this message in context: 
>> http://apache-nifi-developer-list.39713.n7.nabble.com/session-recover-behaviour-in-nifi-jms-processor-tp14940.html
>> Sent from the Apache NiFi Developer List mailing list archive at Nabble.com.
>> 
> 



Re: [REMINDER] Please signoff when committing other people's changes

2017-03-02 Thread Oleg Zhurakousky
Andre

Thanks for the reminder. I admit that I did not know that we require it in the 
Contributor Guide, so thanks for pointing it out.
However, your email did prompt me to look at the purpose and origin of the ‘-s’ 
flag and led me to this thread on Stack Overflow - 
http://stackoverflow.com/questions/1962094/what-is-the-sign-off-feature-in-git-for.

And I am now wondering if we should require it or even use it in the first 
place, since its origin, history, and purpose appear to have more "individual" 
legal implications than showcasing the actual committer.

Thoughts?

Cheers
Oleg

On Mar 2, 2017, at 6:35 AM, Andre <andre-li...@fucs.org> wrote:

dev,

May I remind you to ensure we follow the Contributor Guide and use:

git commit --amend -s

when merging commits from your peers?

While git pretty-format can be used to reveal the committer, I am sure that
all of us will agree that as an inclusive community we value both the
pretty and ugly formats...

So can we give the ugly format the support it deserves and ensure we add
the neat Signed-off-by stamp to the commit message?

Cheers



Re: [REMINDER] Please signoff when committing other people's changes

2017-03-02 Thread Oleg Zhurakousky
Thanks Bryan.

If ‘-s’ is only for showcasing the committer I don’t believe anyone would have 
any issues with it, but my concern at the moment is purely legal, so I am not 
sure who is the right person to answer that, but figured raising the concern is 
the least I can do.

Cheers
Oleg


> On Mar 2, 2017, at 8:16 AM, Bryan Bende  wrote:
> 
> The sign-off is so we can easily see who the reviewer/merger was from
> the git history.
> 
> We can always go back to the JIRA or PR and the reviewer/merger should
> have commented there, but its convenient to see it in the git history
> in my opinion.
> 
> Personally, whenever merging someones contribution I use "git am
> --signoff < patchfile" which I guess is equivalent to doing the ammend
> after applying the patch.
> 
> 
> On Thu, Mar 2, 2017 at 8:05 AM, Oleg Zhurakousky
>  wrote:
>> Andre
>> 
>> Thanks for the reminder. I admit that I did not know that we require it in 
>> the Contributor Guide, so thanks for pointing it out.
>> However, your email did prompt me to look at the purpose and origin of the 
>> ‘-s’ flag and led me to this thread on Stack Overflow - 
>> http://stackoverflow.com/questions/1962094/what-is-the-sign-off-feature-in-git-for.
>> 
>> And I am now wondering if we should require it or even use it in the first 
>> place, since its origin, history, and purpose appear to have more 
>> "individual" legal implications than showcasing the actual committer.
>> 
>> Thoughts?
>> 
>> Cheers
>> Oleg
>> 
>> On Mar 2, 2017, at 6:35 AM, Andre <andre-li...@fucs.org> wrote:
>> 
>> dev,
>> 
>> May I remind you to ensure we follow the Contributor Guide and use:
>> 
>> git commit --amend -s
>> 
>> when merging commits from your peers?
>> 
>> While git pretty-format can be used to reveal the committer, I am sure that
>> all of us will agree that as an inclusive community we value both the
>> pretty and ugly formats...
>> 
>> So can we give the ugly format the support it deserves and ensure we add
>> the neat Signed-off-by stamp to the commit message?
>> 
>> Cheers
>> 
> 



Re: accidently deleted my Nifi Template

2017-03-02 Thread Oleg Zhurakousky
Ssingh

Sorry to hear that.

If you deleted an un-exported template, there is no way to recover it. If, 
however, you did export the template and then deleted the template file, it may 
still be in your recycle bin, or you can use OS-level utilities to try to 
recover the deleted file.

Oleg

> On Mar 2, 2017, at 10:09 AM, ssingh singh  
> wrote:
> 
> Hi,
> I have accidently deleted my nifi template. 
> Please help me how can I recover it. Is there any way so that I can recover 
> it. 
> Thanks,
> Ssingh



Re: Hosting API's on Nifi

2017-03-02 Thread Oleg Zhurakousky
Anil

Aside from opening another port, I don't see how you can overcome this issue. 
HandleHttpRequest essentially starts another web server, and this server needs a 
port to listen on.
Furthermore, there are many other network-based processors that come with NiFi 
that fall into the same category - "processors that need to bind to a port" to 
facilitate communication with external systems - so I'd recommend bringing this 
up with your AWS admins.

I know there is not much help in my reply, but I hope you understand.

Cheers
Oleg

> On Mar 2, 2017, at 2:08 PM, Rai, Anil (GE Digital)  wrote:
> 
> 
> 
> 
> Hello All,
> 
> I  am exposing an API using HandleHttpRequest on my local nifi instance. The 
> HandleHttpRequest processor requires a Listening port that I need to provide. 
> If I enter 80 in that field, the processor fails when it starts saying 
> “unable to initialize the server”. Which is expected as the webserver uses 
> that port to serve the canvas. So if we provide any other random number then 
> it works fine.
> 
> When I promote the above API on the nifi cluster that is hosted on our AWS 
> farm, then we are unable to invoke this API. As only 80 and 443 are opened on 
> AWS.
> 
> How do we overcome this problem?
> 
> Regards
> Anil
> 



Re: NiFI XProc Processor

2017-03-07 Thread Oleg Zhurakousky
Steve

Thank you very much for the desire to contribute and for such a detailed 
explanation of your contribution.
I left some comments inline (marked [OLEG]), so let us know what you think.

Cheers
Oleg

> On Mar 7, 2017, at 10:17 AM, Steve Lawrence  wrote:
> 
> We have developed a NiFi processor that uses XMLCalabash [1] to add
> support for XProc [2] processing. XProc is an XML transformation
> language that defines an XML pipeline, allowing for complex validation,
> transformation, and routing of XML data within the pipeline, using
> existing XML technologies such as RelaxNG, Schematron, XSD Schema,
> XQuery, XSLT, XPath and custom XProc transformations.
> 
> This new processor is mostly straightforward, but we had some questions
> regarding the specific implementation and the handling of non-thread
> safe code. The code is available for viewing here:
> 
> 
> https://opensource.ncsa.illinois.edu/bitbucket/projects/DFDL/repos/nifi-xproc/browse
> 
> In this processor, a property is created to provide an XProc file, which
> defines the pipeline input and output "ports". XML goes into an input
> port, goes through the pipeline, and one or more XML documents exit at
> specified output ports. This NiFi processor maps each output port to a
> dynamic NiFi relationship. It does this mapping in the
> onPropertyModified method when the XProc file property is changed. This
> method also stores the XMLCalabash XRuntime and XPipeline objects (which
> do all the pipeline work) in volatile member variables to be used later
> in onTrigger. The members are saved here to avoid recreating them in
> each call to onTrigger. Is this an acceptable place to do that? It seems
> this normally happens in an @OnScheduled method or in the first call to
> onTrigger, however the objects must be created in onPropertyModified to
> get the output ports, so this does avoid recreating the same objects
> multiple times.
[OLEG] Without getting into more details, both approaches are acceptable. 
However, assigning values in onTrigger() is in certain cases preferable. Those 
cases primarily deal with obtaining references to a remote resource (e.g., a 
connection factory, a socket, etc.), where exception handling is much simpler. I 
can definitely elaborate further if needed and point to a few examples where we 
do that, but it appears that is not the case for you, so your current approach 
seems acceptable. And as far as multi-threading for onTrigger(), such 
assignments are done in a typical synchronized block with a null check.
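
A minimal sketch of that pattern (XPipeline stands in for the XMLCalabash object 
you hold; createPipeline(..) is a hypothetical factory method):

private volatile XPipeline pipeline;

@Override
public void onTrigger(ProcessContext context, ProcessSession session) throws ProcessException {
    if (pipeline == null) {
        synchronized (this) {
            if (pipeline == null) { // re-check inside the lock
                pipeline = createPipeline(context); // hypothetical factory
            }
        }
    }
    // ... use 'pipeline' from here on ...
}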

> Also note that the same objects are created in the
> XML_PIPELINE_VALIDATOR but are not saved due to the validator being
> static, so there is already some duplication. Is there a standard way to
> avoid duplication/is this an acceptable way to handle this?

[OLEG] I don't fully understand the question, but keep in mind that regardless 
of the number of threads, there is only one instance of the processor at any 
given time, so any reference held by such an instance is essentially a singleton 
as well. Does that help?
> 
> The other concern we have is that the XPipeline and XRuntime objects
> created by XML Calabash are not thread safe. To resolve this issue, the
> processor is annotated with @TriggerSerially. Is this the correct
> solution, or is there a some other preferred method. Perhaps ThreadLocal
> or a thread safe pool of XPipeline objects is preferred?

[OLEG] Definitely not ThreadLocal, since there is no guarantee that you will 
get the same thread, or any particular thread, on a subsequent invocation. 
@TriggerSerially is obviously the most defensive way to avoid collisions. That 
said, I probably need to understand the issue better. However, off the top of my 
head, one way of ensuring correctness for such scenarios is to maintain a Map of 
such objects as an instance variable (like a pool), where the key is something 
that ensures you always get the correct object.
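
Something along these lines (again just a sketch; the xprocFile key and the 
newPipeline(..) factory are placeholders):

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

private final ConcurrentMap<String, XPipeline> pipelines = new ConcurrentHashMap<>();

// computeIfAbsent guarantees a single XPipeline per key even under concurrent
// onTrigger() invocations; since XPipeline itself is not thread safe, each
// thread then synchronizes on the instance it obtained before using it.
XPipeline pipeline = pipelines.computeIfAbsent(xprocFile, key -> newPipeline(key));
synchronized (pipeline) {
    // ... run the pipeline ...
}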
> 
> 
> Lastly, is this something the devs would be interested in pulling in
> NiFI, and if not, what could be changed to achieve this? The code is
> licensed as Apache v2 and we would be happy to contribute the code to
> NiFi if deemed acceptable.

[OLEG] This is probably the most difficult question to answer, since the 
immediate answer is we don't know ;) Only the community can decide. So what I 
would suggest is to raise a JIRA - https://issues.apache.org/jira/browse/NIFI - 
and submit a PR for it and see if it gets any traction. Furthermore, we are 
currently working on the concept of an Extension/Artifact Registry to 
accommodate the growing request for more NiFi components. 
> 
> Thanks,
> - Steve
> 
> [1] http://xmlcalabash.com/
> [2] https://www.w3.org/TR/xproc/
> 



Re: Connection Issue

2017-03-14 Thread Oleg Zhurakousky
Anil

When you say "it does not like the connection object. . .” what do you mean by 
that?
Can you please provide stack trace or some other details?

Cheers
Oleg

> On Mar 14, 2017, at 4:06 PM, Anil Rai  wrote:
> 
> Thanks Russ. Yes, we are doing exactly the same thing.
> 
>driverClass = context.getProperty(DRIVER_CLASS).getValue();
>queueName = context.getProperty(QUEUE_NAME).getValue();
>databaseSchema = context.getProperty(DATABASE_SCHEMA).getValue();
>consumerName = context.getProperty(CONSUMER_NAME).getValue();
>eventName = context.getProperty(EVENT_NAME).getValue();
>DBCPService connection =
> context.getProperty(JDBC_CONNECTION_POOL).asControllerService(DBCPService.class);
>Connection conn = connection.getConnection();
> *aqSession = AQDriverManager.createAQSession(connection);*
> 
> The underlined code above fails as it does not like the connection object
> that is returned by the DBCPService.
> 
> Regards
> Anil
> 
> 
> On Tue, Mar 14, 2017 at 2:43 PM, Russell Bateman 
> wrote:
> 
>> Anil,
>> 
>> Typically, your custom processor should have a property, something like
>> 
>>public static final PropertyDescriptor DBCP_SERVICE = new
>> PropertyDescriptor.Builder()
>>.name("Database Connection Pooling Service")
>>.description("The Controller Service that is used to obtain
>> connection to database")
>>.required(true)
>>.identifiesControllerService(DBCPService.class)
>>.build();
>> 
>> When your NiFi user sets up the flow, he or she puts a reference to NiFi's
>> DBCPConnectionPool in it. In configuring that (a ControllerService, you
>> tell it that it's Oracle, location, etc.)
>> 
>> Then, your onTrigger() code would do something like this:
>> 
>>final DBCPService dbcpService = context.getProperty(DBCP_SERVI
>> CE).asControllerService(DBCPService.class);
>> 
>> 
>> and later...
>> 
>> 
>>try (final Connection connection = dbcpService.getConnection())
>>{
>>try (final Statement stmt = 
>> connection.createStatement(ResultSet.TYPE_FORWARD_ONLY,
>> ResultSet.CONCUR_READ_ONLY))
>> 
>> etc.
>> 
>> Does this help?
>> 
>> Russ
>> 
>> 
>> 
>> On 03/14/2017 11:54 AM, Anil Rai wrote:
>> 
>>> We have a use case to connect to oracle database and subscribe to Advanced
>>> Queuing (https://docs.oracle.com/cd/A58617_01/server.804/a58241/ch_aq.htm
>>> ).
>>> Below is the java snippet to establish this connection from a java client.
>>> We can run this in eclipse and consume message from the advanced queue.
>>> **
>>> Class.forName("oracle.jdbc.driver.OracleDriver");
>>> connection = DriverManager.getConnection("
>>> jdbc:oracle:thin:@xxx-yyy.zzz.com:1521/DB1","user", "pwd");
>>> connection.setAutoCommit(true);
>>> Class.forName("oracle.AQ.AQOracleDriver");
>>> aqSession = AQDriverManager.createAQSession(connection);
>>> System.out.println("AQ Session --->" + aqSession);
>>> 
>>> 
>>> We have created a custom processor in Nifi. This processor is getting the
>>> connection string using getConnection function of Standard DBCP service.
>>> The problem is, the connection object that is retrieved from eclipse
>>> versus
>>> what is returned from DBCP service is different. We have made sure we are
>>> referring to the same jar both in eclipse and Nifi (ojdbc7.jar)
>>> It fails @  aqSession = AQDriverManager.createAQSession(connection);
>>> The connection object that comes from DBCP is not what is expected by
>>> AQDriverManager.
>>> 
>>> Any help is greatly appreciated.
>>> 
>>> Thanks
>>> Anil
>>> 
>>> 
>> 



Re: Connection Issue

2017-03-14 Thread Oleg Zhurakousky
Anil

I understand that you are having an issue and we are here to help, but we can 
only do this if you help us just a little more, so it would be very helpful if 
you provided a stack trace (I understand if you have to mask sensitive 
information). 
The “. . .fails saying cannot create AQSession. . .” could be due to various 
reasons and until we see the stack trace everything here would be speculation. 
I hope you understand

Cheers
Oleg

> On Mar 14, 2017, at 4:59 PM, Anil Rai  wrote:
> 
> Here is the behaviour that we have seen so far... hope this helps
> 
>   1. When we run the java code in eclipse, it works and this is the
>   connection object that is printed ->
>   oracle.jdbc.driver.T4CConnection@6f75e721
>   2. When we hard code all the values as mentioned in my first email in a
>   custom processor, deploy that. It works as well. The above connection
>   object gets printed.
>   3. When we change the code in the custom processor to use the DBCP
>   connection service, deploy that. The connection object that gets printed is
>   jdbc:oracle:thin:@oged-scan.og.ge.com:1521/ORPOGPB1 and this does not
>   work. aqSession = AQDriverManager.createAQSession(connection) fails
>   saying cannot create AQSession.
> 
> Thanks
> Anil
> 
> 
> On Tue, Mar 14, 2017 at 4:13 PM, Oleg Zhurakousky <
> ozhurakou...@hortonworks.com> wrote:
> 
>> Anil
>> 
>> When you say "it does not like the connection object. . .” what do you
>> mean by that?
>> Can you please provide stack trace or some other details?
>> 
>> Cheers
>> Oleg
>> 
>>> On Mar 14, 2017, at 4:06 PM, Anil Rai  wrote:
>>> 
>>> Thanks Russ. Yes, we are doing exactly the same thing.
>>> 
>>>   driverClass = context.getProperty(DRIVER_CLASS).getValue();
>>>   queueName = context.getProperty(QUEUE_NAME).getValue();
>>>   databaseSchema = context.getProperty(DATABASE_SCHEMA).getValue();
>>>   consumerName = context.getProperty(CONSUMER_NAME).getValue();
>>>   eventName = context.getProperty(EVENT_NAME).getValue();
>>>   DBCPService connection =
>>> context.getProperty(JDBC_CONNECTION_POOL).asControllerService(
>> DBCPService.class);
>>>   Connection conn = connection.getConnection();
>>> *aqSession = AQDriverManager.createAQSession(connection);*
>>> 
>>> The underlined code above fails as it does not like the connection object
>>> that is returned by the DBCPService.
>>> 
>>> Regards
>>> Anil
>>> 
>>> 
>>> On Tue, Mar 14, 2017 at 2:43 PM, Russell Bateman 
>>> wrote:
>>> 
>>>> Anil,
>>>> 
>>>> Typically, your custom processor should have a property, something like
>>>> 
>>>>   public static final PropertyDescriptor DBCP_SERVICE = new
>>>> PropertyDescriptor.Builder()
>>>>   .name("Database Connection Pooling Service")
>>>>   .description("The Controller Service that is used to obtain
>>>> connection to database")
>>>>   .required(true)
>>>>   .identifiesControllerService(DBCPService.class)
>>>>   .build();
>>>> 
>>>> When your NiFi user sets up the flow, he or she puts a reference to
>> NiFi's
>>>> DBCPConnectionPool in it. In configuring that (a ControllerService, you
>>>> tell it that it's Oracle, location, etc.)
>>>> 
>>>> Then, your onTrigger() code would do something like this:
>>>> 
>>>>   final DBCPService dbcpService = context.getProperty(DBCP_SERVI
>>>> CE).asControllerService(DBCPService.class);
>>>> 
>>>> 
>>>> and later...
>>>> 
>>>> 
>>>>   try (final Connection connection = dbcpService.getConnection())
>>>>   {
>>>>   try (final Statement stmt = connection.createStatement(
>> ResultSet.TYPE_FORWARD_ONLY,
>>>> ResultSet.CONCUR_READ_ONLY))
>>>> 
>>>> etc.
>>>> 
>>>> Does this help?
>>>> 
>>>> Russ
>>>> 
>>>> 
>>>> 
>>>> On 03/14/2017 11:54 AM, Anil Rai wrote:
>>>> 
>>>>> We have a use case to connect to oracle database and subscribe to
>> Advanced
>>>>> Queuing (https://docs.oracle.com/cd/A58617_01/server.804/a58241/
>> ch_aq.htm
>>>>> ).
>>>>> Below is the java snippet to establish this connection from a jav

Re: Connection Issue

2017-03-15 Thread Oleg Zhurakousky
Anil

Unfortunately the attachment didn’t come thru. Perhaps you can just paste the 
relevant part of the exception.

Cheers
Oleg

On Mar 15, 2017, at 8:58 AM, Anil Rai <anilrain...@gmail.com> wrote:

Hi Oleg, Thanks. attached is the log. Let me know if you want us to change the 
log levels and re-run and send you additional logs?


On Tue, Mar 14, 2017 at 5:12 PM, Oleg Zhurakousky <ozhurakou...@hortonworks.com> wrote:
Anil

I understand that you are having an issue and we are here to help, but we can 
only do this if you help us just a little more, so it would be very helpful if 
you provided a stack trace (I understand if you have to mask sensitive 
information).
The “. . .fails saying cannot create AQSession. . .” could be due to various 
reasons and until we see the stack trace everything here would be speculation.
I hope you understand

Cheers
Oleg

> On Mar 14, 2017, at 4:59 PM, Anil Rai <anilrain...@gmail.com> wrote:
>
> Here is the behaviour that we have seen so far... hope this helps
>
>   1. When we run the java code in eclipse, it works and this is the
>   connection object that is printed ->
>   oracle.jdbc.driver.T4CConnection@6f75e721
>   2. When we hard code all the values as mentioned in my first email in a
>   custom processor, deploy that. It works as well. The above connection
>   object gets printed.
>   3. When we change the code in the custom processor to use the DBCP
>   connection service, deploy that. The connection object that gets printed is
>   jdbc:oracle:thin:@oged-scan.og.ge.com:1521/ORPOGPB1 and this does not
>   work. aqSession = AQDriverManager.createAQSession(connection) fails
>   saying cannot create AQSession.
>
> Thanks
> Anil
>
>
> On Tue, Mar 14, 2017 at 4:13 PM, Oleg Zhurakousky <
> ozhurakou...@hortonworks.com> wrote:
>
>> Anil
>>
>> When you say "it does not like the connection object. . .” what do you
>> mean by that?
>> Can you please provide stack trace or some other details?
>>
>> Cheers
>> Oleg
>>
>>> On Mar 14, 2017, at 4:06 PM, Anil Rai <anilrain...@gmail.com> wrote:
>>>
>>> Thanks Russ. Yes, we are doing exactly the same thing.
>>>
>>>   driverClass = context.getProperty(DRIVER_CLASS).getValue();
>>>   queueName = context.getProperty(QUEUE_NAME).getValue();
>>>   databaseSchema = context.getProperty(DATABASE_SCHEMA).getValue();
>>>   consumerName = context.getProperty(CONSUMER_NAME).getValue();
>>>   eventName = context.getProperty(EVENT_NAME).getValue();
>>>   DBCPService connection =
>>> context.getProperty(JDBC_CONNECTION_POOL).asControllerService(
>> DBCPService.class);
>>>   Connection conn = connection.getConnection();
>>> *aqSession = AQDriverManager.createAQSession(connection);*
>>>
>>> The underlined code above fails as it does not like the connection object
>>> that is returned by the DBCPService.
>>>
>>> Regards
>>> Anil
>>>
>>>
>>> On Tue, Mar 14, 2017 at 2:43 PM, Russell Bateman <r...@windofkeltia.com> wrote:
>>>
>>>> Anil,
>>>>
>>>> Typically, your custom processor should have a property, something like
>>>>
>>>>   public static final PropertyDescriptor DBCP_SERVICE = new
>>>> PropertyDescriptor.Builder()
>>>>   .name("Database Connection Pooling Service")
>>>>   .description("The Controller Service that is used to obtain
>>>> connection to database")
>>>>   .required(true)
>>>>   .identifiesControllerService(DBCPService.class)
>>>>   .build();
>>>>
>>>> When your NiFi user sets up the flow, he or she puts a reference to
>> NiFi's
>>>> DBCPConnectionPool in it. In configuring that (a ControllerService, you
>>>> tell it that it's Oracle, location, etc.)
>>>>
>>>> Then, your onTrigger() code would do something like this:
>>>>
>>>>   final DBCPService dbcpService = context.getProperty(DBCP_SERVI
>>>> CE).asControllerService(DBCPService.class);
>>>>
>>>>
>>>> and later...
>>>>
>>>>
>>>>   try (final Connection connection = dbcpService.getConnection())
>>>>   {
>>>>   try (final Statement stm

Re: Connection Issue

2017-03-15 Thread Oleg Zhurakousky
> ERROR [NiFi logging handler] org.apache.nifi.StdErr
>  at
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> 2017-03-14 16:50:43,570 ERROR [NiFi logging handler] org.apache.nifi.StdErr
>  at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
> 2017-03-14 16:50:43,570 ERROR [NiFi logging handler] org.apache.nifi.StdErr
>  at
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
> 2017-03-14 16:50:43,570 ERROR [NiFi logging handler] org.apache.nifi.StdErr
>  at
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
> 2017-03-14 16:50:43,570 ERROR [NiFi logging handler] org.apache.nifi.StdErr
>  at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> 2017-03-14 16:50:43,570 ERROR [NiFi logging handler] org.apache.nifi.StdErr
>  at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> 2017-03-14 16:50:43,570 ERROR [NiFi logging handler] org.apache.nifi.StdErr
>  at java.lang.Thread.run(Thread.java:745)
> 2017-03-14 16:50:43,570 ERROR [NiFi logging handler] org.apache.nifi.StdErr
> java.lang.NullPointerException
> 
> 
> On Wed, Mar 15, 2017 at 9:09 AM, Oleg Zhurakousky <
> ozhurakou...@hortonworks.com> wrote:
> 
>> Anil
>> 
>> Unfortunately the attachment didn’t come thru. Perhaps you can just paste
>> the relevant part of the exception.
>> 
>> Cheers
>> Oleg
>> 
>> On Mar 15, 2017, at 8:58 AM, Anil Rai <anilrain...@gmail.com> wrote:
>> 
>> Hi Oleg, Thanks. attached is the log. Let me know if you want us to change
>> the log levels and re-run and send you additional logs?
>> 
>> 
>> On Tue, Mar 14, 2017 at 5:12 PM, Oleg Zhurakousky <
>> ozhurakou...@hortonworks.com> wrote:
>> Anil
>> 
>> I understand that you are having an issue and we are here to help, but we
>> can only do this if you help us just a little more, so it would be very
>> helpful if you provided a stack trace (I understand if you have to mask
>> sensitive information).
>> The “. . .fails saying cannot create AQSession. . .” could be due to
>> various reasons and until we see the stack trace everything here would be
>> speculation.
>> I hope you understand
>> 
>> Cheers
>> Oleg
>> 
>>> On Mar 14, 2017, at 4:59 PM, Anil Rai <anilrain...@gmail.com> wrote:
>>> 
>>> Here is the behaviour that we have seen so far... hope this helps
>>> 
>>>  1. When we run the java code in eclipse, it works and this is the
>>>  connection object that is printed ->
>>>  oracle.jdbc.driver.T4CConnection@6f75e721
>>>  2. When we hard code all the values as mentioned in my first email in a
>>>  custom processor, deploy that. It works as well. The above connection
>>>  object gets printed.
>>>  3. When we change the code in the custom processor to use the DBCP
>>>  connection service, deploy that. The connection object that gets
>> printed is
>>>  jdbc:oracle:thin:@oged-scan.og.ge.com:1521/ORPOGPB1 and this does not
>>>  work. aqSession = AQDriverManager.createAQSession(connection) fails
>>>  saying cannot create AQSession.
>>> 
>>> Thanks
>>> Anil
>>> 
>>> 
>>> On Tue, Mar 14, 2017 at 4:13 PM, Oleg Zhurakousky <
>>> ozhurakou...@hortonworks.com> wrote:
>>> 
>>>> Anil
>>>> 
>>>> When you say "it does not like the connection object. . .” what do you
>>>> mean by that?
>>>> Can you please provide stack trace or some other details?
>>>> 
>>>> Cheers
>>>> Oleg
>>>> 
>>>>> On Mar 14, 2017, at 4:06 PM, Anil Rai <anilrain...@gmail.com> wrote:
>>>>> 
>>>>> Thanks Russ. Yes, we are doing exactly the same thing.
>>>>> 
>>>>>  driverClass = context.getProperty(DRIVER_CLASS).getValue();
>>>>>  queueName = context.getProperty(QUEUE_NAME).getValue();
>>>>>  databaseSchema = context.getProperty(DATABASE_
>> SCHEMA).getValue();
>>>>>  consumerName = context.getProperty(CONSUMER_NAME).getValue();
>>>>>  eventName = context.getProperty(EVENT_NAME).getValue();
>>>>>  DBCPService connection =
>>>>> context.g

Re: [VOTE] Release Apache NiFi nifi-nar-maven-plugin-1.2.0

2017-03-15 Thread Oleg Zhurakousky
Build successful; built a sample NAR; all is good.
+1

> On Mar 15, 2017, at 10:25 AM, Matt Burgess  wrote:
> 
> +1 Release this package as nifi-nar-maven-plugin-1.2.0
> 
> Verified checksums, verified and built from commit, built a NAR with
> the updated plugin.
> 
> On Tue, Mar 14, 2017 at 12:21 PM, Bryan Bende  wrote:
>> Hello,
>> 
>> I am pleased to be calling this vote for the source release of Apache
>> NiFi nifi-nar-maven-plugin-1.2.0.
>> 
>> The source zip, including signatures, digests, etc. can be found at:
>> https://repository.apache.org/content/repositories/orgapachenifi-1101
>> 
>> The Git tag is nifi-nar-maven-plugin-1.2.0-RC1
>> The Git commit ID is d0c9d46d25a3eb8d3dbeb2783477b1a7c5b2f345
>> https://git-wip-us.apache.org/repos/asf?p=nifi-maven.git;a=commit;h=d0c9d46d25a3eb8d3dbeb2783477b1a7c5b2f345
>> 
>> Checksums of nifi-nar-maven-plugin-1.2.0-source-release.zip:
>> MD5: a20b62075f79bb890c270445097dc337
>> SHA1: 68e4739c9a4c4b2c69ff4adab8e1fdb0e7840923
>> SHA256: f5d4acbaa38460bcf19e9b33f385aa643798788026875bd034ee837e5d9d45a8
>> 
>> Release artifacts are signed with the following key:
>> https://people.apache.org/keys/committer/bbende.asc
>> 
>> KEYS file available here:
>> https://dist.apache.org/repos/dist/release/nifi/KEYS
>> 
>> 3 issues were closed/resolved for this release:
>> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12316020&version=12339193
>> 
>> Release note highlights can be found here:
>> https://cwiki.apache.org/confluence/display/NIFI/Release+Notes#ReleaseNotes-NiFiNARMavenPluginVersion1.2.0
>> 
>> The vote will be open for 72 hours.
>> Please download the release candidate and evaluate the necessary items
>> including checking hashes, signatures, build from source, and test.
>> 
>> Then please vote:
>> 
>> [ ] +1 Release this package as nifi-nar-maven-plugin-1.2.0
>> [ ] +0 no opinion
>> [ ] -1 Do not release this package because because...
> 



Re: Connection Issue

2017-03-15 Thread Oleg Zhurakousky
Ok, so it appears that DBCP does not let you get at the underlying connection without additional configuration, but plain Java does: java.sql.Connection extends java.sql.Wrapper, from which you can do something like this:

// 'd' here is the DBCPService controller service reference
Connection dbConnection = d.getConnection();
OracleConnection orcConnection = null;
if (dbConnection.isWrapperFor(OracleConnection.class)) {
    orcConnection = dbConnection.unwrap(OracleConnection.class);
}
. . .
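
For completeness, pulling it together inside onTrigger would look roughly like this (a sketch only; DBCP_SERVICE stands for whatever your DBCPService property descriptor is called, and the AQ classes are the Oracle ones you are already using):

final DBCPService dbcp = context.getProperty(DBCP_SERVICE).asControllerService(DBCPService.class);
final Connection conn = dbcp.getConnection();
OracleConnection oracleConn = null;
if (conn.isWrapperFor(OracleConnection.class)) {
    oracleConn = conn.unwrap(OracleConnection.class);
}
if (oracleConn == null) {
    throw new ProcessException("Could not obtain the native OracleConnection from DBCP");
}
// AQDriverManager needs the native Oracle connection, not the pool's wrapper
AQSession aqSession = AQDriverManager.createAQSession(oracleConn);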

Let me know

Cheers
Oleg

On Mar 15, 2017, at 1:33 PM, Anil Rai  wrote:

Hi Oleg, we tried and no luck. This is another example where we are seeing 
similar issue. Attached txt has the java code as well as the log.

Thanks in advance
Anil


On Wed, Mar 15, 2017 at 10:02 AM, Anil Rai  wrote:
Thanks Oleg. Makes sense. Will try and keep you posted.

Regards
Anil


On Wed, Mar 15, 2017 at 9:56 AM, Oleg Zhurakousky  wrote:
Anil

Thank you for details. That does help a lot.

First, I want to make sure that it is clear that this is not a NiFi issue, 
since the problem is specific to the combination of DBCP and Oracle and the 
expectations between the two.

Seems like the Oracle JDBC connection is wrapped in an implementation-specific class (DBCP in this case, I assume).
It is my belief that you need to obtain a reference to the native Oracle connection to avoid "JMS-112: Connection is invalid".
So, I think you need to try to cast your Connection object to DBCP's DelegatingConnection and then do something like this:

DelegatingConnection wrappedConn = (DelegatingConnection)con;
OracleConnection ocon =  null ;
if (wrappedConn != null)
 ocon = (OracleConnection) wrappedConn.getDelegate();
 . . .

Let me know how it goes

Cheers
Oleg

> On Mar 15, 2017, at 9:20 AM, Anil Rai 
> mailto:anilrain...@gmail.com>> wrote:
>
> 2017-03-14 16:50:42,312 ERROR [NiFi logging handler] org.apache.nifi.StdErr
>  at
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> 2017-03-14 16:50:42,313 ERROR [NiFi logging handler] org.apache.nifi.StdErr
>  at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
> 2017-03-14 16:50:42,313 ERROR [NiFi logging handler] org.apache.nifi.StdErr
>  at
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
> 2017-03-14 16:50:42,313 ERROR [NiFi logging handler] org.apache.nifi.StdErr
>  at
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
> 2017-03-14 16:50:42,313 ERROR [NiFi logging handler] org.apache.nifi.StdErr
>  at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> 2017-03-14 16:50:42,313 ERROR [NiFi logging handler] org.apache.nifi.StdErr
>  at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> 2017-03-14 16:50:42,313 ERROR [NiFi logging handler] org.apache.nifi.StdErr
>  at java.lang.Thread.run(Thread.java:745)
> 2017-03-14 16:50:43,567 INFO [NiFi logging handler] org.apache.nifi.StdOut
> Databse Connection :- 
> jdbc:oracle:thin:@xxxog.yy.com:1521/DB1<http://jdbc:oracle:thin:@xxxog.yy.com:1521/DB1>,
> UserName=user, Oracle JDBC driver
> 2017-03-14 16:50:43,567 ERROR [NiFi logging handler] org.apache.nifi.StdErr
> oracle.AQ.AQException: JMS-112: Connection is invalid
> 2017-03-14 16:50:43,567 INFO [NiFi logging handler] org.apache.nifi.StdOut
> AQ Driver Class ---> oracle.AQ.AQOracleDriver
> 2017-03-14 16:50:43,567 ERROR [NiFi logging handler] org.apache.nifi.StdErr
> 2017-03-14 16:50:43,568 INFO [NiFi logging handler] org.apache.nifi.StdOut
> Aq Sesssion ---> null
> 2017-03-14 16:50:43,568 ERROR [NiFi logging handler] org.apache.nifi.StdErr
> 2017-03-14 16:50:43,568 INFO [NiFi logging handler] org.apache.nifi.StdOut
> Queue Owner ---> APPS
> 2017-03-14 16:50:43,568 ERROR [NiFi logging handler] org.apache.nifi.StdErr
>  at oracle.AQ.AQDriverManager.createAQSession(AQDriverManager.java:193)
> 2017-03-14 16:50:43,569 INFO [NiFi logging handler] org.apache.nifi.StdOut
> QueueName ---> WF_BPEL_Q
> 2017-03-14 16:50:43,569 ERROR [NiFi logging handler] org.apache.nifi.StdErr
>  at
> com.oracle.xx.connector.processors.xx_SCon_ConsumeAQ.xx_Scon_ConsumeAQ.createSession(xx_Scon_ConsumeAQ.java:183)
> 2017-03-14 16:50:43,569 INFO [NiFi logging handler] org.apache.nifi.StdOut
> EventName ---> oracle.apps.ar.hz.CustAcctSite.update
> 2017-03-14 16:50:43,569 ERROR [NiFi logging handler] org.apache.nifi.StdErr
>  at
> com.oracle.xx.connector.processors.XX_SCon_ConsumeAQ.XX_Scon_ConsumeAQ.onTrigger(XX_Scon_ConsumeAQ.java:254)
> 2017-03-14 16:50:43,569 INFO [NiFi logging handler] org.apache.nifi.

Re: Connection Issue

2017-03-15 Thread Oleg Zhurakousky
Sorry, I do see the “unwrap” call now, but what I can’t correlate is your code with the stack trace. It appears from the stack trace that in your GESBOProcessor, on line 183, you are attempting to cast the org.apache.commons.dbcp.PoolingDataSource$PoolGuardConnectionWrapper directly to OracleConnection; the unwrap call is supposed to be made on that connection instead. Can you clarify?

Cheers
Oleg

> On Mar 15, 2017, at 2:29 PM, Anil Rai  wrote:
> 
> Hi Oleg, if you look at the java code and logs i sent in the previous
> email, it does have the code to unwrap. But it is not allowing to cast that
> connection object back to OracleConnection. It fails.
> 
> On Wed, Mar 15, 2017 at 2:14 PM, Oleg Zhurakousky <
> ozhurakou...@hortonworks.com> wrote:
> 
>> Ok, so it appears that DBCP does not like you to get access to the
>> underlying connection without additional configuration, but Java does since
>> java.sql.Connection extends java.sql.Wrapper from which you can do
>> something like this:
>> 
>> Connection dbConnection= d.getConnection();
>> OracleConnection orcConnection = null;
>> if (dbConnection.isWrapperFor(OracleConnection.class)) {
>>orcConnection = dbConnection.unwrap(OracleConnection.class);
>> }
>> . . .
>> 
>> Let me know
>> 
>> Cheers
>> Oleg
>> 
>> On Mar 15, 2017, at 1:33 PM, Anil Rai > anilrain...@gmail.com>> wrote:
>> 
>> Hi Oleg, we tried and no luck. This is another example where we are seeing
>> similar issue. Attached txt has the java code as well as the log.
>> 
>> Thanks in advance
>> Anil
>> 
>> 
>> On Wed, Mar 15, 2017 at 10:02 AM, Anil Rai > anilrain...@gmail.com>> wrote:
>> Thanks Oleg. Makes sense. Will try and keep you posted.
>> 
>> Regards
>> Anil
>> 
>> 
>> On Wed, Mar 15, 2017 at 9:56 AM, Oleg Zhurakousky <
>> ozhurakou...@hortonworks.com<mailto:ozhurakou...@hortonworks.com>> wrote:
>> Anil
>> 
>> Thank you for details. That does help a lot.
>> 
>> First, I want to make sure that it is clear that this is not a NiFi issue,
>> since the problem is specific to the combination of DBCP and Oracle and the
>> expectations between the two.
>> 
>> Seems like Oracle JDBC connection is wrapped in an implementation specific
>> class (DBCP in this case I assume).
>> It is my believe that you need to obtain reference to native Oracle
>> connection to avoid "JMS-112: Connection is invalid".
>> So, I think you need to try to cast your Connection object to DBCPs
>> DelegatingConnection and then do something like this:
>> 
>> DelegatingConnection wrappedConn = (DelegatingConnection)con;
>> OracleConnection ocon =  null ;
>> if (wrappedConn != null)
>> ocon = (OracleConnection) wrappedConn.getDelegate();
>> . . .
>> 
>> Let me know how it goes
>> 
>> Cheers
>> Oleg
>> 
>>> On Mar 15, 2017, at 9:20 AM, Anil Rai > anilrain...@gmail.com>> wrote:
>>> 
>>> 2017-03-14 16:50:42,312 ERROR [NiFi logging handler]
>> org.apache.nifi.StdErr
>>> at
>>> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>>> 2017-03-14 16:50:42,313 ERROR [NiFi logging handler]
>> org.apache.nifi.StdErr
>>> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
>>> 2017-03-14 16:50:42,313 ERROR [NiFi logging handler]
>> org.apache.nifi.StdErr
>>> at
>>> java.util.concurrent.ScheduledThreadPoolExecutor$
>> ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>>> 2017-03-14 16:50:42,313 ERROR [NiFi logging handler]
>> org.apache.nifi.StdErr
>>> at
>>> java.util.concurrent.ScheduledThreadPoolExecutor$
>> ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>>> 2017-03-14 16:50:42,313 ERROR [NiFi logging handler]
>> org.apache.nifi.StdErr
>>> at
>>> java.util.concurrent.ThreadPoolExecutor.runWorker(
>> ThreadPoolExecutor.java:1142)
>>> 2017-03-14 16:50:42,313 ERROR [NiFi logging handler]
>> org.apache.nifi.StdErr
>>> at
>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(
>> ThreadPoolExecutor.java:617)
>>> 2017-03-14 16:50:42,313 ERROR [NiFi logging handler]
>> org.apache.nifi.StdErr
>>> at java.lang.Thread.run(Thread.java:745)
>>> 2017-03-14 16:50:43,567 INFO [NiFi logging handler]
>> org.apache.nifi.StdOut
>>> Databse Connection :- jdbc:oracle:thin:@xxxog.yy.com:1521/DB1<
>> http://jdbc:oracle:thin:@x

Re: When should MergeContent stop and proceed to next processor?

2017-03-16 Thread Oleg Zhurakousky
Hi

Is there any chance you can share your processor’s configuration? I am curious as to what you are using as “Correlation Attribute Name” in the MergeContent processor.
Basically this attribute allows you to distinguish groups of flow files; since you have SplitJson as an upstream processor feeding MergeContent, you can use “fragment.identifier” as the correlation attribute.
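
For reference, the relevant MergeContent settings would then be roughly (other properties left at their defaults):

Merge Strategy: Bin-Packing Algorithm
Correlation Attribute Name: fragment.identifier

Alternatively, the Defragment merge strategy picks up the fragment.* attributes written by SplitJson automatically.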
Anyway, please share what you can.

Cheers
Oleg

> On Mar 15, 2017, at 5:55 PM, srini  wrote:
> 
> Hi,
> I have a subflow like this. From SplitJson to MergeContent it is in loop. I
> expect it loops based on the number of splits of that record. How it know
> that splits for that record is over, and it needs to be proceed to next
> processor that is ExtractText? 
> 
> I have 3 records. In my case it is merging 25 (8 + 5 + 12 = 25). It is
> merging all records data into one record. It shouldn't merge all, instead
> after each record it should proceed to next processor.
> 1st record: Merge 8 items
> 2nd record:  Merge 5 items.
> 3rd record: Merge 12 items.
> 
> 
>  
> 
> What changes do you recommend to my flow?
> Here is my MergeContent screenshot.
> 
>  
> 
> 
> 
> 
> --
> View this message in context: 
> http://apache-nifi-developer-list.39713.n7.nabble.com/When-should-MergeContent-stop-and-proceed-to-next-processor-tp15148.html
> Sent from the Apache NiFi Developer List mailing list archive at Nabble.com.
> 



Re: When should MergeContent stop and proceed to next processor?

2017-03-16 Thread Oleg Zhurakousky
Ok, can you please set the “Correlation Attribute Name” to “fragment.identifier”?
That is what I was trying to explain in the previous email.

Cheers
Oleg

> On Mar 16, 2017, at 11:06 AM, srini  wrote:
> 
> Hi Oleg,
> 
> Here is MergetContent screenshot. My flowfiles don't give any clue about
> what record it belongs to. I have an attribute called recordId which
> distinguishes each record. But I shouldn't add recordId in the flowfiles to
> be merged.
> 
>  
> 
> thanks
> Srini
> 
> 
> 
> --
> View this message in context: 
> http://apache-nifi-developer-list.39713.n7.nabble.com/When-should-MergeContent-stop-and-proceed-to-next-processor-tp15148p15164.html
> Sent from the Apache NiFi Developer List mailing list archive at Nabble.com.
> 



Re: [ANNOUNCE] New Apache NiFi Committer Bin Qiu

2017-04-05 Thread Oleg Zhurakousky
And of course I did it in the wrong thread ;)
Congrats Bin!!!

> On Apr 5, 2017, at 9:48 AM, Oleg Zhurakousky  wrote:
> 
> Thank you all!
> 
> > *Joe Percivall*
>> e: joeperciv...@gmail.com
> 



Re: [ANNOUNCE] New Apache NiFi Committer Bin Qiu

2017-04-05 Thread Oleg Zhurakousky
Thank you all!

 *Joe Percivall*
> e: joeperciv...@gmail.com



Re: [ANNOUNCE] New Apache NiFi PMC Member - Oleg Zhurakousky

2017-04-05 Thread Oleg Zhurakousky
Thank you all!

 *Joe Percivall*
> e: joeperciv...@gmail.com



Re: Observation regarding PutKafka while implementing NIFI-1672

2016-04-13 Thread Oleg Zhurakousky
Pierre

You're absolutely correct. The explanation was valid for the original implementation, but there were problems with that implementation, so the behavior changed and we didn't update the docs.


> On Apr 13, 2016, at 16:24, Pierre Villard  wrote:
> 
> Hi,
> 
> I was working on NIFI-1672 [1], and while performing some tests with a
> local running Kafka instance, I noticed there is a possible error regarding
> how the behavior of the processor is documented.
> 
> The "Message delimiter" property is documented with:
> 
> "Specifies the delimiter (interpreted in its UTF-8 byte representation) to
> use for splitting apart multiple messages within a single FlowFile. If not
> specified, the entire content of the FlowFile will be used as a single
> message. If specified, the contents of the FlowFile will be split on this
> delimiter and each section sent as a separate Kafka message. Note that if
> messages are delimited and some messages for a given FlowFile are
> transferred successfully while others are not, the messages will be split
> into individual FlowFiles, such that those messages that were successfully
> sent are routed to the 'success' relationship while other messages are sent
> to the 'failure' relationship."
> 
> I believe that the part "Note that if messages are delimited and some
> messages for a given FlowFile are transferred successfully while others are
> not, the messages will be split into individual FlowFiles, such that those
> messages that were successfully sent are routed to the 'success'
> relationship while other messages are sent to the 'failure' relationship."
> is incorrect.
> 
> Instead I would say that the behavior (at least, it is what I observed) is:
> if one or multiple messages inside the FlowFile are not transferred
> successfully to Kafka, those messages are "tagged" thanks to some custom
> attributes of the FlowFile and the whole FlowFile is sent to 'failure'
> relationship. In case the 'failure' relationship is plugged back on the
> PutKafka processor, only the messages previously in error will be sent to
> Kafka.
> 
> I believe the documentation should be updated to reflect the current
> behavior. Should I do it as part of NIFI-1672? Or create a specific JIRA
> for that? In addition, should I create a JIRA to implement the behavior
> where FlowFile are splitted as currently documented?
> 
> Since Kafka processors have been source of discussions lately, advice and
> comments are welcomed!
> 
> Thanks !
> Pierre
> 
> [1] https://issues.apache.org/jira/browse/NIFI-1672


Re: catch commit error in OnTrigger to diversify session behaviour

2016-04-14 Thread Oleg Zhurakousky
A bit unrelated, but how would you guys feel about deprecating ObjectHolder so it could be gone by 1.0?
AtomicReference has been available since Java 5.

Cheers
Oleg

> On Apr 14, 2016, at 5:18 AM, Bryan Bende  wrote:
> 
> Hello,
> 
> It may be easier to move the load() out of the InputStreamCallback. You
> could do something like this...
> 
> final ObjectHolder holder = new ObjectHolder(null);
> 
> session.read(flowFile, new InputStreamCallback() {
> 
>@Override
>public void process(InputStream in) throws IOException {
>StringWriter strWriter = new StringWriter();
>IOUtils.copy(in, strWriter, "UTF-8");
>String contents = strWriter.toString();
>holder.set(contents);
>}
> });
> 
> try {
>load(holder.get());
>session.transfer(flowFile, SUCCESS);
>  } catch (IOException e) {
>session.transfer(flowFile, FAILURE);
> }
> 
> 
> -Bryan
> 
> On Thu, Apr 14, 2016 at 9:06 AM, idioma  wrote:
> 
>> Hi,
>> I have modified my onTrigger in this way:
>> 
>> session.read(flowFile, new InputStreamCallback() {
>> 
>>@Override
>>public void process(InputStream in) throws IOException {
>> 
>>StringWriter strWriter = new StringWriter();
>>IOUtils.copy(in, strWriter, "UTF-8");
>>String contents = strWriter.toString();
>> 
>>try {
>>load(contents);
>>} catch (IOException e) {
>>e.getMessage();
>>boolean error = true;
>>throw e;
>>}
>>}
>>});
>> 
>> What I am struggling with is how to send it to a failure or a success
>> depending on the error being thrown. Any help would be appreciated, thank
>> you so much.
>> 
>> 
>> 
>> --
>> View this message in context:
>> http://apache-nifi-developer-list.39713.n7.nabble.com/catch-commit-error-in-OnTrigger-to-diversify-session-behaviour-tp9027p9062.html
>> Sent from the Apache NiFi Developer List mailing list archive at
>> Nabble.com.
>> 



Re: catch commit error in OnTrigger to diversify session behaviour

2016-04-14 Thread Oleg Zhurakousky
Idioma

Keep an eye on this https://issues.apache.org/jira/browse/NIFI-1771 and 
consider using java.util.concurrent.atomic.AtomicReference
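
The change from ObjectHolder is mechanical; e.g., the read-callback example from earlier in this thread would become something like:

final AtomicReference<String> holder = new AtomicReference<>();
session.read(flowFile, new InputStreamCallback() {
    @Override
    public void process(InputStream in) throws IOException {
        StringWriter strWriter = new StringWriter();
        IOUtils.copy(in, strWriter, "UTF-8");
        holder.set(strWriter.toString()); // same role ObjectHolder played
    }
});
// after the callback returns, holder.get() gives you the content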

Cheers
Oleg

On Apr 14, 2016, at 7:01 AM, idioma  wrote:

Bryan,
thank you so much, this is absolutely fantastic. I was actually looking for
an easy way to access to the content of my load class and I did not know
about ObjectHolder.

Thank you so much



--
View this message in context: 
http://apache-nifi-developer-list.39713.n7.nabble.com/catch-commit-error-in-OnTrigger-to-diversify-session-behaviour-tp9027p9066.html
Sent from the Apache NiFi Developer List mailing list archive at 
Nabble.com.




Re: Multiple nar/custom processors: advisable directory structure

2016-04-14 Thread Oleg Zhurakousky
Unfortunately I’ll answer the question with a question ;)
Is the additional processor related to the previous one? For example, we have a single bundle with more than one processor (e.g., Get/PutSomething). If so, you can create another Processor in the same bundle (NAR).
If it is not, you should start a separate NAR.

Keep in mind that each NAR provides class loader isolation, so another way of looking at this is: do the two or more processors require different class paths?
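
To make it concrete, a shared bundle typically ends up looking roughly like this (module names are just placeholders):

my-processors-bundle/
    pom.xml
    nifi-myprocessors-processors/   (jar module: ProcessorA, ProcessorB, tests)
    nifi-myprocessors-nar/          (NAR module that packages the jar above)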
Does that help?

Cheers
Oleg

> On Apr 14, 2016, at 11:20 AM, idioma  wrote:
> 
> Hi,
> currently, I have one custom processor + test in a similar folder structure
> in my IDE (IntelliJ):
> 
> -CustomProcessors
>   -nifi-myprocessor-nar
>   -nifi-myprocessor
>  -src
>  -main
>  -java
>  MyProcessor.java
>  -test
>  -MyProcessorTest.java
> 
> I am now in the process to add another processor, what is the best approach?
> Shall I have 2 new folders for the nar and one containing the actual
> processor? I would like to generate a basic structure for the processor (as
> it describes here:
> https://community.hortonworks.com/articles/4318/build-custom-nifi-processor.html).
> Is that advisable when adding another custom processor?
> 
> Thanks,
> 
> 
> 
> 
> --
> View this message in context: 
> http://apache-nifi-developer-list.39713.n7.nabble.com/Multiple-nar-custom-processors-advisable-directory-structure-tp9089.html
> Sent from the Apache NiFi Developer List mailing list archive at Nabble.com.
> 



Re: GetKafka blowing up with assertion error in Kafka client code

2016-04-14 Thread Oleg Zhurakousky
Chris
That is correct, and for a change I am pretty happy to see this stack trace, as it clearly shows the problem and validates the approach we have.
So here are more details. . .

The root failure is in Kafka (as you can see from the stack trace). All we are doing is encapsulating the interaction with Kafka in a cancelable Future so we can cancel it if and when Kafka deadlocks (which we have noticed happens rather often).
When we execute Future.get() it results in an ExecutionException which carries the original Kafka exception (AssertionError).
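
In simplified form (not the literal GetKafka code, and the names here are made up) the pattern is:

Future<?> future = executor.submit(kafkaStartupTask); // wraps the blocking Kafka call
try {
    future.get(timeout, TimeUnit.MILLISECONDS);
} catch (InterruptedException | TimeoutException e) {
    future.cancel(true); // interrupt a deadlocked Kafka call
} catch (ExecutionException e) {
    throw new IllegalStateException(e); // carries the original Kafka error, hence your trace
}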
Now I am not sure what that assertion error really means in the context of what you are trying to do, but it's clearly a problem originating in Kafka.
Could you share your config or whatever other details?

Cheers
Oleg

> On Apr 14, 2016, at 4:00 PM, McDermott, Chris Kevin (MSDU - 
> STaTS/StorefrontRemote)  wrote:
> 
> I’m running based of of 0.7.0 Snapshot.  The GetKafka config is pretty 
> generic.  Batch size 1, 1 concurrent task.
> 
> 
> 2016-04-14 19:27:23,204 ERROR [Timer-Driven Process Thread-9] 
> o.apache.nifi.processors.kafka.GetKafka
> java.lang.IllegalStateException: java.util.concurrent.ExecutionException: 
> java.lang.AssertionError: assertion failed
>at 
> org.apache.nifi.processors.kafka.GetKafka.onTrigger(GetKafka.java:355) 
> ~[na:na]
>at 
> org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
>  ~[nifi-api-0.7.0-SNAPSHOT.jar:0.7.0-SNAPSHOT]
>at 
> org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1059)
>  [nifi-framework-core-0.7.0-SNAPSHOT.jar:0.7.0-SNAPSHOT]
>at 
> org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:136)
>  [nifi-framework-core-0.7.0-SNAPSHOT.jar:0.7.0-SNAPSHOT]
>at 
> org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
>  [nifi-framework-core-0.7.0-SNAPSHOT.jar:0.7.0-SNAPSHOT]
>at 
> org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:123)
>  [nifi-framework-core-0.7.0-SNAPSHOT.jar:0.7.0-SNAPSHOT]
>at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_45]
>at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) 
> [na:1.8.0_45]
>at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>  [na:1.8.0_45]
>at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>  [na:1.8.0_45]
>at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  [na:1.8.0_45]
>at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_45]
>at java.lang.Thread.run(Thread.java:745) [na:1.8.0_45]
> Caused by: java.util.concurrent.ExecutionException: java.lang.AssertionError: 
> assertion failed
>at java.util.concurrent.FutureTask.report(FutureTask.java:122) 
> [na:1.8.0_45]
>at java.util.concurrent.FutureTask.get(FutureTask.java:206) 
> [na:1.8.0_45]
>at 
> org.apache.nifi.processors.kafka.GetKafka.onTrigger(GetKafka.java:348) 
> ~[na:na]
>... 12 common frames omitted
> Caused by: java.lang.AssertionError: assertion failed
>at scala.Predef$.assert(Predef.scala:165) ~[na:na]
>at 
> kafka.consumer.TopicCount$$anonfun$makeConsumerThreadIdsPerTopic$2.apply(TopicCount.scala:51)
>  ~[na:na]
>at 
> kafka.consumer.TopicCount$$anonfun$makeConsumerThreadIdsPerTopic$2.apply(TopicCount.scala:49)
>  ~[na:na]
>at 
> scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:772)
>  ~[na:na]
>at scala.collection.immutable.Map$Map1.foreach(Map.scala:109) ~[na:na]
>at 
> scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771)
>  ~[na:na]
>at 
> kafka.consumer.TopicCount$.makeConsumerThreadIdsPerTopic(TopicCount.scala:49) 
> ~[na:na]
>at 
> kafka.consumer.StaticTopicCount.getConsumerThreadIdsPerTopic(TopicCount.scala:113)
>  ~[na:na]
>at 
> kafka.consumer.ZookeeperConsumerConnector.consume(ZookeeperConsumerConnector.scala:226)
>  ~[na:na]
>at 
> kafka.javaapi.consumer.ZookeeperConsumerConnector.createMessageStreams(ZookeeperConsumerConnector.scala:85)
>  ~[na:na]
>at 
> kafka.javaapi.consumer.ZookeeperConsumerConnector.createMessageStreams(ZookeeperConsumerConnector.scala:97)
>  ~[na:na]
>at 
> org.apache.nifi.processors.kafka.GetKafka.createConsumers(GetKafka.java:281) 
> ~[na:na]
>at org.apache.nifi.processors.kafka.GetKafka$1.call(GetKafka.java:343) 
> ~[na:na]
>at org.apache.nifi.processors.kafka.GetKafka$1.call(GetKafka.java:340) 
> ~[na:na]
>at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> [na:1.8.0_45]
>

Re: GetKafka blowing up with assertion error in Kafka client code

2016-04-14 Thread Oleg Zhurakousky
Thanks Chris

Indeed, let us know if/when/how to reproduce it so we can evaluate whether it is something we can validate/handle in NiFi before it is passed to Kafka (e.g., input validation, etc.)

Cheers
Oleg

> On Apr 14, 2016, at 8:25 PM, McDermott, Chris Kevin (MSDU - 
> STaTS/StorefrontRemote)  wrote:
> 
> I looked at the Kafka client code and it seemed to me to be a bug in the 
> caller. There is a map passed that maps topics to number of consumers. In 
> this case it asserting that the number of consumers is greater than zero. If 
> I can repro the problem I'll try to isolate it in the debugger and provide 
> more details.
> 
> 
> 
> Sent from my Verizon, Samsung Galaxy smartphone
> 
> 
>  Original message 
> From: Oleg Zhurakousky 
> Date: 4/14/16 4:14 PM (GMT-05:00)
> To: dev@nifi.apache.org
> Subject: Re: GetKafka blowing up with assertion error in Kafka client code
> 
> Chris
> That is correct and for a change I am pretty happy to see this stack trace as 
> it clearly shows the problem and validates the approach we have.
> So here are more details. . .
> 
> The root failure is in Kafka (as you can see from the stack trace). All we 
> are doing is encapsulating interaction with Kafka into cancelable Future so 
> we can cancel if and when Kafka deadlocks (which we noticed happens rather 
> often)
> When we execute Future.get() it results in ExecutionException which caries 
> the original Kafka exception (AssertionError).
> Now I am not sure what that assertion error really means in the context of 
> what you are trying to do but its clearly a problem originated in Kafka.
> Could you share your config or whatever other details?
> 
> Cheers
> Oleg
> 
>> On Apr 14, 2016, at 4:00 PM, McDermott, Chris Kevin (MSDU - 
>> STaTS/StorefrontRemote)  wrote:
>> 
>> I’m running based of of 0.7.0 Snapshot.  The GetKafka config is pretty 
>> generic.  Batch size 1, 1 concurrent task.
>> 
>> 
>> 2016-04-14 19:27:23,204 ERROR [Timer-Driven Process Thread-9] 
>> o.apache.nifi.processors.kafka.GetKafka
>> java.lang.IllegalStateException: java.util.concurrent.ExecutionException: 
>> java.lang.AssertionError: assertion failed
>>   at 
>> org.apache.nifi.processors.kafka.GetKafka.onTrigger(GetKafka.java:355) 
>> ~[na:na]
>>   at 
>> org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
>>  ~[nifi-api-0.7.0-SNAPSHOT.jar:0.7.0-SNAPSHOT]
>>   at 
>> org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1059)
>>  [nifi-framework-core-0.7.0-SNAPSHOT.jar:0.7.0-SNAPSHOT]
>>   at 
>> org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:136)
>>  [nifi-framework-core-0.7.0-SNAPSHOT.jar:0.7.0-SNAPSHOT]
>>   at 
>> org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
>>  [nifi-framework-core-0.7.0-SNAPSHOT.jar:0.7.0-SNAPSHOT]
>>   at 
>> org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:123)
>>  [nifi-framework-core-0.7.0-SNAPSHOT.jar:0.7.0-SNAPSHOT]
>>   at 
>> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
>> [na:1.8.0_45]
>>   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) 
>> [na:1.8.0_45]
>>   at 
>> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>>  [na:1.8.0_45]
>>   at 
>> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>>  [na:1.8.0_45]
>>   at 
>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>>  [na:1.8.0_45]
>>   at 
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>>  [na:1.8.0_45]
>>   at java.lang.Thread.run(Thread.java:745) [na:1.8.0_45]
>> Caused by: java.util.concurrent.ExecutionException: 
>> java.lang.AssertionError: assertion failed
>>   at java.util.concurrent.FutureTask.report(FutureTask.java:122) 
>> [na:1.8.0_45]
>>   at java.util.concurrent.FutureTask.get(FutureTask.java:206) 
>> [na:1.8.0_45]
>>   at 
>> org.apache.nifi.processors.kafka.GetKafka.onTrigger(GetKafka.java:348) 
>> ~[na:na]
>>   ... 12 common frames omitted
>> Caused by: java.lang.AssertionError: assertion failed
>>   at scala.Predef$.assert(Predef.scala:165) ~[na:na]
>>   at 
>> kafka.consumer.TopicCount$$anonfun$makeCons

Re: Multiple nar/custom processors: advisable directory structure

2016-04-15 Thread Oleg Zhurakousky
Hmm, I am not sure I follow completely.
You’ve described the approach you followed (I assume successfully) to create a NAR. Are you asking if you should follow the same approach to create another NAR?

Cheers
Oleg

> On Apr 15, 2016, at 3:16 AM, idioma  wrote:
> 
> Oleg,
> thanks for your reply. No, in this case it is not strictly related to my
> first processor so I felt myself it should go in a separate NAR. I am
> probably still unsure on how to generate it. For my first one, I have
> created an empty folder and then run mvn archetype:generate, then after
> filling in all information I have run maven clean install. Will this be the
> very same process for my second/third/additional processor(s)? 
> 
> Thank you so much for your help. 
> 
> 
> 
> --
> View this message in context: 
> http://apache-nifi-developer-list.39713.n7.nabble.com/Multiple-nar-custom-processors-advisable-directory-structure-tp9089p9108.html
> Sent from the Apache NiFi Developer List mailing list archive at Nabble.com.
> 



Re: Is my custom processor doing too many things? OnTrigger question

2016-04-18 Thread Oleg Zhurakousky
Idioma

I would suggest, for your learning experience, having your custom processor do the HTTP work and, if successful, transfer to the ‘success’ relationship, which you then connect to the PutKafka processor that comes with NiFi.
This way you’ll not only learn how to develop a custom processor but also see it integrated with another processor that was not developed by you.

Cheers
Oleg

> On Apr 18, 2016, at 8:27 AM, idioma  wrote:
> 
> Thank you Joe and thank you for understanding the struggle of a newbie, not
> many communities are so welcoming and inclusive like Apache NiFi Developer
> List! :) Yes, you are right posting to a URL endpoint to get a response and
> put that on Kafka is all I want, do you have any existing out of the box
> processors you can point me to? I actually wanted to create my own custom
> processor for my personal benefit and for understanding how to build them,
> so I probably will try to go ahead with a custom one, but bearing in mind
> that I can fully exploit out of the box bundles. Am I heading towards the
> right direction with the code I posted? 
> 
> Thank you
> 
> 
> 
> --
> View this message in context: 
> http://apache-nifi-developer-list.39713.n7.nabble.com/Is-my-custom-processor-doing-too-many-things-OnTrigger-question-tp9225p9228.html
> Sent from the Apache NiFi Developer List mailing list archive at Nabble.com.
> 



Re: Is my custom processor doing too many things? OnTrigger question

2016-04-18 Thread Oleg Zhurakousky
I don’t see any response from postHttp; you just invoke the method and that’s it. I thought you said you need to send the results of the HTTP POST (JSON) downstream. If so, you should receive the response from HTTP, create a new FlowFile (session.create(..) or session.clone(..)), and write the contents of the HTTP response to it (session.write(..)).
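
Roughly like this (a sketch; it assumes you change postHttpRequest to return the response body as a String):

final String response = postHttpRequest(userId, password, http_post_url);
FlowFile resultFile = session.create(flowFile); // child flow file, inherits the incoming attributes
resultFile = session.write(resultFile, new OutputStreamCallback() {
    @Override
    public void process(OutputStream out) throws IOException {
        out.write(response.getBytes("UTF-8"));
    }
});
session.transfer(resultFile, SUCCESS);
session.remove(flowFile); // or route the original wherever it should go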

Cheers
Oleg

> On Apr 18, 2016, at 8:58 AM, idioma  wrote:
> 
> Oleg,
> this is actually a great idea, I will follow it for certain (a good
> compromise, too). I have an additional question on the onTrigger method for
> my Post Http. Among the main NiFi components, my custom processor contains a
> number of separate methods, in particular one that send a Post HTTP request
> to an endpoint and return a Json response. The method takes a number of
> parameter such as userId, password, etc and this is where I am rather
> confused when it comes to the operations I should be perform in my on
> Trigger. I have the following: 
> 
>   @Override
>public void onTrigger(final ProcessContext context, final ProcessSession
> session) throws ProcessException {
> 
>FlowFile flowFile = session.get();
>if (flowFile == null) return;
> 
>final String userId = context.getProperty(USER_ID).getValue();
>final String password = context.getProperty(PASSWORD).getValue();
>final String http_post_url =
> context.getProperty(HTTP_POST_URL).getValue();
> 
>final AtomicReference httpPostRequestHolder = new
> AtomicReference<>();
>session.read(flowFile, new InputStreamCallback() {
>@Override
>public void process(InputStream inputStream) throws IOException
> {
>StringWriter strWriter = new StringWriter();
>IOUtils.copy(inputStream, strWriter, "UTF-8");
>httpPostRequestHolder.set(userId);
>httpPostRequestHolder.set(password);
>httpPostRequestHolder.set(http_post_url);
>}
>});
> 
> try {
>postHttpRequest(userId, password, http_post_url);
>session.transfer(flowFile, SUCCESS);
>} catch (IllegalArgumentException ex) {
>session.transfer(flowFile, FAILURE);
>ex.printStackTrace();
>}
> 
> Is that what I am supposed to do? Is that correct? I am trying to process
> the inputstream and convert it into an AtomicReference object with the same
> arguments as the one passed in the postHttpRequest method. Does it make
> sense? 
> 
> Thank you again for all your help, so much appreciated!
> 
> 
> 
> 
> 
> --
> View this message in context: 
> http://apache-nifi-developer-list.39713.n7.nabble.com/Is-my-custom-processor-doing-too-many-things-OnTrigger-question-tp9225p9232.html
> Sent from the Apache NiFi Developer List mailing list archive at Nabble.com.
> 



Re: Is my custom processor doing too many things? OnTrigger question

2016-04-18 Thread Oleg Zhurakousky
Well, you have a session transfer to ‘success’ right after the postHttp call; you don’t need that, since you only want to transfer the new file to ‘success’.
Also, I am not sure I understand the exception handling. You seem to be catching the exception, handling it, and then attempting to create the flow file regardless of whether the exception happened. Is that the intention?
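
Structurally, something like this is probably what you are after (a sketch only, reusing your own method and relationship names):

try {
    postHttpRequest(userId, password, source, message, http_post_url, resource_ids_file_path);
} catch (IllegalArgumentException ex) {
    getLogger().error("HTTP POST failed", ex);
    session.transfer(flowFile, FAILURE);
    return; // stop here; don't fall through and create the response file
}
// reached only on success: create/write/transfer the response FlowFile as in your code,
// and remove (or route) the original flowFile so the session accounts for it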

Cheers
Oleg

> On Apr 18, 2016, at 10:03 AM, idioma  wrote:
> 
> Thanks Oleg,
> would this be what I want? 
> 
> public void onTrigger(final ProcessContext context, final ProcessSession
> session) throws ProcessException {
> 
>FlowFile flowFile = session.get();
>if (flowFile == null) return;
> 
>final String userId = context.getProperty(USER_ID).getValue();
>final String password = context.getProperty(PASSWORD).getValue();
>final String http_post_url =
> context.getProperty(HTTP_POST_URL).getValue();
> 
>final AtomicReference httpPostRequestHolder = new
> AtomicReference<>();
>session.read(flowFile, new InputStreamCallback() {
>@Override
>public void process(InputStream inputStream) throws IOException
> {
>StringWriter strWriter = new StringWriter();
>IOUtils.copy(inputStream, strWriter, "UTF-8");
>httpPostRequestHolder.set(userId);
>httpPostRequestHolder.set(password);
>httpPostRequestHolder.set(http_post_url);
>}
>});
> 
>try {
>postHttpRequest(userId, password, source, message,
> http_post_url, resource_ids_file_path);
>session.transfer(flowFile, SUCCESS);
>} catch (IllegalArgumentException ex) {
>session.transfer(flowFile, FAILURE);
>ex.printStackTrace();
>}
> 
>flowFile = session.create();
>flowFile = session.write(flowFile, new OutputStreamCallback() {
>@Override
>public void process(OutputStream out) throws IOException {
>out.write(httpPostRequestHolder.get().getBytes());
>}
>});
> 
>session.transfer(flowFile, SUCCESS);
> 
> is this the correct way to write out the JSON response?
> 
> 
> 
> 
> 
> --
> View this message in context: 
> http://apache-nifi-developer-list.39713.n7.nabble.com/Is-my-custom-processor-doing-too-many-things-OnTrigger-question-tp9225p9238.html
> Sent from the Apache NiFi Developer List mailing list archive at Nabble.com.
> 



Re: GetKafka blowing up with assertion error in Kafka client code

2016-04-28 Thread Oleg Zhurakousky
Chris

Thanks for looking into this and describing the problem. Indeed, we have seen similar symptoms but need to investigate further to see if there is an option to stop Kafka's internal reconnect thread. It appears there are configuration properties in the new API to do that, while I am not sure about the old API at the moment.
As I said, I will investigate further and let you know.

Thanks again for looking into this

Oleg 

Sent from my iPhone

> On Apr 28, 2016, at 18:41, McDermott, Chris Kevin (MSDU - 
> STaTS/StorefrontRemote)  wrote:
> 
> Oleg,
> 
> I have reproduced the problem.  Its pretty easy to do. Just delete and 
> recreate the topic while the processor is running.  I think I saw a similar 
> problem when I increased the partitions in the topic.  That problem resolved 
> itself when I restarted the GetKafka processors.  However, to resolve this 
> problem restarting the processor does not work. It must be that something is 
> being stored in Zookeeper.  I am guessing that deleting and recreating the 
> processor will do the trick.  Is there any debugging information which I can 
> provide to you?
> 
> Thanks,
> Chris
> 
> 
> 
>> On 4/14/16, 8:32 PM, "Oleg Zhurakousky"  wrote:
>> 
>> Thanks Chris
>> 
>> Indeed let us know if/when/how to reproduce it so we can evaluate and see if 
>> it is something we can validate/handle in NiFi before it is passed to Kafka 
>> (e.g., validation etc)
>> 
>> Cheers
>> Oleg
>> 
>>> On Apr 14, 2016, at 8:25 PM, McDermott, Chris Kevin (MSDU - 
>>> STaTS/StorefrontRemote)  wrote:
>>> 
>>> I looked at the Kafka client code and it seemed to me to be a bug in the 
>>> caller. There is a map passed that maps topics to number of consumers. In 
>>> this case it asserting that the number of consumers is greater than zero. 
>>> If I can repro the problem I'll try to isolate it in the debugger and 
>>> provide more details.
>>> 
>>> 
>>> 
>>> Sent from my Verizon, Samsung Galaxy smartphone
>>> 
>>> 
>>>  Original message 
>>> From: Oleg Zhurakousky 
>>> Date: 4/14/16 4:14 PM (GMT-05:00)
>>> To: dev@nifi.apache.org
>>> Subject: Re: GetKafka blowing up with assertion error in Kafka client code
>>> 
>>> Chris
>>> That is correct and for a change I am pretty happy to see this stack trace 
>>> as it clearly shows the problem and validates the approach we have.
>>> So here are more details. . .
>>> 
>>> The root failure is in Kafka (as you can see from the stack trace). All we 
>>> are doing is encapsulating interaction with Kafka into cancelable Future so 
>>> we can cancel if and when Kafka deadlocks (which we noticed happens rather 
>>> often)
>>> When we execute Future.get() it results in ExecutionException which caries 
>>> the original Kafka exception (AssertionError).
>>> Now I am not sure what that assertion error really means in the context of 
>>> what you are trying to do but its clearly a problem originated in Kafka.
>>> Could you share your config or whatever other details?
>>> 
>>> Cheers
>>> Oleg
>>> 
>>>> On Apr 14, 2016, at 4:00 PM, McDermott, Chris Kevin (MSDU - 
>>>> STaTS/StorefrontRemote)  wrote:
>>>> 
>>>> I’m running based of of 0.7.0 Snapshot.  The GetKafka config is pretty 
>>>> generic.  Batch size 1, 1 concurrent task.
>>>> 
>>>> 
>>>> 2016-04-14 19:27:23,204 ERROR [Timer-Driven Process Thread-9] 
>>>> o.apache.nifi.processors.kafka.GetKafka
>>>> java.lang.IllegalStateException: java.util.concurrent.ExecutionException: 
>>>> java.lang.AssertionError: assertion failed
>>>>  at 
>>>> org.apache.nifi.processors.kafka.GetKafka.onTrigger(GetKafka.java:355) 
>>>> ~[na:na]
>>>>  at 
>>>> org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
>>>>  ~[nifi-api-0.7.0-SNAPSHOT.jar:0.7.0-SNAPSHOT]
>>>>  at 
>>>> org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1059)
>>>>  [nifi-framework-core-0.7.0-SNAPSHOT.jar:0.7.0-SNAPSHOT]
>>>>  at 
>>>> org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:136)
>>>>  [nifi-framework-core-0.7.0-SNAPSHOT.jar:0.7.0-SNAPSHOT]
>>>>  at 
>>>> org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(Co

Re: Quick Question- Nifi Kafka

2016-05-03 Thread Oleg Zhurakousky
Hi 

It was tested with 0.8.2 and 0.9, but it does not use the new consumer API. We are working on a new version of Kafka support slated for the 0.7 release.

Cheers 
Oleg 



> On May 3, 2016, at 02:20, Sourav Gulati  wrote:
> 
> Hi All,
> 
> Nifi-0.6.1 is compatible with which kafka version?
> 
> 
> Regards,
> Sourav Gulati
> 
> 
> 
> 
> 
> 
> 
> 
> 
> NOTE: This message may contain information that is confidential, proprietary, 
> privileged or otherwise protected by law. The message is intended solely for 
> the named addressee. If received in error, please destroy and notify the 
> sender. Any use of this email is prohibited when received in error. Impetus 
> does not represent, warrant and/or guarantee, that the integrity of this 
> communication has been maintained nor that the communication is free of 
> errors, virus, interception or interference.


Re: Quick Question- Nifi Kafka

2016-05-03 Thread Oleg Zhurakousky
It will be very hard to suggest anything without more details. Have you looked at the Kafka logs?

Oleg

> On May 3, 2016, at 7:30 AM, Sourav Gulati  wrote:
> 
> Hi,
> 
> 
> I am using 0.8.1.1 , I am not seeing any exception in logs . however, still 
> it is not writing to kafka. Any suggestions?
> 
> 
> Regards,
> Sourav Gulati
> 
> -Original Message-
> From: Oleg Zhurakousky [mailto:ozhurakou...@hortonworks.com]
> Sent: Tuesday, May 03, 2016 4:14 PM
> To: dev@nifi.apache.org
> Subject: Re: Quick Question- Nifi Kafka
> 
> Hi
> 
> It was tested with 0.8.2 and 0.9, but it does not use new consumer API. We 
> are working on a new version of Kafka support slated for 0.7 release
> 
> Cheers
> Oleg
> 
> 
> 
>> On May 3, 2016, at 02:20, Sourav Gulati  wrote:
>> 
>> Hi All,
>> 
>> Nifi-0.6.1 is compatible with which kafka version?
>> 
>> 
>> Regards,
>> Sourav Gulati
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> NOTE: This message may contain information that is confidential, 
>> proprietary, privileged or otherwise protected by law. The message is 
>> intended solely for the named addressee. If received in error, please 
>> destroy and notify the sender. Any use of this email is prohibited when 
>> received in error. Impetus does not represent, warrant and/or guarantee, 
>> that the integrity of this communication has been maintained nor that the 
>> communication is free of errors, virus, interception or interference.
> 
> 
> 
> 
> 
> 
> 
> 
> NOTE: This message may contain information that is confidential, proprietary, 
> privileged or otherwise protected by law. The message is intended solely for 
> the named addressee. If received in error, please destroy and notify the 
> sender. Any use of this email is prohibited when received in error. Impetus 
> does not represent, warrant and/or guarantee, that the integrity of this 
> communication has been maintained nor that the communication is free of 
> errors, virus, interception or interference.
> 



Help Wanted

2016-05-03 Thread Oleg Zhurakousky
Guys

I’d like to use this opportunity to address all members of the NiFi community 
hence this email is sent to both mailing lists (dev/users)

While somewhat skeptical when I started 6 months ago, I have to admit that now I am very excited to observe the growth and adoption of Apache NiFi, and to say that in large part it is because of the healthy community that we have here - committers and contributors alike, representing a variety of business domains.
This is absolutely great news for all of us and I am sure some if not all of 
you share this sentiment. 

That said and FWIW we need help!
While it’s great to wake up every morning to a set of new PRs and patches, we now have a bit of a backlog. In large part this is due to the fact that most of our efforts are spent on development as we all try to grow the NiFi feature base. However, we need to remember that PRs and patches will remain as they are unless and until they are reviewed and agreed to be merged by this same community, and that is where we need help. While “merge" responsibilities are limited to “committers”, “review” is the responsibility of every member of this community, and I would like to ask you, if at all possible, to redirect some of your efforts to this process.
We currently have 61 outstanding PRs, and this particular development cycle is a bit more complex than the previous ones since it addresses the 0.7.0 and 1.0.0 releases in parallel (so a different approach to breaking changes, if any, etc.)

Cheers
Oleg



Re: Help Wanted

2016-05-03 Thread Oleg Zhurakousky
Andrew

Thank you so much for following up on this.
I am assuming you have a GitHub account. If not, please create one, as most of our contributions come in as pull requests (PRs).
Then you can go to https://github.com/apache/nifi , click on “Pull Requests” and review them by commenting inline (you can see plenty of examples there of PRs that are already in the review process).

I would also suggest getting familiar with the Contributor Guide for NiFi - https://cwiki.apache.org/confluence/display/NIFI/Contributor+Guide. But it appears you have already done so, and I think there may be a small discrepancy in the link you provided, or maybe it is not as dynamic.
In any event, JIRA and GitHub are good resources to use.

As for the last question, the best-case scenario is both (code review and test). Having said that, we do realize that your time and the time of every contributor may be limited, so I say do whatever you can. Sometimes a quick code scan can uncover obvious issues that don't need testing.

Thanks again
Cheers
Oleg
On May 3, 2016, at 11:07 AM, Andrew Psaltis  wrote:

Oleg,
I would love to help -- couple of quick questions:

The GH PR's are ~60 as you indicated, but the How To Contribute guide (Code
review process --
https://cwiki.apache.org/confluence/display/NIFI/Contributor+Guide#ContributorGuide-CodeReviewProcess
) shows a JIRA list with patches available.

Which should be reviewed first? For the PR's on GH are you just looking for
code review or same process of apply local merge and test?

Thanks,
Andrew

On 5/3/16, 9:58 AM, "Oleg Zhurakousky"  wrote:

Guys

I’d like to use this opportunity to address all members of the NiFi
community hence this email is sent to both mailing lists (dev/users)

While somewhat skeptical when I started 6 month ago, I have to admit that
now I am very excited to observe the growth and adaption of the Apache NiFi
and say that in large part it’s because of the healthy community that we
have here - committers and contributors alike representing variety of
business domains.
This is absolutely great news for all of us and I am sure some if not all
of you share this sentiment.

That said and FWIW we need help!
While it’s great to wake up every morning to a set of new PRs and patches,
we now have a bit of a back log. In large this is due to the fact that most
of our efforts are spent in development as we all try to grow NiFi feature
base. However we need to remember that PRs and patches will remain as they
are unless and until they are reviewed/agreed to be merged by this same
community and that is where we need help. While “merge" responsibilities
are limited to “committers”, “review” is the responsibility of every member
of this community and I would like to ask you if at all possible to
redirect some of your efforts to this process.
We currently have 61 outstanding PRs and this particular development cycle
is a bit more complex then the previous ones since it addresses 0.7.0 and
1.0.0 releases in parallel (so different approach to breaking changes if
any etc.)

Cheers
Oleg


--
Thanks,
Andrew



Re: Help Wanted

2016-05-03 Thread Oleg Zhurakousky
Andrew

Regarding PR vs. Patch.

This has been an ongoing discussion and I’ll let others contribute to it. Basically we support both. That said, personally (and it appears to be embraced by the rest of the community) the PR is the preference, specifically due to the inline review/comment capabilities provided by GitHub.

Cheers
Oleg
 
> On May 3, 2016, at 11:18 AM, Andrew Psaltis  wrote:
> 
> Thank you Oleg!
> 
> Yeah, that page with the Code Review, has a little refresh link, but it
> really just points to this JIRA query:
> https://issues.apache.org/jira/browse/NIFI-1837?filter=12331874
> 
> As a community is there a preference given to JIRA's with Patch or GH PR's
> or are they all treated with the same priority?
> 
> Thanks,
> Andrew
> 
> On Tue, May 3, 2016 at 11:12 AM, Oleg Zhurakousky <
> ozhurakou...@hortonworks.com> wrote:
> 
>> Andrew
>> 
>> Thank you so much for following up on this.
>> I am assuming you have GitHub account. If not please create one as most of
>> our contributions deal with pull requests (PR).
>> Then you can go to https://github.com/apache/nifi , click on “Pull
>> Requests” and review them by commenting in line (you can see plenty of
>> examples there of PRs that are already in review process).
>> 
>> I would also suggest to get familiar with Contributor’s guideline for NiFi
>> - https://cwiki.apache.org/confluence/display/NIFI/Contributor+Guide. But
>> it appears you have already done so and I think there may be small
>> discrepancy in the link you provided or may be it is not as dynamic.
>> In any event JIRA and GutHub are good resources to use.
>> 
>> As for the last question, the best case scenario is both (code review and
>> test). Having said that we do realize that your time and the time of every
>> contributor may be limited, so I say whatever you can. Some time quick code
>> scan can uncover the obvious that doesn’t need testing.
>> 
>> Thanks again
>> Cheers
>> Oleg
>> 
>> On May 3, 2016, at 11:07 AM, Andrew Psaltis 
>> wrote:
>> 
>> Oleg,
>> I would love to help -- couple of quick questions:
>> 
>> The GH PR's are ~60 as you indicated, but the How To Contribute guide (Code
>> review process --
>> 
>> https://cwiki.apache.org/confluence/display/NIFI/Contributor+Guide#ContributorGuide-CodeReviewProcess
>> ) shows a JIRA list with patches available.
>> 
>> Which should be reviewed first? For the PR's on GH are you just looking for
>> code review or same process of apply local merge and test?
>> 
>> Thanks,
>> Andrew
>> 
>> On 5/3/16, 9:58 AM, "Oleg Zhurakousky" 
>> wrote:
>> 
>> Guys
>> 
>> I’d like to use this opportunity to address all members of the NiFi
>> 
>> community hence this email is sent to both mailing lists (dev/users)
>> 
>> 
>> While somewhat skeptical when I started 6 month ago, I have to admit that
>> 
>> now I am very excited to observe the growth and adaption of the Apache NiFi
>> and say that in large part it’s because of the healthy community that we
>> have here - committers and contributors alike representing variety of
>> business domains.
>> 
>> This is absolutely great news for all of us and I am sure some if not all
>> 
>> of you share this sentiment.
>> 
>> 
>> That said and FWIW we need help!
>> While it’s great to wake up every morning to a set of new PRs and patches,
>> 
>> we now have a bit of a back log. In large this is due to the fact that most
>> of our efforts are spent in development as we all try to grow NiFi feature
>> base. However we need to remember that PRs and patches will remain as they
>> are unless and until they are reviewed/agreed to be merged by this same
>> community and that is where we need help. While “merge" responsibilities
>> are limited to “committers”, “review” is the responsibility of every member
>> of this community and I would like to ask you if at all possible to
>> redirect some of your efforts to this process.
>> 
>> We currently have 61 outstanding PRs and this particular development cycle
>> 
>> is a bit more complex then the previous ones since it addresses 0.7.0 and
>> 1.0.0 releases in parallel (so different approach to breaking changes if
>> any etc.)
>> 
>> 
>> Cheers
>> Oleg
>> 
>> 
>> --
>> Thanks,
>> Andrew
>> 
>> 
>> 
> 
> 
> -- 
> Thanks,
> Andrew
> 
> Subscribe to my book: Streaming Data <http://manning.com/psaltis>
> <https://www.linkedin.com/pub/andrew-psaltis/1/17b/306>
> twiiter: @itmdata <http://twitter.com/intent/user?screen_name=itmdata>



New Kafka API support (0.9+)

2016-05-03 Thread Oleg Zhurakousky
As some of you know, we are in the process of providing a set of new Processors that use the new Kafka API (0.9+). This is a new NAR and will live for a while alongside the old Kafka NAR.
The new Processors are called PublishKafka and ConsumeKafka (specifically to 
emphasize the new consumer API provided by Kafka)

The current PR https://github.com/apache/nifi/pull/366 is in the review process, and even though we’ll do our best to merge it to trunk as quickly as possible, I would encourage all Kafka aficionados to try it earlier by building NiFi from this branch.

The commit message provides all of the details, and I have to warn you that aside from the new Kafka API support it addresses several more issues, including some of the work on the old Kafka processors.

Feel free to comment on the PR or use mailing lists with questions/concerns

Given the popularity of Kafka I felt compelled to make all of you aware of this 
work as early as possible.

Cheers
Oleg



Re: [discuss] PropertyDescriptor name and displayName attributes

2016-05-03 Thread Oleg Zhurakousky
I am definitely +1 on this.
The only question I have is related to "Add code to warn (without blocking) on processors missing displayName attributes”. Did you mean in the code itself, where some validator in the abstract class would flag it with a WARN, or some build plugin?
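
For anyone following along, the pattern being encouraged looks like this (a representative example, not taken from any particular processor):

public static final PropertyDescriptor BATCH_SIZE = new PropertyDescriptor.Builder()
        .name("batch-size")            // stable key, serialized into flow.xml.gz; don't change it
        .displayName("Batch Size")     // human-readable label, safe to reword or localize
        .description("The number of messages to pull in a single iteration.")
        .required(true)
        .addValidator(StandardValidators.POSITIVE_INTEGER_VALIDATOR)
        .build();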

Cheers
Oleg

On May 3, 2016, at 2:09 PM, Andy LoPresto  wrote:

Hi all,

As a result of some conversations and some varying feedback on PRs, I’d like to 
discuss with the community an issue I see with PropertyDescriptor name and 
displayName attributes. I’ll describe the scenarios that cause issues and my 
proposed solution, and then solicit responses and other perspectives.

A PropertyDescriptor has various attributes [1]. When the property is 
configured, a “name” is provided to uniquely identify the property. This name 
is both displayed on the UI in a property configuration dialog, and used in the 
REST API to retrieve or set values. When the flow is persisted to the 
flow.xml.gz file, the name identifies the value during serialization.

There are multiple scenarios where the name value could be changed:

* There is a typo in the name
* The name is unclear or could be improved to more accurately reflect the 
purpose of the property (I believe we have had a couple instances with “batch” 
meaning when integrating with other projects)
* Internationalization and localization

When an existing PropertyDescriptor name is changed for any of these reasons, 
it breaks backward compatibility because a flow.xml.gz file which defines a 
value for the property name will no longer have that value retrieved [2]. In 
this case, name is serving a dual role for both UI display and object 
resolution within the persisted state.

To address this, the displayName attribute was added to PropertyDescriptor [3]. 
This attribute allows a “human readable” name to be provided for UI purposes 
and modified at will without modifying the static name value. However, many 
developers are unaware of this attribute [4], and provide only the name 
attribute when contributing a new Processor.

My proposal is to do the following:

* Improve the documentation to increase awareness of the displayName attribute 
and the benefit it provides
* Consciously encourage contributors to provide both name and displayName 
attributes on new processors and add displayName to existing processors during 
PR reviews
* Add code to warn (without blocking) on processors missing displayName 
attributes

I appreciate that providing both attributes may seem duplicative in the 
scenario where both are similar English phrases, which is the default today. 
However, as our community grows and we are seeing increased 
internationalization and localization efforts, I believe this will pay 
dividends. I also think being proactive by providing both attributes will 
increase developer awareness and avoid a scenario where a user changes the 
existing name attribute rather than add a displayName attribute. I feel the 
steps I outline above will get the maximum return with minimal coding effort 
and no changes to backward compatibility.

I welcome the community’s feedback on this.


[1] 
https://nifi.apache.org/docs/nifi-docs/html/developer-guide.html#documenting-properties
[2] https://issues.apache.org/jira/browse/NIFI-1795
[3] 
https://github.com/apache/nifi/blob/master/nifi-api/src/main/java/org/apache/nifi/components/PropertyDescriptor.java#L254
[4] https://issues.apache.org/jira/browse/NIFI-1828

Andy LoPresto
alopre...@apache.org
alopresto.apa...@gmail.com
PGP Fingerprint: 70EC B3E5 98A6 5A3F D3C4  BACE 3C6E F65B 2F7D EF69




Re: PutKafka-Message Delimiter

2016-05-05 Thread Oleg Zhurakousky
Please use Ctrl+Enter. 
The ‘\n’ is interpreted literally as a string (not a new line).
Let me know how it goes
Cheers
Oleg

> On May 5, 2016, at 7:01 AM, Sourav Gulati  wrote:
> 
> Hi All,
> 
> I have tried setting the message delimiter in PutKafka to /n, but it is not 
> working. Please guide.
> 
> Regards,
> Sourav Gulati
> 



Re: PutKafka-Message Delimiter

2016-05-05 Thread Oleg Zhurakousky
Yes, we've also updated the documentation for the upcoming release 

Sent from my iPhone

> On May 5, 2016, at 07:22, Sourav Gulati  wrote:
> 
> Should I press CTRL+Enter on the value tab of the "Message delimiter" key?
> 
> Regards,
> Sourav Gulati
> 


Re: PutKafka-Message Delimiter

2016-05-05 Thread Oleg Zhurakousky
I am not sure what you’re saying, since it is not a required property and it 
uses a non-empty validator, so anything you enter there is a valid value. In 
other words, it can never produce the 'no value set' message.
I just tested it myself as well to be sure.
Could you provide more details as to what exactly you are doing?

Cheers
Oleg
> On May 5, 2016, at 7:31 AM, Sourav Gulati  wrote:
> 
> Could you please provide me that documentation content?
> 
> Regards,
> Sourav Gulati
> 
> -Original Message-
> From: Sourav Gulati
> Sent: Thursday, May 05, 2016 5:01 PM
> To: dev@nifi.apache.org
> Subject: RE: PutKafka-Message Delimiter
> 
> If I press " CTRL+Enter" on the value table of "Message delimiter" key. It 
> says "no value set".
> 
> 
> 
> Regards,
> Sourav Gulati
> 



Re: PutKafka-Message Delimiter

2016-05-05 Thread Oleg Zhurakousky
Is it “\n” as a "new line" or “/n”? If you want to create a message for each 
line then you have to use a "new line", and that is CTRL+Enter. If it is “/n”, 
then it’s just a string pattern that will be treated like any other string.
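
To make this concrete, here is a toy Java illustration (needs 
java.util.regex.Pattern; this is not PutKafka's actual byte-scanning code). The 
delimiter is compared literally against the FlowFile content, so only a real 
newline character splits line-delimited content:

    String flowFileContent = "message1\nmessage2\nmessage3";

    // An actual newline character, which is what Ctrl+Enter puts into the property:
    System.out.println(flowFileContent.split("\n").length);                 // 3 messages

    // The two visible characters '/' and 'n' typed into the property match nothing here:
    System.out.println(flowFileContent.split(Pattern.quote("/n")).length);  // 1 message, unsplit
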
I am still not following the "no value set” comment. Could you please clarify?

Oleg

> On May 5, 2016, at 8:51 AM, Sourav Gulati  wrote:
> 
> Oleg,
> 
> Yes it is not a required property.
> 
> As per the documentation, if we do not provide any message delimiter to 
> PutKafka, it writes all the content of a flowfile as a single message to 
> Kafka. My requirement is that I want to write all messages in the flowfile 
> as separate messages on Kafka.
> 
> Messages in the flowfile are delimited by "/n". That is why I was providing 
> "/n" as the message delimiter. However, it is not working.
> 
> Regards,
> Sourav Gulati
> 

Re: [discuss] PropertyDescriptor name and displayName attributes

2016-05-06 Thread Oleg Zhurakousky
I think the main source of confusion (as it was for me) is the name 'name' 
itself. 
Basically the 'name' today is really an identifier of the property and 
therefore could/should never change, while 'displayName' is volatile. 
Obviously changing 'name' to 'id' is out of the question, as it would break 
practically everything. But from the documentation perspective maybe we should 
consider documenting that 'name' corresponds to a unique and perpetual 
identifier of the property that is also displayed unless overridden by 
'displayName', while such an override only affects the display characteristics 
of the property and not its identification.

Cheers
Oleg

> On May 6, 2016, at 10:25 AM, Joe Witt  wrote:
> 
> Definitely on board with the idea that the 'name' will be the key to a
> resource bundle.  It does imply such names will need to follow
> necessary conventions to be valid resource bundle keys.
> 
> However, in the spirit of always thinking about the developer path to
> productivity I am hopeful we can come up with a nice way to not
> require them to setup a resource bundle.
> 
> The idea being that the following order of operations/thought would exist:
> 
> 1) How can I provide a new property to this processor?
> Answer: Add a property descriptor and set the name.  This name will be
> used to refer to the property descriptor whenever serialized/saving
> the config and it will be rendered through the REST API and thus made
> available as the property name in the UI.
> 
> 2) Oh snap.  I wish I had used a different name because I've found a
> better way to communicate intent to the user.  How do I do this?
> Answer: Go ahead and set displayName.  NiFi will continue to use the
> 'name' for serialization/config saving but will use the displayName
> for what is shown to the user in the UI.
> 
> 3) I would like to support locale sensitive representations of my
> property name.  How can I do this?
> Answer: Add a resource bundle with entries for your property 'name'
> value.  This means the resource bundle needs to exist and your
> property 'name' must adhere to resource bundle key naming requirements
> [1].  If this is supplied and can be looked up then this will be used
> and otherwise will fallback to using displayName value if present and
> otherwise will fallback to using the value of 'name'.
> 
> And in any event we still need to better document/articulate this
> model as the root of this thread was that we hadn't effectively
> communicated the existence of displayName.  I agree this discussion
> ended up getting us to a great place though as we should all strive to
> support internationalization.
> 
> With an approach like this I am onboard.  I think this balances our
> goals of having a simple to use API but also allows those who want to
> support multiple locales to do so cleanly.
> 
> Thanks
> Joe
> 
> [1] https://docs.oracle.com/javase/tutorial/i18n/resbundle/propfile.html
> 
> On Fri, May 6, 2016 at 9:33 AM, Brandon DeVries  wrote:
>> +1.  I like that better.  Deprecate displayName(), and set it
>> "automatically" based on the locale from properties.  The name of the
>> property (which should never change) is the key into the ResourceBundle.
>> 
>> Brandon
>> 
>> 
>> On Fri, May 6, 2016 at 9:24 AM Matt Burgess  wrote:
>> 
>>> Same here. Internationalization is often implemented as properties
>>> files/resources, where you possibly load in a file based on the system
>>> setting for Locale (like processor_names_en_US.properties). If we were
>>> to do internationalization this way (i.e. a non-code based solution,
>>> which is more flexible), then ironically displayName() might/should be
>>> deprecated in favor of using the value of name() as the key in a
>>> properties/lookup file; the corresponding value would be the
>>> appropriate locale-specific "display name".
>>> 
>>> Brandon's links show this approach, I have seen this i18n approach on
>>> other projects/products and it seems to work pretty well.
>>> 
>>> Regards,
>>> Matt
>>> 
>>> On Fri, May 6, 2016 at 9:11 AM, Joe Witt  wrote:
 I share Bryan's perspective.
 
 On Fri, May 6, 2016 at 9:05 AM, Bryan Bende  wrote:
> I might just be resistant to change, but I am still on the fence a
>>> little
> bit...
> 
> In the past the idea has always been you start out with name, and if you
> later need to change what is displayed in the UI, then you add
>>> displayName
> after the fact.
> 
> It sounds like the issue is that a lot of people don't know about
> displayName, so I am totally in favor of increasing awareness through
> documentation,
> but I'm struggling with telling people that they should set displayName
>>> as
> the default behavior.
> 
> For code that is contributed to NiFi, I would expect the PMC/committer
> doing the review/merging to notice if an existing property name was
>>> being
> changed and point that out in the review.
> If it was someone else'
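
As a rough sketch of the lookup order Joe describes above (the bundle name and 
fallback logic here are illustrative assumptions, not an agreed design; uses 
java.util.ResourceBundle and java.util.MissingResourceException):

    // Resolve the label to show in the UI for a property descriptor.
    String resolveLabel(PropertyDescriptor descriptor, Locale locale) {
        try {
            // 3) locale-specific label from a resource bundle, keyed by the stable 'name'
            return ResourceBundle.getBundle("PropertyLabels", locale)
                    .getString(descriptor.getName());
        } catch (MissingResourceException e) {
            // 2) fall back to displayName if the developer provided one,
            // 1) and finally to the stable 'name' itself
            return descriptor.getDisplayName() != null
                    ? descriptor.getDisplayName()
                    : descriptor.getName();
        }
    }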

Re: [ANNOUNCE] New Apache NiFi PMC Member - Joe Percivall

2016-05-17 Thread Oleg Zhurakousky
Congrats Joe!
> On May 16, 2016, at 11:00 PM, Joe Witt  wrote:
> 
> On behalf of the Apache NiFI PMC, I am very pleased to announce that
> Joe Percivall has accepted the PMC's invitation to join the Apache
> NiFi PMC.  We greatly appreciate all of Joe's hard work and generous
> contributions to the project. We look forward to his continued
> involvement in the project.
> 
> Joe demonstrates 'community over code', which is central to the Apache Way, 
> in his many wonderful engagements with members of the community be
> they developers building new features or users wanting to better
> understand NiFi.
> 
> Welcome and congratulations!
> 



Re: [DISCUSS] nifi meetup/hackathon ground rules?

2016-05-17 Thread Oleg Zhurakousky
In any event, I think creating a JIRA ticket (regardless of how right/wrong it 
may be) would be appropriate in such settings, as well as producing a fix and a 
PR/patch, essentially allowing it to be vetted by the ASF process. 
On the other hand, I also hear Tony's point about a "short turnaround". Certain 
issues may be very obvious (e.g., NPEs, spelling, misconfiguration, etc.), and I 
think we need to show a bit more flexibility while fostering community 
participation. Based on personal experience in previous ventures, I strongly 
believe that a bug fixed jointly in such settings and merged right away draws 
more interest from attendees to the specific technology and goes to the heart 
of community participation/collaboration (everyone present is part of that 
fix - a true community fix). After all, the attendees of such an event are the 
community, and if there is a consensus among all, that should be enough for an 
implied +1. Don't you agree?

Cheers
Oleg

> On May 16, 2016, at 9:38 PM, Joe Witt  wrote:
> 
> Tony: Good point and probably fair game.  It would need to be really
> urgent and really specific I think though.  Otherwise no need to rush.
> 
> Matt: Yeah that is a great idea.
> 
> AdamL: Thanks for offering to setup a zoom.  I'll try to do it and if
> good will send details on meetup invite.  If not I'll ping you.
> 
> Thanks
> Joe
> 
> On Mon, May 16, 2016 at 9:15 PM, Tony Kurc  wrote:
>> Joe - if a bug is discovered during the hackathon and patch developed,
>> would this be an appropriate short turnaround JIRA/patch/merge type
>> situation?
>> On May 16, 2016 5:31 PM, "Matt Burgess"  wrote:
>> 
>>> One thing that could be done to enable demos while still having
>>> PRs/patches go through the Apache process is to have the Organizer create a
>>> hackathon branch off their fork, and merge in any patches/PRs that are
>>> demo-able, then show the hackathon goodness at the end. Then the regular
>>> process (Jira, review, etc) applies to the Apache branch(es) for inclusion
>>> into the product.
>>> 
>>> 
 On May 16, 2016, at 5:03 PM, Joe Witt  wrote:
 
 Team,
 
 I wanted to shoot out a note to gather input on some rules of
 engagement so to speak for running a nifi hackathon/meetup.  A few of
 us in the DC/MD area have one planned soon [1].
 
 What I'd like to send out to the meetup group are some ground rules
 for how the meetup will operate.  It is important because not everyone
 will be familiar with the Apache Way, it is being hosted in a vendor
 space, and because in general we want to make sure things like this
 can occur more in the future which means we want this to go well!
 
 Key points to make follow but if you have others please share:
 1) Decisions cannot be made in such a setting.  Rather the discussions
 that happen and the ideas and opinions formed in them need to be
 captured on the appropriate feature proposals, JIRAs, mailing-list
 discussions so others can participate.  This includes feature ideas,
 code ideas, roadmap items, etc..
 
 2) We cannot just make up JIRAs, whip up some code, +1 and merge it
 during the meetup.  If something is worthy of a RTC, which is
 basically all things code, then it needs to be given time for folks
 not sitting at the meetup to participate in - that is it should be
 treated like any other contribution.
 
 3) Notes/summary of the meetup should occur and be made available to
 the community.
 
 [1] http://www.meetup.com/ApacheNiFi/events/230804255/
 
 Thanks
 Joe
>>> 
> 



Re: [DISCUSS] Apache NiFi 0.7.0 and 1.0.0

2016-05-17 Thread Oleg Zhurakousky
Agreed! I would like to see 0.7 within 2-3 weeks as there are a lot of 
improvements and new features/components in it already, and would like to give 
it some miles before 1.0.

Oleg
> On May 17, 2016, at 4:02 PM, James Wing  wrote:
> 
> I'm definitely in favor of releasing 0.7.0, but I don't think we need be
> rigid about the schedule.  If delaying 0.7.0 a few weeks (2-4?) helps pace
> us towards a 1.0 in mid- to late-Summer, that seems reasonable to me.  Do
> we believe that is still a likely target?
> 
> Thanks,
> 
> James
> 
> On Tue, May 17, 2016 at 7:30 AM, Joe Witt  wrote:
> 
>> Team,
>> 
>> Want to start zeroing in on the details of the next releases.  We had
>> a good set of discussions around this back in January and have since
>> been executing along this general path [1].
>> 
>> On the 0.x line the next release would be 0.7.0.  There does appear to
>> be a lot of useful improvements/features/fixes there now and it is
>> time to do a release according to our general 6-8 week approach.
>> However, given all the effort going into 1.x I'd like to get a sense
>> of what the community preference is.
>> 
>> On the 1.0 line the release is coming into focus.  Some things have
>> moved into 1.x and some things look like they'd slide to the right of
>> 1.x as is to be expected.  For example distributed durability (HA
>> Data) looks like a good thing to do post 1.0 given the substantive
>> changes present from the new HA clustering approach and multi-tenant
>> authorization.  I'd also like to dive in and liberally apply Apache
>> Yetus annotations [2] to all the things so we can be really explicit
>> about what parts we can more freely evolve going forward.  We've been
>> a bit awkwardly hamstrung thus far without these so they should help
>> greatly to better convey intent.
>> 
>> For those really interested in things coming in the 1.0 release please
>> take a look through the JIRAs currently there and provide comments on
>> what is important to you, what you'd like to see moved out, in, etc..
>> [3].  At this point there are still a lot of things which will likely
>> need to move out to allow the release to occur in a timely fashion.
>> 
>> Also, keep in mind our stated release line/support model as found here [4].
>> 
>> [1]
>> http://mail-archives.apache.org/mod_mbox/nifi-dev/201601.mbox/%3CCALJK9a4dMw9PyrrihpPwM7DH3R_4v8b%3Dr--LDhK7y5scob-0og%40mail.gmail.com%3E
>> 
>> [2]
>> https://yetus.apache.org/documentation/0.2.1/audience-annotations-apidocs/
>> 
>> [3]
>> https://issues.apache.org/jira/browse/NIFI-1887?jql=fixVersion%20%3D%201.0.0%20AND%20project%20%3D%20NIFI
>> 
>> [4]
>> https://cwiki.apache.org/confluence/display/NIFI/Git+Branching+and+Release+Line+Management
>> 
>> Thanks
>> Joe
>> 



Re: NIFI & IBM MQ

2016-05-24 Thread Oleg Zhurakousky
Christian

Get/PutJMS* processors are effectively deprecated since they only work with 
ActiveMQ.
The new processors that you should use, which were specifically developed to 
support multiple JMS providers (tested with IBM MQ, Tibco, and ActiveMQ), are 
PublishJMS and ConsumeJMS.

Give it a try and let us know if you need more help.
Cheers
Oleg

> On May 24, 2016, at 5:47 AM, christianv 
>  wrote:
> 
> Hi,
> 
> I am in need of help. I am trying to use GetJMSQueue to connect to an IBM MQ 
> queue and I need an example of the property settings; I am unable to compile 
> the UTI.
> 
> Kind Regards
> 
> 
> 
> --
> View this message in context: 
> http://apache-nifi-developer-list.39713.n7.nabble.com/NIFI-IBM-MQ-tp10651.html
> Sent from the Apache NiFi Developer List mailing list archive at Nabble.com.
> 



Re: NIFI & IBM MQ

2016-05-25 Thread Oleg Zhurakousky
Christian

I’d suggest looking at the additional documentation of this new component, as it 
provides some level of detail, but it would be interesting to get your 
feedback as to what you think is missing/confusing.

In any event, here are some details. You need to configure a ControllerService 
for the JMS Connection Factory and a ConsumeJMS and/or PublishJMS processor.
Attached are the images showing sample configuration.
Obviously the Destination names, URIs etc would have to be changed to fit your 
environment.

Let me know how it goes.

Cheers
Oleg




On May 25, 2016, at 4:09 AM, christianv 
<christian.vandenhee...@standardbank.co.za> wrote:

Tried it. Which class do I use from IBM (i.e., com.ibm) for the 
JMSConnectionFactoryProvider setting?



--
View this message in context: 
http://apache-nifi-developer-list.39713.n7.nabble.com/NIFI-IBM-MQ-tp10651p10679.html
Sent from the Apache NiFi Developer List mailing list archive at 
Nabble.com.




Re: NIFI & IBM MQ

2016-05-25 Thread Oleg Zhurakousky
Ok, seems like this list doesn’t like images (at least for me), so here is a 
text version of what was in them.

Processor (required properties):
- Destination Name: <destination name> (e.g., queue://MYQUEUE)
- Destination Type: QUEUE or TOPIC
- Session Cache size: 1 (read its doc for more details)
- Connection Factory service: <name of the Controller Service configured below> 
(e.g., IBMMQ)

Controller Service (e.g., IBMMQ)
- MQ Connection Factory Implementation: com.ibm.mq.jms.MQConnectionFactory
- MQ Client Libraries path: <path to the IBM MQ client JARs>
- Broker URI: <host:port> (e.g., foo.bar:1234)
Then you would have to configure IBM-specific properties as Dynamic Properties:
- channel: <channel name> (e.g., SYSTEM.ADMIN.SVRCONN)
- queueManager: <queue manager name> (e.g., FOO)
- transportType: <transport type> (make sure you put '1' as the 
value, which states that TCP/IP will be used)

You can get more from the IBM docs; additional properties could all be 
provided as dynamic properties.
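
For reference, the JMSConnectionFactoryProvider instantiates the configured 
implementation class and applies each dynamic property through the matching 
setter, so the configuration above corresponds roughly to the following 
hand-written sketch against IBM's client API (not NiFi's actual 
reflection-based code; exception handling omitted):

    import com.ibm.mq.jms.MQConnectionFactory;

    MQConnectionFactory factory = new MQConnectionFactory();
    factory.setHostName("foo.bar");              // derived from the Broker URI
    factory.setPort(1234);
    factory.setChannel("SYSTEM.ADMIN.SVRCONN");  // dynamic property 'channel'
    factory.setQueueManager("FOO");              // dynamic property 'queueManager'
    factory.setTransportType(1);                 // dynamic property 'transportType' = 1 (TCP/IP)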

Cheers
Oleg






Re: [ANNOUNCE] New Apache NiFi Committer Pierre Villard

2016-05-25 Thread Oleg Zhurakousky
Wow! Finally, very excited!!! Long overdue! Congrats and welcome Pierre!

Oleg 

> On May 25, 2016, at 11:50, Joe Witt  wrote:
> 
> On behalf of the Apache NiFi PMC, I am pleased to announce that Pierre
> Villard has accepted the PMC's invitation to become a committer on the
> Apache NiFi project.  Several months ago Pierre began contributing to
> NiFi in a variety of important ways and quickly expanded those
> contributions to include code, mailing list, release votes, and more.
> We thank him for his efforts thus far and look forward to his
> continued involvement in the project.
> 
> Welcome and congratulations!
> 


Re: Nifi Java 1.7 support

2016-05-31 Thread Oleg Zhurakousky
Yes, that would be with NiFi 1.0.0 

Cheers
Oleg

> On May 31, 2016, at 4:51 PM, M Singh  wrote:
> 
> Hi Folks:
> 
> Just wanted to find out if there is a timeline when Nifi will sunset support 
> for JDK 1.7.
> Thanks
> Mans



Re: [DISCUSS] Apache NiFi 0.7.0 and 1.0.0

2016-06-01 Thread Oleg Zhurakousky
 lines and extra work increases contributor and reviewer burden so we
>>>>>>> should be mindful of that as it is a dragging force.  We also need to
>>>>>>> keep in mind that with 1.x we have Java 8 as a minimum and so there
>>>>>>> are cases which will not apply to both and we don't want folks to
>>>>>>> avoid using Java 8 features just so it can apply to both.
>>>>>>> 
>>>>>>> My preference is that we have 0.7 as the last planned feature release
>>>>>>> in 0.x and with that in mind we need to choose to have it be a bit
>>>>>>> before, a bit after, or at the same time as the 1.x release.  I
>>>>>>> personally am comfortable with what I proposed for 0.7 vs 1.0 timing
>>>>>>> but I am fine if the consensus is to release the last 0.x and 1.0 at
>>>>>>> the same time.  Just hoping to avoid needing to have another feature
>>>>>>> release on 0.x after 0.7 other than some special request that might
>>>>>>> come up later (which is also discussed in the support doc).
>>>>>>> 
>>>>>>> I also agree the release process for 1.0 will be significant as it
>>>>>>> will include important new features.  Definitely need folks testing
>>>>>>> out and providing feedback on the features early and often.
>>>>>>> 
>>>>>>> Thanks
>>>>>>> Joe
>>>>>>> 
>>>>>>>> On Tue, May 17, 2016 at 6:20 PM, Michael Moser 
>>>>>>> wrote:
>>>>>>> 
>>>>>>> The way I read the release support document, I don't think the
>>> feature
>>>>>>> cut-off for the 0.x branch happens when we confirm a release date for
>>>>>> 1.0,
>>>>>>> I think it occurs once we actually release 1.0.  Maybe the cut-off
>>> can
>>>>>>> happen once we declare the first 1.0 release candidate.  I'm sure we
>>>>>> will
>>>>>>> spend significant time doing testing and bug fixes on 1.0 release
>>>>>>> candidates.  If I recall, we spent 2 weeks on 0.6.1 release
>>> candidates.
>>>>>>> 
>>>>>>> -- Mike
>>>>>>> 
>>>>>>> 
>>>>>>> On Tue, May 17, 2016 at 6:04 PM, Joe Witt 
>>> wrote:
>>>>>>> 
>>>>>>> I believe that is right Andy.  The support guide articulates that we
>>>>>>> could do a feature release upon request if there was some specific
>>>>>>> need a community member had but that otherwise the only releases on
>>> an
>>>>>>> older line still supported would be focused on security/data loss
>>> type
>>>>>>> items.
>>>>>>> 
>>>>>>> Thanks
>>>>>>> Joe
>>>>>>> 
>>>>>>> On Tue, May 17, 2016 at 4:58 PM, Andy LoPresto >>> 
>>>>>>> wrote:
>>>>>>> 
>>>>>>> This schedule seems appropriate to me. Once 0.7.0 is released and we
>>>>>>> 
>>>>>>> confirm
>>>>>>> 
>>>>>>> the release date for 1.0, feature development is completely targeted
>>> to
>>>>>>> 
>>>>>>> 1.0,
>>>>>>> 
>>>>>>> correct? Security and data loss bug fixes would still be backported,
>>> but
>>>>>>> 
>>>>>>> new
>>>>>>> 
>>>>>>> features would not.
>>>>>>> 
>>>>>>> Andy LoPresto
>>>>>>> alopre...@apache.org
>>>>>>> alopresto.apa...@gmail.com
>>>>>>> PGP Fingerprint: 70EC B3E5 98A6 5A3F D3C4  BACE 3C6E F65B 2F7D EF69
>>>>>>> 
>>>>>>> On May 17, 2016, at 1:19 PM, Joe Witt  wrote:
>>>>>>> 
>>>>>>> Ok - i'm good with an 0.7 release too and think it is a good idea.  I
>>>>>>> am happy to RM the release.
>>>>>>> 
>>>>>>> I'd like to select a date at which we're happy to call the 0.x line
>>>>>>> then feature complete which means 0.7 would be the last feature
>>>

Re: [DISCUSS] - Markdown option for documentation artifacts

2016-06-07 Thread Oleg Zhurakousky
Personally I am ok either way; however, the question I have is about 
consistency, and how different artifacts written in HTML will differ 
(look-and-feel) from the ones created using Markdown and then transformed. 

Oleg

> On Jun 7, 2016, at 2:28 PM, Bryan Rosander  wrote:
> 
> Hey all,
> 
> When writing documentation (e.g. the additionalDetails.html for a
> processor) it would be nice to have the option to use Markdown instead of
> html.
> 
> I think Markdown is easier to read and write than raw HTML and for simple
> cases does the job pretty well.  It also has the advantage of being able to
> be translated into other document types easily and it would be rendered by
> default in Github when the file is clicked.
> 
> There is an MIT-licensed Markdown maven plugin (
> https://github.com/walokra/markdown-page-generator-plugin) that seems like
> it might work for translating additionalDetails.md (and others) into an
> equivalent html page.
> 
> Thanks,
> Bryan Rosander



Re: unwedgeable flow caused by "Cannot create Provenance Event Record because FlowFile UUID is not set"

2016-06-13 Thread Oleg Zhurakousky
Chris

The fact that your flow is hosed means this is a bug. IMHO a corrupted FlowFile 
should not render NiFi unavailable. So please raise the JIRA.

Also, I think the real question here is not how the FlowFile got corrupted, but 
the fact that a corrupted FlowFile rendered NiFi unavailable; that is the core 
issue. We should probably discuss what we should do with a corrupted FlowFile 
(drop/move/delete, etc.), but I don’t think it should stop the world.

In any event, raise the JIRA and we can discuss it there.

Cheers
Oleg

> On Jun 13, 2016, at 3:01 PM, McDermott, Chris Kevin (MSDU - 
> STaTS/StorefrontRemote)  wrote:
> 
> 2016-06-13 17:33:30,275 ERROR [Timer-Driven Process Thread-4] 
> o.a.n.p.attributes.UpdateAttribute
> java.lang.IllegalStateException: Cannot create Provenance Event Record 
> because FlowFile UUID is not set
>at 
> org.apache.nifi.provenance.StandardProvenanceEventRecord$Builder.assertSet(StandardProvenanceEventRecord.java:700)
>  ~[nifi-data-provenance-utils-0.7.0-SNAPSHOT.jar:0.7.0-SNAPSHOT]
>at 
> org.apache.nifi.provenance.StandardProvenanceEventRecord$Builder.build(StandardProvenanceEventRecord.java:721)
>  ~[nifi-data-provenance-utils-0.7.0-SNAPSHOT.jar:0.7.0-SNAPSHOT]
>at 
> org.apache.nifi.provenance.StandardProvenanceEventRecord$Builder.build(StandardProvenanceEventRecord.java:412)
>  ~[nifi-data-provenance-utils-0.7.0-SNAPSHOT.jar:0.7.0-SNAPSHOT]
>at 
> org.apache.nifi.controller.repository.StandardProcessSession.updateProvenanceRepo(StandardProcessSession.java:634)
>  ~[nifi-framework-core-0.7.0-SNAPSHOT.jar:0.7.0-SNAPSHOT]
>at 
> org.apache.nifi.controller.repository.StandardProcessSession.commit(StandardProcessSession.java:295)
>  ~[nifi-framework-core-0.7.0-SNAPSHOT.jar:0.7.0-SNAPSHOT]
>at 
> org.apache.nifi.controller.repository.StandardProcessSession.commit(StandardProcessSession.java:283)
>  ~[nifi-framework-core-0.7.0-SNAPSHOT.jar:0.7.0-SNAPSHOT]
>at 
> org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:28)
>  ~[nifi-api-0.7.0-SNAPSHOT.jar:0.7.0-SNAPSHOT]
>at 
> org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1059)
>  ~[nifi-framework-core-0.7.0-SNAPSHOT.jar:0.7.0-SNAPSHOT]
>at 
> org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:136)
>  [nifi-framework-core-0.7.0-SNAPSHOT.jar:0.7.0-SNAPSHOT]
>at 
> org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
>  [nifi-framework-core-0.7.0-SNAPSHOT.jar:0.7.0-SNAPSHOT]
>at 
> org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:123)
>  [nifi-framework-core-0.7.0-SNAPSHOT.jar:0.7.0-SNAPSHOT]
>at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_45]
>at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) 
> [na:1.8.0_45]
>at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>  [na:1.8.0_45]
>at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>  [na:1.8.0_45]
>at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  [na:1.8.0_45]
>at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_45]
>at java.lang.Thread.run(Thread.java:745) [na:1.8.0_45]
> 
> I suspect the error is due to some file corruption that I may have caused 
> through some fault testing (hard reboot.)
> 
> My problem at this point is that it completely wedges the flow. I cannot 
> examine or empty the incoming connection, and the processor is not discarding 
> the bad file.  Any suggestions on how to get past this?
> 
> On the file corruption, are there any known issues with NiFi where file 
> corruption could occur when SIGKILLing NiFi? Not that that is what I did; I 
> was doing a hard reboot, which could make the file system configuration 
> suspect, but if the problem is further up the stack I’d like to be aware of 
> it.
> 
> 
> Thanks,
> 
> Chris McDermott
> 
> Remote Business Analytics
> STaTS/StoreFront Remote
> HPE Storage
> Hewlett Packard Enterprise
> Mobile: +1 978-697-5315
> 
> https://www.storefrontremote.com



Re: Apache NiFi 0.7.0 Release date ?

2016-06-13 Thread Oleg Zhurakousky
Unfortunately I just raised another relatively trivial JIRA which affects 0.7, 
but I should have it ready by tomorrow.
Otherwise I am +1 to get 0.7 out asap 

Sent from my iPhone

> On Jun 13, 2016, at 16:47, Joe Percivall  
> wrote:
> 
> Looks like time to start finishing up the remaining 17 tickets and getting 
> together a release. I will volunteer to be Release Manager. I'll start 
> another email thread outlining the remaining tickets and their statuses.
> 
> Joe
> - - - - - - Joseph Percivall
> linkedin.com/in/Percivall
> e: joeperciv...@yahoo.com
> 
> 
> 
> 
> On Tuesday, June 7, 2016 11:26 AM, Matt Burgess  wrote:
> There was a whitespace issue in the test HL7 file. Joey moved the
> content into the test class itself and removed the file. I reviewed
> and merged to 0.x and master, so things should be back to normal.
> Sorry for the hiccup.
> 
> Regards,
> Matt
> 
> 
>> On Tue, Jun 7, 2016 at 10:03 AM, Matt Burgess  wrote:
>> I merged an HL7 PR last night, I will take a look. The tests passed
>> for me but maybe I screwed something up with the patch application.
>> 
>>> On Tue, Jun 7, 2016 at 10:00 AM, Andre  wrote:
>>> + 1 on the build errors on HL7.
>>> 
>>> Seems to break things in here as well.
>>> 
>>> Cheers
>>> 
 On Tue, Jun 7, 2016 at 11:58 PM, Ryan H  
 wrote:
 
 I did build the 0.x branch for the latest 0.7.0... It works when you skip
 building the tests.
> 


Re: unwedgeable flow caused by "Cannot create Provenance Event Record because FlowFile UUID is not set"

2016-06-14 Thread Oleg Zhurakousky
Thank you Chris

Will be addressed shortly
Oleg

> On Jun 14, 2016, at 9:56 AM, McDermott, Chris Kevin (MSDU - 
> STaTS/StorefrontRemote)  wrote:
> 
> Thanks, Oleg. I’ve created NIFI-2015.
> 
> 
> Regards,
> 
> Chris McDermott
> 
> Remote Business Analytics
> STaTS/StoreFront Remote
> HPE Storage
> Hewlett Packard Enterprise
> Mobile: +1 978-697-5315
> 
> https://www.storefrontremote.com
> 



Re: GetJMSQueue does not detect dead connections

2016-06-16 Thread Oleg Zhurakousky
Chris 

Given that we are deprecating Get/PutJMS* in favor of PublishJMS/ConsumeJMS, I’d 
suggest you start using those instead.

Cheers
Oleg


> On Jun 16, 2016, at 1:34 PM, McDermott, Chris Kevin (MSDU - 
> STaTS/StorefrontRemote)  wrote:
> 
> Folks,
> 
> I’ve been trying to test my GetJMSQueue configuration so that it detects a 
> dead broker connection and fails over to an alternate broker.  When I say 
> dead connection I mean a TCP connection that has not been closed but is no 
> longer passing traffic.  In the real world this typically happens when the 
> broker server crashes and so does not reset the open connections.  For my test 
> case I am using iptables to block traffic.
> 
> This is the connection URI I am using
> 
> failover:(tcp://host2:61616,tcp://host1:61616)?randomize=false&timeout=3000&nested.soTimeout=3&nested.soWriteTimeout=3&startupMaxReconnectAttempts=1&maxReconnectAttempts=0
> 
> The key parameters here are soTimeout=3 and soWriteTimeout=3
> 
> These set a 30 second timeout on socket reads and writes.  I’m not sure if 
> these are necessary since I believe the JMSConsumer class specifies its own 
> timeout according to the processor configuration.  The important thing to 
> note is that when one of these timeouts occurs the AMQ client does not close 
> the connection.
> 
> I believe the deficiency here is that JMSConsumer does not consider the 
> possibility that the connection is dead.   The problem with this is that an 
> attempt to reconnect and fail over to an alternate broker is not made.
> 
> I think the fix would involve counting the number of sequential empty 
> responses on the connection and then closing the connection once that number 
> crosses some threshold.  Then a subsequent onTrigger() would cause a new 
> connection attempt.
> 
> Thoughts?
> 
> Chris McDermott
> 
> Remote Business Analytics
> STaTS/StoreFront Remote
> HPE Storage
> Hewlett Packard Enterprise
> Mobile: +1 978-697-5315
> 
> https://www.storefrontremote.com
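
For reference, the fix Chris sketches could look roughly like the following 
inside the consumer's receive loop (an illustrative fragment against the 
javax.jms API; the field names, the threshold, and the exception handling 
omitted here are all assumptions, not NiFi code):

    private int emptyReceives = 0;
    private static final int MAX_EMPTY_RECEIVES = 10;  // assumed threshold

    Message message = consumer.receive(timeoutMillis);
    if (message == null) {
        // Nothing arrived before the timeout. On a dead TCP connection this is
        // indistinguishable from an idle queue, so count consecutive occurrences.
        if (++emptyReceives >= MAX_EMPTY_RECEIVES) {
            connection.close();  // forces the failover transport to reconnect on the next trigger
            emptyReceives = 0;
        }
    } else {
        emptyReceives = 0;       // healthy connection; reset the counter
    }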



Re: GetJMSQueue does not detect dead connections

2016-06-16 Thread Oleg Zhurakousky
Yes, you can probably look at the test case for it since it uses an embedded 
ActiveMQ broker.

Let me know if you need more help with it.

Cheers
Oleg
> On Jun 16, 2016, at 2:50 PM, McDermott, Chris Kevin (MSDU - 
> STaTS/StorefrontRemote)  wrote:
> 
> Thanks, Oleg.  
> 
> Do you have an example of how to configure the JMSConnectionFactoryProvider 
> to work with AMQ?
> 
> The documentation says that the MQ Client Libraries path is optional with 
> org.apache.activemq.ActiveMQConnectionFactory, but I am finding that is not the 
> case.
> 
> Thanks,
> 
> Chris McDermott
> 
> Remote Business Analytics
> STaTS/StoreFront Remote
> HPE Storage
> Hewlett Packard Enterprise
> Mobile: +1 978-697-5315
> 
> https://www.storefrontremote.com
> 



Re: GetJMSQueue does not detect dead connections

2016-06-16 Thread Oleg Zhurakousky
Chris

That is correct.
The idea was to make sure that we can support multiple clients and multiple 
vendors, since Get/PutJMS only supported ActiveMQ (and only one version of it). 
The new JMS support allows you to use any JMS vendor, and the only extra work 
we are asking you to do is to provide the ConnectionFactory JAR(s).

Does that clarify?

Also, the tests I was referring to are 
https://github.com/apache/nifi/tree/master/nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/test/java/org/apache/nifi/jms/processors

Let me know if you need more help
Cheers
Oleg
On Jun 16, 2016, at 3:08 PM, McDermott, Chris Kevin (MSDU - 
STaTS/StorefrontRemote) <chris.mcderm...@hpe.com> wrote:

So does that mean that I cannot use the AMQ client packaged with NiFi but 
rather provide my own?

Sorry if I am being obtuse.

Chris McDermott

Remote Business Analytics
STaTS/StoreFront Remote
HPE Storage
Hewlett Packard Enterprise
Mobile: +1 978-697-5315

https://www.storefrontremote.com








Re: GetJMSQueue does not detect dead connections

2016-06-16 Thread Oleg Zhurakousky
Yes, that is a documentation bug.

Chris would you mind raising a JIRA or I can do it.

Cheers
Oleg

> On Jun 16, 2016, at 3:15 PM, Joe Witt  wrote:
> 
> Oleg - so to Chris' other comment about docs suggesting the ActiveMQ
> lib path is not needed - is that a doc bug?
> 



Re: [DISCUSS] Closing in on the Apache NiFi 0.7.0 Release

2016-06-17 Thread Oleg Zhurakousky
NIFI-2009 is all done (including 0.x)

Sent from my iPhone

> On Jun 17, 2016, at 14:16, Joe Percivall  
> wrote:
> 
> Team,
> 
> We are continuing to have good progress towards the 0.7.0 release. Tagged as 
> "0.7.0" in Jira and still pending, we currently have 9 tickets that are all 
> patch available. 
> 
> As a note, there seem to be a number of unversioned "patch available" tickets 
> in Jira. I am going to take a look at them to see if any can easily make it 
> into 0.7.0 (or are critical) but unless if they are critical I am not going 
> to hold up the release for them. 
> 
> The current status for those tagged as 0.7.0:
> 
> - "Processors could be started before the Controller Services that they 
> depend on" NIFI-2032[1] [status] Critical bug that was added two days ago. 
> Oleg Zhurakousky posted a PR and needs review.
> 
> 
> - "Allow user to specify file filter regex when unpacking zip/tar archives" 
> NIFI-1568[2] [status] Matt Burgess finished reviewing and is waiting for 
> Ricky Saltzer to address the final comment.
> 
> - "Create PutSlack processor" NIFI-1578 [3] [status] Matt Burgess found one 
> final issue before it is committed. Waiting for Adam Lamar to address.
> 
> -  "Allow empty Content-Type in InvokeHTTP processor" NIFI-1620[4] [status] 
> Pierre Villard addressed Adam Taft's comments and is waiting for a final review.
> 
> - "Support Custom Properties in Expression Language" NIFI-1974[5] [status] No 
> change, waiting for Mark Payne to give a final review.
> 
> - "StandardProcessNode and AbstractConfiguredComponent duplicate instance 
> variable "annotationData"" NIFI-2009[6] [status] Was merged into master but 
> merge conflicts need to be resolved before it can be merged into 0.x
> - "Create a processor to extract WAV file characteristics" NIFI-615[7] 
> [status] Joe Skora posted a new branch addressing the comments but needs to 
> rebase and update the PR. The branch needs a final review from Joe Percivall (me).
> 
> - "Add SNMP processors" NIFI-1537[8] [status] PR is rebased and Oleg 
> Zhurakousky will be reviewing it.
> 
> - "Create FlowDebugger processor" NIFI-1829[9] [status] Joe Skora added 
> another commit and rebased. Tony won't be able to finalize review until the 
> weekend, so Michael Moser volunteered to finish the review.
> 
> 
> Also NIFI-1850 is marked as 0.7.0 and "Patch Available" in Jira but it was 
> already merged into 0.x and work is being done to get it into master. I 
> will not reflect it here.
> 
> [1] https://issues.apache.org/jira/browse/NIFI-2032
> [2] https://issues.apache.org/jira/browse/NIFI-1568
> [3] https://issues.apache.org/jira/browse/NIFI-1578
> [4] https://issues.apache.org/jira/browse/NIFI-1620
> [5] https://issues.apache.org/jira/browse/NIFI-1974
> [6] https://issues.apache.org/jira/browse/NIFI-2009
> [7] https://issues.apache.org/jira/browse/NIFI-615
> [8] https://issues.apache.org/jira/browse/NIFI-1537
> [9] https://issues.apache.org/jira/browse/NIFI-1829
> 
> Joe
> - - - - - - 
> Joseph Percivall
> linkedin.com/in/Percivall
> e: joeperciv...@yahoo.com
> 
> 
> 
> 
> On Wednesday, June 15, 2016 10:56 AM, Joe Percivall 
>  wrote:
> Team,
> 
> There was a lot of great progress yesterday, we closed or pushed 6 tickets. 
> Also, two tickets were added that are either critical or will be finished shortly. 
> The status of the remaining 12 tickets are below:
> 
> - "Corrupted flow file leads to a wedged flow" NIFI-2015[1] Added yesterday. 
> Christopher McDermott, Mark Payne and Oleg Zhurakousky have been discussing 
> the bug but no resolution yet.
> 
> - "Allow user to specify file filter regex when unpacking zip/tar archives" 
> NIFI-1568[2] [status] Matt Burgess is continuing to review
> 
> - "Create PutSlack processor" NIFI-1578 [3] [status] Actively being worked on 
> by contributor and reviewers
> 
> -  "Allow empty Content-Type in InvokeHTTP processor" NIFI-1620[4] [status] 
> No change since yesterday, still waiting for final review by Adam Taft. 
> Commented asking if Adam would like me to finish the review
> 
> - "If unable to write to Content Repository, Process Session should 
> automatically roll itself back" NIFI-1644[5] [status] No progress, commented 
> asking Mark Payne (reporter) to see if it can slide.
> 
> - "Misconfigured MonitorMemory ReportingTask can not be stopped" NIFI-1690[6] 
> [status] Actively being worked on by contributor and reviewer
> 
> - "Support Custom Properties in Expression Language" NIFI-1974[7] [status] 
> New yesterday. Yolanda Davis submitted the PR 

Re: ControllerService Enabling and Processor onPropertyModified

2016-06-20 Thread Oleg Zhurakousky
Indeed, this sounds like https://issues.apache.org/jira/browse/NIFI-2032. The PR 
for it is out at https://github.com/apache/nifi/pull/541, so if you get a 
chance to try it, please let us know. Hopefully it will be merged soon.

Cheers
Oleg

On Jun 20, 2016, at 4:25 PM, Michael Moser  wrote:

Michael,

You may be encountering the bug NIFI-2032 [1] which exists in NiFi 0.6.1.

[1] - https://issues.apache.org/jira/browse/NIFI-2032

-- Mike



On Mon, Jun 20, 2016 at 12:20 PM, Michael D. Coon  wrote:

All,
  Before I get too deep in submitting Jira tickets, etc. I'm wondering if
this is expected behavior. I'm using NiFi 0.6.1.

  I have a ControllerService that I reference as a service property on my
Processor. The Processor, in turn, uses the ControllerService's internal
configuration state to determine Processor output relationships. But, it
appears that at NiFi startup, I'm given the ControllerService before it is
actually enabled. When I try to invoke the methods to get the
ControllerService's state, it fails (because it's not enabled).
  Two problems I found:
1) Logging of the invocation exception for calling a disabled
ControllerService is being suppressed in this case, which caused this to
take me a full day to track down.
2) Why would I ever be given a service to use without it being fully enabled?
I thought I would just block in my Processor's onPropertyModified method
until the ControllerService was enabled; but it looks like the thread
that's actually enabling the service is the same thread calling my Processor's
onPropertyModified method, so the ControllerService is never enabled until
my Processor's onPropertyModified method is done.
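In other words, something like this can never make progress (a simplified
stand-in, not my actual processor; the serviceEnabled flag models the
enabled state I was polling for):

    import java.util.concurrent.atomic.AtomicBoolean;

    // Simplified illustration of the deadlock: the thread enabling the
    // service is the same thread invoking onPropertyModified, so the flag
    // can never be set while we spin here.
    class BlockingProcessorSketch {
        private final AtomicBoolean serviceEnabled = new AtomicBoolean(false);

        public void onPropertyModified(String property, String oldValue, String newValue) {
            while (!serviceEnabled.get()) {  // never becomes true on this thread
                try {
                    Thread.sleep(100L);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
            }
        }
    }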
Is this expected behavior? If so, can someone please explain the
assumptions around sending non-enabled ControllerServices to my Processor?

Mike




Re: Question about Mock classes in ControllerServiceInitializer

2016-06-30 Thread Oleg Zhurakousky
Guys

FWIW, there are discussion points on the WIKI that may be relevant to 
understanding this issue, especially in relation to what Mark just stated:
https://cwiki.apache.org/confluence/display/NIFI/Component+documentation+improvements

There are also some points in the Extension Registry (linked from the page 
above).
There is also an open JIRA with some more info: 
https://issues.apache.org/jira/browse/NIFI-1384

I think in reality if we declare (what is now obvious) a convention that 
PropertyDescriptors, Relationships, etc. all have to be static variables, then we 
would not need to create throw-away instances of a class just to get its 
documentation.
Anyway, something to think about. . .
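
For example, the convention would just mean declaring the usual builders
statically (a sketch; the specific property and relationship are made up):

    import org.apache.nifi.components.PropertyDescriptor;
    import org.apache.nifi.processor.Relationship;
    import org.apache.nifi.processor.util.StandardValidators;

    public final class ExampleDescriptors {
        // Declared statically, so documentation tooling could read these
        // without ever instantiating the component that uses them.
        public static final PropertyDescriptor HOSTNAME = new PropertyDescriptor.Builder()
                .name("Hostname")
                .description("The host to connect to")
                .required(true)
                .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
                .build();

        public static final Relationship REL_SUCCESS = new Relationship.Builder()
                .name("success")
                .description("FlowFiles that were successfully processed")
                .build();

        private ExampleDescriptors() {
        }
    }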

Cheers
Oleg

On Jun 30, 2016, at 1:59 PM, Mark Payne  wrote:

Joe,

I think the nifi-documentation module is using that to instantiate Processors, 
Controller Services, etc.
so that it can inspect their annotations & call their getPropertyDescriptors() 
methods, etc. when it generates
documentation for the component. Those should not be used for any component 
that is added to the flow.
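
Roughly, the documentation-time pattern is something like this (a sketch of
the idea only; it omits the initialize(...) call with the mock context that
the real code performs first):

    import org.apache.nifi.components.PropertyDescriptor;
    import org.apache.nifi.processor.Processor;

    public final class DocSketch {
        // Instantiate the component solely to read its metadata; the
        // instance is thrown away once the documentation is generated.
        static void describe(final Class<? extends Processor> processorClass) throws Exception {
            final Processor processor = processorClass.newInstance();
            for (final PropertyDescriptor descriptor : processor.getPropertyDescriptors()) {
                System.out.println(descriptor.getName() + ": " + descriptor.getDescription());
            }
        }
    }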



On Jun 30, 2016, at 1:32 PM, Joe Skora  wrote:

Brandon and I have both run into log entries saying something along the
lines of "o.a.n.d.mock.MockProcessorLogger Shutting down server".

Checking the code
<https://github.com/apache/nifi/blob/release-nifi-0.3.0-rc1/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-documentation/src/main/java/org/apache/nifi/documentation/init/ControllerServiceInitializer.java#L52>,
there are references to the MockProcessorLogger and MockConfigurationContext
in the org.apache.nifi.documentation.init.ControllerServiceInitializer,
ProcessorInitializer, and ReportingTaskingInitializer classes.

What are we missing?  Why are there Mock framework classes used in regular
classes?





Re: Question about Mock classes in ControllerServiceInitializer

2016-07-01 Thread Oleg Zhurakousky
Joe

Personally I probably need to look at it more thoroughly to have an intelligent 
answer, but one thing I know for sure (or at least I believe. . .) is that if 
you see any of these Mock* references in the log when you start a NiFi 
instance, it's definitely a bug. 
If that is what you see, I'd suggest raising a JIRA.

Cheers
Oleg

> On Jul 1, 2016, at 7:58 AM, Joe Skora  wrote:
> 
> Mark and Oleg, thanks, it makes sense now.
> 
> But, I'm still trying to track down how we have logs containing "... INFO
> [Finalizer] o.a.n.d.mock.MockProcessorLogger Shutting down server".  It
> still doesn't seem like that should occur.
> 
> 
> On Thu, Jun 30, 2016 at 6:30 PM, Oleg Zhurakousky <
> ozhurakou...@hortonworks.com> wrote:
> 
>> Guys
>> 
>> FWIW, there are discussion points on the WIKI that may be relevant to
>> understanding this issue, especially in relation to what Mark just stated:
>> 
>> https://cwiki.apache.org/confluence/display/NIFI/Component+documentation+improvements
>> 
>> There are also some points in the Extension Registry (linked from the page
>> above).
>> There is also an open JIRA with some more info:
>> https://issues.apache.org/jira/browse/NIFI-1384
>> 
>> I think in reality if we declare (what is now obvious) a convention that
>> PropertyDescriptors, Relationships, etc. all have to be static variables,
>> then we would not need to create throw-away instances of a class just to get
>> its documentation.
>> Anyway, something to think about. . .
>> 
>> Cheers
>> Oleg
>> 
>> On Jun 30, 2016, at 1:59 PM, Mark Payne  wrote:
>> 
>> Joe,
>> 
>> I think the nifi-documentation module is using that to instantiate
>> Processors, Controller Services, etc.
>> so that it can inspect their annotations & call their
>> getPropertyDescriptors() methods, etc. when it generates
>> documentation for the component. Those should not be used for any
>> component that is added to the flow.
>> 
>> 
>> 
>> On Jun 30, 2016, at 1:32 PM, Joe Skora  wrote:
>> 
>> Brandon and I have both run into log entries saying something along the
>> lines of "o.a.n.d.mock.MockProcessorLogger Shutting down server".
>> 
>> Checking the code
>> <https://github.com/apache/nifi/blob/release-nifi-0.3.0-rc1/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-documentation/src/main/java/org/apache/nifi/documentation/init/ControllerServiceInitializer.java#L52>,
>> there are references to the MockProcessorLogger and
>> MockConfigurationContext
>> in the org.apache.nifi.documentation.init.ControllerServiceInitializer,
>> ProcessorInitializer, and ReportingTaskingInitializer classes.
>> 
>> What are we missing?  Why are there Mock framework classes used in regular
>> classes?
>> 
>> 
>> 
>> 



Re: Update on 0.7.0 RC2

2016-07-05 Thread Oleg Zhurakousky
Found a bug for NIFI-2032, will be issuing PR shortly.

Oleg

> On Jul 2, 2016, at 1:02 PM, Joe Percivall  
> wrote:
> 
> Hello Team,
> 
> There have been two new critical bugs identified by Brandon DeVries and added 
> to the 0.7.0 release. The status of the remaining 0.7.0 issues are below.
> 
> - "Enabled ControllerServices disabled on restart" NIFI-2160[1] [status] A 
> regression due to NIFI-2032 that breaks backwards compatibility. Discussion 
> has started between Oleg Zhurakousky and Brandon DeVries.
> 
> - "Fingerprint not detecting flow.xml differences" NIFI-2159[2] [status] 
> Reported by Brandon. No one has started work on it. NEEDS AN ASSIGNEE
> 
> - "Secure clustering returning bad request response" NIFI-2119[3] [status] No 
> updates, Andy LoPresto is looking into it.
> 
> Again for clarity, NIFI-1974 and NIFI-2089 are listed as 0.7.0 and unresolved 
> but have already been merged into 0.7.0. Also due to the reasons highlighted 
> in my previous message I will not be tracking NIFI-2120 and it is punted.
> 
> [1] https://issues.apache.org/jira/browse/NIFI-2160
> [2] https://issues.apache.org/jira/browse/NIFI-2159
> [3] https://issues.apache.org/jira/browse/NIFI-2119
> 
> Hope everyone has a fun and safe fourth,
> Joe
> - - - - - - 
> Joseph Percivall
> linkedin.com/in/Percivall
> e: joeperciv...@yahoo.com
> 
> 
> 
> 
> On Wednesday, June 29, 2016 9:12 AM, Joe Percivall 
>  wrote:
> Hello Team,
> 
> As most know, the closing of RC1 was due to three issues discovered by Joe 
> Witt. A status update for each is below. 
> 
> 
> - "Secure clustering returning bad request response" NIFI-2119[1] [status] A 
> probable source of the problem was discovered by Matt Gilman: NIFI-1753, a 
> security upgrade. Andy LoPresto is looking into a fix.
> 
> - "source release produces an extraneous file" NIFI-2120[2] [status] I looked 
> into how it would be done and I couldn't find any options in the maven 
> release plugin that supports excluding specific files from the release. I will 
> be punting this, unless someone with more experience with the maven 
> release plugin can fix it.
> 
> - "Correct license/notice issues" NIFI-2118[3] [status] I addressed the 
> issues identified by Joe Witt and the fix was merged by Matt Burgess.
> 
> Again I apologize for not finding these issues earlier and will keep you all 
> up to date on any changes. Also since RC1 a commit addressing NIFI-1920[4] 
> has been merged into the 0.x branch. I will be including it in RC2. It is a 
> simple fix that addresses an inadvertent rollback in UnpackContent.
> 
> [1] https://issues.apache.org/jira/browse/NIFI-2119
> [2] https://issues.apache.org/jira/browse/NIFI-2120
> [3] https://issues.apache.org/jira/browse/NIFI-2118
> [4] https://issues.apache.org/jira/browse/NIFI-920
> 
> Thank you,
> Joe
> - - - - - - 
> Joseph Percivall
> linkedin.com/in/Percivall
> e: joeperciv...@yahoo.com
> 



Re: Update on 0.7.0 RC2

2016-07-05 Thread Oleg Zhurakousky
For a second there I thought I did but never mind that, I've just commented on 
https://issues.apache.org/jira/browse/NIFI-2160, I can’t reproduce the issue no 
matter what I do. Will keep on digging, but what worried me is that Brandon’s 
original explanation seems to point to some jiggery-trickery with the CS lifecycle 
that doesn’t seem to be reliant on the public API. So I’ve asked him for more 
details. . .
Oleg

On Jul 5, 2016, at 9:27 AM, Oleg Zhurakousky  wrote:

Found a bug for NIFI-2032, will be issuing PR shortly.

Oleg

On Jul 2, 2016, at 1:02 PM, Joe Percivall  wrote:

Hello Team,

There have been two new critical bugs identified by Brandon DeVries and added 
to the 0.7.0 release. The status of the remaining 0.7.0 issues are below.

- "Enabled ControllerServices disabled on restart" NIFI-2160[1] [status] A 
regression due to NIFI-2032 that breaks backwards compatibility. Discussion has 
started between Oleg Zhurakousky and Brandon DeVries.

- "Fingerprint not detecting flow.xml differences" NIFI-2159[2] [status] 
Reported by Brandon. No one has started work on it. NEEDS AN ASSIGNEE

- "Secure clustering returning bad request response" NIFI-2119[3] [status] No 
updates, Andy LoPresto is looking into it.

Again for clarity, NIFI-1974 and NIFI-2089 are listed as 0.7.0 and unresolved 
but have already been merged into 0.7.0. Also due to the reasons highlighted in 
my previous message I will not be tracking NIFI-2120 and it is punted.

[1] https://issues.apache.org/jira/browse/NIFI-2160
[2] https://issues.apache.org/jira/browse/NIFI-2159
[3] https://issues.apache.org/jira/browse/NIFI-2119

Hope everyone has a fun and safe fourth,
Joe
- - - - - -
Joseph Percivall
linkedin.com/in/Percivall
e: joeperciv...@yahoo.com




On Wednesday, June 29, 2016 9:12 AM, Joe Percivall 
 wrote:
Hello Team,

As most know, the closing of RC1 was due to three issues discovered by Joe 
Witt. A status update for each is below.


- "Secure clustering returning bad request response" NIFI-2119[1] [status] A 
probable source of the problem was discovered by Matt Gilman: NIFI-1753, a 
security upgrade. Andy LoPresto is looking into a fix.

- "source release produces an extraneous file" NIFI-2120[2] [status] I looked 
into how it would be done and I couldn't find any options in the maven release 
plugin that supports excluding specific files from the release. I will be 
punting this, unless someone with more experience with the maven release 
plugin can fix it.

- "Correct license/notice issues" NIFI-2118[3] [status] I addressed the issues 
identified by Joe Witt and the fix was merged by Matt Burgess.

Again I apologize for not finding these issues earlier and will keep you all up 
to date on any changes. Also since RC1 a commit addressing NIFI-1920[4] has 
been merged into the 0.x branch. I will be including it in RC2. It is a simple 
fix that addresses an inadvertent rollback in UnpackContent.

[1] https://issues.apache.org/jira/browse/NIFI-2119
[2] https://issues.apache.org/jira/browse/NIFI-2120
[3] https://issues.apache.org/jira/browse/NIFI-2118
[4] https://issues.apache.org/jira/browse/NIFI-920

Thank you,
Joe
- - - - - -
Joseph Percivall
linkedin.com/in/Percivall
e: joeperciv...@yahoo.com






Re: Update on 0.7.0 RC2

2016-07-07 Thread Oleg Zhurakousky
As it appears, the spotlight is on me ;)
I’ll merge 2160 once I get to a stopping point with what I am doing now 
(NIFI-826 . . .really close) and that is when I’ll take care of Kafka as well 
(had a nice chat with Mark so there is a plan). 

Cheers
Oleg

> On Jul 7, 2016, at 2:33 PM, Joe Percivall  
> wrote:
> 
> Hello Team,
> 
> As the theme with 0.7.0 has been, it's one step forward and one step back. 
> There has been some great work and most of the issues previously seen were 
> addressed but another blocker has been found.
> 
> - "Enabled ControllerServices disabled on restart" NIFI-2160[1] [status] PR 
> have been posted by Oleg Zhurakousky and Brandon DeVries gave a +1 on the 
> ticket but ambiguous about what is the next step. I commented asking for a 
> follow-up.
> 
> - "PutKafka results in OOME if sending very large delimited file" 
> NIFI-2192[2] [status] New issue discovered by Mark Payne that causes OOME. 
> Oleg Zhurakousky is working to address it.
> 
> [1] https://issues.apache.org/jira/browse/NIFI-2160
> [2] https://issues.apache.org/jira/browse/NIFI-2192
> 
> Joe
> - - - - - - 
> Joseph Percivall
> linkedin.com/in/Percivall
> e: joeperciv...@yahoo.com
> 
> 
> 
> 
> On Saturday, July 2, 2016 1:02 PM, Joe Percivall  
> wrote:
> Hello Team,
> 
> There have been two new critical bugs identified by Brandon DeVries and added 
> to the 0.7.0 release. The status of the remaining 0.7.0 issues are below.
> 
> - "Enabled ControllerServices disabled on restart" NIFI-2160[1] [status] A 
> regression due to NIFI-2032 that breaks backwards compatibility. Discussion 
> has started between Oleg Zhurakousky and Brandon DeVries.
> 
> - "Fingerprint not detecting flow.xml differences" NIFI-2159[2] [status] 
> Reported by Brandon. No one has started work on it. NEEDS AN ASSIGNEE
> 
> - "Secure clustering returning bad request response" NIFI-2119[3] [status] No 
> updates, Andy LoPresto is looking into it.
> 
> Again for clarity, NIFI-1974 and NIFI-2089 are listed as 0.7.0 and unresolved 
> but have already been merged into 0.7.0. Also due to the reasons highlighted 
> in my previous message I will not be tracking NIFI-2120 and it is punted.
> 
> [1] https://issues.apache.org/jira/browse/NIFI-2160
> [2] https://issues.apache.org/jira/browse/NIFI-2159
> [3] https://issues.apache.org/jira/browse/NIFI-2119
> 
> Hope everyone has a fun and safe fourth,
> Joe
> - - - - - - 
> Joseph Percivall
> linkedin.com/in/Percivall
> e: joeperciv...@yahoo.com
> 
> 
> 
> 
> 
> On Wednesday, June 29, 2016 9:12 AM, Joe Percivall 
>  wrote:
> Hello Team,
> 
> As most know, the closing of RC1 was due to three issues discovered by Joe 
> Witt. A status update for each is below. 
> 
> 
> - "Secure clustering returning bad request response" NIFI-2119[1] [status] A 
> probable source of the problem was discovered by Matt Gilman: NIFI-1753, a 
> security upgrade. Andy LoPresto is looking into a fix.
> 
> - "source release produces an extraneous file" NIFI-2120[2] [status] I looked 
> into how it would be done and I couldn't find any options in the maven 
> release plugin that supports excluding specific files from the release. I will 
> be punting this, unless someone with more experience with the maven 
> release plugin can fix it.
> 
> - "Correct license/notice issues" NIFI-2118[3] [status] I addressed the 
> issues identified by Joe Witt and the fix was merged by Matt Burgess.
> 
> Again I apologize for not finding these issues earlier and will keep you all 
> up to date on any changes. Also since RC1 a commit addressing NIFI-1920[4] 
> has been merged into the 0.x branch. I will be including it in RC2. It is a 
> simple fix that addresses an inadvertent rollback in UnpackContent.
> 
> [1] https://issues.apache.org/jira/browse/NIFI-2119
> [2] https://issues.apache.org/jira/browse/NIFI-2120
> [3] https://issues.apache.org/jira/browse/NIFI-2118
> [4] https://issues.apache.org/jira/browse/NIFI-920
> 
> Thank you,
> Joe
> - - - - - - 
> Joseph Percivall
> linkedin.com/in/Percivall
> e: joeperciv...@yahoo.com
> 



Re: [ANNOUNCE] New Apache NiFi PMC Member - Andy LoPresto

2016-07-07 Thread Oleg Zhurakousky
Way to go Andy! Well deserved!

Cheers
Oleg
> On Jul 7, 2016, at 5:09 PM, Matt Burgess  wrote:
> 
> Congratulations Andy! Well deserved!
> 
> 
>> On Jul 7, 2016, at 12:44 PM, Joe Witt  wrote:
>> 
>> On behalf of the Apache NiFI PMC, I am very pleased to announce that
>> Andy LoPresto has accepted the PMC's invitation to join the Apache
>> NiFi PMC. We greatly appreciate all of Andy's hard work and generous
>> contributions to the project with a specific focus on security related
>> elements. We look forward to his continued involvement in the project.
>> 
>> Welcome and congratulations!
> 


