[jira] [Commented] (NIFI-4465) ConvertExcelToCSV Data Formatting and Delimiters

2017-10-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16192466#comment-16192466
 ] 

ASF GitHub Bot commented on NIFI-4465:
--

Github user patricker commented on the issue:

https://github.com/apache/nifi/pull/2194
  
@ijokarumawak I saw that you made some changes to this processor back in 
July. I ended up gutting parts of it and using other parts of the Apache POI 
library so I could read formatted data. The unit test you built for empty cells 
is passing :) If you want to review it... I wouldn't be unhappy
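
For reference, reading a cell's formatted value with Apache POI is typically 
done through `DataFormatter`; a minimal sketch of that approach (the helper 
class is hypothetical, not the PR's actual code):

```java
import org.apache.poi.ss.usermodel.Cell;
import org.apache.poi.ss.usermodel.DataFormatter;
import org.apache.poi.ss.usermodel.FormulaEvaluator;

// Hypothetical helper: returns a cell's value as Excel displays it,
// applying the cell's format string rather than the raw stored value.
class FormattedCellReader {
    private final DataFormatter formatter = new DataFormatter();

    String displayedValue(final Cell cell, final FormulaEvaluator evaluator) {
        // formatCellValue applies the cell's data format; the evaluator
        // first resolves formula cells to their computed results.
        return formatter.formatCellValue(cell, evaluator);
    }
}
```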


> ConvertExcelToCSV Data Formatting and Delimiters
> 
>
> Key: NIFI-4465
> URL: https://issues.apache.org/jira/browse/NIFI-4465
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Peter Wicks
>Assignee: Peter Wicks
>Priority: Minor
> Fix For: 1.5.0
>
>
> The ConvertExcelToCSV Processor does not output cell values using the 
> formatting set in Excel.
> There are also no delimiter options available for column/record delimiting.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi issue #2194: NIFI-4465 ConvertExcelToCSV Data Formatting and Delimiters

2017-10-04 Thread patricker
Github user patricker commented on the issue:

https://github.com/apache/nifi/pull/2194
  
@ijokarumawak I saw that you made some changes to this processor back in 
July. I ended up gutting parts of it and using other parts of the Apache POI 
library so I could read formatted data. The unit test you built for empty cells 
is passing :) If you want to review it... I wouldn't be unhappy


---


[GitHub] nifi pull request #2194: NIFI-4465 ConvertExcelToCSV Data Formatting and Del...

2017-10-04 Thread patricker
GitHub user patricker opened a pull request:

https://github.com/apache/nifi/pull/2194

NIFI-4465 ConvertExcelToCSV Data Formatting and Delimiters



### For all changes:
- [x] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [x] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [x] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [x] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via 
`mvn -Pcontrib-check clean install` at the root nifi folder?
- [x] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [x] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/patricker/nifi NIFI-4465

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2194.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2194


commit 2d1dc6a2276af214401d7e5164f6423ea184eb35
Author: patricker 
Date:   2017-10-05T05:01:47Z

NIFI-4465




---


[jira] [Commented] (NIFI-4465) ConvertExcelToCSV Data Formatting and Delimiters

2017-10-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16192465#comment-16192465
 ] 

ASF GitHub Bot commented on NIFI-4465:
--

GitHub user patricker opened a pull request:

https://github.com/apache/nifi/pull/2194

NIFI-4465 ConvertExcelToCSV Data Formatting and Delimiters



### For all changes:
- [x] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [x] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [x] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [x] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via 
`mvn -Pcontrib-check clean install` at the root nifi folder?
- [x] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [x] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/patricker/nifi NIFI-4465

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2194.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2194


commit 2d1dc6a2276af214401d7e5164f6423ea184eb35
Author: patricker 
Date:   2017-10-05T05:01:47Z

NIFI-4465




> ConvertExcelToCSV Data Formatting and Delimiters
> 
>
> Key: NIFI-4465
> URL: https://issues.apache.org/jira/browse/NIFI-4465
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Peter Wicks
>Assignee: Peter Wicks
>Priority: Minor
> Fix For: 1.5.0
>
>
> The ConvertExcelToCSV Processor does not output cell values using the 
> formatting set in Excel.
> There are also no delimiter options available for column/record delimiting.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (NIFI-4465) ConvertExcelToCSV Data Formatting and Delimiters

2017-10-04 Thread Peter Wicks (JIRA)
Peter Wicks created NIFI-4465:
-

 Summary: ConvertExcelToCSV Data Formatting and Delimiters
 Key: NIFI-4465
 URL: https://issues.apache.org/jira/browse/NIFI-4465
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Core Framework
Reporter: Peter Wicks
Assignee: Peter Wicks
Priority: Minor
 Fix For: 1.5.0


The ConvertExcelToCSV Processor does not output cell values using the 
formatting set in Excel.

There are also no delimiter options available for column/record delimiting.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (MINIFICPP-113) Move from LevelDB to Rocks DB for all repositories.

2017-10-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/MINIFICPP-113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16192367#comment-16192367
 ] 

ASF GitHub Bot commented on MINIFICPP-113:
--

Github user phrocker commented on the issue:

https://github.com/apache/nifi-minifi-cpp/pull/137
  
@minifirocks I removed tests from being built, but did not remove the 
source code. I think that's a good idea. I'll also remove the docs since they 
aren't really necessary. Thanks!


> Move from LevelDB to Rocks DB for all repositories. 
> 
>
> Key: MINIFICPP-113
> URL: https://issues.apache.org/jira/browse/MINIFICPP-113
> Project: NiFi MiNiFi C++
>  Issue Type: Bug
>Reporter: marco polo
>Assignee: marco polo
>Priority: Minor
>
> Can also be used as a file system repo where we want to minimize the number 
> of inodes.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi-minifi-cpp issue #137: MINIFI-372: Replace leveldb with RocksDB

2017-10-04 Thread phrocker
Github user phrocker commented on the issue:

https://github.com/apache/nifi-minifi-cpp/pull/137
  
@minifirocks I removed tests from being built, but did not remove the 
source code. I think that's a good idea. I'll also remove the docs since they 
aren't really necessary. Thanks!


---


[jira] [Commented] (NIFI-4297) Immediately actionable dependency upgrades

2017-10-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16192343#comment-16192343
 ] 

ASF GitHub Bot commented on NIFI-4297:
--

Github user alopresto commented on the issue:

https://github.com/apache/nifi/pull/2084
  
Thanks @mcgilman and @joewitt. I rebased against `1.5.0-SNAPSHOT master`, 
added the explicit dependency on `jackson-core` in 
`nifi-solr-processors/pom.xml`, removed `jackson-databind` from the 
`<dependencyManagement>` block in `nifi/pom.xml`, and explicitly referenced the 
`${jackson.version}` variable throughout the child poms. The full build 
works, and I verified that not only does it compile, the project also 
correctly performs JSON parsing in a variety of ways (I used [this 
template](https://gist.github.com/alopresto/63c087854c5300f7c0763ce118c1eef6) 
to verify). 

![Web Server and JSON Parsing 
Template](https://user-images.githubusercontent.com/798465/31206880-7cf37a8a-a92f-11e7-928b-5a43c0900bb1.png)

I still need to evaluate if 
`org.codehaus.jackson:jackson-mapper-asl:1.9.13` is used anywhere, as 
discussion between @mcgilman and myself [revealed it is replaced by 
`com.fasterxml.jackson.core:jackson-core:2.9.1`](https://mvnrepository.com/artifact/org.codehaus.jackson/jackson-mapper-asl/1.9.13).
 


> Immediately actionable dependency upgrades
> --
>
> Key: NIFI-4297
> URL: https://issues.apache.org/jira/browse/NIFI-4297
> Project: Apache NiFi
>  Issue Type: Sub-task
>  Components: Extensions
>Affects Versions: 1.3.0
>Reporter: Andy LoPresto
>Assignee: Andy LoPresto
>  Labels: dependencies, security
>
> The immediately actionable items are:
> * {{org.apache.logging.log4j:log4j-core}} in {{nifi-storm-spout}} 2.1 -> 2.8.2
> * {{org.apache.poi:poi}} in {{nifi-email-processors}} 3.14 -> 3.15
> * {{org.apache.logging.log4j:log4j-core}} in 
> {{nifi-elasticsearch-5-processors}} 2.7 -> 2.8.2
> * {{org.springframework:spring-web}} in {{nifi-jetty}} 4.2.4.RELEASE -> 
> 4.3.10.RELEASE
> * {{org.apache.derby:derby}} in {{nifi-kite-processors}} 10.11.1.1 -> 
> 10.12.1.1 (already excluded)
> * {{com.fasterxml.jackson.core:jackson-core}} in {{nifi-azure-processors}} 
> 2.6.0 -> 2.8.6
> * {{com.fasterxml.jackson.core:jackson-core}} in {{nifi-expression-language}} 
> 2.6.1 -> 2.8.6
> * {{com.fasterxml.jackson.core:jackson-core}} in {{nifi-standard-utils}} 
> 2.6.2 -> 2.8.6
> * {{com.fasterxml.jackson.core:jackson-core}} in {{nifi-hwx-schema-registry}} 
> 2.7.3 -> 2.8.6
> * {{com.fasterxml.jackson.core:jackson-core}} in {{nifi-solr-processors}} 
> 2.5.4 -> 2.8.6



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi issue #2084: NIFI-4297 Updated dependency versions

2017-10-04 Thread alopresto
Github user alopresto commented on the issue:

https://github.com/apache/nifi/pull/2084
  
Thanks @mcgilman and @joewitt. I rebased against `1.5.0-SNAPSHOT master`, 
added the explicit dependency on `jackson-core` in 
`nifi-solr-processors/pom.xml`, removed `jackson-databind` from the 
`<dependencyManagement>` block in `nifi/pom.xml`, and explicitly referenced the 
`${jackson.version}` variable throughout the child poms. The full build 
works, and I verified that not only does it compile, the project also 
correctly performs JSON parsing in a variety of ways (I used [this 
template](https://gist.github.com/alopresto/63c087854c5300f7c0763ce118c1eef6) 
to verify). 

![Web Server and JSON Parsing 
Template](https://user-images.githubusercontent.com/798465/31206880-7cf37a8a-a92f-11e7-928b-5a43c0900bb1.png)

I still need to evaluate if 
`org.codehaus.jackson:jackson-mapper-asl:1.9.13` is used anywhere, as 
discussion between @mcgilman and myself [revealed it is replaced by 
`com.fasterxml.jackson.core:jackson-core:2.9.1`](https://mvnrepository.com/artifact/org.codehaus.jackson/jackson-mapper-asl/1.9.13).
 


---


[jira] [Commented] (NIFI-4445) Unit Test Failure for Scripted Reporting Task

2017-10-04 Thread Matt Burgess (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16192123#comment-16192123
 ] 

Matt Burgess commented on NIFI-4445:


Also, I just ran the test successfully with all of the same settings as above, 
but on Fedora 26 (Linux version 4.11.8-300).

> Unit Test Failure for Scripted Reporting Task
> -
>
> Key: NIFI-4445
> URL: https://issues.apache.org/jira/browse/NIFI-4445
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.4.0
> Environment: Apache Maven 3.5.0 
> (ff8f5e7444045639af65f6095c62210b5713f426; 2017-04-03T15:39:06-04:00)
> Maven home: /data1/apache-maven-3.5.0
> Java version: 1.8.0_144, vendor: Oracle Corporation
> Java home: /usr/java/jdk1.8.0_144/jre
> Default locale: en_US, platform encoding: UTF-8
> OS name: "linux", version: "4.12.14-300.fc26.x86_64", arch: "amd64", family: 
> "unix"
>Reporter: Joseph Witt
>
> ---
>  T E S T S
> ---
> Running org.apache.nifi.reporting.script.ScriptedReportingTaskGroovyTest
> Tests run: 2, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.641 sec <<< 
> FAILURE! - in org.apache.nifi.reporting.script.ScriptedReportingTaskGroovyTest
> testVMEventsGroovyScript(org.apache.nifi.reporting.script.ScriptedReportingTaskGroovyTest)
>   Time elapsed: 0.024 sec  <<< FAILURE!
> java.lang.AssertionError: null
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertTrue(Assert.java:52)
>   at org.junit.Assert$assertTrue$0.callStatic(Unknown Source)
>   at 
> org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCallStatic(CallSiteArray.java:56)
>   at 
> org.codehaus.groovy.runtime.callsite.AbstractCallSite.callStatic(AbstractCallSite.java:194)
>   at 
> org.codehaus.groovy.runtime.callsite.AbstractCallSite.callStatic(AbstractCallSite.java:206)
>   at 
> org.apache.nifi.reporting.script.ScriptedReportingTaskGroovyTest.testVMEventsGroovyScript(ScriptedReportingTaskGroovyTest.groovy:195)
> Appears to be related to the JDK/JRE.  The same environment in terms of 
> kernel/maven/etc... just slightly older JDK/JRE works perfectly fine.
> Apache Maven 3.5.0 (ff8f5e7444045639af65f6095c62210b5713f426; 
> 2017-04-03T15:39:06-04:00)
> Maven home: /opt/apache-maven-3.5.0
> Java version: 1.8.0_141, vendor: Oracle Corporation
> Java home: /usr/java/jdk1.8.0_141/jre
> Default locale: en_US, platform encoding: UTF-8
> OS name: "linux", version: "4.12.14-300.fc26.x86_64", arch: "amd64", family: 
> "unix"



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Moved] (MINIFICPP-244) Look at using Allocators for STL containers

2017-10-04 Thread Aldrin Piri (JIRA)

 [ 
https://issues.apache.org/jira/browse/MINIFICPP-244?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aldrin Piri moved MINIFI-232 to MINIFICPP-244:
--

Key: MINIFICPP-244  (was: MINIFI-232)
Project: NiFi MiNiFi C++  (was: Apache NiFi MiNiFi)

> Look at using Allocators for STL containers
> ---
>
> Key: MINIFICPP-244
> URL: https://issues.apache.org/jira/browse/MINIFICPP-244
> Project: NiFi MiNiFi C++
>  Issue Type: Bug
>Reporter: marco polo
>Priority: Minor
>
> We may be able to improve memory usage if we create allocators that are aware 
> of allocations amongst certain objects. This will allow us to create higher 
> priority allocations and free memory for low priority objects in certain 
> cases to prevent system failure. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Moved] (MINIFICPP-243) s_addr not being populated correctly for remote addresses.

2017-10-04 Thread Aldrin Piri (JIRA)

 [ 
https://issues.apache.org/jira/browse/MINIFICPP-243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aldrin Piri moved MINIFI-235 to MINIFICPP-243:
--

Key: MINIFICPP-243  (was: MINIFI-235)
Project: NiFi MiNiFi C++  (was: Apache NiFi MiNiFi)

> s_addr not being populated correctly for remote addresses. 
> ---
>
> Key: MINIFICPP-243
> URL: https://issues.apache.org/jira/browse/MINIFICPP-243
> Project: NiFi MiNiFi C++
>  Issue Type: Bug
>Reporter: marco polo
>Assignee: marco polo
>Priority: Blocker
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Moved] (MINIFICPP-241) Tailfile doesn't keep track of flow files added

2017-10-04 Thread Aldrin Piri (JIRA)

 [ 
https://issues.apache.org/jira/browse/MINIFICPP-241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aldrin Piri moved MINIFI-385 to MINIFICPP-241:
--

Key: MINIFICPP-241  (was: MINIFI-385)
Project: NiFi MiNiFi C++  (was: Apache NiFi MiNiFi)

> Tailfile doesn't keep track of flow files added
> ---
>
> Key: MINIFICPP-241
> URL: https://issues.apache.org/jira/browse/MINIFICPP-241
> Project: NiFi MiNiFi C++
>  Issue Type: Bug
>Reporter: marco polo
>Assignee: marco polo
>
> Tailfile doesn't keep track of flow files added because the vector is passed 
> to process session by value and not by reference. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Moved] (MINIFICPP-242) Consider making FileSystemRepository bundle writes into blocks

2017-10-04 Thread Aldrin Piri (JIRA)

 [ 
https://issues.apache.org/jira/browse/MINIFICPP-242?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aldrin Piri moved MINIFI-361 to MINIFICPP-242:
--

Key: MINIFICPP-242  (was: MINIFI-361)
Project: NiFi MiNiFi C++  (was: Apache NiFi MiNiFi)

> Consider making FileSystemRepository bundle writes into blocks
> --
>
> Key: MINIFICPP-242
> URL: https://issues.apache.org/jira/browse/MINIFICPP-242
> Project: NiFi MiNiFi C++
>  Issue Type: Improvement
>Reporter: marco polo
>
> Writes to the FileSystemRepository can overwhelm the number of inode entries. 
> We should consider combining these writes into a slab, minimizing 
> fragmentation as much as possible. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-4450) Upgrade Kite SDK to latest release

2017-10-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16191981#comment-16191981
 ] 

ASF GitHub Bot commented on NIFI-4450:
--

Github user bbende commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2187#discussion_r142782296
  
--- Diff: nifi-nar-bundles/nifi-kite-bundle/nifi-kite-processors/pom.xml ---
@@ -25,8 +25,9 @@
     <packaging>jar</packaging>

     <properties>
-        <kite.version>1.0.0</kite.version>
+        <kite.version>1.1.0</kite.version>
         <findbugs-annotations.version>1.3.9-1</findbugs-annotations.version>
+        <jackson.version>2.3.1</jackson.version>
--- End diff --

Ok you were right, here is what is happening...

Kite SDK 1.0.0 has a dependency only on jackson-databind 2.3.1, which 
transitively depends on jackson-core 2.3.1 and jackson-annotations 2.3.1, but 
NiFi is overriding all uses of jackson-databind to 2.6.1 so previously you got 
all three of those jars at 2.6.1.

Kite SDK 1.1.0 directly declares all three jars, so then NiFi overrides 
jackson-databind to 2.6.1, but jackson-annotations and jackson-core are still 
at 2.3.1 because that is what Kite declared.

There is another effort going on to clean up some of the dependencies 
(https://issues.apache.org/jira/browse/NIFI-4297) and I believe part of that is 
going to remove the part of NiFi that is forcing everything to databind 2.6.1.

I'd prefer to wait for that to happen first, then we can simplify things 
here and remove all of the jackson stuff from this PR and everything should 
work. Let me know if there is any issue with that. 


> Upgrade Kite SDK to latest release
> --
>
> Key: NIFI-4450
> URL: https://issues.apache.org/jira/browse/NIFI-4450
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.3.0
>Reporter: Giovanni Lanzani
>Assignee: Giovanni Lanzani
>Priority: Trivial
>
> Kite 1.1.0 was released more than 2 years ago. It might be a good idea to 
> update it inside NiFi.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi pull request #2187: NIFI-4450 Update Kite SDK version

2017-10-04 Thread bbende
Github user bbende commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2187#discussion_r142782296
  
--- Diff: nifi-nar-bundles/nifi-kite-bundle/nifi-kite-processors/pom.xml ---
@@ -25,8 +25,9 @@
     <packaging>jar</packaging>

     <properties>
-        <kite.version>1.0.0</kite.version>
+        <kite.version>1.1.0</kite.version>
         <findbugs-annotations.version>1.3.9-1</findbugs-annotations.version>
+        <jackson.version>2.3.1</jackson.version>
--- End diff --

Ok you were right, here is what is happening...

Kite SDK 1.0.0 has a dependency only on jackson-databind 2.3.1, which 
transitively depends on jackson-core 2.3.1 and jackson-annotations 2.3.1, but 
NiFi is overriding all uses of jackson-databind to 2.6.1 so previously you got 
all three of those jars at 2.6.1.

Kite SDK 1.1.0 directly declares all three jars, so then NiFi overrides 
jackson-databind to 2.6.1, but jackson-annotations and jackson-core are still 
at 2.3.1 because that is what Kite declared.

There is another effort going on to clean up some of the dependencies 
(https://issues.apache.org/jira/browse/NIFI-4297) and I believe part of that is 
going to remove the part of NiFi that is forcing everything to databind 2.6.1.

I'd prefer to wait for that to happen first, then we can simplify things 
here and remove all of the jackson stuff from this PR and everything should 
work. Let me know if there is any issue with that. 


---


[jira] [Assigned] (NIFI-4462) Document the Variable Registry UI

2017-10-04 Thread Andrew Lim (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Lim reassigned NIFI-4462:


Assignee: Andrew Lim

> Document the Variable Registry UI
> -
>
> Key: NIFI-4462
> URL: https://issues.apache.org/jira/browse/NIFI-4462
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Documentation & Website
>Reporter: Matt Gilman
>Assignee: Andrew Lim
>
> We need to update the user/admin guide to describe the capabilities of the 
> variable registry UI. The guides currently call out the existing capabilities 
> using the file-based approach but this needs to be updated to incorporate the 
> new UI.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-4008) ConsumeKafkaRecord_0_10 assumes there is always one Record in a message

2017-10-04 Thread Mark Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16191963#comment-16191963
 ] 

Mark Payne commented on NIFI-4008:
--

[~gardellajuanpablo] - that's a great catch! Thanks for checking. Have pushed a 
new commit that addresses this.

> ConsumeKafkaRecord_0_10 assumes there is always one Record in a message
> ---
>
> Key: NIFI-4008
> URL: https://issues.apache.org/jira/browse/NIFI-4008
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.2.0
>Reporter: Koji Kawamura
>Assignee: Koji Kawamura
>
> ConsumeKafkaRecord_0_10 uses ConsumerLease underneath, and it [assumes there 
> is one Record available in a consumed 
> message|https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-kafka-bundle/nifi-kafka-0-10-processors/src/main/java/org/apache/nifi/processors/kafka/pubsub/ConsumerLease.java#L434]
>  retrieved from a Kafka topic.
> But in fact, a message can contain 0 or more records in it. For example, with 
> a record schema shown below:
> {code}
> {
>   "type": "record",
>   "name": "temp",
>   "fields" : [
> {"name": "value", "type": "string"}
>   ]
> }
> {code}
> Multiple records can be sent within a single message, e.g. using JSON:
> {code}
> [{"value": "a"}, {"value": "b"}, {"value": "c"}]
> {code}
> But ConsumeKafkaRecord only outputs the first record:
> {code}
> [{"value": "a"}]
> {code}
> Also, if a message doesn't contain any record in it, the processor fails with 
> NullPointerException.
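
A minimal sketch of the fix described above, assuming NiFi's 
`RecordReader`/`RecordSetWriter` API (variable names are hypothetical): rather 
than reading a single record per message, loop until the reader is exhausted:
{code}
// Sketch: consume every record in the message, not just the first.
// 'reader' is an org.apache.nifi.serialization.RecordReader over the
// message bytes; 'writer' is a RecordSetWriter for the outgoing FlowFile.
Record record;
while ((record = reader.nextRecord()) != null) {
    writer.write(record);
}
// A message containing zero records simply writes nothing, which also
// avoids the NullPointerException mentioned above.
{code}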



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-4372) Wait processor - recommend prioritizer in documentation

2017-10-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16191955#comment-16191955
 ] 

ASF GitHub Bot commented on NIFI-4372:
--

Github user markap14 commented on the issue:

https://github.com/apache/nifi/pull/2139
  
Thanks for pointing this out, @pvillard31. Confirmed that I was able to 
replicate the issue and that it is resolved once I use the FIFO Prioritizer. 
Have merged this change to master.


> Wait processor - recommend prioritizer in documentation
> ---
>
> Key: NIFI-4372
> URL: https://issues.apache.org/jira/browse/NIFI-4372
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Documentation & Website, Extensions
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Trivial
>  Labels: documentation
> Fix For: 1.5.0
>
>
> When using the Wait processor, the use of a connection prioritizer (FIFO for 
> instance) on the wait relationship should be recommended in the processor 
> documentation. When having tens of thousands of flow files in the wait 
> relationship, not using a prioritizer could lead to flow files not being 
> released when the signal is notified, generating expired flow files.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-4372) Wait processor - recommend prioritizer in documentation

2017-10-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16191953#comment-16191953
 ] 

ASF subversion and git services commented on NIFI-4372:
---

Commit ea2c91f9d052016390c07bbc06fd7f8c734d373b in nifi's branch 
refs/heads/master from [~pvillard]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=ea2c91f ]

NIFI-4372 Wait processor - recommend prioritizer in documentation. This closes 
#2139.


> Wait processor - recommend prioritizer in documentation
> ---
>
> Key: NIFI-4372
> URL: https://issues.apache.org/jira/browse/NIFI-4372
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Documentation & Website, Extensions
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Trivial
>  Labels: documentation
> Fix For: 1.5.0
>
>
> When using the Wait processor, the use of a connection prioritizer (FIFO for 
> instance) on the wait relationship should be recommended in the processor 
> documentation. When having tens of thousands of flow files in the wait 
> relationship, not using a prioritizer could lead to flow files not being 
> released when the signal is notified, generating expired flow files.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (NIFI-4372) Wait processor - recommend prioritizer in documentation

2017-10-04 Thread Mark Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Payne updated NIFI-4372:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Wait processor - recommend prioritizer in documentation
> ---
>
> Key: NIFI-4372
> URL: https://issues.apache.org/jira/browse/NIFI-4372
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Documentation & Website, Extensions
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Trivial
>  Labels: documentation
>
> When using the Wait processor, the use of a connection prioritizer (FIFO for 
> instance) on the wait relationship should be recommended in the processor 
> documentation. When having tens of thousands of flow files in the wait 
> relationship, not using a prioritizer could lead to flow files not being 
> released when the signal is notified, generating expired flow files.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (NIFI-4372) Wait processor - recommend prioritizer in documentation

2017-10-04 Thread Mark Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Payne updated NIFI-4372:
-
Fix Version/s: 1.5.0

> Wait processor - recommend prioritizer in documentation
> ---
>
> Key: NIFI-4372
> URL: https://issues.apache.org/jira/browse/NIFI-4372
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Documentation & Website, Extensions
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Trivial
>  Labels: documentation
> Fix For: 1.5.0
>
>
> When using the Wait processor, the use of a connection prioritizer (FIFO for 
> instance) on the wait relationship should be recommended in the processor 
> documentation. When having tens of thousands of flow files in the wait 
> relationship, not using a prioritizer could lead to flow files not being 
> released when the signal is notified, generating expired flow files.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-4372) Wait processor - recommend prioritizer in documentation

2017-10-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16191954#comment-16191954
 ] 

ASF GitHub Bot commented on NIFI-4372:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2139


> Wait processor - recommend prioritizer in documentation
> ---
>
> Key: NIFI-4372
> URL: https://issues.apache.org/jira/browse/NIFI-4372
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Documentation & Website, Extensions
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Trivial
>  Labels: documentation
> Fix For: 1.5.0
>
>
> When using the Wait processor, the use of a connection prioritizer (FIFO for 
> instance) on the wait relationship should be recommended in the processor 
> documentation. When having tens of thousands of flow files in the wait 
> relationship, not using a prioritizer could lead to flow files not being 
> released when the signal is notified, generating expired flow files.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi issue #2139: NIFI-4372 Wait processor - recommend prioritizer in docume...

2017-10-04 Thread markap14
Github user markap14 commented on the issue:

https://github.com/apache/nifi/pull/2139
  
Thanks for pointing this out, @pvillard31. Confirmed that I was able to 
replicate the issue and that it is resolved once I use the FIFO Prioritizer. 
Have merged this change to master.


---


[GitHub] nifi pull request #2139: NIFI-4372 Wait processor - recommend prioritizer in...

2017-10-04 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2139


---


[jira] [Commented] (NIFI-4445) Unit Test Failure for Scripted Reporting Task

2017-10-04 Thread Matt Burgess (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16191951#comment-16191951
 ] 

Matt Burgess commented on NIFI-4445:


I just ran the test successfully with the following environment:

Apache Maven 3.5.0 (ff8f5e7444045639af65f6095c62210b5713f426; 
2017-04-03T19:39:06Z)
Maven home: /root/.sdkman/candidates/maven/current
Java version: 1.8.0_144, vendor: Oracle Corporation
Java home: /usr/java/jdk1.8.0_144/jre
Default locale: en_US, platform encoding: UTF-8
OS name: "linux", version: "2.6.32-504.1.3.el6.x86_64", arch: "amd64", family: 
"unix"

Looks like the only difference is the Linux version? I was using RHEL 6, will 
try with RHEL 7 or something else if necessary.  Also I was building from the 
1.4.0 release source, not master (I don't have Git on this machine).

> Unit Test Failure for Scripted Reporting Task
> -
>
> Key: NIFI-4445
> URL: https://issues.apache.org/jira/browse/NIFI-4445
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.4.0
> Environment: Apache Maven 3.5.0 
> (ff8f5e7444045639af65f6095c62210b5713f426; 2017-04-03T15:39:06-04:00)
> Maven home: /data1/apache-maven-3.5.0
> Java version: 1.8.0_144, vendor: Oracle Corporation
> Java home: /usr/java/jdk1.8.0_144/jre
> Default locale: en_US, platform encoding: UTF-8
> OS name: "linux", version: "4.12.14-300.fc26.x86_64", arch: "amd64", family: 
> "unix"
>Reporter: Joseph Witt
>
> ---
>  T E S T S
> ---
> Running org.apache.nifi.reporting.script.ScriptedReportingTaskGroovyTest
> Tests run: 2, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.641 sec <<< 
> FAILURE! - in org.apache.nifi.reporting.script.ScriptedReportingTaskGroovyTest
> testVMEventsGroovyScript(org.apache.nifi.reporting.script.ScriptedReportingTaskGroovyTest)
>   Time elapsed: 0.024 sec  <<< FAILURE!
> java.lang.AssertionError: null
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertTrue(Assert.java:52)
>   at org.junit.Assert$assertTrue$0.callStatic(Unknown Source)
>   at 
> org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCallStatic(CallSiteArray.java:56)
>   at 
> org.codehaus.groovy.runtime.callsite.AbstractCallSite.callStatic(AbstractCallSite.java:194)
>   at 
> org.codehaus.groovy.runtime.callsite.AbstractCallSite.callStatic(AbstractCallSite.java:206)
>   at 
> org.apache.nifi.reporting.script.ScriptedReportingTaskGroovyTest.testVMEventsGroovyScript(ScriptedReportingTaskGroovyTest.groovy:195)
> Appears to be related to the JDK/JRE.  The same environment in terms of 
> kernel/maven/etc... just slightly older JDK/JRE works perfectly fine.
> Apache Maven 3.5.0 (ff8f5e7444045639af65f6095c62210b5713f426; 
> 2017-04-03T15:39:06-04:00)
> Maven home: /opt/apache-maven-3.5.0
> Java version: 1.8.0_141, vendor: Oracle Corporation
> Java home: /usr/java/jdk1.8.0_141/jre
> Default locale: en_US, platform encoding: UTF-8
> OS name: "linux", version: "4.12.14-300.fc26.x86_64", arch: "amd64", family: 
> "unix"



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-4372) Wait processor - recommend prioritizer in documentation

2017-10-04 Thread Mark Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16191947#comment-16191947
 ] 

Mark Payne commented on NIFI-4372:
--

[~pvillard] after looking into this more, I appear to have misunderstood how 
the Wait processor was working. Based on the design of it, adding a 
FirstInFirstOut prioritizer is definitely necessary, regardless of the issue 
that I mentioned above.

> Wait processor - recommend prioritizer in documentation
> ---
>
> Key: NIFI-4372
> URL: https://issues.apache.org/jira/browse/NIFI-4372
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Documentation & Website, Extensions
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Trivial
>  Labels: documentation
>
> When using the Wait processor, the use of a connection prioritizer (FIFO for 
> instance) on the wait relationship should be recommended in the processor 
> documentation. When having tens of thousands of flow files in the wait 
> relationship, not using a prioritizer could lead to flow files not being 
> released when signal is notified and generate expired flow files.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (NIFI-4443) Allow HDFS' processors level of flexibility for StoreInKiteDataset

2017-10-04 Thread Bryan Bende (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bryan Bende updated NIFI-4443:
--
Fix Version/s: 1.5.0

> Allow HDFS' processors level of flexibility for StoreInKiteDataset
> --
>
> Key: NIFI-4443
> URL: https://issues.apache.org/jira/browse/NIFI-4443
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.3.0
>Reporter: Giovanni Lanzani
>Assignee: Giovanni Lanzani
>Priority: Minor
>  Labels: features, security
> Fix For: 1.5.0
>
>
> Currently, StoreInKiteDataset is missing a lot of crucial properties to write 
> to HDFS, namely
> the ability to alter the CLASSPATH and a validator that allows expression language 
> when pointing to Hadoop configuration files.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-4443) Allow HDFS' processors level of flexibility for StoreInKiteDataset

2017-10-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16191937#comment-16191937
 ] 

ASF GitHub Bot commented on NIFI-4443:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2186


> Allow HDFS' processors level of flexibility for StoreInKiteDataset
> --
>
> Key: NIFI-4443
> URL: https://issues.apache.org/jira/browse/NIFI-4443
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.3.0
>Reporter: Giovanni Lanzani
>Priority: Minor
>  Labels: features, security
> Fix For: 1.5.0
>
>
> Currently, StoreInKiteDataset is missing a lot of crucial properties to write 
> to HDFS, namely
> the ability to alter the CLASSPATH and a validator that allows expression language 
> when pointing to Hadoop configuration files.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (NIFI-4443) Allow HDFS' processors level of flexibility for StoreInKiteDataset

2017-10-04 Thread Bryan Bende (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bryan Bende resolved NIFI-4443.
---
Resolution: Fixed

> Allow HDFS' processors level of flexibility for StoreInKiteDataset
> --
>
> Key: NIFI-4443
> URL: https://issues.apache.org/jira/browse/NIFI-4443
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.3.0
>Reporter: Giovanni Lanzani
>Assignee: Giovanni Lanzani
>Priority: Minor
>  Labels: features, security
> Fix For: 1.5.0
>
>
> Currently, StoreInKiteDataset is missing a lot of crucial properties to write 
> to HDFS, namely
> the ability to alter the CLASSPATH and a validator that allows expression language 
> when pointing to Hadoop configuration files.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (NIFI-4443) Allow HDFS' processors level of flexibility for StoreInKiteDataset

2017-10-04 Thread Bryan Bende (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bryan Bende reassigned NIFI-4443:
-

Assignee: Giovanni Lanzani

> Allow HDFS' processors level of flexibility for StoreInKiteDataset
> --
>
> Key: NIFI-4443
> URL: https://issues.apache.org/jira/browse/NIFI-4443
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.3.0
>Reporter: Giovanni Lanzani
>Assignee: Giovanni Lanzani
>Priority: Minor
>  Labels: features, security
> Fix For: 1.5.0
>
>
> Currently, StoreInKiteDataset is missing a lot of crucial properties to write 
> to HDFS, namely
> the ability to alter the CLASSPATH and a validator that allows expression language 
> when pointing to Hadoop configuration files.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi pull request #2186: NIFI-4443 Increase StoreInKiteDataset flexibility

2017-10-04 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2186


---


[jira] [Commented] (NIFI-4443) Allow HDFS' processors level of flexibility for StoreInKiteDataset

2017-10-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16191936#comment-16191936
 ] 

ASF subversion and git services commented on NIFI-4443:
---

Commit f1bd866005e8e06e502ebf67fa6540bf379adfcf in nifi's branch 
refs/heads/master from [~lanzani]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=f1bd866 ]

NIFI-4443 Increase StoreInKiteDataset flexibility

* The configuration property CONF_XML_FILE now supports Expression
  Language and reuses a Hadoop validator;
* The ADDITIONAL_CLASSPATH_RESOURCES property has been added, so that
  things such as writing to Azure Blob Storage should become possible.

This closes #2186.

Signed-off-by: Bryan Bende 
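
The two changes called out in the commit message map onto standard
`PropertyDescriptor.Builder` options; a sketch under assumed names (the
validator shown is a stand-in for the reused Hadoop validator):

```java
import org.apache.nifi.components.PropertyDescriptor;
import org.apache.nifi.processor.util.StandardValidators;

// Sketch only; property names and the validator are assumptions.
public static final PropertyDescriptor CONF_XML_FILE = new PropertyDescriptor.Builder()
        .name("Hadoop configuration files")
        .description("A comma-separated list of Hadoop configuration files")
        .expressionLanguageSupported(true)                     // now evaluated with Expression Language
        .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)  // stand-in for the Hadoop file validator
        .required(false)
        .build();

public static final PropertyDescriptor ADDITIONAL_CLASSPATH_RESOURCES = new PropertyDescriptor.Builder()
        .name("Additional Classpath Resources")
        .description("Files or directories to add to the classpath, e.g. Azure Blob Storage jars")
        .dynamicallyModifiesClasspath(true)                    // extra jars are picked up at runtime
        .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
        .required(false)
        .build();
```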


> Allow HDFS' processors level of flexibility for StoreInKiteDataset
> --
>
> Key: NIFI-4443
> URL: https://issues.apache.org/jira/browse/NIFI-4443
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.3.0
>Reporter: Giovanni Lanzani
>Priority: Minor
>  Labels: features, security
>
> Currently, StoreInKiteDataset is missing a lot of crucial properties to write 
> to HDFS, namely
> the ability to alter the CLASSPATH and a validator that allows expression language 
> when pointing to Hadoop configuration files.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi issue #2186: NIFI-4443 Increase StoreInKiteDataset flexibility

2017-10-04 Thread bbende
Github user bbende commented on the issue:

https://github.com/apache/nifi/pull/2186
  
Thanks, this looks good, I'll merge to master shortly


---


[jira] [Commented] (NIFI-4450) Upgrade Kite SDK to latest release

2017-10-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16191921#comment-16191921
 ] 

ASF GitHub Bot commented on NIFI-4450:
--

Github user bbende commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2187#discussion_r142771999
  
--- Diff: nifi-nar-bundles/nifi-kite-bundle/nifi-kite-processors/pom.xml ---
@@ -25,8 +25,9 @@
     <packaging>jar</packaging>

     <properties>
-        <kite.version>1.0.0</kite.version>
+        <kite.version>1.1.0</kite.version>
         <findbugs-annotations.version>1.3.9-1</findbugs-annotations.version>
+        <jackson.version>2.3.1</jackson.version>
--- End diff --

Good question :) I didn't get to building it; let me play around with it 
and get back to you.


> Upgrade Kite SDK to latest release
> --
>
> Key: NIFI-4450
> URL: https://issues.apache.org/jira/browse/NIFI-4450
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.3.0
>Reporter: Giovanni Lanzani
>Assignee: Giovanni Lanzani
>Priority: Trivial
>
> Kite 1.1.0 was released more than 2 years ago. It might be a good idea to 
> update it inside NiFi.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi pull request #2187: NIFI-4450 Update Kite SDK version

2017-10-04 Thread bbende
Github user bbende commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2187#discussion_r142771999
  
--- Diff: nifi-nar-bundles/nifi-kite-bundle/nifi-kite-processors/pom.xml ---
@@ -25,8 +25,9 @@
     <packaging>jar</packaging>

     <properties>
-        <kite.version>1.0.0</kite.version>
+        <kite.version>1.1.0</kite.version>
         <findbugs-annotations.version>1.3.9-1</findbugs-annotations.version>
+        <jackson.version>2.3.1</jackson.version>
--- End diff --

Good question :) I didn't get to building it; let me play around with it 
and get back to you.


---


[jira] [Commented] (NIFI-4346) Add a lookup service that uses HBase

2017-10-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16191920#comment-16191920
 ] 

ASF GitHub Bot commented on NIFI-4346:
--

Github user bbende commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2125#discussion_r142768509
  
--- Diff: 
nifi-nar-bundles/nifi-standard-services/nifi-hbase_1_1_2-client-service-bundle/nifi-hbase_1_1_2-client-service/src/main/java/org/apache/nifi/hbase/HBase_1_1_2_LookupService.java
 ---
@@ -0,0 +1,180 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.hbase;
+
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.Get;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.Table;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnEnabled;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.Validator;
+import org.apache.nifi.controller.ConfigurationContext;
+import org.apache.nifi.controller.ControllerServiceInitializationContext;
+import org.apache.nifi.lookup.LookupFailureException;
+import org.apache.nifi.lookup.LookupService;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.reporting.InitializationException;
+import org.apache.nifi.serialization.SimpleRecordSchema;
+import org.apache.nifi.serialization.record.MapRecord;
+import org.apache.nifi.serialization.record.Record;
+import org.apache.nifi.serialization.record.RecordField;
+import org.apache.nifi.serialization.record.RecordFieldType;
+import org.apache.nifi.serialization.record.RecordSchema;
+
+import java.io.IOException;
+import java.nio.charset.Charset;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.NavigableMap;
+import java.util.Optional;
+import java.util.Set;
+
+@Tags({"hbase", "record", "lookup", "service"})
+@CapabilityDescription(
+        "A lookup service that retrieves one or more columns from HBase based on a supplied rowKey."
+)
+public class HBase_1_1_2_LookupService extends HBase_1_1_2_ClientService implements LookupService<Record> {
+    private static final Set<String> REQUIRED_KEYS = Collections.singleton("rowKey");
+
+    public static final PropertyDescriptor TABLE_NAME = new PropertyDescriptor.Builder()
+            .name("hb-lu-table-name")
+            .displayName("Table Name")
+            .description("The name of the table where look ups will be run.")
+            .required(true)
+            .addValidator(StandardValidators.NON_BLANK_VALIDATOR)
+            .build();
+    public static final PropertyDescriptor RETURN_CFS = new PropertyDescriptor.Builder()
+            .name("hb-lu-return-cfs")
+            .displayName("Column Families")
+            .description("The column families that will be returned.")
+            .required(true)
+            .addValidator(StandardValidators.NON_BLANK_VALIDATOR)
+            .build();
+    public static final PropertyDescriptor RETURN_QFS = new PropertyDescriptor.Builder()
+            .name("hb-lu-return-qfs")
+            .displayName("Column Qualifiers")
+            .description("The column qualifiers that will be returned.")
+            .required(false)
+            .addValidator(Validator.VALID)
+            .build();
+    protected static final PropertyDescriptor CHARSET = new PropertyDescriptor.Builder()
+            .name("hb-lu-charset")
+            .displayName("Character Set")
+            .description("Specifies the character set of the document data.")
+            .required(true)
+            .defaultValue("UTF-8")

[jira] [Commented] (NIFI-4346) Add a lookup service that uses HBase

2017-10-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16191916#comment-16191916
 ] 

ASF GitHub Bot commented on NIFI-4346:
--

Github user bbende commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2125#discussion_r142771753
  
--- Diff: 
nifi-nar-bundles/nifi-standard-services/nifi-hbase_1_1_2-client-service-bundle/nifi-hbase_1_1_2-client-service/src/main/java/org/apache/nifi/hbase/HBase_1_1_2_ClientService.java
 ---
@@ -88,16 +88,16 @@
 
     static final long TICKET_RENEWAL_PERIOD = 60000;

-    private volatile Connection connection;
+    protected volatile Connection connection;
     private volatile UserGroupInformation ugi;
     private volatile KerberosTicketRenewer renewer;

-    private List<PropertyDescriptor> properties;
-    private KerberosProperties kerberosProperties;
+    protected List<PropertyDescriptor> properties;
--- End diff --

Can we leave this as private and then add a protected method that 
sub-classes can use to add additional properties? So something like this...
```
protected List<PropertyDescriptor> getAdditionalProperties() {
}
```
And then in init, it would call it like this:
```
List<PropertyDescriptor> props = new ArrayList<>();
...
props.addAll(getAdditionalProperties());
this.properties = Collections.unmodifiableList(props);
```

Then the lookup service would override getAdditionalProperties() to provide 
its specific properties to be added.
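
Assembled in one place, the suggested pattern looks roughly like this (the
class name and init hook are stand-ins for `HBase_1_1_2_ClientService`'s
actual structure):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

import org.apache.nifi.components.PropertyDescriptor;

abstract class ClientServiceSketch {
    // The list stays private to the base service.
    private List<PropertyDescriptor> properties;

    protected void init() {
        final List<PropertyDescriptor> props = new ArrayList<>();
        // ... the base service adds its own descriptors here ...
        props.addAll(getAdditionalProperties());
        this.properties = Collections.unmodifiableList(props);
    }

    // Hook for sub-classes such as the lookup service; defaults to nothing extra.
    protected List<PropertyDescriptor> getAdditionalProperties() {
        return Collections.emptyList();
    }
}
```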


> Add a lookup service that uses HBase
> 
>
> Key: NIFI-4346
> URL: https://issues.apache.org/jira/browse/NIFI-4346
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Mike Thomsen
>Priority: Minor
>
> A LookupService based on HBase should be able to handle at least two 
> scenarios:
> 1. Pull a single cell and return it as a string.
> 2. Pull multiple cells and return them as a Record that can be merged into 
> another record.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-4346) Add a lookup service that uses HBase

2017-10-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16191917#comment-16191917
 ] 

ASF GitHub Bot commented on NIFI-4346:
--

Github user bbende commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2125#discussion_r142770600
  
--- Diff: 
nifi-nar-bundles/nifi-standard-services/nifi-hbase_1_1_2-client-service-bundle/nifi-hbase_1_1_2-client-service/src/main/java/org/apache/nifi/hbase/HBase_1_1_2_ClientService.java
 ---
@@ -88,16 +88,16 @@
 
     static final long TICKET_RENEWAL_PERIOD = 60000;

-    private volatile Connection connection;
+    protected volatile Connection connection;
     private volatile UserGroupInformation ugi;
     private volatile KerberosTicketRenewer renewer;

-    private List<PropertyDescriptor> properties;
-    private KerberosProperties kerberosProperties;
+    protected List<PropertyDescriptor> properties;
+    protected KerberosProperties kerberosProperties;
     private volatile File kerberosConfigFile = null;

     // Holder of cached Configuration information so validation does not reload the same config over and over
-    private final AtomicReference<ValidationResources> validationResourceHolder = new AtomicReference<>();
+    protected final AtomicReference<ValidationResources> validationResourceHolder = new AtomicReference<>();
--- End diff --

Can we leave as private? I don't see anything referencing it.


> Add a lookup service that uses HBase
> 
>
> Key: NIFI-4346
> URL: https://issues.apache.org/jira/browse/NIFI-4346
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Mike Thomsen
>Priority: Minor
>
> A LookupService based on HBase should be able to handle at least two 
> scenarios:
> 1. Pull a single cell and return it as a string.
> 2. Pull multiple cells and return them as a Record that can be merged into 
> another record.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-4346) Add a lookup service that uses HBase

2017-10-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16191918#comment-16191918
 ] 

ASF GitHub Bot commented on NIFI-4346:
--

Github user bbende commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2125#discussion_r142770453
  
--- Diff: 
nifi-nar-bundles/nifi-standard-services/nifi-hbase_1_1_2-client-service-bundle/nifi-hbase_1_1_2-client-service/src/main/java/org/apache/nifi/hbase/HBase_1_1_2_ClientService.java
 ---
@@ -88,16 +88,16 @@
 
     static final long TICKET_RENEWAL_PERIOD = 60000;

-    private volatile Connection connection;
+    protected volatile Connection connection;
--- End diff --

Can we leave the connection as private and add a `protected Connection 
getConnection()` for the lookup service to use?
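
That is, something along these lines (a sketch, not the merged code):

```java
import org.apache.hadoop.hbase.client.Connection;

abstract class ConnectionAccessorSketch {
    // The connection stays private to the base service...
    private volatile Connection connection;

    // ...and sub-classes such as the lookup service read it through this accessor.
    protected Connection getConnection() {
        return connection;
    }
}
```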


> Add a lookup service that uses HBase
> 
>
> Key: NIFI-4346
> URL: https://issues.apache.org/jira/browse/NIFI-4346
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Mike Thomsen
>Priority: Minor
>
> A LookupService based on HBase should be able to handle at least two 
> scenarios:
> 1. Pull a single cell and return it as a string.
> 2. Pull multiple cells and return them as a Record that can be merged into 
> another record.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-4346) Add a lookup service that uses HBase

2017-10-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16191919#comment-16191919
 ] 

ASF GitHub Bot commented on NIFI-4346:
--

Github user bbende commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2125#discussion_r142770583
  
--- Diff: 
nifi-nar-bundles/nifi-standard-services/nifi-hbase_1_1_2-client-service-bundle/nifi-hbase_1_1_2-client-service/src/main/java/org/apache/nifi/hbase/HBase_1_1_2_ClientService.java
 ---
@@ -88,16 +88,16 @@
 
 static final long TICKET_RENEWAL_PERIOD = 6;
 
-private volatile Connection connection;
+protected volatile Connection connection;
 private volatile UserGroupInformation ugi;
 private volatile KerberosTicketRenewer renewer;
 
-private List properties;
-private KerberosProperties kerberosProperties;
+protected List properties;
+protected KerberosProperties kerberosProperties;
--- End diff --

Can we leave as private? I don't see anything referencing it.


> Add a lookup service that uses HBase
> 
>
> Key: NIFI-4346
> URL: https://issues.apache.org/jira/browse/NIFI-4346
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Mike Thomsen
>Priority: Minor
>
> A LookupService based on HBase should be able to handle at least two 
> scenarios:
> 1. Pull a single cell and return it as a string.
> 2. Pull multiple cells and return them as a Record that can be merged into 
> another record.
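
For context, the "pull multiple cells" scenario maps onto a plain HBase `Get`;
a sketch under assumed names (not the PR's code) that flattens one row into a
map which a NiFi `MapRecord` could then wrap:

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;
import java.util.NavigableMap;

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;

class RowLookupSketch {
    // Fetch a single row and flatten each qualifier -> value pair into a map.
    static Map<String, String> lookupRow(final Connection connection, final String tableName,
                                         final String rowKey) throws IOException {
        final Map<String, String> values = new HashMap<>();
        try (final Table table = connection.getTable(TableName.valueOf(tableName))) {
            final Result result = table.get(new Get(rowKey.getBytes(StandardCharsets.UTF_8)));
            final NavigableMap<byte[], NavigableMap<byte[], byte[]>> families = result.getNoVersionMap();
            if (families == null) {
                return values; // row not found
            }
            for (final NavigableMap<byte[], byte[]> qualifiers : families.values()) {
                for (final Map.Entry<byte[], byte[]> cell : qualifiers.entrySet()) {
                    values.put(new String(cell.getKey(), StandardCharsets.UTF_8),
                            new String(cell.getValue(), StandardCharsets.UTF_8));
                }
            }
        }
        return values;
    }
}
```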



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi pull request #2125: NIFI-4346 Created a LookupService that uses HBase a...

2017-10-04 Thread bbende
Github user bbende commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2125#discussion_r142771753
  
--- Diff: 
nifi-nar-bundles/nifi-standard-services/nifi-hbase_1_1_2-client-service-bundle/nifi-hbase_1_1_2-client-service/src/main/java/org/apache/nifi/hbase/HBase_1_1_2_ClientService.java
 ---
@@ -88,16 +88,16 @@
 
     static final long TICKET_RENEWAL_PERIOD = 60000;

-    private volatile Connection connection;
+    protected volatile Connection connection;
     private volatile UserGroupInformation ugi;
     private volatile KerberosTicketRenewer renewer;

-    private List<PropertyDescriptor> properties;
-    private KerberosProperties kerberosProperties;
+    protected List<PropertyDescriptor> properties;
--- End diff --

Can we leave this as private and then add a protected method that 
sub-classes can use to add additional properties? So something like this...
```
protected List<PropertyDescriptor> getAdditionalProperties() {
}
```
And then in init, it would call it like this:
```
List<PropertyDescriptor> props = new ArrayList<>();
...
props.addAll(getAdditionalProperties());
this.properties = Collections.unmodifiableList(props);
```

Then the lookup service would override getAdditionalProperties() to provide 
its specific properties to be added.


---


[GitHub] nifi pull request #2125: NIFI-4346 Created a LookupService that uses HBase a...

2017-10-04 Thread bbende
Github user bbende commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2125#discussion_r142770583
  
--- Diff: 
nifi-nar-bundles/nifi-standard-services/nifi-hbase_1_1_2-client-service-bundle/nifi-hbase_1_1_2-client-service/src/main/java/org/apache/nifi/hbase/HBase_1_1_2_ClientService.java
 ---
@@ -88,16 +88,16 @@
 
 static final long TICKET_RENEWAL_PERIOD = 6;
 
-private volatile Connection connection;
+protected volatile Connection connection;
 private volatile UserGroupInformation ugi;
 private volatile KerberosTicketRenewer renewer;
 
-private List<PropertyDescriptor> properties;
-private KerberosProperties kerberosProperties;
+protected List<PropertyDescriptor> properties;
+protected KerberosProperties kerberosProperties;
--- End diff --

Can we leave this as private? I don't see anything referencing it.


---


[GitHub] nifi pull request #2125: NIFI-4346 Created a LookupService that uses HBase a...

2017-10-04 Thread bbende
Github user bbende commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2125#discussion_r142770600
  
--- Diff: 
nifi-nar-bundles/nifi-standard-services/nifi-hbase_1_1_2-client-service-bundle/nifi-hbase_1_1_2-client-service/src/main/java/org/apache/nifi/hbase/HBase_1_1_2_ClientService.java
 ---
@@ -88,16 +88,16 @@
 
 static final long TICKET_RENEWAL_PERIOD = 6;
 
-private volatile Connection connection;
+protected volatile Connection connection;
 private volatile UserGroupInformation ugi;
 private volatile KerberosTicketRenewer renewer;
 
-private List<PropertyDescriptor> properties;
-private KerberosProperties kerberosProperties;
+protected List<PropertyDescriptor> properties;
+protected KerberosProperties kerberosProperties;
 private volatile File kerberosConfigFile = null;
 
 // Holder of cached Configuration information so validation does not 
reload the same config over and over
-private final AtomicReference<ValidationResources> 
validationResourceHolder = new AtomicReference<>();
+protected final AtomicReference<ValidationResources> 
validationResourceHolder = new AtomicReference<>();
--- End diff --

Can we leave this as private? I don't see anything referencing it.


---


[GitHub] nifi pull request #2125: NIFI-4346 Created a LookupService that uses HBase a...

2017-10-04 Thread bbende
Github user bbende commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2125#discussion_r142768509
  
--- Diff: 
nifi-nar-bundles/nifi-standard-services/nifi-hbase_1_1_2-client-service-bundle/nifi-hbase_1_1_2-client-service/src/main/java/org/apache/nifi/hbase/HBase_1_1_2_LookupService.java
 ---
@@ -0,0 +1,180 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.hbase;
+
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.Get;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.Table;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnEnabled;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.Validator;
+import org.apache.nifi.controller.ConfigurationContext;
+import org.apache.nifi.controller.ControllerServiceInitializationContext;
+import org.apache.nifi.lookup.LookupFailureException;
+import org.apache.nifi.lookup.LookupService;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.reporting.InitializationException;
+import org.apache.nifi.serialization.SimpleRecordSchema;
+import org.apache.nifi.serialization.record.MapRecord;
+import org.apache.nifi.serialization.record.Record;
+import org.apache.nifi.serialization.record.RecordField;
+import org.apache.nifi.serialization.record.RecordFieldType;
+import org.apache.nifi.serialization.record.RecordSchema;
+
+import java.io.IOException;
+import java.nio.charset.Charset;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.NavigableMap;
+import java.util.Optional;
+import java.util.Set;
+
+@Tags({"hbase", "record", "lookup", "service"})
+@CapabilityDescription(
+"A lookup service that retrieves one or more columns from HBase based 
on a supplied rowKey."
+)
+public class HBase_1_1_2_LookupService extends HBase_1_1_2_ClientService 
implements LookupService<Record> {
+private static final Set<String> REQUIRED_KEYS = 
Collections.singleton("rowKey");
+
+public static final PropertyDescriptor TABLE_NAME = new 
PropertyDescriptor.Builder()
+.name("hb-lu-table-name")
+.displayName("Table Name")
+.description("The name of the table where look ups will be 
run.")
+.required(true)
+.addValidator(StandardValidators.NON_BLANK_VALIDATOR)
+.build();
+public static final PropertyDescriptor RETURN_CFS = new 
PropertyDescriptor.Builder()
+.name("hb-lu-return-cfs")
+.displayName("Column Families")
+.description("The column families that will be returned.")
+.required(true)
+.addValidator(StandardValidators.NON_BLANK_VALIDATOR)
+.build();
+public static final PropertyDescriptor RETURN_QFS = new 
PropertyDescriptor.Builder()
+.name("hb-lu-return-qfs")
+.displayName("Column Qualifiers")
+.description("The column qualifies that will be returned.")
+.required(false)
+.addValidator(Validator.VALID)
+.build();
+protected static final PropertyDescriptor CHARSET = new 
PropertyDescriptor.Builder()
+.name("hb-lu-charset")
+.displayName("Character Set")
+.description("Specifies the character set of the document 
data.")
+.required(true)
+.defaultValue("UTF-8")
+.addValidator(StandardValidators.CHARACTER_SET_VALIDATOR)
+.build();
+
+private String tableName;
+private List families;
+private List qualifiers;
+private 

[GitHub] nifi pull request #2125: NIFI-4346 Created a LookupService that uses HBase a...

2017-10-04 Thread bbende
Github user bbende commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2125#discussion_r142770453
  
--- Diff: 
nifi-nar-bundles/nifi-standard-services/nifi-hbase_1_1_2-client-service-bundle/nifi-hbase_1_1_2-client-service/src/main/java/org/apache/nifi/hbase/HBase_1_1_2_ClientService.java
 ---
@@ -88,16 +88,16 @@
 
 static final long TICKET_RENEWAL_PERIOD = 6;
 
-private volatile Connection connection;
+protected volatile Connection connection;
--- End diff --

Can we leave the connection as private and add a `protected Connection 
getConnection()` for the lookup service to use?
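A minimal sketch of that accessor, for clarity:
```
private volatile Connection connection;

// sub-classes such as the lookup service read the connection through this
// accessor instead of touching the field directly
protected Connection getConnection() {
    return connection;
}
```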


---


[jira] [Commented] (NIFI-4450) Upgrade Kite SDK to latest release

2017-10-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16191881#comment-16191881
 ] 

ASF GitHub Bot commented on NIFI-4450:
--

Github user gglanzani commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2187#discussion_r142768827
  
--- Diff: nifi-nar-bundles/nifi-kite-bundle/nifi-kite-processors/pom.xml ---
@@ -25,8 +25,9 @@
 <packaging>jar</packaging>
 
 <properties>
-<kite.version>1.0.0</kite.version>
+<kite.version>1.1.0</kite.version>
 <findbugs-annotations.version>1.3.9-1</findbugs-annotations.version>
+<jackson.version>2.3.1</jackson.version>
--- End diff --

Well, the thing is... the tests would fail without Jackson. So I've looked 
at the `pom.xml` provided by Kite and just plugged that into nifi-kite.

Are your tests successful without that addition and Kite 1.1.0?


> Upgrade Kite SDK to latest release
> --
>
> Key: NIFI-4450
> URL: https://issues.apache.org/jira/browse/NIFI-4450
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.3.0
>Reporter: Giovanni Lanzani
>Assignee: Giovanni Lanzani
>Priority: Trivial
>
> Kite 1.1.0 was released more than 2 years ago. It might be a good idea to 
> update it inside NiFi.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi pull request #2187: NIFI-4450 Update Kite SDK version

2017-10-04 Thread gglanzani
Github user gglanzani commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2187#discussion_r142768827
  
--- Diff: nifi-nar-bundles/nifi-kite-bundle/nifi-kite-processors/pom.xml ---
@@ -25,8 +25,9 @@
 <packaging>jar</packaging>
 
 <properties>
-<kite.version>1.0.0</kite.version>
+<kite.version>1.1.0</kite.version>
 <findbugs-annotations.version>1.3.9-1</findbugs-annotations.version>
+<jackson.version>2.3.1</jackson.version>
--- End diff --

Well, the thing is... the tests would fail without Jackson. So I've looked 
at the `pom.xml` provided by Kite and just plugged that into nifi-kite.

Are your tests successful without that addition and Kite 1.1.0?


---


[jira] [Commented] (NIFI-4443) Allow HDFS' processors level of flexibility for StoreInKiteDataset

2017-10-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16191877#comment-16191877
 ] 

ASF GitHub Bot commented on NIFI-4443:
--

Github user gglanzani commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2186#discussion_r142768212
  
--- Diff: 
nifi-nar-bundles/nifi-kite-bundle/nifi-kite-processors/src/main/java/org/apache/nifi/processors/kite/AbstractKiteProcessor.java
 ---
@@ -47,33 +47,16 @@
 abstract class AbstractKiteProcessor extends AbstractProcessor {
 
 private static final Splitter COMMA = Splitter.on(',').trimResults();
-protected static final Validator FILES_EXIST = new Validator() {
-@Override
-public ValidationResult validate(String subject, String 
configFiles,
-ValidationContext context) {
-if (configFiles != null && !configFiles.isEmpty()) {
-for (String file : COMMA.split(configFiles)) {
-ValidationResult result = 
StandardValidators.FILE_EXISTS_VALIDATOR
-.validate(subject, file, context);
-if (!result.isValid()) {
-return result;
-}
-}
-}
-return new ValidationResult.Builder()
-.subject(subject)
-.input(configFiles)
-.explanation("Files exist")
-.valid(true)
-.build();
-}
-};
 
 protected static final PropertyDescriptor CONF_XML_FILES
 = new PropertyDescriptor.Builder()
-.name("Hadoop configuration files")
-.description("A comma-separated list of Hadoop configuration 
files")
-.addValidator(FILES_EXIST)
+.name("hadoop-configuration-resources")
--- End diff --

Good point, fixed


> Allow HDFS' processors level of flexibility for StoreInKiteDataset
> --
>
> Key: NIFI-4443
> URL: https://issues.apache.org/jira/browse/NIFI-4443
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.3.0
>Reporter: Giovanni Lanzani
>Priority: Minor
>  Labels: features, security
>
> Currently, StoreInKiteDataset is missing a lot of crucial properties to write 
> to HDFS, namely
> the ability to alter the CLASSPATH and a validator that allows expression language 
> when pointing to Hadoop configuration files.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi pull request #2186: NIFI-4443 Increase StoreInKiteDataset flexibility

2017-10-04 Thread gglanzani
Github user gglanzani commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2186#discussion_r142768212
  
--- Diff: 
nifi-nar-bundles/nifi-kite-bundle/nifi-kite-processors/src/main/java/org/apache/nifi/processors/kite/AbstractKiteProcessor.java
 ---
@@ -47,33 +47,16 @@
 abstract class AbstractKiteProcessor extends AbstractProcessor {
 
 private static final Splitter COMMA = Splitter.on(',').trimResults();
-protected static final Validator FILES_EXIST = new Validator() {
-@Override
-public ValidationResult validate(String subject, String 
configFiles,
-ValidationContext context) {
-if (configFiles != null && !configFiles.isEmpty()) {
-for (String file : COMMA.split(configFiles)) {
-ValidationResult result = 
StandardValidators.FILE_EXISTS_VALIDATOR
-.validate(subject, file, context);
-if (!result.isValid()) {
-return result;
-}
-}
-}
-return new ValidationResult.Builder()
-.subject(subject)
-.input(configFiles)
-.explanation("Files exist")
-.valid(true)
-.build();
-}
-};
 
 protected static final PropertyDescriptor CONF_XML_FILES
 = new PropertyDescriptor.Builder()
-.name("Hadoop configuration files")
-.description("A comma-separated list of Hadoop configuration 
files")
-.addValidator(FILES_EXIST)
+.name("hadoop-configuration-resources")
--- End diff --

Good point, fixed


---


[jira] [Commented] (NIFI-3950) Separate AWS ControllerService API

2017-10-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16191853#comment-16191853
 ] 

ASF GitHub Bot commented on NIFI-3950:
--

Github user bbende commented on the issue:

https://github.com/apache/nifi/pull/2140
  
@jvwing sorry I never responded to this... The results you got seem correct.

The part I wasn't sure about was that the `nifi-aws-nar-1.3.0.nar` has a 
dependency on `nifi-standard-services-api-nar-1.3.0.nar`, so I thought you may 
have to leave the whole chain of dependencies around.

Looking at the code during start-up, I think when loading the 
`nifi-aws-nar-1.3.0.nar` we would detect that there is no 1.3.0 version of 
`standard-services-api-nar` present, but there is a 1.4.0 version, so we just 
use that, which is probably why it worked.


> Separate AWS ControllerService API
> --
>
> Key: NIFI-3950
> URL: https://issues.apache.org/jira/browse/NIFI-3950
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: James Wing
>Priority: Minor
>
> The nifi-aws-bundle currently contains the interface for the 
> AWSCredentialsProviderService as well as the service implementation, and 
> dependent abstract classes and processor classes.
> This results in the following warning logged as NiFi loads:
> {quote}
> org.apache.nifi.nar.ExtensionManager Component 
> org.apache.nifi.processors.aws.s3.PutS3Object is bundled with its referenced 
> Controller Service APIs 
> org.apache.nifi.processors.aws.credentials.provider.service.AWSCredentialsProviderService.
>  The service APIs should not be bundled with component implementations that 
> reference it.
> {quote}
> Some [discussion of this issue and potential solutions occurred on the dev 
> list|http://apache-nifi.1125220.n5.nabble.com/Duplicated-processors-when-using-nifi-processors-dependency-td17038.html].
> We also need a migration plan in addition to the new structure.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi issue #2140: NIFI-3950 Refactor AWS bundle

2017-10-04 Thread bbende
Github user bbende commented on the issue:

https://github.com/apache/nifi/pull/2140
  
@jvwing sorry I never responded to this... The results you got seem correct.

The part I wasn't sure about was that the `nifi-aws-nar-1.3.0.nar` has a 
dependency on `nifi-standard-services-api-nar-1.3.0.nar`, so I thought you may 
have to leave the whole chain of dependencies around.

Looking at the code during start-up, I think when loading the 
`nifi-aws-nar-1.3.0.nar` we would detect that there is no 1.3.0 version of 
`standard-services-api-nar` present, but there is a 1.4.0 version, so we just 
use that, which is probably why it worked.


---


[jira] [Commented] (NIFI-4450) Upgrade Kite SDK to latest release

2017-10-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16191716#comment-16191716
 ] 

ASF GitHub Bot commented on NIFI-4450:
--

Github user bbende commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2187#discussion_r142745624
  
--- Diff: nifi-nar-bundles/nifi-kite-bundle/nifi-kite-processors/pom.xml ---
@@ -25,8 +25,9 @@
 <packaging>jar</packaging>
 
 <properties>
-<kite.version>1.0.0</kite.version>
+<kite.version>1.1.0</kite.version>
 <findbugs-annotations.version>1.3.9-1</findbugs-annotations.version>
+<jackson.version>2.3.1</jackson.version>
--- End diff --

Any specific reason for adding the jackson dependencies?

Looking at the Kite NAR from master, it appears to already include a newer 
version of Jackson:
```
ls -l 
work/nar/extensions/nifi-kite-nar-1.5.0-SNAPSHOT.nar-unpacked/META-INF/bundled-dependencies/
 | grep jackson
-rw-r--r--  1 bbende  staff46968 Oct  4 13:57 
jackson-annotations-2.6.0.jar
-rw-r--r--  1 bbende  staff   258833 Oct  4 13:57 jackson-core-2.6.1.jar
-rw-r--r--  1 bbende  staff  1166488 Oct  4 13:57 jackson-databind-2.6.1.jar
-rw-r--r--  1 bbende  staff  1029033 Oct  4 13:57 parquet-jackson-1.4.1.jar
```


> Upgrade Kite SDK to latest release
> --
>
> Key: NIFI-4450
> URL: https://issues.apache.org/jira/browse/NIFI-4450
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.3.0
>Reporter: Giovanni Lanzani
>Assignee: Giovanni Lanzani
>Priority: Trivial
>
> Kite 1.1.0 was released more than 2 years ago. It might be a good idea to 
> update it inside NiFi.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi pull request #2187: NIFI-4450 Update Kite SDK version

2017-10-04 Thread bbende
Github user bbende commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2187#discussion_r142745624
  
--- Diff: nifi-nar-bundles/nifi-kite-bundle/nifi-kite-processors/pom.xml ---
@@ -25,8 +25,9 @@
 <packaging>jar</packaging>
 
 <properties>
-<kite.version>1.0.0</kite.version>
+<kite.version>1.1.0</kite.version>
 <findbugs-annotations.version>1.3.9-1</findbugs-annotations.version>
+<jackson.version>2.3.1</jackson.version>
--- End diff --

Any specific reason for adding the jackson dependencies?

Looking at the Kite NAR from master, it appears to already include a newer 
version of Jackson:
```
ls -l 
work/nar/extensions/nifi-kite-nar-1.5.0-SNAPSHOT.nar-unpacked/META-INF/bundled-dependencies/
 | grep jackson
-rw-r--r--  1 bbende  staff46968 Oct  4 13:57 
jackson-annotations-2.6.0.jar
-rw-r--r--  1 bbende  staff   258833 Oct  4 13:57 jackson-core-2.6.1.jar
-rw-r--r--  1 bbende  staff  1166488 Oct  4 13:57 jackson-databind-2.6.1.jar
-rw-r--r--  1 bbende  staff  1029033 Oct  4 13:57 parquet-jackson-1.4.1.jar
```


---


[jira] [Assigned] (NIFI-4450) Upgrade Kite SDK to latest release

2017-10-04 Thread Bryan Bende (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bryan Bende reassigned NIFI-4450:
-

Assignee: Giovanni Lanzani

> Upgrade Kite SDK to latest release
> --
>
> Key: NIFI-4450
> URL: https://issues.apache.org/jira/browse/NIFI-4450
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.3.0
>Reporter: Giovanni Lanzani
>Assignee: Giovanni Lanzani
>Priority: Trivial
>
> Kite 1.1.0 was released more than 2 years ago. It might be a good idea to 
> update it inside NiFi.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-4443) Allow HDFS' processors level of flexibility for StoreInKiteDataset

2017-10-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16191607#comment-16191607
 ] 

ASF GitHub Bot commented on NIFI-4443:
--

Github user bbende commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2186#discussion_r142735370
  
--- Diff: 
nifi-nar-bundles/nifi-kite-bundle/nifi-kite-processors/src/main/java/org/apache/nifi/processors/kite/AbstractKiteProcessor.java
 ---
@@ -47,33 +47,16 @@
 abstract class AbstractKiteProcessor extends AbstractProcessor {
 
 private static final Splitter COMMA = Splitter.on(',').trimResults();
-protected static final Validator FILES_EXIST = new Validator() {
-@Override
-public ValidationResult validate(String subject, String 
configFiles,
-ValidationContext context) {
-if (configFiles != null && !configFiles.isEmpty()) {
-for (String file : COMMA.split(configFiles)) {
-ValidationResult result = 
StandardValidators.FILE_EXISTS_VALIDATOR
-.validate(subject, file, context);
-if (!result.isValid()) {
-return result;
-}
-}
-}
-return new ValidationResult.Builder()
-.subject(subject)
-.input(configFiles)
-.explanation("Files exist")
-.valid(true)
-.build();
-}
-};
 
 protected static final PropertyDescriptor CONF_XML_FILES
 = new PropertyDescriptor.Builder()
-.name("Hadoop configuration files")
-.description("A comma-separated list of Hadoop configuration 
files")
-.addValidator(FILES_EXIST)
+.name("hadoop-configuration-resources")
--- End diff --

I'd recommend we leave the name the same, since changing the name will cause 
the processor to become invalid when someone upgrades an existing flow. 

We can still introduce displayName if you'd like to call it "Hadoop 
Configuration Resources" as opposed to "Hadoop configuration files", but we'd 
leave the name as "Hadoop configuration files", as in the sketch below.
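For example, something along these lines (the display name shown is just a 
suggestion):
```
protected static final PropertyDescriptor CONF_XML_FILES
        = new PropertyDescriptor.Builder()
        // keep the original name so existing flows stay valid after upgrade
        .name("Hadoop configuration files")
        // displayName only changes the label rendered in the UI
        .displayName("Hadoop Configuration Resources")
        .description("A comma-separated list of Hadoop configuration files")
        .build();
```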


> Allow HDFS' processors level of flexibility for StoreInKiteDataset
> --
>
> Key: NIFI-4443
> URL: https://issues.apache.org/jira/browse/NIFI-4443
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.3.0
>Reporter: Giovanni Lanzani
>Priority: Minor
>  Labels: features, security
>
> Currently, StoreInKiteDataset is missing a lot of crucial properties to write 
> to HDFS, namely
> the ability to alter the CLASSPATH and a validator that allows expression language 
> when pointing to Hadoop configuration files.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi pull request #2186: NIFI-4443 Increase StoreInKiteDataset flexibility

2017-10-04 Thread bbende
Github user bbende commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2186#discussion_r142735370
  
--- Diff: 
nifi-nar-bundles/nifi-kite-bundle/nifi-kite-processors/src/main/java/org/apache/nifi/processors/kite/AbstractKiteProcessor.java
 ---
@@ -47,33 +47,16 @@
 abstract class AbstractKiteProcessor extends AbstractProcessor {
 
 private static final Splitter COMMA = Splitter.on(',').trimResults();
-protected static final Validator FILES_EXIST = new Validator() {
-@Override
-public ValidationResult validate(String subject, String 
configFiles,
-ValidationContext context) {
-if (configFiles != null && !configFiles.isEmpty()) {
-for (String file : COMMA.split(configFiles)) {
-ValidationResult result = 
StandardValidators.FILE_EXISTS_VALIDATOR
-.validate(subject, file, context);
-if (!result.isValid()) {
-return result;
-}
-}
-}
-return new ValidationResult.Builder()
-.subject(subject)
-.input(configFiles)
-.explanation("Files exist")
-.valid(true)
-.build();
-}
-};
 
 protected static final PropertyDescriptor CONF_XML_FILES
 = new PropertyDescriptor.Builder()
-.name("Hadoop configuration files")
-.description("A comma-separated list of Hadoop configuration 
files")
-.addValidator(FILES_EXIST)
+.name("hadoop-configuration-resources")
--- End diff --

I'd recommend we leave the name the same, since changing the name will cause 
the processor to become invalid when someone upgrades an existing flow. 

We can still introduce displayName if you'd like to call it "Hadoop 
Configuration Resources" as opposed to "Hadoop configuration files", but we'd 
leave the name as "Hadoop configuration files".


---


[jira] [Created] (NIFI-4464) Couldn't install nifi as service on a Mac OS (OS X El Capitan 10.11.6)

2017-10-04 Thread Karthikeyan Govindaraj (JIRA)
Karthikeyan Govindaraj created NIFI-4464:


 Summary: Couldn't install nifi as service on a Mac OS (OS X El 
Capitan 10.11.6)
 Key: NIFI-4464
 URL: https://issues.apache.org/jira/browse/NIFI-4464
 Project: Apache NiFi
  Issue Type: Bug
  Components: Configuration
Reporter: Karthikeyan Govindaraj
 Attachments: Screen Shot 2017-10-04 at 10.04.55 AM.png

Installed NiFi via Homebrew on my Mac laptop with *OS X (El Capitan v 
10.11.6)* and ran the _start_ / _run_ commands, which gave the expected results.

But when I run `*nifi install*`, it fails with the following 
message:

`*_Installing Apache NiFi as a service is not supported on OS X or Cygwin._*`

I can see that *nifi.sh* prints the above message if it detects the 
OS as _*Darwin*_ (
https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-resources/src/main/resources/bin/nifi.sh#L59
), and unfortunately OS X is Darwin. But the "*Getting Started with Apache 
NiFi*" guide says installing NiFi as a service is supported for OS X users (link: 
https://nifi.apache.org/docs/nifi-docs/html/getting-started.html#installing-as-a-service)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (NIFI-3349) GetSplunk Should Periodically Re-Authenticate

2017-10-04 Thread Bryan Bende (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3349?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bryan Bende reassigned NIFI-3349:
-

Assignee: Bryan Bende

> GetSplunk Should Periodically Re-Authenticate
> -
>
> Key: NIFI-3349
> URL: https://issues.apache.org/jira/browse/NIFI-3349
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.0.0, 0.6.0, 0.7.0, 0.6.1, 1.1.0, 0.7.1, 1.1.1, 1.0.1
>Reporter: Bryan Bende
>Assignee: Bryan Bende
>Priority: Minor
> Fix For: 1.5.0
>
>
> The first time the processor executes, it lazily initializes the Splunk 
> Service object:
> https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-splunk-bundle/nifi-splunk-processors/src/main/java/org/apache/nifi/processors/splunk/GetSplunk.java#L372-L377
> As part of this initialization, the Splunk service calls a login method like 
> this:
> {code}
> public Service login(String username, String password) {
> this.username = username;
> this.password = password;
> Args args = new Args();
> args.put("username", username);
> args.put("password", password);
> args.put("cookie", "1");
> ResponseMessage response = post("/services/auth/login", args);
> String sessionKey = Xml.parse(response.getContent())
> .getElementsByTagName("sessionKey")
> .item(0)
> .getTextContent();
> this.token = "Splunk " + sessionKey;
> this.version = this.getInfo().getVersion();
> if (versionCompare("4.3") >= 0)
> this.passwordEndPoint = "storage/passwords";
> return this;
> }
> {code}
> Since this only happens the first time the processor executes, it will only 
> happen again if you stop and start the processor. If the processor has been 
> running long enough, that session has probably expired while the processor 
> continues to attempt to execute.
> We should periodically call service.login() in a timer thread.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (NIFI-3349) GetSplunk Should Periodically Re-Authenticate

2017-10-04 Thread Bryan Bende (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3349?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bryan Bende updated NIFI-3349:
--
Fix Version/s: 1.5.0

> GetSplunk Should Periodically Re-Authenticate
> -
>
> Key: NIFI-3349
> URL: https://issues.apache.org/jira/browse/NIFI-3349
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.0.0, 0.6.0, 0.7.0, 0.6.1, 1.1.0, 0.7.1, 1.1.1, 1.0.1
>Reporter: Bryan Bende
>Priority: Minor
> Fix For: 1.5.0
>
>
> The first time the processor executes, it lazily initializes the Splunk 
> Service object:
> https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-splunk-bundle/nifi-splunk-processors/src/main/java/org/apache/nifi/processors/splunk/GetSplunk.java#L372-L377
> As part of this initialization, the Splunk service calls a login method like 
> this:
> {code}
> public Service login(String username, String password) {
> this.username = username;
> this.password = password;
> Args args = new Args();
> args.put("username", username);
> args.put("password", password);
> args.put("cookie", "1");
> ResponseMessage response = post("/services/auth/login", args);
> String sessionKey = Xml.parse(response.getContent())
> .getElementsByTagName("sessionKey")
> .item(0)
> .getTextContent();
> this.token = "Splunk " + sessionKey;
> this.version = this.getInfo().getVersion();
> if (versionCompare("4.3") >= 0)
> this.passwordEndPoint = "storage/passwords";
> return this;
> }
> {code}
> Since this only happens the first time the processor executes, it will only 
> happen again if you stop and start the processor. If the processor has been 
> running long enough, that session has probably expired while the processor 
> continues to attempt to execute.
> We should periodically call service.login() in a timer thread.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (NIFI-3349) GetSplunk Should Periodically Re-Authenticate

2017-10-04 Thread Bryan Bende (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3349?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bryan Bende resolved NIFI-3349.
---
Resolution: Fixed

> GetSplunk Should Periodically Re-Authenticate
> -
>
> Key: NIFI-3349
> URL: https://issues.apache.org/jira/browse/NIFI-3349
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.0.0, 0.6.0, 0.7.0, 0.6.1, 1.1.0, 0.7.1, 1.1.1, 1.0.1
>Reporter: Bryan Bende
>Assignee: Bryan Bende
>Priority: Minor
> Fix For: 1.5.0
>
>
> The first time the processor executes, it lazily initializes the Splunk 
> Service object:
> https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-splunk-bundle/nifi-splunk-processors/src/main/java/org/apache/nifi/processors/splunk/GetSplunk.java#L372-L377
> As part of this initialization, the Splunk service calls a login method like 
> this:
> {code}
> public Service login(String username, String password) {
> this.username = username;
> this.password = password;
> Args args = new Args();
> args.put("username", username);
> args.put("password", password);
> args.put("cookie", "1");
> ResponseMessage response = post("/services/auth/login", args);
> String sessionKey = Xml.parse(response.getContent())
> .getElementsByTagName("sessionKey")
> .item(0)
> .getTextContent();
> this.token = "Splunk " + sessionKey;
> this.version = this.getInfo().getVersion();
> if (versionCompare("4.3") >= 0)
> this.passwordEndPoint = "storage/passwords";
> return this;
> }
> {code}
> Since this only happens the first time the processor executes, it will only 
> happen again if you stop and start the processor. If the processor has been 
> running long enough that session probably expired and the processor is 
> continuing to attempt to execute.
> We should periodically call service.login() in a timer thread.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-3349) GetSplunk Should Periodically Re-Authenticate

2017-10-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16191550#comment-16191550
 ] 

ASF GitHub Bot commented on NIFI-3349:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2149


> GetSplunk Should Periodically Re-Authenticate
> -
>
> Key: NIFI-3349
> URL: https://issues.apache.org/jira/browse/NIFI-3349
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.0.0, 0.6.0, 0.7.0, 0.6.1, 1.1.0, 0.7.1, 1.1.1, 1.0.1
>Reporter: Bryan Bende
>Priority: Minor
>
> The first time the processor executes, it lazily initializes the Splunk 
> Service object:
> https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-splunk-bundle/nifi-splunk-processors/src/main/java/org/apache/nifi/processors/splunk/GetSplunk.java#L372-L377
> As part of this initialization, the Splunk service calls a login method like 
> this:
> {code}
> public Service login(String username, String password) {
> this.username = username;
> this.password = password;
> Args args = new Args();
> args.put("username", username);
> args.put("password", password);
> args.put("cookie", "1");
> ResponseMessage response = post("/services/auth/login", args);
> String sessionKey = Xml.parse(response.getContent())
> .getElementsByTagName("sessionKey")
> .item(0)
> .getTextContent();
> this.token = "Splunk " + sessionKey;
> this.version = this.getInfo().getVersion();
> if (versionCompare("4.3") >= 0)
> this.passwordEndPoint = "storage/passwords";
> return this;
> }
> {code}
> Since this only happens the first time the processor executes, it will only 
> happen again if you stop and start the processor. If the processor has been 
> running long enough, that session has probably expired while the processor 
> continues to attempt to execute.
> We should periodically call service.login() in a timer thread.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi pull request #2149: NIFI-3349 retry stale connections

2017-10-04 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2149


---


[jira] [Commented] (NIFI-3349) GetSplunk Should Periodically Re-Authenticate

2017-10-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16191548#comment-16191548
 ] 

ASF subversion and git services commented on NIFI-3349:
---

Commit 0a47a3bde58ce6b07ef10c6ecd6fd9d415b76127 in nifi's branch 
refs/heads/master from [~ndet...@minerkasch.com]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=0a47a3b ]

NIFI-3349 retry stale connections

This closes #2149.

Signed-off-by: Bryan Bende 


> GetSplunk Should Periodically Re-Authenticate
> -
>
> Key: NIFI-3349
> URL: https://issues.apache.org/jira/browse/NIFI-3349
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.0.0, 0.6.0, 0.7.0, 0.6.1, 1.1.0, 0.7.1, 1.1.1, 1.0.1
>Reporter: Bryan Bende
>Priority: Minor
>
> The first time the processor executes, it lazily initializes the Splunk 
> Service object:
> https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-splunk-bundle/nifi-splunk-processors/src/main/java/org/apache/nifi/processors/splunk/GetSplunk.java#L372-L377
> As part of this initialization, the Splunk service calls a login method like 
> this:
> {code}
> public Service login(String username, String password) {
> this.username = username;
> this.password = password;
> Args args = new Args();
> args.put("username", username);
> args.put("password", password);
> args.put("cookie", "1");
> ResponseMessage response = post("/services/auth/login", args);
> String sessionKey = Xml.parse(response.getContent())
> .getElementsByTagName("sessionKey")
> .item(0)
> .getTextContent();
> this.token = "Splunk " + sessionKey;
> this.version = this.getInfo().getVersion();
> if (versionCompare("4.3") >= 0)
> this.passwordEndPoint = "storage/passwords";
> return this;
> }
> {code}
> Since this only happens the first time the processor executes, it will only 
> happen again if you stop and start the processor. If the processor has been 
> running long enough, that session has probably expired while the processor 
> continues to attempt to execute.
> We should periodically call service.login() in a timer thread.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-3349) GetSplunk Should Periodically Re-Authenticate

2017-10-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16191547#comment-16191547
 ] 

ASF GitHub Bot commented on NIFI-3349:
--

Github user bbende commented on the issue:

https://github.com/apache/nifi/pull/2149
  
+1 Looks good, thanks for the contribution!
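For anyone reading along, the general shape of the fix -- re-running login on 
a schedule -- is roughly the following sketch. `Service` here is the Splunk 
SDK handle from the quoted code below; the one-hour interval and the method 
name are placeholders, not the PR's actual values.
```
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// re-authenticate periodically so a long-running processor never holds a stale session
void scheduleReLogin(final Service service, final String username, final String password) {
    final ScheduledExecutorService renewer = Executors.newSingleThreadScheduledExecutor();
    renewer.scheduleAtFixedRate(() -> {
        try {
            service.login(username, password);  // refreshes the session token
        } catch (final Exception e) {
            // log and retry on the next tick rather than letting the task die
        }
    }, 1, 1, TimeUnit.HOURS);
}
```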


> GetSplunk Should Periodically Re-Authenticate
> -
>
> Key: NIFI-3349
> URL: https://issues.apache.org/jira/browse/NIFI-3349
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.0.0, 0.6.0, 0.7.0, 0.6.1, 1.1.0, 0.7.1, 1.1.1, 1.0.1
>Reporter: Bryan Bende
>Priority: Minor
>
> The first time the processor executes, it lazily initializes the Splunk 
> Service object:
> https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-splunk-bundle/nifi-splunk-processors/src/main/java/org/apache/nifi/processors/splunk/GetSplunk.java#L372-L377
> As part of this initialization, the Splunk service calls a login method like 
> this:
> {code}
> public Service login(String username, String password) {
> this.username = username;
> this.password = password;
> Args args = new Args();
> args.put("username", username);
> args.put("password", password);
> args.put("cookie", "1");
> ResponseMessage response = post("/services/auth/login", args);
> String sessionKey = Xml.parse(response.getContent())
> .getElementsByTagName("sessionKey")
> .item(0)
> .getTextContent();
> this.token = "Splunk " + sessionKey;
> this.version = this.getInfo().getVersion();
> if (versionCompare("4.3") >= 0)
> this.passwordEndPoint = "storage/passwords";
> return this;
> }
> {code}
> Since this only happens the first time the processor executes, it will only 
> happen again if you stop and start the processor. If the processor has been 
> running long enough, that session has probably expired while the processor 
> continues to attempt to execute.
> We should periodically call service.login() in a timer thread.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi issue #2149: NIFI-3349 retry stale connections

2017-10-04 Thread bbende
Github user bbende commented on the issue:

https://github.com/apache/nifi/pull/2149
  
+1 Looks good, thanks for the contribution!


---


[GitHub] nifi-minifi-cpp pull request #141: MINIFICPP-215: Make libcurl containing cl...

2017-10-04 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi-minifi-cpp/pull/141


---


[jira] [Updated] (NIFI-4461) DistributedMapCacheClient/Server are inefficient when waiting for data to be received from socket

2017-10-04 Thread Bryan Bende (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bryan Bende updated NIFI-4461:
--
Resolution: Fixed
Status: Resolved  (was: Open)

> DistributedMapCacheClient/Server are inefficient when waiting for data to be 
> received from socket
> -
>
> Key: NIFI-4461
> URL: https://issues.apache.org/jira/browse/NIFI-4461
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Mark Payne
>Assignee: Mark Payne
> Fix For: 1.5.0
>
>
> The DistributedMapCacheClient and DistributedMapCacheServer use the  
> SocketChannelInputStream and SSLSocketChannelInputStream (and output streams) 
> for communicating over a socket in non-blocking mode. This is done to allow a 
> timeout to occur on a socket write. However, when reading from the socket, it 
> ends up calling Thread.sleep(10, TimeUnit.MILLISECONDS) when there is no data 
> available, and that can result in extremely slow performance. Instead, we 
> should use blocking mode when receiving data because it will throw a 
> timeout exception as we desire. When writing, we should continue using 
> non-blocking mode but sleep for far less than 10 milliseconds.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi pull request #2193: NIFI-4461: When reading from socket channel use blo...

2017-10-04 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2193


---


[jira] [Commented] (NIFI-4461) DistributedMapCacheClient/Server are inefficient when waiting for data to be received from socket

2017-10-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16191460#comment-16191460
 ] 

ASF GitHub Bot commented on NIFI-4461:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2193


> DistributedMapCacheClient/Server are inefficient when waiting for data to be 
> received from socket
> -
>
> Key: NIFI-4461
> URL: https://issues.apache.org/jira/browse/NIFI-4461
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Mark Payne
>Assignee: Mark Payne
> Fix For: 1.5.0
>
>
> The DistributedMapCacheClient and DistributedMapCacheServer use the  
> SocketChannelInputStream and SSLSocketChannelInputStream (and output streams) 
> for communicating over a socket in non-blocking mode. This is done to allow a 
> timeout to occur on a socket write. However, when reading from the socket, it 
> ends up calling Thread.sleep(10, TimeUnit.MILLISECONDS) when there is no data 
> available, and that can result in extremely slow performance. Instead, we 
> should use blocking mode when receiving data because it will throw a 
> timeout exception as we desire. When writing, we should continue using 
> non-blocking mode but sleep for far less than 10 milliseconds.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-4461) DistributedMapCacheClient/Server are inefficient when waiting for data to be received from socket

2017-10-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16191457#comment-16191457
 ] 

ASF subversion and git services commented on NIFI-4461:
---

Commit 8741b6f6a5fea9815984f1e48dbe723d7948746f in nifi's branch 
refs/heads/master from [~markap14]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=8741b6f ]

NIFI-4461: When reading from socket channel use blocking mode instead of 
sleeping; when writing, use a far smaller sleep duration

This closes #2193.

Signed-off-by: Bryan Bende 


> DistributedMapCacheClient/Server are inefficient when waiting for data to be 
> received from socket
> -
>
> Key: NIFI-4461
> URL: https://issues.apache.org/jira/browse/NIFI-4461
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Mark Payne
>Assignee: Mark Payne
> Fix For: 1.5.0
>
>
> The DistributedMapCacheClient and DistributedMapCacheServer use the  
> SocketChannelInputStream and SSLSocketChannelInputStream (and output streams) 
> for communicating over a socket in non-blocking mode. This is done to allow a 
> timeout to occur on a socket write. However, when reading from the socket, it 
> ends up calling Thread.sleep(10, TimeUnit.MILLISECONDS) when there is no data 
> available, and that can result in extremely slow performance. Instead, we 
> should use blocking mode when receiving data because it will throw a 
> timeout exception as we desire. When writing, we should continue using 
> non-blocking mode but sleep for far less than 10 milliseconds.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-4461) DistributedMapCacheClient/Server are inefficient when waiting for data to be received from socket

2017-10-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16191455#comment-16191455
 ] 

ASF GitHub Bot commented on NIFI-4461:
--

Github user bbende commented on the issue:

https://github.com/apache/nifi/pull/2193
  
+1 Nice find, this significantly improves the performance of distributed 
map cache, will merge to master shortly
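The underlying pattern, sketched here with plain java.net sockets rather than 
NiFi's channel wrappers: with SO_TIMEOUT set, a blocking read simply waits 
for data and throws SocketTimeoutException on expiry, so no sleep/poll loop 
is needed.
```
import java.io.InputStream;
import java.net.Socket;

int readWithTimeout(final Socket socket, final int timeoutMillis) throws Exception {
    socket.setSoTimeout(timeoutMillis);  // applies to reads on the socket's stream
    final InputStream in = socket.getInputStream();
    return in.read();  // blocks until data arrives, or throws SocketTimeoutException
}
```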


> DistributedMapCacheClient/Server are inefficient when waiting for data to be 
> received from socket
> -
>
> Key: NIFI-4461
> URL: https://issues.apache.org/jira/browse/NIFI-4461
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Mark Payne
>Assignee: Mark Payne
> Fix For: 1.5.0
>
>
> The DistributedMapCacheClient and DistributedMapCacheServer use the  
> SocketChannelInputStream and SSLSocketChannelInputStream (and output streams) 
> for communicating over a socket in non-blocking mode. This is done to allow a 
> timeout to occur on a socket write. However, when reading from the socket, it 
> ends up calling Thread.sleep(10, TimeUnit.MILLISECONDS) when there is no data 
> available, and that can result in extremely slow performance. Instead, we 
> should use blocking mode when receiving data because it will throw a 
> timeout exception as we desire. When writing, we should continue using 
> non-blocking mode but sleep for far less than 10 milliseconds.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi issue #2193: NIFI-4461: When reading from socket channel use blocking m...

2017-10-04 Thread bbende
Github user bbende commented on the issue:

https://github.com/apache/nifi/pull/2193
  
+1 Nice find, this significantly improves the performance of distributed 
map cache, will merge to master shortly


---


[jira] [Created] (NIFI-4463) MQTT client disconnection in ConsumeMQTT Processor

2017-10-04 Thread Stefano Melzi (JIRA)
Stefano Melzi created NIFI-4463:
---

 Summary: MQTT client disconnection in ConsumeMQTT Processor
 Key: NIFI-4463
 URL: https://issues.apache.org/jira/browse/NIFI-4463
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Affects Versions: 1.3.0
 Environment: OS: Centos /
MQTT Broker: mosquitto
Reporter: Stefano Melzi
Priority: Minor


When the internal queue is full, the messageArrived method throws an exception, 
causing the MQTT client to disconnect. This event is handled by creating a new 
client, which tries to reconnect to the broker.

In our test, using a mosquitto broker with high MQTT traffic and a low value 
for the "Max Queue Size" property, we observed broker disconnections when the 
internal queue was full, but after this event the ConsumeMQTT processor remained 
blocked without any attempt to reconnect.
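For context, a minimal sketch (assuming the Eclipse Paho client that 
ConsumeMQTT uses) of the behavior being asked for: messageArrived should avoid 
throwing when the internal queue is full, since the thrown exception is what 
tears the connection down, and connectionLost should keep retrying. The class 
name, queue size, and retry interval below are illustrative only.
```
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

import org.eclipse.paho.client.mqttv3.IMqttDeliveryToken;
import org.eclipse.paho.client.mqttv3.MqttCallback;
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttMessage;

public class ReconnectingConsumer implements MqttCallback {

    // bounded queue standing in for the processor's "Max Queue Size"
    private final BlockingQueue<MqttMessage> queue = new LinkedBlockingQueue<>(100);
    private final MqttClient client;

    public ReconnectingConsumer(final MqttClient client) {
        this.client = client;
    }

    @Override
    public void messageArrived(final String topic, final MqttMessage message) {
        // offer() instead of throwing: a full queue drops the message,
        // but never propagates an exception that would disconnect the client
        if (!queue.offer(message)) {
            // queue full: log and drop, or apply back-pressure here
        }
    }

    @Override
    public void connectionLost(final Throwable cause) {
        // keep retrying until the broker is reachable again
        while (!client.isConnected()) {
            try {
                client.connect();
            } catch (final Exception e) {
                try {
                    Thread.sleep(5000L);  // back off before the next attempt
                } catch (final InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    return;
                }
            }
        }
    }

    @Override
    public void deliveryComplete(final IMqttDeliveryToken token) {
        // not used for a consumer
    }
}
```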




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Moved] (MINIFICPP-240) MiNiFi-* should employ binding sockets to specific network interface

2017-10-04 Thread Aldrin Piri (JIRA)

 [ 
https://issues.apache.org/jira/browse/MINIFICPP-240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aldrin Piri moved MINIFI-228 to MINIFICPP-240:
--

Component/s: (was: Core Framework)
 Issue Type: Improvement  (was: Bug)
Key: MINIFICPP-240  (was: MINIFI-228)
Project: NiFi MiNiFi C++  (was: Apache NiFi MiNiFi)

> MiNiFi-* should employ binding sockets to specific network interface
> 
>
> Key: MINIFICPP-240
> URL: https://issues.apache.org/jira/browse/MINIFICPP-240
> Project: NiFi MiNiFi C++
>  Issue Type: Improvement
>Reporter: marco polo
>
> https://issues.apache.org/jira/browse/NIFI-3541 introduces functionality to 
> support pinning a Socket instance to a controller. We should ensure this 
> functionality exists with MiNiFi-CPP and is employed in the Java Agent. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (NIFI-4462) Document the Variable Registry UI

2017-10-04 Thread Matt Gilman (JIRA)
Matt Gilman created NIFI-4462:
-

 Summary: Document the Variable Registry UI
 Key: NIFI-4462
 URL: https://issues.apache.org/jira/browse/NIFI-4462
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Documentation & Website
Reporter: Matt Gilman


We need to update the user/admin guide to describe the capabilities of the 
variable registry UI. The guides currently call out the existing capabilities 
using the file-based approach, but this needs to be updated to incorporate the 
new UI.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-3432) ExecuteSQL Should Support Multiple ResultSets

2017-10-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3432?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16191414#comment-16191414
 ] 

ASF GitHub Bot commented on NIFI-3432:
--

Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1471#discussion_r142700529
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/ExecuteSQL.java
 ---
@@ -196,44 +196,80 @@ public void process(InputStream in) throws 
IOException {
 selectQuery = queryContents.toString();
 }
 
+int resultCount=0;
 try (final Connection con = dbcpService.getConnection();
 final Statement st = con.createStatement()) {
 st.setQueryTimeout(queryTimeout); // timeout in seconds
-final AtomicLong nrOfRows = new AtomicLong(0L);
-if (fileToProcess == null) {
-fileToProcess = session.create();
-}
-fileToProcess = session.write(fileToProcess, new 
OutputStreamCallback() {
-@Override
-public void process(final OutputStream out) throws 
IOException {
-try {
-logger.debug("Executing query {}", new 
Object[]{selectQuery});
-final ResultSet resultSet = 
st.executeQuery(selectQuery);
-final JdbcCommon.AvroConversionOptions options = 
JdbcCommon.AvroConversionOptions.builder()
-.convertNames(convertNamesForAvro)
-.useLogicalTypes(useAvroLogicalTypes)
-.defaultPrecision(defaultPrecision)
-.defaultScale(defaultScale)
-.build();
-
nrOfRows.set(JdbcCommon.convertToAvroStream(resultSet, out, options, null));
-} catch (final SQLException e) {
-throw new ProcessException(e);
-}
+
+logger.debug("Executing query {}", new Object[]{selectQuery});
+boolean results = st.execute(selectQuery);
+
+
+while(results){
+FlowFile resultSetFF;
+if(fileToProcess == null){
+resultSetFF = session.create();
+} else {
+resultSetFF = session.create(fileToProcess);
+resultSetFF = session.putAllAttributes(resultSetFF, 
fileToProcess.getAttributes());
 }
-});
 
-long duration = stopWatch.getElapsed(TimeUnit.MILLISECONDS);
+final AtomicLong nrOfRows = new AtomicLong(0L);
+resultSetFF = session.write(resultSetFF, new 
OutputStreamCallback() {
+@Override
+public void process(final OutputStream out) throws 
IOException {
+try {
 
-// set attribute how many rows were selected
-fileToProcess = session.putAttribute(fileToProcess, 
RESULT_ROW_COUNT, String.valueOf(nrOfRows.get()));
-fileToProcess = session.putAttribute(fileToProcess, 
RESULT_QUERY_DURATION, String.valueOf(duration));
-fileToProcess = session.putAttribute(fileToProcess, 
CoreAttributes.MIME_TYPE.key(), JdbcCommon.MIME_TYPE_AVRO_BINARY);
+final ResultSet resultSet = st.getResultSet();
+final JdbcCommon.AvroConversionOptions options 
= JdbcCommon.AvroConversionOptions.builder()
+.convertNames(convertNamesForAvro)
+.useLogicalTypes(useAvroLogicalTypes)
+.defaultPrecision(defaultPrecision)
+.defaultScale(defaultScale)
+.build();
+
nrOfRows.set(JdbcCommon.convertToAvroStream(resultSet, out, options, null));
+} catch (final SQLException e) {
+throw new ProcessException(e);
+}
+}
+});
 
-logger.info("{} contains {} Avro records; transferring to 
'success'",
-new Object[]{fileToProcess, nrOfRows.get()});
-session.getProvenanceReporter().modifyContent(fileToProcess, 
"Retrieved " + nrOfRows.get() + " rows", duration);
-session.transfer(fileToProcess, REL_SUCCESS);
+long duration = 
stopWatch.getElapsed(TimeUnit.MILLISECONDS);
  

[GitHub] nifi pull request #1471: NIFI-3432 Handle Multiple Result Sets in ExecuteSQL

2017-10-04 Thread mattyb149
Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1471#discussion_r142700529
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/ExecuteSQL.java
 ---
@@ -196,44 +196,80 @@ public void process(InputStream in) throws 
IOException {
 selectQuery = queryContents.toString();
 }
 
+int resultCount=0;
 try (final Connection con = dbcpService.getConnection();
 final Statement st = con.createStatement()) {
 st.setQueryTimeout(queryTimeout); // timeout in seconds
-final AtomicLong nrOfRows = new AtomicLong(0L);
-if (fileToProcess == null) {
-fileToProcess = session.create();
-}
-fileToProcess = session.write(fileToProcess, new 
OutputStreamCallback() {
-@Override
-public void process(final OutputStream out) throws 
IOException {
-try {
-logger.debug("Executing query {}", new 
Object[]{selectQuery});
-final ResultSet resultSet = 
st.executeQuery(selectQuery);
-final JdbcCommon.AvroConversionOptions options = 
JdbcCommon.AvroConversionOptions.builder()
-.convertNames(convertNamesForAvro)
-.useLogicalTypes(useAvroLogicalTypes)
-.defaultPrecision(defaultPrecision)
-.defaultScale(defaultScale)
-.build();
-
nrOfRows.set(JdbcCommon.convertToAvroStream(resultSet, out, options, null));
-} catch (final SQLException e) {
-throw new ProcessException(e);
-}
+
+logger.debug("Executing query {}", new Object[]{selectQuery});
+boolean results = st.execute(selectQuery);
+
+
+while(results){
+FlowFile resultSetFF;
+if(fileToProcess == null){
+resultSetFF = session.create();
+} else {
+resultSetFF = session.create(fileToProcess);
+resultSetFF = session.putAllAttributes(resultSetFF, 
fileToProcess.getAttributes());
 }
-});
 
-long duration = stopWatch.getElapsed(TimeUnit.MILLISECONDS);
+final AtomicLong nrOfRows = new AtomicLong(0L);
+resultSetFF = session.write(resultSetFF, new 
OutputStreamCallback() {
+@Override
+public void process(final OutputStream out) throws 
IOException {
+try {
 
-// set attribute how many rows were selected
-fileToProcess = session.putAttribute(fileToProcess, 
RESULT_ROW_COUNT, String.valueOf(nrOfRows.get()));
-fileToProcess = session.putAttribute(fileToProcess, 
RESULT_QUERY_DURATION, String.valueOf(duration));
-fileToProcess = session.putAttribute(fileToProcess, 
CoreAttributes.MIME_TYPE.key(), JdbcCommon.MIME_TYPE_AVRO_BINARY);
+final ResultSet resultSet = st.getResultSet();
+final JdbcCommon.AvroConversionOptions options 
= JdbcCommon.AvroConversionOptions.builder()
+.convertNames(convertNamesForAvro)
+.useLogicalTypes(useAvroLogicalTypes)
+.defaultPrecision(defaultPrecision)
+.defaultScale(defaultScale)
+.build();
+
nrOfRows.set(JdbcCommon.convertToAvroStream(resultSet, out, options, null));
+} catch (final SQLException e) {
+throw new ProcessException(e);
+}
+}
+});
 
-logger.info("{} contains {} Avro records; transferring to 
'success'",
-new Object[]{fileToProcess, nrOfRows.get()});
-session.getProvenanceReporter().modifyContent(fileToProcess, 
"Retrieved " + nrOfRows.get() + " rows", duration);
-session.transfer(fileToProcess, REL_SUCCESS);
+long duration = 
stopWatch.getElapsed(TimeUnit.MILLISECONDS);
+
+// set attribute how many rows were selected
+resultSetFF = session.putAttribute(resultSetFF, 
RESULT_ROW_COUNT, String.valueOf(nrOfRows.get()));
+resultSetFF = 

[jira] [Commented] (MINIFICPP-39) Create FocusArchive processor

2017-10-04 Thread Andrew Christianson (JIRA)

[ 
https://issues.apache.org/jira/browse/MINIFICPP-39?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16191353#comment-16191353
 ] 

Andrew Christianson commented on MINIFICPP-39:
--

[~calebj],

Sorry for the delay -- I've been out with a brand new baby.

I do not have any WIP unit tests for this, but I would be happy to answer any 
questions about the code. There is a workable unit test framework in minifi-cpp 
now. I would take a look at PutFileTests for an example processor unit test.

> Create FocusArchive processor
> -
>
> Key: MINIFICPP-39
> URL: https://issues.apache.org/jira/browse/MINIFICPP-39
> Project: NiFi MiNiFi C++
>  Issue Type: Task
>Reporter: Andrew Christianson
>Assignee: Andrew Christianson
>Priority: Minor
>
> Create an FocusArchive processor which implements a lens over an archive 
> (tar, etc.). A concise, though informal, definition of a lens is as follows:
> "Essentially, they represent the act of “peering into” or “focusing in on” 
> some particular piece/path of a complex data object such that you can more 
> precisely target particular operations without losing the context or 
> structure of the overall data you’re working with." 
> https://medium.com/@dtipson/functional-lenses-d1aba9e52254#.hdgsvbraq
> Why an FocusArchive in MiNiFi? Simply put, it will enable us to "focus in on" 
> an entry in the archive, perform processing *in-context* of that entry, then 
> re-focus on the overall archive. This allows for transformation or other 
> processing of an entry in the archive without losing the overall context of 
> the archive.
> Initial format support is tar, due to its simplicity and ubiquity.
> Attributes:
> - Path (the path in the archive to focus; "/" to re-focus the overall archive)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi issue #2193: NIFI-4461: When reading from socket channel use blocking m...

2017-10-04 Thread bbende
Github user bbende commented on the issue:

https://github.com/apache/nifi/pull/2193
  
Reviewing...


---


[jira] [Commented] (NIFI-4461) DistributedMapCacheClient/Server are inefficient when waiting for data to be received from socket

2017-10-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16191334#comment-16191334
 ] 

ASF GitHub Bot commented on NIFI-4461:
--

Github user bbende commented on the issue:

https://github.com/apache/nifi/pull/2193
  
Reviewing...


> DistributedMapCacheClient/Server are inefficient when waiting for data to be 
> received from socket
> -
>
> Key: NIFI-4461
> URL: https://issues.apache.org/jira/browse/NIFI-4461
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Mark Payne
>Assignee: Mark Payne
> Fix For: 1.5.0
>
>
> The DistributedMapCacheClient and DistributedMapCacheServer use the  
> SocketChannelInputStream and SSLSocketChannelInputStream (and output streams) 
> for communicating over a socket in non-blocking mode. This is done to allow a 
> timeout to occur on a socket write. However, when reading from the socket, it 
> ends up calling Thread.sleep(10, TimeUnit.MILLISECONDS) when there is no data 
> available, and that can result in extremely slow performance. Instead, we 
> should use blocking mode when receiving data because it will throw a 
> timeout exception as we desire. When writing, we should continue using 
> non-blocking mode but sleep for far less than 10 milliseconds.
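
A minimal sketch of the receive-side idea (an assumed illustration, not the PR's actual change): with a plain blocking socket and SO_TIMEOUT set, a read either returns data or throws SocketTimeoutException, with no sleep loop.

{code}
import java.io.InputStream;
import java.net.InetSocketAddress;
import java.net.Socket;

public class BlockingReadDemo {
    public static void main(String[] args) throws Exception {
        // Hypothetical host/port, used here purely as an example.
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress("localhost", 4557), 5_000);
            socket.setSoTimeout(30_000);           // read timeout in milliseconds
            InputStream in = socket.getInputStream();
            int b = in.read();                     // blocks until data or timeout
            System.out.println("first byte: " + b);
        }
    }
}
{code}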



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi-minifi-cpp pull request #141: MINIFICPP-215: Make libcurl containing cl...

2017-10-04 Thread phrocker
Github user phrocker commented on a diff in the pull request:

https://github.com/apache/nifi-minifi-cpp/pull/141#discussion_r142679677
  
--- Diff: cmake/BuildTests.cmake ---
@@ -88,7 +98,19 @@ FOREACH(testfile ${INTEGRATION_TESTS})
 ENDFOREACH()
 message("-- Finished building ${INT_TEST_COUNT} integration test 
file(s)...")
 
-add_test(NAME ControllerServiceIntegrationTests COMMAND 
ControllerServiceIntegrationTests 
"${TEST_RESOURCES}/TestControllerServices.yml" "${TEST_RESOURCES}/")
+if (HTTP-CURL)
+
+SET(CURL_INT_TEST_COUNT 0)
+FOREACH(testfile ${CURL_INTEGRATION_TESTS})
+   get_filename_component(testfilename "${testfile}" NAME_WE)
+   add_executable("${testfilename}" "${TEST_DIR}/curl-tests/${testfile}" 
${SPD_SOURCES} "${TEST_DIR}/TestBase.cpp")
+   createTests("${testfilename}")
+   #message("Adding ${testfilename} from ${testfile}")
--- End diff --

Not sure why I left this here. Should remove. 


---


[jira] [Commented] (MINIFICPP-215) Create Conditional build, using libcURL as a first example

2017-10-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/MINIFICPP-215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16191295#comment-16191295
 ] 

ASF GitHub Bot commented on MINIFICPP-215:
--

Github user phrocker commented on a diff in the pull request:

https://github.com/apache/nifi-minifi-cpp/pull/141#discussion_r142679677
  
--- Diff: cmake/BuildTests.cmake ---
@@ -88,7 +98,19 @@ FOREACH(testfile ${INTEGRATION_TESTS})
 ENDFOREACH()
 message("-- Finished building ${INT_TEST_COUNT} integration test 
file(s)...")
 
-add_test(NAME ControllerServiceIntegrationTests COMMAND 
ControllerServiceIntegrationTests 
"${TEST_RESOURCES}/TestControllerServices.yml" "${TEST_RESOURCES}/")
+if (HTTP-CURL)
+
+SET(CURL_INT_TEST_COUNT 0)
+FOREACH(testfile ${CURL_INTEGRATION_TESTS})
+   get_filename_component(testfilename "${testfile}" NAME_WE)
+   add_executable("${testfilename}" "${TEST_DIR}/curl-tests/${testfile}" 
${SPD_SOURCES} "${TEST_DIR}/TestBase.cpp")
+   createTests("${testfilename}")
+   #message("Adding ${testfilename} from ${testfile}")
--- End diff --

Not sure why I left this here. Should remove. 


> Create Conditional build, using libcURL as a first example
> --
>
> Key: MINIFICPP-215
> URL: https://issues.apache.org/jira/browse/MINIFICPP-215
> Project: NiFi MiNiFi C++
>  Issue Type: Bug
>Reporter: marco polo
>Assignee: marco polo
>
> Create a conditional build based on what dependencies exist. The first step 
> that I've taken is to extract CURL capabilities into an extension directory. 
> It will be built IFF libcurl exists. That can be the basis for conditional 
> builds. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-4431) Support multiple login/SSO configurations concurrently

2017-10-04 Thread Matt Gilman (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16191218#comment-16191218
 ] 

Matt Gilman commented on NIFI-4431:
---

[~patricker] The spirit of this ticket is indeed the mechanism that navigates 
the user away from the canvas to log in (whether to the native login page or to 
an SSO).

The behavior you've described is your browser remembering your preferences for 
that page. You should be able to reset those using your browser's 'Clear 
Browsing Data' dialog.

> Support multiple login/SSO configurations concurrently
> --
>
> Key: NIFI-4431
> URL: https://issues.apache.org/jira/browse/NIFI-4431
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework, Core UI
>Reporter: Matt Gilman
>Priority: Minor
>
> Add support for configuring multiple login/SSO providers concurrently. We 
> currently only allow a single option which causes the user to be 
> automatically redirected to the corresponding login page. We should allow the 
> user to initiate which login sequence they would prefer. Additionally, this 
> would give us the opportunity to relay any error messages when an earlier 
> login attempt fails.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-4343) SiteToSite reporting tasks should support multi RPG urls

2017-10-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16191180#comment-16191180
 ] 

ASF GitHub Bot commented on NIFI-4343:
--

Github user ijokarumawak commented on the issue:

https://github.com/apache/nifi/pull/2121
  
@pvillard31 I looked at the change briefly. I think we also need to change 
to use `new SiteToSiteClient.Builder().urls()` instead of `url()` here:

https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-site-to-site-reporting-bundle/nifi-site-to-site-reporting-task/src/main/java/org/apache/nifi/reporting/AbstractSiteToSiteReportingTask.java#L193
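
For clarity, a minimal sketch of the suggested change (illustrative URLs and port name; `urls()` takes the full set of cluster URLs where `url()` took a single one):

{code}
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

import org.apache.nifi.remote.client.SiteToSiteClient;

public class MultiUrlClientDemo {
    public static void main(String[] args) {
        Set<String> clusterUrls = new HashSet<>(Arrays.asList(
                "http://nifi-node1:8080/nifi",     // illustrative URLs
                "http://nifi-node2:8080/nifi"));

        SiteToSiteClient client = new SiteToSiteClient.Builder()
                .urls(clusterUrls)                 // was: .url(singleUrl)
                .portName("input-port")            // illustrative port name
                .build();
    }
}
{code}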


> SiteToSite reporting tasks should support multi RPG urls
> 
>
> Key: NIFI-4343
> URL: https://issues.apache.org/jira/browse/NIFI-4343
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Minor
>
> With NIFI-3026 multiple URLs are allowed for the Site To Site initial 
> connection. This should be reflected in Site To Site reporting tasks as well. 
> Currently the property validator prevents the use of multiple URLs.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi issue #2121: NIFI-4343 - allow multiple URLs in SiteToSite reporting ta...

2017-10-04 Thread ijokarumawak
Github user ijokarumawak commented on the issue:

https://github.com/apache/nifi/pull/2121
  
@pvillard31 I looked at the change briefly. I think we also need to change 
to use `new SiteToSiteClient.Builder().urls()` instead of `url()` here:

https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-site-to-site-reporting-bundle/nifi-site-to-site-reporting-task/src/main/java/org/apache/nifi/reporting/AbstractSiteToSiteReportingTask.java#L193


---


[jira] [Assigned] (NIFI-4134) Introduce checks to prevent Admin from removing their own permissions

2017-10-04 Thread Eric Ulicny (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Ulicny reassigned NIFI-4134:
-

Assignee: Eric Ulicny

> Introduce checks to prevent Admin from removing their own permissions
> -
>
> Key: NIFI-4134
> URL: https://issues.apache.org/jira/browse/NIFI-4134
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework, Core UI
>Reporter: Matt Gilman
>Assignee: Eric Ulicny
>Priority: Minor
>
> Install checks and update UI to prevent an administrator from removing their 
> own permissions to the admin policies (access all policies -> Read/Write). If 
> they are the only admin and this action is performed, it would require the 
> user to delete the existing authorizations or hand edit the authorizations to 
> restore access.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-4457) "Maximum-value" not increasing when "initial.maxvalue" is set and "Maximum-value column" name is different from "id"

2017-10-04 Thread Koji Kawamura (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16190937#comment-16190937
 ] 

Koji Kawamura commented on NIFI-4457:
-

[~mehrdad22] I just wonder how those rows were inserted into the original 
table. Using the Twitter API? Does 'tweet_id' guarantee ordering when you 
insert those rows into the original table? When I sorted the data set in the 
attached 'subquery.csv', I found the following rows. Looking at these rows, I'm 
not sure it's a good idea to use 'tweet_id' as the 'Maximum-value column' for 
QueryDatabaseTable, especially if the 'date' column represents when a row is 
inserted:

{code}
915459071205093382,10/4/2017 9:41:46 AM
915459072178163714,10/4/2017 9:41:01 AM
915459072908038144,10/4/2017 9:41:46 AM
915459073721630720,10/4/2017 9:41:01 AM
915459075462311936,10/4/2017 9:41:46 AM
915459076183687168,10/4/2017 9:41:01 AM
{code}

'tweet_id' is a unique identifier and has a timestamp embedded in it, but I'm 
not sure it is in a globally consistent order across this data set, because it 
also has a worker number embedded in it.
https://developer.twitter.com/en/docs/basics/twitter-ids
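
For illustration, a rough decomposition of a snowflake-style ID (assuming the documented layout: 41-bit millisecond timestamp, 10-bit worker number, 12-bit sequence), which is why IDs generated on different workers need not sort in global timestamp order:

{code}
public class SnowflakeDemo {
    static final long TWITTER_EPOCH_MS = 1288834974657L; // per the Twitter docs

    public static void main(String[] args) {
        long id = 915459071205093382L;            // a tweet_id from subquery.csv
        long timestampMs = (id >> 22) + TWITTER_EPOCH_MS;
        long workerId = (id >> 12) & 0x3FF;       // 10-bit worker number
        long sequence = id & 0xFFF;               // 12-bit per-worker counter
        System.out.printf("ts=%d worker=%d seq=%d%n", timestampMs, workerId, sequence);
    }
}
{code}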

> "Maximum-value" not increasing when "initial.maxvalue" is set and 
> "Maximum-value column" name is different from "id" 
> -
>
> Key: NIFI-4457
> URL: https://issues.apache.org/jira/browse/NIFI-4457
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.3.0
> Environment: windows 10
>Reporter: meh
> Attachments: Picture1.png, Picture2.png, subquery.csv
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> when "Maximum-value column" name is "id" there is no problem, when i add 
> "initial.maxvalue.id" property in "QueryDatabaseTable" processor, it works 
> well and maxvalue is increasing by every running.
> !Picture1.png|thumbnail!
> but...
> when the "Maximum-value column" name is different from "id" (such as 
> "tweet_id"), after initial processor working, only given 
> "initial.maxvalue.id" is saves and that repeating just same value for every 
> run.
> !Picture2.png|thumbnail!



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-4441) Add MapRecord support inside avro union types

2017-10-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16190911#comment-16190911
 ] 

ASF GitHub Bot commented on NIFI-4441:
--

Github user frett27 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2184#discussion_r142597913
  
--- Diff: 
nifi-nar-bundles/nifi-extension-utils/nifi-record-utils/nifi-avro-record-utils/src/test/java/org/apache/nifi/avro/TestAvroTypeUtil.java
 ---
@@ -238,5 +242,23 @@ public void testComplicatedRecursiveSchema() {
// Make sure the 'parent' field has a schema reference back to the original top level record schema
Assert.assertEquals(recordASchema, ((RecordDataType)recordBParentField.get().getDataType()).getChildSchema());
 }
+
+@Test
+public void testMapWithNullSchema() throws IOException {
+
+Schema recursiveSchema = new Schema.Parser().parse(
+"{\"type\":\"record\",\"name\":\"OSMEntity\",\"namespace\":\"org.osm.avro\",\"fields\":[{\"name\":\"osmtype\",\"type\":{\"type\":\"enum\",\"name\":\"OSMType\",\"symbols\":[\"NODE\",\"WAY\",\"POLYGON\",\"RELATION\"]}},{\"name\":\"id\",\"type\":\"long\"},{\"name\":\"node\",\"type\":[\"null\",{\"type\":\"record\",\"name\":\"ANode\",\"fields\":[{\"name\":\"id\",\"type\":\"long\"},{\"name\":\"x\",\"type\":\"double\"},{\"name\":\"y\",\"type\":\"double\"},{\"name\":\"fields\",\"type\":[\"null\",{\"type\":\"map\",\"values\":{\"type\":\"string\",\"avro.java.string\":\"String\"},\"avro.java.string\":\"String\"}]}]}]},{\"name\":\"way\",\"type\":[\"null\",{\"type\":\"record\",\"name\":\"AComplex\",\"fields\":[{\"name\":\"id\",\"type\":\"long\"},{\"name\":\"geometry\",\"type\":\"bytes\"},{\"name\":\"fields\",\"type\":[\"null\",{\"type\":\"map\",\"values\":{\"type\":\"string\",\"avro.java.string\":\"String\"},\"avro.java.string\":\"String\"}]}]}]},{\"name\":\"polygon\",\"type\":[\"null\",\"AComplex\"]},{\"name\":\"rel\",\"type\":[\"null\",{\"type\":\"record\",\"name\":\"ARelation\",\"fields\":[{\"name\":\"id\",\"type\":\"long\"},{\"name\":\"fields\",\"type\":[\"null\",{\"type\":\"map\",\"values\":{\"type\":\"string\",\"avro.java.string\":\"String\"},\"avro.java.string\":\"String\"}]},{\"name\":\"related\",\"type\":[\"null\",{\"type\":\"array\",\"items\":{\"type\":\"record\",\"name\":\"ARelated\",\"fields\":[{\"name\":\"relatedId\",\"type\":\"long\"},{\"name\":\"type\",\"type\":{\"type\":\"string\",\"avro.java.string\":\"String\"}},{\"name\":\"role\",\"type\":{\"type\":\"string\",\"avro.java.string\":\"String\"}}]}}]}]}]}]}"
--- End diff --

OK


> Add MapRecord support inside avro union types
> -
>
> Key: NIFI-4441
> URL: https://issues.apache.org/jira/browse/NIFI-4441
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.4.0
>Reporter: Patrice Freydiere
>
> Using an Avro union type that contains maps in the definition leads to errors 
> when loading Avro records.
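
A much smaller union-with-map schema showing the shape in question (illustrative only; the test above exercises a full OSM-derived schema):

{code}
import org.apache.avro.Schema;

public class UnionMapDemo {
    public static void main(String[] args) {
        // A record field whose type is a union of null and map<string>.
        Schema schema = new Schema.Parser().parse(
                "{\"type\":\"record\",\"name\":\"R\",\"fields\":[" +
                "{\"name\":\"fields\",\"type\":[\"null\"," +
                "{\"type\":\"map\",\"values\":\"string\"}]}]}");
        // Prints the union's branches: [null, map<string>]
        System.out.println(schema.getField("fields").schema().getTypes());
    }
}
{code}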



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi pull request #2184: NIFI-4441 : add maprecord support for avro union ty...

2017-10-04 Thread frett27
Github user frett27 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2184#discussion_r142597985
  
--- Diff: 
nifi-nar-bundles/nifi-extension-utils/nifi-record-utils/nifi-avro-record-utils/src/main/java/org/apache/nifi/avro/AvroTypeUtil.java
 ---
@@ -691,6 +697,10 @@ private static boolean isCompatibleDataType(final Object value, final DataType d
 return true;
 }
 break;
+case MAP:
+if (value instanceof MapRecord) {
--- End diff --

OK, I'll take a dive into this.


---


[jira] [Commented] (NIFI-4441) Add MapRecord support inside avro union types

2017-10-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16190913#comment-16190913
 ] 

ASF GitHub Bot commented on NIFI-4441:
--

Github user frett27 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2184#discussion_r142597985
  
--- Diff: 
nifi-nar-bundles/nifi-extension-utils/nifi-record-utils/nifi-avro-record-utils/src/main/java/org/apache/nifi/avro/AvroTypeUtil.java
 ---
@@ -691,6 +697,10 @@ private static boolean isCompatibleDataType(final Object value, final DataType d
 return true;
 }
 break;
+case MAP:
+if (value instanceof MapRecord) {
--- End diff --

OK, I'll take a dive into this.


> Add MapRecord support inside avro union types
> -
>
> Key: NIFI-4441
> URL: https://issues.apache.org/jira/browse/NIFI-4441
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.4.0
>Reporter: Patrice Freydiere
>
> Using an Avro union type that contains maps in the definition leads to errors 
> when loading Avro records.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi pull request #2184: NIFI-4441 : add maprecord support for avro union ty...

2017-10-04 Thread frett27
Github user frett27 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2184#discussion_r142597913
  
--- Diff: 
nifi-nar-bundles/nifi-extension-utils/nifi-record-utils/nifi-avro-record-utils/src/test/java/org/apache/nifi/avro/TestAvroTypeUtil.java
 ---
@@ -238,5 +242,23 @@ public void testComplicatedRecursiveSchema() {
// Make sure the 'parent' field has a schema reference back to the original top level record schema
Assert.assertEquals(recordASchema, ((RecordDataType)recordBParentField.get().getDataType()).getChildSchema());
 }
+
+@Test
+public void testMapWithNullSchema() throws IOException {
+
+Schema recursiveSchema = new Schema.Parser().parse(
+"{\"type\":\"record\",\"name\":\"OSMEntity\",\"namespace\":\"org.osm.avro\",\"fields\":[{\"name\":\"osmtype\",\"type\":{\"type\":\"enum\",\"name\":\"OSMType\",\"symbols\":[\"NODE\",\"WAY\",\"POLYGON\",\"RELATION\"]}},{\"name\":\"id\",\"type\":\"long\"},{\"name\":\"node\",\"type\":[\"null\",{\"type\":\"record\",\"name\":\"ANode\",\"fields\":[{\"name\":\"id\",\"type\":\"long\"},{\"name\":\"x\",\"type\":\"double\"},{\"name\":\"y\",\"type\":\"double\"},{\"name\":\"fields\",\"type\":[\"null\",{\"type\":\"map\",\"values\":{\"type\":\"string\",\"avro.java.string\":\"String\"},\"avro.java.string\":\"String\"}]}]}]},{\"name\":\"way\",\"type\":[\"null\",{\"type\":\"record\",\"name\":\"AComplex\",\"fields\":[{\"name\":\"id\",\"type\":\"long\"},{\"name\":\"geometry\",\"type\":\"bytes\"},{\"name\":\"fields\",\"type\":[\"null\",{\"type\":\"map\",\"values\":{\"type\":\"string\",\"avro.java.string\":\"String\"},\"avro.java.string\":\"String\"}]}]}]},{\"name\":\"polygon\",\"type\":[\"null\",\"AComplex\"]},{\"name\":\"rel\",\"type\":[\"null\",{\"type\":\"record\",\"name\":\"ARelation\",\"fields\":[{\"name\":\"id\",\"type\":\"long\"},{\"name\":\"fields\",\"type\":[\"null\",{\"type\":\"map\",\"values\":{\"type\":\"string\",\"avro.java.string\":\"String\"},\"avro.java.string\":\"String\"}]},{\"name\":\"related\",\"type\":[\"null\",{\"type\":\"array\",\"items\":{\"type\":\"record\",\"name\":\"ARelated\",\"fields\":[{\"name\":\"relatedId\",\"type\":\"long\"},{\"name\":\"type\",\"type\":{\"type\":\"string\",\"avro.java.string\":\"String\"}},{\"name\":\"role\",\"type\":{\"type\":\"string\",\"avro.java.string\":\"String\"}}]}}]}]}]}]}"
--- End diff --

OK


---


[jira] [Commented] (NIFI-4441) Add MapRecord support inside avro union types

2017-10-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16190910#comment-16190910
 ] 

ASF GitHub Bot commented on NIFI-4441:
--

Github user frett27 commented on the issue:

https://github.com/apache/nifi/pull/2184
  
Hi markap14, the schema definition is mine, written for the tests on top of the 
OSM definition, so I don't think there is any associated license issue. And 
yes, it would be cleaner to move it into a separate file for easier reading.



> Add MapRecord support inside avro union types
> -
>
> Key: NIFI-4441
> URL: https://issues.apache.org/jira/browse/NIFI-4441
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.4.0
>Reporter: Patrice Freydiere
>
> Using an Avro union type that contains maps in the definition leads to errors 
> when loading Avro records.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi issue #2184: NIFI-4441 : add maprecord support for avro union types

2017-10-04 Thread frett27
Github user frett27 commented on the issue:

https://github.com/apache/nifi/pull/2184
  
Hi markap14, the schema definition is mine, written for the tests on top of the 
OSM definition, so I don't think there is any associated license issue. And 
yes, it would be cleaner to move it into a separate file for easier reading.



---


[jira] [Commented] (NIFI-4457) "Maximum-value" not increasing when "initial.maxvalue" is set and "Maximum-value column" name is different from "id"

2017-10-04 Thread Peter Wicks (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16190890#comment-16190890
 ] 

Peter Wicks commented on NIFI-4457:
---

[~meh] What is the data type of tweet_id? Can you post a copy of your table 
schema?
In the past, when my table schemas and Avro haven't mixed well, I've gotten 
fairly clear error messages.

> "Maximum-value" not increasing when "initial.maxvalue" is set and 
> "Maximum-value column" name is different from "id" 
> -
>
> Key: NIFI-4457
> URL: https://issues.apache.org/jira/browse/NIFI-4457
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.3.0
> Environment: windows 10
>Reporter: meh
> Attachments: Picture1.png, Picture2.png, subquery.csv
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> when "Maximum-value column" name is "id" there is no problem, when i add 
> "initial.maxvalue.id" property in "QueryDatabaseTable" processor, it works 
> well and maxvalue is increasing by every running.
> !Picture1.png|thumbnail!
> but...
> when the "Maximum-value column" name is different from "id" (such as 
> "tweet_id"), after initial processor working, only given 
> "initial.maxvalue.id" is saves and that repeating just same value for every 
> run.
> !Picture2.png|thumbnail!



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-4457) "Maximum-value" not increasing when "initial.maxvalue" is set and "Maximum-value column" name is different from "id"

2017-10-04 Thread meh (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16190876#comment-16190876
 ] 

meh commented on NIFI-4457:
---

[~patricker] Hi Peter.
No, that is not already the maximum value in the table; new records are being 
inserted at the same time, so the maximum value keeps increasing.

Try this dataset: [^subquery.csv]

I connect to a MariaDB database using JDBC and query this table to return 
records greater than X via initial.maxvalue.tweet_id. On first use this works 
well, but I need the processor scheduled, and on every subsequent run the 
problem occurs: the max value in View State does not change.
I use this processor on other tables without any problem, but this one is 
different!

Could it be related to "Use Avro Logical Types", "Default Decimal Precision", 
or "Default Decimal Scale", because of the length of tweet_id?

> "Maximum-value" not increasing when "initial.maxvalue" is set and 
> "Maximum-value column" name is different from "id" 
> -
>
> Key: NIFI-4457
> URL: https://issues.apache.org/jira/browse/NIFI-4457
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.3.0
> Environment: windows 10
>Reporter: meh
> Attachments: Picture1.png, Picture2.png, subquery.csv
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> when "Maximum-value column" name is "id" there is no problem, when i add 
> "initial.maxvalue.id" property in "QueryDatabaseTable" processor, it works 
> well and maxvalue is increasing by every running.
> !Picture1.png|thumbnail!
> but...
> when the "Maximum-value column" name is different from "id" (such as 
> "tweet_id"), after initial processor working, only given 
> "initial.maxvalue.id" is saves and that repeating just same value for every 
> run.
> !Picture2.png|thumbnail!



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (NIFI-4457) "Maximum-value" not increasing when "initial.maxvalue" is set and "Maximum-value column" name is different from "id"

2017-10-04 Thread meh (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

meh updated NIFI-4457:
--
Attachment: subquery.csv

> "Maximum-value" not increasing when "initial.maxvalue" is set and 
> "Maximum-value column" name is different from "id" 
> -
>
> Key: NIFI-4457
> URL: https://issues.apache.org/jira/browse/NIFI-4457
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.3.0
> Environment: windows 10
>Reporter: meh
> Attachments: Picture1.png, Picture2.png, subquery.csv
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> when "Maximum-value column" name is "id" there is no problem, when i add 
> "initial.maxvalue.id" property in "QueryDatabaseTable" processor, it works 
> well and maxvalue is increasing by every running.
> !Picture1.png|thumbnail!
> but...
> when the "Maximum-value column" name is different from "id" (such as 
> "tweet_id"), after initial processor working, only given 
> "initial.maxvalue.id" is saves and that repeating just same value for every 
> run.
> !Picture2.png|thumbnail!



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-3432) ExecuteSQL Should Support Multiple ResultSets

2017-10-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3432?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16190859#comment-16190859
 ] 

ASF GitHub Bot commented on NIFI-3432:
--

Github user patricker commented on the issue:

https://github.com/apache/nifi/pull/1471
  
@mattyb149 I've rebased and squashed things down. I still don't have a good 
test case; as mentioned before, stored procedures aren't so easy in Derby.

Would still like to see this get in.


> ExecuteSQL Should Support Multiple ResultSets
> -
>
> Key: NIFI-3432
> URL: https://issues.apache.org/jira/browse/NIFI-3432
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Core Framework
>Affects Versions: 1.2.0
>Reporter: Peter Wicks
>Assignee: Peter Wicks
>Priority: Minor
>
> ExecuteSQL processor only supports processing a single resultset. If a 
> query/stored procedure call returns multiple resultsets then only one is kept.
> ExecuteSQL should be updated to support handling multiple resultsets. When 
> multiple resultsets exist a flow file should be created for each resultset.
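
For reference, the bare JDBC loop behind one-flow-file-per-resultset (a sketch of the standard pattern, not the PR's actual code):

{code}
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

public class MultiResultDemo {
    static void drainResults(Connection conn, String sql) throws Exception {
        try (Statement stmt = conn.createStatement()) {
            boolean isResultSet = stmt.execute(sql);
            while (true) {
                if (isResultSet) {
                    try (ResultSet rs = stmt.getResultSet()) {
                        // emit one FlowFile per ResultSet here
                    }
                } else if (stmt.getUpdateCount() == -1) {
                    break;                        // no more results of any kind
                }
                isResultSet = stmt.getMoreResults();
            }
        }
    }
}
{code}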



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi issue #1471: NIFI-3432 Handle Multiple Result Sets in ExecuteSQL

2017-10-04 Thread patricker
Github user patricker commented on the issue:

https://github.com/apache/nifi/pull/1471
  
@mattyb149 I've rebased and squashed things down. I still don't have a good 
test case; as mentioned before, stored procedures aren't so easy in Derby.

Would still like to see this get in.


---