[jira] [Commented] (NIFI-4362) Prometheus Reporting Task

2018-11-05 Thread Sudeep Kumar Garg (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-4362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16674926#comment-16674926
 ] 

Sudeep Kumar Garg commented on NIFI-4362:
-

Hi [~dseifert], can you please help me with setting up Prometheus reporting 
for Apache NiFi? I am not able to find the NAR file that I need to place in 
the lib directory. I went through 
[https://github.com/mkjoerg/nifi-prometheus-reporter], but some of the points 
are not clear.

 

Thanks,
Sudeep

> Prometheus Reporting Task
> -
>
> Key: NIFI-4362
> URL: https://issues.apache.org/jira/browse/NIFI-4362
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: matt price
>Assignee: matt price
>Priority: Minor
>  Labels: features, newbie
>
> Right now, Datadog is one of the few external monitoring systems supported 
> by NiFi via a reporting task. We are building a Prometheus reporting task 
> that will report metrics similar to the Datadog task and processor status 
> history, and we wanted to contribute this back to the community.
> This is my first contribution to NiFi, so please correct me if I'm doing 
> something incorrectly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (NIFI-5788) Introduce batch size limit in PutDatabaseRecord processor

2018-11-05 Thread Vadim (JIRA)
Vadim created NIFI-5788:
---

 Summary: Introduce batch size limit in PutDatabaseRecord processor
 Key: NIFI-5788
 URL: https://issues.apache.org/jira/browse/NIFI-5788
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Affects Versions: 1.8.0
 Environment: Teradata DB
Reporter: Vadim
 Fix For: 1.8.0


Certain JDBC drivers do not support unlimited batch sizes in INSERT/UPDATE 
prepared SQL statements. Specifically, the Teradata JDBC driver 
([https://downloads.teradata.com/download/connectivity/jdbc-driver]) will fail 
the SQL statement when the batch overflows its internal limits.

Dividing the data into smaller chunks before PutDatabaseRecord is applied can 
work around the issue in certain scenarios, but in general this solution is not 
ideal because the SQL statements would be executed in different transaction 
contexts and data integrity would not be preserved.

The suggested solution is the following:
 * introduce a new optional property in the *PutDatabaseRecord* processor, 
*batch_size*, which defines the maximum size of the batch in an INSERT/UPDATE 
statement; its default value of -1 (unlimited) preserves the old behavior
 * divide the input into batches of the specified size and invoke 
PreparedStatement.executeBatch() for each batch
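The proposed chunking can be sketched as a small helper that splits the incoming records into batches before each PreparedStatement.executeBatch() call. This is an illustrative sketch, not the actual PutDatabaseRecord implementation; the class and method names (BatchSizeSketch, partition) are hypothetical.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the proposed batch_size behavior: records are split
// into chunks of at most batchSize before each executeBatch() call, and a
// batchSize of -1 keeps the old single-batch behavior.
public class BatchSizeSketch {

    public static <T> List<List<T>> partition(List<T> records, int batchSize) {
        List<List<T>> batches = new ArrayList<>();
        if (batchSize <= 0) {
            // -1 (unlimited): everything in one batch, preserving the old behavior
            batches.add(new ArrayList<>(records));
            return batches;
        }
        for (int start = 0; start < records.size(); start += batchSize) {
            int end = Math.min(start + batchSize, records.size());
            batches.add(new ArrayList<>(records.subList(start, end)));
        }
        return batches;
    }
}
```

Each resulting batch would then be bound to the prepared statement and flushed with executeBatch(), keeping every chunk under the driver's limit.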





[GitHub] nifi pull request #3122: NIFI-5777: Update the tag and the property of LogMe...

2018-11-05 Thread pvillard31
Github user pvillard31 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/3122#discussion_r230709356
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/test/java/org/apache/nifi/processors/standard/TestLogMessage.java
 ---
@@ -57,7 +57,7 @@ MockComponentLog getMockComponentLog() {
 public void before() throws InitializationException {
 testableLogMessage = new TestableLogMessage();
 runner = TestRunners.newTestRunner(testableLogMessage);
-
+runner.setValidateExpressionUsage(false);
--- End diff --

Yeah sounds good to me to have EL support ;)


---


[jira] [Commented] (NIFI-5777) Update the tag and the property of LogMessage

2018-11-05 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16674960#comment-16674960
 ] 

ASF GitHub Bot commented on NIFI-5777:
--

Github user pvillard31 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/3122#discussion_r230709356
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/test/java/org/apache/nifi/processors/standard/TestLogMessage.java
 ---
@@ -57,7 +57,7 @@ MockComponentLog getMockComponentLog() {
 public void before() throws InitializationException {
 testableLogMessage = new TestableLogMessage();
 runner = TestRunners.newTestRunner(testableLogMessage);
-
+runner.setValidateExpressionUsage(false);
--- End diff --

Yeah sounds good to me to have EL support ;)


> Update the tag and the property of LogMessage
> -
>
> Key: NIFI-5777
> URL: https://issues.apache.org/jira/browse/NIFI-5777
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.8.0
>Reporter: Kotaro Terada
>Assignee: Kotaro Terada
>Priority: Major
>
> There are a few points to update in {{LogMessage}}:
>  * The processor tags are a little strange. The current tags are 
> "attributes" and "logging", but "attributes" is not suitable for this 
> processor; just "logging" should be enough.
>  * The property "Log Level" should be selected from a drop-down list (as is 
> done in {{LogAttribute}}). Currently, the field is just a text box, and 
> users need to type a log level manually. If we set "expression language 
> supported" on the property, does that force the property to become a text 
> field in the Web UI?
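The drop-down suggestion boils down to constraining the property to a fixed set of allowable values, the way LogAttribute constrains its log level. A minimal, self-contained sketch of that constraint (the class name LogLevelValues is illustrative, not NiFi API):

```java
import java.util.Arrays;
import java.util.List;

// Illustrative sketch: a "Log Level" property restricted to a fixed set of
// allowable values, which is what lets a UI render a drop-down instead of
// a free-form text box.
public class LogLevelValues {

    static final List<String> ALLOWABLE =
            Arrays.asList("TRACE", "DEBUG", "INFO", "WARN", "ERROR");

    // Case-insensitive check that the typed/selected value is allowable.
    public static boolean isValid(String level) {
        return level != null && ALLOWABLE.contains(level.toUpperCase());
    }
}
```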





[jira] [Commented] (NIFI-4362) Prometheus Reporting Task

2018-11-05 Thread Daniel Seifert (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-4362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16675005#comment-16675005
 ] 

Daniel Seifert commented on NIFI-4362:
--

Hey [~sudeepgarg], I updated the setup guide in our repository: [view 
README|https://github.com/mkjoerg/nifi-prometheus-reporter/blob/master/Readme.md].

Does this fix your issues?

Regards
Daniel



[GitHub] nifi pull request #3128: NIFI-5788: Introduce batch size limit in PutDatabas...

2018-11-05 Thread vadimar
GitHub user vadimar opened a pull request:

https://github.com/apache/nifi/pull/3128

NIFI-5788: Introduce batch size limit in PutDatabaseRecord processor

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [ ] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/vadimar/nifi-1 nifi-5788

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/3128.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3128


commit 2f36c8b1a732e249238f5f6f53968e84c05b497c
Author: vadimar 
Date:   2018-11-05T11:15:12Z

NIFI-5788: Introduce batch size limit in PutDatabaseRecord processor




---




[jira] [Updated] (NIFI-5788) Introduce batch size limit in PutDatabaseRecord processor

2018-11-05 Thread Vadim (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vadim updated NIFI-5788:

Description: 
Certain JDBC drivers do not support unlimited batch sizes in INSERT/UPDATE 
prepared SQL statements. Specifically, the Teradata JDBC driver 
([https://downloads.teradata.com/download/connectivity/jdbc-driver]) will fail 
the SQL statement when the batch overflows its internal limits.

Dividing the data into smaller chunks before PutDatabaseRecord is applied can 
work around the issue in certain scenarios, but in general this solution is not 
ideal because the SQL statements would be executed in different transaction 
contexts and data integrity would not be preserved.

The suggested solution is the following:
 * introduce a new optional property in the *PutDatabaseRecord* processor, 
*batch_size*, which defines the maximum size of the batch in an INSERT/UPDATE 
statement; its default value of -1 (unlimited) preserves the old behavior
 * divide the input into batches of the specified size and invoke 
PreparedStatement.executeBatch() for each batch

Pull request: [https://github.com/apache/nifi/pull/3128]

 

  was:
Certain JDBC drivers do not support unlimited batch size in INSERT/UPDATE 
prepared SQL statements. Specifically, Teradata JDBC driver 
([https://downloads.teradata.com/download/connectivity/jdbc-driver)] would fail 
SQL statement when the batch overflows the internal limits.

Dividing data into smaller chunks before the PutDatabaseRecord is applied can 
work around the issue in certain scenarios, but generally, this solution is not 
perfect because the SQL statements would be executed in different transaction 
contexts and data integrity would not be preserved.

The solution suggests the following:
 * introduce a new optional parameter in *PutDatabaseRecord* processor, 
*batch_size* which defines the maximum size of the bulk in INSERT/UPDATE 
statement; its default value is -1 (INFINITY) preserves the old behavior
 * divide the input into batches of the specified size and invoke 
PreparedStatement.executeBatch()  for each batch


> Introduce batch size limit in PutDatabaseRecord processor
> -
>
> Key: NIFI-5788
> URL: https://issues.apache.org/jira/browse/NIFI-5788
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.8.0
> Environment: Teradata DB
>Reporter: Vadim
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.8.0
>
>
> Certain JDBC drivers do not support unlimited batch sizes in INSERT/UPDATE 
> prepared SQL statements. Specifically, the Teradata JDBC driver 
> ([https://downloads.teradata.com/download/connectivity/jdbc-driver]) will 
> fail the SQL statement when the batch overflows its internal limits.
> Dividing the data into smaller chunks before PutDatabaseRecord is applied can 
> work around the issue in certain scenarios, but in general this solution is 
> not ideal because the SQL statements would be executed in different 
> transaction contexts and data integrity would not be preserved.
> The suggested solution is the following:
>  * introduce a new optional property in the *PutDatabaseRecord* processor, 
> *batch_size*, which defines the maximum size of the batch in an INSERT/UPDATE 
> statement; its default value of -1 (unlimited) preserves the old behavior
>  * divide the input into batches of the specified size and invoke 
> PreparedStatement.executeBatch() for each batch
> Pull request: [https://github.com/apache/nifi/pull/3128]
>  





[jira] [Commented] (NIFI-4362) Prometheus Reporting Task

2018-11-05 Thread Sudeep Kumar Garg (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-4362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16675286#comment-16675286
 ] 

Sudeep Kumar Garg commented on NIFI-4362:
-

Hi [~dseifert], I am still trying to build the project by running the command 
below:

mvn clean install

By any chance do you have the NAR file ready? We can't access the internet 
from our Linux boxes, so this is a show-stopper for me.

 

Thanks,

Sudeep



[jira] [Commented] (NIFI-4362) Prometheus Reporting Task

2018-11-05 Thread Daniel Seifert (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-4362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16675313#comment-16675313
 ] 

Daniel Seifert commented on NIFI-4362:
--

Hi [~sudeepgarg], it is not possible to build the project without an internet 
connection.

I suggest checking out the project on a different machine with internet 
access and transferring the NAR file via FTP or any other method you prefer.

Regards
Daniel



[jira] [Commented] (NIFI-4362) Prometheus Reporting Task

2018-11-05 Thread Sudeep Kumar Garg (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-4362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16675329#comment-16675329
 ] 

Sudeep Kumar Garg commented on NIFI-4362:
-

Hi [~dseifert],

Thanks for the information. I am getting the error below while building the 
solution.

Can you please share the NAR file, if possible?

 

[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary for nifi-prometheus-bundle 1.7.1:
[INFO]
[INFO] nifi-prometheus-bundle ............................ SUCCESS [04:04 min]
[INFO] nifi-prometheus-reporting-task .................... FAILURE [01:09 min]
[INFO] nifi-prometheus-nar ............................... SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 10:05 min
[INFO] Finished at: 2018-11-05T20:40:24+05:30
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3..0:compile (default-compile) on project nifi-prometheus-reporting-task: Compilation failure -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR]
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR] mvn <args> -rf :nifi-prometheus-reporting-task

C:\Users\sudeepkumar\Desktop\cassandra-poc-gc-stats\nifi-prometheus-reporter-mas



[GitHub] nifi-minifi-cpp pull request #432: MINIFICPP-648 - add processor and add pro...

2018-11-05 Thread phrocker
Github user phrocker commented on a diff in the pull request:

https://github.com/apache/nifi-minifi-cpp/pull/432#discussion_r230799918
  
--- Diff: libminifi/CMakeLists.txt ---
@@ -141,11 +141,14 @@ endif()
 SET (LIBMINIFI core-minifi PARENT_SCOPE)
 
 if (ENABLE_PYTHON)
-if (NOT APPLE)
  shared
 
 add_library(core-minifi-shared SHARED ${SOURCES})
-target_link_libraries(core-minifi-shared ${CMAKE_DL_LIBS} uuid-shared 
yaml-cpp)
+if (APPLE)
--- End diff --

I think uuid-shared is already linked elsewhere, so I guess it was just 
unnecessary?


---


[jira] [Commented] (MINIFICPP-648) add processor and add processor with linkage nomenclature is confusing

2018-11-05 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/MINIFICPP-648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16675333#comment-16675333
 ] 

ASF GitHub Bot commented on MINIFICPP-648:
--

Github user phrocker commented on a diff in the pull request:

https://github.com/apache/nifi-minifi-cpp/pull/432#discussion_r230799918
  
--- Diff: libminifi/CMakeLists.txt ---
@@ -141,11 +141,14 @@ endif()
 SET (LIBMINIFI core-minifi PARENT_SCOPE)
 
 if (ENABLE_PYTHON)
-if (NOT APPLE)
  shared
 
 add_library(core-minifi-shared SHARED ${SOURCES})
-target_link_libraries(core-minifi-shared ${CMAKE_DL_LIBS} uuid-shared 
yaml-cpp)
+if (APPLE)
--- End diff --

I think uuid-shared is already linked elsewhere, so I guess it was just 
unnecessary?


> add processor and add processor with linkage nomenclature is confusing
> --
>
> Key: MINIFICPP-648
> URL: https://issues.apache.org/jira/browse/MINIFICPP-648
> Project: NiFi MiNiFi C++
>  Issue Type: Improvement
>Reporter: Mr TheSegfault
>Assignee: Arpad Boda
>Priority: Blocker
>  Labels: CAPI
> Fix For: 0.6.0
>
>
> add_processor should be changed to always add a processor with linkage, 
> since there is no compelling documentation for why the distinction exists. 
> As a result, we will need to add a create_processor function that creates a 
> processor without adding it to the flow (for certain use cases where a flow 
> isn't needed, such as invokehttp or listenhttp). This can be moved to 0.7.0 
> if we tag before the recent commits.





[GitHub] nifi-minifi-cpp pull request #432: MINIFICPP-648 - add processor and add pro...

2018-11-05 Thread arpadboda
Github user arpadboda commented on a diff in the pull request:

https://github.com/apache/nifi-minifi-cpp/pull/432#discussion_r230800455
  
--- Diff: libminifi/CMakeLists.txt ---
@@ -141,11 +141,14 @@ endif()
 SET (LIBMINIFI core-minifi PARENT_SCOPE)
 
 if (ENABLE_PYTHON)
-if (NOT APPLE)
  shared
 
 add_library(core-minifi-shared SHARED ${SOURCES})
-target_link_libraries(core-minifi-shared ${CMAKE_DL_LIBS} uuid-shared 
yaml-cpp)
+if (APPLE)
--- End diff --

Indeed, it ended up with double-definition errors, although it works fine on 
Linux.


---




[GitHub] nifi-minifi-cpp pull request #432: MINIFICPP-648 - add processor and add pro...

2018-11-05 Thread phrocker
Github user phrocker commented on a diff in the pull request:

https://github.com/apache/nifi-minifi-cpp/pull/432#discussion_r230801158
  
--- Diff: libminifi/CMakeLists.txt ---
@@ -141,11 +141,14 @@ endif()
 SET (LIBMINIFI core-minifi PARENT_SCOPE)
 
 if (ENABLE_PYTHON)
-if (NOT APPLE)
  shared
 
 add_library(core-minifi-shared SHARED ${SOURCES})
-target_link_libraries(core-minifi-shared ${CMAKE_DL_LIBS} uuid-shared 
yaml-cpp)
+if (APPLE)
--- End diff --

Hmm. Maybe we should add -DENABLE_PYTHON to our travis builds


---




[GitHub] nifi-minifi-cpp pull request #432: MINIFICPP-648 - add processor and add pro...

2018-11-05 Thread arpadboda
Github user arpadboda commented on a diff in the pull request:

https://github.com/apache/nifi-minifi-cpp/pull/432#discussion_r230801807
  
--- Diff: libminifi/CMakeLists.txt ---
@@ -141,11 +141,14 @@ endif()
 SET (LIBMINIFI core-minifi PARENT_SCOPE)
 
 if (ENABLE_PYTHON)
-if (NOT APPLE)
  shared
 
 add_library(core-minifi-shared SHARED ${SOURCES})
-target_link_libraries(core-minifi-shared ${CMAKE_DL_LIBS} uuid-shared 
yaml-cpp)
+if (APPLE)
--- End diff --

I strongly agree, but in that case Python-related tests (the ones that 
don't transport anything, just create an instance, a processor, and flow 
files) could also be added to cover that.


---




[GitHub] nifi-minifi-cpp pull request #432: MINIFICPP-648 - add processor and add pro...

2018-11-05 Thread phrocker
Github user phrocker commented on a diff in the pull request:

https://github.com/apache/nifi-minifi-cpp/pull/432#discussion_r230803774
  
--- Diff: libminifi/CMakeLists.txt ---
@@ -141,11 +141,14 @@ endif()
 SET (LIBMINIFI core-minifi PARENT_SCOPE)
 
 if (ENABLE_PYTHON)
-if (NOT APPLE)
  shared
 
 add_library(core-minifi-shared SHARED ${SOURCES})
-target_link_libraries(core-minifi-shared ${CMAKE_DL_LIBS} uuid-shared 
yaml-cpp)
+if (APPLE)
--- End diff --

Indeed -- https://issues.apache.org/jira/browse/MINIFICPP-660 already 
exists. I've preempted that in favor of CoAP, but should get back to it 
soon. Python support is still a work in progress. I do have some tests from 
that branch that cover what you're describing; I'll try to resurface them as 
soon as I get away from CoAP. 


---


[jira] [Commented] (MINIFICPP-648) add processor and add processor with linkage nomenclature is confusing

2018-11-05 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/MINIFICPP-648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16675351#comment-16675351
 ] 

ASF GitHub Bot commented on MINIFICPP-648:
--

Github user phrocker commented on a diff in the pull request:

https://github.com/apache/nifi-minifi-cpp/pull/432#discussion_r230803774
  
--- Diff: libminifi/CMakeLists.txt ---
@@ -141,11 +141,14 @@ endif()
 SET (LIBMINIFI core-minifi PARENT_SCOPE)
 
 if (ENABLE_PYTHON)
-if (NOT APPLE)
  shared
 
 add_library(core-minifi-shared SHARED ${SOURCES})
-target_link_libraries(core-minifi-shared ${CMAKE_DL_LIBS} uuid-shared 
yaml-cpp)
+if (APPLE)
--- End diff --

Indeed -- https://issues.apache.org/jira/browse/MINIFICPP-660 already 
exists. I've preempted that in favor of CoAP, but should get back to it 
soon. Python support is still a work in progress. I do have some tests from 
that branch that cover what you're describing; I'll try to resurface them as 
soon as I get away from CoAP. 


> add processor and add processor with linkage nomenclature is confusing
> --
>
> Key: MINIFICPP-648
> URL: https://issues.apache.org/jira/browse/MINIFICPP-648
> Project: NiFi MiNiFi C++
>  Issue Type: Improvement
>Reporter: Mr TheSegfault
>Assignee: Arpad Boda
>Priority: Blocker
>  Labels: CAPI
> Fix For: 0.6.0
>
>
> add_processor should be changed to always add a processor with linkage, 
> absent compelling documentation as to why the distinction exists. As a result 
> we will need to add a create_processor function that creates one without 
> adding it to the flow (certain use cases, such as invokehttp or listenhttp, 
> where a flow isn't needed). This can be moved to 0.7.0 if we tag before 
> recent commits. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi pull request #3129: [WIP] NIFI-5748 Fixed proxy header support to use X...

2018-11-05 Thread jtstorck
GitHub user jtstorck opened a pull request:

https://github.com/apache/nifi/pull/3129

[WIP] NIFI-5748 Fixed proxy header support to use X-Forwarded-Host instead 
of X-ForwardedServer

Added support for the context path header used by Traefik when proxying a 
service (X-Forwarded-Prefix)
Added tests to ApplicationResourceTest for X-Forwarded-Context and 
X-Forwarded-Prefix
Updated administration doc to include X-Forwarded-Prefix

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [ ] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/jtstorck/nifi NIFI-5748

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/3129.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3129


commit 49aecd1127132e2c4cb12639cb9d66b14dee60d0
Author: Jeff Storck 
Date:   2018-10-29T17:29:28Z

NIFI-5748 Fixed proxy header support to use X-Forwarded-Host instead of 
X-ForwardedServer
Added support for the context path header used by Traefik when proxying a 
service (X-Forwarded-Prefix)
Added tests to ApplicationResourceTest for X-Forwarded-Context and 
X-Forwarded-Prefix
Updated administration doc to include X-Forwarded-Prefix




---


[jira] [Commented] (NIFI-5748) Improve handling of X-Forwarded-* headers in URI Rewriting

2018-11-05 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16675362#comment-16675362
 ] 

ASF GitHub Bot commented on NIFI-5748:
--

GitHub user jtstorck opened a pull request:

https://github.com/apache/nifi/pull/3129

[WIP] NIFI-5748 Fixed proxy header support to use X-Forwarded-Host instead 
of X-ForwardedServer

Added support for the context path header used by Traefik when proxying a 
service (X-Forwarded-Prefix)
Added tests to ApplicationResourceTest for X-Forwarded-Context and 
X-Forwarded-Prefix
Updated administration doc to include X-Forwarded-Prefix

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [ ] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/jtstorck/nifi NIFI-5748

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/3129.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3129


commit 49aecd1127132e2c4cb12639cb9d66b14dee60d0
Author: Jeff Storck 
Date:   2018-10-29T17:29:28Z

NIFI-5748 Fixed proxy header support to use X-Forwarded-Host instead of 
X-ForwardedServer
Added support for the context path header used by Traefik when proxying a 
service (X-Forwarded-Prefix)
Added tests to ApplicationResourceTest for X-Forwarded-Context and 
X-Forwarded-Prefix
Updated administration doc to include X-Forwarded-Prefix




> Improve handling of X-Forwarded-* headers in URI Rewriting
> --
>
> Key: NIFI-5748
> URL: https://issues.apache.org/jira/browse/NIFI-5748
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Kevin Doran
>Assignee: Jeff Storck
>Priority: Major
>
> This ticket is to improve the handling of headers used by popular proxies 
> when rewriting URIs in NiFi. Currently, NiFi checks the following headers 
> when determining how to rewrite the URLs it returns to clients:
> From 
> [ApplicationResource|https://github.com/apache/nifi/blob/2201f7746fd16874aefbd12d546565f5d105ab04/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-api/src/main/java/org/apache/nifi/web/api/ApplicationResource.java#L110]:
> {code:java}
> public static final String PROXY_SCHEME_HTTP_HEADER = "X-ProxyScheme";
> public static final String PROXY_HOST_HTTP_HEADER = "X-ProxyHost";
> public static final String PROXY_PORT_HTTP_HEADER = "X-ProxyPort";
> public static final String PROXY_CONTEXT_PATH_HTTP_HEADER = "X-ProxyContextPath";
> public static final String FORWARDED_PROTO_HTTP_HEADER = "X-Forwarded-Proto";
> public static final String FORWARDED_HOST_HTTP_HEADER = "X-Forwarded-Server";
> public static final String FORWARDED_PORT_HTTP_HEADER = "X-Forwarded-Port";
> public static final String FORWARDED_CONTEXT_HTTP_HEADER = "X-Forwarded-Context";
> // ...
> final String scheme = getFirstHeaderValue(PROXY_SCHEME_HTTP_HEADER, FORWARDED_PROTO_HTTP_HEADER);
> final String host = getFirstHeaderValue(PROXY_HOST_HTTP_HEADER, FORWARDED_HOST_HTTP_HEADER);
> final String port = getFirstHeaderValue(PROXY_PORT_HTTP_HEADER, FORWARDED_PORT_HTTP_HEADER);
> {code}

[GitHub] nifi issue #3129: [WIP] NIFI-5748 Fixed proxy header support to use X-Forwar...

2018-11-05 Thread jtstorck
Github user jtstorck commented on the issue:

https://github.com/apache/nifi/pull/3129
  
I'll be adding some docker-compose content for testing this PR.


---


[jira] [Commented] (NIFI-5748) Improve handling of X-Forwarded-* headers in URI Rewriting

2018-11-05 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16675365#comment-16675365
 ] 

ASF GitHub Bot commented on NIFI-5748:
--

Github user jtstorck commented on the issue:

https://github.com/apache/nifi/pull/3129
  
I'll be adding some docker-compose content for testing this PR.


> Improve handling of X-Forwarded-* headers in URI Rewriting
> --
>
> Key: NIFI-5748
> URL: https://issues.apache.org/jira/browse/NIFI-5748
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Kevin Doran
>Assignee: Jeff Storck
>Priority: Major
>
> This ticket is to improve the handling of headers used by popular proxies 
> when rewriting URIs in NiFi. Currently, NiFi checks the following headers 
> when determining how to rewrite the URLs it returns to clients:
> From 
> [ApplicationResource|https://github.com/apache/nifi/blob/2201f7746fd16874aefbd12d546565f5d105ab04/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-api/src/main/java/org/apache/nifi/web/api/ApplicationResource.java#L110]:
> {code:java}
> public static final String PROXY_SCHEME_HTTP_HEADER = "X-ProxyScheme";
> public static final String PROXY_HOST_HTTP_HEADER = "X-ProxyHost";
> public static final String PROXY_PORT_HTTP_HEADER = "X-ProxyPort";
> public static final String PROXY_CONTEXT_PATH_HTTP_HEADER = "X-ProxyContextPath";
> public static final String FORWARDED_PROTO_HTTP_HEADER = "X-Forwarded-Proto";
> public static final String FORWARDED_HOST_HTTP_HEADER = "X-Forwarded-Server";
> public static final String FORWARDED_PORT_HTTP_HEADER = "X-Forwarded-Port";
> public static final String FORWARDED_CONTEXT_HTTP_HEADER = "X-Forwarded-Context";
> // ...
> final String scheme = getFirstHeaderValue(PROXY_SCHEME_HTTP_HEADER, FORWARDED_PROTO_HTTP_HEADER);
> final String host = getFirstHeaderValue(PROXY_HOST_HTTP_HEADER, FORWARDED_HOST_HTTP_HEADER);
> final String port = getFirstHeaderValue(PROXY_PORT_HTTP_HEADER, FORWARDED_PORT_HTTP_HEADER);
> {code}
> Based on this, it looks like if both {{X-Forwarded-Server}} and 
> {{X-Forwarded-Host}} are set, that {{-Host}} will contain the hostname the 
> user/client requested, and {{-Server}} will contain the hostname of the proxy 
> server (ie, what the proxy server is able to determine from inspecting the 
> hostname of the instance it is on). See this for more details:
> https://stackoverflow.com/questions/43689625/x-forwarded-host-vs-x-forwarded-server
> Here is a demo based on docker containers and a reverse-proxy called Traefik 
> that shows where the current NiFi logic can break:
> https://gist.github.com/kevdoran/2892004ccbfbb856115c8a756d9d4538
> To use this gist, you can run the following:
> {noformat}
> wget -qO- 
> https://gist.githubusercontent.com/kevdoran/2892004ccbfbb856115c8a756d9d4538/raw/fb72151900d4d8fdcf4919fe5c8a94805fbb8401/docker-compose.yml
>  | docker-compose -f - up
> {noformat}
> Once the environment is up, go to http://nifi.docker.localhost/nifi in Chrome 
> and try adding/configuring/moving a processor. This will reproduce the issue.
> For this task, the following improvement is recommended:
> Change the header (string literal) for FORWARDED_HOST_HTTP_HEADER from 
> "X-Forwarded-Server" to "X-Forwarded-Host".
> Additionally, some proxies use a different header for the context path 
> prefix. Traefik uses {{X-Forwarded-Prefix}}. There does not appear to be a 
> universal standard. In the future, we could make this header configurable, 
> but for now, perhaps we should add {{X-Forwarded-Prefix}} to the headers 
> checked by NiFi so that Traefik is supported out-of-the-box.
> *Important:* After making this change, verify that proxying NiFi via Knox 
> still works, i.e., Knox should be sending the X-Forwarded-Host header that 
> matches what the user requested in the browser.
> This change applies to NiFi Registry as well.
> /cc [~jtstorck]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
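The precedence logic quoted in the ticket above is easy to sketch in isolation. The following is a minimal, hypothetical stand-in (not NiFi's actual ApplicationResource implementation): a NiFi-specific X-Proxy* header is consulted first, then the standard X-Forwarded-* header, which is why switching the fallback constant from X-Forwarded-Server to X-Forwarded-Host changes which hostname ends up in rewritten URLs.

```java
import java.util.Map;

public class ProxyHeaderDemo {

    // First non-empty value wins, checking header names in the order given --
    // the same shape as getFirstHeaderValue in the snippet quoted above.
    static String getFirstHeaderValue(Map<String, String> headers, String... names) {
        for (String name : names) {
            String value = headers.get(name);
            if (value != null && !value.isEmpty()) {
                return value;
            }
        }
        return null;
    }

    public static void main(String[] args) {
        // Traefik-style request: X-Forwarded-Host is what the client asked for,
        // X-Forwarded-Server is the proxy machine's own hostname.
        Map<String, String> headers = Map.of(
                "X-Forwarded-Host", "nifi.docker.localhost",
                "X-Forwarded-Server", "proxy-internal");

        // Falling back to X-Forwarded-Host (the ticket's proposal) recovers the
        // client-requested hostname; falling back to X-Forwarded-Server would not.
        System.out.println(getFirstHeaderValue(headers, "X-ProxyHost", "X-Forwarded-Host"));
    }
}
```

With Traefik-style headers, X-Forwarded-Host carries the hostname the client requested while X-Forwarded-Server names the proxy machine, so consulting the former is what the ticket proposes.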


[jira] [Updated] (NIFI-5748) Improve handling of X-Forwarded-* headers in URI Rewriting

2018-11-05 Thread Jeff Storck (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Storck updated NIFI-5748:
--
Status: Patch Available  (was: In Progress)

> Improve handling of X-Forwarded-* headers in URI Rewriting
> --
>
> Key: NIFI-5748
> URL: https://issues.apache.org/jira/browse/NIFI-5748
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Kevin Doran
>Assignee: Jeff Storck
>Priority: Major
>
> This ticket is to improve the handling of headers used by popular proxies 
> when rewriting URIs in NiFi. Currently, NiFi checks the following headers 
> when determining how to rewrite the URLs it returns to clients:
> From 
> [ApplicationResource|https://github.com/apache/nifi/blob/2201f7746fd16874aefbd12d546565f5d105ab04/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-api/src/main/java/org/apache/nifi/web/api/ApplicationResource.java#L110]:
> {code:java}
> public static final String PROXY_SCHEME_HTTP_HEADER = "X-ProxyScheme";
> public static final String PROXY_HOST_HTTP_HEADER = "X-ProxyHost";
> public static final String PROXY_PORT_HTTP_HEADER = "X-ProxyPort";
> public static final String PROXY_CONTEXT_PATH_HTTP_HEADER = "X-ProxyContextPath";
> public static final String FORWARDED_PROTO_HTTP_HEADER = "X-Forwarded-Proto";
> public static final String FORWARDED_HOST_HTTP_HEADER = "X-Forwarded-Server";
> public static final String FORWARDED_PORT_HTTP_HEADER = "X-Forwarded-Port";
> public static final String FORWARDED_CONTEXT_HTTP_HEADER = "X-Forwarded-Context";
> // ...
> final String scheme = getFirstHeaderValue(PROXY_SCHEME_HTTP_HEADER, FORWARDED_PROTO_HTTP_HEADER);
> final String host = getFirstHeaderValue(PROXY_HOST_HTTP_HEADER, FORWARDED_HOST_HTTP_HEADER);
> final String port = getFirstHeaderValue(PROXY_PORT_HTTP_HEADER, FORWARDED_PORT_HTTP_HEADER);
> {code}
> Based on this, it looks like if both {{X-Forwarded-Server}} and 
> {{X-Forwarded-Host}} are set, that {{-Host}} will contain the hostname the 
> user/client requested, and {{-Server}} will contain the hostname of the proxy 
> server (ie, what the proxy server is able to determine from inspecting the 
> hostname of the instance it is on). See this for more details:
> https://stackoverflow.com/questions/43689625/x-forwarded-host-vs-x-forwarded-server
> Here is a demo based on docker containers and a reverse-proxy called Traefik 
> that shows where the current NiFi logic can break:
> https://gist.github.com/kevdoran/2892004ccbfbb856115c8a756d9d4538
> To use this gist, you can run the following:
> {noformat}
> wget -qO- 
> https://gist.githubusercontent.com/kevdoran/2892004ccbfbb856115c8a756d9d4538/raw/fb72151900d4d8fdcf4919fe5c8a94805fbb8401/docker-compose.yml
>  | docker-compose -f - up
> {noformat}
> Once the environment is up, go to http://nifi.docker.localhost/nifi in Chrome 
> and try adding/configuring/moving a processor. This will reproduce the issue.
> For this task, the following improvement is recommended:
> Change the header (string literal) for FORWARDED_HOST_HTTP_HEADER from 
> "X-Forwarded-Server" to "X-Forwarded-Host".
> Additionally, some proxies use a different header for the context path 
> prefix. Traefik uses {{X-Forwarded-Prefix}}. There does not appear to be a 
> universal standard. In the future, we could make this header configurable, 
> but for now, perhaps we should add {{X-Forwarded-Prefix}} to the headers 
> checked by NiFi so that Traefik is supported out-of-the-box.
> *Important:* After making this change, verify that proxying NiFi via Knox 
> still works, i.e., Knox should be sending the X-Forwarded-Host header that 
> matches what the user requested in the browser.
> This change applies to NiFi Registry as well.
> /cc [~jtstorck]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-5788) Introduce batch size limit in PutDatabaseRecord processor

2018-11-05 Thread Matt Burgess (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess updated NIFI-5788:
---
Fix Version/s: (was: 1.8.0)

> Introduce batch size limit in PutDatabaseRecord processor
> -
>
> Key: NIFI-5788
> URL: https://issues.apache.org/jira/browse/NIFI-5788
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
> Environment: Teradata DB
>Reporter: Vadim
>Priority: Major
>  Labels: pull-request-available
>
> Certain JDBC drivers do not support unlimited batch sizes in INSERT/UPDATE 
> prepared SQL statements. Specifically, the Teradata JDBC driver 
> ([https://downloads.teradata.com/download/connectivity/jdbc-driver]) fails 
> the SQL statement when the batch overflows its internal limits.
> Dividing the data into smaller chunks before PutDatabaseRecord is applied can 
> work around the issue in certain scenarios, but in general this solution is 
> imperfect because the SQL statements would be executed in different 
> transaction contexts and data integrity would not be preserved.
> The proposed solution:
>  * introduce a new optional parameter in the *PutDatabaseRecord* processor, 
> *batch_size*, which defines the maximum size of the batch in an INSERT/UPDATE 
> statement; its default value of -1 (infinity) preserves the old behavior
>  * divide the input into batches of the specified size and invoke 
> PreparedStatement.executeBatch() for each batch
> Pull request: [https://github.com/apache/nifi/pull/3128]
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
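The batching described in the ticket above reduces to flushing the prepared statement every batch_size records, plus one final flush for the partial remainder. A standalone sketch of that loop follows (a hypothetical class, not the actual PutDatabaseRecord patch; in the real processor the marked lines would be stmt.addBatch() and stmt.executeBatch()):

```java
import java.util.ArrayList;
import java.util.List;

public class BatchFlushDemo {

    // Returns the sizes of the batches that executeBatch() would be invoked
    // with, given recordCount records and a maximum batch size; a non-positive
    // batchSize means "unlimited", matching the -1 default in the proposal.
    static List<Integer> flushSizes(int recordCount, int batchSize) {
        List<Integer> flushes = new ArrayList<>();
        int pending = 0;
        for (int i = 0; i < recordCount; i++) {
            pending++; // stmt.addBatch() would happen here
            if (batchSize > 0 && pending >= batchSize) {
                flushes.add(pending); // stmt.executeBatch()
                pending = 0;
            }
        }
        if (pending > 0) {
            flushes.add(pending); // flush the final partial batch
        }
        return flushes;
    }

    public static void main(String[] args) {
        System.out.println(flushSizes(7, 3));  // [3, 3, 1]
        System.out.println(flushSizes(7, -1)); // [7] -- old unlimited behavior
    }
}
```

With batchSize = 3 and 7 records this flushes batches of 3, 3 and 1; a non-positive batchSize reproduces the old single-batch behavior, which is what overflows Teradata's internal limits on large inputs.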


[jira] [Updated] (NIFI-5788) Introduce batch size limit in PutDatabaseRecord processor

2018-11-05 Thread Matt Burgess (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess updated NIFI-5788:
---
Affects Version/s: (was: 1.8.0)

> Introduce batch size limit in PutDatabaseRecord processor
> -
>
> Key: NIFI-5788
> URL: https://issues.apache.org/jira/browse/NIFI-5788
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
> Environment: Teradata DB
>Reporter: Vadim
>Priority: Major
>  Labels: pull-request-available
>
> Certain JDBC drivers do not support unlimited batch sizes in INSERT/UPDATE 
> prepared SQL statements. Specifically, the Teradata JDBC driver 
> ([https://downloads.teradata.com/download/connectivity/jdbc-driver]) fails 
> the SQL statement when the batch overflows its internal limits.
> Dividing the data into smaller chunks before PutDatabaseRecord is applied can 
> work around the issue in certain scenarios, but in general this solution is 
> imperfect because the SQL statements would be executed in different 
> transaction contexts and data integrity would not be preserved.
> The proposed solution:
>  * introduce a new optional parameter in the *PutDatabaseRecord* processor, 
> *batch_size*, which defines the maximum size of the batch in an INSERT/UPDATE 
> statement; its default value of -1 (infinity) preserves the old behavior
>  * divide the input into batches of the specified size and invoke 
> PreparedStatement.executeBatch() for each batch
> Pull request: [https://github.com/apache/nifi/pull/3128]
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-5788) Introduce batch size limit in PutDatabaseRecord processor

2018-11-05 Thread Matt Burgess (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess updated NIFI-5788:
---
Issue Type: Improvement  (was: Bug)

> Introduce batch size limit in PutDatabaseRecord processor
> -
>
> Key: NIFI-5788
> URL: https://issues.apache.org/jira/browse/NIFI-5788
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
> Environment: Teradata DB
>Reporter: Vadim
>Priority: Major
>  Labels: pull-request-available
>
> Certain JDBC drivers do not support unlimited batch sizes in INSERT/UPDATE 
> prepared SQL statements. Specifically, the Teradata JDBC driver 
> ([https://downloads.teradata.com/download/connectivity/jdbc-driver]) fails 
> the SQL statement when the batch overflows its internal limits.
> Dividing the data into smaller chunks before PutDatabaseRecord is applied can 
> work around the issue in certain scenarios, but in general this solution is 
> imperfect because the SQL statements would be executed in different 
> transaction contexts and data integrity would not be preserved.
> The proposed solution:
>  * introduce a new optional parameter in the *PutDatabaseRecord* processor, 
> *batch_size*, which defines the maximum size of the batch in an INSERT/UPDATE 
> statement; its default value of -1 (infinity) preserves the old behavior
>  * divide the input into batches of the specified size and invoke 
> PreparedStatement.executeBatch() for each batch
> Pull request: [https://github.com/apache/nifi/pull/3128]
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-5788) Introduce batch size limit in PutDatabaseRecord processor

2018-11-05 Thread Matt Burgess (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess updated NIFI-5788:
---
Status: Patch Available  (was: Open)

> Introduce batch size limit in PutDatabaseRecord processor
> -
>
> Key: NIFI-5788
> URL: https://issues.apache.org/jira/browse/NIFI-5788
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
> Environment: Teradata DB
>Reporter: Vadim
>Priority: Major
>  Labels: pull-request-available
>
> Certain JDBC drivers do not support unlimited batch sizes in INSERT/UPDATE 
> prepared SQL statements. Specifically, the Teradata JDBC driver 
> ([https://downloads.teradata.com/download/connectivity/jdbc-driver]) fails 
> the SQL statement when the batch overflows its internal limits.
> Dividing the data into smaller chunks before PutDatabaseRecord is applied can 
> work around the issue in certain scenarios, but in general this solution is 
> imperfect because the SQL statements would be executed in different 
> transaction contexts and data integrity would not be preserved.
> The proposed solution:
>  * introduce a new optional parameter in the *PutDatabaseRecord* processor, 
> *batch_size*, which defines the maximum size of the batch in an INSERT/UPDATE 
> statement; its default value of -1 (infinity) preserves the old behavior
>  * divide the input into batches of the specified size and invoke 
> PreparedStatement.executeBatch() for each batch
> Pull request: [https://github.com/apache/nifi/pull/3128]
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi pull request #3128: NIFI-5788: Introduce batch size limit in PutDatabas...

2018-11-05 Thread mattyb149
Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/3128#discussion_r230811717
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/PutDatabaseRecord.java
 ---
@@ -265,6 +265,17 @@
 
.expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY)
 .build();
 
+static final PropertyDescriptor BATCH_SIZE = new PropertyDescriptor.Builder()
--- End diff --

We should be consistent here with "batch size" and "bulk size" in the 
naming of variables, documentation, etc. Maybe "Maximum Batch Size"?


---


[GitHub] nifi pull request #3128: NIFI-5788: Introduce batch size limit in PutDatabas...

2018-11-05 Thread mattyb149
Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/3128#discussion_r230812123
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/PutDatabaseRecord.java
 ---
@@ -265,6 +265,17 @@
 
.expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY)
 .build();
 
+static final PropertyDescriptor BATCH_SIZE = new PropertyDescriptor.Builder()
+.name("put-db-record-batch-size")
+.displayName("Bulk Size")
+.description("Specifies batch size for INSERT and UPDATE statements. This parameter has no effect for other statements specified in 'Statement Type'."
++ " Non-positive value has the effect of infinite bulk size.")
+.defaultValue("-1")
--- End diff --

What does a value of zero do? Would anyone ever use it? If not, perhaps 
zero is the best default to indicate infinite bulk size. If you do change it to 
zero, please change the validator to a NONNEGATIVE_INTEGER_VALIDATOR to match.


---


[jira] [Commented] (NIFI-5788) Introduce batch size limit in PutDatabaseRecord processor

2018-11-05 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16675387#comment-16675387
 ] 

ASF GitHub Bot commented on NIFI-5788:
--

Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/3128#discussion_r230811717
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/PutDatabaseRecord.java
 ---
@@ -265,6 +265,17 @@
 
.expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY)
 .build();
 
+static final PropertyDescriptor BATCH_SIZE = new PropertyDescriptor.Builder()
--- End diff --

We should be consistent here with "batch size" and "bulk size" in the 
naming of variables, documentation, etc. Maybe "Maximum Batch Size"?


> Introduce batch size limit in PutDatabaseRecord processor
> -
>
> Key: NIFI-5788
> URL: https://issues.apache.org/jira/browse/NIFI-5788
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
> Environment: Teradata DB
>Reporter: Vadim
>Priority: Major
>  Labels: pull-request-available
>
> Certain JDBC drivers do not support unlimited batch sizes in INSERT/UPDATE 
> prepared SQL statements. Specifically, the Teradata JDBC driver 
> ([https://downloads.teradata.com/download/connectivity/jdbc-driver]) fails 
> the SQL statement when the batch overflows its internal limits.
> Dividing the data into smaller chunks before PutDatabaseRecord is applied can 
> work around the issue in certain scenarios, but in general this solution is 
> imperfect because the SQL statements would be executed in different 
> transaction contexts and data integrity would not be preserved.
> The proposed solution:
>  * introduce a new optional parameter in the *PutDatabaseRecord* processor, 
> *batch_size*, which defines the maximum size of the batch in an INSERT/UPDATE 
> statement; its default value of -1 (infinity) preserves the old behavior
>  * divide the input into batches of the specified size and invoke 
> PreparedStatement.executeBatch() for each batch
> Pull request: [https://github.com/apache/nifi/pull/3128]
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5788) Introduce batch size limit in PutDatabaseRecord processor

2018-11-05 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16675386#comment-16675386
 ] 

ASF GitHub Bot commented on NIFI-5788:
--

Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/3128#discussion_r230812123
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/PutDatabaseRecord.java
 ---
@@ -265,6 +265,17 @@
 
.expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY)
 .build();
 
+static final PropertyDescriptor BATCH_SIZE = new PropertyDescriptor.Builder()
+.name("put-db-record-batch-size")
+.displayName("Bulk Size")
+.description("Specifies batch size for INSERT and UPDATE statements. This parameter has no effect for other statements specified in 'Statement Type'."
++ " Non-positive value has the effect of infinite bulk size.")
+.defaultValue("-1")
--- End diff --

What does a value of zero do? Would anyone ever use it? If not, perhaps 
zero is the best default to indicate infinite bulk size. If you do change it to 
zero, please change the validator to a NONNEGATIVE_INTEGER_VALIDATOR to match.


> Introduce batch size limit in PutDatabaseRecord processor
> -
>
> Key: NIFI-5788
> URL: https://issues.apache.org/jira/browse/NIFI-5788
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
> Environment: Teradata DB
>Reporter: Vadim
>Priority: Major
>  Labels: pull-request-available
>
> Certain JDBC drivers do not support unlimited batch sizes in INSERT/UPDATE 
> prepared SQL statements. Specifically, the Teradata JDBC driver 
> ([https://downloads.teradata.com/download/connectivity/jdbc-driver]) fails 
> the SQL statement when the batch overflows its internal limits.
> Dividing the data into smaller chunks before PutDatabaseRecord is applied can 
> work around the issue in certain scenarios, but in general this solution is 
> imperfect because the SQL statements would be executed in different 
> transaction contexts and data integrity would not be preserved.
> The proposed solution:
>  * introduce a new optional parameter in the *PutDatabaseRecord* processor, 
> *batch_size*, which defines the maximum size of the batch in an INSERT/UPDATE 
> statement; its default value of -1 (infinity) preserves the old behavior
>  * divide the input into batches of the specified size and invoke 
> PreparedStatement.executeBatch() for each batch
> Pull request: [https://github.com/apache/nifi/pull/3128]
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-4362) Prometheus Reporting Task

2018-11-05 Thread Daniel Seifert (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-4362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Seifert updated NIFI-4362:
-
Attachment: nifi-prometheus-nar-1.7.1.nar

> Prometheus Reporting Task
> -
>
> Key: NIFI-4362
> URL: https://issues.apache.org/jira/browse/NIFI-4362
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: matt price
>Assignee: matt price
>Priority: Minor
>  Labels: features, newbie
> Attachments: nifi-prometheus-nar-1.7.1.nar
>
>
> Right now Datadog is one of the few external monitoring systems that is 
> supported by Nifi via a reporting task.  We are building a Prometheus 
> reporting task that will report similar metrics as Datadog/processor status 
> history and wanted to contribute this back to the community.
> This is my first contribution to Nifi so please correct me if I'm doing 
> something incorrectly.





[GitHub] nifi issue #3111: NIFI-5757 AvroRecordSetWriter - Fix for slow synchronized ...

2018-11-05 Thread markap14
Github user markap14 commented on the issue:

https://github.com/apache/nifi/pull/3111
  
@arkadius thanks for compiling that list. Sorry it took so long to reply! 
Looking through the list, I do think you're right - these all appear to be the 
same pattern. I certainly didn't realize that we were making such prolific use 
of this pattern. Reading through the Caffeine docs, it probably does make sense 
to update these as well.


---


[jira] [Commented] (NIFI-5757) AvroRecordSetWriter synchronize every access to compiledAvroSchemaCache

2018-11-05 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16675407#comment-16675407
 ] 

ASF GitHub Bot commented on NIFI-5757:
--

Github user markap14 commented on the issue:

https://github.com/apache/nifi/pull/3111
  
@arkadius thanks for compiling that list. Sorry it took so long to reply! 
Looking through the list, I do think you're right - these all appear to be the 
same pattern. I certainly didn't realize that we were making such prolific use 
of this pattern. Reading through the Caffeine docs, it probably does make sense 
to update these as well.


> AvroRecordSetWriter synchronize every access to compiledAvroSchemaCache
> ---
>
> Key: NIFI-5757
> URL: https://issues.apache.org/jira/browse/NIFI-5757
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.7.1
>Reporter: Arek Burdach
>Priority: Major
>
> Avro record serialization is a quite expensive operation.
> This stack trace I very often see in thread dumps:
> {noformat}
> Thread 48583: (state = BLOCKED)
>  - 
> org.apache.nifi.avro.AvroRecordSetWriter.compileAvroSchema(java.lang.String) 
> @bci=9, line=124 (Compiled frame)
>  - 
> org.apache.nifi.avro.AvroRecordSetWriter.createWriter(org.apache.nifi.logging.ComponentLog,
>  org.apache.nifi.serialization.record.RecordSchema, java.io.OutputStream) 
> @bci=96, line=92 (Compiled frame)
>  - sun.reflect.GeneratedMethodAccessor183.invoke(java.lang.Object, 
> java.lang.Object[]) @bci=56 (Compiled frame)
>  - sun.reflect.DelegatingMethodAccessorImpl.invoke(java.lang.Object, 
> java.lang.Object[]) @bci=6, line=43 (Compiled frame)
>  - java.lang.reflect.Method.invoke(java.lang.Object, java.lang.Object[]) 
> @bci=56, line=498 (Compiled frame)
>  - 
> org.apache.nifi.controller.service.StandardControllerServiceInvocationHandler.invoke(java.lang.Object,
>  java.lang.reflect.Method, java.lang.Object[]) @bci=309, line=89 (Compiled 
> frame)
>  - com.sun.proxy.$Proxy100.createWriter(org.apache.nifi.logging.ComponentLog, 
> org.apache.nifi.serialization.record.RecordSchema, java.io.OutputStream) 
> @bci=24 (Compiled frame)
>  - 
> org.apache.nifi.processors.kafka.pubsub.PublisherLease.publish(org.apache.nifi.flowfile.FlowFile,
>  org.apache.nifi.serialization.record.RecordSet, 
> org.apache.nifi.serialization.RecordSetWriterFactory, 
> org.apache.nifi.serialization.record.RecordSchema, java.lang.String, 
> java.lang.String) @bci=71, line=169 (Compiled frame)
>  - 
> org.apache.nifi.processors.kafka.pubsub.PublishKafkaRecord_1_0$1.process(java.io.InputStream)
>  @bci=94, line=412 (Compiled frame)
> {noformat}
> The reason why it happens is because {{AvroRecordSetWriter}} synchronizing 
> every access to cache of compiled schemas.
> I've prepared PR that is fixing this issue by using {{ConcurrentHashMap}} 
> instead: https://github.com/apache/nifi/pull/3111
> It is not a perfect fix because it removes cache size limitation which BTW 
> was hardcoded to {{20}}. Services can be reusable by many flows so such a 
> hard limit is not a good choice.
> What do you think about such an improvement?
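
The fix the PR proposes, replacing synchronized access with a `ConcurrentHashMap`, hinges on `computeIfAbsent`: cache hits are lock-free, and concurrent misses for the same key compile at most once. A minimal sketch of that pattern, assuming a stand-in `compile` method in place of the expensive Avro schema parse (this is not the actual AvroRecordSetWriter code):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

public class SchemaCacheSketch {
    private final Map<String, String> cache = new ConcurrentHashMap<>();
    final AtomicInteger compilations = new AtomicInteger();

    // Stands in for the expensive "new Schema.Parser().parse(text)" call.
    private String compile(String text) {
        compilations.incrementAndGet();
        return "compiled:" + text;
    }

    // Hits never block; a miss compiles under a per-bin lock, so two threads
    // racing on the same schema text still compile it only once.
    String getOrCompile(String schemaText) {
        return cache.computeIfAbsent(schemaText, this::compile);
    }

    public static void main(String[] args) {
        SchemaCacheSketch c = new SchemaCacheSketch();
        c.getOrCompile("a");
        c.getOrCompile("a"); // hit, no recompile
        c.getOrCompile("b");
        System.out.println(c.compilations.get()); // 2
    }
}
```

As the issue notes, this map grows without bound, whereas the old synchronized cache was hard-limited to 20 entries; a bounded concurrent cache (e.g. Caffeine, as discussed above) addresses both concerns.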





[jira] [Commented] (NIFI-4362) Prometheus Reporting Task

2018-11-05 Thread Daniel Seifert (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-4362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16675408#comment-16675408
 ] 

Daniel Seifert commented on NIFI-4362:
--

Hi [~sudeepgarg],

This is very strange. I just cloned the repository in a clean setup and 
everything worked for me.
But there you go: [^nifi-prometheus-nar-1.7.1.nar] 

Regards,
Daniel

> Prometheus Reporting Task
> -
>
> Key: NIFI-4362
> URL: https://issues.apache.org/jira/browse/NIFI-4362
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: matt price
>Assignee: matt price
>Priority: Minor
>  Labels: features, newbie
> Attachments: nifi-prometheus-nar-1.7.1.nar
>
>
> Right now Datadog is one of the few external monitoring systems that is 
> supported by Nifi via a reporting task.  We are building a Prometheus 
> reporting task that will report similar metrics as Datadog/processor status 
> history and wanted to contribute this back to the community.
> This is my first contribution to Nifi so please correct me if I'm doing 
> something incorrectly.





[GitHub] nifi pull request #3125: NIFI-5677 Added note to clarify why modifying/creat...

2018-11-05 Thread andrewmlim
Github user andrewmlim commented on a diff in the pull request:

https://github.com/apache/nifi/pull/3125#discussion_r230820448
  
--- Diff: nifi-docs/src/main/asciidoc/user-guide.adoc ---
@@ -1928,7 +1928,9 @@ The following actions are not considered local 
changes:
 * modifying sensitive property values
 * modifying remote process group URLs
 * updating a processor that was referencing a non-existent controller 
service to reference an externally available controller service
-* modifying variables
+* creating or modifying variables
--- End diff --

That's a good point.  Will add that scenario.


---


[jira] [Commented] (NIFI-5677) Add/clarify why modifying/creating variables are not considered local changes in versioned flows

2018-11-05 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16675412#comment-16675412
 ] 

ASF GitHub Bot commented on NIFI-5677:
--

Github user andrewmlim commented on a diff in the pull request:

https://github.com/apache/nifi/pull/3125#discussion_r230820448
  
--- Diff: nifi-docs/src/main/asciidoc/user-guide.adoc ---
@@ -1928,7 +1928,9 @@ The following actions are not considered local 
changes:
 * modifying sensitive property values
 * modifying remote process group URLs
 * updating a processor that was referencing a non-existent controller 
service to reference an externally available controller service
-* modifying variables
+* creating or modifying variables
--- End diff --

That's a good point.  Will add that scenario.


> Add/clarify why modifying/creating variables are not considered local changes 
> in versioned flows
> 
>
> Key: NIFI-5677
> URL: https://issues.apache.org/jira/browse/NIFI-5677
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Documentation & Website
>Reporter: Andrew Lim
>Assignee: Andrew Lim
>Priority: Minor
>
> There has been some confusion over why creating or modifying variables in a 
> versioned flow do not trigger local changes in the flow.
> Will improve the relevant section in the User Guide 
> (https://nifi.apache.org/docs/nifi-docs/html/user-guide.html#managing_local_changes)
>  with the following clarifications:
> Modifying doesn’t trigger local changes because variable values are intended 
> to be different in each environment.  When a flow is imported to an 
> environment, it is assumed there is a one-time operation required to set 
> those variables specific for the given environment. 
> Creating a variable doesn’t trigger a local change because just creating a 
> variable on its own has not changed anything about what the flow processes.  
> A component will have to be created/modified that uses the new variable, 
> which will trigger a local change.





[GitHub] nifi pull request #3125: NIFI-5677 Added note to clarify why modifying/creat...

2018-11-05 Thread andrewmlim
Github user andrewmlim commented on a diff in the pull request:

https://github.com/apache/nifi/pull/3125#discussion_r230824217
  
--- Diff: nifi-docs/src/main/asciidoc/user-guide.adoc ---
@@ -1928,7 +1928,9 @@ The following actions are not considered local 
changes:
 * modifying sensitive property values
 * modifying remote process group URLs
 * updating a processor that was referencing a non-existent controller 
service to reference an externally available controller service
-* modifying variables
+* creating or modifying variables
--- End diff --

Added "deleting variables" scenario.  PR ready for review/merge if no other 
issues raised. Thanks @ijokarumawak!


---


[jira] [Commented] (NIFI-5677) Add/clarify why modifying/creating variables are not considered local changes in versioned flows

2018-11-05 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16675420#comment-16675420
 ] 

ASF GitHub Bot commented on NIFI-5677:
--

Github user andrewmlim commented on a diff in the pull request:

https://github.com/apache/nifi/pull/3125#discussion_r230824217
  
--- Diff: nifi-docs/src/main/asciidoc/user-guide.adoc ---
@@ -1928,7 +1928,9 @@ The following actions are not considered local 
changes:
 * modifying sensitive property values
 * modifying remote process group URLs
 * updating a processor that was referencing a non-existent controller 
service to reference an externally available controller service
-* modifying variables
+* creating or modifying variables
--- End diff --

Added "deleting variables" scenario.  PR ready for review/merge if no other 
issues raised. Thanks @ijokarumawak!


> Add/clarify why modifying/creating variables are not considered local changes 
> in versioned flows
> 
>
> Key: NIFI-5677
> URL: https://issues.apache.org/jira/browse/NIFI-5677
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Documentation & Website
>Reporter: Andrew Lim
>Assignee: Andrew Lim
>Priority: Minor
>
> There has been some confusion over why creating or modifying variables in a 
> versioned flow do not trigger local changes in the flow.
> Will improve the relevant section in the User Guide 
> (https://nifi.apache.org/docs/nifi-docs/html/user-guide.html#managing_local_changes)
>  with the following clarifications:
> Modifying doesn’t trigger local changes because variable values are intended 
> to be different in each environment.  When a flow is imported to an 
> environment, it is assumed there is a one-time operation required to set 
> those variables specific for the given environment. 
> Creating a variable doesn’t trigger a local change because just creating a 
> variable on its own has not changed anything about what the flow processes.  
> A component will have to be created/modified that uses the new variable, 
> which will trigger a local change.





[jira] [Created] (NIFI-5789) DBCPConnectionPool controller service always leaves one connection open

2018-11-05 Thread Colin Dean (JIRA)
Colin Dean created NIFI-5789:


 Summary: DBCPConnectionPool controller service always leaves one 
connection open
 Key: NIFI-5789
 URL: https://issues.apache.org/jira/browse/NIFI-5789
 Project: Apache NiFi
  Issue Type: Bug
  Components: Extensions
Affects Versions: 1.8.0
 Environment: Linux, OpenJDK 8, jTDS 1.3.1, MS SQL Server 2016
Reporter: Colin Dean


Out of the box, NiFi (at least, as of 1.8.0), appears to keep open one database 
connection for each DBCPConnectionPool controller service enabled.

I have multiple DBCPConnectionPool controller services configured to access the 
same server with different options, so this quickly adds up against a limited 
number of connections to my database server. I have a scheduled workflow that 
runs ~nightly. Connections need not be active except when in active use during 
a short window of time.





[jira] [Created] (NIFI-5790) DBCPConnectionPool configuration should expose underlying connection idle and eviction configuration

2018-11-05 Thread Colin Dean (JIRA)
Colin Dean created NIFI-5790:


 Summary: DBCPConnectionPool configuration should expose underlying 
connection idle and eviction configuration
 Key: NIFI-5790
 URL: https://issues.apache.org/jira/browse/NIFI-5790
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Extensions
Affects Versions: 1.8.0
Reporter: Colin Dean


While investigating a fix for NIFI-5789, I noticed in the [DBCPConnectionPool 
documentation|https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-dbcp-service-nar/1.8.0/org.apache.nifi.dbcp.DBCPConnectionPool/index.html]
 that NiFi appears _not_ to have controller service configuration options 
associated with [Apache 
Commons-DBCP|https://commons.apache.org/proper/commons-dbcp/configuration.html] 
{{BasicDataSource}} parameters like {{minIdle}} and {{maxIdle}}, which I think 
should be both set to 0 in my particular use case. 

Alternatively, I think I could set {{maxConnLifetimeMillis}} to something even 
in the minutes range and satisfy my use case (a connection need not be released 
_immediately_ but within a reasonable period of time), but this option is also 
not available.





[jira] [Commented] (NIFI-5790) DBCPConnectionPool configuration should expose underlying connection idle and eviction configuration

2018-11-05 Thread Colin Dean (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16675460#comment-16675460
 ] 

Colin Dean commented on NIFI-5790:
--

This came from the my Stack Overflow question, [How can I configure NiFi's 
DBCPConnectionPool not to keep idle connections 
open?|https://stackoverflow.com/questions/53110163/how-can-i-configure-nifis-dbcpconnectionpool-not-to-keep-idle-connections-open]

> DBCPConnectionPool configuration should expose underlying connection idle and 
> eviction configuration
> 
>
> Key: NIFI-5790
> URL: https://issues.apache.org/jira/browse/NIFI-5790
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.8.0
>Reporter: Colin Dean
>Priority: Major
>  Labels: DBCP, database
>
> While investigating a fix for NIFI-5789, I noticed in the [DBCPConnectionPool 
> documentation|https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-dbcp-service-nar/1.8.0/org.apache.nifi.dbcp.DBCPConnectionPool/index.html]
>  that NiFi appears _not_ to have controller service configuration options 
> associated with [Apache 
> Commons-DBCP|https://commons.apache.org/proper/commons-dbcp/configuration.html]
>  {{BasicDataSource}} parameters like {{minIdle}} and {{maxIdle}}, which I 
> think should be both set to 0 in my particular use case. 
> Alternatively, I think I could set {{maxConnLifetimeMillis}} to something 
> even in the minutes range and satisfy my use case (a connection need not be 
> released _immediately_ but within a reasonable period of time), but this 
> option is also not available.
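
The effect of an idle cap like DBCP's {{maxIdle}} can be illustrated with a toy pool: with a cap of 0 a returned connection is closed immediately (the behavior the reporter wants), while with a cap of 1 one connection stays idle between uses (the out-of-the-box symptom described in NIFI-5789). This sketches only the semantics and is not Commons DBCP code; in real Commons DBCP the knobs are `BasicDataSource.setMinIdle`/`setMaxIdle`/`setMaxConnLifetimeMillis`.

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class IdlePoolSketch {
    private final int maxIdle;            // idle connections kept; 0 = close on return
    final Deque<String> idle = new ArrayDeque<>();
    int closed = 0;                       // connections physically closed

    IdlePoolSketch(int maxIdle) { this.maxIdle = maxIdle; }

    String borrow() {
        // Reuse an idle connection if one exists, else "open" a new one.
        return idle.isEmpty() ? "conn" : idle.pop();
    }

    void giveBack(String conn) {
        if (idle.size() < maxIdle) {
            idle.push(conn);   // keep it idle for later reuse
        } else {
            closed++;          // over the idle cap: close immediately
        }
    }

    public static void main(String[] args) {
        IdlePoolSketch noIdle = new IdlePoolSketch(0);
        noIdle.giveBack(noIdle.borrow());
        System.out.println(noIdle.closed + " " + noIdle.idle.size()); // 1 0

        IdlePoolSketch keepOne = new IdlePoolSketch(1);
        keepOne.giveBack(keepOne.borrow());
        System.out.println(keepOne.closed + " " + keepOne.idle.size()); // 0 1
    }
}
```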





[jira] [Commented] (NIFI-5789) DBCPConnectionPool controller service always leaves one connection open

2018-11-05 Thread Colin Dean (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16675459#comment-16675459
 ] 

Colin Dean commented on NIFI-5789:
--

This came from the my Stack Overflow question, [How can I configure NiFi's 
DBCPConnectionPool not to keep idle connections 
open?|https://stackoverflow.com/questions/53110163/how-can-i-configure-nifis-dbcpconnectionpool-not-to-keep-idle-connections-open]

> DBCPConnectionPool controller service always leaves one connection open
> ---
>
> Key: NIFI-5789
> URL: https://issues.apache.org/jira/browse/NIFI-5789
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.8.0
> Environment: Linux, OpenJDK 8, jTDS 1.3.1, MS SQL Server 2016
>Reporter: Colin Dean
>Priority: Major
>  Labels: DBCP, database
>
> Out of the box, NiFi (at least, as of 1.8.0), appears to keep open one 
> database connection for each DBCPConnectionPool controller service enabled.
> I have multiple DBCPConnectionPool controller services configured to access 
> the same server with different options, so this quickly adds up against a 
> limited number of connections to my database server. I have a scheduled 
> workflow that runs ~nightly. Connections need not be active except when in 
> active use during a short window of time.





[jira] [Comment Edited] (NIFI-5789) DBCPConnectionPool controller service always leaves one connection open

2018-11-05 Thread Colin Dean (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16675459#comment-16675459
 ] 

Colin Dean edited comment on NIFI-5789 at 11/5/18 5:31 PM:
---

This came from my Stack Overflow question, [How can I configure NiFi's 
DBCPConnectionPool not to keep idle connections 
open?|https://stackoverflow.com/questions/53110163/how-can-i-configure-nifis-dbcpconnectionpool-not-to-keep-idle-connections-open]


was (Author: colindean):
This came from the my Stack Overflow question, [How can I configure NiFi's 
DBCPConnectionPool not to keep idle connections 
open?|https://stackoverflow.com/questions/53110163/how-can-i-configure-nifis-dbcpconnectionpool-not-to-keep-idle-connections-open]

> DBCPConnectionPool controller service always leaves one connection open
> ---
>
> Key: NIFI-5789
> URL: https://issues.apache.org/jira/browse/NIFI-5789
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.8.0
> Environment: Linux, OpenJDK 8, jTDS 1.3.1, MS SQL Server 2016
>Reporter: Colin Dean
>Priority: Major
>  Labels: DBCP, database
>
> Out of the box, NiFi (at least, as of 1.8.0), appears to keep open one 
> database connection for each DBCPConnectionPool controller service enabled.
> I have multiple DBCPConnectionPool controller services configured to access 
> the same server with different options, so this quickly adds up against a 
> limited number of connections to my database server. I have a scheduled 
> workflow that runs ~nightly. Connections need not be active except when in 
> active use during a short window of time.





[jira] [Comment Edited] (NIFI-5790) DBCPConnectionPool configuration should expose underlying connection idle and eviction configuration

2018-11-05 Thread Colin Dean (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16675460#comment-16675460
 ] 

Colin Dean edited comment on NIFI-5790 at 11/5/18 5:31 PM:
---

This came from my Stack Overflow question, [How can I configure NiFi's 
DBCPConnectionPool not to keep idle connections 
open?|https://stackoverflow.com/questions/53110163/how-can-i-configure-nifis-dbcpconnectionpool-not-to-keep-idle-connections-open]


was (Author: colindean):
This came from the my Stack Overflow question, [How can I configure NiFi's 
DBCPConnectionPool not to keep idle connections 
open?|https://stackoverflow.com/questions/53110163/how-can-i-configure-nifis-dbcpconnectionpool-not-to-keep-idle-connections-open]

> DBCPConnectionPool configuration should expose underlying connection idle and 
> eviction configuration
> 
>
> Key: NIFI-5790
> URL: https://issues.apache.org/jira/browse/NIFI-5790
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.8.0
>Reporter: Colin Dean
>Priority: Major
>  Labels: DBCP, database
>
> While investigating a fix for NIFI-5789, I noticed in the [DBCPConnectionPool 
> documentation|https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-dbcp-service-nar/1.8.0/org.apache.nifi.dbcp.DBCPConnectionPool/index.html]
>  that NiFi appears _not_ to have controller service configuration options 
> associated with [Apache 
> Commons-DBCP|https://commons.apache.org/proper/commons-dbcp/configuration.html]
>  {{BasicDataSource}} parameters like {{minIdle}} and {{maxIdle}}, which I 
> think should be both set to 0 in my particular use case. 
> Alternatively, I think I could set {{maxConnLifetimeMillis}} to something 
> even in the minutes range and satisfy my use case (a connection need not be 
> released _immediately_ but within a reasonable period of time), but this 
> option is also not available.





[jira] [Commented] (NIFI-5790) DBCPConnectionPool configuration should expose underlying connection idle and eviction configuration

2018-11-05 Thread Colin Dean (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16675467#comment-16675467
 ] 

Colin Dean commented on NIFI-5790:
--

I'm asserting that this issue _causes_ NIFI-5789, because the absence of this 
configurability leaves the NiFi user with no option but to have an open 
connection.

> DBCPConnectionPool configuration should expose underlying connection idle and 
> eviction configuration
> 
>
> Key: NIFI-5790
> URL: https://issues.apache.org/jira/browse/NIFI-5790
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.8.0
>Reporter: Colin Dean
>Priority: Major
>  Labels: DBCP, database
>
> While investigating a fix for NIFI-5789, I noticed in the [DBCPConnectionPool 
> documentation|https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-dbcp-service-nar/1.8.0/org.apache.nifi.dbcp.DBCPConnectionPool/index.html]
>  that NiFi appears _not_ to have controller service configuration options 
> associated with [Apache 
> Commons-DBCP|https://commons.apache.org/proper/commons-dbcp/configuration.html]
>  {{BasicDataSource}} parameters like {{minIdle}} and {{maxIdle}}, which I 
> think should be both set to 0 in my particular use case. 
> Alternatively, I think I could set {{maxConnLifetimeMillis}} to something 
> even in the minutes range and satisfy my use case (a connection need not be 
> released _immediately_ but within a reasonable period of time), but this 
> option is also not available.





[jira] [Commented] (NIFI-5764) Allow ListSftp connection parameter

2018-11-05 Thread Alfredo De Luca (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16675468#comment-16675468
 ] 

Alfredo De Luca commented on NIFI-5764:
---

Hi [~ijokarumawak]. Thanks for that, but I can't fix the issue we have. I did a 
few tests (not with NiFi), and if I use a ControlMaster on my ssh connection I 
don't get this error:

 Caused by: com.jcraft.jsch.JSchException: Auth fail

Any ideas/thoughts?

Cheers

> Allow ListSftp connection parameter
> ---
>
> Key: NIFI-5764
> URL: https://issues.apache.org/jira/browse/NIFI-5764
> Project: Apache NiFi
>  Issue Type: Wish
>  Components: Extensions
>Affects Versions: 1.6.0
>Reporter: dav
>Priority: Critical
>  Labels: SFTP, customization, sftp
>
> ListSftp and other Sftp processors should be able to add parameters
> (like [-B buffer_size] [-b batchfile] [-c cipher]
>  [-D sftp_server_path] [-F ssh_config] [-i identity_file] [-l limit]
>  [-o ssh_option] [-P port] [-R num_requests] [-S program]
>  [-s subsystem | sftp_server] host
>  sftp [user@]host[:file ...]
>  sftp [user@]host[:dir[/]]
>  sftp -b batchfile [user@]host) 
> in order to edit the type of connection on Sftp Server.
> For instance, I have this error on nifi:
> 2018-10-29 11:06:09,462 ERROR [Timer-Driven Process Thread-5] 
> SimpleProcessLogger.java:254 
> ListSFTP[id=766ac418-27ce-335a-5b13-52abe3495592] Failed to perform listing 
> on remote host due to java.io.IOException: Failed to obtain connection to 
> remote host due to com.jcraft.jsch.JSchException: Auth fail: {}
> java.io.IOException: Failed to obtain connection to remote host due to 
> com.jcraft.jsch.JSchException: Auth fail
>  at 
> org.apache.nifi.processors.standard.util.SFTPTransfer.getChannel(SFTPTransfer.java:468)
>  at 
> org.apache.nifi.processors.standard.util.SFTPTransfer.getListing(SFTPTransfer.java:192)
>  at 
> org.apache.nifi.processors.standard.util.SFTPTransfer.getListing(SFTPTransfer.java:156)
>  at 
> org.apache.nifi.processors.standard.ListFileTransfer.performListing(ListFileTransfer.java:105)
>  at 
> org.apache.nifi.processor.util.list.AbstractListProcessor.onTrigger(AbstractListProcessor.java:401)
>  at 
> org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
>  at 
> org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1147)
>  at 
> org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:175)
>  at 
> org.apache.nifi.controller.scheduling.QuartzSchedulingAgent$2.run(QuartzSchedulingAgent.java:140)
>  at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>  at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>  at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>  at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748)
> Caused by: com.jcraft.jsch.JSchException: Auth fail
>  at com.jcraft.jsch.Session.connect(Session.java:519)
>  at com.jcraft.jsch.Session.connect(Session.java:183)
>  at 
> org.apache.nifi.processors.standard.util.SFTPTransfer.getChannel(SFTPTransfer.java:448)
>  ... 15 common frames omitted
> This can be avoided by connecting to the Sftp server with this command:
> *sftp  -o “controlmaster auto” username@sftp_server*
>  





[jira] [Commented] (NIFI-4362) Prometheus Reporting Task

2018-11-05 Thread Joseph Witt (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-4362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16675472#comment-16675472
 ] 

Joseph Witt commented on NIFI-4362:
---

[~kdoran] recently showed me the awesomeness that is prometheus/micrometer - 
this could be a really powerful capability.

> Prometheus Reporting Task
> -
>
> Key: NIFI-4362
> URL: https://issues.apache.org/jira/browse/NIFI-4362
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: matt price
>Assignee: matt price
>Priority: Minor
>  Labels: features, newbie
> Attachments: nifi-prometheus-nar-1.7.1.nar
>
>
> Right now Datadog is one of the few external monitoring systems that is 
> supported by Nifi via a reporting task.  We are building a Prometheus 
> reporting task that will report similar metrics as Datadog/processor status 
> history and wanted to contribute this back to the community.
> This is my first contribution to Nifi so please correct me if I'm doing 
> something incorrectly.





[jira] [Comment Edited] (NIFI-5764) Allow ListSftp connection parameter

2018-11-05 Thread Alfredo De Luca (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16675468#comment-16675468
 ] 

Alfredo De Luca edited comment on NIFI-5764 at 11/5/18 5:45 PM:


Hi [~ijokarumawak]. Thanks for that, but I can't fix the issue we have. I did a 
few tests (not with NiFi), and if I use a ControlMaster on my ssh connection I 
don't get this error:

 Caused by: com.jcraft.jsch.JSchException: Auth fail

Any ideas/thoughts?

Cheers


was (Author: alfredo.deluca):
Hi [~ijokarumawak]. Thanks for that but I can't fix that issue that we have. So 
i did a few test (not with NiFi) and if I use a controlmaster on my ssh 
connection I don't get this error. 

 Caused by: com.jcraft.jsch.JSchException: Auth fail

Any idea/thoughts? 

Cheerd

> Allow ListSftp connection parameter
> ---
>
> Key: NIFI-5764
> URL: https://issues.apache.org/jira/browse/NIFI-5764
> Project: Apache NiFi
>  Issue Type: Wish
>  Components: Extensions
>Affects Versions: 1.6.0
>Reporter: dav
>Priority: Critical
>  Labels: SFTP, customization, sftp
>
> ListSftp and other Sftp processors should be able to add parameters
> (like [-B buffer_size] [-b batchfile] [-c cipher]
>  [-D sftp_server_path] [-F ssh_config] [-i identity_file] [-l limit]
>  [-o ssh_option] [-P port] [-R num_requests] [-S program]
>  [-s subsystem | sftp_server] host
>  sftp [user@]host[:file ...]
>  sftp [user@]host[:dir[/]]
>  sftp -b batchfile [user@]host) 
> in order to edit the type of connection on Sftp Server.
> For instance, I have this error on nifi:
> 2018-10-29 11:06:09,462 ERROR [Timer-Driven Process Thread-5] 
> SimpleProcessLogger.java:254 
> ListSFTP[id=766ac418-27ce-335a-5b13-52abe3495592] Failed to perform listing 
> on remote host due to java.io.IOException: Failed to obtain connection to 
> remote host due to com.jcraft.jsch.JSchException: Auth fail: {}
> java.io.IOException: Failed to obtain connection to remote host due to 
> com.jcraft.jsch.JSchException: Auth fail
>  at 
> org.apache.nifi.processors.standard.util.SFTPTransfer.getChannel(SFTPTransfer.java:468)
>  at 
> org.apache.nifi.processors.standard.util.SFTPTransfer.getListing(SFTPTransfer.java:192)
>  at 
> org.apache.nifi.processors.standard.util.SFTPTransfer.getListing(SFTPTransfer.java:156)
>  at 
> org.apache.nifi.processors.standard.ListFileTransfer.performListing(ListFileTransfer.java:105)
>  at 
> org.apache.nifi.processor.util.list.AbstractListProcessor.onTrigger(AbstractListProcessor.java:401)
>  at 
> org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
>  at 
> org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1147)
>  at 
> org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:175)
>  at 
> org.apache.nifi.controller.scheduling.QuartzSchedulingAgent$2.run(QuartzSchedulingAgent.java:140)
>  at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>  at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>  at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>  at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748)
> Caused by: com.jcraft.jsch.JSchException: Auth fail
>  at com.jcraft.jsch.Session.connect(Session.java:519)
>  at com.jcraft.jsch.Session.connect(Session.java:183)
>  at 
> org.apache.nifi.processors.standard.util.SFTPTransfer.getChannel(SFTPTransfer.java:448)
>  ... 15 common frames omitted
> This can be avoided by connecting to the SFTP server with this command:
> *sftp -o "controlmaster auto" username@sftp_server*
>  
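The {{-o "controlmaster auto"}} workaround quoted above can also be made persistent in the OpenSSH client configuration rather than passed on every command line. A minimal sketch, assuming OpenSSH; the host alias, hostname, and user below are hypothetical placeholders:

```
# ~/.ssh/config -- hypothetical entry; adjust Host/HostName/User to your server
Host my-sftp-server
    HostName sftp.example.com
    User nifi
    ControlMaster auto
    ControlPath ~/.ssh/cm-%r@%h-%p
    ControlPersist 10m
```

With this entry, `sftp my-sftp-server` reuses a single multiplexed SSH connection. Note that NiFi's SFTP processors use the JSch library (as the stack trace above shows), not the OpenSSH client, so this configuration only affects command-line testing.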



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (NIFI-5791) Add Apache Daffodil parse/unparse processor

2018-11-05 Thread Steve Lawrence (JIRA)
Steve Lawrence created NIFI-5791:


 Summary: Add Apache Daffodil parse/unparse processor
 Key: NIFI-5791
 URL: https://issues.apache.org/jira/browse/NIFI-5791
 Project: Apache NiFi
  Issue Type: Bug
  Components: Extensions
Reporter: Steve Lawrence






--


[GitHub] nifi pull request #3130: NIFI-5791: Add Apache Daffodil (incubating) bundle

2018-11-05 Thread stevedlawrence
GitHub user stevedlawrence opened a pull request:

https://github.com/apache/nifi/pull/3130

NIFI-5791: Add Apache Daffodil (incubating) bundle

Adds a new daffodil bundle containing two processors (DaffodilParse and
DaffodilUnparse) used to convert fixed format data to XML or JSON using
Apache Daffodil and DFDL schemas.

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [X] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [X] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [X] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [X] Is your initial contribution a single, squashed commit?

### For code changes:
- [X] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [X] Have you written or updated unit tests to verify your changes?
- [X] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [X] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [X] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [X] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [X] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/stevedlawrence/nifi nifi-5791

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/3130.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3130


commit 4ab3ff90bbcab8d02a4ec24fb3823f58b0e39118
Author: Steve Lawrence 
Date:   2018-11-05T13:42:42Z

NIFI-5791: Add Apache Daffodil (incubating) bundle

Adds a new daffodil bundle containing two processors (DaffodilParse and
DaffodilUnparse) used to convert fixed format data to XML or JSON using
Apache Daffodil and DFDL schemas.




---


[jira] [Commented] (NIFI-5791) Add Apache Daffodil parse/unparse processor

2018-11-05 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16675513#comment-16675513
 ] 

ASF GitHub Bot commented on NIFI-5791:
--

GitHub user stevedlawrence opened a pull request:

https://github.com/apache/nifi/pull/3130

NIFI-5791: Add Apache Daffodil (incubating) bundle

Adds a new daffodil bundle containing two processors (DaffodilParse and
DaffodilUnparse) used to convert fixed format data to XML or JSON using
Apache Daffodil and DFDL schemas.

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [X] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [X] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [X] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [X] Is your initial contribution a single, squashed commit?

### For code changes:
- [X] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [X] Have you written or updated unit tests to verify your changes?
- [X] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [X] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [X] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [X] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [X] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/stevedlawrence/nifi nifi-5791

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/3130.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3130


commit 4ab3ff90bbcab8d02a4ec24fb3823f58b0e39118
Author: Steve Lawrence 
Date:   2018-11-05T13:42:42Z

NIFI-5791: Add Apache Daffodil (incubating) bundle

Adds a new daffodil bundle containing two processors (DaffodilParse and
DaffodilUnparse) used to convert fixed format data to XML or JSON using
Apache Daffodil and DFDL schemas.




> Add Apache Daffodil parse/unparse processor
> ---
>
> Key: NIFI-5791
> URL: https://issues.apache.org/jira/browse/NIFI-5791
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Steve Lawrence
>Priority: Major
>




--


[GitHub] nifi issue #3130: NIFI-5791: Add Apache Daffodil (incubating) bundle

2018-11-05 Thread stevedlawrence
Github user stevedlawrence commented on the issue:

https://github.com/apache/nifi/pull/3130
  
Documentation about this new processor:

[Apache Daffodil (incubating)](https://daffodil.apache.org) is the open 
source implementation of the [Data Format Description Language 
(DFDL)](https://www.ogf.org/ogf/doku.php/standards/dfdl/dfdl). DFDL is a 
language capable of describing many data formats, including textual and binary, 
commercial record-oriented, scientific and numeric, modern and legacy, and many 
industry standards. It leverages XML technology and concepts, using a subset of 
the W3C XML Schema type system and annotations to describe such data. Daffodil uses 
this data description to "parse" data into an XML representation of the data. 
This allows one to take advantage of the many XML and JSON technologies (e.g. 
XQuery, XPath, XSLT) to ingest, validate, and manipulate complex data formats. 
Daffodil can also use this data description to "unparse", or serialize, the XML 
or JSON representation back to the original data format.

This PR provides a new Daffodil bundle containing DaffodilParse and 
DaffodilUnparse processors.

For an example of its usage, I've provided a NiFi template here:


https://gist.github.com/stevedlawrence/5a8259c9fffb3cb3b317ba31a6ef0494#file-daffodil_pcap_filter_nifi_template-xml

Which looks like this:


![nifi-daffodil-pcap-filter](https://user-images.githubusercontent.com/3180601/48017874-6a536c80-e0fd-11e8-9aa0-3eb785a99157.png)


To set the environment up to work with this template, perform the following:
```
mkdir -p /tmp/nifi/{getfile,putfile}
git clone https://github.com/DFDLSchemas/PCAP.git /tmp/nifi/PCAP
curl 
"https://gist.githubusercontent.com/stevedlawrence/5a8259c9fffb3cb3b317ba31a6ef0494/raw/c19fddd6d1e73777e10a549bfd369b077aefbb50/pcap-filter.xsl";
 > /tmp/nifi/pcap-filter.xsl
```
The template has 5 processors in a single pipeline that perform the 
following:
1. **GetFile** - Reads a PCAP file from ``/tmp/nifi/getfile``
1. **DaffodilParse** - Parses the PCAP file to an XML representation
1. **TransformXML** - Removes all XML elements that have an IP address of 
``192.168.170.8``
1. **DaffodilUnparse** - Unparses the filtered XML back to PCAP file format
1. **PutFile** - Writes the filtered PCAP file to ``/tmp/nifi/putfile``

To test this flow, perform the following:
```
cp /tmp/nifi/PCAP/src/test/resources/com/tresys/pcap/data/dns.cap 
/tmp/nifi/getfile/
```
The original dns.cap file has about 40 packets. After filtering, the new 
pcap file written by Daffodil contains roughly 10 packets that were not filtered out.




---


[jira] [Commented] (NIFI-5791) Add Apache Daffodil parse/unparse processor

2018-11-05 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16675551#comment-16675551
 ] 

ASF GitHub Bot commented on NIFI-5791:
--

Github user stevedlawrence commented on the issue:

https://github.com/apache/nifi/pull/3130
  
Documentation about this new processor:

[Apache Daffodil (incubating)](https://daffodil.apache.org) is the open 
source implementation of the [Data Format Description Language 
(DFDL)](https://www.ogf.org/ogf/doku.php/standards/dfdl/dfdl). DFDL is a 
language capable of describing many data formats, including textual and binary, 
commercial record-oriented, scientific and numeric, modern and legacy, and many 
industry standards. It leverages XML technology and concepts, using a subset of 
the W3C XML Schema type system and annotations to describe such data. Daffodil uses 
this data description to "parse" data into an XML representation of the data. 
This allows one to take advantage of the many XML and JSON technologies (e.g. 
XQuery, XPath, XSLT) to ingest, validate, and manipulate complex data formats. 
Daffodil can also use this data description to "unparse", or serialize, the XML 
or JSON representation back to the original data format.

This PR provides a new Daffodil bundle containing DaffodilParse and 
DaffodilUnparse processors.

For an example of its usage, I've provided a NiFi template here:


https://gist.github.com/stevedlawrence/5a8259c9fffb3cb3b317ba31a6ef0494#file-daffodil_pcap_filter_nifi_template-xml

Which looks like this:


![nifi-daffodil-pcap-filter](https://user-images.githubusercontent.com/3180601/48017874-6a536c80-e0fd-11e8-9aa0-3eb785a99157.png)


To set the environment up to work with this template, perform the following:
```
mkdir -p /tmp/nifi/{getfile,putfile}
git clone https://github.com/DFDLSchemas/PCAP.git /tmp/nifi/PCAP
curl 
"https://gist.githubusercontent.com/stevedlawrence/5a8259c9fffb3cb3b317ba31a6ef0494/raw/c19fddd6d1e73777e10a549bfd369b077aefbb50/pcap-filter.xsl";
 > /tmp/nifi/pcap-filter.xsl
```
The template has 5 processors in a single pipeline that perform the 
following:
1. **GetFile** - Reads a PCAP file from ``/tmp/nifi/getfile``
1. **DaffodilParse** - Parses the PCAP file to an XML representation
1. **TransformXML** - Removes all XML elements that have an IP address of 
``192.168.170.8``
1. **DaffodilUnparse** - Unparses the filtered XML back to PCAP file format
1. **PutFile** - Writes the filtered PCAP file to ``/tmp/nifi/putfile``

To test this flow, perform the following:
```
cp /tmp/nifi/PCAP/src/test/resources/com/tresys/pcap/data/dns.cap 
/tmp/nifi/getfile/
```
The original dns.cap file has about 40 packets. After filtering, the new 
pcap file written by Daffodil contains roughly 10 packets that were not filtered out.




> Add Apache Daffodil parse/unparse processor
> ---
>
> Key: NIFI-5791
> URL: https://issues.apache.org/jira/browse/NIFI-5791
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Steve Lawrence
>Priority: Major
>




--


[jira] [Assigned] (NIFI-3229) When a queue contains only Penalized FlowFile's the next processor Tasks/Time statistics becomes extremely large

2018-11-05 Thread Peter Wicks (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-3229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Wicks reassigned NIFI-3229:
-

Assignee: Peter Wicks

> When a queue contains only Penalized FlowFile's the next processor Tasks/Time 
> statistics becomes extremely large
> 
>
> Key: NIFI-3229
> URL: https://issues.apache.org/jira/browse/NIFI-3229
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Dmitry Lukyanov
>Assignee: Peter Wicks
>Priority: Minor
> Attachments: flow.xml.gz, nifi-stats.png, nifi-stats2.png
>
>
> FetchFile on `not.found` produces a penalized flow file.
> In this case I'm expecting the next processor to do one task execution when 
> the flow file's penalize time is over,
> but according to the stats it executes approximately 1-6 times.
> I understand that it could be a feature, but the stats become really unclear...
> Maybe there should be two columns:
> `All Task/Times` and `Committed Task/Times`



--


[jira] [Updated] (NIFI-3229) When a queue contains only Penalized FlowFile's the next processor Tasks/Time statistics becomes extremely large

2018-11-05 Thread Peter Wicks (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-3229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Wicks updated NIFI-3229:
--
Summary: When a queue contains only Penalized FlowFile's the next processor 
Tasks/Time statistics becomes extremely large  (was: when flowfile penalized 
the next processor `Tasks/Time` statistics becomes extreamly large)

> When a queue contains only Penalized FlowFile's the next processor Tasks/Time 
> statistics becomes extremely large
> 
>
> Key: NIFI-3229
> URL: https://issues.apache.org/jira/browse/NIFI-3229
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Dmitry Lukyanov
>Priority: Minor
> Attachments: flow.xml.gz, nifi-stats.png, nifi-stats2.png
>
>
> FetchFile on `not.found` produces a penalized flow file.
> In this case I'm expecting the next processor to do one task execution when 
> the flow file's penalize time is over,
> but according to the stats it executes approximately 1-6 times.
> I understand that it could be a feature, but the stats become really unclear...
> Maybe there should be two columns:
> `All Task/Times` and `Committed Task/Times`



--


[GitHub] nifi issue #2882: NIFI-4914

2018-11-05 Thread david-streamlio
Github user david-streamlio commented on the issue:

https://github.com/apache/nifi/pull/2882
  
@rumbin I am merging in changes from @pvillard31, and correcting some minor 
issues that cause the processors to hang in certain situations. ETA is by 
Monday 11/12 for my next commit with these changes.


---


[jira] [Commented] (NIFI-4914) Implement record model processor for Pulsar, i.e. ConsumePulsarRecord, PublishPulsarRecord

2018-11-05 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-4914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16675584#comment-16675584
 ] 

ASF GitHub Bot commented on NIFI-4914:
--

Github user david-streamlio commented on the issue:

https://github.com/apache/nifi/pull/2882
  
@rumbin I am merging in changes from @pvillard31, and correcting some minor 
issues that cause the processors to hang in certain situations. ETA is by 
Monday 11/12 for my next commit with these changes.


> Implement record model processor for Pulsar, i.e. ConsumePulsarRecord, 
> PublishPulsarRecord
> --
>
> Key: NIFI-4914
> URL: https://issues.apache.org/jira/browse/NIFI-4914
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Affects Versions: 1.6.0
>Reporter: David Kjerrumgaard
>Priority: Minor
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> Create record-based processors for Apache Pulsar 



--


[GitHub] nifi issue #3130: NIFI-5791: Add Apache Daffodil (incubating) bundle

2018-11-05 Thread joewitt
Github user joewitt commented on the issue:

https://github.com/apache/nifi/pull/3130
  
First of all, this looks like a really well-thought-out contribution, and huge 
thanks for taking the time to create what looks like a well-done NOTICE file!

I'm not sure about the naming/intent that is communicated to the user.  And 
in general we try to name things VerbSubject style.  What are some other names 
to consider?


---


[jira] [Commented] (NIFI-5791) Add Apache Daffodil parse/unparse processor

2018-11-05 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16675592#comment-16675592
 ] 

ASF GitHub Bot commented on NIFI-5791:
--

Github user joewitt commented on the issue:

https://github.com/apache/nifi/pull/3130
  
First of all, this looks like a really well-thought-out contribution, and huge 
thanks for taking the time to create what looks like a well-done NOTICE file!

I'm not sure about the naming/intent that is communicated to the user.  And 
in general we try to name things VerbSubject style.  What are some other names 
to consider?


> Add Apache Daffodil parse/unparse processor
> ---
>
> Key: NIFI-5791
> URL: https://issues.apache.org/jira/browse/NIFI-5791
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Steve Lawrence
>Priority: Major
>




--


[jira] [Commented] (NIFI-4362) Prometheus Reporting Task

2018-11-05 Thread Sudeep Kumar Garg (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-4362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16675597#comment-16675597
 ] 

Sudeep Kumar Garg commented on NIFI-4362:
-

[~dseifert] thanks a lot. I'll look into this and update you.

Thanks,

Sudeep

> Prometheus Reporting Task
> -
>
> Key: NIFI-4362
> URL: https://issues.apache.org/jira/browse/NIFI-4362
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: matt price
>Assignee: matt price
>Priority: Minor
>  Labels: features, newbie
> Attachments: nifi-prometheus-nar-1.7.1.nar
>
>
> Right now Datadog is one of the few external monitoring systems that is 
> supported by Nifi via a reporting task.  We are building a Prometheus 
> reporting task that will report similar metrics as Datadog/processor status 
> history and wanted to contribute this back to the community.
> This is my first contribution to Nifi so please correct me if I'm doing 
> something incorrectly.



--


[GitHub] nifi issue #3107: NIFI-5744: Put exception message to attribute while Execut...

2018-11-05 Thread patricker
Github user patricker commented on the issue:

https://github.com/apache/nifi/pull/3107
  
@mattyb149 Not sure if you've seen my latest reply to the email chain, but 
it looks like this is already a standard pattern used in ~12 other processors. 
Would love to see the discussion come to a conclusion in the email chain though.


---


[jira] [Commented] (NIFI-5744) Put exception message to attribute while ExecuteSQL fail

2018-11-05 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5744?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16675602#comment-16675602
 ] 

ASF GitHub Bot commented on NIFI-5744:
--

Github user patricker commented on the issue:

https://github.com/apache/nifi/pull/3107
  
@mattyb149 Not sure if you've seen my latest reply to the email chain, but 
it looks like this is already a standard pattern used in ~12 other processors. 
Would love to see the discussion come to a conclusion in the email chain though.


> Put exception message to attribute while ExecuteSQL fail
> 
>
> Key: NIFI-5744
> URL: https://issues.apache.org/jira/browse/NIFI-5744
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.7.1
>Reporter: Deon Huang
>Assignee: Deon Huang
>Priority: Minor
>
> In some scenario, it would be great if we could have different behavior based 
> on exception.
>  Better error tracking afterwards in attribute format instead of tracking in 
> log.
> For example, if it’s connection refused exception due to wrong url. 
>  We won’t want to retry and error message attribute would be helpful to keep 
> track of.
> While it’s other scenario that database temporary unavailable, we should 
> retry it based on should retry exception.
> Should be a quick fix at AbstractExecuteSQL before transfer flowfile to 
> failure relationship
> {code:java}
>  session.transfer(fileToProcess, REL_FAILURE);
> {code}



--


[jira] [Commented] (NIFI-5790) DBCPConnectionPool configuration should expose underlying connection idle and eviction configuration

2018-11-05 Thread Colin Dean (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16675614#comment-16675614
 ] 

Colin Dean commented on NIFI-5790:
--

I've got code worked up that adds six options:
{code:java}
dataSource.setMinIdle(minIdle);
dataSource.setMaxIdle(maxIdle);
dataSource.setMaxConnLifetimeMillis(maxConnLifetimeMillis);

dataSource.setTimeBetweenEvictionRunsMillis(timeBetweenEvictionRunsMillis);
dataSource.setMinEvictableIdleTimeMillis(minEvictableIdleTimeMillis);

dataSource.setSoftMinEvictableIdleTimeMillis(softMinEvictableIdleTimeMillis);
{code}
I'm using the defaults plucked from the [commons-dbcp 
docs|https://commons.apache.org/proper/commons-dbcp/configuration.html] and 
made only slight modifications to the help text for each so that the text makes 
more sense in the context of NiFi.

I'm waiting on a test run and package build before I try it out.

I don't know how best to write tests for this functionality, though. The test 
in {{org.apache.nifi.dbcp.DBCPServiceTest#testMaxWait}} seems to set a 
non-default option and expect the controller service to be valid. I suppose I 
could do that ~generatively in order to capture a breadth of all six options' 
configurability…
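That combinatorial ("generative") check could be sketched by enumerating the cross product of candidate values for the six options. A minimal sketch; the value sets below are illustrative placeholders, not commons-dbcp defaults, and a real test would apply each combination to the DBCPConnectionPool controller service and assert it is still valid:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class OptionCombos {
    public static void main(String[] args) {
        // Illustrative candidate values for the six options discussed above;
        // these are hypothetical, not the commons-dbcp defaults.
        Map<String, long[]> options = new LinkedHashMap<>();
        options.put("minIdle", new long[]{0, 1});
        options.put("maxIdle", new long[]{0, 8});
        options.put("maxConnLifetimeMillis", new long[]{-1, 60_000});
        options.put("timeBetweenEvictionRunsMillis", new long[]{-1, 30_000});
        options.put("minEvictableIdleTimeMillis", new long[]{1_800_000});
        options.put("softMinEvictableIdleTimeMillis", new long[]{-1});

        // Build the cross product of all candidate values.
        List<Map<String, Long>> combos = new ArrayList<>();
        combos.add(new LinkedHashMap<>());
        for (Map.Entry<String, long[]> e : options.entrySet()) {
            List<Map<String, Long>> next = new ArrayList<>();
            for (Map<String, Long> partial : combos) {
                for (long v : e.getValue()) {
                    Map<String, Long> withValue = new LinkedHashMap<>(partial);
                    withValue.put(e.getKey(), v);
                    next.add(withValue);
                }
            }
            combos = next;
        }
        // 2*2*2*2*1*1 = 16 configurations to validate
        System.out.println(combos.size());
    }
}
```

Each map in the resulting list would then be applied as property values on the controller service under test.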

> DBCPConnectionPool configuration should expose underlying connection idle and 
> eviction configuration
> 
>
> Key: NIFI-5790
> URL: https://issues.apache.org/jira/browse/NIFI-5790
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.8.0
>Reporter: Colin Dean
>Priority: Major
>  Labels: DBCP, database
>
> While investigating a fix for NIFI-5789, I noticed in the [DBCPConnectionPool 
> documentation|https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-dbcp-service-nar/1.8.0/org.apache.nifi.dbcp.DBCPConnectionPool/index.html]
>  that NiFi appears _not_ to have controller service configuration options 
> associated with [Apache 
> Commons-DBCP|https://commons.apache.org/proper/commons-dbcp/configuration.html]
>  {{BasicDataSource}} parameters like {{minIdle}} and {{maxIdle}}, which I 
> think should be both set to 0 in my particular use case. 
> Alternatively, I think I could set {{maxConnLifetimeMillis}} to something 
> even in the minutes range and satisfy my use case (a connection need not be 
> released _immediately_ but within a reasonable period of time), but this 
> option is also not available.



--


[GitHub] nifi issue #3130: NIFI-5791: Add Apache Daffodil (incubating) bundle

2018-11-05 Thread stevedlawrence
Github user stevedlawrence commented on the issue:

https://github.com/apache/nifi/pull/3130
  
At its most basic, and in DFDL terms, Daffodil parses fixed format data 
to an infoset (which can be either XML or JSON, or potentially others in the 
future), and unparses that infoset back to the original data format. It uses a 
DFDL schema to define how to perform the transformation. Both of these can be 
considered a type of transformation, but the data format can be pretty much 
anything. So perhaps something like TransformToDFDLInfoset and 
TransformFromDFDLInfoset? Or if it's okay to stick with DFDL terms, 
ParseToDFDLInfoset and UnparseFromDFDLInfoset? Definitely open to other 
suggestions.


---


[jira] [Commented] (NIFI-5791) Add Apache Daffodil parse/unparse processor

2018-11-05 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16675649#comment-16675649
 ] 

ASF GitHub Bot commented on NIFI-5791:
--

Github user stevedlawrence commented on the issue:

https://github.com/apache/nifi/pull/3130
  
At its most basic, and in DFDL terms, Daffodil parses fixed format data 
to an infoset (which can be either XML or JSON, or potentially others in the 
future), and unparses that infoset back to the original data format. It uses a 
DFDL schema to define how to perform the transformation. Both of these can be 
considered a type of transformation, but the data format can be pretty much 
anything. So perhaps something like TransformToDFDLInfoset and 
TransformFromDFDLInfoset? Or if it's okay to stick with DFDL terms, 
ParseToDFDLInfoset and UnparseFromDFDLInfoset? Definitely open to other 
suggestions.


> Add Apache Daffodil parse/unparse processor
> ---
>
> Key: NIFI-5791
> URL: https://issues.apache.org/jira/browse/NIFI-5791
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Steve Lawrence
>Priority: Major
>




--


[GitHub] nifi pull request #3124: NIFI-5767 Added NiFi Toolkit Guide to docs

2018-11-05 Thread andrewmlim
Github user andrewmlim commented on a diff in the pull request:

https://github.com/apache/nifi/pull/3124#discussion_r230881754
  
--- Diff: nifi-docs/src/main/asciidoc/toolkit-guide.adoc ---
@@ -0,0 +1,1257 @@
+//
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+//
+= Apache NiFi Toolkit Guide
+Apache NiFi Team 
+:homepage: http://nifi.apache.org
+:linkattrs:
+
+== Overview
+The NiFi Toolkit contains several command line utilities to set up and 
support NiFi in standalone and clustered environments.  The utilities include:
+
+* CLI -- The `cli` tool enables administrators to interact with NiFi and 
NiFi Registry instances to automate tasks such as deploying versioned flows and 
managing process groups and cluster nodes.
+* Encrypt Config -- The `encrypt-config` tool encrypts the sensitive keys 
in the _nifi.properties_ file to facilitate the setup of a secure NiFi instance.
+* File Manager -- The `file-manager` tool enables administrators to 
backup, install or restore a NiFi installation from backup.
+* Flow Analyzer -- The `flow-analyzer` tool produces a report that helps 
administrators understand the max amount of data which can be stored in 
backpressure for a given flow.
+* Node Manager -- The `node-manager` tool enables administrators to 
perform status checks on nodes as well as the ability to connect, disconnect, 
or remove nodes from the cluster.
+* Notify -- The `notify` tool enables administrators to send bulletins to 
the NiFi UI.
+* S2S -- The `s2s` tool enables administrators to send data into or out of 
NiFi flows over site-to-site.
+* TLS Toolkit -- The `tls-toolkit` utility generates the required 
keystores, truststore, and relevant configuration files to facilitate the setup 
of a secure NiFi instance.
+* ZooKeeper Migrator -- The `zk-migrator` tool enables administrators to:
+** move ZooKeeper information from one ZooKeeper cluster to another
+** migrate ZooKeeper node ownership
+
+The utilities are executed with scripts found in the `bin` folder of your 
NiFi Toolkit installation.
+
+NOTE: The NiFi Toolkit is downloaded separately from NiFi (see the 
link:https://nifi.apache.org/download.html[Apache NiFi downloads page^]).
+
+=== Prerequisites for Running in a Secure Environment
+For secured nodes and clusters, two policies should be configured in 
advance:
+
+* Access the controller – A user that will have access to these 
utilities should be authorized in NiFi by creating an “access the 
controller” policy (`/controller`) with both view and modify rights
+* Proxy user request – If not previously set, node’s identity (the DN 
value of the node’s certificate) should be authorized to proxy requests on 
behalf of a user
+
+When executing either the Notify or Node Manager tools in a secured 
environment the `proxyDN` flag option should be used in order to properly 
identify the user that was authorized to execute these commands. In non-secure 
environments, or if running the status operation on the Node Manager tool, the 
flag is ignored.
+
+== NiFi CLI
+This tool offers a CLI focused on interacting with NiFi and NiFi Registry 
in order to automate tasks, such as deploying flows from a NiFi Registry to a 
NiFi instance or managing process groups and cluster nodes.
+
+=== Usage
+The CLI toolkit can be executed in standalone mode to execute a single 
command, or interactive mode to enter an interactive shell.
+
+To execute a single command:
+
 ./bin/cli.sh <command> <args>
+
+To launch the interactive shell:
+
+ ./bin/cli.sh
+
+To show help:
+
+ cli.sh -h
+
+The following are available options:
+
+ demo quick-import
+ nifi current-user
+ nifi cluster-summary
+ nifi connect-node
+ nifi delete-node
+ nifi disconnect-node
+ nifi get-root-id
+ nifi get-node
+ nifi get-nodes
+ nifi offload-node
+ nifi list-reg-clients
+ nifi create-reg-client
+ nifi update-reg-client
+ nifi get-reg-client-id
+ nifi pg-import
+ 

[GitHub] nifi pull request #3124: NIFI-5767 Added NiFi Toolkit Guide to docs

2018-11-05 Thread andrewmlim
Github user andrewmlim commented on a diff in the pull request:

https://github.com/apache/nifi/pull/3124#discussion_r230881702
  
--- Diff: nifi-docs/src/main/asciidoc/toolkit-guide.adoc ---
@@ -0,0 +1,1257 @@
+//
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+//
+= Apache NiFi Toolkit Guide
+Apache NiFi Team 
+:homepage: http://nifi.apache.org
+:linkattrs:
+
+== Overview
+The NiFi Toolkit contains several command line utilities to set up and 
support NiFi in standalone and clustered environments.  The utilities include:
+
+* CLI -- The `cli` tool enables administrators to interact with NiFi and 
NiFi Registry instances to automate tasks such as deploying versioned flows and 
managing process groups and cluster nodes.
+* Encrypt Config -- The `encrypt-config` tool encrypts the sensitive keys 
in the _nifi.properties_ file to facilitate the setup of a secure NiFi instance.
+* File Manager -- The `file-manager` tool enables administrators to 
back up, install, or restore a NiFi installation from a backup.
+* Flow Analyzer -- The `flow-analyzer` tool produces a report that helps 
administrators understand the maximum amount of data that can be stored in 
backpressure for a given flow.
+* Node Manager -- The `node-manager` tool enables administrators to 
perform status checks on nodes as well as to connect, disconnect, 
or remove nodes from the cluster.
+* Notify -- The `notify` tool enables administrators to send bulletins to 
the NiFi UI.
+* S2S -- The `s2s` tool enables administrators to send data into or out of 
NiFi flows over site-to-site.
+* TLS Toolkit -- The `tls-toolkit` utility generates the required 
keystores, truststore, and relevant configuration files to facilitate the setup 
of a secure NiFi instance.
+* ZooKeeper Migrator -- The `zk-migrator` tool enables administrators to:
+** move ZooKeeper information from one ZooKeeper cluster to another
+** migrate ZooKeeper node ownership
+
+The utilities are executed with scripts found in the `bin` folder of your 
NiFi Toolkit installation.
+
+NOTE: The NiFi Toolkit is downloaded separately from NiFi (see the 
link:https://nifi.apache.org/download.html[Apache NiFi downloads page^]).
+
+=== Prerequisites for Running in a Secure Environment
+For secured nodes and clusters, two policies should be configured in 
advance:
+
+* Access the controller – A user that will have access to these 
utilities should be authorized in NiFi by creating an “access the 
controller” policy (`/controller`) with both view and modify rights
+* Proxy user request – If not previously set, the node’s identity (the DN 
value of the node’s certificate) should be authorized to proxy requests on 
behalf of a user
+
+When executing either the Notify or Node Manager tools in a secured 
environment, the `proxyDN` option should be used in order to properly 
identify the user that was authorized to execute these commands. In non-secure 
environments, or if running the status operation on the Node Manager tool, the 
flag is ignored.
+
+== NiFi CLI
+This tool offers a CLI focused on interacting with NiFi and NiFi Registry 
in order to automate tasks, such as deploying flows from a NiFi Registry to a 
NiFi instance or managing process groups and cluster nodes.
+
+=== Usage
+The CLI toolkit can be run in standalone mode to execute a single 
command, or in interactive mode to enter an interactive shell.
+
+To execute a single command:
+
 ./bin/cli.sh <command> <args>
+
+To launch the interactive shell:
+
+ ./bin/cli.sh
+
+To show help:
+
+ cli.sh -h
--- End diff --

Updated all help examples accordingly.


---
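The standalone mode described in the quoted toolkit guide lends itself to scripting. A minimal sketch, assuming the `nifi cluster-summary` command from the guide's option list and a base-URL flag; both the flag name and the URL are illustrative assumptions, not verified against the toolkit:

```python
import shlex

def build_cli_argv(command, nifi_url="http://localhost:8080"):
    # Compose a standalone-mode invocation of the NiFi Toolkit CLI:
    #   ./bin/cli.sh <command> [args] -u <baseUrl>
    # The "-u" flag and the URL are assumptions for illustration only.
    return ["./bin/cli.sh", *shlex.split(command), "-u", nifi_url]

print(build_cli_argv("nifi cluster-summary"))
```

Passing the resulting list to `subprocess.run` would execute one command and exit, matching standalone mode; running `./bin/cli.sh` with no arguments enters the interactive shell.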


[jira] [Commented] (NIFI-5767) Documentation of the NiFi Toolkit

2018-11-05 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16675658#comment-16675658
 ] 

ASF GitHub Bot commented on NIFI-5767:
--

Github user andrewmlim commented on a diff in the pull request:

https://github.com/apache/nifi/pull/3124#discussion_r230881702
  
--- Diff: nifi-docs/src/main/asciidoc/toolkit-guide.adoc ---
@@ -0,0 +1,1257 @@
+//
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+//
+= Apache NiFi Toolkit Guide
+Apache NiFi Team 
+:homepage: http://nifi.apache.org
+:linkattrs:
+
+== Overview
+The NiFi Toolkit contains several command line utilities to set up and 
support NiFi in standalone and clustered environments.  The utilities include:
+
+* CLI -- The `cli` tool enables administrators to interact with NiFi and 
NiFi Registry instances to automate tasks such as deploying versioned flows and 
managing process groups and cluster nodes.
+* Encrypt Config -- The `encrypt-config` tool encrypts the sensitive keys 
in the _nifi.properties_ file to facilitate the setup of a secure NiFi instance.
+* File Manager -- The `file-manager` tool enables administrators to 
back up, install, or restore a NiFi installation from a backup.
+* Flow Analyzer -- The `flow-analyzer` tool produces a report that helps 
administrators understand the maximum amount of data that can be stored in 
backpressure for a given flow.
+* Node Manager -- The `node-manager` tool enables administrators to 
perform status checks on nodes as well as to connect, disconnect, 
or remove nodes from the cluster.
+* Notify -- The `notify` tool enables administrators to send bulletins to 
the NiFi UI.
+* S2S -- The `s2s` tool enables administrators to send data into or out of 
NiFi flows over site-to-site.
+* TLS Toolkit -- The `tls-toolkit` utility generates the required 
keystores, truststore, and relevant configuration files to facilitate the setup 
of a secure NiFi instance.
+* ZooKeeper Migrator -- The `zk-migrator` tool enables administrators to:
+** move ZooKeeper information from one ZooKeeper cluster to another
+** migrate ZooKeeper node ownership
+
+The utilities are executed with scripts found in the `bin` folder of your 
NiFi Toolkit installation.
+
+NOTE: The NiFi Toolkit is downloaded separately from NiFi (see the 
link:https://nifi.apache.org/download.html[Apache NiFi downloads page^]).
+
+=== Prerequisites for Running in a Secure Environment
+For secured nodes and clusters, two policies should be configured in 
advance:
+
+* Access the controller – A user that will have access to these utilities 
should be authorized in NiFi by creating an “access the controller” policy 
(`/controller`) with both view and modify rights
+* Proxy user request – If not previously set, the node’s identity (the DN 
value of the node’s certificate) should be authorized to proxy requests on 
behalf of a user
+
+When executing either the Notify or Node Manager tools in a secured 
environment, the `proxyDN` option should be used in order to properly 
identify the user that was authorized to execute these commands. In non-secure 
environments, or if running the status operation on the Node Manager tool, the 
flag is ignored.
+
+== NiFi CLI
+This tool offers a CLI focused on interacting with NiFi and NiFi Registry 
in order to automate tasks, such as deploying flows from a NiFi Registry to a 
NiFi instance or managing process groups and cluster nodes.
+
+=== Usage
+The CLI toolkit can be run in standalone mode to execute a single 
command, or in interactive mode to enter an interactive shell.
+
+To execute a single command:
+
 ./bin/cli.sh <command> <args>
+
+To launch the interactive shell:
+
+ ./bin/cli.sh
+
+To show help:
+
+ cli.sh -h
--- End diff --

Updated all help examples accordingly.


> Documentation of the NiFi Toolkit
> -
>
> Key: NIFI-5767
> 

[jira] [Commented] (NIFI-5767) Documentation of the NiFi Toolkit

2018-11-05 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16675659#comment-16675659
 ] 

ASF GitHub Bot commented on NIFI-5767:
--

Github user andrewmlim commented on a diff in the pull request:

https://github.com/apache/nifi/pull/3124#discussion_r230881754
  
--- Diff: nifi-docs/src/main/asciidoc/toolkit-guide.adoc ---
@@ -0,0 +1,1257 @@
+//
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+//
+= Apache NiFi Toolkit Guide
+Apache NiFi Team 
+:homepage: http://nifi.apache.org
+:linkattrs:
+
+== Overview
+The NiFi Toolkit contains several command line utilities to set up and 
support NiFi in standalone and clustered environments.  The utilities include:
+
+* CLI -- The `cli` tool enables administrators to interact with NiFi and 
NiFi Registry instances to automate tasks such as deploying versioned flows and 
managing process groups and cluster nodes.
+* Encrypt Config -- The `encrypt-config` tool encrypts the sensitive keys 
in the _nifi.properties_ file to facilitate the setup of a secure NiFi instance.
+* File Manager -- The `file-manager` tool enables administrators to 
back up, install, or restore a NiFi installation from a backup.
+* Flow Analyzer -- The `flow-analyzer` tool produces a report that helps 
administrators understand the maximum amount of data that can be stored in 
backpressure for a given flow.
+* Node Manager -- The `node-manager` tool enables administrators to 
perform status checks on nodes as well as to connect, disconnect, 
or remove nodes from the cluster.
+* Notify -- The `notify` tool enables administrators to send bulletins to 
the NiFi UI.
+* S2S -- The `s2s` tool enables administrators to send data into or out of 
NiFi flows over site-to-site.
+* TLS Toolkit -- The `tls-toolkit` utility generates the required 
keystores, truststore, and relevant configuration files to facilitate the setup 
of a secure NiFi instance.
+* ZooKeeper Migrator -- The `zk-migrator` tool enables administrators to:
+** move ZooKeeper information from one ZooKeeper cluster to another
+** migrate ZooKeeper node ownership
+
+The utilities are executed with scripts found in the `bin` folder of your 
NiFi Toolkit installation.
+
+NOTE: The NiFi Toolkit is downloaded separately from NiFi (see the 
link:https://nifi.apache.org/download.html[Apache NiFi downloads page^]).
+
+=== Prerequisites for Running in a Secure Environment
+For secured nodes and clusters, two policies should be configured in 
advance:
+
+* Access the controller – A user that will have access to these utilities 
should be authorized in NiFi by creating an “access the controller” policy 
(`/controller`) with both view and modify rights
+* Proxy user request – If not previously set, the node’s identity (the DN 
value of the node’s certificate) should be authorized to proxy requests on 
behalf of a user
+
+When executing either the Notify or Node Manager tools in a secured 
environment, the `proxyDN` option should be used in order to properly 
identify the user that was authorized to execute these commands. In non-secure 
environments, or if running the status operation on the Node Manager tool, the 
flag is ignored.
+
+== NiFi CLI
+This tool offers a CLI focused on interacting with NiFi and NiFi Registry 
in order to automate tasks, such as deploying flows from a NiFi Registry to a 
NiFi instance or managing process groups and cluster nodes.
+
+=== Usage
+The CLI toolkit can be run in standalone mode to execute a single 
command, or in interactive mode to enter an interactive shell.
+
+To execute a single command:
+
 ./bin/cli.sh <command> <args>
+
+To launch the interactive shell:
+
+ ./bin/cli.sh
+
+To show help:
+
+ cli.sh -h
+
+The following are available options:
+
+ demo quick-import
+ nifi current-user
+ nifi cluster-summary
+ nifi connect-node
+ nifi delete-node
+ nifi disc

[GitHub] nifi issue #3124: NIFI-5767 Added NiFi Toolkit Guide to docs

2018-11-05 Thread andrewmlim
Github user andrewmlim commented on the issue:

https://github.com/apache/nifi/pull/3124
  
@pvillard31 I think with my latest changes, ready to merge unless you see 
any other issues. Thanks!


---


[jira] [Commented] (NIFI-5767) Documentation of the NiFi Toolkit

2018-11-05 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16675663#comment-16675663
 ] 

ASF GitHub Bot commented on NIFI-5767:
--

Github user andrewmlim commented on the issue:

https://github.com/apache/nifi/pull/3124
  
@pvillard31 I think with my latest changes, ready to merge unless you see 
any other issues. Thanks!


> Documentation of the NiFi Toolkit
> -
>
> Key: NIFI-5767
> URL: https://issues.apache.org/jira/browse/NIFI-5767
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Documentation & Website
>Reporter: Pierre Villard
>Assignee: Andrew Lim
>Priority: Major
>
> The NiFi toolkit should have its own documentation in a dedicated page, 
> probably just under "Admin guide".
> The documentation should have a paragraph about each tool:
>  * CLI - 
> https://github.com/apache/nifi/blob/master/nifi-toolkit/nifi-toolkit-cli/README.md
>  * Configuration encryption - 
> https://nifi.apache.org/docs/nifi-docs/html/administration-guide.html#encrypt-config_tool
>  * File manager - 
> https://nifi.apache.org/docs/nifi-docs/html/administration-guide.html#file-manager
>  * Flow analyzer
>  * Node manager - 
> https://nifi.apache.org/docs/nifi-docs/html/administration-guide.html#node-manager
>  * Notify
>  * S2S
>  * TLS Toolkit - 
> https://nifi.apache.org/docs/nifi-docs/html/administration-guide.html#tls_generation_toolkit
>  * ZooKeeper migrator - 
> https://nifi.apache.org/docs/nifi-docs/html/administration-guide.html#zookeeper_migrator



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi pull request #3131: NIFI-3229 When a queue contains only Penalized Flow...

2018-11-05 Thread patricker
GitHub user patricker opened a pull request:

https://github.com/apache/nifi/pull/3131

NIFI-3229 When a queue contains only Penalized FlowFile's the next pr…

…ocessor Tasks/Time statistics becomes extremely large

### For all changes:
- [x] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [x] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [x] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [x] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/patricker/nifi NIFI-3229

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/3131.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3131


commit 3a42c7b671972001ed912ef8e907d5b8658554e9
Author: patricker 
Date:   2018-11-05T18:33:11Z

NIFI-3229 When a queue contains only Penalized FlowFile's the next 
processor Tasks/Time statistics becomes extremely large




---


[jira] [Commented] (NIFI-3229) When a queue contains only Penalized FlowFile's the next processor Tasks/Time statistics becomes extremely large

2018-11-05 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-3229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16675682#comment-16675682
 ] 

ASF GitHub Bot commented on NIFI-3229:
--

GitHub user patricker opened a pull request:

https://github.com/apache/nifi/pull/3131

NIFI-3229 When a queue contains only Penalized FlowFile's the next pr…

…ocessor Tasks/Time statistics becomes extremely large

### For all changes:
- [x] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [x] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [x] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [x] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/patricker/nifi NIFI-3229

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/3131.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3131


commit 3a42c7b671972001ed912ef8e907d5b8658554e9
Author: patricker 
Date:   2018-11-05T18:33:11Z

NIFI-3229 When a queue contains only Penalized FlowFile's the next 
processor Tasks/Time statistics becomes extremely large




> When a queue contains only Penalized FlowFile's the next processor Tasks/Time 
> statistics becomes extremely large
> 
>
> Key: NIFI-3229
> URL: https://issues.apache.org/jira/browse/NIFI-3229
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Dmitry Lukyanov
>Assignee: Peter Wicks
>Priority: Minor
> Attachments: flow.xml.gz, nifi-stats.png, nifi-stats2.png
>
>
> FetchFile on `not.found` produces a penalized flow file.
> In this case I'm expecting the next processor will do one task execution when 
> the flow file's penalize time is over,
> but according to stats it executes approximately 1-6 times.
> I understand that it could be a feature, but the stats become really unclear...
> Maybe there should be two columns? 
> `All Task/Times` and `Committed Task/Times`



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi-minifi-cpp issue #432: MINIFICPP-648 - add processor and add processor ...

2018-11-05 Thread phrocker
Github user phrocker commented on the issue:

https://github.com/apache/nifi-minifi-cpp/pull/432
  
@arpadboda is this good? I'm good with this otherwise. 


---


[jira] [Commented] (MINIFICPP-648) add processor and add processor with linkage nomenclature is confusing

2018-11-05 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/MINIFICPP-648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16675692#comment-16675692
 ] 

ASF GitHub Bot commented on MINIFICPP-648:
--

Github user phrocker commented on the issue:

https://github.com/apache/nifi-minifi-cpp/pull/432
  
@arpadboda is this good? I'm good with this otherwise. 


> add processor and add processor with linkage nomenclature is confusing
> --
>
> Key: MINIFICPP-648
> URL: https://issues.apache.org/jira/browse/MINIFICPP-648
> Project: NiFi MiNiFi C++
>  Issue Type: Improvement
>Reporter: Mr TheSegfault
>Assignee: Arpad Boda
>Priority: Blocker
>  Labels: CAPI
> Fix For: 0.6.0
>
>
> add_processor should be changed to always add a processor with linkage, 
> absent compelling documentation as to why this exists. As a result we will 
> need to add a create_processor function to create one without adding it to 
> the flow (certain use cases where a flow isn't needed, such as InvokeHTTP or 
> ListenHTTP). This can be moved to 0.7.0 if we tag before recent commits. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi pull request #3132: NIFI-5769: Refactored FlowController to use Composi...

2018-11-05 Thread markap14
GitHub user markap14 opened a pull request:

https://github.com/apache/nifi/pull/3132

NIFI-5769: Refactored FlowController to use Composition over Inheritance

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [ ] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/markap14/nifi NIFI-5769

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/3132.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3132


commit 8d323e0627fdb82bcff2ada1903240d0db299b82
Author: Mark Payne 
Date:   2018-10-26T14:20:08Z

NIFI-5769: Refactored FlowController to use Composition over Inheritance




---


[jira] [Commented] (NIFI-5769) FlowController should prefer composition over inheritance

2018-11-05 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16675709#comment-16675709
 ] 

ASF GitHub Bot commented on NIFI-5769:
--

GitHub user markap14 opened a pull request:

https://github.com/apache/nifi/pull/3132

NIFI-5769: Refactored FlowController to use Composition over Inheritance

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [ ] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/markap14/nifi NIFI-5769

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/3132.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3132


commit 8d323e0627fdb82bcff2ada1903240d0db299b82
Author: Mark Payne 
Date:   2018-10-26T14:20:08Z

NIFI-5769: Refactored FlowController to use Composition over Inheritance




> FlowController should prefer composition over inheritance
> -
>
> Key: NIFI-5769
> URL: https://issues.apache.org/jira/browse/NIFI-5769
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Major
>
> Currently, FlowController implements many different interfaces. At this time, 
> the class is several thousand lines of code, which makes rendering take quite 
> a while in IDEs and makes it more difficult to edit and maintain. Many of 
> these interfaces are unrelated, and FlowController has become a bit of a 
> hodgepodge of functionality. We should refactor FlowController to externalize 
> much of this logic and let FlowController use composition rather than 
> inheritance.
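The direction described above can be illustrated generically. This is not NiFi's actual code; the collaborator names below are hypothetical, and the sketch only shows the shape of the refactoring: delegate to focused components instead of inheriting many unrelated interfaces in one class.

```python
class EventReporter:
    """Hypothetical collaborator: bulletin/event-reporting concern."""
    def report(self, message):
        return f"event: {message}"

class ClusterCoordinator:
    """Hypothetical collaborator: cluster-membership concern."""
    def connect_node(self, node_id):
        return f"connected {node_id}"

class FlowController:
    # Composition: the controller owns small collaborators and forwards to
    # them, rather than implementing every interface in one huge class.
    def __init__(self):
        self.reporter = EventReporter()
        self.coordinator = ClusterCoordinator()

    def connect_node(self, node_id):
        self.reporter.report(f"connecting {node_id}")
        return self.coordinator.connect_node(node_id)

print(FlowController().connect_node("node-1"))  # connected node-1
```

Each concern can then be tested and maintained on its own, and the controller shrinks toward glue code.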



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-5735) Record-oriented processors/services do not properly support Avro Unions

2018-11-05 Thread Alex Savitsky (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5735?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Savitsky updated NIFI-5735:

Attachment: NIFI-5735.patch

> Record-oriented processors/services do not properly support Avro Unions
> ---
>
> Key: NIFI-5735
> URL: https://issues.apache.org/jira/browse/NIFI-5735
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework, Extensions
>Affects Versions: 1.7.1
>Reporter: Daniel Solow
>Priority: Major
>  Labels: AVRO, avro
> Attachments: 
> 0001-NIFI-5735-added-preliminary-support-for-union-resolu.patch, 
> NIFI-5735.patch
>
>
> The [Avro spec|https://avro.apache.org/docs/1.8.2/spec.html#Unions] states:
> {quote}Unions may not contain more than one schema with the same type, 
> *except for the named types* record, fixed and enum. For example, unions 
> containing two array types or two map types are not permitted, but two types 
> with different names are permitted. (Names permit efficient resolution when 
> reading and writing unions.)
> {quote}
> However, record-oriented processors/services in NiFi do not support multiple 
> named types per union. This is a problem, for example, with the following 
> schema:
> {code:javascript}
> {
>   "type": "record",
>   "name": "root",
>   "fields": [
>     {
>       "name": "children",
>       "type": {
>         "type": "array",
>         "items": [
>           {
>             "type": "record",
>             "name": "left",
>             "fields": [
>               { "name": "f1", "type": "string" }
>             ]
>           },
>           {
>             "type": "record",
>             "name": "right",
>             "fields": [
>               { "name": "f2", "type": "int" }
>             ]
>           }
>         ]
>       }
>     }
>   ]
> }
> {code}
>  This schema contains a field named "children", which is an array whose item 
> type is a union. The union contains two possible record types. Currently the 
> NiFi Avro utilities will fail to process records of this schema whose 
> "children" arrays contain both "left" and "right" record types.
> I've traced this bug to the [AvroTypeUtil 
> class|https://github.com/apache/nifi/blob/rel/nifi-1.7.1/nifi-nar-bundles/nifi-extension-utils/nifi-record-utils/nifi-avro-record-utils/src/main/java/org/apache/nifi/avro/AvroTypeUtil.java].
> Specifically there are bugs in the convertUnionFieldValue method and in the 
> buildAvroSchema method. Both of these methods make the assumption that an 
> Avro union can only contain one child type of each type. As stated in the 
> spec, this is true for primitive types and non-named complex types but not 
> for named types.
>  There may be related bugs elsewhere, but I haven't been able to locate them 
> yet.
>  
>  
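The failure mode described above can be illustrated with a small, language-neutral sketch (this is not the NiFi code; the dictionaries stand in for Avro branch schemas). Indexing union branches by their *type* alone collapses two named records into one entry, while indexing by the record *name*, as the Avro spec intends for named types, keeps both resolvable:

```python
# Two named record branches of the same union (stand-ins for Avro schemas).
union = [
    {"type": "record", "name": "left",  "fields": [{"name": "f1", "type": "string"}]},
    {"type": "record", "name": "right", "fields": [{"name": "f2", "type": "int"}]},
]

# Keyed by type: the "right" record silently overwrites "left".
by_type = {branch["type"]: branch for branch in union}
assert len(by_type) == 1

# Keyed by full name: both branches remain resolvable.
by_name = {branch["name"]: branch for branch in union}
assert len(by_name) == 2
```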





[jira] [Commented] (NIFI-5735) Record-oriented processors/services do not properly support Avro Unions

2018-11-05 Thread Alex Savitsky (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16675735#comment-16675735
 ] 

Alex Savitsky commented on NIFI-5735:
-

Attached is a patch against the master NiFi branch that fixes the issue. 
General idea: convertToAvroObject now returns a pair of the original conversion 
result and the number of fields that failed the conversion for the underlying 
record type, if any (0 otherwise). The only place where the second pair element 
is used, is in the lambda passed to convertUnionFieldValue. Instead of simply 
returning the converted Avro object, the lambda now inspects the number of 
failed fields, throwing an exception if this number is not zero. This signals 
the schema conversion error to the caller, allowing convertUnionFieldValue to 
continue iterating union schemas, until one is found that has all the fields 
recognized.

[^NIFI-5735.patch]
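The resolution strategy described above can be sketched as follows (illustrative Python with hypothetical names, not the patch code): attempt each union branch in turn, count the fields that fail to convert, and keep the first branch with zero failures.

```python
def convert(value, schema):
    """Return (converted, failed_field_count) for one candidate record schema."""
    failed = 0
    out = {}
    for field in schema["fields"]:
        name = field["name"]
        if name in value:
            out[name] = value[name]
        else:
            failed += 1  # field missing: this branch may be the wrong one
    return out, failed

def resolve_union(value, union_schemas):
    """Pick the first union branch whose fields all convert cleanly."""
    for schema in union_schemas:
        converted, failed = convert(value, schema)
        if failed == 0:
            return schema["name"], converted
    raise ValueError("no union branch matched all fields")

union = [
    {"name": "left",  "fields": [{"name": "f1"}]},
    {"name": "right", "fields": [{"name": "f2"}]},
]
assert resolve_union({"f2": 7}, union)[0] == "right"
```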

> Record-oriented processors/services do not properly support Avro Unions
> ---
>
> Key: NIFI-5735
> URL: https://issues.apache.org/jira/browse/NIFI-5735
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework, Extensions
>Affects Versions: 1.7.1
>Reporter: Daniel Solow
>Priority: Major
>  Labels: AVRO, avro
> Attachments: 
> 0001-NIFI-5735-added-preliminary-support-for-union-resolu.patch, 
> NIFI-5735.patch
>
>
> The [Avro spec|https://avro.apache.org/docs/1.8.2/spec.html#Unions] states:
> {quote}Unions may not contain more than one schema with the same type, 
> *except for the named types* record, fixed and enum. For example, unions 
> containing two array types or two map types are not permitted, but two types 
> with different names are permitted. (Names permit efficient resolution when 
> reading and writing unions.)
> {quote}
> However record oriented processors/services in Nifi do not support multiple 
> named types per union. This is a problem, for example, with the following 
> schema:
> {code:javascript}
> {
>   "type": "record",
>   "name": "root",
>   "fields": [
>     {
>       "name": "children",
>       "type": {
>         "type": "array",
>         "items": [
>           {
>             "type": "record",
>             "name": "left",
>             "fields": [
>               {"name": "f1", "type": "string"}
>             ]
>           },
>           {
>             "type": "record",
>             "name": "right",
>             "fields": [
>               {"name": "f2", "type": "int"}
>             ]
>           }
>         ]
>       }
>     }
>   ]
> }
> {code}
>  This schema contains a field named "children" whose type is an array of a 
> union. The union contains two possible record types. Currently the NiFi Avro 
> utilities fail to process records of this schema when a "children" array 
> contains both "left" and "right" record types.
> I've traced this bug to the [AvroTypeUtil 
> class|https://github.com/apache/nifi/blob/rel/nifi-1.7.1/nifi-nar-bundles/nifi-extension-utils/nifi-record-utils/nifi-avro-record-utils/src/main/java/org/apache/nifi/avro/AvroTypeUtil.java].
> Specifically there are bugs in the convertUnionFieldValue method and in the 
> buildAvroSchema method. Both of these methods make the assumption that an 
> Avro union can only contain one child type of each type. As stated in the 
> spec, this is true for primitive types and non-named complex types but not 
> for named types.
>  There may be related bugs elsewhere, but I haven't been able to locate them 
> yet.
>  
>  





[GitHub] nifi pull request #3128: NIFI-5788: Introduce batch size limit in PutDatabas...

2018-11-05 Thread patricker
Github user patricker commented on a diff in the pull request:

https://github.com/apache/nifi/pull/3128#discussion_r230916140
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/PutDatabaseRecord.java
 ---
@@ -265,6 +265,17 @@
 
.expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY)
 .build();
 
+static final PropertyDescriptor BATCH_SIZE = new 
PropertyDescriptor.Builder()
+.name("put-db-record-batch-size")
+.displayName("Bulk Size")
+.description("Specifies batch size for INSERT and UPDATE 
statements. This parameter has no effect for other statements specified in 
'Statement Type'."
++ " Non-positive value has the effect of infinite bulk 
size.")
+.defaultValue("-1")
--- End diff --

I agree that `0` should be the default, and would replicate the current 
behavior of the processor, "All records in one batch".
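The batching behavior under discussion can be sketched as follows (illustrative Python, not the PutDatabaseRecord source; `execute_batch` stands in for JDBC's `PreparedStatement.executeBatch()`): flush every `batch_size` rows, with a non-positive size meaning "all records in one batch".

```python
def execute_in_batches(rows, batch_size, execute_batch):
    pending = 0
    for _row in rows:
        pending += 1  # stand-in for ps.addBatch()
        if batch_size > 0 and pending == batch_size:
            execute_batch(pending)  # flush a full batch
            pending = 0
    if pending > 0:
        execute_batch(pending)  # flush the final (possibly partial) batch

flushes = []
execute_in_batches(range(10), 4, flushes.append)
assert flushes == [4, 4, 2]

flushes = []
execute_in_batches(range(10), 0, flushes.append)  # non-positive: one big batch
assert flushes == [10]
```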


---


[jira] [Commented] (NIFI-5788) Introduce batch size limit in PutDatabaseRecord processor

2018-11-05 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16675775#comment-16675775
 ] 

ASF GitHub Bot commented on NIFI-5788:
--

Github user patricker commented on a diff in the pull request:

https://github.com/apache/nifi/pull/3128#discussion_r230916140
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/PutDatabaseRecord.java
 ---
@@ -265,6 +265,17 @@
 
.expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY)
 .build();
 
+static final PropertyDescriptor BATCH_SIZE = new 
PropertyDescriptor.Builder()
+.name("put-db-record-batch-size")
+.displayName("Bulk Size")
+.description("Specifies batch size for INSERT and UPDATE 
statements. This parameter has no effect for other statements specified in 
'Statement Type'."
++ " Non-positive value has the effect of infinite bulk 
size.")
+.defaultValue("-1")
--- End diff --

I agree that `0` should be the default, and would replicate the current 
behavior of the processor, "All records in one batch".


> Introduce batch size limit in PutDatabaseRecord processor
> -
>
> Key: NIFI-5788
> URL: https://issues.apache.org/jira/browse/NIFI-5788
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
> Environment: Teradata DB
>Reporter: Vadim
>Priority: Major
>  Labels: pull-request-available
>
> Certain JDBC drivers do not support unlimited batch size in INSERT/UPDATE 
> prepared SQL statements. Specifically, Teradata JDBC driver 
> ([https://downloads.teradata.com/download/connectivity/jdbc-driver)] would 
> fail SQL statement when the batch overflows the internal limits.
> Dividing data into smaller chunks before the PutDatabaseRecord is applied can 
> work around the issue in certain scenarios, but generally, this solution is 
> not perfect because the SQL statements would be executed in different 
> transaction contexts and data integrity would not be preserved.
> The solution suggests the following:
>  * introduce a new optional parameter in *PutDatabaseRecord* processor, 
> *batch_size* which defines the maximum size of the bulk in INSERT/UPDATE 
> statement; its default value is -1 (INFINITY) preserves the old behavior
>  * divide the input into batches of the specified size and invoke 
> PreparedStatement.executeBatch()  for each batch
> Pull request: [https://github.com/apache/nifi/pull/3128]
>  





[GitHub] nifi pull request #3133: NIFI-5790: Exposes 6 commons-dbcp options in DBCPCo...

2018-11-05 Thread colindean
GitHub user colindean opened a pull request:

https://github.com/apache/nifi/pull/3133

NIFI-5790: Exposes 6 commons-dbcp options in DBCPConnectionPool

These six options support the eviction and passivation of idle
connections.

This adds the feature described in NIFI-5790 in pursuit of a
fix for NIFI-5789.



Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [X] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [X] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [X] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [X] Is your initial contribution a single, squashed commit?

### For code changes:
- [X] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] ~~If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?~~
- [ ] ~~If applicable, have you updated the LICENSE file, including the 
main LICENSE file under nifi-assembly?~~
- [ ] ~~If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?~~
- [ ] ~~If adding new Properties, have you added .displayName in addition 
to .name (programmatic access) for each of the new properties?~~ 

### For documentation related changes:
- [ ] ~~Have you ensured that format looks appropriate for the output in 
which it is rendered?~~



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/colindean/nifi nifi-5790_expose-dbcp-options

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/3133.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3133


commit 2bd51cbc7495529acc174655d187fdefb7451382
Author: Colin Dean 
Date:   2018-11-05T21:18:21Z

NIFI-5790: Exposes 6 commons-dbcp options in DBCPConnectionPool

These six options support the eviction and passivation of idle
connections.

This adds the feature described in NIFI-5790 in pursuit of a
fix for NIFI-5789.




---


[jira] [Commented] (NIFI-5790) DBCPConnectionPool configuration should expose underlying connection idle and eviction configuration

2018-11-05 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16675778#comment-16675778
 ] 

ASF GitHub Bot commented on NIFI-5790:
--

GitHub user colindean opened a pull request:

https://github.com/apache/nifi/pull/3133

NIFI-5790: Exposes 6 commons-dbcp options in DBCPConnectionPool

These six options support the eviction and passivation of idle
connections.

This adds the feature described in NIFI-5790 in pursuit of a
fix for NIFI-5789.



Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [X] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [X] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [X] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [X] Is your initial contribution a single, squashed commit?

### For code changes:
- [X] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] ~~If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?~~
- [ ] ~~If applicable, have you updated the LICENSE file, including the 
main LICENSE file under nifi-assembly?~~
- [ ] ~~If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?~~
- [ ] ~~If adding new Properties, have you added .displayName in addition 
to .name (programmatic access) for each of the new properties?~~ 

### For documentation related changes:
- [ ] ~~Have you ensured that format looks appropriate for the output in 
which it is rendered?~~



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/colindean/nifi nifi-5790_expose-dbcp-options

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/3133.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3133


commit 2bd51cbc7495529acc174655d187fdefb7451382
Author: Colin Dean 
Date:   2018-11-05T21:18:21Z

NIFI-5790: Exposes 6 commons-dbcp options in DBCPConnectionPool

These six options support the eviction and passivation of idle
connections.

This adds the feature described in NIFI-5790 in pursuit of a
fix for NIFI-5789.




> DBCPConnectionPool configuration should expose underlying connection idle and 
> eviction configuration
> 
>
> Key: NIFI-5790
> URL: https://issues.apache.org/jira/browse/NIFI-5790
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.8.0
>Reporter: Colin Dean
>Priority: Major
>  Labels: DBCP, database
>
> While investigating a fix for NIFI-5789, I noticed in the [DBCPConnectionPool 
> documentation|https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-dbcp-service-nar/1.8.0/org.apache.nifi.dbcp.DBCPConnectionPool/index.html]
>  that NiFi appears _not_ to have controller service configuration options 
> associated with [Apache 
> Commons-DBCP|https://commons.apache.org/proper/commons-dbcp/configuration.html]
>  {{BasicDataSource}} parameters like {{minIdle}} and {{maxIdle}}, which I 
> think should be both set to 0 in my particular use case. 
> Alternatively, I think I could set {{maxConnLifetimeMillis}} to something 
> even in the minutes range and satisfy my use case (a connection need not be 
> released _immediately_ but within a reasonable period of time), but this 
> option is also not available.
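The semantics behind these options can be illustrated with a small sketch (this is NOT the commons-dbcp API, just an illustration of the eviction rules that `maxIdle` and `maxConnLifetimeMillis` control; a `minIdle` floor would be enforced by a separate "ensure minimum idle" step in the real pool):

```python
def evict(idle_conns, max_idle, max_lifetime_s, now):
    """One eviction run: drop expired connections, then trim to max_idle."""
    alive = [c for c in idle_conns if now - c["created"] < max_lifetime_s]
    if max_idle >= 0:  # a negative max_idle means "no limit"
        alive = alive[:max_idle]
    return alive

now = 100.0
pool = [{"id": i, "created": now - age} for i, age in enumerate([5, 30, 120, 300])]

# A 60-second lifetime evicts the two oldest connections.
assert [c["id"] for c in evict(pool, -1, 60, now)] == [0, 1]

# maxIdle=0 (the use case described above) then releases the rest.
assert evict(pool, 0, 60, now) == []
```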





[GitHub] nifi pull request #3128: NIFI-5788: Introduce batch size limit in PutDatabas...

2018-11-05 Thread patricker
Github user patricker commented on a diff in the pull request:

https://github.com/apache/nifi/pull/3128#discussion_r230917511
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/PutDatabaseRecord.java
 ---
@@ -669,11 +685,20 @@ private void executeDML(ProcessContext context, 
ProcessSession session, FlowFile
 }
 }
 ps.addBatch();
+if (++currentBatchSize == batchSize) {
--- End diff --

Would it be beneficial to capture `currentBatchSize*batchIndex`, with 
`batchIndex` being incremented only after a successful call to `executeBatch()` 
as an attribute? My thinking is, if you have a failure, and only part of a 
batch was loaded, you could store how many rows were loaded successfully as an 
attribute?
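The idea in the comment above can be sketched like this (hypothetical names, not NiFi code): count only the rows confirmed by a successful `executeBatch()` call, so that on failure the number of rows already loaded is available, e.g. for a flow-file attribute.

```python
def load_with_progress(rows, batch_size, execute_batch):
    committed = 0
    pending = 0
    try:
        for _row in rows:
            pending += 1  # stand-in for ps.addBatch()
            if pending == batch_size:
                execute_batch()        # may raise on a bad batch
                committed += pending   # counted only after success
                pending = 0
        if pending:
            execute_batch()
            committed += pending
    except RuntimeError:
        pass  # NiFi could expose `committed` as an attribute here
    return committed

calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] == 3:  # the third executeBatch() call fails
        raise RuntimeError("db error")

# 10 rows in batches of 4: two full batches succeed, the final batch fails.
assert load_with_progress(range(10), 4, flaky) == 8
```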


---


[jira] [Commented] (NIFI-5788) Introduce batch size limit in PutDatabaseRecord processor

2018-11-05 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16675777#comment-16675777
 ] 

ASF GitHub Bot commented on NIFI-5788:
--

Github user patricker commented on a diff in the pull request:

https://github.com/apache/nifi/pull/3128#discussion_r230917511
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/PutDatabaseRecord.java
 ---
@@ -669,11 +685,20 @@ private void executeDML(ProcessContext context, 
ProcessSession session, FlowFile
 }
 }
 ps.addBatch();
+if (++currentBatchSize == batchSize) {
--- End diff --

Would it be beneficial to capture `currentBatchSize*batchIndex`, with 
`batchIndex` being incremented only after a successful call to `executeBatch()` 
as an attribute? My thinking is, if you have a failure, and only part of a 
batch was loaded, you could store how many rows were loaded successfully as an 
attribute?


> Introduce batch size limit in PutDatabaseRecord processor
> -
>
> Key: NIFI-5788
> URL: https://issues.apache.org/jira/browse/NIFI-5788
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
> Environment: Teradata DB
>Reporter: Vadim
>Priority: Major
>  Labels: pull-request-available
>
> Certain JDBC drivers do not support unlimited batch size in INSERT/UPDATE 
> prepared SQL statements. Specifically, Teradata JDBC driver 
> ([https://downloads.teradata.com/download/connectivity/jdbc-driver)] would 
> fail SQL statement when the batch overflows the internal limits.
> Dividing data into smaller chunks before the PutDatabaseRecord is applied can 
> work around the issue in certain scenarios, but generally, this solution is 
> not perfect because the SQL statements would be executed in different 
> transaction contexts and data integrity would not be preserved.
> The solution suggests the following:
>  * introduce a new optional parameter in *PutDatabaseRecord* processor, 
> *batch_size* which defines the maximum size of the bulk in INSERT/UPDATE 
> statement; its default value is -1 (INFINITY) preserves the old behavior
>  * divide the input into batches of the specified size and invoke 
> PreparedStatement.executeBatch()  for each batch
> Pull request: [https://github.com/apache/nifi/pull/3128]
>  





[GitHub] nifi issue #3133: NIFI-5790: Exposes 6 commons-dbcp options in DBCPConnectio...

2018-11-05 Thread colindean
Github user colindean commented on the issue:

https://github.com/apache/nifi/pull/3133
  
> If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

I added properties but I used a readable string for `.name`.

> Have you written or updated unit tests to verify your changes?

I'm not sure how best I can _operationally_ test these, or if that makes 
sense because we'd be testing the API functionality of commons-dbcp…


---


[jira] [Commented] (NIFI-5790) DBCPConnectionPool configuration should expose underlying connection idle and eviction configuration

2018-11-05 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16675779#comment-16675779
 ] 

ASF GitHub Bot commented on NIFI-5790:
--

Github user colindean commented on the issue:

https://github.com/apache/nifi/pull/3133
  
> If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

I added properties but I used a readable string for `.name`.

> Have you written or updated unit tests to verify your changes?

I'm not sure how best I can _operationally_ test these, or if that makes 
sense because we'd be testing the API functionality of commons-dbcp…


> DBCPConnectionPool configuration should expose underlying connection idle and 
> eviction configuration
> 
>
> Key: NIFI-5790
> URL: https://issues.apache.org/jira/browse/NIFI-5790
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.8.0
>Reporter: Colin Dean
>Priority: Major
>  Labels: DBCP, database
>
> While investigating a fix for NIFI-5789, I noticed in the [DBCPConnectionPool 
> documentation|https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-dbcp-service-nar/1.8.0/org.apache.nifi.dbcp.DBCPConnectionPool/index.html]
>  that NiFi appears _not_ to have controller service configuration options 
> associated with [Apache 
> Commons-DBCP|https://commons.apache.org/proper/commons-dbcp/configuration.html]
>  {{BasicDataSource}} parameters like {{minIdle}} and {{maxIdle}}, which I 
> think should be both set to 0 in my particular use case. 
> Alternatively, I think I could set {{maxConnLifetimeMillis}} to something 
> even in the minutes range and satisfy my use case (a connection need not be 
> released _immediately_ but within a reasonable period of time), but this 
> option is also not available.





[GitHub] nifi pull request #3133: NIFI-5790: Exposes 6 commons-dbcp options in DBCPCo...

2018-11-05 Thread patricker
Github user patricker commented on a diff in the pull request:

https://github.com/apache/nifi/pull/3133#discussion_r230920291
  
--- Diff: 
nifi-nar-bundles/nifi-standard-services/nifi-dbcp-service-bundle/nifi-dbcp-service/src/main/java/org/apache/nifi/dbcp/DBCPConnectionPool.java
 ---
@@ -164,6 +161,71 @@ public ValidationResult validate(final String subject, 
final String input, final
 
.expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY)
 .build();
 
+public static final PropertyDescriptor MIN_IDLE = new 
PropertyDescriptor.Builder()
+.name("Minimum Idle Connections")
+.description("The minimum number of connections that can 
remain idle in the pool, without extra ones being " +
+"created, or zero to create none.")
+.defaultValue("0")
+.required(true)
+.addValidator(StandardValidators.INTEGER_VALIDATOR)
+.sensitive(false)
--- End diff --

You don't need `sensitive(false)`. I think you're safe to remove it from 
these new properties.


---


[GitHub] nifi pull request #3133: NIFI-5790: Exposes 6 commons-dbcp options in DBCPCo...

2018-11-05 Thread patricker
Github user patricker commented on a diff in the pull request:

https://github.com/apache/nifi/pull/3133#discussion_r230921657
  
--- Diff: 
nifi-nar-bundles/nifi-standard-services/nifi-dbcp-service-bundle/nifi-dbcp-service/src/main/java/org/apache/nifi/dbcp/DBCPConnectionPool.java
 ---
@@ -164,6 +161,71 @@ public ValidationResult validate(final String subject, 
final String input, final
 
.expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY)
 .build();
 
+public static final PropertyDescriptor MIN_IDLE = new 
PropertyDescriptor.Builder()
+.name("Minimum Idle Connections")
+.description("The minimum number of connections that can 
remain idle in the pool, without extra ones being " +
+"created, or zero to create none.")
+.defaultValue("0")
+.required(true)
+.addValidator(StandardValidators.INTEGER_VALIDATOR)
+.sensitive(false)
+.build();
+
+public static final PropertyDescriptor MAX_IDLE = new 
PropertyDescriptor.Builder()
+.name("Max Idle Connections")
+.description("The maximum number of connections that can 
remain idle in the pool, without extra ones being " +
+"released, or negative for no limit.")
+.defaultValue("8")
--- End diff --

Setting this to `8` feels so weird, even though it is legitimately the 
default value in the DBCP library.


---


[jira] [Commented] (NIFI-5790) DBCPConnectionPool configuration should expose underlying connection idle and eviction configuration

2018-11-05 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16675784#comment-16675784
 ] 

ASF GitHub Bot commented on NIFI-5790:
--

Github user patricker commented on a diff in the pull request:

https://github.com/apache/nifi/pull/3133#discussion_r230920291
  
--- Diff: 
nifi-nar-bundles/nifi-standard-services/nifi-dbcp-service-bundle/nifi-dbcp-service/src/main/java/org/apache/nifi/dbcp/DBCPConnectionPool.java
 ---
@@ -164,6 +161,71 @@ public ValidationResult validate(final String subject, 
final String input, final
 
.expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY)
 .build();
 
+public static final PropertyDescriptor MIN_IDLE = new 
PropertyDescriptor.Builder()
+.name("Minimum Idle Connections")
+.description("The minimum number of connections that can 
remain idle in the pool, without extra ones being " +
+"created, or zero to create none.")
+.defaultValue("0")
+.required(true)
+.addValidator(StandardValidators.INTEGER_VALIDATOR)
+.sensitive(false)
--- End diff --

You don't need `sensitive(false)`. I think you're safe to remove it from 
these new properties.


> DBCPConnectionPool configuration should expose underlying connection idle and 
> eviction configuration
> 
>
> Key: NIFI-5790
> URL: https://issues.apache.org/jira/browse/NIFI-5790
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.8.0
>Reporter: Colin Dean
>Priority: Major
>  Labels: DBCP, database
>
> While investigating a fix for NIFI-5789, I noticed in the [DBCPConnectionPool 
> documentation|https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-dbcp-service-nar/1.8.0/org.apache.nifi.dbcp.DBCPConnectionPool/index.html]
>  that NiFi appears _not_ to have controller service configuration options 
> associated with [Apache 
> Commons-DBCP|https://commons.apache.org/proper/commons-dbcp/configuration.html]
>  {{BasicDataSource}} parameters like {{minIdle}} and {{maxIdle}}, which I 
> think should be both set to 0 in my particular use case. 
> Alternatively, I think I could set {{maxConnLifetimeMillis}} to something 
> even in the minutes range and satisfy my use case (a connection need not be 
> released _immediately_ but within a reasonable period of time), but this 
> option is also not available.





[jira] [Commented] (NIFI-5790) DBCPConnectionPool configuration should expose underlying connection idle and eviction configuration

2018-11-05 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16675783#comment-16675783
 ] 

ASF GitHub Bot commented on NIFI-5790:
--

Github user patricker commented on a diff in the pull request:

https://github.com/apache/nifi/pull/3133#discussion_r230921657
  
--- Diff: 
nifi-nar-bundles/nifi-standard-services/nifi-dbcp-service-bundle/nifi-dbcp-service/src/main/java/org/apache/nifi/dbcp/DBCPConnectionPool.java
 ---
@@ -164,6 +161,71 @@ public ValidationResult validate(final String subject, 
final String input, final
 
.expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY)
 .build();
 
+public static final PropertyDescriptor MIN_IDLE = new 
PropertyDescriptor.Builder()
+.name("Minimum Idle Connections")
+.description("The minimum number of connections that can 
remain idle in the pool, without extra ones being " +
+"created, or zero to create none.")
+.defaultValue("0")
+.required(true)
+.addValidator(StandardValidators.INTEGER_VALIDATOR)
+.sensitive(false)
+.build();
+
+public static final PropertyDescriptor MAX_IDLE = new 
PropertyDescriptor.Builder()
+.name("Max Idle Connections")
+.description("The maximum number of connections that can 
remain idle in the pool, without extra ones being " +
+"released, or negative for no limit.")
+.defaultValue("8")
--- End diff --

Setting this to `8` feels so weird, even though it is legitimately the 
default value in the DBCP library.


> DBCPConnectionPool configuration should expose underlying connection idle and 
> eviction configuration
> 
>
> Key: NIFI-5790
> URL: https://issues.apache.org/jira/browse/NIFI-5790
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.8.0
>Reporter: Colin Dean
>Priority: Major
>  Labels: DBCP, database
>
> While investigating a fix for NIFI-5789, I noticed in the [DBCPConnectionPool 
> documentation|https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-dbcp-service-nar/1.8.0/org.apache.nifi.dbcp.DBCPConnectionPool/index.html]
>  that NiFi appears _not_ to have controller service configuration options 
> associated with [Apache 
> Commons-DBCP|https://commons.apache.org/proper/commons-dbcp/configuration.html]
>  {{BasicDataSource}} parameters like {{minIdle}} and {{maxIdle}}, which I 
> think should be both set to 0 in my particular use case. 
> Alternatively, I think I could set {{maxConnLifetimeMillis}} to something 
> even in the minutes range and satisfy my use case (a connection need not be 
> released _immediately_ but within a reasonable period of time), but this 
> option is also not available.





[GitHub] nifi pull request #3133: NIFI-5790: Exposes 6 commons-dbcp options in DBCPCo...

2018-11-05 Thread patricker
Github user patricker commented on a diff in the pull request:

https://github.com/apache/nifi/pull/3133#discussion_r230927408
  
--- Diff: 
nifi-nar-bundles/nifi-standard-services/nifi-dbcp-service-bundle/nifi-dbcp-service/src/main/java/org/apache/nifi/dbcp/DBCPConnectionPool.java
 ---
@@ -164,6 +161,71 @@ public ValidationResult validate(final String subject, 
final String input, final
 
.expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY)
 .build();
 
+public static final PropertyDescriptor MIN_IDLE = new 
PropertyDescriptor.Builder()
+.name("Minimum Idle Connections")
--- End diff --

I saw your comment about `.name`. I know the existing properties in 
DBCPConnectionPool do not use `displayName`, but that is only because they are 
from before the change in standards.

Can you move the new `name` property values to `displayName`, and set 
`name` to something like `dbcp-min-idle-conns`?


---


[jira] [Commented] (NIFI-5790) DBCPConnectionPool configuration should expose underlying connection idle and eviction configuration

2018-11-05 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16675798#comment-16675798
 ] 

ASF GitHub Bot commented on NIFI-5790:
--

Github user patricker commented on a diff in the pull request:

https://github.com/apache/nifi/pull/3133#discussion_r230927408
  
--- Diff: 
nifi-nar-bundles/nifi-standard-services/nifi-dbcp-service-bundle/nifi-dbcp-service/src/main/java/org/apache/nifi/dbcp/DBCPConnectionPool.java
 ---
@@ -164,6 +161,71 @@ public ValidationResult validate(final String subject, 
final String input, final
 
.expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY)
 .build();
 
+public static final PropertyDescriptor MIN_IDLE = new 
PropertyDescriptor.Builder()
+.name("Minimum Idle Connections")
--- End diff --

I saw your comment about `.name`. I know the existing properties in 
DBCPConnectionPool do not use `displayName`, but that is only because they are 
from before the change in standards.

Can you move the new `name` property values to `displayName`, and set 
`name` to something like `dbcp-min-idle-conns`?


> DBCPConnectionPool configuration should expose underlying connection idle and 
> eviction configuration
> 
>
> Key: NIFI-5790
> URL: https://issues.apache.org/jira/browse/NIFI-5790
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.8.0
>Reporter: Colin Dean
>Priority: Major
>  Labels: DBCP, database
>
> While investigating a fix for NIFI-5789, I noticed in the [DBCPConnectionPool 
> documentation|https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-dbcp-service-nar/1.8.0/org.apache.nifi.dbcp.DBCPConnectionPool/index.html]
>  that NiFi appears _not_ to have controller service configuration options 
> associated with [Apache 
> Commons-DBCP|https://commons.apache.org/proper/commons-dbcp/configuration.html]
>  {{BasicDataSource}} parameters like {{minIdle}} and {{maxIdle}}, which I 
> think should be both set to 0 in my particular use case. 
> Alternatively, I think I could set {{maxConnLifetimeMillis}} to something 
> even in the minutes range and satisfy my use case (a connection need not be 
> released _immediately_ but within a reasonable period of time), but this 
> option is also not available.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi pull request #3133: NIFI-5790: Exposes 6 commons-dbcp options in DBCPCo...

2018-11-05 Thread colindean
Github user colindean commented on a diff in the pull request:

https://github.com/apache/nifi/pull/3133#discussion_r230935621
  
--- Diff: nifi-nar-bundles/nifi-standard-services/nifi-dbcp-service-bundle/nifi-dbcp-service/src/main/java/org/apache/nifi/dbcp/DBCPConnectionPool.java ---
@@ -164,6 +161,71 @@ public ValidationResult validate(final String subject, final String input, final
 .expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY)
 .build();
 
+public static final PropertyDescriptor MIN_IDLE = new PropertyDescriptor.Builder()
+.name("Minimum Idle Connections")
+.description("The minimum number of connections that can remain idle in the pool, without extra ones being " +
+"created, or zero to create none.")
+.defaultValue("0")
+.required(true)
+.addValidator(StandardValidators.INTEGER_VALIDATOR)
+.sensitive(false)
+.build();
+
+public static final PropertyDescriptor MAX_IDLE = new PropertyDescriptor.Builder()
+.name("Max Idle Connections")
+.description("The maximum number of connections that can remain idle in the pool, without extra ones being " +
+"released, or negative for no limit.")
+.defaultValue("8")
--- End diff --

There are some defaults I could reference, e.g. 
`org.apache.commons.pool2.impl.BaseObjectPoolConfig#DEFAULT_MIN_EVICTABLE_IDLE_TIME_MILLIS`
 and `org.apache.commons.pool2.impl.GenericObjectPoolConfig#DEFAULT_MAX_IDLE`. 
Should I instead reach into DBCP to get those?
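One answer to the question above, sketched below: derive the NiFi property's default from the pool library's own constant rather than hard-coding the literal "8", so the descriptor can never drift from commons-pool2. `DEFAULT_MAX_IDLE` here is a local stand-in for `org.apache.commons.pool2.impl.GenericObjectPoolConfig#DEFAULT_MAX_IDLE` (whose value is 8); a real build would import that constant directly:

```java
// Sketch only: DEFAULT_MAX_IDLE stands in for the commons-pool2 constant
// GenericObjectPoolConfig.DEFAULT_MAX_IDLE, which equals 8.
public class DefaultFromLibrarySketch {
    static final int DEFAULT_MAX_IDLE = 8; // local stand-in for the library constant

    // The string that would feed PropertyDescriptor.Builder#defaultValue(String)
    static String maxIdleDefault() {
        return String.valueOf(DEFAULT_MAX_IDLE);
    }

    public static void main(String[] args) {
        System.out.println(maxIdleDefault()); // prints 8
    }
}
```

The trade-off is that the NiFi default then changes silently if the library's constant ever changes, which is exactly the "explicit versus latest default" tension discussed later in the thread.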


---



[GitHub] nifi issue #3133: NIFI-5790: Exposes 6 commons-dbcp options in DBCPConnectio...

2018-11-05 Thread colindean
Github user colindean commented on the issue:

https://github.com/apache/nifi/pull/3133
  
> What about making all of the new fields not required? `required(false)`. 
Then, if a value is provided, you override, otherwise you leave it be. You 
could call out the default values, or default functionality at least, in the 
property description?

I _could_ go this route, but I'm not sure where to strike the balance between 
"explicit settings from defaults at the time of creation" versus "use the 
latest default". I could reference the DBCP default constants in the property 
description, too, thus preventing the description from becoming out of date…
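A minimal sketch of the `required(false)` idea being weighed here: apply a user-supplied value only when one was actually entered, and otherwise leave the commons-dbcp default untouched. `applyIfSet()` is a stand-in for NiFi's pattern of checking `PropertyValue.isSet()` before calling a `BasicDataSource` setter such as `setMaxIdle(int)`:

```java
import java.util.Optional;
import java.util.function.IntConsumer;

// Sketch of "optional property, override only when set": a blank property
// leaves the library default in force; an explicit value overrides it.
public class OptionalOverrideSketch {
    static int maxIdle = 8; // pretend this is the library's built-in default

    static void applyIfSet(Optional<Integer> userValue, IntConsumer setter) {
        userValue.ifPresent(setter::accept); // no value entered -> default stays
    }

    public static void main(String[] args) {
        applyIfSet(Optional.empty(), v -> maxIdle = v); // property left blank
        System.out.println(maxIdle);                    // prints 8
        applyIfSet(Optional.of(0), v -> maxIdle = v);   // user explicitly set 0
        System.out.println(maxIdle);                    // prints 0
    }
}
```

Under this scheme the property description can simply say "defaults to the commons-dbcp built-in value when unset," sidestepping the stale-description problem.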


---

