[jira] [Resolved] (MINIFICPP-1412) GenerateFlowFile should create a ResourceClaim even if the requested size is zero

2020-11-30 Thread Adam Debreceni (Jira)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-1412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Debreceni resolved MINIFICPP-1412.
---
  Assignee: Adam Debreceni
Resolution: Fixed

> GenerateFlowFile should create a ResourceClaim even if the requested size is 
> zero
> -
>
> Key: MINIFICPP-1412
> URL: https://issues.apache.org/jira/browse/MINIFICPP-1412
> Project: Apache NiFi MiNiFi C++
>  Issue Type: Bug
>Reporter: Adam Debreceni
>Assignee: Adam Debreceni
>Priority: Major
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> There is an invariant "all flowFiles have a non-null ResourceClaim pointer". 
> This invariant is violated in GenerateFlowFile processor when the requested 
> flowFile size is zero. We should create a ResourceClaim even if the contents 
> are empty.
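
The invariant can be illustrated with a minimal sketch (plain Java stand-ins, not the actual MiNiFi C++ types): even a zero-byte flow file should carry a non-null claim referencing empty content.

```java
import java.util.Objects;

// Hypothetical stand-ins for the real MiNiFi C++ types, for illustration only.
final class ResourceClaim {
    private final byte[] content;
    ResourceClaim(byte[] content) { this.content = Objects.requireNonNull(content); }
    int size() { return content.length; }
}

final class FlowFileSketch {
    private final ResourceClaim claim;

    // The fix described above: always create a claim, even when the requested
    // size is zero, so "all flowFiles have a non-null ResourceClaim" holds.
    FlowFileSketch(int requestedSize) {
        this.claim = new ResourceClaim(new byte[requestedSize]);
    }

    ResourceClaim getClaim() { return claim; }
}
```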



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-7906) Add graph processor with flexibility to query graph database conditioned on flowfile content and attributes

2020-11-30 Thread Matt Burgess (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess updated NIFI-7906:
---
Status: Patch Available  (was: Open)

> Add graph processor with flexibility to query graph database conditioned on 
> flowfile content and attributes
> ---
>
> Key: NIFI-7906
> URL: https://issues.apache.org/jira/browse/NIFI-7906
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: Levi Lentz
>Assignee: Levi Lentz
>Priority: Minor
>  Labels: graph
> Fix For: 1.13.0
>
>  Time Spent: 4h 40m
>  Remaining Estimate: 0h
>
> The current graph bundle currently does not allow you to query the graph 
> database (as defined in the GraphClientService) with attributes or content 
> available in the flow file.
>  
> This functionality would allow users to perform dynamic queries/mutations of 
> the underlying graph data.
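
A rough sketch of the kind of substitution the feature describes, using hypothetical names; the real processor would evaluate placeholders through NiFi's Expression Language rather than this toy string replacement:

```java
import java.util.Map;

// Hypothetical helper: bind ${attr} placeholders in a graph query template
// to flow file attribute values. Illustrative only.
final class GraphQuerySketch {
    static String bind(String queryTemplate, Map<String, String> attributes) {
        String query = queryTemplate;
        for (Map.Entry<String, String> e : attributes.entrySet()) {
            query = query.replace("${" + e.getKey() + "}", e.getValue());
        }
        return query;
    }
}
```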



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi-minifi-cpp] szaszm opened a new pull request #946: MINIFICPP-1417 improve FileStream error reporting

2020-11-30 Thread GitBox


szaszm opened a new pull request #946:
URL: https://github.com/apache/nifi-minifi-cpp/pull/946


   Thank you for submitting a contribution to Apache NiFi - MiNiFi C++.
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [x] Is there a JIRA ticket associated with this PR? Is it referenced
in the commit message?
   
   - [x] Does your PR title start with MINIFICPP-XXXX where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [x] Has your PR been rebased against the latest commit within the target 
branch (typically main)?
   
   - [x] Is your initial contribution a single, squashed commit?
   
   ### For code changes:
   - [x] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [x] If applicable, have you updated the LICENSE file?
   - [x] If applicable, have you updated the NOTICE file?
   
   ### For documentation related changes:
   - [x] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check GitHub Actions CI 
results for build issues and submit an update to your PR as soon as possible.
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (MINIFICPP-1417) Improve FileStream error reporting

2020-11-30 Thread Marton Szasz (Jira)
Marton Szasz created MINIFICPP-1417:
---

 Summary: Improve FileStream error reporting
 Key: MINIFICPP-1417
 URL: https://issues.apache.org/jira/browse/MINIFICPP-1417
 Project: Apache NiFi MiNiFi C++
  Issue Type: Improvement
Reporter: Marton Szasz
Assignee: Marton Szasz






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-7989) Add Hive "data drift" processor

2020-11-30 Thread Matt Burgess (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess updated NIFI-7989:
---
Status: Patch Available  (was: In Progress)

> Add Hive "data drift" processor
> ---
>
> Key: NIFI-7989
> URL: https://issues.apache.org/jira/browse/NIFI-7989
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>Priority: Major
> Fix For: 1.13.0
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> It would be nice to have a Hive processor (one for each Hive NAR) that could 
> check an incoming record-based flowfile against a destination table, and 
> either add columns and/or partition values, or even create the table if it 
> does not exist. Such a processor could be used in a flow where the incoming 
> data's schema can change and we want to be able to write it to a Hive table, 
> preferably by using PutHDFS, PutParquet, or PutORC to place it directly where 
> it can be queried.
> Such a processor should be able to use a HiveConnectionPool to execute any 
> DDL (ALTER TABLE ADD COLUMN, e.g.) necessary to make the table match the 
> incoming data. For Partition Values, they could be provided via a property 
> that supports Expression Language. In such a case, an ALTER TABLE would be 
> issued to add the partition directory.
> Whether the table is created or updated, and whether there are partition 
> values to consider, an attribute should be written to the outgoing flowfile 
> corresponding to the location of the table (and any associated partitions). 
> This supports the idea of having a flow that updates a Hive table based on 
> the incoming data, and then allows the user to put the flowfile directly into 
> the destination location (PutHDFS, e.g.) instead of having to load it using 
> HiveQL or being subject to the restrictions of Hive Streaming tables 
> (ORC-backed, transactional, etc.)
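
The core "data drift" check can be sketched as a diff between the incoming record's fields and the destination table's columns, emitting the DDL needed to reconcile them. This is a minimal sketch with hypothetical names; the real processor would read the two schemas from the record reader and the Hive metastore:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.StringJoiner;

// Hypothetical sketch: compute ALTER TABLE ... ADD COLUMNS DDL from the
// fields present in the incoming records but missing from the table.
final class HiveDriftSketch {
    static String addColumnsDdl(String table,
                                Map<String, String> tableColumns,
                                Map<String, String> recordFields) {
        // Fields in the records that the table does not yet have.
        Map<String, String> missing = new LinkedHashMap<>(recordFields);
        missing.keySet().removeAll(tableColumns.keySet());
        if (missing.isEmpty()) {
            return null; // table already matches the incoming data
        }
        StringJoiner cols = new StringJoiner(", ");
        missing.forEach((name, type) -> cols.add(name + " " + type));
        return "ALTER TABLE " + table + " ADD COLUMNS (" + cols + ")";
    }
}
```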



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (NIFI-8047) Support Sensitive Dynamic Properties in DBCPConnectionPool

2020-11-30 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17241135#comment-17241135
 ] 

ASF subversion and git services commented on NIFI-8047:
---

Commit 1e13b62e78a95f869a770771d5e2cde5bd828cad in nifi's branch 
refs/heads/main from exceptionfactory
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=1e13b62 ]

NIFI-8047 Added documentation for sensitive DBCP properties

Signed-off-by: Matthew Burgess 

This closes #4696


> Support Sensitive Dynamic Properties in DBCPConnectionPool
> --
>
> Key: NIFI-8047
> URL: https://issues.apache.org/jira/browse/NIFI-8047
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.12.1
>Reporter: David Handermann
>Assignee: David Handermann
>Priority: Major
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> DBCPConnectionPool currently supports providing dynamic properties to the 
> Data Source object, which enables customization for a wide range of JDBC 
> drivers.  Some JDBC drivers support features such as TLS encryption, 
> requiring the specification of key store and trust store files and passwords. 
>  In order to support secure configuration of these additional properties, the 
> DBCPConnectionPool should provide optional support for sensitive dynamic 
> properties.
> One potential approach is to follow the pattern of the ExecuteGroovyScript 
> Processor and set the sensitive attribute when the property name is prefixed 
> with a predefined string.
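
The prefix convention suggested above can be sketched as follows. The prefix string and method names here are assumptions for illustration; in NiFi the dynamic property's descriptor would be built with the sensitive flag set when the name matches:

```java
// Hypothetical sketch of the prefix convention: a dynamic property whose name
// starts with an agreed marker is treated as sensitive, and the marker is
// stripped before the property is passed to the DataSource.
final class SensitivePrefixSketch {
    // Assumed prefix for illustration, not NiFi's actual choice.
    static final String SENSITIVE_PREFIX = "SENSITIVE.";

    static boolean isSensitive(String propertyName) {
        return propertyName.startsWith(SENSITIVE_PREFIX);
    }

    static String dataSourceKey(String propertyName) {
        return isSensitive(propertyName)
                ? propertyName.substring(SENSITIVE_PREFIX.length())
                : propertyName;
    }
}
```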



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-8047) Support Sensitive Dynamic Properties in DBCPConnectionPool

2020-11-30 Thread Matt Burgess (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess updated NIFI-8047:
---
Fix Version/s: 1.13.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Support Sensitive Dynamic Properties in DBCPConnectionPool
> --
>
> Key: NIFI-8047
> URL: https://issues.apache.org/jira/browse/NIFI-8047
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.12.1
>Reporter: David Handermann
>Assignee: David Handermann
>Priority: Major
> Fix For: 1.13.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> DBCPConnectionPool currently supports providing dynamic properties to the 
> Data Source object, which enables customization for a wide range of JDBC 
> drivers.  Some JDBC drivers support features such as TLS encryption, 
> requiring the specification of key store and trust store files and passwords. 
>  In order to support secure configuration of these additional properties, the 
> DBCPConnectionPool should provide optional support for sensitive dynamic 
> properties.
> One potential approach is to follow the pattern of the ExecuteGroovyScript 
> Processor and set the sensitive attribute when the property name is prefixed 
> with a predefined string.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi] mattyb149 closed pull request #4696: NIFI-8047 Added documentation for sensitive DBCP properties

2020-11-30 Thread GitBox


mattyb149 closed pull request #4696:
URL: https://github.com/apache/nifi/pull/4696


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] mattyb149 commented on pull request #4696: NIFI-8047 Added documentation for sensitive DBCP properties

2020-11-30 Thread GitBox


mattyb149 commented on pull request #4696:
URL: https://github.com/apache/nifi/pull/4696#issuecomment-736118931


   +1 LGTM, thanks for the doc, will help clarify the usage. Merging to main



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] mattyb149 opened a new pull request #4697: NIFI-7989: Add support to UpdateHiveTable for creating external tables

2020-11-30 Thread GitBox


mattyb149 opened a new pull request #4697:
URL: https://github.com/apache/nifi/pull/4697


   Thank you for submitting a contribution to Apache NiFi.
   
   Please provide a short description of the PR here:
   
    Description of PR
   
   Extends NIFI-7989 by adding (dependent) properties that allow the user to 
configure created tables to be external Hive tables at a specified location. 
The same changes were made in each Hive NAR.
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [x] Is there a JIRA ticket associated with this PR? Is it referenced 
in the commit message?
   
   - [x] Does your PR title start with **NIFI-XXXX** where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [x] Has your PR been rebased against the latest commit within the target 
branch (typically `main`)?
   
   - [x] Is your initial contribution a single, squashed commit? _Additional 
commits in response to PR reviewer feedback should be made on this branch and 
pushed to allow change tracking. Do not `squash` or use `--force` when pushing 
to allow for clean monitoring of changes._
   
   ### For code changes:
   - [ ] Have you ensured that the full suite of tests is executed via `mvn 
-Pcontrib-check clean install` at the root `nifi` folder?
   - [x] Have you written or updated unit tests to verify your changes?
   - [x] Have you verified that the full build is successful on JDK 8?
   - [ ] Have you verified that the full build is successful on JDK 11?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
   - [ ] If applicable, have you updated the `LICENSE` file, including the main 
`LICENSE` file under `nifi-assembly`?
   - [ ] If applicable, have you updated the `NOTICE` file, including the main 
`NOTICE` file found under `nifi-assembly`?
   - [x] If adding new Properties, have you added `.displayName` in addition to 
.name (programmatic access) for each of the new properties?
   
   ### For documentation related changes:
   - [x] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check GitHub Actions CI for 
build issues and submit an update to your PR as soon as possible.
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Reopened] (NIFI-7989) Add Hive "data drift" processor

2020-11-30 Thread Matt Burgess (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess reopened NIFI-7989:


Reopened to add support for creating external tables

> Add Hive "data drift" processor
> ---
>
> Key: NIFI-7989
> URL: https://issues.apache.org/jira/browse/NIFI-7989
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>Priority: Major
> Fix For: 1.13.0
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> It would be nice to have a Hive processor (one for each Hive NAR) that could 
> check an incoming record-based flowfile against a destination table, and 
> either add columns and/or partition values, or even create the table if it 
> does not exist. Such a processor could be used in a flow where the incoming 
> data's schema can change and we want to be able to write it to a Hive table, 
> preferably by using PutHDFS, PutParquet, or PutORC to place it directly where 
> it can be queried.
> Such a processor should be able to use a HiveConnectionPool to execute any 
> DDL (ALTER TABLE ADD COLUMN, e.g.) necessary to make the table match the 
> incoming data. For Partition Values, they could be provided via a property 
> that supports Expression Language. In such a case, an ALTER TABLE would be 
> issued to add the partition directory.
> Whether the table is created or updated, and whether there are partition 
> values to consider, an attribute should be written to the outgoing flowfile 
> corresponding to the location of the table (and any associated partitions). 
> This supports the idea of having a flow that updates a Hive table based on 
> the incoming data, and then allows the user to put the flowfile directly into 
> the destination location (PutHDFS, e.g.) instead of having to load it using 
> HiveQL or being subject to the restrictions of Hive Streaming tables 
> (ORC-backed, transactional, etc.)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (NIFI-8054) When components are removed from flow, their class loaders are not cleaned up

2020-11-30 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17241055#comment-17241055
 ] 

ASF subversion and git services commented on NIFI-8054:
---

Commit aaa1452d041ff34f3647004e73a1f78f59c561d4 in nifi's branch 
refs/heads/main from markap14
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=aaa1452 ]

NIFI-8054: Updated ReflectionUtils to use a WeakHashMap for the mapping of 
annotations to methods with that annotation. This way, the ReflectionUtils 
class will not hold a reference to Classes that are no longer referenced 
elsewhere. (#4694)
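
The structure of that fix can be sketched as below. Method names are illustrative, not the actual ReflectionUtils API. Note the sketch caches method *names* rather than Method objects: a Method strongly references its declaring Class, which would keep the weak key reachable through the value and defeat the WeakHashMap.

```java
import java.lang.annotation.Annotation;
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.WeakHashMap;

// Weakly keyed by Class: once a component class is unreachable elsewhere,
// its cache entry (and eventually its class loader) can be collected.
final class AnnotationCacheSketch {
    private static final Map<Class<?>, Map<Class<? extends Annotation>, List<String>>> CACHE =
            new WeakHashMap<>();

    static synchronized List<String> methodsWith(Class<?> type,
                                                 Class<? extends Annotation> annotation) {
        return CACHE.computeIfAbsent(type, t -> new HashMap<>())
                .computeIfAbsent(annotation, a -> {
                    final List<String> names = new ArrayList<>();
                    for (Method m : type.getMethods()) {
                        if (m.isAnnotationPresent(annotation)) {
                            names.add(m.getName());
                        }
                    }
                    return names;
                });
    }
}
```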



> When components are removed from flow, their class loaders are not cleaned up
> -
>
> Key: NIFI-8054
> URL: https://issues.apache.org/jira/browse/NIFI-8054
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When a component is removed from the flow, its corresponding 
> classes/classloaders are not removed from the JVM's count of loaded classes. 
> This can be seen by creating a Process Group with several processors that use 
> the @RequiresInstanceClassLoading annotation (GetHDFS, for example). Then use 
> a profiler/JConsole/etc. to see how many classes are loaded into memory. 
> After the Process Group is deleted and GC is performed, the number of classes 
> loaded should drop significantly. Currently, it does not. So if I instantiate 
> a template, delete the Process Group, instantiate it again, delete it again, 
> etc., then I get into a situation where I run out of memory and see 
> OutOfMemoryError: GC overhead limit exceeded.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (NIFI-8054) When components are removed from flow, their class loaders are not cleaned up

2020-11-30 Thread Bryan Bende (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bryan Bende resolved NIFI-8054.
---
Fix Version/s: 1.13.0
   Resolution: Fixed

> When components are removed from flow, their class loaders are not cleaned up
> -
>
> Key: NIFI-8054
> URL: https://issues.apache.org/jira/browse/NIFI-8054
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Major
> Fix For: 1.13.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> When a component is removed from the flow, its corresponding 
> classes/classloaders are not removed from the JVM's count of loaded classes. 
> This can be seen by creating a Process Group with several processors that use 
> the @RequiresInstanceClassLoading annotation (GetHDFS, for example). Then use 
> a profiler/JConsole/etc. to see how many classes are loaded into memory. 
> After the Process Group is deleted and GC is performed, the number of classes 
> loaded should drop significantly. Currently, it does not. So if I instantiate 
> a template, delete the Process Group, instantiate it again, delete it again, 
> etc., then I get into a situation where I run out of memory and see 
> OutOfMemoryError: GC overhead limit exceeded.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi] bbende merged pull request #4694: NIFI-8054: Updated ReflectionUtils to use a WeakHashMap for the mappi…

2020-11-30 Thread GitBox


bbende merged pull request #4694:
URL: https://github.com/apache/nifi/pull/4694


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] mattyb149 commented on pull request #4676: NIFI-8016: Support dynamic properties set on the DataSource in Hive connection pools

2020-11-30 Thread GitBox


mattyb149 commented on pull request #4676:
URL: https://github.com/apache/nifi/pull/4676#issuecomment-736020537


   Since [NIFI-8047](https://issues.apache.org/jira/browse/NIFI-8047) is in, 
this PR might as well add support for sensitive dynamic properties as well, 
will update



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] exceptionfactory opened a new pull request #4696: NIFI-8047 Added documentation for sensitive DBCP properties

2020-11-30 Thread GitBox


exceptionfactory opened a new pull request #4696:
URL: https://github.com/apache/nifi/pull/4696


    Description of PR
   
   NIFI-8047 Updated DBCPConnectionPool with DynamicProperty annotation 
describing sensitive properties.
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [X] Is there a JIRA ticket associated with this PR? Is it referenced 
in the commit message?
   
   - [X] Does your PR title start with **NIFI-XXXX** where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [X] Has your PR been rebased against the latest commit within the target 
branch (typically `main`)?
   
   - [X] Is your initial contribution a single, squashed commit? _Additional 
commits in response to PR reviewer feedback should be made on this branch and 
pushed to allow change tracking. Do not `squash` or use `--force` when pushing 
to allow for clean monitoring of changes._
   
   ### For code changes:
   - [ ] Have you ensured that the full suite of tests is executed via `mvn 
-Pcontrib-check clean install` at the root `nifi` folder?
   - [ ] Have you written or updated unit tests to verify your changes?
   - [ ] Have you verified that the full build is successful on JDK 8?
   - [ ] Have you verified that the full build is successful on JDK 11?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
   - [ ] If applicable, have you updated the `LICENSE` file, including the main 
`LICENSE` file under `nifi-assembly`?
   - [ ] If applicable, have you updated the `NOTICE` file, including the main 
`NOTICE` file found under `nifi-assembly`?
   - [ ] If adding new Properties, have you added `.displayName` in addition to 
.name (programmatic access) for each of the new properties?
   
   ### For documentation related changes:
   - [X] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check GitHub Actions CI for 
build issues and submit an update to your PR as soon as possible.
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (NIFI-8047) Support Sensitive Dynamic Properties in DBCPConnectionPool

2020-11-30 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17241008#comment-17241008
 ] 

ASF subversion and git services commented on NIFI-8047:
---

Commit fe53f8090d3d046b60d551e83bcd6f43c560a461 in nifi's branch 
refs/heads/main from exceptionfactory
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=fe53f80 ]

NIFI-8047 Added support for sensitive dynamic properties in DBCP

Signed-off-by: Matthew Burgess 

This closes #4692


> Support Sensitive Dynamic Properties in DBCPConnectionPool
> --
>
> Key: NIFI-8047
> URL: https://issues.apache.org/jira/browse/NIFI-8047
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.12.1
>Reporter: David Handermann
>Assignee: David Handermann
>Priority: Major
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> DBCPConnectionPool currently supports providing dynamic properties to the 
> Data Source object, which enables customization for a wide range of JDBC 
> drivers.  Some JDBC drivers support features such as TLS encryption, 
> requiring the specification of key store and trust store files and passwords. 
>  In order to support secure configuration of these additional properties, the 
> DBCPConnectionPool should provide optional support for sensitive dynamic 
> properties.
> One potential approach is to follow the pattern of the ExecuteGroovyScript 
> Processor and set the sensitive attribute when the property name is prefixed 
> with a predefined string.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi] mattyb149 closed pull request #4692: NIFI-8047 Added support for sensitive dynamic properties in DBCP

2020-11-30 Thread GitBox


mattyb149 closed pull request #4692:
URL: https://github.com/apache/nifi/pull/4692


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] mattyb149 commented on pull request #4692: NIFI-8047 Added support for sensitive dynamic properties in DBCP

2020-11-30 Thread GitBox


mattyb149 commented on pull request #4692:
URL: https://github.com/apache/nifi/pull/4692#issuecomment-736007964


   +1 LGTM, ran contrib-check and tested with an Impala instance. Thanks for 
the improvement! Merging to main.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] sushilkm opened a new pull request #4695: NIFI-8055: Fix validation message class-name for AzureCosmosDBClientService

2020-11-30 Thread GitBox


sushilkm opened a new pull request #4695:
URL: https://github.com/apache/nifi/pull/4695


   Thank you for submitting a contribution to Apache NiFi.
   
   Please provide a short description of the PR here:
   
    Description of PR
   
   fixes bug NIFI-8055: AzureCosmosDBClientService controller service refers 
AzureStorageCredentialsControllerService for validation errors
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [x] Is there a JIRA ticket associated with this PR? Is it referenced 
in the commit message?
   
   - [x] Does your PR title start with **NIFI-XXXX** where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [x] Has your PR been rebased against the latest commit within the target 
branch (typically `main`)?
   
   - [x] Is your initial contribution a single, squashed commit? _Additional 
commits in response to PR reviewer feedback should be made on this branch and 
pushed to allow change tracking. Do not `squash` or use `--force` when pushing 
to allow for clean monitoring of changes._
   
   ### For code changes:
   - [ ] Have you ensured that the full suite of tests is executed via `mvn 
-Pcontrib-check clean install` at the root `nifi` folder?
   - [ ] Have you written or updated unit tests to verify your changes?
   - [ ] Have you verified that the full build is successful on JDK 8?
   - [ ] Have you verified that the full build is successful on JDK 11?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
   - [ ] If applicable, have you updated the `LICENSE` file, including the main 
`LICENSE` file under `nifi-assembly`?
   - [ ] If applicable, have you updated the `NOTICE` file, including the main 
`NOTICE` file found under `nifi-assembly`?
   - [ ] If adding new Properties, have you added `.displayName` in addition to 
.name (programmatic access) for each of the new properties?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check GitHub Actions CI for 
build issues and submit an update to your PR as soon as possible.
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (NIFI-8055) AzureCosmosDBClientService controller service refers AzureStorageCredentialsControllerService for validation errors

2020-11-30 Thread Sushil Kumar (Jira)
Sushil Kumar created NIFI-8055:
--

 Summary: AzureCosmosDBClientService controller service refers 
AzureStorageCredentialsControllerService for validation errors
 Key: NIFI-8055
 URL: https://issues.apache.org/jira/browse/NIFI-8055
 Project: Apache NiFi
  Issue Type: Bug
  Components: Extensions
Affects Versions: 1.12.1
Reporter: Sushil Kumar
Assignee: Sushil Kumar


[https://github.com/apache/nifi/blob/main/nifi-nar-bundles/nifi-azure-bundle/nifi-azure-processors/src/main/java/org/apache/nifi/services/azure/cosmos/document/AzureCosmosDBClientService.java#L153]
refers to AzureStorageCredentialsControllerService, which is why the validation 
errors for AzureCosmosDBClientService show 
AzureStorageCredentialsControllerService in the error message.
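
This class of bug can be sketched generically: a validation message that hard-codes another service's name versus one derived from the failing component itself. Names below are illustrative, not the real NiFi Azure service code:

```java
// Hypothetical sketch of the copy-paste slip described above.
final class ValidationMessageSketch {
    static String wrong() {
        // Always reports the (wrong) storage credentials service name.
        return "AzureStorageCredentialsControllerService is not configured correctly";
    }

    static String right(Object service) {
        // Derive the subject from the actual failing component's class.
        return service.getClass().getSimpleName() + " is not configured correctly";
    }
}
```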



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi] markap14 opened a new pull request #4694: NIFI-8054: Updated ReflectionUtils to use a WeakHashMap for the mappi…

2020-11-30 Thread GitBox


markap14 opened a new pull request #4694:
URL: https://github.com/apache/nifi/pull/4694


   …ng of annotations to methods with that annotation. This way, the 
ReflectionUtils class will not hold a reference to Classes that are no longer 
referenced elsewhere.
   
   Thank you for submitting a contribution to Apache NiFi.
   
   Please provide a short description of the PR here:
   
    Description of PR
   
   _Enables X functionality; fixes bug NIFI-._
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
in the commit message?
   
   - [ ] Does your PR title start with **NIFI-XXXX** where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [ ] Has your PR been rebased against the latest commit within the target 
branch (typically `main`)?
   
   - [ ] Is your initial contribution a single, squashed commit? _Additional 
commits in response to PR reviewer feedback should be made on this branch and 
pushed to allow change tracking. Do not `squash` or use `--force` when pushing 
to allow for clean monitoring of changes._
   
   ### For code changes:
   - [ ] Have you ensured that the full suite of tests is executed via `mvn 
-Pcontrib-check clean install` at the root `nifi` folder?
   - [ ] Have you written or updated unit tests to verify your changes?
   - [ ] Have you verified that the full build is successful on JDK 8?
   - [ ] Have you verified that the full build is successful on JDK 11?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
   - [ ] If applicable, have you updated the `LICENSE` file, including the main 
`LICENSE` file under `nifi-assembly`?
   - [ ] If applicable, have you updated the `NOTICE` file, including the main 
`NOTICE` file found under `nifi-assembly`?
   - [ ] If adding new Properties, have you added `.displayName` in addition to 
.name (programmatic access) for each of the new properties?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check GitHub Actions CI for 
build issues and submit an update to your PR as soon as possible.
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (NIFI-8054) When components are removed from flow, their class loaders are not cleaned up

2020-11-30 Thread Mark Payne (Jira)
Mark Payne created NIFI-8054:


 Summary: When components are removed from flow, their class 
loaders are not cleaned up
 Key: NIFI-8054
 URL: https://issues.apache.org/jira/browse/NIFI-8054
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Reporter: Mark Payne
Assignee: Mark Payne


When a component is removed from the flow, its corresponding 
classes/classloaders are not removed from the JVM's count of loaded classes. 
This can be seen by creating a Process Group with several processors that use 
the @RequiresInstanceClassLoading annotation (GetHDFS, for example). Then use a 
profiler/JConsole/etc. to see how many classes are loaded into memory. After 
the Process Group is deleted and GC is performed, the number of classes loaded 
should drop significantly. Currently, it does not. So if I instantiate a 
template, delete the Process Group, instantiate it again, delete it again, and 
so on, I eventually run out of memory and see OutOfMemoryError: GC overhead 
limit exceeded.
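The fix described in the PR can be illustrated with a minimal sketch (the class and method names below are hypothetical, not the actual ReflectionUtils API): a cache that holds strong references to `Class` keys pins each class, and therefore its classloader, in memory forever, while a `WeakHashMap` lets an entry be collected once the component class is otherwise unreachable.

```java
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.WeakHashMap;

public class AnnotationMethodCache {
    // Keys are held weakly: once a component's Class is no longer referenced
    // elsewhere, its entry (and the classloader the Class pins) can be GC'd.
    private static final Map<Class<?>, List<Method>> CACHE = new WeakHashMap<>();

    public static synchronized List<Method> methodsWith(
            Class<?> componentClass,
            Class<? extends java.lang.annotation.Annotation> annotation) {
        return CACHE.computeIfAbsent(componentClass, clazz -> {
            List<Method> matches = new ArrayList<>();
            for (Method m : clazz.getMethods()) {
                if (m.isAnnotationPresent(annotation)) {
                    matches.add(m);
                }
            }
            return matches;
        });
    }
}
```

A `ConcurrentHashMap` would have the same lookup behavior but would keep every deleted component's classloader alive, which is exactly the leak described above.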



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #937: MINIFICPP-1402 - Encrypt flow configuration and change encryption key

2020-11-30 Thread GitBox


szaszm commented on a change in pull request #937:
URL: https://github.com/apache/nifi-minifi-cpp/pull/937#discussion_r532815469



##
File path: libminifi/include/utils/file/FileSystem.h
##
@@ -0,0 +1,57 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#pragma once
+
+#include 
+#include 
+#include "utils/OptionalUtils.h"
+#include "utils/EncryptionProvider.h"
+#include "core/logging/LoggerConfiguration.h"
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace utils {
+namespace file {
+
+class FileSystem {
+ public:
+  explicit FileSystem(bool should_encrypt = false, 
utils::optional encryptor = {});
+
+  FileSystem(const FileSystem&) = delete;
+  FileSystem(FileSystem&&) = delete;
+  FileSystem& operator=(const FileSystem&) = delete;
+  FileSystem& operator=(FileSystem&&) = delete;
+
+  utils::optional read(const std::string& file_name);
+
+  bool write(const std::string& file_name, const std::string& file_content);
+
+ private:
+  bool should_encrypt_on_write_;
+  utils::optional encryptor_;
+  std::shared_ptr 
logger_{logging::LoggerFactory::getLogger()};
+};

Review comment:
   Maybe `EncryptionAwareFileAccessor`, `ConfigFileIo` or some combination 
of these words. Or refactor to use streams from either 
`minifi::io::InputStream`/`minifi::io::OutputStream` or `` and insert 
an encryption/decryption stream in the chain when needed.
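The stream-chain alternative suggested above can be illustrated outside MiNiFi with Java's cipher streams (MiNiFi C++ has its own io::InputStream/OutputStream types and crypto utilities; the class below is a hypothetical sketch of the pattern, not project code): the encryption or decryption stage is inserted into the chain only when configured, so callers read and write plaintext either way.

```java
import javax.crypto.Cipher;
import javax.crypto.CipherInputStream;
import javax.crypto.CipherOutputStream;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.InputStream;
import java.io.OutputStream;

public class EncryptedConfigIo {
    private final SecretKeySpec key;
    private final IvParameterSpec iv;
    private final boolean shouldEncrypt;

    public EncryptedConfigIo(byte[] keyBytes, byte[] ivBytes, boolean shouldEncrypt) {
        this.key = new SecretKeySpec(keyBytes, "AES");
        this.iv = new IvParameterSpec(ivBytes);
        this.shouldEncrypt = shouldEncrypt;
    }

    private Cipher cipher(int mode) throws Exception {
        Cipher c = Cipher.getInstance("AES/CBC/PKCS5Padding");
        c.init(mode, key, iv);
        return c;
    }

    // Insert a decryption stage into the read chain only when configured.
    public InputStream openRead(InputStream raw) throws Exception {
        return shouldEncrypt ? new CipherInputStream(raw, cipher(Cipher.DECRYPT_MODE)) : raw;
    }

    // Symmetric: an encryption stage is chained in front of the raw sink.
    public OutputStream openWrite(OutputStream raw) throws Exception {
        return shouldEncrypt ? new CipherOutputStream(raw, cipher(Cipher.ENCRYPT_MODE)) : raw;
    }

    public static byte[] roundTrip(byte[] content) {
        try {
            byte[] keyBytes = new byte[16]; // all-zero key and IV: demo only
            byte[] ivBytes = new byte[16];
            EncryptedConfigIo io = new EncryptedConfigIo(keyBytes, ivBytes, true);

            ByteArrayOutputStream encrypted = new ByteArrayOutputStream();
            try (OutputStream out = io.openWrite(encrypted)) {
                out.write(content);
            }
            ByteArrayOutputStream decrypted = new ByteArrayOutputStream();
            try (InputStream in = io.openRead(new ByteArrayInputStream(encrypted.toByteArray()))) {
                byte[] buf = new byte[256];
                int n;
                while ((n = in.read(buf)) != -1) {
                    decrypted.write(buf, 0, n);
                }
            }
            return decrypted.toByteArray();
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }
}
```

The AES/CBC choice here is arbitrary; the point is only that encryption lives in a stream decorator rather than in the file accessor itself.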









[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #937: MINIFICPP-1402 - Encrypt flow configuration and change encryption key

2020-11-30 Thread GitBox


szaszm commented on a change in pull request #937:
URL: https://github.com/apache/nifi-minifi-cpp/pull/937#discussion_r530490715



##
File path: extensions/http-curl/tests/C2ConfigEncryption.cpp
##
@@ -0,0 +1,58 @@
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#undef NDEBUG
+#include 
+#include 
+#include "HTTPIntegrationBase.h"
+#include "HTTPHandlers.h"
+#include "utils/IntegrationTestUtils.h"
+#include "utils/EncryptionProvider.h"
+
+int main(int argc, char **argv) {
+  const cmd_args args = parse_cmdline_args(argc, argv, "update");
+  TestController controller;
+  // copy config file to temporary location as it will get overridden
+  char tmp_format[] = "/var/tmp/c2.XX";
+  std::string home_path = controller.createTempDirectory(tmp_format);
+  std::string live_config_file = 
utils::file::FileUtils::concat_path(home_path, "config.yml");
+  utils::file::FileUtils::copy_file(args.test_file, live_config_file);
+  // the C2 server will update the flow with the contents of args.test_file
+  // which will be encrypted and persisted to the temporary live_config_file
+  C2UpdateHandler handler(args.test_file);
+  VerifyC2Update harness(1);
+  
harness.getConfiguration()->set(minifi::Configure::nifi_flow_configuration_encrypt,
 "true");
+  harness.setKeyDir(args.key_dir);
+  harness.setUrl(args.url, );
+  handler.setC2RestResponse(harness.getC2RestUrl(), "configuration", "true");
+
+  const auto start = std::chrono::system_clock::now();

Review comment:
   This is unused

##
File path: libminifi/include/core/Flow.h
##
@@ -0,0 +1,59 @@
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#pragma once
+
+#include 
+#include 
+#include "core/ProcessGroup.h"
+#include "core/Repository.h"
+#include "core/ContentRepository.h"
+#include "core/FlowConfiguration.h"
+#include "utils/Id.h"
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace core {
+
+class Flow {

Review comment:
   What is this class? "flow" is a central concept in this project, so 
introducing a new class with that name gives me an uneasy feeling. Still not 
having fully understood the gist of this PR at the time of writing, I can't 
really question the validity, but I'd like to ask for a description as a class 
comment that reveals what the class models and how it fits into the 
architecture.

##
File path: libminifi/src/utils/EncryptionProvider.cpp
##
@@ -33,22 +40,19 @@ constexpr const char* CONFIG_ENCRYPTION_KEY_PROPERTY_NAME = 
"nifi.bootstrap.sens
 
 }  // namespace
 
-namespace org {
-namespace apache {
-namespace nifi {
-namespace minifi {
-
-utils::optional<Decryptor> Decryptor::create(const std::string& minifi_home) {
+utils::optional<EncryptionProvider> EncryptionProvider::create(const std::string& home_path) {
   minifi::Properties bootstrap_conf;
-  bootstrap_conf.setHome(minifi_home);
+  bootstrap_conf.setHome(home_path);
   bootstrap_conf.loadConfigureFile(DEFAULT_NIFI_BOOTSTRAP_FILE);
   return bootstrap_conf.getString(CONFIG_ENCRYPTION_KEY_PROPERTY_NAME)
-  | utils::map([](const std::string& encryption_key_hex) { return 
utils::StringUtils::from_hex(encryption_key_hex); })
-  | utils::map(&utils::crypto::stringToBytes)
-  | utils::map([](const utils::crypto::Bytes& encryption_key_bytes) { 
return minifi::Decryptor{encryption_key_bytes}; });
+ | utils::map([](const std::string _key_hex) { return 

[jira] [Resolved] (NIFI-5132) HandleHttpRequest /Response stop accepting request / response

2020-11-30 Thread Peter Turcsanyi (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-5132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Turcsanyi resolved NIFI-5132.
---
Fix Version/s: (was: 1.6.0)
   1.13.0
   Resolution: Duplicate

This was fixed by https://issues.apache.org/jira/browse/NIFI-6317 in NiFi 
1.10.0.

> HandleHttpRequest /Response stop accepting request / response
> -
>
> Key: NIFI-5132
> URL: https://issues.apache.org/jira/browse/NIFI-5132
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.6.0
> Environment: OS RedHat 
>Reporter: Avanish Awasthi
>Priority: Critical
>  Labels: HandleHttpRequest, HandleHttpResponse
> Fix For: 1.13.0
>
> Attachments: bootstrap.conf, nifi-bootstrap.log, nifi-bootstrap.log, 
> nifi.properties, nifi_app.log, screen1.png, screen2.png, screen3.png, 
> screen4.png
>
>
> I moved my template from NiFi 1.0.0 to NiFi 1.6.0, using HandleHttpRequest 
> to accept API calls from an application.
>  
> The HandleHttpRequest processor suddenly stops accepting requests and 
> returns a 503 error when hit from a browser. The same flow works perfectly 
> in another NiFi 1.0.0 instance. It runs for a variable time, sometimes 4 
> hours, sometimes 3 days, but gets stuck after that.
>  
> Attached are the configurations for Request Handler.





[GitHub] [nifi] ChrisSamo632 commented on a change in pull request #4691: NIFI-7990 add properties to map Record field as @timestamp in output …

2020-11-30 Thread GitBox


ChrisSamo632 commented on a change in pull request #4691:
URL: https://github.com/apache/nifi/pull/4691#discussion_r532728519



##
File path: 
nifi-nar-bundles/nifi-elasticsearch-bundle/nifi-elasticsearch-restapi-processors/src/main/java/org/apache/nifi/processors/elasticsearch/PutElasticsearchRecord.java
##
@@ -376,13 +483,92 @@ private String getFromRecordPath(Record record, 
RecordPath path, final String fa
 );
 }
 
-fieldValue.updateValue(null);
+if (!retain) {
+fieldValue.updateValue(null);
+}
+
+return fieldValue.getValue().toString();
+} else {
+return fallback;
+}
+}
+
+private Object getTimestampFromRecordPath(final Record record, final 
RecordPath path, final String fallback,
+  final boolean retain) {
+if (path == null) {
+return fallback;
+}
+
+final RecordPathResult result = path.evaluate(record);
+final Optional<FieldValue> value = result.getSelectedFields().findFirst();
+if (value.isPresent() && value.get().getValue() != null) {
+final FieldValue fieldValue = value.get();
+
+final DataType dataType = fieldValue.getField().getDataType();
+final String fieldName = fieldValue.getField().getFieldName();
+final DataType chosenDataType = dataType.getFieldType() == 
RecordFieldType.CHOICE
+? DataTypeUtils.chooseDataType(value, (ChoiceDataType) 
dataType)
+: dataType;
+final Object coercedValue = 
DataTypeUtils.convertType(fieldValue.getValue(), chosenDataType, fieldName);
+if (coercedValue == null) {
+return null;
+}
+
+final Object returnValue;
+switch (chosenDataType.getFieldType()) {
+case DATE:
+case TIME:
+case TIMESTAMP:
+final String format;
+switch (chosenDataType.getFieldType()) {
+case DATE:
+format = this.dateFormat;
+break;
+case TIME:
+format = this.timeFormat;
+break;
+default:
+format = this.timestampFormat;
+}
+returnValue = coerceStringToLong(
+fieldName,
+DataTypeUtils.toString(coercedValue, () -> 
DataTypeUtils.getDateFormat(format))
+);
+break;
+case LONG:
+returnValue = DataTypeUtils.toLong(coercedValue, 
fieldName);
+break;
+case INT:
+case BYTE:
+case SHORT:
+returnValue = DataTypeUtils.toInteger(coercedValue, 
fieldName);
+break;
+case CHAR:
+case STRING:
+returnValue = coerceStringToLong(fieldName, 
coercedValue.toString());
+break;
+case BIGINT:
+returnValue = coercedValue;
+break;
+default:
+throw new ProcessException(
+String.format("Cannot use %s field referenced by 
%s as @timestamp.", chosenDataType.toString(), path.getPath())
+);
+}
 
-String retVal = fieldValue.getValue().toString();
+if (!retain) {
+fieldValue.updateValue(null);
+}
 
-return retVal;
+return returnValue;
 } else {
 return fallback;

Review comment:
   I decided that doing the coercion was probably the sensible thing, both to 
be consistent throughout the PutElasticsearchRecord processor and because, if 
I specify an epoch timestamp as a FlowFile attribute, I'd want it output as a 
Long rather than a String (although Elasticsearch is capable of doing this 
coercion itself if we didn't do it in NiFi).
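The string branch of the switch above can be sketched with a hypothetical helper that loosely mirrors the PR's `coerceStringToLong` (the actual implementation belongs to the PR and may differ): an all-digit string is treated as epoch milliseconds and emitted as a Long, while anything else passes through unchanged for Elasticsearch to parse as a formatted date.

```java
public class TimestampCoercion {
    // Hypothetical sketch: emit a Long when the value looks like an epoch
    // millis number, otherwise keep the (formatted) string as-is.
    static Object coerceStringToLong(String value) {
        boolean numeric = !value.isEmpty() && value.chars().allMatch(Character::isDigit);
        return numeric ? Long.parseLong(value) : value;
    }
}
```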









[jira] [Resolved] (MINIFICPP-1413) Improve exception logging

2020-11-30 Thread Marton Szasz (Jira)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-1413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marton Szasz resolved MINIFICPP-1413.
-
Resolution: Fixed

> Improve exception logging
> -
>
> Key: MINIFICPP-1413
> URL: https://issues.apache.org/jira/browse/MINIFICPP-1413
> Project: Apache NiFi MiNiFi C++
>  Issue Type: Bug
>Reporter: Marton Szasz
>Assignee: Marton Szasz
>Priority: Major
>  Labels: MiNiFi-CPP-Hygiene
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> 1. No empty std::exceptions
> 2. Add typeid to catch-all logging when possible





[GitHub] [nifi-minifi-cpp] arpadboda closed pull request #943: MINIFICPP-1413 avoid empty std::exception, improve logging

2020-11-30 Thread GitBox


arpadboda closed pull request #943:
URL: https://github.com/apache/nifi-minifi-cpp/pull/943


   







[jira] [Updated] (NIFI-8003) Create Elasticsearch update_by_query processor

2020-11-30 Thread Chris Sampson (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Sampson updated NIFI-8003:

Status: Patch Available  (was: In Progress)

> Create Elasticsearch update_by_query processor
> --
>
> Key: NIFI-8003
> URL: https://issues.apache.org/jira/browse/NIFI-8003
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.12.1
>Reporter: Chris Sampson
>Assignee: Chris Sampson
>Priority: Minor
>   Original Estimate: 2h
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> An Elasticsearch processor allowing for _update_by_query operations should be 
> created similar to the existing DeleteByQueryElasticsearch processor.





[jira] [Updated] (NIFI-8001) Elasticsearch REST processors should allow dynamic properties as query string parameters

2020-11-30 Thread Chris Sampson (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Sampson updated NIFI-8001:

Attachment: NIFI-8001.xml
NiFi-8001.json
Status: Patch Available  (was: In Progress)

> Elasticsearch REST processors should allow dynamic properties as query string 
> parameters
> 
>
> Key: NIFI-8001
> URL: https://issues.apache.org/jira/browse/NIFI-8001
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.12.1
>Reporter: Chris Sampson
>Assignee: Chris Sampson
>Priority: Minor
> Attachments: NIFI-8001.xml, NiFi-8001.json
>
>   Original Estimate: 2h
>  Time Spent: 10m
>  Remaining Estimate: 1h 50m
>
> The Elasticsearch REST processors (e.g. PutElasticsearchRecord) should allow 
> query string parameters to be specified as dynamic properties on the 
> processor (similar to the existing functionality on 
> PutElasticsearchRecordHttp).
> This should be done for all processors in the 
> nifi-elasticsearch-restapi-processors NAR.
> For example, adding a dynamic property with name=slices and value=auto would 
> append {{?slices=auto}} to the {{_bulk}} operation URL.
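The slices example above can be sketched as follows (a hypothetical helper, not the processor's actual code): dynamic property names and values become URL-encoded query-string parameters appended to the operation URL.

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.Map;
import java.util.stream.Collectors;

public class QueryStringBuilder {
    // Turn dynamic properties into a query-string suffix for the operation URL,
    // e.g. {slices=auto} on "/my-index/_bulk" yields "/my-index/_bulk?slices=auto".
    static String withParams(String baseUrl, Map<String, String> params) {
        if (params.isEmpty()) {
            return baseUrl;
        }
        String query = params.entrySet().stream()
                .map(e -> encode(e.getKey()) + "=" + encode(e.getValue()))
                .collect(Collectors.joining("&"));
        return baseUrl + "?" + query;
    }

    private static String encode(String s) {
        // URL-encode so values with spaces or reserved characters stay valid.
        return URLEncoder.encode(s, StandardCharsets.UTF_8);
    }
}
```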





[GitHub] [nifi] ChrisSamo632 opened a new pull request #4693: NIFI-8001 Dynamic Properties as Request Parameters for Elasticsearch …

2020-11-30 Thread GitBox


ChrisSamo632 opened a new pull request #4693:
URL: https://github.com/apache/nifi/pull/4693


   …Client Service processors; NIFI-8003 UpdateByQueryElasticsearchProcessor; 
addressed various warnings and inefficiencies found in existing processor code
   
   
    Description of PR
   
   NIFI-8001 Dynamic Properties can be used to specify request (query string) 
parameters through ElasticsearchClientService processors (e.g. 
PutElasticsearchRecord)
   
   NIFI-8003 UpdateByQueryElasticsearch (based on existing 
DeleteByQueryElasticsearch) processor
   
   Addressed several warnings/inefficiencies within the existing 
processor/service implementations and fixed the previously broken ES 
integration tests. (maven-failsafe-plugin 3-M5 doesn't currently work for 
these tests due to a classpath loading issue expected to be fixed in the 
upcoming -M6, so it was reverted to -M3 for the time being; some of the 
integration tests also failed when run together, because one test would change 
data that a subsequent test expected to be in its original state.)
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [x] Is there a JIRA ticket associated with this PR? Is it referenced 
in the commit message?
   
   - [x] Does your PR title start with **NIFI-** where  is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [x] Has your PR been rebased against the latest commit within the target 
branch (typically `main`)?
   
   - [x] Is your initial contribution a single, squashed commit? _Additional 
commits in response to PR reviewer feedback should be made on this branch and 
pushed to allow change tracking. Do not `squash` or use `--force` when pushing 
to allow for clean monitoring of changes._
   
   ### For code changes:
   - [x] Have you ensured that the full suite of tests is executed via `mvn 
-Pcontrib-check clean install` at the root `nifi` folder?
   - [x] Have you written or updated unit tests to verify your changes?
   - [ ] Have you verified that the full build is successful on JDK 8?
   - [x] Have you verified that the full build is successful on JDK 11?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
   - [ ] If applicable, have you updated the `LICENSE` file, including the main 
`LICENSE` file under `nifi-assembly`?
   - [ ] If applicable, have you updated the `NOTICE` file, including the main 
`NOTICE` file found under `nifi-assembly`?
   - [x] If adding new Properties, have you added `.displayName` in addition to 
.name (programmatic access) for each of the new properties?
   
   ### For documentation related changes:
   - [x] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check GitHub Actions CI for 
build issues and submit an update to your PR as soon as possible.
   







[jira] [Resolved] (MINIFICPP-1078) Flowfiles shouldn't exist without claim

2020-11-30 Thread Marton Szasz (Jira)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-1078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marton Szasz resolved MINIFICPP-1078.
-
Resolution: Duplicate

> Flowfiles shouldn't exist without claim
> --
>
> Key: MINIFICPP-1078
> URL: https://issues.apache.org/jira/browse/MINIFICPP-1078
> Project: Apache NiFi MiNiFi C++
>  Issue Type: Improvement
>Affects Versions: 0.6.0
>Reporter: Arpad Boda
>Assignee: Marton Szasz
>Priority: Major
> Fix For: 1.0.0
>
>
> Even if a given flowfile is empty, there should be a content claim associated 
> and reading the content should succeed (naturally reading 0 bytes) without 
> the need of adding error handling to a lot of different code paths 
> (processors). 





[GitHub] [nifi-minifi-cpp] arpadboda closed pull request #944: MINIFICPP-1412 - Ensure that all FlowFiles have a non-null ResourceClaim

2020-11-30 Thread GitBox


arpadboda closed pull request #944:
URL: https://github.com/apache/nifi-minifi-cpp/pull/944


   







[jira] [Resolved] (NIFI-8052) Groovy: SQL inserts crashes DBCPConnectionPool

2020-11-30 Thread DEOM Damien (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

DEOM Damien resolved NIFI-8052.
---
Resolution: Fixed

I forgot to close the connection:

 

conn?.close()
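The root cause is a pooled connection that is borrowed but never returned, so the pool's idle objects run out. The guarantee the missing `conn?.close()` provides is what Java's try-with-resources gives for free; the tiny stand-in pool below is hypothetical and only illustrates the pattern, not DBCP's actual API.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class PooledConnectionDemo {
    interface PooledConnection extends AutoCloseable {
        @Override
        void close();  // narrowed: no checked exception
    }

    // Minimal stand-in for a DBCP-style pool: counts borrowed connections;
    // close() returns a connection to the pool.
    static class Pool {
        final AtomicInteger checkedOut = new AtomicInteger();

        PooledConnection borrow() {
            checkedOut.incrementAndGet();
            return checkedOut::decrementAndGet;
        }
    }

    static void runStatement(Pool pool) {
        // try-with-resources returns the connection to the pool even if the
        // body throws -- the step the script in this ticket was missing.
        try (PooledConnection conn = pool.borrow()) {
            // ... execute SQL with conn ...
        }
    }
}
```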

> Groovy: SQL inserts crashes DBCPConnectionPool
> --
>
> Key: NIFI-8052
> URL: https://issues.apache.org/jira/browse/NIFI-8052
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.11.4
>Reporter: DEOM Damien
>Priority: Blocker
>
> Following this tutorial
> [http://funnifi.blogspot.com/2016/04/sql-in-nifi-with-executescript.html]
> I could read the database, but writing to it crashes my DBCPConnectionPool 
> with the following error:
> {quote}PutSQL[id=a2c741f9-947d-137e--58ffcc56] 
> org.apache.nifi.processors.standard.PutSQL$$Lambda$1891/0x000842eb4440@5c63d8ac
>  failed to process due to java.sql.SQLException: Cannot get a connection, 
> pool error Timeout waiting for idle object; rolling back session: 
> org.apache.nifi.processor.exception.ProcessException: java.sql.SQLException: 
> Cannot get a connection, pool error Timeout waiting for idle object
> {quote}
> I'm forced to restart the controller after each execution.
> My (simple) code:
>  
> import java.nio.charset.StandardCharsets
> import org.apache.nifi.controller.ControllerService
> import groovy.sql.Sql
> def lookup = context.controllerServiceLookup
> def dbServiceName = DatabaseConnectionPoolName.value
> def dbcpServiceId = 
> lookup.getControllerServiceIdentifiers(ControllerService).find { 
>  cs -> lookup.getControllerServiceName(cs) == dbServiceName
> }
> def conn = lookup.getControllerService(dbcpServiceId)?.getConnection()
> def sql = new Sql(conn)
> def insertSql = 'INSERT INTO test (a) VALUES (?)'
> def params = ['Jon']
> def keys = sql.executeInsert insertSql, params
> assert keys[0] == [1]
> def flowFile = session.get()
> if(!flowFile) return
>  
> Note that the problem only occurs when using Groovy. The PutSQL processor 
> works just fine.





[jira] [Updated] (NIFI-8047) Support Sensitive Dynamic Properties in DBCPConnectionPool

2020-11-30 Thread David Handermann (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Handermann updated NIFI-8047:
---
Status: Patch Available  (was: In Progress)

> Support Sensitive Dynamic Properties in DBCPConnectionPool
> --
>
> Key: NIFI-8047
> URL: https://issues.apache.org/jira/browse/NIFI-8047
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.12.1
>Reporter: David Handermann
>Assignee: David Handermann
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> DBCPConnectionPool currently supports providing dynamic properties to the 
> Data Source object, which enables customization for a wide range of JDBC 
> drivers.  Some JDBC drivers support features such as TLS encryption, 
> requiring the specification of key store and trust store files and passwords. 
>  In order to support secure configuration of these additional properties, the 
> DBCPConnectionPool should provide optional support for sensitive dynamic 
> properties.
> One potential approach is to follow the pattern of the ExecuteGroovyScript 
> Processor and set the sensitive attribute when the property name is prefixed 
> with a predefined string.





[GitHub] [nifi] exceptionfactory opened a new pull request #4692: NIFI-8047 Added support for sensitive dynamic properties in DBCP

2020-11-30 Thread GitBox


exceptionfactory opened a new pull request #4692:
URL: https://github.com/apache/nifi/pull/4692


    Description of PR
   
   NIFI-8047 Added support for sensitive dynamic properties in 
DBCPConnectionPool using `SENSITIVE.` prefix to indicate sensitive status.  
Updated unit tests to validate support for standard and sensitive dynamic 
properties.
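The prefix convention can be sketched as follows (whether the actual PR strips the prefix before passing the property name to the Data Source is an assumption here; the helper names are hypothetical): the `SENSITIVE.` marker decides the sensitive flag, and the remainder is the property name the driver sees.

```java
public class SensitivePropertyNames {
    static final String PREFIX = "SENSITIVE.";

    // A dynamic property is treated as sensitive when its name carries the marker.
    static boolean isSensitive(String propertyName) {
        return propertyName.startsWith(PREFIX);
    }

    // The name handed to the Data Source drops the marker prefix.
    static String dataSourcePropertyName(String propertyName) {
        return isSensitive(propertyName) ? propertyName.substring(PREFIX.length()) : propertyName;
    }
}
```

This mirrors the ExecuteGroovyScript pattern mentioned in the ticket: sensitivity is declared in the property name itself, so no schema change is needed.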
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [X] Is there a JIRA ticket associated with this PR? Is it referenced 
in the commit message?
   
   - [X] Does your PR title start with **NIFI-** where  is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [X] Has your PR been rebased against the latest commit within the target 
branch (typically `main`)?
   
   - [X] Is your initial contribution a single, squashed commit? _Additional 
commits in response to PR reviewer feedback should be made on this branch and 
pushed to allow change tracking. Do not `squash` or use `--force` when pushing 
to allow for clean monitoring of changes._
   
   ### For code changes:
   - [X] Have you ensured that the full suite of tests is executed via `mvn 
-Pcontrib-check clean install` at the root `nifi` folder?
   - [X] Have you written or updated unit tests to verify your changes?
   - [X] Have you verified that the full build is successful on JDK 8?
   - [X] Have you verified that the full build is successful on JDK 11?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
   - [ ] If applicable, have you updated the `LICENSE` file, including the main 
`LICENSE` file under `nifi-assembly`?
   - [ ] If applicable, have you updated the `NOTICE` file, including the main 
`NOTICE` file found under `nifi-assembly`?
   - [ ] If adding new Properties, have you added `.displayName` in addition to 
.name (programmatic access) for each of the new properties?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check GitHub Actions CI for 
build issues and submit an update to your PR as soon as possible.
   







[jira] [Updated] (NIFI-8052) Groovy: SQL inserts crashes DBCPConnectionPool

2020-11-30 Thread DEOM Damien (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

DEOM Damien updated NIFI-8052:
--
Description: 
Following this tutorial

[http://funnifi.blogspot.com/2016/04/sql-in-nifi-with-executescript.html]

I could read the database, but writing to it crashes my DBCPConnectionPool with 
the following error:
{quote}PutSQL[id=a2c741f9-947d-137e--58ffcc56] 
org.apache.nifi.processors.standard.PutSQL$$Lambda$1891/0x000842eb4440@5c63d8ac
 failed to process due to java.sql.SQLException: Cannot get a connection, pool 
error Timeout waiting for idle object; rolling back session: 
org.apache.nifi.processor.exception.ProcessException: java.sql.SQLException: 
Cannot get a connection, pool error Timeout waiting for idle object
{quote}
I'm forced to restart the controller after each execution.

My (simple) code:

 

import java.nio.charset.StandardCharsets
import org.apache.nifi.controller.ControllerService
import groovy.sql.Sql

def lookup = context.controllerServiceLookup
def dbServiceName = DatabaseConnectionPoolName.value
def dbcpServiceId = 
lookup.getControllerServiceIdentifiers(ControllerService).find { 
 cs -> lookup.getControllerServiceName(cs) == dbServiceName
}
def conn = lookup.getControllerService(dbcpServiceId)?.getConnection()
def sql = new Sql(conn)


def insertSql = 'INSERT INTO test (a) VALUES (?)'
def params = ['Jon']
def keys = sql.executeInsert insertSql, params
assert keys[0] == [1]


def flowFile = session.get()
if(!flowFile) return

 

Note that the problem only occurs when using Groovy. The PutSQL processor works 
just fine.

  was:
Following this tutorial

[http://funnifi.blogspot.com/2016/04/sql-in-nifi-with-executescript.html]

I could read the database, but writing to it crashes my DBCPConnectionPool with 
the following error:
{quote}PutSQL[id=a2c741f9-947d-137e--58ffcc56] 
org.apache.nifi.processors.standard.PutSQL$$Lambda$1891/0x000842eb4440@5c63d8ac
 failed to process due to java.sql.SQLException: Cannot get a connection, pool 
error Timeout waiting for idle object; rolling back session: 
org.apache.nifi.processor.exception.ProcessException: java.sql.SQLException: 
Cannot get a connection, pool error Timeout waiting for idle object
{quote}
I'm forced to restart the controller after each execution.

My (simple) code:

 

import java.nio.charset.StandardCharsets
import org.apache.nifi.controller.ControllerService
import groovy.sql.Sql

def lookup = context.controllerServiceLookup
def dbServiceName = DatabaseConnectionPoolName.value
def dbcpServiceId = lookup.getControllerServiceIdentifiers(ControllerService).find {
 cs -> lookup.getControllerServiceName(cs) == dbServiceName
}
def conn = lookup.getControllerService(dbcpServiceId)?.getConnection()
def sql = new Sql(conn)
def insertSql = 'INSERT INTO test (a) VALUES ( ? )'
def params = ['Jon']
def keys = sql.executeInsert insertSql, params
assert keys[0] == [1]
def flowFile = session.get()
if(!flowFile) return

 

Note that the problem only occurs when using Groovy. The PutSQL processor works 
just fine.


> Groovy: SQL inserts crashes DBCPConnectionPool
> --
>
> Key: NIFI-8052
> URL: https://issues.apache.org/jira/browse/NIFI-8052
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.11.4
>Reporter: DEOM Damien
>Priority: Blocker
>
> Following this tutorial
> [http://funnifi.blogspot.com/2016/04/sql-in-nifi-with-executescript.html]
> I could read the database, but writing to it crashes my DBCPConnectionPool 
> with the following error:
> {quote}PutSQL[id=a2c741f9-947d-137e--58ffcc56] 
> org.apache.nifi.processors.standard.PutSQL$$Lambda$1891/0x000842eb4440@5c63d8ac
>  failed to process due to java.sql.SQLException: Cannot get a connection, 
> pool error Timeout waiting for idle object; rolling back session: 
> org.apache.nifi.processor.exception.ProcessException: java.sql.SQLException: 
> Cannot get a connection, pool error Timeout waiting for idle object
> {quote}
> I'm forced to restart the controller after each execution.
> My (simple) code:
>  
> import java.nio.charset.StandardCharsets
> import org.apache.nifi.controller.ControllerService
> import groovy.sql.Sql
> def lookup = context.controllerServiceLookup
> def dbServiceName = DatabaseConnectionPoolName.value
> def dbcpServiceId = 
> lookup.getControllerServiceIdentifiers(ControllerService).find { 
>  cs -> lookup.getControllerServiceName(cs) == dbServiceName
> }
> def conn = lookup.getControllerService(dbcpServiceId)?.getConnection()
> def sql = new Sql(conn)
> def insertSql = 'INSERT INTO test (a) VALUES (?)'
> def params = ['Jon']
> def keys = sql.executeInsert insertSql, params
> assert keys[0] == [1]
> def flowFile = session.get()
> 

[jira] [Created] (NIFI-8052) Groovy: SQL inserts crashes DBCPConnectionPool

2020-11-30 Thread DEOM Damien (Jira)
DEOM Damien created NIFI-8052:
-

 Summary: Groovy: SQL inserts crashes DBCPConnectionPool
 Key: NIFI-8052
 URL: https://issues.apache.org/jira/browse/NIFI-8052
 Project: Apache NiFi
  Issue Type: Bug
Affects Versions: 1.11.4
Reporter: DEOM Damien


Following this tutorial

[http://funnifi.blogspot.com/2016/04/sql-in-nifi-with-executescript.html]

I could read the database, but writing to it crashes my DBCPConnectionPool with 
the following error:
{quote}PutSQL[id=a2c741f9-947d-137e--58ffcc56] 
org.apache.nifi.processors.standard.PutSQL$$Lambda$1891/0x000842eb4440@5c63d8ac
 failed to process due to java.sql.SQLException: Cannot get a connection, pool 
error Timeout waiting for idle object; rolling back session: 
org.apache.nifi.processor.exception.ProcessException: java.sql.SQLException: 
Cannot get a connection, pool error Timeout waiting for idle object
{quote}
My (simple) code:
{quote}import java.nio.charset.StandardCharsets
import org.apache.nifi.controller.ControllerService
import groovy.sql.Sql

def lookup = context.controllerServiceLookup
def dbServiceName = DatabaseConnectionPoolName.value
def dbcpServiceId = lookup.getControllerServiceIdentifiers(ControllerService).find {
 cs -> lookup.getControllerServiceName(cs) == dbServiceName
}
def conn = lookup.getControllerService(dbcpServiceId)?.getConnection()
def sql = new Sql(conn)
def insertSql = 'INSERT INTO test (a) VALUES (?)'
def params = ['Jon']
def keys = sql.executeInsert insertSql, params
assert keys[0] == [1]
def flowFile = session.get()
if(!flowFile) return
{quote}
Note that the problem only occurs when using Groovy. The PutSQL processor works just fine.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (NIFI-8053) ReplaceText with Expression Language in Match Group results in Data Loss/Corruption

2020-11-30 Thread Robin Lutz (Jira)
Robin Lutz created NIFI-8053:


 Summary: ReplaceText with Expression Language in Match Group 
results in Data Loss/Corruption
 Key: NIFI-8053
 URL: https://issues.apache.org/jira/browse/NIFI-8053
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Affects Versions: 1.12.1, 1.11.4
 Environment: Running in Docker in Kubernetes on a Ubuntu Host
Reporter: Robin Lutz
 Attachments: replace_text_race_condition.xml

This seems to be related to https://issues.apache.org/jira/browse/NIFI-7683

When I run a ReplaceText processor with multiple threads and also manipulate 
the match group within an expression (in my case _${'$1':escapeJson()}_), the 
processor will corrupt (cut off) or lose data.

I attached a template that demonstrates the case.

I did not see these problems in 1.11.2, never tried 1.11.3, and have seen them 
since then with every release (1.12.0 and 1.12.1).

Finally, after noticing the bug mentioned above, I was able to reproduce the 
problem.

Blocker and Important, because the user can lose data.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a change in pull request #945: MINIFICPP-1416 - Upgrade rocksdb and build on VS2019

2020-11-30 Thread GitBox


adamdebreceni commented on a change in pull request #945:
URL: https://github.com/apache/nifi-minifi-cpp/pull/945#discussion_r532637904



##
File path: CMakeLists.txt
##
@@ -90,17 +90,17 @@ set_property(CACHE STRICT_GSL_CHECKS PROPERTY STRINGS 
${STRICT_GSL_CHECKS_Values
 # Use ccache if present
 find_program(CCACHE_FOUND ccache)
 if(CCACHE_FOUND)
-set_property(GLOBAL PROPERTY RULE_LAUNCH_COMPILE ccache)
-set_property(GLOBAL PROPERTY RULE_LAUNCH_LINK ccache)
-message("-- Found ccache: ${CCACHE_FOUND}")
+   set_property(GLOBAL PROPERTY RULE_LAUNCH_COMPILE ccache)

Review comment:
   it seems to me that the dominant indentation in this file is tabs, I 
would go with spaces as well, but wanted to make that step a separate commit





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi-minifi-cpp] arpadboda commented on a change in pull request #945: MINIFICPP-1416 - Upgrade rocksdb and build on VS2019

2020-11-30 Thread GitBox


arpadboda commented on a change in pull request #945:
URL: https://github.com/apache/nifi-minifi-cpp/pull/945#discussion_r532624581



##
File path: CMakeLists.txt
##
@@ -90,17 +90,17 @@ set_property(CACHE STRICT_GSL_CHECKS PROPERTY STRINGS 
${STRICT_GSL_CHECKS_Values
 # Use ccache if present
 find_program(CCACHE_FOUND ccache)
 if(CCACHE_FOUND)
-set_property(GLOBAL PROPERTY RULE_LAUNCH_COMPILE ccache)
-set_property(GLOBAL PROPERTY RULE_LAUNCH_LINK ccache)
-message("-- Found ccache: ${CCACHE_FOUND}")
+   set_property(GLOBAL PROPERTY RULE_LAUNCH_COMPILE ccache)

Review comment:
   Spaces have been replaced with tabs in this file; I think we should stay 
with spaces. 





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi-minifi-cpp] arpadboda commented on a change in pull request #942: MINIFICPP-1410 Add permissions property support for Putfile processor

2020-11-30 Thread GitBox


arpadboda commented on a change in pull request #942:
URL: https://github.com/apache/nifi-minifi-cpp/pull/942#discussion_r532619290



##
File path: extensions/standard-processors/processors/PutFile.h
##
@@ -110,6 +114,22 @@ class PutFile : public core::Processor {
const std::string );
   std::shared_ptr logger_;
   static std::shared_ptr id_generator_;
+
+#ifndef WIN32
+  class FilePermissions {
+static const uint32_t MINIMUM_INVALID_PERMISSIONS_VALUE = 1 << 9;

Review comment:
   Nice solution :)





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
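The FilePermissions helper in the diff above treats any value of 1 << 9 or greater as invalid, since the POSIX rwx permission bits occupy exactly 9 bits. A rough standalone sketch of that validation idea (all names here are hypothetical, not the actual PutFile code):

```cpp
#include <cassert>
#include <cstdint>
#include <string>

// Hypothetical sketch: rwx bits fit in 9 bits, so the smallest invalid
// permission value is 1 << 9 (0o1000).
constexpr uint32_t kMinimumInvalidPermissions = 1u << 9;

bool isValidPermission(uint32_t value) {
  return value < kMinimumInvalidPermissions;
}

// Parse an octal permission string like "644"; returns the invalid marker
// on parse failure or out-of-range input.
uint32_t parsePermissions(const std::string& text) {
  try {
    const uint32_t value = static_cast<uint32_t>(std::stoul(text, nullptr, 8));
    return isValidPermission(value) ? value : kMinimumInvalidPermissions;
  } catch (...) {
    return kMinimumInvalidPermissions;
  }
}
```

Using a sentinel equal to the first invalid value keeps validity a single comparison.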




[jira] [Commented] (NIFI-8051) NIFI ApiOperations should define nicknames to generate unique swagger operationId

2020-11-30 Thread Otto Fowler (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17240777#comment-17240777
 ] 

Otto Fowler commented on NIFI-8051:
---

From looking at this a little bit, fixing one will make the code get junk. 
I'm sure that code gen is the main thing here. It is unfortunate. Maybe 
there is a way to get both things correct, i.e. code gen the way it works now 
but with operationIds unique and not used as the method names?

> NIFI ApiOperations should define nicknames to generate unique swagger 
> operationId
> -
>
> Key: NIFI-8051
> URL: https://issues.apache.org/jira/browse/NIFI-8051
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Otto Fowler
>Priority: Major
>
> The swagger definitions that nifi produces have duplicate operationId's for 
> many operations.
> While the swagger implementation 'works' (i.e. it can generate clients with 
> all the correct operations), it is not per the spec, where unique 
> operationIds are required.
> https://github.com/OAI/OpenAPI-Specification/blob/master/versions/2.0.md#fixed-fields-5
> Thus, tools that are written to the spec throw errors when trying to generate 
> against the nifi api json, such as : 
> {code:json}
>  {
>   "type": "DUPLICATE_OPERATIONID",
>   "message": "Multiple OASs share operations with the same operationId 
> 'getPropertyDescriptor'",
>   "mitigation": "Ignore operation and maintain preexisting operation. The 
> operation from the OAS 'NiFi Rest Api' will be ignored"
> },
> {
>   "type": "DUPLICATE_OPERATIONID",
>   "message": "Multiple OASs share operations with the same operationId 
> 'updateRunStatus'",
>   "mitigation": "Ignore operation and maintain preexisting operation. The 
> operation from the OAS 'NiFi Rest Api' will be ignored"
> },
> {
>   "type": "DUPLICATE_OPERATIONID",
>   "message": "Multiple OASs share operations with the same operationId 
> 'getState'",
>   "mitigation": "Ignore operation and maintain preexisting operation. The 
> operation from the OAS 'NiFi Rest Api' will be ignored"
> },
> {
>   "type": "DUPLICATE_OPERATIONID",
>   "message": "Multiple OASs share operations with the same operationId 
> 'clearState'",
>   "mitigation": "Ignore operation and maintain preexisting operation. The 
> operation from the OAS 'NiFi Rest Api' will be ignored"
> },
> {
>   "type": "MISSING_RESPONSE_SCHEMA",
>   "message": "Operation DELETE /versions/active-requests/{id} has no 
> (valid) response schema. You can use the fillEmptyResponses option to create 
> a placeholder schema",
>   "mitigation": "Ignore operation."
> },
> {
>   "type": "DUPLICATE_OPERATIONID",
>   "message": "Multiple OASs share operations with the same operationId 
> 'deleteUpdateRequest'",
>   "mitigation": "Ignore operation and maintain preexisting operation. The 
> operation from the OAS 'NiFi Rest Api' will be ignored"
> }
> {code}
> the fix for this may be to define a "nickname" that would create a unique 
> operationId, such as
> {code:java}
> @GET
> @Consumes(MediaType.WILDCARD)
> @Produces(MediaType.APPLICATION_JSON)
> @Path("/{id}/state")
> @ApiOperation(
> nickname = "processor_get_state",
> value = "Gets the state for a processor",
> response = ComponentStateEntity.class,
> authorizations = {
> @Authorization(value = "Write - /processors/{uuid}")
> }
> )
> {code}
> to reproduce:
> https://loopback.io/openapi-to-graphql.html against the nifi openapi json
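The spec violation described above boils down to the same operationId appearing on multiple operations. As an illustrative sketch (not tied to NiFi's actual swagger tooling), a duplicate check over a collected list of operationIds could look like:

```cpp
#include <cassert>
#include <set>
#include <string>
#include <vector>

// Collect operationIds that occur more than once; per the OpenAPI 2.0 spec,
// every operationId in a document must be unique.
std::set<std::string> findDuplicateOperationIds(const std::vector<std::string>& ids) {
  std::set<std::string> seen;
  std::set<std::string> duplicates;
  for (const auto& id : ids) {
    if (!seen.insert(id).second) {
      duplicates.insert(id);  // insert failed => id was already seen
    }
  }
  return duplicates;
}
```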



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi-minifi-cpp] adamdebreceni opened a new pull request #945: MINIFICPP-1416 - Upgrade rocksdb and build on VS2019

2020-11-30 Thread GitBox


adamdebreceni opened a new pull request #945:
URL: https://github.com/apache/nifi-minifi-cpp/pull/945


   Thank you for submitting a contribution to Apache NiFi - MiNiFi C++.
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [ ] Is there a JIRA ticket associated with this PR? Is it referenced
in the commit message?
   
   - [ ] Does your PR title start with MINIFICPP-XXXX where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [ ] Has your PR been rebased against the latest commit within the target 
branch (typically main)?
   
   - [ ] Is your initial contribution a single, squashed commit?
   
   ### For code changes:
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the LICENSE file?
   - [ ] If applicable, have you updated the NOTICE file?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check GitHub Actions CI 
results for build issues and submit an update to your PR as soon as possible.
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (MINIFICPP-1413) Improve exception logging

2020-11-30 Thread Marton Szasz (Jira)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-1413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marton Szasz updated MINIFICPP-1413:

Labels: MiNiFi-CPP-Hygiene  (was: )

> Improve exception logging
> -
>
> Key: MINIFICPP-1413
> URL: https://issues.apache.org/jira/browse/MINIFICPP-1413
> Project: Apache NiFi MiNiFi C++
>  Issue Type: Bug
>Reporter: Marton Szasz
>Assignee: Marton Szasz
>Priority: Major
>  Labels: MiNiFi-CPP-Hygiene
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> 1. No empty std::exceptions
> 2. Add typeid to catch-all logging when possible
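For point 2, one common pattern (a sketch of the idea, not the actual MiNiFi change) is to log the dynamic type of the caught exception via typeid, which resolves to the most-derived type for polymorphic types such as std::exception:

```cpp
#include <cassert>
#include <stdexcept>
#include <string>
#include <typeinfo>

// Returns a log-friendly description of an exception: its dynamic type name
// (implementation-defined, possibly mangled) plus the what() message.
std::string describeException(const std::exception& e) {
  return std::string(typeid(e).name()) + ": " + e.what();
}
```

Note that std::type_info::name() output is implementation-defined; on common compilers it still contains the class name, which is usually enough for a catch-all log line.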



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (MINIFICPP-1416) Compile with Visual Studio 2019

2020-11-30 Thread Adam Debreceni (Jira)
Adam Debreceni created MINIFICPP-1416:
-

 Summary: Compile with Visual Studio 2019
 Key: MINIFICPP-1416
 URL: https://issues.apache.org/jira/browse/MINIFICPP-1416
 Project: Apache NiFi MiNiFi C++
  Issue Type: New Feature
Reporter: Adam Debreceni
Assignee: Adam Debreceni


We should be able to build using Visual Studio 2019.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (MINIFICPP-1415) Use raw pointer overloads of onTrigger and onSchedule when possible

2020-11-30 Thread Marton Szasz (Jira)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-1415?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marton Szasz updated MINIFICPP-1415:

Description: 
To avoid mandatory shared ownership of ProcessSessionFactory, ProcessSession 
and ProcessContext, possibly enabling future refactoring to simplify processor 
code by removing these overloads.

This would also simplify processor implementation which is important for 
third-party extensibility. I would love to eventually see an ecosystem where 
people start creating their own independent processors for minifi c++ to 
interface with other software.

 

original idea: 
https://github.com/apache/nifi-minifi-cpp/pull/943#discussion_r532575479

  was:
To avoid mandatory shared ownership of ProcessSessionFactory, ProcessSession 
and ProcessContext, possibly enabling future refactoring to simplify processor 
code by removing these overloads.

This would also simplify processor implementation which is important for 
third-party extensibility. I would love to eventually see an ecosystem where 
people start creating their own independent processors for minifi c++ to 
interface with other software.


> Use raw pointer overloads of onTrigger and onSchedule when possible
> ---
>
> Key: MINIFICPP-1415
> URL: https://issues.apache.org/jira/browse/MINIFICPP-1415
> Project: Apache NiFi MiNiFi C++
>  Issue Type: Improvement
>Reporter: Marton Szasz
>Priority: Minor
>
> To avoid mandatory shared ownership of ProcessSessionFactory, ProcessSession 
> and ProcessContext, possibly enabling future refactoring to simplify 
> processor code by removing these overloads.
> This would also simplify processor implementation which is important for 
> third-party extensibility. I would love to eventually see an ecosystem where 
> people start creating their own independent processors for minifi c++ to 
> interface with other software.
>  
> original idea: 
> https://github.com/apache/nifi-minifi-cpp/pull/943#discussion_r532575479
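The proposed direction can be pictured as the shared_ptr overload forwarding to a raw-pointer overload, so a processor overrides only the raw one and never participates in shared ownership. A minimal hypothetical sketch (simplified types, not the real minifi interfaces):

```cpp
#include <cassert>
#include <memory>

struct ProcessContext { int value = 0; };

class Processor {
 public:
  virtual ~Processor() = default;
  // Compatibility overload: forwards to the raw-pointer overload, so callers
  // holding a shared_ptr still work but the processor takes no ownership.
  void onTrigger(const std::shared_ptr<ProcessContext>& context) {
    onTrigger(context.get());
  }
  // Processors override this; no shared ownership implied.
  virtual void onTrigger(ProcessContext* context) = 0;
};

class CountingProcessor : public Processor {
 public:
  int triggers = 0;
  void onTrigger(ProcessContext* /*context*/) override { ++triggers; }
};
```

Callers go through the base-class interface, so both overloads dispatch to the single raw-pointer implementation.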



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #943: MINIFICPP-1413 avoid empty std::exception, improve logging

2020-11-30 Thread GitBox


szaszm commented on a change in pull request #943:
URL: https://github.com/apache/nifi-minifi-cpp/pull/943#discussion_r532581458



##
File path: extensions/standard-processors/processors/GetTCP.h
##
@@ -212,29 +207,29 @@ class GetTCP : public core::Processor, public 
state::response::MetricsNodeSource
* @param sessionFactory process session factory that is used when creating
* ProcessSession objects.
*/
-  virtual void onSchedule(const std::shared_ptr<core::ProcessContext> &processContext, const std::shared_ptr<core::ProcessSessionFactory> &sessionFactory);
+  void onSchedule(const std::shared_ptr<core::ProcessContext> &processContext, const std::shared_ptr<core::ProcessSessionFactory> &sessionFactory) override;
 
-  void onSchedule(core::ProcessContext *processContext, core::ProcessSessionFactory *sessionFactory) {
-    throw std::exception();
+  void onSchedule(core::ProcessContext *processContext, core::ProcessSessionFactory *sessionFactory) override {
+    throw std::logic_error{"GetTCP::onSchedule(ProcessContext*, ProcessSessionFactory*) is unimplemented"};
   }
   /**
    * Execution trigger for the GetTCP Processor
    * @param context processor context
    * @param session processor session reference.
    */
-  virtual void onTrigger(const std::shared_ptr<core::ProcessContext> &context, const std::shared_ptr<core::ProcessSession> &session);
+  void onTrigger(const std::shared_ptr<core::ProcessContext> &context, const std::shared_ptr<core::ProcessSession> &session) override;
 
-  virtual void onTrigger(core::ProcessContext *context, core::ProcessSession *session) {
-    throw std::exception();
+  void onTrigger(core::ProcessContext *context, core::ProcessSession *session) override {
+    throw std::logic_error{"GetTCP::onTrigger(ProcessContext*, ProcessSession*) is unimplemented"};

Review comment:
   [MINIFICPP-1415 Use raw pointer overloads of onTrigger and onSchedule 
when possible](https://issues.apache.org/jira/browse/MINIFICPP-1415)





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (MINIFICPP-1415) Use raw pointer overloads of onTrigger and onSchedule when possible

2020-11-30 Thread Marton Szasz (Jira)
Marton Szasz created MINIFICPP-1415:
---

 Summary: Use raw pointer overloads of onTrigger and onSchedule 
when possible
 Key: MINIFICPP-1415
 URL: https://issues.apache.org/jira/browse/MINIFICPP-1415
 Project: Apache NiFi MiNiFi C++
  Issue Type: Improvement
Reporter: Marton Szasz


To avoid mandatory shared ownership of ProcessSessionFactory, ProcessSession 
and ProcessContext, possibly enabling future refactoring to simplify processor 
code by removing these overloads.

This would also simplify processor implementation which is important for 
third-party extensibility. I would love to eventually see an ecosystem where 
people start creating their own independent processors for minifi c++ to 
interface with other software.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (MINIFICPP-1414) Compress logs in-memory so they are ready for transfer

2020-11-30 Thread Adam Debreceni (Jira)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-1414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Debreceni updated MINIFICPP-1414:
--
Summary: Compress logs in-memory so they are ready for transfer  (was: 
Compress logs so they are ready for transfer through the C2 protocol)

> Compress logs in-memory so they are ready for transfer
> --
>
> Key: MINIFICPP-1414
> URL: https://issues.apache.org/jira/browse/MINIFICPP-1414
> Project: Apache NiFi MiNiFi C++
>  Issue Type: New Feature
>Reporter: Adam Debreceni
>Assignee: Adam Debreceni
>Priority: Major
>
> A compressed in-memory copy of the application logs should be prepared.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (MINIFICPP-1414) Compress logs in-memory so they are ready for transfer

2020-11-30 Thread Adam Debreceni (Jira)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-1414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Debreceni updated MINIFICPP-1414:
--
Description: A compressed in-memory copy of the application logs should be 
prepared. Later a C2 server should be able to direct the agent to upload these 
compressed logs to a C2-specified location.  (was: A compressed in-memory copy 
of the application logs should be prepared.)

> Compress logs in-memory so they are ready for transfer
> --
>
> Key: MINIFICPP-1414
> URL: https://issues.apache.org/jira/browse/MINIFICPP-1414
> Project: Apache NiFi MiNiFi C++
>  Issue Type: New Feature
>Reporter: Adam Debreceni
>Assignee: Adam Debreceni
>Priority: Major
>
> A compressed in-memory copy of the application logs should be prepared. Later 
> a C2 server should be able to direct the agent to upload these compressed 
> logs to a C2-specified location.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #943: MINIFICPP-1413 avoid empty std::exception, improve logging

2020-11-30 Thread GitBox


szaszm commented on a change in pull request #943:
URL: https://github.com/apache/nifi-minifi-cpp/pull/943#discussion_r532576416



##
File path: extensions/standard-processors/processors/GetTCP.h
##
@@ -212,29 +207,29 @@ class GetTCP : public core::Processor, public 
state::response::MetricsNodeSource
* @param sessionFactory process session factory that is used when creating
* ProcessSession objects.
*/
-  virtual void onSchedule(const std::shared_ptr<core::ProcessContext> &processContext, const std::shared_ptr<core::ProcessSessionFactory> &sessionFactory);
+  void onSchedule(const std::shared_ptr<core::ProcessContext> &processContext, const std::shared_ptr<core::ProcessSessionFactory> &sessionFactory) override;
 
-  void onSchedule(core::ProcessContext *processContext, core::ProcessSessionFactory *sessionFactory) {
-    throw std::exception();
+  void onSchedule(core::ProcessContext *processContext, core::ProcessSessionFactory *sessionFactory) override {
+    throw std::logic_error{"GetTCP::onSchedule(ProcessContext*, ProcessSessionFactory*) is unimplemented"};

Review comment:
   I also missed it until the compilation errors popped up. :smile: 





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a change in pull request #943: MINIFICPP-1413 avoid empty std::exception, improve logging

2020-11-30 Thread GitBox


adamdebreceni commented on a change in pull request #943:
URL: https://github.com/apache/nifi-minifi-cpp/pull/943#discussion_r532575479



##
File path: extensions/standard-processors/processors/GetTCP.h
##
@@ -212,29 +207,29 @@ class GetTCP : public core::Processor, public 
state::response::MetricsNodeSource
* @param sessionFactory process session factory that is used when creating
* ProcessSession objects.
*/
-  virtual void onSchedule(const std::shared_ptr<core::ProcessContext> &processContext, const std::shared_ptr<core::ProcessSessionFactory> &sessionFactory);
+  void onSchedule(const std::shared_ptr<core::ProcessContext> &processContext, const std::shared_ptr<core::ProcessSessionFactory> &sessionFactory) override;
 
-  void onSchedule(core::ProcessContext *processContext, core::ProcessSessionFactory *sessionFactory) {
-    throw std::exception();
+  void onSchedule(core::ProcessContext *processContext, core::ProcessSessionFactory *sessionFactory) override {
+    throw std::logic_error{"GetTCP::onSchedule(ProcessContext*, ProcessSessionFactory*) is unimplemented"};
   }
   /**
    * Execution trigger for the GetTCP Processor
    * @param context processor context
    * @param session processor session reference.
    */
-  virtual void onTrigger(const std::shared_ptr<core::ProcessContext> &context, const std::shared_ptr<core::ProcessSession> &session);
+  void onTrigger(const std::shared_ptr<core::ProcessContext> &context, const std::shared_ptr<core::ProcessSession> &session) override;
 
-  virtual void onTrigger(core::ProcessContext *context, core::ProcessSession *session) {
-    throw std::exception();
+  void onTrigger(core::ProcessContext *context, core::ProcessSession *session) override {
+    throw std::logic_error{"GetTCP::onTrigger(ProcessContext*, ProcessSession*) is unimplemented"};

Review comment:
   I agree that the symmetry is nice, I think we can leave it be for now, 
and then take a look at these (not just this processor but others as well) in a 
separate PR to override the raw-pointer-taking method wherever possible





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a change in pull request #943: MINIFICPP-1413 avoid empty std::exception, improve logging

2020-11-30 Thread GitBox


adamdebreceni commented on a change in pull request #943:
URL: https://github.com/apache/nifi-minifi-cpp/pull/943#discussion_r532572893



##
File path: extensions/standard-processors/processors/GetTCP.h
##
@@ -212,29 +207,29 @@ class GetTCP : public core::Processor, public 
state::response::MetricsNodeSource
* @param sessionFactory process session factory that is used when creating
* ProcessSession objects.
*/
-  virtual void onSchedule(const std::shared_ptr<core::ProcessContext> &processContext, const std::shared_ptr<core::ProcessSessionFactory> &sessionFactory);
+  void onSchedule(const std::shared_ptr<core::ProcessContext> &processContext, const std::shared_ptr<core::ProcessSessionFactory> &sessionFactory) override;
 
-  void onSchedule(core::ProcessContext *processContext, core::ProcessSessionFactory *sessionFactory) {
-    throw std::exception();
+  void onSchedule(core::ProcessContext *processContext, core::ProcessSessionFactory *sessionFactory) override {
+    throw std::logic_error{"GetTCP::onSchedule(ProcessContext*, ProcessSessionFactory*) is unimplemented"};

Review comment:
   I see, I just glanced over it and missed the part where it stores the 
factory





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #943: MINIFICPP-1413 avoid empty std::exception, improve logging

2020-11-30 Thread GitBox


szaszm commented on a change in pull request #943:
URL: https://github.com/apache/nifi-minifi-cpp/pull/943#discussion_r532570892



##
File path: extensions/standard-processors/processors/GetTCP.h
##
@@ -212,29 +207,29 @@ class GetTCP : public core::Processor, public 
state::response::MetricsNodeSource
* @param sessionFactory process session factory that is used when creating
* ProcessSession objects.
*/
-  virtual void onSchedule(const std::shared_ptr<core::ProcessContext> &processContext, const std::shared_ptr<core::ProcessSessionFactory> &sessionFactory);
+  void onSchedule(const std::shared_ptr<core::ProcessContext> &processContext, const std::shared_ptr<core::ProcessSessionFactory> &sessionFactory) override;
 
-  void onSchedule(core::ProcessContext *processContext, core::ProcessSessionFactory *sessionFactory) {
-    throw std::exception();
+  void onSchedule(core::ProcessContext *processContext, core::ProcessSessionFactory *sessionFactory) override {
+    throw std::logic_error{"GetTCP::onSchedule(ProcessContext*, ProcessSessionFactory*) is unimplemented"};
   }
   /**
    * Execution trigger for the GetTCP Processor
    * @param context processor context
    * @param session processor session reference.
    */
-  virtual void onTrigger(const std::shared_ptr<core::ProcessContext> &context, const std::shared_ptr<core::ProcessSession> &session);
+  void onTrigger(const std::shared_ptr<core::ProcessContext> &context, const std::shared_ptr<core::ProcessSession> &session) override;
 
-  virtual void onTrigger(core::ProcessContext *context, core::ProcessSession *session) {
-    throw std::exception();
+  void onTrigger(core::ProcessContext *context, core::ProcessSession *session) override {
+    throw std::logic_error{"GetTCP::onTrigger(ProcessContext*, ProcessSession*) is unimplemented"};

Review comment:
   I could change onTrigger but prefer not to for symmetry with onSchedule.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #943: MINIFICPP-1413 avoid empty std::exception, improve logging

2020-11-30 Thread GitBox


szaszm commented on a change in pull request #943:
URL: https://github.com/apache/nifi-minifi-cpp/pull/943#discussion_r532570466



##
File path: extensions/standard-processors/processors/GetTCP.h
##
@@ -212,29 +207,29 @@ class GetTCP : public core::Processor, public 
state::response::MetricsNodeSource
* @param sessionFactory process session factory that is used when creating
* ProcessSession objects.
*/
-  virtual void onSchedule(const std::shared_ptr<core::ProcessContext> &processContext, const std::shared_ptr<core::ProcessSessionFactory> &sessionFactory);
+  void onSchedule(const std::shared_ptr<core::ProcessContext> &processContext, const std::shared_ptr<core::ProcessSessionFactory> &sessionFactory) override;
 
-  void onSchedule(core::ProcessContext *processContext, core::ProcessSessionFactory *sessionFactory) {
-    throw std::exception();
+  void onSchedule(core::ProcessContext *processContext, core::ProcessSessionFactory *sessionFactory) override {
+    throw std::logic_error{"GetTCP::onSchedule(ProcessContext*, ProcessSessionFactory*) is unimplemented"};

Review comment:
   Ideally yes, but at the moment GetTCP creates flow files as data arrives, 
meaning that it needs to be able to create a ProcessSession whenever needed, and 
shared ownership of ProcessSessionFactory is how it achieves this.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
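The comment above describes GetTCP opening sessions as data arrives, which is why it keeps shared ownership of the ProcessSessionFactory beyond onSchedule. A simplified hypothetical sketch of that pattern (not the real minifi classes):

```cpp
#include <cassert>
#include <memory>
#include <utility>

struct ProcessSession {
  void commit() {}
};

struct ProcessSessionFactory {
  int created = 0;
  std::shared_ptr<ProcessSession> createSession() {
    ++created;
    return std::make_shared<ProcessSession>();
  }
};

// GetTCP-like processor: stores the factory at schedule time so it can open
// a fresh session whenever data arrives, outside any onTrigger call.
class TcpListenerLike {
 public:
  void onSchedule(std::shared_ptr<ProcessSessionFactory> factory) {
    factory_ = std::move(factory);
  }
  void onDataArrived() {
    auto session = factory_->createSession();
    session->commit();
  }
 private:
  std::shared_ptr<ProcessSessionFactory> factory_;
};
```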




[jira] [Updated] (MINIFICPP-1414) Compress logs so they are ready for transfer through the C2 protocol

2020-11-30 Thread Adam Debreceni (Jira)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-1414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Debreceni updated MINIFICPP-1414:
--
Description: A compressed in-memory copy of the application logs should be 
prepared.

> Compress logs so they are ready for transfer through the C2 protocol
> 
>
> Key: MINIFICPP-1414
> URL: https://issues.apache.org/jira/browse/MINIFICPP-1414
> Project: Apache NiFi MiNiFi C++
>  Issue Type: New Feature
>Reporter: Adam Debreceni
>Assignee: Adam Debreceni
>Priority: Major
>
> A compressed in-memory copy of the application logs should be prepared.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (MINIFICPP-1414) Compress logs so they are ready for transfer through the C2 protocol

2020-11-30 Thread Adam Debreceni (Jira)
Adam Debreceni created MINIFICPP-1414:
-

 Summary: Compress logs so they are ready for transfer through the 
C2 protocol
 Key: MINIFICPP-1414
 URL: https://issues.apache.org/jira/browse/MINIFICPP-1414
 Project: Apache NiFi MiNiFi C++
  Issue Type: New Feature
Reporter: Adam Debreceni
Assignee: Adam Debreceni






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-8039) Decrease the initial resource reservation of Listen processor family

2020-11-30 Thread Peter Turcsanyi (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8039?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Turcsanyi updated NIFI-8039:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Decrease the initial resource reservation of Listen processor family
> 
>
> Key: NIFI-8039
> URL: https://issues.apache.org/jira/browse/NIFI-8039
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Simon Bence
>Assignee: Simon Bence
>Priority: Major
> Fix For: 1.13.0
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> ListenTCP and the other processors in the listen family reserve 
> resources at initialisation based on processor properties. This 
> includes:
>  * The executor in SocketChannelDispatcher is created as a fixed thread pool
>  * The byte buffer pool is built up based on Max Number of TCP Connections, 
> which with a high connection count can consume a large amount of memory
> In some use cases this is undesirable and might even cause initialisation to 
> fail. To avoid this, I suggest removing the buffer pool and replacing the 
> fixed thread pool with a more flexible (scaling) 
> implementation.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (NIFI-8039) Decrease the initial resource reservation of Listen processor family

2020-11-30 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8039?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17240589#comment-17240589
 ] 

ASF subversion and git services commented on NIFI-8039:
---

Commit 39f8a008d4bde2024eb804ff7017542a6e86b572 in nifi's branch 
refs/heads/main from Bence Simon
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=39f8a00 ]

NIFI-8039 Adding properties to ListenTCP in order to allow refine behaviour 
under higher load; Refining thread pool for better scalability

NIFI-8039 Review findings; refining thread pool to be able to scale down 
properly when not under load
NIFI-8039 Answers to PR comments

This closes #4689.

Signed-off-by: Peter Turcsanyi 


> Decrease the initial resource reservation of Listen processor family
> 
>
> Key: NIFI-8039
> URL: https://issues.apache.org/jira/browse/NIFI-8039
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Simon Bence
>Assignee: Simon Bence
>Priority: Major
> Fix For: 1.13.0
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> ListenTCP and the other processors in the listen family reserve 
> resources at initialisation based on processor properties. This 
> includes:
>  * The executor in SocketChannelDispatcher is created as a fixed thread pool
>  * The byte buffer pool is built up based on Max Number of TCP Connections, 
> which with a high connection count can consume a large amount of memory
> In some use cases this is undesirable and might even cause initialisation to 
> fail. To avoid this, I suggest removing the buffer pool and replacing the 
> fixed thread pool with a more flexible (scaling) 
> implementation.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi] asfgit closed pull request #4689: NIFI-8039 Adding properties to ListenTCP in order to allow refine behaviour under higher load; Refining thread pool for better scalability

2020-11-30 Thread GitBox


asfgit closed pull request #4689:
URL: https://github.com/apache/nifi/pull/4689


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (NIFI-8039) Decrease the initial resource reservation of Listen processor family

2020-11-30 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8039?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17240590#comment-17240590
 ] 

ASF subversion and git services commented on NIFI-8039:
---

Commit 39f8a008d4bde2024eb804ff7017542a6e86b572 in nifi's branch 
refs/heads/main from Bence Simon
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=39f8a00 ]

NIFI-8039 Adding properties to ListenTCP in order to allow refine behaviour 
under higher load; Refining thread pool for better scalability

NIFI-8039 Review findings; refining thread pool to be able to scale down 
properly when not under load
NIFI-8039 Answers to PR comments

This closes #4689.

Signed-off-by: Peter Turcsanyi 


> Decrease the initial resource reservation of Listen processor family
> 
>
> Key: NIFI-8039
> URL: https://issues.apache.org/jira/browse/NIFI-8039
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Simon Bence
>Assignee: Simon Bence
>Priority: Major
> Fix For: 1.13.0
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> ListenTCP and the other processors in the listen family reserve 
> resources at initialisation based on processor properties. This 
> includes:
>  * The executor in SocketChannelDispatcher is created as a fixed thread pool
>  * The byte buffer pool is built up based on Max Number of TCP Connections, 
> which with a high connection count can consume a large amount of memory
> In some use cases this is undesirable and might even cause initialisation to 
> fail. To avoid this, I suggest removing the buffer pool and replacing the 
> fixed thread pool with a more flexible (scaling) 
> implementation.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (NIFI-8039) Decrease the initial resource reservation of Listen processor family

2020-11-30 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8039?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17240588#comment-17240588
 ] 

ASF subversion and git services commented on NIFI-8039:
---

Commit 39f8a008d4bde2024eb804ff7017542a6e86b572 in nifi's branch 
refs/heads/main from Bence Simon
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=39f8a00 ]

NIFI-8039 Adding properties to ListenTCP in order to allow refine behaviour 
under higher load; Refining thread pool for better scalability

NIFI-8039 Review findings; refining thread pool to be able to scale down 
properly when not under load
NIFI-8039 Answers to PR comments

This closes #4689.

Signed-off-by: Peter Turcsanyi 


> Decrease the initial resource reservation of Listen processor family
> 
>
> Key: NIFI-8039
> URL: https://issues.apache.org/jira/browse/NIFI-8039
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Simon Bence
>Assignee: Simon Bence
>Priority: Major
> Fix For: 1.13.0
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> ListenTCP and the other processors in the listen family reserve 
> resources at initialisation based on the processor's properties. This 
> includes:
>  * The executor in SocketChannelDispatcher is created as a fixed thread pool
>  * The byte buffer pool is built up based on Max Number of TCP Connections, 
> which with a high connection count can consume a large amount of memory
> In some use cases this is undesirable and might even cause initialisation 
> to fail. To avoid this, I suggest removing the buffer pool and replacing 
> the fixed thread pool with a more flexible (scaling) implementation.





[GitHub] [nifi] turcsanyip commented on a change in pull request #4689: NIFI-8039 Adding properties to ListenTCP in order to allow refine behaviour under higher load; Refining thread pool for better s

2020-11-30 Thread GitBox


turcsanyip commented on a change in pull request #4689:
URL: https://github.com/apache/nifi/pull/4689#discussion_r532445464



##
File path: nifi-nar-bundles/nifi-extension-utils/nifi-processor-utils/src/main/java/org/apache/nifi/processor/util/listen/dispatcher/SocketChannelDispatcher.java
##
@@ -52,61 +53,80 @@
 
     private final EventFactory<E> eventFactory;
     private final ChannelHandlerFactory<E, AsyncChannelDispatcher> handlerFactory;
-    private final BlockingQueue<ByteBuffer> bufferPool;
+    private final ByteBufferSource bufferSource;
     private final BlockingQueue<E> events;
     private final ComponentLog logger;
     private final int maxConnections;
+    private final int maxThreadPoolSize;
     private final SSLContext sslContext;
     private final ClientAuth clientAuth;
     private final Charset charset;
 
-    private ExecutorService executor;
+    private ThreadPoolExecutor executor;
     private volatile boolean stopped = false;
    private Selector selector;
     private final BlockingQueue<SelectionKey> keyQueue;
     private final AtomicInteger currentConnections = new AtomicInteger(0);
 
     public SocketChannelDispatcher(final EventFactory<E> eventFactory,
                                    final ChannelHandlerFactory<E, AsyncChannelDispatcher> handlerFactory,
-                                   final BlockingQueue<ByteBuffer> bufferPool,
+                                   final ByteBufferSource bufferSource,
+                                   final BlockingQueue<E> events,
+                                   final ComponentLog logger,
+                                   final int maxConnections,
+                                   final SSLContext sslContext,
+                                   final Charset charset) {
+        this(eventFactory, handlerFactory, bufferSource, events, logger, maxConnections, sslContext, ClientAuth.REQUIRED, charset);
+    }
+
+    public SocketChannelDispatcher(final EventFactory<E> eventFactory,
+                                   final ChannelHandlerFactory<E, AsyncChannelDispatcher> handlerFactory,
+                                   final ByteBufferSource bufferSource,
                                    final BlockingQueue<E> events,
                                    final ComponentLog logger,
                                    final int maxConnections,
                                    final SSLContext sslContext,
+                                   final ClientAuth clientAuth,
                                    final Charset charset) {
-        this(eventFactory, handlerFactory, bufferPool, events, logger, maxConnections, sslContext, ClientAuth.REQUIRED, charset);
+        this(eventFactory, handlerFactory, bufferSource, events, logger, maxConnections, maxConnections, sslContext, clientAuth, charset);
     }
 
     public SocketChannelDispatcher(final EventFactory<E> eventFactory,
                                    final ChannelHandlerFactory<E, AsyncChannelDispatcher> handlerFactory,
-                                   final BlockingQueue<ByteBuffer> bufferPool,
+                                   final ByteBufferSource bufferSource,
                                    final BlockingQueue<E> events,
                                    final ComponentLog logger,
                                    final int maxConnections,
+                                   final int maxThreadPoolSize,
                                    final SSLContext sslContext,
                                    final ClientAuth clientAuth,
                                    final Charset charset) {
         this.eventFactory = eventFactory;
         this.handlerFactory = handlerFactory;
-        this.bufferPool = bufferPool;
+        this.bufferSource = bufferSource;
         this.events = events;
         this.logger = logger;
         this.maxConnections = maxConnections;
+        this.maxThreadPoolSize = maxThreadPoolSize;
         this.keyQueue = new LinkedBlockingQueue<>(maxConnections);
         this.sslContext = sslContext;
         this.clientAuth = clientAuth;
         this.charset = charset;
-
-        if (bufferPool == null || bufferPool.size() == 0 || bufferPool.size() != maxConnections) {
-            throw new IllegalArgumentException(
-                    "A pool of available ByteBuffers equal to the maximum number of connections is required");
-        }
     }
 
     @Override
     public void open(final InetAddress nicAddress, final int port, final int maxBufferSize) throws IOException {
+        final InetSocketAddress inetSocketAddress = new InetSocketAddress(nicAddress, port);
+
         stopped = false;
-        executor = Executors.newFixedThreadPool(maxConnections);
+        executor = new ThreadPoolExecutor(
+                maxThreadPoolSize,
+                maxThreadPoolSize,
+                60L,
+                TimeUnit.SECONDS,
+                new LinkedBlockingQueue<>(),
+                new BasicThreadFactory.Builder().namingPattern(inetSocketAddress.toString() + "-dispatcher-%d").build());

Review comment:
   It is fine with me. Though the network address 
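
The `namingPattern` in the diff above comes from commons-lang3's `BasicThreadFactory`. For reference, a JDK-only sketch of the same `<address>-dispatcher-%d` naming scheme (class and method names here are hypothetical, not part of the PR):

```java
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.atomic.AtomicInteger;

public class NamedThreadFactorySketch {

    // Builds a ThreadFactory that names each thread after the listener's
    // socket address plus an incrementing dispatcher index.
    static ThreadFactory dispatcherThreads(String address) {
        AtomicInteger counter = new AtomicInteger();
        return runnable -> {
            Thread t = new Thread(runnable);
            t.setName(address + "-dispatcher-" + counter.getAndIncrement());
            t.setDaemon(true);
            return t;
        };
    }

    public static void main(String[] args) {
        ThreadFactory factory = dispatcherThreads("/127.0.0.1:514");
        System.out.println(factory.newThread(() -> { }).getName());
        System.out.println(factory.newThread(() -> { }).getName());
    }
}
```

Naming threads after the bound address makes thread dumps from hosts running several listeners much easier to read.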

[GitHub] [nifi-minifi-cpp] adamdebreceni opened a new pull request #944: MINIFICPP-1412 - Ensure that all FlowFiles have a non-null ResourceClaim

2020-11-30 Thread GitBox


adamdebreceni opened a new pull request #944:
URL: https://github.com/apache/nifi-minifi-cpp/pull/944


   Thank you for submitting a contribution to Apache NiFi - MiNiFi C++.
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [ ] Is there a JIRA ticket associated with this PR? Is it referenced
in the commit message?
   
   - [ ] Does your PR title start with MINIFICPP-XXXX where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [ ] Has your PR been rebased against the latest commit within the target 
branch (typically main)?
   
   - [ ] Is your initial contribution a single, squashed commit?
   
   ### For code changes:
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the LICENSE file?
   - [ ] If applicable, have you updated the NOTICE file?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check GitHub Actions CI 
results for build issues and submit an update to your PR as soon as possible.
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a change in pull request #943: MINIFICPP-1413 avoid empty std::exception, improve logging

2020-11-30 Thread GitBox


adamdebreceni commented on a change in pull request #943:
URL: https://github.com/apache/nifi-minifi-cpp/pull/943#discussion_r532428911



##
File path: libminifi/test/TestBase.cpp
##
@@ -140,7 +140,7 @@ std::shared_ptr<core::Processor> TestPlan::addProcessor(const std::string 
 
   auto ptr = core::ClassLoader::getDefaultClassLoader().instantiate(processor_name, uuid);
   if (nullptr == ptr) {
-    throw std::exception();
+    throw std::runtime_error{fmt::format("Failed to instantiate processor name: {0} uuid: {1}", processor_name, uuid.to_string().c_str())};

Review comment:
   why the `.c_str()`?









[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a change in pull request #943: MINIFICPP-1413 avoid empty std::exception, improve logging

2020-11-30 Thread GitBox


adamdebreceni commented on a change in pull request #943:
URL: https://github.com/apache/nifi-minifi-cpp/pull/943#discussion_r532418781



##
File path: extensions/standard-processors/processors/GetTCP.h
##
@@ -212,29 +207,29 @@ class GetTCP : public core::Processor, public state::response::MetricsNodeSource
    * @param sessionFactory process session factory that is used when creating
    * ProcessSession objects.
    */
-  virtual void onSchedule(const std::shared_ptr<core::ProcessContext> &processContext, const std::shared_ptr<core::ProcessSessionFactory> &sessionFactory);
+  void onSchedule(const std::shared_ptr<core::ProcessContext> &processContext, const std::shared_ptr<core::ProcessSessionFactory> &sessionFactory) override;
 
-  void onSchedule(core::ProcessContext *processContext, core::ProcessSessionFactory *sessionFactory) {
-    throw std::exception();
+  void onSchedule(core::ProcessContext *processContext, core::ProcessSessionFactory *sessionFactory) override {
+    throw std::logic_error{"GetTCP::onSchedule(ProcessContext*, ProcessSessionFactory*) is unimplemented"};
   }
   /**
    * Execution trigger for the GetTCP Processor
    * @param context processor context
    * @param session processor session reference.
    */
-  virtual void onTrigger(const std::shared_ptr<core::ProcessContext> &context, const std::shared_ptr<core::ProcessSession> &session);
+  void onTrigger(const std::shared_ptr<core::ProcessContext> &context, const std::shared_ptr<core::ProcessSession> &session) override;
 
-  virtual void onTrigger(core::ProcessContext *context, core::ProcessSession *session) {
-    throw std::exception();
+  void onTrigger(core::ProcessContext *context, core::ProcessSession *session) override {
+    throw std::logic_error{"GetTCP::onTrigger(ProcessContext*, ProcessSession*) is unimplemented"};

Review comment:
   I believe the same reasoning applies here: we should move the 
`onTrigger` implementation to this method and leave the smart-pointer-taking 
`onTrigger` to its default behavior.









[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a change in pull request #943: MINIFICPP-1413 avoid empty std::exception, improve logging

2020-11-30 Thread GitBox


adamdebreceni commented on a change in pull request #943:
URL: https://github.com/apache/nifi-minifi-cpp/pull/943#discussion_r532415163



##
File path: extensions/standard-processors/processors/GetTCP.h
##
@@ -212,29 +207,29 @@ class GetTCP : public core::Processor, public state::response::MetricsNodeSource
    * @param sessionFactory process session factory that is used when creating
    * ProcessSession objects.
    */
-  virtual void onSchedule(const std::shared_ptr<core::ProcessContext> &processContext, const std::shared_ptr<core::ProcessSessionFactory> &sessionFactory);
+  void onSchedule(const std::shared_ptr<core::ProcessContext> &processContext, const std::shared_ptr<core::ProcessSessionFactory> &sessionFactory) override;
 
-  void onSchedule(core::ProcessContext *processContext, core::ProcessSessionFactory *sessionFactory) {
-    throw std::exception();
+  void onSchedule(core::ProcessContext *processContext, core::ProcessSessionFactory *sessionFactory) override {
+    throw std::logic_error{"GetTCP::onSchedule(ProcessContext*, ProcessSessionFactory*) is unimplemented"};

Review comment:
   I kind of get the reason behind this `onSchedule` overload, but 
shouldn't we override the raw-pointer-taking `onSchedule` (unless the processor 
wants to store the context or factory) and leave the 
`onSchedule(shared_ptr<ProcessContext>, shared_ptr<ProcessSessionFactory>)` 
alone (defaulting to forwarding to the raw-pointer-taking `onSchedule` in 
`Processor`)? Then we wouldn't have to throw at all.
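
The forwarding-default idea the reviewer describes is language-agnostic: the wrapper-taking overload in the base class forwards to the raw-taking one, so a subclass overrides exactly one method and neither needs a throwing stub. A minimal sketch in Java (all names hypothetical; `Ref` stands in for the smart pointer):

```java
public class ForwardingOverloadSketch {

    static class Context { String name = "ctx"; }

    // Stand-in for the smart-pointer parameter in the C++ code.
    static class Ref<T> {
        final T value;
        Ref(T value) { this.value = value; }
        T get() { return value; }
    }

    static class Processor {
        // The wrapper-taking overload forwards to the raw one by default,
        // so subclasses override exactly one method and never need to
        // throw "unimplemented" from the other.
        void onSchedule(Ref<Context> context) { onSchedule(context.get()); }
        void onSchedule(Context context) { /* base default: no-op */ }
    }

    static class GetTcpLike extends Processor {
        String scheduledWith;
        @Override
        void onSchedule(Context context) { scheduledWith = context.name; }
    }

    public static void main(String[] args) {
        GetTcpLike p = new GetTcpLike();
        // Calls the inherited forwarding overload, which dispatches to
        // the subclass's raw-taking override.
        p.onSchedule(new Ref<>(new Context()));
        System.out.println(p.scheduledWith);
    }
}
```

Dynamic dispatch does the work: the base class's forwarding call lands on the subclass override, so no overload is left dangling.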




