[jira] [Commented] (NIFI-7873) Conduct 1.13.0 release

2021-02-10 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17282855#comment-17282855
 ] 

ASF subversion and git services commented on NIFI-7873:
---

Commit 3bc6a122091214b33eee17a270163d7ca26e2a0c in nifi's branch 
refs/heads/NIFI-7873-RC4 from Joe Witt
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=3bc6a12 ]

NIFI-7873-RC4 prepare release nifi-1.13.0-RC4


> Conduct 1.13.0 release
> --
>
> Key: NIFI-7873
> URL: https://issues.apache.org/jira/browse/NIFI-7873
> Project: Apache NiFi
>  Issue Type: Task
>Affects Versions: 1.12.1
>Reporter: Andrei Lopukhov
>Assignee: Joe Witt
>Priority: Trivial
> Fix For: 1.13.0
>
>
> Provide exactly the same version of NARs in the distribution and in the Maven repo.
> Currently, NAR artifacts in the NiFi distribution differ from the same artifacts in 
> Maven.
> It looks like NARs are packaged once again when making the distribution assembly.
> Example difference, MANIFEST.MF from nifi-avro-nar-1.12.1.
> From Maven:
> Manifest-Version: 1.0
> Build-Branch: UNKNOWN
> Build-Timestamp: 2020-09-23T10:17:41Z
> Archiver-Version: Plexus Archiver
> Built-By: jwitt
> Nar-Id: nifi-avro-nar
> Clone-During-Instance-Class-Loading: false
> Nar-Version: 1.12.1
> Build-Tag: nifi-1.12.1-RC2
> Build-Revision: accfaa3
> Nar-Group: org.apache.nifi
> Created-By: Apache Maven 3.6.3
> Build-Jdk: 1.8.0_265
> From the archive distribution:
> Manifest-Version: 1.0
> Build-Timestamp: 2020-09-23T14:15:53Z
> Clone-During-Instance-Class-Loading: false
> Archiver-Version: Plexus Archiver
> Built-By: jwitt
> Nar-Version: 1.12.1
> Build-Tag: nifi-1.12.1-RC2
> Nar-Id: nifi-avro-nar
> Nar-Group: org.apache.nifi
> Created-By: Apache Maven 3.6.3
> Build-Jdk: 1.8.0_265
> So it is not possible to validate individual libraries from the distribution 
> against libraries in Maven Central.
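A quick way to see why byte-level validation fails is to digest two manifests that differ only in build metadata, as the ones above do. This is a sketch with short illustrative file contents standing in for the full NARs:

```shell
# Two manifests whose only difference is the build timestamp still
# produce different digests, so the repackaged NAR cannot be verified
# against the Maven Central copy by checksum.
printf 'Build-Timestamp: 2020-09-23T10:17:41Z\n' > /tmp/manifest-maven.txt
printf 'Build-Timestamp: 2020-09-23T14:15:53Z\n' > /tmp/manifest-dist.txt
sha256sum /tmp/manifest-maven.txt /tmp/manifest-dist.txt
```

The same comparison against a real distribution would diff the NAR from `lib/` in the binary archive against the artifact downloaded from Maven Central.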



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (NIFI-7873) Conduct 1.13.0 release

2021-02-10 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17282856#comment-17282856
 ] 

ASF subversion and git services commented on NIFI-7873:
---

Commit 3f0713ac6bb221ff2efb7f888544829f98e6a432 in nifi's branch 
refs/heads/NIFI-7873-RC4 from Joe Witt
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=3f0713a ]

NIFI-7873-RC4 prepare for next development iteration


> Conduct 1.13.0 release
> --
>
> Key: NIFI-7873
> URL: https://issues.apache.org/jira/browse/NIFI-7873
> Project: Apache NiFi
>  Issue Type: Task
>Affects Versions: 1.12.1
>Reporter: Andrei Lopukhov
>Assignee: Joe Witt
>Priority: Trivial
> Fix For: 1.13.0
>
>
> Provide exactly the same version of NARs in the distribution and in the Maven repo.
> Currently, NAR artifacts in the NiFi distribution differ from the same artifacts in 
> Maven.
> It looks like NARs are packaged once again when making the distribution assembly.
> Example difference, MANIFEST.MF from nifi-avro-nar-1.12.1.
> From Maven:
> Manifest-Version: 1.0
> Build-Branch: UNKNOWN
> Build-Timestamp: 2020-09-23T10:17:41Z
> Archiver-Version: Plexus Archiver
> Built-By: jwitt
> Nar-Id: nifi-avro-nar
> Clone-During-Instance-Class-Loading: false
> Nar-Version: 1.12.1
> Build-Tag: nifi-1.12.1-RC2
> Build-Revision: accfaa3
> Nar-Group: org.apache.nifi
> Created-By: Apache Maven 3.6.3
> Build-Jdk: 1.8.0_265
> From the archive distribution:
> Manifest-Version: 1.0
> Build-Timestamp: 2020-09-23T14:15:53Z
> Clone-During-Instance-Class-Loading: false
> Archiver-Version: Plexus Archiver
> Built-By: jwitt
> Nar-Version: 1.12.1
> Build-Tag: nifi-1.12.1-RC2
> Nar-Id: nifi-avro-nar
> Nar-Group: org.apache.nifi
> Created-By: Apache Maven 3.6.3
> Build-Jdk: 1.8.0_265
> So it is not possible to validate individual libraries from the distribution 
> against libraries in Maven Central.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-8223) PutDatabaseRecord should use table column datatype instead of field datatype

2021-02-10 Thread Joe Witt (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Witt updated NIFI-8223:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> PutDatabaseRecord should use table column datatype instead of field datatype
> 
>
> Key: NIFI-8223
> URL: https://issues.apache.org/jira/browse/NIFI-8223
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>Priority: Minor
> Fix For: 1.13.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> When PutDatabaseRecord calls setObject() to insert a field value into a 
> prepared statement, it passes in the SQL type as determined from the NiFi 
> record field's type. Most of the time this matches the table column's data 
> type or else an error would occur when trying to put incompatible values into 
> the column.
> However in the case of the BIGINT and TIMESTAMP types, the field could be 
> inferred to be BIGINT when the column is of type TIMESTAMP. There's no way to 
> know for large integers whether they correspond to a "plain" number or a 
> number of (milli)seconds for example. In this case PutDatabaseRecord throws 
> an error because it tries to put a BIGINT value into a TIMESTAMP field.
> This Jira proposes to improve this by comparing the field and column 
> datatypes. If they match, either can be used. If they don't match, attempt to 
> convert the value to the column datatype and use the column datatype in 
> setObject(). If conversion is unsuccessful, fall back to the current behavior 
> of using the field datatype and value.
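The proposed match-then-convert-then-fall-back logic can be sketched as follows. The class and helper names are illustrative, not the actual PutDatabaseRecord code, and only the BIGINT-to-TIMESTAMP conversion mentioned in the description is shown:

```java
import java.sql.Types;

public class ColumnTypeCoercion {
    // Returns {value, sqlType} to hand to PreparedStatement.setObject():
    // use the column type if the field type matches or conversion succeeds,
    // otherwise fall back to the field's own type and value (current behavior).
    static Object[] chooseTypeAndValue(Object value, int fieldSqlType, int columnSqlType) {
        if (fieldSqlType == columnSqlType) {
            return new Object[]{value, columnSqlType};
        }
        try {
            return new Object[]{convert(value, columnSqlType), columnSqlType};
        } catch (IllegalArgumentException e) {
            return new Object[]{value, fieldSqlType};
        }
    }

    // Illustrative conversion: treat a BIGINT as epoch milliseconds when the
    // target column is TIMESTAMP; anything else is "unconvertible" here.
    static Object convert(Object value, int targetSqlType) {
        if (targetSqlType == Types.TIMESTAMP && value instanceof Long) {
            return new java.sql.Timestamp((Long) value);
        }
        throw new IllegalArgumentException("No conversion to SQL type " + targetSqlType);
    }
}
```

With this shape, a large integer destined for a TIMESTAMP column is bound as a Timestamp, while an unconvertible value keeps today's behavior instead of failing outright.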



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (NIFI-8223) PutDatabaseRecord should use table column datatype instead of field datatype

2021-02-10 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17282812#comment-17282812
 ] 

ASF subversion and git services commented on NIFI-8223:
---

Commit d08f02428d6313b4acbe4b1b43b238a74addec0c in nifi's branch 
refs/heads/main from Matt Burgess
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=d08f024 ]

NIFI-8223: This closes #4819. Use column datatype in PutDatabaseRecord when 
calling setObject()

Signed-off-by: Joe Witt 


> PutDatabaseRecord should use table column datatype instead of field datatype
> 
>
> Key: NIFI-8223
> URL: https://issues.apache.org/jira/browse/NIFI-8223
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>Priority: Minor
> Fix For: 1.13.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When PutDatabaseRecord calls setObject() to insert a field value into a 
> prepared statement, it passes in the SQL type as determined from the NiFi 
> record field's type. Most of the time this matches the table column's data 
> type or else an error would occur when trying to put incompatible values into 
> the column.
> However in the case of the BIGINT and TIMESTAMP types, the field could be 
> inferred to be BIGINT when the column is of type TIMESTAMP. There's no way to 
> know for large integers whether they correspond to a "plain" number or a 
> number of (milli)seconds for example. In this case PutDatabaseRecord throws 
> an error because it tries to put a BIGINT value into a TIMESTAMP field.
> This Jira proposes to improve this by comparing the field and column 
> datatypes. If they match, either can be used. If they don't match, attempt to 
> convert the value to the column datatype and use the column datatype in 
> setObject(). If conversion is unsuccessful, fall back to the current behavior 
> of using the field datatype and value.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi] asfgit closed pull request #4819: NIFI-8223: Use column datatype in PutDatabaseRecord when calling setObject()

2021-02-10 Thread GitBox


asfgit closed pull request #4819:
URL: https://github.com/apache/nifi/pull/4819


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (NIFI-8224) Add LoggingRecordSink controller service

2021-02-10 Thread Matt Burgess (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess updated NIFI-8224:
---
Status: Patch Available  (was: In Progress)

> Add LoggingRecordSink controller service
> 
>
> Key: NIFI-8224
> URL: https://issues.apache.org/jira/browse/NIFI-8224
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> There are many implementations of RecordSinkService, such as 
> DatabaseRecordSink and KafkaRecordSink, usually meant to send records to an 
> external system. For the case where the user instead wants to log the records 
> to the NiFi application log (e.g., nifi-app.log), this Jira adds a 
> LoggingRecordSink controller service.
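The idea can be sketched with a simplified stand-in for the sink concept. This is not the real NiFi RecordSinkService interface; the class and method names are illustrative:

```java
import java.util.List;
import java.util.function.Consumer;

// Simplified stand-in for a record sink: instead of sending records to an
// external system, hand each serialized record to a logger callback
// (in NiFi this would be something like LOG::info).
class LoggingRecordSink {
    private final Consumer<String> logger;

    LoggingRecordSink(Consumer<String> logger) {
        this.logger = logger;
    }

    // "Sends" the record set by logging each record; returns the count
    // logged, loosely mirroring the write result a real sink would report.
    int sendData(List<String> serializedRecords) {
        serializedRecords.forEach(logger);
        return serializedRecords.size();
    }
}
```

In the actual service, the configured RecordSetWriter would produce the serialized form of each record before it is handed to the component logger.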



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi] mattyb149 opened a new pull request #4820: NIFI-8224: Add LoggingRecordSink controller service

2021-02-10 Thread GitBox


mattyb149 opened a new pull request #4820:
URL: https://github.com/apache/nifi/pull/4820


   Thank you for submitting a contribution to Apache NiFi.
   
   Please provide a short description of the PR here:
   
    #### Description of PR
   
   Adds a RecordSinkService that just logs the records (using the specified 
writer) to the NiFi application log.
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [x] Is there a JIRA ticket associated with this PR? Is it referenced 
in the commit message?
   
   - [x] Does your PR title start with **NIFI-XXXX** where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [x] Has your PR been rebased against the latest commit within the target 
branch (typically `main`)?
   
   - [x] Is your initial contribution a single, squashed commit? _Additional 
commits in response to PR reviewer feedback should be made on this branch and 
pushed to allow change tracking. Do not `squash` or use `--force` when pushing 
to allow for clean monitoring of changes._
   
   ### For code changes:
   - [ ] Have you ensured that the full suite of tests is executed via `mvn 
-Pcontrib-check clean install` at the root `nifi` folder?
   - [x] Have you written or updated unit tests to verify your changes?
   - [x] Have you verified that the full build is successful on JDK 8?
   - [ ] Have you verified that the full build is successful on JDK 11?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
   - [ ] If applicable, have you updated the `LICENSE` file, including the main 
`LICENSE` file under `nifi-assembly`?
   - [ ] If applicable, have you updated the `NOTICE` file, including the main 
`NOTICE` file found under `nifi-assembly`?
   - [x] If adding new Properties, have you added `.displayName` in addition to 
.name (programmatic access) for each of the new properties?
   
   ### For documentation related changes:
   - [x] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check GitHub Actions CI for 
build issues and submit an update to your PR as soon as possible.
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Assigned] (NIFI-8224) Add LoggingRecordSink controller service

2021-02-10 Thread Matt Burgess (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess reassigned NIFI-8224:
--

Assignee: Matt Burgess

> Add LoggingRecordSink controller service
> 
>
> Key: NIFI-8224
> URL: https://issues.apache.org/jira/browse/NIFI-8224
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>Priority: Major
>
> There are many implementations of RecordSinkService, such as 
> DatabaseRecordSink and KafkaRecordSink, usually meant to send records to an 
> external system. For the case where the user instead wants to log the records 
> to the NiFi application log (e.g., nifi-app.log), this Jira adds a 
> LoggingRecordSink controller service.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (NIFI-8224) Add LoggingRecordSink controller service

2021-02-10 Thread Matt Burgess (Jira)
Matt Burgess created NIFI-8224:
--

 Summary: Add LoggingRecordSink controller service
 Key: NIFI-8224
 URL: https://issues.apache.org/jira/browse/NIFI-8224
 Project: Apache NiFi
  Issue Type: New Feature
  Components: Extensions
Reporter: Matt Burgess


There are many implementations of RecordSinkService, such as DatabaseRecordSink 
and KafkaRecordSink, usually meant to send records to an external system. For 
the case where the user instead wants to log the records to the NiFi 
application log (e.g., nifi-app.log), this Jira adds a LoggingRecordSink 
controller service.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-8218) SAML message intended destination endpoint {} did not match recipient {}

2021-02-10 Thread Joe Witt (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Witt updated NIFI-8218:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> SAML message intended destination endpoint {} did not match recipient {}
> -
>
> Key: NIFI-8218
> URL: https://issues.apache.org/jira/browse/NIFI-8218
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Bryan Bende
>Assignee: Bryan Bende
>Priority: Major
> Fix For: 1.13.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> When behind a proxy, NiFi will respect the X-ProxyHost header and use that 
> value to construct the URLs in the SAML request, so that the SAML response 
> will be sent back through the proxy.
> When processing the SAML response, there is OpenSAML code that compares the 
> "Destination" value in the SAML response which will have the proxy host, 
> against the host on the HttpServletRequest which comes from the Host header.
> So if the Host header is different from X-ProxyHost, which it could be, then 
> this comparison fails.
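The mismatch described above comes down to which header supplies the host when the request URL is reconstructed for the OpenSAML endpoint check. A minimal sketch, with illustrative names rather than the actual NiFi/OpenSAML code:

```java
import java.util.Map;

class SamlDestinationCheck {
    // Externally visible host: prefer X-ProxyHost when NiFi sits behind a
    // proxy, otherwise fall back to the Host header.
    static String effectiveHost(Map<String, String> headers) {
        String proxyHost = headers.get("X-ProxyHost");
        return (proxyHost != null && !proxyHost.isEmpty()) ? proxyHost : headers.get("Host");
    }

    // Endpoint comparison: the Destination in the SAML response (built from
    // X-ProxyHost on the way out) must match the URL rebuilt from the request.
    static boolean destinationMatches(String samlDestination, String scheme,
                                      Map<String, String> headers, String path) {
        return samlDestination.equals(scheme + "://" + effectiveHost(headers) + path);
    }
}
```

If the comparison rebuilds the URL from the Host header alone, a Destination of `https://proxy.example.com/...` fails whenever the proxy forwards a different Host value; honoring X-ProxyHost on both sides makes the check consistent.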



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-8223) PutDatabaseRecord should use table column datatype instead of field datatype

2021-02-10 Thread Joe Witt (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Witt updated NIFI-8223:
---
Fix Version/s: 1.13.0

> PutDatabaseRecord should use table column datatype instead of field datatype
> 
>
> Key: NIFI-8223
> URL: https://issues.apache.org/jira/browse/NIFI-8223
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>Priority: Minor
> Fix For: 1.13.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When PutDatabaseRecord calls setObject() to insert a field value into a 
> prepared statement, it passes in the SQL type as determined from the NiFi 
> record field's type. Most of the time this matches the table column's data 
> type or else an error would occur when trying to put incompatible values into 
> the column.
> However in the case of the BIGINT and TIMESTAMP types, the field could be 
> inferred to be BIGINT when the column is of type TIMESTAMP. There's no way to 
> know for large integers whether they correspond to a "plain" number or a 
> number of (milli)seconds for example. In this case PutDatabaseRecord throws 
> an error because it tries to put a BIGINT value into a TIMESTAMP field.
> This Jira proposes to improve this by comparing the field and column 
> datatypes. If they match, either can be used. If they don't match, attempt to 
> convert the value to the column datatype and use the column datatype in 
> setObject(). If conversion is unsuccessful, fall back to the current behavior 
> of using the field datatype and value.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (NIFI-8223) PutDatabaseRecord should use table column datatype instead of field datatype

2021-02-10 Thread Joe Witt (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17282761#comment-17282761
 ] 

Joe Witt commented on NIFI-8223:


i see this mattyb.  Will keep an eye on CI build.   Code looks good.  Will pull 
into 1.13 if no issues.  Starting RC on heels of this if so

> PutDatabaseRecord should use table column datatype instead of field datatype
> 
>
> Key: NIFI-8223
> URL: https://issues.apache.org/jira/browse/NIFI-8223
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>Priority: Minor
> Fix For: 1.13.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When PutDatabaseRecord calls setObject() to insert a field value into a 
> prepared statement, it passes in the SQL type as determined from the NiFi 
> record field's type. Most of the time this matches the table column's data 
> type or else an error would occur when trying to put incompatible values into 
> the column.
> However in the case of the BIGINT and TIMESTAMP types, the field could be 
> inferred to be BIGINT when the column is of type TIMESTAMP. There's no way to 
> know for large integers whether they correspond to a "plain" number or a 
> number of (milli)seconds for example. In this case PutDatabaseRecord throws 
> an error because it tries to put a BIGINT value into a TIMESTAMP field.
> This Jira proposes to improve this by comparing the field and column 
> datatypes. If they match, either can be used. If they don't match, attempt to 
> convert the value to the column datatype and use the column datatype in 
> setObject(). If conversion is unsuccessful, fall back to the current behavior 
> of using the field datatype and value.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-8223) PutDatabaseRecord should use table column datatype instead of field datatype

2021-02-10 Thread Matt Burgess (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess updated NIFI-8223:
---
Status: Patch Available  (was: In Progress)

> PutDatabaseRecord should use table column datatype instead of field datatype
> 
>
> Key: NIFI-8223
> URL: https://issues.apache.org/jira/browse/NIFI-8223
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>Priority: Minor
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When PutDatabaseRecord calls setObject() to insert a field value into a 
> prepared statement, it passes in the SQL type as determined from the NiFi 
> record field's type. Most of the time this matches the table column's data 
> type or else an error would occur when trying to put incompatible values into 
> the column.
> However in the case of the BIGINT and TIMESTAMP types, the field could be 
> inferred to be BIGINT when the column is of type TIMESTAMP. There's no way to 
> know for large integers whether they correspond to a "plain" number or a 
> number of (milli)seconds for example. In this case PutDatabaseRecord throws 
> an error because it tries to put a BIGINT value into a TIMESTAMP field.
> This Jira proposes to improve this by comparing the field and column 
> datatypes. If they match, either can be used. If they don't match, attempt to 
> convert the value to the column datatype and use the column datatype in 
> setObject(). If conversion is unsuccessful, fall back to the current behavior 
> of using the field datatype and value.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi] mattyb149 opened a new pull request #4819: NIFI-8223: Use column datatype in PutDatabaseRecord when calling setObject()

2021-02-10 Thread GitBox


mattyb149 opened a new pull request #4819:
URL: https://github.com/apache/nifi/pull/4819


   Thank you for submitting a contribution to Apache NiFi.
   
   Please provide a short description of the PR here:
   
    #### Description of PR
   
   Instead of always using the field datatype when calling setObject() for the 
prepared statement, use the column datatype and convert the value as necessary.
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [x] Is there a JIRA ticket associated with this PR? Is it referenced 
in the commit message?
   
   - [x] Does your PR title start with **NIFI-XXXX** where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [x] Has your PR been rebased against the latest commit within the target 
branch (typically `main`)?
   
   - [x] Is your initial contribution a single, squashed commit? _Additional 
commits in response to PR reviewer feedback should be made on this branch and 
pushed to allow change tracking. Do not `squash` or use `--force` when pushing 
to allow for clean monitoring of changes._
   
   ### For code changes:
   - [ ] Have you ensured that the full suite of tests is executed via `mvn 
-Pcontrib-check clean install` at the root `nifi` folder?
   - [x] Have you written or updated unit tests to verify your changes?
   - [x] Have you verified that the full build is successful on JDK 8?
   - [ ] Have you verified that the full build is successful on JDK 11?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
   - [ ] If applicable, have you updated the `LICENSE` file, including the main 
`LICENSE` file under `nifi-assembly`?
   - [ ] If applicable, have you updated the `NOTICE` file, including the main 
`NOTICE` file found under `nifi-assembly`?
   - [ ] If adding new Properties, have you added `.displayName` in addition to 
.name (programmatic access) for each of the new properties?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check GitHub Actions CI for 
build issues and submit an update to your PR as soon as possible.
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Assigned] (NIFI-8223) PutDatabaseRecord should use table column datatype instead of field datatype

2021-02-10 Thread Matt Burgess (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess reassigned NIFI-8223:
--

Assignee: Matt Burgess

> PutDatabaseRecord should use table column datatype instead of field datatype
> 
>
> Key: NIFI-8223
> URL: https://issues.apache.org/jira/browse/NIFI-8223
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>Priority: Minor
>
> When PutDatabaseRecord calls setObject() to insert a field value into a 
> prepared statement, it passes in the SQL type as determined from the NiFi 
> record field's type. Most of the time this matches the table column's data 
> type or else an error would occur when trying to put incompatible values into 
> the column.
> However in the case of the BIGINT and TIMESTAMP types, the field could be 
> inferred to be BIGINT when the column is of type TIMESTAMP. There's no way to 
> know for large integers whether they correspond to a "plain" number or a 
> number of (milli)seconds for example. In this case PutDatabaseRecord throws 
> an error because it tries to put a BIGINT value into a TIMESTAMP field.
> This Jira proposes to improve this by comparing the field and column 
> datatypes. If they match, either can be used. If they don't match, attempt to 
> convert the value to the column datatype and use the column datatype in 
> setObject(). If conversion is unsuccessful, fall back to the current behavior 
> of using the field datatype and value.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (NIFI-8223) PutDatabaseRecord should use table column datatype instead of field datatype

2021-02-10 Thread Matt Burgess (Jira)
Matt Burgess created NIFI-8223:
--

 Summary: PutDatabaseRecord should use table column datatype 
instead of field datatype
 Key: NIFI-8223
 URL: https://issues.apache.org/jira/browse/NIFI-8223
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Extensions
Reporter: Matt Burgess


When PutDatabaseRecord calls setObject() to insert a field value into a 
prepared statement, it passes in the SQL type as determined from the NiFi 
record field's type. Most of the time this matches the table column's data type 
or else an error would occur when trying to put incompatible values into the 
column.

However in the case of the BIGINT and TIMESTAMP types, the field could be 
inferred to be BIGINT when the column is of type TIMESTAMP. There's no way to 
know for large integers whether they correspond to a "plain" number or a number 
of (milli)seconds for example. In this case PutDatabaseRecord throws an error 
because it tries to put a BIGINT value into a TIMESTAMP field.

This Jira proposes to improve this by comparing the field and column datatypes. 
If they match, either can be used. If they don't match, attempt to convert the 
value to the column datatype and use the column datatype in setObject(). If 
conversion is unsuccessful, fall back to the current behavior of using the 
field datatype and value.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi] exceptionfactory commented on a change in pull request #4788: NIFI-8132 Replaced framework uses of MD5 with SHA-256

2021-02-10 Thread GitBox


exceptionfactory commented on a change in pull request #4788:
URL: https://github.com/apache/nifi/pull/4788#discussion_r574092517



##
File path: 
nifi-commons/nifi-utils/src/main/java/org/apache/nifi/util/file/classloader/ClassLoaderUtils.java
##
@@ -143,19 +143,13 @@ public static String generateAdditionalUrlsFingerprint(Set<URL> urls) {
 
         //Sorting so that the order is maintained for generating the fingerprint
         Collections.sort(listOfUrls);
-        try {
-            MessageDigest md = MessageDigest.getInstance("MD5");
-            listOfUrls.forEach(url -> {
-                urlBuffer.append(url).append("-").append(getLastModified(url)).append(";");
-            });
-            byte[] bytesOfAdditionalUrls = urlBuffer.toString().getBytes(StandardCharsets.UTF_8);
-            byte[] bytesOfDigest = md.digest(bytesOfAdditionalUrls);
-
-            return DatatypeConverter.printHexBinary(bytesOfDigest);
-        } catch (NoSuchAlgorithmException e) {
-            LOGGER.error("Unable to generate fingerprint for the provided additional resources {}", new Object[]{urls, e});
-            return null;
-        }
+        listOfUrls.forEach(url -> {

Review comment:
   The `StringBuffer` declaration was not part of this initial change, but 
there doesn't appear to be any need for it, so will replace with 
`StringBuilder` and use an expression lambda.
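After the review comments are applied (StringBuilder, SHA-256 via a shared utility), the fingerprint logic would look roughly like this sketch. Names are illustrative and the per-URL last-modified lookup from the real code is omitted:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.List;

class FingerprintSketch {
    // Build a hex fingerprint over pre-sorted URL strings using SHA-256
    // instead of MD5; sorting upstream keeps the fingerprint stable.
    static String fingerprint(List<String> sortedUrls) {
        StringBuilder buffer = new StringBuilder();
        sortedUrls.forEach(url -> buffer.append(url).append(';'));
        try {
            byte[] digest = MessageDigest.getInstance("SHA-256")
                    .digest(buffer.toString().getBytes(StandardCharsets.UTF_8));
            StringBuilder hex = new StringBuilder();
            for (byte b : digest) {
                hex.append(String.format("%02X", b));
            }
            return hex.toString();
        } catch (NoSuchAlgorithmException e) {
            // SHA-256 is mandatory for every JVM, so reaching here indicates
            // a broken runtime rather than a recoverable condition.
            throw new IllegalStateException("SHA-256 not available", e);
        }
    }
}
```

Since every compliant JVM must provide SHA-256, throwing on `NoSuchAlgorithmException` replaces the old pattern of logging and returning null.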

##
File path: 
nifi-commons/nifi-utils/src/main/java/org/apache/nifi/util/security/MessageDigestUtils.java
##
@@ -0,0 +1,76 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.util.security;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.security.MessageDigest;
+import java.security.NoSuchAlgorithmException;
+
+/**
+ * Message Digest Utilities for standardized algorithm use within the framework
+ */
+public class MessageDigestUtils {

Review comment:
   That seems reasonable, will make the changes.

##
File path: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-nar-utils/src/main/java/org/apache/nifi/nar/FileDigestUtils.java
##
@@ -0,0 +1,68 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.nar;
+
+import java.io.File;
+import java.io.FileInputStream;
+import java.io.IOException;
+import java.io.InputStream;
+import java.security.MessageDigest;
+import java.security.NoSuchAlgorithmException;
+
+/**
+ * File Digest Utilities for standardized algorithm use within NAR Unpacker
+ */
+public class FileDigestUtils {

Review comment:
   Will make the changes.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] markap14 opened a new pull request #4818: NIFI-7646, NIFI-8222: WIP for performance improvements to make it readily available for testing

2021-02-10 Thread GitBox


markap14 opened a new pull request #4818:
URL: https://github.com/apache/nifi/pull/4818


   Note that this is NOT READY to be merged. Still more cleanup and testing 
must be done.
   
   
   Thank you for submitting a contribution to Apache NiFi.
   
   Please provide a short description of the PR here:
   
    #### Description of PR
   
   _Enables X functionality; fixes bug NIFI-XXXX._
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
in the commit message?
   
   - [ ] Does your PR title start with **NIFI-** where  is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [ ] Has your PR been rebased against the latest commit within the target 
branch (typically `main`)?
   
   - [ ] Is your initial contribution a single, squashed commit? _Additional 
commits in response to PR reviewer feedback should be made on this branch and 
pushed to allow change tracking. Do not `squash` or use `--force` when pushing 
to allow for clean monitoring of changes._
   
   ### For code changes:
   - [ ] Have you ensured that the full suite of tests is executed via `mvn 
-Pcontrib-check clean install` at the root `nifi` folder?
   - [ ] Have you written or updated unit tests to verify your changes?
   - [ ] Have you verified that the full build is successful on JDK 8?
   - [ ] Have you verified that the full build is successful on JDK 11?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
   - [ ] If applicable, have you updated the `LICENSE` file, including the main 
`LICENSE` file under `nifi-assembly`?
   - [ ] If applicable, have you updated the `NOTICE` file, including the main 
`NOTICE` file found under `nifi-assembly`?
   - [ ] If adding new Properties, have you added `.displayName` in addition to 
.name (programmatic access) for each of the new properties?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check GitHub Actions CI for 
build issues and submit an update to your PR as soon as possible.
   







[jira] [Created] (NIFI-8222) When processing a lot of small FlowFiles, Provenance Repo spends most of its time in lock contention. That can be improved.

2021-02-10 Thread Mark Payne (Jira)
Mark Payne created NIFI-8222:


 Summary: When processing a lot of small FlowFiles, Provenance Repo 
spends most of its time in lock contention. That can be improved.
 Key: NIFI-8222
 URL: https://issues.apache.org/jira/browse/NIFI-8222
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Core Framework
Reporter: Mark Payne
Assignee: Mark Payne
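Contention on a single shared lock, as described in the summary, is commonly reduced by striping the lock so independent events rarely collide. The following is only an illustration of that general technique, not the actual provenance repository change:

```java
import java.util.concurrent.locks.ReentrantLock;

// Illustrative striped-lock sketch: each event id hashes to one of N
// independent locks, so unrelated events no longer serialize on a single
// monitor. This is NOT NiFi's actual provenance repository code.
public class StripedLockSketch {

    private final ReentrantLock[] locks;

    public StripedLockSketch(final int stripes) {
        locks = new ReentrantLock[stripes];
        for (int i = 0; i < stripes; i++) {
            locks[i] = new ReentrantLock();
        }
    }

    // Runs the action while holding only the stripe that this event maps to.
    public void withLock(final long eventId, final Runnable action) {
        final ReentrantLock lock = locks[(int) Math.floorMod(eventId, (long) locks.length)];
        lock.lock();
        try {
            action.run();
        } finally {
            lock.unlock();
        }
    }
}
```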






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (NIFI-8221) Set default interface for HTTP to localhost

2021-02-10 Thread David Handermann (Jira)


[jira] [Commented] (NIFI-8221) Set default interface for HTTP to localhost

2021-02-10 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17282704#comment-17282704
 ] 

ASF subversion and git services commented on NIFI-8221:
---

Commit 8057f8f6c50928fbba1992743d0b2dfee49503c2 in nifi's branch 
refs/heads/main from Nathan Gough
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=8057f8f ]

NIFI-8221 - Set the default HTTP listening interface to 127.0.0.1.

This closes #4817

Signed-off-by: David Handermann 


> Set default interface for HTTP to localhost
> ---
>
> Key: NIFI-8221
> URL: https://issues.apache.org/jira/browse/NIFI-8221
> Project: Apache NiFi
>  Issue Type: Sub-task
>Affects Versions: 1.12.1
>Reporter: Nathan Gough
>Assignee: Nathan Gough
>Priority: Critical
> Fix For: 1.13.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> When NiFi starts without any secure configuration, it should by default 
> listen only on localhost (127.0.0.1). Add documentation in the 
> nifi.properties file, migration guide and admin guide about enabling internet 
> accessible interfaces without HTTPS/authentication enabled.
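The behavior described above can be sketched as follows: an unset HTTP host falls back to the loopback interface instead of all interfaces. The property names mirror nifi.properties, but the resolution logic and defaults here are illustrative assumptions rather than NiFi's actual startup code:

```java
import java.net.InetSocketAddress;
import java.util.Properties;

// Sketch of the NIFI-8221 idea: a blank nifi.web.http.host historically
// meant "bind to all interfaces"; the secure default becomes loopback.
// Illustrative only -- not the actual NiFi Jetty server code.
public class DefaultInterfaceSketch {

    public static InetSocketAddress resolveHttpAddress(final Properties props) {
        final String host = props.getProperty("nifi.web.http.host", "").trim();
        final int port = Integer.parseInt(props.getProperty("nifi.web.http.port", "8080"));
        // Prefer an explicitly configured host; otherwise default to 127.0.0.1
        return new InetSocketAddress(host.isEmpty() ? "127.0.0.1" : host, port);
    }
}
```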



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi] asfgit closed pull request #4817: NIFI-8221 - Set the default HTTP listening interface to 127.0.0.1.

2021-02-10 Thread GitBox


asfgit closed pull request #4817:
URL: https://github.com/apache/nifi/pull/4817


   







[GitHub] [nifi] exceptionfactory commented on pull request #4817: NIFI-8221 - Set the default HTTP listening interface to 127.0.0.1.

2021-02-10 Thread GitBox


exceptionfactory commented on pull request #4817:
URL: https://github.com/apache/nifi/pull/4817#issuecomment-777022086


   Thanks for the contribution @thenatog!  Confirmed that NiFi now listens on 
127.0.0.1 using the default properties. +1 Merging.







[jira] [Updated] (NIFI-8221) Set default interface for HTTP to localhost

2021-02-10 Thread Joe Witt (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Witt updated NIFI-8221:
---
Priority: Critical  (was: Major)

> Set default interface for HTTP to localhost
> ---
>
> Key: NIFI-8221
> URL: https://issues.apache.org/jira/browse/NIFI-8221
> Project: Apache NiFi
>  Issue Type: Sub-task
>Affects Versions: 1.12.1
>Reporter: Nathan Gough
>Assignee: Nathan Gough
>Priority: Critical
> Fix For: 1.13.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When NiFi starts without any secure configuration, it should by default 
> listen only on localhost (127.0.0.1). Add documentation in the 
> nifi.properties file, migration guide and admin guide about enabling internet 
> accessible interfaces without HTTPS/authentication enabled.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (NIFI-8218) SAML message intended destination endpoint {} did not match receipient {}

2021-02-10 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17282700#comment-17282700
 ] 

ASF subversion and git services commented on NIFI-8218:
---

Commit 1d82fb8e01f3e8d3b25fcd773eaa7add03aad363 in nifi's branch 
refs/heads/main from Bryan Bende
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=1d82fb8 ]

NIFI-8218 This closes #4816. Use proxy headers when available when getting 
request values while processing SAML responses

Signed-off-by: Joe Witt 


> SAML message intended destination endpoint {} did not match receipient {}
> -
>
> Key: NIFI-8218
> URL: https://issues.apache.org/jira/browse/NIFI-8218
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Bryan Bende
>Assignee: Bryan Bende
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When behind a proxy, NiFi will respect the X-ProxyHost header and use that 
> value to construct the URLs in the SAML request, so that the SAML response 
> will be sent back through the proxy.
> When processing the SAML response, there is OpenSAML code that compares the 
> "Destination" value in the SAML response which will have the proxy host, 
> against the host on the HttpServletRequest which comes from the Host header.
> So if the Host header is different from X-ProxyHost, which it could be, then 
> this comparison fails.
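The fix direction described above, preferring X-ProxyHost over Host when both are present, can be sketched like this; the method name and trimming behavior are assumptions for illustration, not NiFi's actual SAML processing code:

```java
// Illustrative only: resolve the externally visible host for a request,
// preferring the proxy-supplied header when it is present and non-blank.
public class ProxyHostSketch {

    // proxyHostHeader: value of X-ProxyHost (may be null when no proxy is involved)
    // hostHeader: value of the standard Host header
    public static String effectiveHost(final String proxyHostHeader, final String hostHeader) {
        if (proxyHostHeader != null && !proxyHostHeader.trim().isEmpty()) {
            return proxyHostHeader.trim();
        }
        return hostHeader;
    }
}
```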



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-8218) SAML message intended destination endpoint {} did not match receipient {}

2021-02-10 Thread Joe Witt (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Witt updated NIFI-8218:
---
Fix Version/s: 1.13.0

> SAML message intended destination endpoint {} did not match receipient {}
> -
>
> Key: NIFI-8218
> URL: https://issues.apache.org/jira/browse/NIFI-8218
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Bryan Bende
>Assignee: Bryan Bende
>Priority: Major
> Fix For: 1.13.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When behind a proxy, NiFi will respect the X-ProxyHost header and use that 
> value to construct the URLs in the SAML request, so that the SAML response 
> will be sent back through the proxy.
> When processing the SAML response, there is OpenSAML code that compares the 
> "Destination" value in the SAML response which will have the proxy host, 
> against the host on the HttpServletRequest which comes from the Host header.
> So if the Host header is different from X-ProxyHost, which it could be, then 
> this comparison fails.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi] asfgit closed pull request #4816: NIFI-8218 Use proxy headers when available when getting request value…

2021-02-10 Thread GitBox


asfgit closed pull request #4816:
URL: https://github.com/apache/nifi/pull/4816


   







[GitHub] [nifi] Lehel44 commented on a change in pull request #4754: NIFI-7417: GetAzureCosmosDBRecord processor

2021-02-10 Thread GitBox


Lehel44 commented on a change in pull request #4754:
URL: https://github.com/apache/nifi/pull/4754#discussion_r574038766



##
File path: 
nifi-nar-bundles/nifi-azure-bundle/nifi-azure-processors/src/main/java/org/apache/nifi/processors/azure/cosmos/document/GetAzureCosmosDBRecord.java
##
@@ -0,0 +1,266 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.nifi.processors.azure.cosmos.document;
+
+import java.io.IOException;
+import java.io.OutputStream;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.atomic.AtomicLong;
+
+import com.azure.cosmos.CosmosContainer;
+import com.azure.cosmos.models.CosmosQueryRequestOptions;
+import com.azure.cosmos.util.CosmosPagedIterable;
+import com.fasterxml.jackson.core.type.TypeReference;
+import com.fasterxml.jackson.databind.JsonNode;
+import com.fasterxml.jackson.databind.ObjectMapper;
+
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.InputRequirement.Requirement;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.ValidationContext;
+import org.apache.nifi.components.ValidationResult;
+import org.apache.nifi.expression.ExpressionLanguageScope;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.flowfile.attributes.CoreAttributes;
+import org.apache.nifi.logging.ComponentLog;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.schema.access.SchemaNotFoundException;
+import org.apache.nifi.serialization.RecordSetWriter;
+import org.apache.nifi.serialization.RecordSetWriterFactory;
+import org.apache.nifi.serialization.record.MapRecord;
+import org.apache.nifi.serialization.record.Record;
+import org.apache.nifi.serialization.record.RecordSchema;
+
+
+@Tags({ "azure", "cosmos", "record", "read", "fetch" })
+@InputRequirement(Requirement.INPUT_ALLOWED)
+@CapabilityDescription("A record-oriented GET processor that uses the record 
writers to write the Azure Cosmos SQL select query result set.")
+public class GetAzureCosmosDBRecord extends AbstractAzureCosmosDBProcessor {
+public static final PropertyDescriptor WRITER_FACTORY = new 
PropertyDescriptor.Builder()
+.name("record-writer-factory")
+.displayName("Record Writer")
+.description("The record writer to use to write the result sets")
+.identifiesControllerService(RecordSetWriterFactory.class)
+.required(true)
+.build();
+public static final PropertyDescriptor SCHEMA_NAME = new 
PropertyDescriptor.Builder()
+.name("schema-name")
+.displayName("Schema Name")
+.description("The name of the schema in the configured schema registry 
to use for the query results")
+
.expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)
+.addValidator(StandardValidators.NON_BLANK_VALIDATOR)
+.defaultValue("${schema.name}")
+.required(true)
+.build();
+
+public static final PropertyDescriptor QUERY = new 
PropertyDescriptor.Builder()
+.name("sql-core-document-query")
+.displayName("SQL Core Document Query")
+.description("The SQL select query to execute. "
++ "This should be a valid SQL select query to Cosmos DB with 
core sql api")
+.required(true)
+.addValidator(StandardValidators.NON_BLANK_VALIDATOR)
+
.expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)
+.build();
+
+public static final PropertyDescriptor MAX_RESPONSE_PAGE_SIZE = new 
PropertyDescriptor.Builder()
+.name("max-page-size")
+.displayName("Max Page Size")
+

[GitHub] [nifi] thenatog opened a new pull request #4817: NIFI-8221 - Set the default HTTP listening interface to 127.0.0.1.

2021-02-10 Thread GitBox


thenatog opened a new pull request #4817:
URL: https://github.com/apache/nifi/pull/4817


   Thank you for submitting a contribution to Apache NiFi.
   
   Please provide a short description of the PR here:
   
    Description of PR
   
   _Enables X functionality; fixes bug NIFI-._
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
in the commit message?
   
   - [ ] Does your PR title start with **NIFI-** where  is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [x] Has your PR been rebased against the latest commit within the target 
branch (typically `main`)?
   
   - [x] Is your initial contribution a single, squashed commit? _Additional 
commits in response to PR reviewer feedback should be made on this branch and 
pushed to allow change tracking. Do not `squash` or use `--force` when pushing 
to allow for clean monitoring of changes._
   
   ### For code changes:
   - [ ] Have you ensured that the full suite of tests is executed via `mvn 
-Pcontrib-check clean install` at the root `nifi` folder?
   - [ ] Have you written or updated unit tests to verify your changes?
   - [ ] Have you verified that the full build is successful on JDK 8?
   - [ ] Have you verified that the full build is successful on JDK 11?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
   - [ ] If applicable, have you updated the `LICENSE` file, including the main 
`LICENSE` file under `nifi-assembly`?
   - [ ] If applicable, have you updated the `NOTICE` file, including the main 
`NOTICE` file found under `nifi-assembly`?
   - [ ] If adding new Properties, have you added `.displayName` in addition to 
.name (programmatic access) for each of the new properties?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check GitHub Actions CI for 
build issues and submit an update to your PR as soon as possible.
   







[GitHub] [nifi] jfrazee commented on a change in pull request #4754: NIFI-7417: GetAzureCosmosDBRecord processor

2021-02-10 Thread GitBox


jfrazee commented on a change in pull request #4754:
URL: https://github.com/apache/nifi/pull/4754#discussion_r573998184



##
File path: 
nifi-nar-bundles/nifi-azure-bundle/nifi-azure-processors/src/main/java/org/apache/nifi/processors/azure/cosmos/document/GetAzureCosmosDBRecord.java
##

[jira] [Commented] (NIFI-8220) Establish a secure by default configuration for NiFi

2021-02-10 Thread David Handermann (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17282629#comment-17282629
 ] 

David Handermann commented on NIFI-8220:


Another aspect of this effort should include requiring an explicit value for 
the `nifi.sensitive.props.key` property.  The current implementation allows for 
the property to be blank, but prints a large error message in the log 
indicating that an internal default value will be used.  Existing flows could 
be supported through migration guidance to set a value and update the flow 
configuration.
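The validation being proposed, failing fast on a blank nifi.sensitive.props.key instead of silently falling back to an internal default, might look roughly like this (the method name and exception type are illustrative assumptions):

```java
// Hypothetical sketch of explicit sensitive-props-key validation; NiFi's
// eventual implementation may report the failure differently.
public class SensitivePropsKeySketch {

    // Rejects a missing or blank key at startup rather than logging a
    // warning and substituting an internal default value.
    public static String requireSensitivePropsKey(final String configuredValue) {
        if (configuredValue == null || configuredValue.trim().isEmpty()) {
            throw new IllegalStateException("nifi.sensitive.props.key must be set explicitly");
        }
        return configuredValue.trim();
    }
}
```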

> Establish a secure by default configuration for NiFi
> 
>
> Key: NIFI-8220
> URL: https://issues.apache.org/jira/browse/NIFI-8220
> Project: Apache NiFi
>  Issue Type: Epic
>  Components: Tools and Build
>Reporter: Joe Witt
>Assignee: Joe Witt
>Priority: Blocker
> Fix For: 1.14.0
>
>
> Inspired by this tweet 
> https://twitter.com/_escctrl_/status/1359280656174510081?s=21 and the 
> resulting discussion here 
> https://lists.apache.org/thread.html/rc590f21807192a0dce18293c2d5b47392a6fd8a1ef26d77fbd6ee695%40%3Cdev.nifi.apache.org%3E
> It is time to change our config model.  It was also set up to be easy to use.
> We've seen these silly setups on the Internet before, but it has gotten
> ridiculous.  We need to take action.
> Will create a set of one or more JIRAs to roughly do the following.
> 1. Disable HTTP by default.  If a user wants to enable it for whatever
> reason, then also make them enable a new property which says something to the
> effect of 'allow completely non-secure access to the entire NiFi instance -
> not recommended'.
> 2. Enable HTTPS with one-way authentication by default, meaning the
> client authenticates the server, whereby the server has a server cert.  We
> could either make that cert self-signed (and thus not trusted by clients
> by default) or give the user a way to run through a command line
> process to make a legit cert.
> 3. If not already configured with an authorization provider, supply an
> out-of-the-box provider which supports only a single user/password,
> auto-generated at first startup, enabling access to the NiFi system.
> 4. Disable all restricted processors by default.  Require the user to 
> explicitly enable them.
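Item 3 above, an auto-generated single-user credential at first startup, could be sketched like this; the encoding and length are assumptions, not the eventual NiFi implementation:

```java
import java.security.SecureRandom;
import java.util.Base64;

// Illustrative sketch of generating a random first-startup credential.
// URL-safe Base64 keeps the value copy-pasteable; 32 bytes gives 256 bits
// of entropy. NiFi's actual single-user provider may differ.
public class InitialCredentialSketch {

    public static String generatePassword(final int numBytes) {
        final byte[] bytes = new byte[numBytes];
        new SecureRandom().nextBytes(bytes);
        return Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
    }
}
```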



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (NIFI-8220) Establish a secure by default configuration for NiFi

2021-02-10 Thread David Handermann (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17282629#comment-17282629
 ] 

David Handermann edited comment on NIFI-8220 at 2/10/21, 6:33 PM:
--

Another aspect of this effort should include requiring an explicit value for 
the \{{nifi.sensitive.props.key}} property.  The current implementation allows 
for the property to be blank, but prints a large error message in the log 
indicating that an internal default value will be used.  Existing flows could 
be supported through migration guidance to set a value and update the flow 
configuration.


was (Author: exceptionfactory):
Another aspect of this effort should include requiring an explicit value for 
the `nifi.sensitive.props.key` property.  The current implementation allows for 
the property to be blank, but prints a large error message in the log 
indicating that an internal default value will be used.  Existing flows could 
be supported through migration guidance to set a value and update the flow 
configuration.

> Establish a secure by default configuration for NiFi
> 
>
> Key: NIFI-8220
> URL: https://issues.apache.org/jira/browse/NIFI-8220
> Project: Apache NiFi
>  Issue Type: Epic
>  Components: Tools and Build
>Reporter: Joe Witt
>Assignee: Joe Witt
>Priority: Blocker
> Fix For: 1.14.0
>
>
> Inspired by this tweet 
> https://twitter.com/_escctrl_/status/1359280656174510081?s=21 and the 
> resulting discussion here 
> https://lists.apache.org/thread.html/rc590f21807192a0dce18293c2d5b47392a6fd8a1ef26d77fbd6ee695%40%3Cdev.nifi.apache.org%3E
> It is time to change our config model.  It was also set up to be easy to use.
> We've seen these silly setups on the Internet before, but it has gotten
> ridiculous.  We need to take action.
> Will create a set of one or more JIRAs to roughly do the following.
> 1. Disable HTTP by default.  If a user wants to enable it for whatever
> reason, then also make them enable a new property which says something to the
> effect of 'allow completely non-secure access to the entire NiFi instance -
> not recommended'.
> 2. Enable HTTPS with one-way authentication by default, meaning the
> client authenticates the server, whereby the server has a server cert.  We
> could either make that cert self-signed (and thus not trusted by clients
> by default) or give the user a way to run through a command line
> process to make a legit cert.
> 3. If not already configured with an authorization provider, supply an
> out-of-the-box provider which supports only a single user/password,
> auto-generated at first startup, enabling access to the NiFi system.
> 4. Disable all restricted processors by default.  Require the user to 
> explicitly enable them.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-8218) SAML message intended destination endpoint {} did not match receipient {}

2021-02-10 Thread Bryan Bende (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bryan Bende updated NIFI-8218:
--
Status: Patch Available  (was: Open)

> SAML message intended destination endpoint {} did not match receipient {}
> -
>
> Key: NIFI-8218
> URL: https://issues.apache.org/jira/browse/NIFI-8218
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Bryan Bende
>Assignee: Bryan Bende
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When behind a proxy, NiFi will respect the X-ProxyHost header and use that 
> value to construct the URLs in the SAML request, so that the SAML response 
> will be sent back through the proxy.
> When processing the SAML response, there is OpenSAML code that compares the 
> "Destination" value in the SAML response which will have the proxy host, 
> against the host on the HttpServletRequest which comes from the Host header.
> So if the Host header is different from X-ProxyHost, which it could be, then 
> this comparison fails.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi] bbende opened a new pull request #4816: NIFI-8218 Use proxy headers when available when getting request value…

2021-02-10 Thread GitBox


bbende opened a new pull request #4816:
URL: https://github.com/apache/nifi/pull/4816


   …s while processing SAML responses
   







[jira] [Updated] (NIFI-8221) Set default interface for HTTP to localhost

2021-02-10 Thread Nathan Gough (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nathan Gough updated NIFI-8221:
---
Description: When NiFi starts without any secure configuration, it should 
by default listen only on localhost (127.0.0.1). Add documentation in the 
nifi.properties file, migration guide, and admin guide about enabling 
internet-accessible interfaces without HTTPS/authentication enabled.  (was: 
When NiFi starts without any secure configuration, it should by default 
listen only on localhost (127.0.0.1).)

> Set default interface for HTTP to localhost
> ---
>
> Key: NIFI-8221
> URL: https://issues.apache.org/jira/browse/NIFI-8221
> Project: Apache NiFi
>  Issue Type: Sub-task
>Affects Versions: 1.12.1
>Reporter: Nathan Gough
>Assignee: Nathan Gough
>Priority: Major
> Fix For: 1.13.0
>
>
> When NiFi starts without any secure configuration, it should by default 
> listen only on localhost (127.0.0.1). Add documentation in the 
> nifi.properties file, migration guide, and admin guide about enabling 
> internet-accessible interfaces without HTTPS/authentication enabled.
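In nifi.properties, this could presumably look something like the fragment below (a hedged sketch: `nifi.web.http.host` and `nifi.web.http.port` are the existing web property names, but the exact defaults shipped in the release may differ):

```properties
# Bind the unsecured HTTP interface to loopback only, so NiFi is not
# reachable from other hosts unless HTTPS/authentication is configured.
nifi.web.http.host=127.0.0.1
nifi.web.http.port=8080
```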



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (NIFI-8221) Set default interface for HTTP to localhost

2021-02-10 Thread Nathan Gough (Jira)
Nathan Gough created NIFI-8221:
--

 Summary: Set default interface for HTTP to localhost
 Key: NIFI-8221
 URL: https://issues.apache.org/jira/browse/NIFI-8221
 Project: Apache NiFi
  Issue Type: Sub-task
Affects Versions: 1.12.1
Reporter: Nathan Gough
Assignee: Nathan Gough
 Fix For: 1.13.0


When NiFi starts without any secure configuration, it should by default listen 
only on localhost (127.0.0.1).





[GitHub] [nifi] pgyori commented on a change in pull request #4808: NIFI-8205: Documentation improvements for the Wait processor

2021-02-10 Thread GitBox


pgyori commented on a change in pull request #4808:
URL: https://github.com/apache/nifi/pull/4808#discussion_r573962378



##
File path: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/resources/docs/org.apache.nifi.processors.standard.Wait/additionalDetails.html
##
@@ -15,9 +15,9 @@
   limitations under the License.
 -->
 
-
-ValidateCsv
-
+   
+   ValidateCsv

Review comment:
   Thank you! Pushed a new commit with the fix.









[jira] [Commented] (NIFI-8220) Establish a secure by default configuration for NiFi

2021-02-10 Thread Joe Witt (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17282608#comment-17282608
 ] 

Joe Witt commented on NIFI-8220:


Ahhh I like that idea Nathan!

> Establish a secure by default configuration for NiFi
> 
>
> Key: NIFI-8220
> URL: https://issues.apache.org/jira/browse/NIFI-8220
> Project: Apache NiFi
>  Issue Type: Epic
>  Components: Tools and Build
>Reporter: Joe Witt
>Assignee: Joe Witt
>Priority: Blocker
> Fix For: 1.14.0
>
>
> Inspired by this tweet 
> https://twitter.com/_escctrl_/status/1359280656174510081?s=21 and the 
> resulting discussion here 
> https://lists.apache.org/thread.html/rc590f21807192a0dce18293c2d5b47392a6fd8a1ef26d77fbd6ee695%40%3Cdev.nifi.apache.org%3E
> It is time to change our config model.  It was also set up to be easy to 
> use.  We've seen these silly setups on the Internet before, but it has 
> gotten ridiculous.  We need to take action.
> Will create a set of one or more JIRAs to roughly do the following.
> 1. Disable HTTP by default.  If a user wants to enable it for whatever 
> reason, then also make them enable a new property which says something to 
> the effect of 'allow completely non-secure access to the entire NiFi 
> instance - not recommended'.
> 2. Enable HTTPS with one-way authentication by default, which would be the 
> client authenticating the server, whereby the server has a server cert.  We 
> could either make that cert self-signed (and thus not trusted by clients by 
> default) or give the user a way to run through a command-line process to 
> make a legit cert.
> 3. If not already configured with an authorization provider, supply an 
> out-of-the-box provider which supports only a single user/password, 
> auto-generated at first startup, that enables access to the NiFi system.
> 4. Disable all restricted processors by default.  Require the user to 
> explicitly enable them.





[GitHub] [nifi] pgyori commented on a change in pull request #4815: NIFI-8200: Modifying PutAzureDataLakeStorage to delete temp file if e…

2021-02-10 Thread GitBox


pgyori commented on a change in pull request #4815:
URL: https://github.com/apache/nifi/pull/4815#discussion_r573941975



##
File path: 
nifi-nar-bundles/nifi-azure-bundle/nifi-azure-processors/src/main/java/org/apache/nifi/processors/azure/storage/PutAzureDataLakeStorage.java
##
@@ -126,6 +126,9 @@ public void onTrigger(final ProcessContext context, final 
ProcessSession session
 if (length > 0) {
 try (final InputStream rawIn = session.read(flowFile); 
final BufferedInputStream bufferedIn = new BufferedInputStream(rawIn)) {
 uploadContent(fileClient, bufferedIn, length);
+} catch (Exception e) {
+fileClient.delete();
+throw e;

Review comment:
   Exception 'e' is suppressed if fileClient.delete() also throws an 
exception. My recommendation is to put a try-catch-finally around delete(): in 
the catch, only an error message needs to be logged, and in the finally, the 
original exception 'e' can be thrown forward.
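The structure being recommended here could look roughly like the sketch below (hedged: the `FileClient` interface is a simplified stand-in for illustration, not the real Azure `DataLakeFileClient` API):

```java
import java.io.IOException;

public class UploadCleanupSketch {
    // Simplified stand-in for the Azure file client used in the PR.
    interface FileClient {
        void upload() throws IOException;
        void delete();
    }

    // On upload failure, try to delete the 0-byte temp file; any delete
    // failure is only logged so it cannot suppress the original exception.
    static void uploadWithCleanup(FileClient client) throws Exception {
        try {
            client.upload();
        } catch (Exception e) {
            try {
                client.delete();
            } catch (Exception deleteEx) {
                System.err.println("Failed to delete temp file: " + deleteEx.getMessage());
            } finally {
                throw e; // the original exception is propagated forward
            }
        }
    }
}
```

Even if `delete()` throws, the `finally` block rethrows the original `e`, so the caller always sees the upload failure rather than the cleanup failure.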









[jira] [Commented] (NIFI-8220) Establish a secure by default configuration for NiFi

2021-02-10 Thread Nathan Gough (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17282601#comment-17282601
 ] 

Nathan Gough commented on NIFI-8220:


An additional action we could take, when running insecurely, would be to 
restrict the Jetty server from binding to all interfaces (0.0.0.0) and only 
allow access to NiFi via the loopback interface (127.0.0.1).
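The loopback-only binding can be illustrated at the socket level (a plain java.net sketch of the idea, not NiFi's actual Jetty wiring): supplying an explicit bind address restricts the listener to 127.0.0.1, whereas the default binds all interfaces.

```java
import java.io.IOException;
import java.net.InetAddress;
import java.net.ServerSocket;

public class LoopbackBind {
    // Listens only on 127.0.0.1; connection attempts from other hosts are
    // refused at the network layer, unlike a 0.0.0.0 (all-interfaces) bind.
    public static ServerSocket openLoopbackOnly(int port) throws IOException {
        return new ServerSocket(port, 50, InetAddress.getLoopbackAddress());
    }
}
```

In Jetty specifically the equivalent knob is the connector's host setting, which NiFi exposes through its web properties.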

> Establish a secure by default configuration for NiFi
> 
>
> Key: NIFI-8220
> URL: https://issues.apache.org/jira/browse/NIFI-8220
> Project: Apache NiFi
>  Issue Type: Epic
>  Components: Tools and Build
>Reporter: Joe Witt
>Assignee: Joe Witt
>Priority: Blocker
> Fix For: 1.14.0
>
>
> Inspired by this tweet 
> https://twitter.com/_escctrl_/status/1359280656174510081?s=21 and the 
> resulting discussion here 
> https://lists.apache.org/thread.html/rc590f21807192a0dce18293c2d5b47392a6fd8a1ef26d77fbd6ee695%40%3Cdev.nifi.apache.org%3E
> It is time to change our config model.  It was also set up to be easy to 
> use.  We've seen these silly setups on the Internet before, but it has 
> gotten ridiculous.  We need to take action.
> Will create a set of one or more JIRAs to roughly do the following.
> 1. Disable HTTP by default.  If a user wants to enable it for whatever 
> reason, then also make them enable a new property which says something to 
> the effect of 'allow completely non-secure access to the entire NiFi 
> instance - not recommended'.
> 2. Enable HTTPS with one-way authentication by default, which would be the 
> client authenticating the server, whereby the server has a server cert.  We 
> could either make that cert self-signed (and thus not trusted by clients by 
> default) or give the user a way to run through a command-line process to 
> make a legit cert.
> 3. If not already configured with an authorization provider, supply an 
> out-of-the-box provider which supports only a single user/password, 
> auto-generated at first startup, that enables access to the NiFi system.
> 4. Disable all restricted processors by default.  Require the user to 
> explicitly enable them.





[jira] [Commented] (NIFI-8148) Selecting field from array with QueryRecord routes to failure

2021-02-10 Thread Jon Kessler (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17282558#comment-17282558
 ] 

Jon Kessler commented on NIFI-8148:
---

[~markap14], will you please confirm that this is the exception you saw so that 
I know I'm on the right track?


60867 [pool-1-thread-1] ERROR org.apache.nifi.processors.standard.QueryRecord - 
QueryRecord[id=56c7553f-f299-4e5d-b0fc-a2fc6b2a99f7] Failed to write 
MapRecord[{zip=[Ljava.lang.Object;@6de8ce3c}] with schema ["zip" : "RECORD"] 
as a JSON Object due to 
org.apache.nifi.serialization.record.util.IllegalTypeConversionException: 
Cannot convert value [[Ljava.lang.Object;@6de8ce3c] of type class 
[Ljava.lang.Object; to Record for field zip: 
org.apache.nifi.serialization.record.util.IllegalTypeConversionException: 
Cannot convert value [[Ljava.lang.Object;@6de8ce3c] of type class 
[Ljava.lang.Object; to Record for field zip
org.apache.nifi.serialization.record.util.IllegalTypeConversionException: 
Cannot convert value [[Ljava.lang.Object;@6de8ce3c] of type class 
[Ljava.lang.Object; to Record for field zip
    at org.apache.nifi.serialization.record.util.DataTypeUtils.toRecord(DataTypeUtils.java:398)
    at org.apache.nifi.serialization.record.util.DataTypeUtils.convertType(DataTypeUtils.java:219)
    at org.apache.nifi.serialization.record.util.DataTypeUtils.convertType(DataTypeUtils.java:171)
    at org.apache.nifi.json.WriteJsonResult.writeValue(WriteJsonResult.java:327)
    at org.apache.nifi.json.WriteJsonResult.writeRecord(WriteJsonResult.java:199)
    at org.apache.nifi.json.WriteJsonResult.writeRecord(WriteJsonResult.java:148)
    at org.apache.nifi.serialization.AbstractRecordSetWriter.write(AbstractRecordSetWriter.java:59)
    at org.apache.nifi.serialization.AbstractRecordSetWriter.write(AbstractRecordSetWriter.java:52)
    at org.apache.nifi.processors.standard.QueryRecord$1.process(QueryRecord.java:347)


 

> Selecting field from array with QueryRecord routes to failure
> -
>
> Key: NIFI-8148
> URL: https://issues.apache.org/jira/browse/NIFI-8148
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Mark Payne
>Assignee: Jon Kessler
>Priority: Major
>
> Given the following JSON document coming into QueryRecord:
> {
>   "name": "John Doe",
>   "try": [
>     {
>       "workAddress": {
>         "number": "123",
>         "street": "5th Avenue",
>         "city": "New York",
>         "state": "NY",
>         "zip": "10020"
>       },
>       "homeAddress": {
>         "number": "456",
>         "street": "116th Avenue",
>         "city": "New York",
>         "state": "NY",
>         "zip": "11697"
>       }
>     }
>   ]
> }
> When using a JSON Reader (inferred schema) and JSON Writer (inherit record 
> schema), we should be able to use the query:
> SELECT RPATH(try, '/*/zip') AS zip
> FROM FLOWFILE
> The result should be two records, each consisting of a single field named 
> 'zip' that is of type String.
> Currently, it throws an Exception and routes to failure.





[jira] [Commented] (NIFI-8179) NiFi standalone JSON validator processor

2021-02-10 Thread Otto Fowler (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17282538#comment-17282538
 ] 

Otto Fowler commented on NIFI-8179:
---

Validate how?
Validate against JSON Schema? If so, what version? Where will the schema live?

Validate as parsable? Just valid JSON at all?

> NiFi standalone JSON validator processor
> ---
>
> Key: NIFI-8179
> URL: https://issues.apache.org/jira/browse/NIFI-8179
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: John Carole
>Priority: Major
>
> A standalone processor that reads JSON, makes sure it is valid JSON, and 
> then reroutes the FlowFile to success or failure based on the validation.





[jira] [Assigned] (NIFI-8148) Selecting field from array with QueryRecord routes to failure

2021-02-10 Thread Jon Kessler (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jon Kessler reassigned NIFI-8148:
-

Assignee: Jon Kessler

> Selecting field from array with QueryRecord routes to failure
> -
>
> Key: NIFI-8148
> URL: https://issues.apache.org/jira/browse/NIFI-8148
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Mark Payne
>Assignee: Jon Kessler
>Priority: Major
>
> Given the following JSON document coming into QueryRecord:
> {
>   "name": "John Doe",
>   "try": [
>     {
>       "workAddress": {
>         "number": "123",
>         "street": "5th Avenue",
>         "city": "New York",
>         "state": "NY",
>         "zip": "10020"
>       },
>       "homeAddress": {
>         "number": "456",
>         "street": "116th Avenue",
>         "city": "New York",
>         "state": "NY",
>         "zip": "11697"
>       }
>     }
>   ]
> }
> When using a JSON Reader (inferred schema) and JSON Writer (inherit record 
> schema), we should be able to use the query:
> SELECT RPATH(try, '/*/zip') AS zip
> FROM FLOWFILE
> The result should be two records, each consisting of a single field named 
> 'zip' that is of type String.
> Currently, it throws an Exception and routes to failure.





[jira] [Created] (NIFI-8220) Establish a secure by default configuration for NiFi

2021-02-10 Thread Joe Witt (Jira)
Joe Witt created NIFI-8220:
--

 Summary: Establish a secure by default configuration for NiFi
 Key: NIFI-8220
 URL: https://issues.apache.org/jira/browse/NIFI-8220
 Project: Apache NiFi
  Issue Type: Epic
  Components: Tools and Build
Reporter: Joe Witt
Assignee: Joe Witt
 Fix For: 1.14.0


Inspired by this tweet 
https://twitter.com/_escctrl_/status/1359280656174510081?s=21 and the resulting 
discussion here 
https://lists.apache.org/thread.html/rc590f21807192a0dce18293c2d5b47392a6fd8a1ef26d77fbd6ee695%40%3Cdev.nifi.apache.org%3E

It is time to change our config model.  It was also set up to be easy to use.  
We've seen these silly setups on the Internet before, but it has gotten 
ridiculous.  We need to take action.

Will create a set of one or more JIRAs to roughly do the following.
1. Disable HTTP by default.  If a user wants to enable it for whatever 
reason, then also make them enable a new property which says something to 
the effect of 'allow completely non-secure access to the entire NiFi 
instance - not recommended'.
2. Enable HTTPS with one-way authentication by default, which would be the 
client authenticating the server, whereby the server has a server cert.  We 
could either make that cert self-signed (and thus not trusted by clients by 
default) or give the user a way to run through a command-line process to 
make a legit cert.
3. If not already configured with an authorization provider, supply an 
out-of-the-box provider which supports only a single user/password, 
auto-generated at first startup, that enables access to the NiFi system.
4. Disable all restricted processors by default.  Require the user to 
explicitly enable them.





[jira] [Updated] (NIFI-8200) PutAzureDataLakeStorage processor leaves behind a 0B file if upload fails

2021-02-10 Thread Timea Barna (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timea Barna updated NIFI-8200:
--
Status: Patch Available  (was: In Progress)

> PutAzureDataLakeStorage processor leaves behind a 0B file if upload fails
> -
>
> Key: NIFI-8200
> URL: https://issues.apache.org/jira/browse/NIFI-8200
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Peter Gyori
>Assignee: Timea Barna
>Priority: Minor
>  Labels: azure
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The PutAzureDataLakeStorage processor first creates an empty file with the 
> given name and then uploads the content to that file. However, if the 
> upload fails, the empty file does not get removed.
> The processor needs to be modified to remove the file if the upload is not 
> successful.





[GitHub] [nifi] timeabarna opened a new pull request #4815: NIFI-8200: Modifying PutAzureDataLakeStorage to delete temp file if e…

2021-02-10 Thread GitBox


timeabarna opened a new pull request #4815:
URL: https://github.com/apache/nifi/pull/4815


   …xception was thrown in uploadContent()
   
   https://issues.apache.org/jira/browse/NIFI-8200
   
    Description of PR
   
   delete temp file if exception was thrown in uploadContent()
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
in the commit message?
   
   - [ ] Does your PR title start with **NIFI-XXXX** where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [ ] Has your PR been rebased against the latest commit within the target 
branch (typically `main`)?
   
   - [ ] Is your initial contribution a single, squashed commit? _Additional 
commits in response to PR reviewer feedback should be made on this branch and 
pushed to allow change tracking. Do not `squash` or use `--force` when pushing 
to allow for clean monitoring of changes._
   
   ### For code changes:
   - [ ] Have you ensured that the full suite of tests is executed via `mvn 
-Pcontrib-check clean install` at the root `nifi` folder?
   - [ ] Have you written or updated unit tests to verify your changes?
   - [ ] Have you verified that the full build is successful on JDK 8?
   - [ ] Have you verified that the full build is successful on JDK 11?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
   - [ ] If applicable, have you updated the `LICENSE` file, including the main 
`LICENSE` file under `nifi-assembly`?
   - [ ] If applicable, have you updated the `NOTICE` file, including the main 
`NOTICE` file found under `nifi-assembly`?
   - [ ] If adding new Properties, have you added `.displayName` in addition to 
.name (programmatic access) for each of the new properties?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check GitHub Actions CI for 
build issues and submit an update to your PR as soon as possible.
   







[jira] [Created] (NIFI-8219) PMEM-backed Repositories

2021-02-10 Thread Takashi Menjo (Jira)
Takashi Menjo created NIFI-8219:
---

 Summary: PMEM-backed Repositories
 Key: NIFI-8219
 URL: https://issues.apache.org/jira/browse/NIFI-8219
 Project: Apache NiFi
  Issue Type: New Feature
  Components: Core Framework
 Environment: PMEM, x64, Linux, DAX, PMDK
Reporter: Takashi Menjo


Persistent memory (PMEM) is non-volatile, byte-addressable memory installed 
in DIMM slots. With Filesystem DAX (Direct Access) and PMDK (Persistent 
Memory Development Kit), a program can map on-PMEM files into userspace and 
then read and write their data, bypassing page caches. These technologies 
could bring better I/O performance than traditional disks.

I would propose a patchset that lets the FlowFile, Content, and Provenance 
Repositories use PMDK (via JNI) to write their data to the following files:

 * FlowFile Repository: Journals (.journal)
 * Content Repository: Content/Resource Claims
 * Provenance Repository: Provenance logs (.prov) and ToCs (.toc)

*Please note that this patchset works only on x64 Linux (4.15 or later) for 
now.*
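As a conceptual illustration only, the "map a file into userspace, then read and write through the mapping" idea can be sketched with plain Java NIO. Note the hedge: the actual proposal goes through PMDK via JNI; `MappedByteBuffer` itself is not PMEM-aware, and only on a DAX-mounted PMEM filesystem would such stores reach the media directly.

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class MappedWriteSketch {
    // Map a file into the process address space and write through the mapping.
    // With DAX + PMDK, analogous loads/stores would bypass the page cache.
    public static byte writeAndReadBack(Path file, byte value) throws IOException {
        try (FileChannel ch = FileChannel.open(file,
                StandardOpenOption.CREATE, StandardOpenOption.READ,
                StandardOpenOption.WRITE)) {
            // READ_WRITE mapping grows the file to the requested size.
            MappedByteBuffer map = ch.map(FileChannel.MapMode.READ_WRITE, 0, 8);
            map.put(0, value);   // store through the mapping
            map.force();         // flush to the backing store (msync)
            return map.get(0);
        }
    }
}
```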





[jira] [Updated] (MINIFICPP-1451) ThreadPoolAdjust test transiently fails

2021-02-10 Thread Martin Zink (Jira)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-1451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martin Zink updated MINIFICPP-1451:
---
Status: Patch Available  (was: In Progress)

https://github.com/apache/nifi-minifi-cpp/pull/1000

> ThreadPoolAdjust test transiently fails
> ---
>
> Key: MINIFICPP-1451
> URL: https://issues.apache.org/jira/browse/MINIFICPP-1451
> Project: Apache NiFi MiNiFi C++
>  Issue Type: Bug
>Reporter: Gabor Gyimesi
>Assignee: Martin Zink
>Priority: Minor
> Attachments: threadpooladjust-failure-vs2019.txt
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> ThreadPoolAdjust test was seen failing in CI with the VS2019 build. See logs 
> attached.





[GitHub] [nifi-minifi-cpp] martinzink commented on a change in pull request #1000: MINIFICPP-1451: ThreadPoolAdjust test transiently failed on Windows CI

2021-02-10 Thread GitBox


martinzink commented on a change in pull request #1000:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1000#discussion_r573643357



##
File path: extensions/http-curl/tests/ThreadPoolAdjust.cpp
##
@@ -31,9 +31,11 @@
 #include "processors/LogAttribute.h"
 #include "utils/IntegrationTestUtils.h"
 
+constexpr uint64_t INCREASED_WAITTIME_MSECS = DEFAULT_WAITTIME_MSECS * 1.5;
+
 class HttpTestHarness : public IntegrationBase {
  public:
-  HttpTestHarness() {
+  HttpTestHarness() : IntegrationBase(INCREASED_WAITTIME_MSECS) {

Review comment:
   Changed it to a hard-coded 5000 ms instead of the previously calculated 
4500 ms.









[jira] [Updated] (NIFI-8201) LdapUserGroupProvider Group Search Scope "SUBTREE" setting does not search directory tree

2021-02-10 Thread Karl Koeck (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karl Koeck updated NIFI-8201:
-
Summary: LdapUserGroupProvider Group Search Scope "SUBTREE" setting does 
not search directory tree  (was: LdapUserGroupProvider Group Search Scope 
SUBTREE setting does not search directory tree)

> LdapUserGroupProvider Group Search Scope "SUBTREE" setting does not search 
> directory tree
> -
>
> Key: NIFI-8201
> URL: https://issues.apache.org/jira/browse/NIFI-8201
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Configuration
>Affects Versions: 1.11.4, 1.12.1
> Environment: OS: Windows Server 2012 R2, LDAP Server: Microsoft 
> Active Directory
>Reporter: Karl Koeck
>Priority: Major
>
> Our *{{Group Search Scope}}* parameter within the 
> *{{ldap-user-group-provider}}* user group provider is set to *{{SUBTREE}}.* 
> However user authorization only works for user profiles directly located 
> within the {{*Group Search Base*}} OU level. NiFi behaves as if {{*Group 
> Search Scope*}} is set to *{{ONE_LEVEL}}*.
> This results in the following exception in case the to-be-authorized user 
> profile is located within a sub-OU of the {{*Group Search Base*}} parameter:
> {code:java}
> o.a.n.w.a.c.AccessDeniedExceptionMapper identity[myuser], groups[] does not 
> have permission to access the requested resource. Unknown user with identity 
> 'myuser'. Returning Forbidden response.{code}
>  
> The above mentioned behavior was observed with NiFi version 1.11.4 and 1.12.1 
> and was also verified by another Apache NiFi Slack user (see threads below):
>  * [https://apachenifi.slack.com/archives/C0L9VCD47/p1608638026275800]
>  * [https://apachenifi.slack.com/archives/C0L9VCD47/p1604920271147200]





[jira] [Updated] (MINIFICPP-1488) Improve tests for features lacking coverage - Spring Internship Project

2021-02-10 Thread Adam Hunyadi (Jira)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-1488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Hunyadi updated MINIFICPP-1488:

Epic Name: 2020 Spring Internship Test Coverage Expansion  (was: Spring 
Internship Test Coverage Expansion)

> Improve tests for features lacking coverage - Spring Internship Project
> ---
>
> Key: MINIFICPP-1488
> URL: https://issues.apache.org/jira/browse/MINIFICPP-1488
> Project: Apache NiFi MiNiFi C++
>  Issue Type: Epic
>Affects Versions: 0.7.0
>Reporter: Adam Hunyadi
>Priority: Minor
>  Labels: MiNiFi-CPP-Hygiene
> Fix For: 1.0.0
>
>
> *Background:*
> There are quite a few features in MiNiFi that are not properly tested.
> *Proposal:*
> The person going through the Jiras should:
>  # Understand the features to be tested (one can refer to NiFi docs for some 
> of the features already implemented there)
>  # Identify testing requirements and add them to the Jiras as acceptance 
> criteria and have team-members review them.
>  # Implement the tests.
>  # Correct any bugs found while testing.





[jira] [Updated] (MINIFICPP-1488) Improve tests for features lacking coverage - Spring Internship Project

2021-02-10 Thread Adam Hunyadi (Jira)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-1488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Hunyadi updated MINIFICPP-1488:

Epic Name: 2021 Spring Internship Test Coverage Expansion  (was: 2020 
Spring Internship Test Coverage Expansion)

> Improve tests for features lacking coverage - Spring Internship Project
> ---
>
> Key: MINIFICPP-1488
> URL: https://issues.apache.org/jira/browse/MINIFICPP-1488
> Project: Apache NiFi MiNiFi C++
>  Issue Type: Epic
>Affects Versions: 0.7.0
>Reporter: Adam Hunyadi
>Priority: Minor
>  Labels: MiNiFi-CPP-Hygiene
> Fix For: 1.0.0
>
>
> *Background:*
> There are quite a few features in MiNiFi that are not properly tested.
> *Proposal:*
> The person going through the Jiras should:
>  # Understand the features to be tested (one can refer to NiFi docs for some 
> of the features already implemented there)
>  # Identify testing requirements and add them to the Jiras as acceptance 
> criteria and have team-members review them.
>  # Implement the tests.
>  # Correct any bugs found while testing.





[GitHub] [nifi-minifi-cpp] fgerlits commented on a change in pull request #1000: MINIFICPP-1451: ThreadPoolAdjust test transiently failed on Windows CI

2021-02-10 Thread GitBox


fgerlits commented on a change in pull request #1000:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1000#discussion_r573610053



##
File path: extensions/http-curl/tests/ThreadPoolAdjust.cpp
##
@@ -31,9 +31,11 @@
 #include "processors/LogAttribute.h"
 #include "utils/IntegrationTestUtils.h"
 
+constexpr uint64_t INCREASED_WAITTIME_MSECS = DEFAULT_WAITTIME_MSECS * 1.5;
+
 class HttpTestHarness : public IntegrationBase {
  public:
-  HttpTestHarness() {
+  HttpTestHarness() : IntegrationBase(INCREASED_WAITTIME_MSECS) {

Review comment:
   I would hard-code the wait time value here instead of defining it as 
`DEFAULT_WAITTIME_MSECS * 1.5`, because I don't think we want this wait time 
to change if we change the default later.
   
   Also, since `wait_time_` is only used as a parameter to 
`verifyLogLinePresenceInPollTime()`, i.e. it will stop waiting earlier if the 
log line is found earlier, we could make it longer, e.g. 5 seconds or even 
10 seconds.









[jira] [Created] (MINIFICPP-1488) Improve tests for features lacking coverage - Spring Internship Project

2021-02-10 Thread Adam Hunyadi (Jira)
Adam Hunyadi created MINIFICPP-1488:
---

 Summary: Improve tests for features lacking coverage - Spring 
Internship Project
 Key: MINIFICPP-1488
 URL: https://issues.apache.org/jira/browse/MINIFICPP-1488
 Project: Apache NiFi MiNiFi C++
  Issue Type: Epic
Affects Versions: 0.7.0
Reporter: Adam Hunyadi
 Fix For: 1.0.0


*Background:*

There are quite a few features in MiNiFi that are not properly tested.

*Proposal:*

The person going through the Jiras should:
 # Understand the features to be tested (one can refer to NiFi docs for some of 
the features already implemented there)
 # Identify testing requirements and add them to the Jiras as acceptance 
criteria and have team-members review them.
 # Implement the tests.
 # Correct any bugs found while testing.





[GitHub] [nifi] Lehel44 commented on a change in pull request #4754: NIFI-7417: GetAzureCosmosDBRecord processor

2021-02-10 Thread GitBox



[GitHub] [nifi] Lehel44 commented on a change in pull request #4754: NIFI-7417: GetAzureCosmosDBRecord processor

2021-02-10 Thread GitBox


Lehel44 commented on a change in pull request #4754:
URL: https://github.com/apache/nifi/pull/4754#discussion_r573018147



##
File path: 
nifi-nar-bundles/nifi-azure-bundle/nifi-azure-processors/src/main/java/org/apache/nifi/processors/azure/cosmos/document/GetAzureCosmosDBRecord.java
##
@@ -0,0 +1,266 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.nifi.processors.azure.cosmos.document;
+
+import java.io.IOException;
+import java.io.OutputStream;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.atomic.AtomicLong;
+
+import com.azure.cosmos.CosmosContainer;
+import com.azure.cosmos.models.CosmosQueryRequestOptions;
+import com.azure.cosmos.util.CosmosPagedIterable;
+import com.fasterxml.jackson.core.type.TypeReference;
+import com.fasterxml.jackson.databind.JsonNode;
+import com.fasterxml.jackson.databind.ObjectMapper;
+
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.InputRequirement.Requirement;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.ValidationContext;
+import org.apache.nifi.components.ValidationResult;
+import org.apache.nifi.expression.ExpressionLanguageScope;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.flowfile.attributes.CoreAttributes;
+import org.apache.nifi.logging.ComponentLog;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.schema.access.SchemaNotFoundException;
+import org.apache.nifi.serialization.RecordSetWriter;
+import org.apache.nifi.serialization.RecordSetWriterFactory;
+import org.apache.nifi.serialization.record.MapRecord;
+import org.apache.nifi.serialization.record.Record;
+import org.apache.nifi.serialization.record.RecordSchema;
+
+
+@Tags({ "azure", "cosmos", "record", "read", "fetch" })
+@InputRequirement(Requirement.INPUT_ALLOWED)
+@CapabilityDescription("A record-oriented GET processor that uses the record writers to write the Azure Cosmos SQL select query result set.")
+public class GetAzureCosmosDBRecord extends AbstractAzureCosmosDBProcessor {
+    public static final PropertyDescriptor WRITER_FACTORY = new PropertyDescriptor.Builder()
+        .name("record-writer-factory")
+        .displayName("Record Writer")
+        .description("The record writer to use to write the result sets")
+        .identifiesControllerService(RecordSetWriterFactory.class)
+        .required(true)
+        .build();
+    public static final PropertyDescriptor SCHEMA_NAME = new PropertyDescriptor.Builder()
+        .name("schema-name")
+        .displayName("Schema Name")
+        .description("The name of the schema in the configured schema registry to use for the query results")
+        .expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)
+        .addValidator(StandardValidators.NON_BLANK_VALIDATOR)
+        .defaultValue("${schema.name}")
+        .required(true)
+        .build();
+
+    public static final PropertyDescriptor QUERY = new PropertyDescriptor.Builder()
+        .name("sql-core-document-query")
+        .displayName("SQL Core Document Query")
+        .description("The SQL select query to execute. "
+            + "This should be a valid SQL select query to Cosmos DB with core sql api")
+        .required(true)
+        .addValidator(StandardValidators.NON_BLANK_VALIDATOR)
+        .expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)
+        .build();
+
+    public static final PropertyDescriptor MAX_RESPONSE_PAGE_SIZE = new PropertyDescriptor.Builder()
+        .name("max-page-size")
+        .displayName("Max Page Size")
+

[GitHub] [nifi-minifi-cpp] martinzink commented on a change in pull request #1000: MINIFICPP-1451: ThreadPoolAdjust test transiently failed on Windows CI

2021-02-10 Thread GitBox


martinzink commented on a change in pull request #1000:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1000#discussion_r573587229



##
File path: extensions/http-curl/tests/ThreadPoolAdjust.cpp
##
@@ -31,9 +31,11 @@
 #include "processors/LogAttribute.h"
 #include "utils/IntegrationTestUtils.h"
 
+constexpr uint64_t INCREASED_WAITTIME_MSECS = DEFAULT_WAITTIME_MSECS * 1.5;
+
 class HttpTestHarness : public IntegrationBase {
  public:
-  HttpTestHarness() {
+  HttpTestHarness() : IntegrationBase(INCREASED_WAITTIME_MSECS) {

Review comment:
   Looking at the available logs on GitHub, this issue is fairly regular: it happened 7 times in the last 90 days, only on Windows.
   
   The strange thing is that by the time the assertion message appears, the required log lines are usually already present. So I suspected that the wait time given for these log messages is too low to be consistent on the Windows CI. (I wasn't able to reproduce the issue on my Windows machine.)
   I tested the fix with a custom CI job that tries to run this test 200 times on windows2017 and windows2019.
   Without the fix, all 4 of 4 attempts failed to complete the 200 runs (failures happened after the 1st, 11th, 12th, and 103rd test run):
   * https://github.com/martinzink/nifi-minifi-cpp/actions/runs/551019180
   * https://github.com/martinzink/nifi-minifi-cpp/actions/runs/551315036
   
   With the increased wait time it passed 6 times out of 6 (1,200 successful test runs):
   * https://github.com/martinzink/nifi-minifi-cpp/actions/runs/551722774
   * https://github.com/martinzink/nifi-minifi-cpp/actions/runs/552033701
   * https://github.com/martinzink/nifi-minifi-cpp/actions/runs/552535235
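The fix above boils down to giving the test's poll-until-deadline log check more headroom. A minimal, generic sketch of that polling pattern (a hypothetical `waitFor` helper, not the actual minifi-cpp IntegrationTestUtils API):

```cpp
#include <chrono>
#include <functional>
#include <thread>

// Hypothetical sketch: repeatedly evaluate a condition until it holds or the
// allotted wait time (e.g. DEFAULT_WAITTIME_MSECS) expires. A wait time that
// is too short for a slow CI runner makes this return false transiently even
// though the condition would become true moments later.
bool waitFor(const std::function<bool()>& condition,
             std::chrono::milliseconds timeout,
             std::chrono::milliseconds pollInterval = std::chrono::milliseconds(10)) {
  const auto deadline = std::chrono::steady_clock::now() + timeout;
  while (!condition()) {
    if (std::chrono::steady_clock::now() >= deadline) {
      return condition();  // one final check at the deadline
    }
    std::this_thread::sleep_for(pollInterval);
  }
  return true;
}
```

A test harness would call this as `waitFor([&] { return logContains("expected line"); }, INCREASED_WAITTIME_MSECS)`; increasing the timeout only changes how long a slow run may take, not how fast a passing run completes, since the helper returns as soon as the condition holds.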





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org








[GitHub] [nifi-minifi-cpp] martinzink opened a new pull request #1000: MINIFICPP-1451: ThreadPoolAdjust test transiently failed on Windows CI

2021-02-10 Thread GitBox


martinzink opened a new pull request #1000:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1000


   Thank you for submitting a contribution to Apache NiFi - MiNiFi C++.
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [ ] Is there a JIRA ticket associated with this PR? Is it referenced
in the commit message?
   
   - [ ] Does your PR title start with MINIFICPP-XXXX where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [ ] Has your PR been rebased against the latest commit within the target 
branch (typically main)?
   
   - [ ] Is your initial contribution a single, squashed commit?
   
   ### For code changes:
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the LICENSE file?
   - [ ] If applicable, have you updated the NOTICE file?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check GitHub Actions CI 
results for build issues and submit an update to your PR as soon as possible.
   







[jira] [Commented] (MINIFICPP-1290) Create test coverage for OPC processors

2021-02-10 Thread Arpad Boda (Jira)


[ 
https://issues.apache.org/jira/browse/MINIFICPP-1290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17282335#comment-17282335
 ] 

Arpad Boda commented on MINIFICPP-1290:
---

Currently, OPC has only been tested using the Prosys OPC UA simulator.
This is enough to get familiar with the protocol.

The third-party library we use can also act as a server: 
https://open62541.org/

Steps:
1) Get familiar with the protocol and the library
2) Update the third-party library to the latest stable release
3) Create an OPC server using it and test basic functionality (list nodes, get 
different types of values, update them, create new nodes)
4) Test secure connectivity, integrated with the SSL context service - this 
most probably requires modifying the processor code

> Create test coverage for OPC processors
> ---
>
> Key: MINIFICPP-1290
> URL: https://issues.apache.org/jira/browse/MINIFICPP-1290
> Project: Apache NiFi MiNiFi C++
>  Issue Type: Improvement
>Reporter: Arpad Boda
>Priority: Major
>  Labels: MiNiFi-CPP-Hygiene
>
> Neither fetchOPC nor putOPC has proper test coverage now. 
> Test coverage should be extended. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (MINIFICPP-1215) Document and test SQL extension

2021-02-10 Thread Adam Hunyadi (Jira)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-1215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Hunyadi reassigned MINIFICPP-1215:
---

Assignee: Adam Debreceni

> Document and test SQL extension
> ---
>
> Key: MINIFICPP-1215
> URL: https://issues.apache.org/jira/browse/MINIFICPP-1215
> Project: Apache NiFi MiNiFi C++
>  Issue Type: New Feature
>Reporter: Marton Szasz
>Assignee: Adam Debreceni
>Priority: Major
>  Labels: MiNiFi-CPP-Hygiene
> Fix For: 0.9.0
>
>
> The SQL extension lacks documentation and test coverage. The purpose of this 
> ticket is to fix that.





[jira] [Assigned] (MINIFICPP-1305) Create integration tests for MQTT processors using a dockerized MQTT broker

2021-02-10 Thread Adam Hunyadi (Jira)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-1305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Hunyadi reassigned MINIFICPP-1305:
---

Assignee: Ádám Markovics

> Create integration tests for MQTT processors using a dockerized MQTT broker
> ---
>
> Key: MINIFICPP-1305
> URL: https://issues.apache.org/jira/browse/MINIFICPP-1305
> Project: Apache NiFi MiNiFi C++
>  Issue Type: Task
>Affects Versions: 0.7.0
>Reporter: Adam Hunyadi
>Assignee: Ádám Markovics
>Priority: Major
>  Labels: MiNiFi-CPP-Hygiene
> Fix For: 1.0.0
>
>
> *Background:*
> The MQTT processors are untested and known to be unstable. We suspect that 
> setting up secure connections is currently broken, and it is quite likely that 
> there are other problems as well.
> As we do not know much about the potential use cases for MQTT, my 
> recommendation is that whoever starts the implementation first spend a 
> considerable amount of time understanding this protocol and its use cases, 
> find a containerized broker implementation that adheres to them, and plan 
> the potential tests before even touching the code. Starting with 
> compatibility tests and checks (platform for the docker frame/CI job) before 
> writing the tests is also recommended.
> *Acceptance criteria:*
> The person picking up this task should investigate and propose tests and 
> verify them with [~aboda].





[jira] [Assigned] (NIFI-8176) Add indicator flag for Splunk acknowledgements

2021-02-10 Thread Simon Bence (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Bence reassigned NIFI-8176:
-

Assignee: Timea Barna  (was: Simon Bence)

> Add indicator flag for Splunk acknowledgements
> --
>
> Key: NIFI-8176
> URL: https://issues.apache.org/jira/browse/NIFI-8176
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.13.0
>Reporter: Simon Bence
>Assignee: Timea Barna
>Priority: Major
>
> As pointed out in the review of NIFI-7801, there is a corner case where the 
> processor might consider the indexing status unacknowledged without 
> polling. This can happen when the FlowFile is not processed by 
> QuerySplunkIndexingStatus before the TTL runs out. Please see the following 
> comments:
> [https://github.com/apache/nifi/pull/4714#discussion_r563907022]
> As a solution, a flag or counter should be introduced to prevent the 
> processor from giving up on the FlowFile without trying to poll at least once.





[jira] [Updated] (MINIFICPP-1435) SFTP tests transiently fail

2021-02-10 Thread Gabor Gyimesi (Jira)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-1435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Gyimesi updated MINIFICPP-1435:
-
Summary: SFTP tests transiently fail  (was: SFTP tests trainsiently fail)

> SFTP tests transiently fail
> ---
>
> Key: MINIFICPP-1435
> URL: https://issues.apache.org/jira/browse/MINIFICPP-1435
> Project: Apache NiFi MiNiFi C++
>  Issue Type: Bug
>Reporter: Gabor Gyimesi
>Priority: Minor
>  Labels: MiNiFi-CPP-Hygiene
> Attachments: SFTPTests-ubuntu1604.log
>
>
> SFTP tests rarely fail in CI. According to the logs the port file of the SFTP 
> server does not get created so the server probably does not start. See 
> attachment for logs.





[jira] [Resolved] (MINIFICPP-1468) test_delete_s3_object_proxy transiently fails

2021-02-10 Thread Gabor Gyimesi (Jira)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-1468?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Gyimesi resolved MINIFICPP-1468.
--
Resolution: Fixed

> test_delete_s3_object_proxy transiently fails
> -
>
> Key: MINIFICPP-1468
> URL: https://issues.apache.org/jira/browse/MINIFICPP-1468
> Project: Apache NiFi MiNiFi C++
>  Issue Type: Bug
>Reporter: Gabor Gyimesi
>Assignee: Gabor Gyimesi
>Priority: Minor
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The mock S3 server needs some time to delete the S3 object after the delete 
> request succeeds. Our check should wait for this to complete, to avoid 
> transient failures.





[GitHub] [nifi-minifi-cpp] lordgamez commented on a change in pull request #979: MINIFICPP-1456 Introduce PutAzureBlobStorage processor

2021-02-10 Thread GitBox


lordgamez commented on a change in pull request #979:
URL: https://github.com/apache/nifi-minifi-cpp/pull/979#discussion_r573538790



##
File path: win_build_vs.bat
##
@@ -47,6 +47,7 @@ for %%x in (%*) do (
 if [%%~x] EQU [/M]   set installer_merge_modules=ON
 if [%%~x] EQU [/C]   set build_coap=ON
 if [%%~x] EQU [/A]   set build_AWS=ON
+if [%%~x] EQU [/Z]   set build_azure=ON

Review comment:
   Updated




