[jira] [Commented] (NIFI-8361) UnpackContent failing for Deflate:Maximum

2021-03-24 Thread David Handermann (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17308378#comment-17308378
 ] 

David Handermann commented on NIFI-8361:


I submitted a pull request to Zip4j implementing support for checking what the 
Zip File Format Specification calls a "temporary spanning marker" in 
APPNOTE.TXT section 8.5.4.  Apparently some archiving software inserts this 
header signature even when the archive does not actually span multiple 
segments.  Support for this particular signature should resolve the problem, 
provided the change is accepted and incorporated in a new release of Zip4j.

> UnpackContent failing for Deflate:Maximum
> -
>
> Key: NIFI-8361
> URL: https://issues.apache.org/jira/browse/NIFI-8361
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.13.0, 1.13.2
> Environment: Ubuntu 18.04.5 (1.13.2)
> RHEL 7.7 (1.12.1)
>Reporter: Tom P
>Priority: Major
>  Labels: newbie
> Attachments: zip_deflate-maximum.xml
>
>
> Hi team,
> Using 1.13.2 and running a pipeline that pulls down a bunch of ZIP files, I 
> noticed a regression in behaviour between my two environments. 
> The 1.12.1 instance (running on RHEL) was able to unpack the file 
> successfully, whereas the 1.13.2 instance complains of an error stating 
> {code:java}
> 2021-03-23 04:12:39,361 ERROR [Timer-Driven Process Thread-8] 
> o.a.n.processors.standard.UnpackContent 
> UnpackContent[id=5d0fda44-0178-1000-872e-6c183c633c89] Unable to unpack 
> StandardFlowFileRecord[uuid=9fa7650e-8557-465c-b39b-0e9b5e25ee0a,claim=StandardContentClaim
>  [resourceClaim=StandardResourceClaim[id=1616471834548-154, 
> container=default, section=154], offset=0, 
> length=11095546],offset=0,name=3b70dbf9-b0a1-4d63-b2fd-0efe2a7291b8,size=11095546]
>  because it does not appear to have any entries; routing to failure{code}
> The only discernible difference between this file and files that were able to 
> be unpacked was that the offending files have
>  * a "compression method" of Deflate:Maximum (as opposed to Deflate on the 
> working files)
>  * an "offset" of 4 (as opposed to "0" on the working files)
> See attached for the template I used for testing the same functionality on my 
> 1.13.2 and 1.12.1 NiFi instances. I've downgraded the Ubuntu instance to 
> 1.12.1 and noted that the UnpackContent processor functions as expected, so I 
> don't believe it's an issue with the host OS. 
> I'm not sure whether the issue introduced in 1.13.1 might also have impacted 
> this, with the offset, or if it's something to do with the compression 
> method, or something else entirely.
> Happy to provide further detail if needed :)
> Cheers,
> Tom
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi] noblenumbat360 opened a new pull request #4936: NIFI-8271 Expansion of Prometheus metrics and associated testing.

2021-03-24 Thread GitBox


noblenumbat360 opened a new pull request #4936:
URL: https://github.com/apache/nifi/pull/4936


   Thank you for submitting a contribution to Apache NiFi.
   
   Please provide a short description of the PR here:
   
    #### Description of PR
   
   Expands the Prometheus metrics output as outlined in NIFI-8271.
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [x] Is there a JIRA ticket associated with this PR? Is it referenced 
in the commit message?
   
   - [x] Does your PR title start with **NIFI-XXXX** where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [x] Has your PR been rebased against the latest commit within the target 
branch (typically `main`)?
   
   - [x] Is your initial contribution a single, squashed commit? _Additional 
commits in response to PR reviewer feedback should be made on this branch and 
pushed to allow change tracking. Do not `squash` or use `--force` when pushing 
to allow for clean monitoring of changes._
   
   ### For code changes:
   - [x] Have you ensured that the full suite of tests is executed via `mvn 
-Pcontrib-check clean install` at the root `nifi` folder?
   - [x] Have you written or updated unit tests to verify your changes?
   - [x] Have you verified that the full build is successful on JDK 8?
   - [x] Have you verified that the full build is successful on JDK 11?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
   - [ ] If applicable, have you updated the `LICENSE` file, including the main 
`LICENSE` file under `nifi-assembly`?
   - [ ] If applicable, have you updated the `NOTICE` file, including the main 
`NOTICE` file found under `nifi-assembly`?
   - [ ] If adding new Properties, have you added `.displayName` in addition to 
.name (programmatic access) for each of the new properties?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check GitHub Actions CI for 
build issues and submit an update to your PR as soon as possible.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (NIFI-8361) UnpackContent failing for Deflate:Maximum

2021-03-24 Thread Tom P (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17308301#comment-17308301
 ] 

Tom P commented on NIFI-8361:
-

I was partway through writing an update to similar effect, thinking that must 
have been what the "Offset" was referring to - the header information that 
details what modifiers are present in the compression method (i.e. the info 
that specifies the "Maximum" part of "Deflate:Maximum"). Thanks so much for 
looking into this [~exceptionfactory], very much appreciated :) 



[jira] [Commented] (NIFI-8361) UnpackContent failing for Deflate:Maximum

2021-03-24 Thread David Handermann (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17308294#comment-17308294
 ] 

David Handermann commented on NIFI-8361:


On further analysis of the file referenced in the template, it appears to use 
the standard Deflate compression method, but the Zip entry contains an 
additional header signature.  Apache Commons Compress {{ZipArchiveInputStream}} 
has a special 
[readFirstLocalFileHeader|https://github.com/apache/commons-compress/blob/851dbed488159488420607924d86147b5f99d24f/src/main/java/org/apache/commons/compress/archivers/zip/ZipArchiveInputStream.java#L416]
 method that checks for that particular signature, named 
{{SINGLE_SEGMENT_SPLIT_MARKER}}. I will look at submitting an issue to Zip4j.  
It may also be worth adjusting UnpackContent to prefer Apache Commons Compress 
unless the Password property is configured in the Processor.
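The marker handling described above can be sketched in a stand-alone snippet. 
This is illustrative only, not the actual Zip4j or Commons Compress code: the 
class and method names are hypothetical, and the only grounded detail is the 
marker value itself, the bytes `0x50 0x4B 0x30 0x30` ("PK00") that APPNOTE.TXT 
8.5.4 describes for single-segment split archives, which also explains the 
4-byte offset reported in this issue:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.PushbackInputStream;

public class SplitMarkerSkipper {
    // "PK00": the single-segment split marker some archivers write before the
    // first local file header even when the archive is not actually split.
    private static final byte[] PK00 = {0x50, 0x4B, 0x30, 0x30};

    // Returns a stream positioned at the real local file header, consuming a
    // leading PK00 marker if present and otherwise pushing the bytes back.
    public static InputStream skipSingleSegmentMarker(InputStream in) throws IOException {
        PushbackInputStream pushback = new PushbackInputStream(in, 4);
        byte[] header = new byte[4];
        int read = 0;
        while (read < 4) {
            int n = pushback.read(header, read, 4 - read);
            if (n < 0) {
                break; // stream shorter than 4 bytes: cannot be the marker
            }
            read += n;
        }
        boolean isMarker = read == 4
                && header[0] == PK00[0] && header[1] == PK00[1]
                && header[2] == PK00[2] && header[3] == PK00[3];
        if (!isMarker && read > 0) {
            pushback.unread(header, 0, read); // not a marker: restore the bytes
        }
        return pushback;
    }

    public static void main(String[] args) throws IOException {
        // PK00 marker followed by a normal local file header signature PK\3\4
        byte[] data = {0x50, 0x4B, 0x30, 0x30, 0x50, 0x4B, 0x03, 0x04};
        InputStream stream = skipSingleSegmentMarker(new ByteArrayInputStream(data));
        System.out.printf("%02X %02X%n", stream.read(), stream.read()); // prints "50 4B"
    }
}
```

A reader that skips this marker sees the entry at offset 0 again, which is 
consistent with the observation that the failing files carried an "offset" of 4.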



[GitHub] [nifi] exceptionfactory commented on a change in pull request #4918: NIFI-8347 Set ClassLoader for JettyWebSocketServer to resolve runtime exceptions

2021-03-24 Thread GitBox


exceptionfactory commented on a change in pull request #4918:
URL: https://github.com/apache/nifi/pull/4918#discussion_r600953627



##
File path: 
nifi-nar-bundles/nifi-websocket-bundle/nifi-websocket-services-jetty/src/main/java/org/apache/nifi/websocket/jetty/JettyWebSocketServer.java
##
@@ -213,6 +214,14 @@ public void configure(WebSocketServletFactory webSocketServletFactory) {
         webSocketServletFactory.setCreator(this);
     }
 
+    @Override
+    public void init() throws ServletException {
+        // Set Component ClassLoader as Thread Context ClassLoader so that jetty-server classes are available to WebSocketServletFactory.Loader
+        final ClassLoader componentClassLoader = getClass().getClassLoader();
+        Thread.currentThread().setContextClassLoader(componentClassLoader);
+        super.init();
+    }

Review comment:
   Thanks for the feedback @turcsanyip!  That makes sense, and provides a 
cleaner solution.  I confirmed that setting the `ClassLoader` on the 
`ServletContextHandler` works at runtime.








[GitHub] [nifi] turcsanyip commented on a change in pull request #4918: NIFI-8347 Set ClassLoader for JettyWebSocketServer to resolve runtime exceptions

2021-03-24 Thread GitBox


turcsanyip commented on a change in pull request #4918:
URL: https://github.com/apache/nifi/pull/4918#discussion_r600926356



##
File path: 
nifi-nar-bundles/nifi-websocket-bundle/nifi-websocket-services-jetty/src/main/java/org/apache/nifi/websocket/jetty/JettyWebSocketServer.java
##
@@ -213,6 +214,14 @@ public void configure(WebSocketServletFactory webSocketServletFactory) {
         webSocketServletFactory.setCreator(this);
     }
 
+    @Override
+    public void init() throws ServletException {
+        // Set Component ClassLoader as Thread Context ClassLoader so that jetty-server classes are available to WebSocketServletFactory.Loader
+        final ClassLoader componentClassLoader = getClass().getClassLoader();
+        Thread.currentThread().setContextClassLoader(componentClassLoader);
+        super.init();
+    }

Review comment:
   The class loader for the servlet worker threads can be set on 
`ServletContextHandler` when you build up the Jetty server / servlets.
   ```java
   final ServletContextHandler contextHandler = new ServletContextHandler();
   contextHandler.setClassLoader(Thread.currentThread().getContextClassLoader());
   ```
   (I used the instance class loader here because that is what was originally 
used in 1.12, but the component class loader can also be used.)
   
   I'm not sure which one is the right way to set the class loader, but using 
the `ServletContextHandler` API seems less of a "hack" to me.








[GitHub] [nifi] mattyb149 commented on a change in pull request #4933: MINIFI-422: Integrate MiNiFi Java codebase into NiFi

2021-03-24 Thread GitBox


mattyb149 commented on a change in pull request #4933:
URL: https://github.com/apache/nifi/pull/4933#discussion_r600918317



##
File path: minifi/README.md
##
@@ -0,0 +1,163 @@
+
+# Apache NiFi -  MiNiFi [![Build 
Status](https://travis-ci.org/apache/nifi-minifi.svg?branch=master)](https://travis-ci.org/apache/nifi-minifi)

Review comment:
   Most of NiFi's README applies to MiNiFi as well (you still have to build 
the whole thing, plus mailing lists and such are the same), so I just created a 
section in NiFi's README for the MiNiFi subproject that refers to the binaries, 
Docker images, etc.








[GitHub] [nifi] exceptionfactory commented on pull request #4869: NIFI-8288 Removed nifi-security-utils dependency from nifi-web-utils

2021-03-24 Thread GitBox


exceptionfactory commented on pull request #4869:
URL: https://github.com/apache/nifi/pull/4869#issuecomment-806208115


   > Things seem good to me, I tried it out with a 3 node secure cluster with 
secured external ZK. From what I can tell, when this change is rebased onto 
main, the .zip build size is 1.2GB which is about 100MB smaller than the main 
build's zip. Does that sound right? Not a bad improvement when combined with 
the other PR that reduced it around 300MB.
   
   Thanks for the additional testing, that size sounds correct given the 
updates.






[jira] [Commented] (NIFI-7846) Supporting os.arch aarch64 issues

2021-03-24 Thread Lance Kinley (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17308193#comment-17308193
 ] 

Lance Kinley commented on NIFI-7846:


A workaround is to compile a newer version of snappy-java and place it in the 
lib directory.

{{$ git clone https://github.com/xerial/snappy-java.git}}

{{$ cd snappy-java}}

{{$ make}}

Then copy the resulting jar file from the target directory to your NiFi lib 
directory.  NiFi starts correctly for me on AWS Graviton 2 now.

> Supporting os.arch aarch64 issues
> -
>
> Key: NIFI-7846
> URL: https://issues.apache.org/jira/browse/NIFI-7846
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Configuration Management
>Affects Versions: 1.12.0
> Environment: os.arch=aarch64 
>Reporter: libindas
>Priority: Major
>  Labels: AWS, features
>
> [https://snippi.com/s/bzd8w76]
>  





[GitHub] [nifi] thenatog commented on pull request #4869: NIFI-8288 Removed nifi-security-utils dependency from nifi-web-utils

2021-03-24 Thread GitBox


thenatog commented on pull request #4869:
URL: https://github.com/apache/nifi/pull/4869#issuecomment-806200033


   Things seem good to me, I tried it out with a 3 node secure cluster with 
secured external ZK. From what I can tell, when this change is rebased onto 
main, the .zip build size is 1.2GB which is about 100MB smaller than the main 
build's zip. Does that sound right? Not a bad improvement when combined with 
the other PR that reduced it around 300MB.






[jira] [Updated] (NIFI-8364) TestQuerySolr is unreliable and needs to be reworked

2021-03-24 Thread David Handermann (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Handermann updated NIFI-8364:
---
Status: Patch Available  (was: In Progress)

> TestQuerySolr is unreliable and needs to be reworked
> 
>
> Key: NIFI-8364
> URL: https://issues.apache.org/jira/browse/NIFI-8364
> Project: Apache NiFi
>  Issue Type: Task
>Reporter: Joe Witt
>Assignee: David Handermann
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> As found in Github CI this test is unreliable: 
> https://github.com/apache/nifi/runs/2185571184?check_suite_focus=true
> In looking at what failed we get 
> Error:  Tests run: 17, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 9.28 
> s <<< FAILURE! - in org.apache.nifi.processors.solr.TestQuerySolr
> Error:  testAllFacetCategories(org.apache.nifi.processors.solr.TestQuerySolr) 
>  Time elapsed: 0.385 s  <<< FAILURE!
> java.lang.AssertionError: expected:<6> but was:<10>
>   at 
> org.apache.nifi.processors.solr.TestQuerySolr.testAllFacetCategories(TestQuerySolr.java:191)
> Error:  Failures: 
> Error:TestQuerySolr.testAllFacetCategories:191 expected:<6> but was:<10>
> Error:  Tests run: 64, Failures: 1, Errors: 0, Skipped: 0
> In looking at the code we see a lot of concerns with the test.
> - It appears to set the datetime via a static call.  Not sure whether this 
> affects other tests or is isolated, but such a practice should be avoided.
> - The way it creates a client depends on the target dir, which seems 
> problematic.
> - It inverts expected vs actual in many places, including the failing line.
> - It might be a candidate for a system test more than a unit test.
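
The inverted expected/actual concern above matters because JUnit 4's 
`assertEquals(expected, actual)` takes the expected value first, and swapping 
the arguments produces a misleading failure message like the 
`expected:<6> but was:<10>` seen in the CI run. A minimal sketch, with a tiny 
stand-in helper (no JUnit dependency; the helper merely mimics JUnit's message 
format) showing why the ordering matters:

```java
public class AssertionOrderDemo {
    // Mimics JUnit 4's assertEquals(expected, actual) failure message format.
    static void assertEquals(int expected, int actual) {
        if (expected != actual) {
            throw new AssertionError("expected:<" + expected + "> but was:<" + actual + ">");
        }
    }

    public static void main(String[] args) {
        int expectedFacets = 6;  // what the test author intended
        int actualFacets = 10;   // what the code under test returned

        try {
            // Correct order: the message correctly blames the actual value.
            assertEquals(expectedFacets, actualFacets);
        } catch (AssertionError e) {
            System.out.println("correct order:  " + e.getMessage()); // expected:<6> but was:<10>
        }
        try {
            // Inverted order: the message now misreports what was expected.
            assertEquals(actualFacets, expectedFacets);
        } catch (AssertionError e) {
            System.out.println("inverted order: " + e.getMessage()); // expected:<10> but was:<6>
        }
    }
}
```

With inverted arguments, a reader of the CI log cannot tell whether the test's 
expectation or the production code is wrong, which is exactly the ambiguity in 
the failure quoted above.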





[GitHub] [nifi] exceptionfactory opened a new pull request #4935: NIFI-8364 Refactored TestQuerySolr and TestGetSolr to reuse SolrClient

2021-03-24 Thread GitBox


exceptionfactory opened a new pull request #4935:
URL: https://github.com/apache/nifi/pull/4935


    #### Description of PR
   
   NIFI-8364 Refactors `TestQuerySolr` and `TestGetSolr` to reuse the embedded 
`SolrClient` across test methods as much as possible to reduce overall runtime. 
 Additional improvements include the following:
   
   - Replaced Gson with Jackson for parsing JSON
   - Corrected assertion argument ordering
   - Simplified relative path determination for `EmbeddedSolrServerFactory`
   - Replaced `SimpleDateFormat` with `java.time.Instant` parsing and formatting
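   
   The last item can be illustrated with a minimal sketch (the date value is 
hypothetical; the point is that `Instant` parses ISO-8601 directly and that 
`DateTimeFormatter.ISO_INSTANT` is immutable and thread-safe, unlike 
`SimpleDateFormat`, which cannot be safely shared across test methods):

   ```java
   import java.time.Instant;
   import java.time.format.DateTimeFormatter;

   public class InstantFormattingDemo {
       public static void main(String[] args) {
           // ISO-8601 UTC timestamp; Instant.parse needs no pattern string.
           String isoDate = "2021-03-23T04:12:39Z";
           Instant parsed = Instant.parse(isoDate);

           // ISO_INSTANT is a shared, thread-safe formatter constant.
           String formatted = DateTimeFormatter.ISO_INSTANT.format(parsed);
           System.out.println(formatted); // 2021-03-23T04:12:39Z
       }
   }
   ```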
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [X] Is there a JIRA ticket associated with this PR? Is it referenced 
in the commit message?
   
   - [X] Does your PR title start with **NIFI-XXXX** where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [X] Has your PR been rebased against the latest commit within the target 
branch (typically `main`)?
   
   - [X] Is your initial contribution a single, squashed commit? _Additional 
commits in response to PR reviewer feedback should be made on this branch and 
pushed to allow change tracking. Do not `squash` or use `--force` when pushing 
to allow for clean monitoring of changes._
   
   ### For code changes:
   - [ ] Have you ensured that the full suite of tests is executed via `mvn 
-Pcontrib-check clean install` at the root `nifi` folder?
   - [X] Have you written or updated unit tests to verify your changes?
   - [ ] Have you verified that the full build is successful on JDK 8?
   - [ ] Have you verified that the full build is successful on JDK 11?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
   - [ ] If applicable, have you updated the `LICENSE` file, including the main 
`LICENSE` file under `nifi-assembly`?
   - [ ] If applicable, have you updated the `NOTICE` file, including the main 
`NOTICE` file found under `nifi-assembly`?
   - [ ] If adding new Properties, have you added `.displayName` in addition to 
.name (programmatic access) for each of the new properties?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check GitHub Actions CI for 
build issues and submit an update to your PR as soon as possible.
   






[GitHub] [nifi] joewitt commented on a change in pull request #4933: MINIFI-422: Integrate MiNiFi Java codebase into NiFi

2021-03-24 Thread GitBox


joewitt commented on a change in pull request #4933:
URL: https://github.com/apache/nifi/pull/4933#discussion_r600863121



##
File path: minifi/README.md
##
@@ -0,0 +1,163 @@
+
+# Apache NiFi -  MiNiFi [![Build 
Status](https://travis-ci.org/apache/nifi-minifi.svg?branch=master)](https://travis-ci.org/apache/nifi-minifi)

Review comment:
   seems to me this file should go away and the readme of nifi should be 
updated to reflect that one might want to create/use the convenience binary of 
minifi instead of nifi. 








[GitHub] [nifi] joewitt commented on a change in pull request #4933: MINIFI-422: Integrate MiNiFi Java codebase into NiFi

2021-03-24 Thread GitBox


joewitt commented on a change in pull request #4933:
URL: https://github.com/apache/nifi/pull/4933#discussion_r600862132



##
File path: minifi/NOTICE
##
@@ -0,0 +1,7 @@
+Apache NiFi - MiNiFi

Review comment:
   this file should go away completely.  This is about the source NOTICE, 
and thus whatever is in here that isn't in the NiFi/NOTICE should get added 
there and this file removed.








[GitHub] [nifi] joewitt commented on a change in pull request #4933: MINIFI-422: Integrate MiNiFi Java codebase into NiFi

2021-03-24 Thread GitBox


joewitt commented on a change in pull request #4933:
URL: https://github.com/apache/nifi/pull/4933#discussion_r600861959



##
File path: minifi/LICENSE
##
@@ -0,0 +1,202 @@
+
+ Apache License
+   Version 2.0, January 2004

Review comment:
   this file should go away completely.  This is about the source LICENSE, 
and thus whatever is in here that isn't in the NiFi/LICENSE should get added 
there and this file removed.








[GitHub] [nifi] joewitt commented on a change in pull request #4933: MINIFI-422: Integrate MiNiFi Java codebase into NiFi

2021-03-24 Thread GitBox


joewitt commented on a change in pull request #4933:
URL: https://github.com/apache/nifi/pull/4933#discussion_r600861574



##
File path: minifi/KEYS
##
@@ -0,0 +1,404 @@
+This file contains the PGP keys of various developers.

Review comment:
   this file should go away completely.  The KEYS file of the NiFi dir 
itself is sufficient.








[GitHub] [nifi] kevdoran commented on pull request #4933: MINIFI-422: Integrate MiNiFi Java codebase into NiFi

2021-03-24 Thread GitBox


kevdoran commented on pull request #4933:
URL: https://github.com/apache/nifi/pull/4933#issuecomment-806173518


   Nice, @mattyb149! This is cool, can't wait to give it a try. 
   
   One quick observation - I would remove the `minifi-c2` submodule. That was a 
PoC based on https://cwiki.apache.org/confluence/display/MINIFI/C2+Design. 
Ultimately, the C2 Design was adopted in the nifi-minifi-cpp project, but I 
think it has diverged from the last time this server effort was worked on, and 
it is now out of date. I don't think we want to incur the maintenance overhead 
of adding it to the nifi project repo at this time. Eventually, I would like to 
revive that effort for a C2 server implementation. We should discuss the right 
way to go about that in more detail, and we can always bring this over later if 
this is the right home for it.






[GitHub] [nifi] tpalfy opened a new pull request #4934: NIFI-8365 Fix JSON AbstractJsonRowRecordReader to handle deep CHOICE-typed records properly

2021-03-24 Thread GitBox


tpalfy opened a new pull request #4934:
URL: https://github.com/apache/nifi/pull/4934


   Changes the logic that selects the first compatible schema (one that may be 
missing fields compared to the real value): the reader now searches for a 
stricter match first and falls back to the existing first-compatible logic only 
if none is found.
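   
   The selection change described above can be sketched as follows. All names 
here are hypothetical; this is not NiFi's `RecordSchema`/CHOICE API, just the 
matching idea reduced to field-name sets:

   ```java
   import java.util.List;
   import java.util.Map;
   import java.util.Optional;
   import java.util.Set;

   public class ChoiceSchemaSelector {

       // Illustrative stand-in for a CHOICE sub-schema: a name and a field set.
       static final class CandidateSchema {
           final String name;
           final Set<String> fields;

           CandidateSchema(String name, Set<String> fields) {
               this.name = name;
               this.fields = fields;
           }
       }

       // Two-pass selection: prefer a schema whose field set exactly matches
       // the value's keys; only then fall back to the original lenient rule,
       // where a schema is "compatible" even if it is missing fields the real
       // value has.
       static Optional<CandidateSchema> select(Map<String, Object> value,
                                               List<CandidateSchema> candidates) {
           for (CandidateSchema candidate : candidates) {
               if (candidate.fields.equals(value.keySet())) {
                   return Optional.of(candidate); // strict match wins
               }
           }
           for (CandidateSchema candidate : candidates) {
               if (value.keySet().containsAll(candidate.fields)) {
                   return Optional.of(candidate); // lenient fallback: first compatible
               }
           }
           return Optional.empty();
       }

       public static void main(String[] args) {
           CandidateSchema shallow = new CandidateSchema("shallow", Set.of("id"));
           CandidateSchema deep = new CandidateSchema("deep", Set.of("id", "nested"));
           Map<String, Object> value = Map.of("id", 1, "nested", Map.of("x", 2));

           // The lenient rule alone would pick "shallow" (listed first) and
           // drop the nested data; the strict pass selects "deep" instead.
           System.out.println(select(value, List.of(shallow, deep)).get().name); // deep
       }
   }
   ```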
   
   ### For all changes:
   - [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
in the commit message?
   
   - [ ] Does your PR title start with **NIFI-XXXX** where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [ ] Has your PR been rebased against the latest commit within the target 
branch (typically `main`)?
   
   - [ ] Is your initial contribution a single, squashed commit? _Additional 
commits in response to PR reviewer feedback should be made on this branch and 
pushed to allow change tracking. Do not `squash` or use `--force` when pushing 
to allow for clean monitoring of changes._
   
   ### For code changes:
   - [ ] Have you ensured that the full suite of tests is executed via `mvn 
-Pcontrib-check clean install` at the root `nifi` folder?
   - [ ] Have you written or updated unit tests to verify your changes?
   - [ ] Have you verified that the full build is successful on JDK 8?
   - [ ] Have you verified that the full build is successful on JDK 11?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
   - [ ] If applicable, have you updated the `LICENSE` file, including the main 
`LICENSE` file under `nifi-assembly`?
   - [ ] If applicable, have you updated the `NOTICE` file, including the main 
`NOTICE` file found under `nifi-assembly`?
   - [ ] If adding new Properties, have you added `.displayName` in addition to 
.name (programmatic access) for each of the new properties?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check GitHub Actions CI for 
build issues and submit an update to your PR as soon as possible.
   






[GitHub] [nifi] mattyb149 opened a new pull request #4933: MINIFI-422: Integrate MiNiFi Java codebase into NiFi

2021-03-24 Thread GitBox


mattyb149 opened a new pull request #4933:
URL: https://github.com/apache/nifi/pull/4933


   Thank you for submitting a contribution to Apache NiFi.
   
   Please provide a short description of the PR here:
   
    #### Description of PR
   
   Copies the code from the nifi-minifi master branch to a module of nifi. 
Bumps the NiFi dependency version to the latest (1.14.0-SNAPSHOT), replaces 
some MiNiFi components with their NiFi counterparts, and moves the MiNiFi 
"common" libraries back into the individual NARs rather than dumping them all 
into lib/. The size is still large (188MB); a follow-up PR could do some 
additional refactoring to reduce the overall size of the MiNiFi binary. This PR 
is to produce a functioning MiNiFi binary from inside the NiFi codebase.
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [x] Is there a JIRA ticket associated with this PR? Is it referenced 
in the commit message?
   
   - [x] Does your PR title start with **NIFI-XXXX** where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [x] Has your PR been rebased against the latest commit within the target 
branch (typically `main`)?
   
   - [ ] Is your initial contribution a single, squashed commit? _Additional 
commits in response to PR reviewer feedback should be made on this branch and 
pushed to allow change tracking. Do not `squash` or use `--force` when pushing 
to allow for clean monitoring of changes._
   
   ### For code changes:
   - [ ] Have you ensured that the full suite of tests is executed via `mvn 
-Pcontrib-check clean install` at the root `nifi` folder?
   - [x] Have you written or updated unit tests to verify your changes?
   - [x] Have you verified that the full build is successful on JDK 8?
   - [ ] Have you verified that the full build is successful on JDK 11?
   - [x] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
   - [ ] If applicable, have you updated the `LICENSE` file, including the main 
`LICENSE` file under `nifi-assembly`?
   - [ ] If applicable, have you updated the `NOTICE` file, including the main 
`NOTICE` file found under `nifi-assembly`?
   - [ ] If adding new Properties, have you added `.displayName` in addition to 
`.name` (programmatic access) for each of the new properties?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check GitHub Actions CI for 
build issues and submit an update to your PR as soon as possible.
   






[jira] [Updated] (NIFI-8365) JSON record reader mishandles deep CHOICE types

2021-03-24 Thread Tamas Palfy (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Palfy updated NIFI-8365:
--
Description: 
When trying to find the correct schema for a given record, the 
AbstractJsonRowRecordReader may come up with the wrong one.
For example:

Suppose the following record:

{code:json}
{
  "dataCollection":[
{
  "record": {
"integer": 1,
"boolean": true
  }
},
{
  "record": {
"integer": 2,
"string": "stringValue2"
  }
}
  ]
}
{code}
Even if the schema is correctly set (which may not be the case, as schema 
inference itself has a similar issue),
the second record
{code:json}
{
  "record": {
"integer": 2,
"string": "stringValue2"
  }
}
{code}
will be assigned the schema of the first (["integer" : "INT", "boolean" : 
"BOOLEAN"] instead of ["integer" : "INT", "string" : "STRING"]).

This will cause the fields that are not present in the schema (in this case 
"string") to be omitted when writing it out.
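The mis-selection can be modeled with a minimal sketch (class and method names below are invented for illustration, not NiFi's actual code): a first-match strategy that accepts any candidate schema sharing a field with the record will pick the {"integer", "boolean"} schema for both records, because both contain "integer".

```java
import java.util.*;

// Hypothetical model of first-match schema selection for a CHOICE type.
// Names (SchemaChoiceDemo, pickSchema) are illustrative, not NiFi's API.
public class SchemaChoiceDemo {

    // Returns the first candidate schema that shares any field with the record,
    // mimicking the mis-selection described above.
    static List<String> pickSchema(Set<String> recordFields, List<List<String>> candidates) {
        for (List<String> candidate : candidates) {
            for (String field : candidate) {
                if (recordFields.contains(field)) {
                    return candidate; // first partial match wins, even if wrong
                }
            }
        }
        return Collections.emptyList();
    }

    public static void main(String[] args) {
        List<String> first = Arrays.asList("integer", "boolean");
        List<String> second = Arrays.asList("integer", "string");
        List<List<String>> candidates = Arrays.asList(first, second);

        // Second record: {"integer": 2, "string": "stringValue2"}
        Set<String> record = new HashSet<>(Arrays.asList("integer", "string"));

        // Both records share "integer" with the first schema, so the first
        // schema is chosen and "string" is later dropped on write.
        System.out.println(pickSchema(record, candidates)); // [integer, boolean]
    }
}
```

A robust selection would instead score every candidate (for example by counting matched and unmatched fields) and pick the best match rather than the first.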


  was:
When trying to find the correct schema for a given record, the 
AbstractJsonRowRecordReader may come up with the wrong one.
For example:

Suppose the following record:

{code:json}
{
  "dataCollection":[
{
  "record": {
"integer": 1,
"boolean": true
  }
},
{
  "record": {
"integer": 2,
"string": "stringValue2"
  }
}
  ]
}
{code}
Even if the schema is correctly set (which may not be the case, as schema 
inference itself has a similar issue),
the second record
{code:json}
{
  "record": {
"integer": 2,
"string": "stringValue2"
  }
}
{code}
will be assigned the schema of the first.

This will cause the fields that are not present in the schema to be omitted 
when writing it out.



> JSON record reader mishandles deep CHOICE types
> ---
>
> Key: NIFI-8365
> URL: https://issues.apache.org/jira/browse/NIFI-8365
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Tamas Palfy
>Priority: Major
>
> When trying to find the correct schema for a given record, the 
> AbstractJsonRowRecordReader may come up with the wrong one.
> For example:
> Suppose the following record:
> {code:json}
> {
>   "dataCollection":[
> {
>   "record": {
> "integer": 1,
> "boolean": true
>   }
> },
> {
>   "record": {
> "integer": 2,
> "string": "stringValue2"
>   }
> }
>   ]
> }
> {code}
> Even if the schema is correctly set (which may not be the case, as schema 
> inference itself has a similar issue),
> the second record
> {code:json}
> {
>   "record": {
> "integer": 2,
> "string": "stringValue2"
>   }
> }
> {code}
> will be assigned the schema of the first (["integer" : "INT", "boolean" : 
> "BOOLEAN"] instead of ["integer" : "INT", "string" : "STRING"]).
> This will cause the fields that are not present in the schema (in this case 
> "string") to be omitted when writing it out.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-8365) JSON record reader mishandles deep CHOICE types

2021-03-24 Thread Tamas Palfy (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Palfy updated NIFI-8365:
--
Description: 
When trying to find the correct schema for a given record, the 
AbstractJsonRowRecordReader may come up with the wrong one.
For example:

Suppose the following record:

{code:json}
{
  "dataCollection":[
{
  "record": {
"integer": 1,
"boolean": true
  }
},
{
  "record": {
"integer": 2,
"string": "stringValue2"
  }
}
  ]
}
{code}
Even if the schema is correctly set (which may not be the case, as schema 
inference itself has a similar issue),
the second record
{code:json}
{
  "record": {
"integer": 2,
"string": "stringValue2"
  }
}
{code}
will be assigned the schema of the first.

This will cause the fields that are not present in the schema to be omitted 
when writing it out.


  was:
When trying to find the correct schema for a given record, the 
AbstractJsonRowRecordReader may come up with the wrong one.
For example:

Suppose the following record:

{code:json}
{
  "dataCollection":[
{
  "record": {
"integer": 1,
"boolean": true
  }
},
{
  "record": {
"integer": 2,
"string": "stringValue2"
  }
}
  ]
}
{code}
Even if the schema is correctly set (which may not be the case, as schema 
inference itself has a similar issue),
the second record
{code:json}
{
  "record": {
"integer": 2,
"string": "stringValue2"
  }
}
{code}
will be assigned the schema of the first.

This will cause the fields that are not present in the schema to be omitted 
when writing it out.



> JSON record reader mishandles deep CHOICE types
> ---
>
> Key: NIFI-8365
> URL: https://issues.apache.org/jira/browse/NIFI-8365
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Tamas Palfy
>Priority: Major
>
> When trying to find the correct schema for a given record, the 
> AbstractJsonRowRecordReader may come up with the wrong one.
> For example:
> Suppose the following record:
> {code:json}
> {
>   "dataCollection":[
> {
>   "record": {
> "integer": 1,
> "boolean": true
>   }
> },
> {
>   "record": {
> "integer": 2,
> "string": "stringValue2"
>   }
> }
>   ]
> }
> {code}
> Even if the schema is correctly set (which may not be the case, as schema 
> inference itself has a similar issue),
> the second record
> {code:json}
> {
>   "record": {
> "integer": 2,
> "string": "stringValue2"
>   }
> }
> {code}
> will be assigned the schema of the first.
> This will cause the fields that are not present in the schema to be omitted 
> when writing it out.





[jira] [Created] (NIFI-8365) JSON record reader mishandles deep CHOICE types

2021-03-24 Thread Tamas Palfy (Jira)
Tamas Palfy created NIFI-8365:
-

 Summary: JSON record reader mishandles deep CHOICE types
 Key: NIFI-8365
 URL: https://issues.apache.org/jira/browse/NIFI-8365
 Project: Apache NiFi
  Issue Type: Bug
Reporter: Tamas Palfy


When trying to find the correct schema for a given record, the 
AbstractJsonRowRecordReader may come up with the wrong one.
For example:

Suppose the following record:

{code:json}
{
  "dataCollection":[
{
  "record": {
"integer": 1,
"boolean": true
  }
},
{
  "record": {
"integer": 2,
"string": "stringValue2"
  }
}
  ]
}
{code}
Even if the schema is correctly set (which may not be the case, as schema 
inference itself has a similar issue),
the second record
{code:json}
{
  "record": {
"integer": 2,
"string": "stringValue2"
  }
}
{code}
will be assigned the schema of the first.

This will cause the fields that are not present in the schema to be omitted 
when writing it out.






[jira] [Assigned] (NIFI-8364) TestQuerySolr is unreliable and needs to be reworked

2021-03-24 Thread David Handermann (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Handermann reassigned NIFI-8364:
--

Assignee: David Handermann

> TestQuerySolr is unreliable and needs to be reworked
> 
>
> Key: NIFI-8364
> URL: https://issues.apache.org/jira/browse/NIFI-8364
> Project: Apache NiFi
>  Issue Type: Task
>Reporter: Joe Witt
>Assignee: David Handermann
>Priority: Major
>
> As found in Github CI this test is unreliable: 
> https://github.com/apache/nifi/runs/2185571184?check_suite_focus=true
> In looking at what failed we get 
> Error:  Tests run: 17, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 9.28 
> s <<< FAILURE! - in org.apache.nifi.processors.solr.TestQuerySolr
> Error:  testAllFacetCategories(org.apache.nifi.processors.solr.TestQuerySolr) 
>  Time elapsed: 0.385 s  <<< FAILURE!
> java.lang.AssertionError: expected:<6> but was:<10>
>   at 
> org.apache.nifi.processors.solr.TestQuerySolr.testAllFacetCategories(TestQuerySolr.java:191)
> Error:  Failures: 
> Error:TestQuerySolr.testAllFacetCategories:191 expected:<6> but was:<10>
> Error:  Tests run: 64, Failures: 1, Errors: 0, Skipped: 0
> In looking at the code we see a lot of concerns with the test.
> - It appears to set the datetime as a static call. Not sure if this is great 
> for other tests or whether it is isolated, but such a practice should be avoided.
> - The way it creates a client depends on the target dir, which seems problematic.
> - It inverts expected vs. actual in many places, including the failing line.
> - It might be a better candidate for a system test than a unit test.





[jira] [Created] (NIFI-8364) TestQuerySolr is unreliable and needs to be reworked

2021-03-24 Thread Joe Witt (Jira)
Joe Witt created NIFI-8364:
--

 Summary: TestQuerySolr is unreliable and needs to be reworked
 Key: NIFI-8364
 URL: https://issues.apache.org/jira/browse/NIFI-8364
 Project: Apache NiFi
  Issue Type: Task
Reporter: Joe Witt


As found in Github CI this test is unreliable: 
https://github.com/apache/nifi/runs/2185571184?check_suite_focus=true

In looking at what failed we get 

Error:  Tests run: 17, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 9.28 s 
<<< FAILURE! - in org.apache.nifi.processors.solr.TestQuerySolr
Error:  testAllFacetCategories(org.apache.nifi.processors.solr.TestQuerySolr)  
Time elapsed: 0.385 s  <<< FAILURE!
java.lang.AssertionError: expected:<6> but was:<10>
at 
org.apache.nifi.processors.solr.TestQuerySolr.testAllFacetCategories(TestQuerySolr.java:191)

Error:  Failures: 
Error:TestQuerySolr.testAllFacetCategories:191 expected:<6> but was:<10>
Error:  Tests run: 64, Failures: 1, Errors: 0, Skipped: 0

In looking at the code we see a lot of concerns with the test.
- It appears to set the datetime as a static call. Not sure if this is great 
for other tests or whether it is isolated, but such a practice should be avoided.
- The way it creates a client depends on the target dir, which seems problematic.
- It inverts expected vs. actual in many places, including the failing line.
- It might be a better candidate for a system test than a unit test.
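The inverted-arguments concern matters because JUnit's assertEquals(expected, actual) builds its failure message from the argument order. A minimal stand-in (not JUnit's real implementation) shows how swapping the arguments blames the wrong side:

```java
// Minimal stand-in for JUnit's assertEquals failure message, to show why
// argument order matters; this is not JUnit's actual implementation.
public class AssertOrderDemo {

    static String failureMessage(Object expected, Object actual) {
        return "expected:<" + expected + "> but was:<" + actual + ">";
    }

    public static void main(String[] args) {
        int expectedFacets = 10;   // what the test author intended
        int actualFacets = 6;      // what the code produced
        // Correct order reads naturally:
        System.out.println(failureMessage(expectedFacets, actualFacets));
        // Inverted order, as in the failing line, reports the produced value
        // as "expected" and the intended value as "was":
        System.out.println(failureMessage(actualFacets, expectedFacets));
    }
}
```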







[jira] [Created] (NIFI-8363) Implement Default Single User LoginIdentityProvider

2021-03-24 Thread David Handermann (Jira)
David Handermann created NIFI-8363:
--

 Summary: Implement Default Single User LoginIdentityProvider
 Key: NIFI-8363
 URL: https://issues.apache.org/jira/browse/NIFI-8363
 Project: Apache NiFi
  Issue Type: Sub-task
  Components: Security
Reporter: David Handermann
Assignee: David Handermann


Supporting default authenticated access to NiFi requires implementing a new 
{{LoginIdentityProvider}} and related classes to generate and store a random 
default username and password.

The provider implementation should have the following features:
 * Support one username and password account record
 * Use java.util.UUID to generate a username when no existing account is found
 * Use java.security.SecureRandom to generate a password when no existing 
account is found
 * Store the generated password using a secure hashing algorithm
 * Write the generated username and password to the log once after initial 
generation

This provider is intended for initial configuration of a standalone NiFi 
system.  Adjustment of default NiFi properties should be implemented separately 
following implementation of default HTTPS access.
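A minimal sketch of the generation steps listed above (class and method names are invented, and SHA-256 via MessageDigest stands in for the secure password-hashing algorithm the real provider would choose, which should be a dedicated scheme such as bcrypt):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.security.SecureRandom;
import java.util.Base64;
import java.util.UUID;

// Illustrative sketch only: names are invented and SHA-256 is a stand-in
// for a proper password-hashing scheme such as bcrypt.
public class SingleUserCredentialsDemo {

    // Use java.util.UUID to generate a username when no account exists.
    static String generateUsername() {
        return UUID.randomUUID().toString();
    }

    // Use java.security.SecureRandom to generate a random password.
    static String generatePassword() {
        byte[] bytes = new byte[32];
        new SecureRandom().nextBytes(bytes);
        return Base64.getEncoder().encodeToString(bytes);
    }

    // Store only a hash of the generated password, never the plaintext.
    static String hashPassword(String password) {
        try {
            MessageDigest digest = MessageDigest.getInstance("SHA-256");
            return Base64.getEncoder().encodeToString(
                    digest.digest(password.getBytes(StandardCharsets.UTF_8)));
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e); // SHA-256 is always available
        }
    }

    public static void main(String[] args) {
        String username = generateUsername();
        String password = generatePassword();
        // A real provider would log these once after generation and persist
        // only the password hash.
        System.out.println("Generated username: " + username);
        System.out.println("Password hash: " + hashPassword(password));
    }
}
```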





[jira] [Commented] (NIFI-8339) Input Threads get Interrupted and stuck indefinitely

2021-03-24 Thread Mark Payne (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17308101#comment-17308101
 ] 

Mark Payne commented on NIFI-8339:
--

[~renewei]  Unfortunately I've been unable to replicate any issues here. Any 
chance that you see the message "Failed to process data due to" in your logs? I 
can see how, if we got to that point in the code, we might have an issue, but I 
can't find a way to get to that point in the code.

> Input Threads get Interrupted and stuck indefinitely
> 
>
> Key: NIFI-8339
> URL: https://issues.apache.org/jira/browse/NIFI-8339
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.13.0
>Reporter: Rene Weidlinger
>Priority: Major
> Attachments: firefox_Yf6NUeQe5X.png, td1.txt
>
>
> After some seconds we see this stack trace in nifi on one of our inputs:
> {noformat}
> 2021-03-18 07:33:34,703 ERROR [NiFi Web Server-18] 
> o.a.nifi.web.api.ApplicationResource Unexpected exception occurred. 
> portId=c4d93fb6-5e5b-1382-b39b-66fbc04660f0
> 2021-03-18 07:33:34,703 ERROR [NiFi Web Server-18] 
> o.a.nifi.web.api.ApplicationResource Exception detail:
> org.apache.nifi.processor.exception.ProcessException: 
> org.apache.nifi.processor.exception.ProcessException: Interrupted while 
> waiting for site-to-site request to be serviced
> at 
> org.apache.nifi.remote.StandardPublicPort.receiveFlowFiles(StandardPublicPort.java:588)
> at 
> org.apache.nifi.web.api.DataTransferResource.receiveFlowFiles(DataTransferResource.java:277)
> at sun.reflect.GeneratedMethodAccessor198.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.glassfish.jersey.server.model.internal.ResourceMethodInvocationHandlerFactory.lambda$static$0(ResourceMethodInvocationHandlerFactory.java:76)
> at 
> org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher$1.run(AbstractJavaResourceMethodDispatcher.java:148)
> at 
> org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.invoke(AbstractJavaResourceMethodDispatcher.java:191)
> at 
> org.glassfish.jersey.server.model.internal.JavaResourceMethodDispatcherProvider$ResponseOutInvoker.doDispatch(JavaResourceMethodDispatcherProvider.java:200)
> at 
> org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.dispatch(AbstractJavaResourceMethodDispatcher.java:103)
> at 
> org.glassfish.jersey.server.model.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:493)
> at 
> org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:415)
> at 
> org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:104)
> at 
> org.glassfish.jersey.server.ServerRuntime$1.run(ServerRuntime.java:277)
> at org.glassfish.jersey.internal.Errors$1.call(Errors.java:272)
> at org.glassfish.jersey.internal.Errors$1.call(Errors.java:268)
> at org.glassfish.jersey.internal.Errors.process(Errors.java:316)
> at org.glassfish.jersey.internal.Errors.process(Errors.java:298)
> at org.glassfish.jersey.internal.Errors.process(Errors.java:268)
> at 
> org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:289)
> at 
> org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:256)
> at 
> org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:703)
> at 
> org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:416)
> at 
> org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:370)
> at 
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:389)
> at 
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:342)
> at 
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:229)
> at 
> org.eclipse.jetty.servlet.ServletHolder$NotAsync.service(ServletHolder.java:1452)
> at 
> org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:791)
> at 
> org.eclipse.jetty.servlet.ServletHandler$ChainEnd.doFilter(ServletHandler.java:1626)
> at 
> org.apache.nifi.web.filter.RequestLogger.doFilter(RequestLogger.java:66)
> at 
> org.eclipse.jetty.servlet.FilterHolder.doFilter(FilterHolder.java:193)
> at 
> org.eclipse.jetty.servlet.ServletHandler$Chain.doFilter(ServletHandler.java:1601)
> at 
> org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:317)
> at 
> 

[jira] [Resolved] (NIFI-8356) Add unit test for LongRunningTaskMonitor

2021-03-24 Thread David Handermann (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Handermann resolved NIFI-8356.

Resolution: Fixed

Closing this issue after further discussion, additional adjustments can be 
addressed in a separate issue.

> Add unit test for LongRunningTaskMonitor
> 
>
> Key: NIFI-8356
> URL: https://issues.apache.org/jira/browse/NIFI-8356
> Project: Apache NiFi
>  Issue Type: Test
>  Components: Core Framework
>Reporter: Peter Turcsanyi
>Assignee: Peter Turcsanyi
>Priority: Major
> Fix For: 1.14.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>






[GitHub] [nifi] phrocker edited a comment on pull request #4931: NIFI-8283 Value handling in ScanAccumulo processor

2021-03-24 Thread GitBox


phrocker edited a comment on pull request #4931:
URL: https://github.com/apache/nifi/pull/4931#issuecomment-805981714


   Hi @timeabarna  May want to add this as an option that is true by default. 
Thanks for the change.
   
   There are cases where people do not want values and want to only get the 
keys, so an option can facilitate those cases as well. 
   
   Apologies for forgetting about that change.






[jira] [Commented] (NIFI-8283) Value handling in ScanAccumulo processor

2021-03-24 Thread Marc Parisi (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17307995#comment-17307995
 ] 

Marc Parisi commented on NIFI-8283:
---

Thanks for this patch! My apologies, this fell off my radar, [~kzsihovszki], but 
I wrote a follow-on change in my personal branch to add this as an option and 
forgot to commit.

 

In my use cases I don't use values, but of course they should have been 
returned. In my branch I added an option that returns the values by default but, 
if chosen, disables entering them into the schema. Would it be possible to add 
that in this PR? Thanks!

> Value handling in ScanAccumulo processor
> 
>
> Key: NIFI-8283
> URL: https://issues.apache.org/jira/browse/NIFI-8283
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.11.0, 1.11.1, 1.11.2, 1.11.3, 1.11.4
>Reporter: Zsihovszki Krisztina
>Priority: Major
> Attachments: Screenshot 2021-03-02 at 12.53.58.png, scanned.avro
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> ScanAccumuloRecord processor does not return the value, just the keys.
> If I add the record with the Accumulo Java client or via accumulo shell 
> command
> ("insert row1 cf1 cq1 value1"), the record looks like this in the table:
> 2021-03-02 11:29:22,765 [Shell.audit] INFO : root@accumulo nifiTable> scan
> row0 cf0:cq0 []*value0*
> The "value0" is not returned when I run ScanAccumulo processor.
> The scanned record (converted from avro) looks like this:
> {"row": "row0", "columnFamily": "cf0", "columnQualifier": "cq0", 
> "columnVisibility": "", "timestamp": 1614684537363}
> The avro file does not seem to contain the values either (please see the 
> scanned avro file attached).
> I don't see the value being inserted in the 
> [code|https://github.com/apache/nifi/blob/f9ae3bb9c970cd8d6d1d9e10f07cab9bdb66baa9/nifi-nar-bundles/nifi-accumulo-bundle/nifi-accumulo-processors/src/main/java/org/apache/nifi/accumulo/processors/ScanAccumulo.java#L274]
>  either.
> I did not specify any schema when using the ConvertAvroToJSON processor; by 
> default it uses the schema defined in the avro file. 
>  Please find the ScanAccumulo processor settings in the attachment.





[GitHub] [nifi] phrocker commented on pull request #4931: NIFI-8283 Value handling in ScanAccumulo processor

2021-03-24 Thread GitBox


phrocker commented on pull request #4931:
URL: https://github.com/apache/nifi/pull/4931#issuecomment-805981714


   Hi @timeabarna  May want to add this as an option that is true by default. 
Thanks for the change.
   
   There are cases where people do not want values and want to only get the 
keys. 






[jira] [Resolved] (NIFI-8354) ExecuteStreamCommand processor doesn't delete the temp file if the process start failed

2021-03-24 Thread Otto Fowler (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Otto Fowler resolved NIFI-8354.
---
Resolution: Fixed

> ExecuteStreamCommand processor doesn't delete the temp file if the process 
> start failed
> ---
>
> Key: NIFI-8354
> URL: https://issues.apache.org/jira/browse/NIFI-8354
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Hsin-Ying Lee
>Assignee: Hsin-Ying Lee
>Priority: Major
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> ExecuteStreamCommand processor doesn't delete the temp file if the process 
> start failed
> The i-node of /tmp will be exhausted, and the service will run into an 
> unexpected situation
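The cleanup pattern described here can be sketched as follows (names are illustrative, not the processor's actual code): delete the temp file in the failure path when `ProcessBuilder.start()` throws, so repeated start failures do not leak one file per attempt and exhaust the i-nodes of /tmp.

```java
import java.io.File;
import java.io.IOException;

// Sketch of the cleanup pattern described above: delete the temp file when
// the process fails to start. Names are illustrative, not the processor's code.
public class TempFileCleanupDemo {

    static Process startWithCleanup(ProcessBuilder builder, File tempFile) throws IOException {
        try {
            return builder.start();
        } catch (IOException e) {
            // Without this, each failed start leaks one temp file (and i-node).
            if (!tempFile.delete()) {
                System.err.println("Failed to delete temp file: " + tempFile);
            }
            throw e;
        }
    }

    public static void main(String[] args) throws IOException {
        File tempFile = File.createTempFile("command-output", ".tmp");
        try {
            // A command that does not exist forces the start-failure path.
            startWithCleanup(new ProcessBuilder("no-such-command-xyz"), tempFile);
        } catch (IOException expected) {
            System.out.println("Temp file removed: " + !tempFile.exists());
        }
    }
}
```

Logging the failed deletion, as the second commit in the fix does, keeps the failure visible without masking the original exception.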





[jira] [Commented] (NIFI-8354) ExecuteStreamCommand processor doesn't delete the temp file if the process start failed

2021-03-24 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17307981#comment-17307981
 ] 

ASF subversion and git services commented on NIFI-8354:
---

Commit 0f28702b475e5cf00ccb5b7e01b68119e645ad7d in nifi's branch 
refs/heads/main from Hsin-Ying Lee
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=0f28702 ]

NIFI-8354 ExecuteStreamCommand processor doesn't delete the temp file… (#4923)

* NIFI-8354 ExecuteStreamCommand processor doesn't delete the temp file if the 
process start failed
* NIFI-8354 Record the log when delete file failed


This closes #4923

Signed-off-by: Otto Fowler 

> ExecuteStreamCommand processor doesn't delete the temp file if the process 
> start failed
> ---
>
> Key: NIFI-8354
> URL: https://issues.apache.org/jira/browse/NIFI-8354
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Hsin-Ying Lee
>Assignee: Hsin-Ying Lee
>Priority: Major
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> ExecuteStreamCommand processor doesn't delete the temp file if the process 
> start failed
> The i-node of /tmp will be exhausted, and the service will run into an 
> unexpected situation





[jira] [Commented] (NIFI-8354) ExecuteStreamCommand processor doesn't delete the temp file if the process start failed

2021-03-24 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17307982#comment-17307982
 ] 

ASF subversion and git services commented on NIFI-8354:
---

Commit 0f28702b475e5cf00ccb5b7e01b68119e645ad7d in nifi's branch 
refs/heads/main from Hsin-Ying Lee
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=0f28702 ]

NIFI-8354 ExecuteStreamCommand processor doesn't delete the temp file… (#4923)

* NIFI-8354 ExecuteStreamCommand processor doesn't delete the temp file if the 
process start failed
* NIFI-8354 Record the log when delete file failed


This closes #4923

Signed-off-by: Otto Fowler 

> ExecuteStreamCommand processor doesn't delete the temp file if the process 
> start failed
> ---
>
> Key: NIFI-8354
> URL: https://issues.apache.org/jira/browse/NIFI-8354
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Hsin-Ying Lee
>Assignee: Hsin-Ying Lee
>Priority: Major
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> ExecuteStreamCommand processor doesn't delete the temp file if the process 
> start failed
> The i-node of /tmp will be exhausted, and the service will run into an 
> unexpected situation





[jira] [Commented] (NIFI-8354) ExecuteStreamCommand processor doesn't delete the temp file if the process start failed

2021-03-24 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17307980#comment-17307980
 ] 

ASF subversion and git services commented on NIFI-8354:
---

Commit 0f28702b475e5cf00ccb5b7e01b68119e645ad7d in nifi's branch 
refs/heads/main from Hsin-Ying Lee
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=0f28702 ]

NIFI-8354 ExecuteStreamCommand processor doesn't delete the temp file… (#4923)

* NIFI-8354 ExecuteStreamCommand processor doesn't delete the temp file if the 
process start failed
* NIFI-8354 Record the log when delete file failed


This closes #4923

Signed-off-by: Otto Fowler 

> ExecuteStreamCommand processor doesn't delete the temp file if the process 
> start failed
> ---
>
> Key: NIFI-8354
> URL: https://issues.apache.org/jira/browse/NIFI-8354
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Hsin-Ying Lee
>Assignee: Hsin-Ying Lee
>Priority: Major
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> ExecuteStreamCommand processor doesn't delete the temp file if the process 
> start failed
> The i-node of /tmp will be exhausted, and the service will run into an 
> unexpected situation





[GitHub] [nifi] ottobackwards merged pull request #4923: NIFI-8354 ExecuteStreamCommand processor doesn't delete the temp file…

2021-03-24 Thread GitBox


ottobackwards merged pull request #4923:
URL: https://github.com/apache/nifi/pull/4923


   






[GitHub] [nifi] timeabarna commented on a change in pull request #4931: NIFI-8283 Value handling in ScanAccumulo processor

2021-03-24 Thread GitBox


timeabarna commented on a change in pull request #4931:
URL: https://github.com/apache/nifi/pull/4931#discussion_r600644024



##
File path: 
nifi-nar-bundles/nifi-accumulo-bundle/nifi-accumulo-processors/src/main/java/org/apache/nifi/accumulo/processors/ScanAccumulo.java
##
@@ -270,8 +281,9 @@ public void process(final InputStream in, final OutputStream out) throws IOException
 data.put("columnQualifier", key.getColumnQualifier().toString());
 data.put("columnVisibility", key.getColumnVisibility().toString());
 data.put("timestamp", key.getTimestamp());
+data.put("value", kv.getValue().toString());

Review comment:
   To be on the safe side I've added the null check
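A guard of roughly this shape would skip the "value" entry when no value is present (a hypothetical sketch; the actual change in the PR may differ, and Accumulo's Key/Value types are replaced with plain objects here):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical shape of the null-guarded value handling; Accumulo's Key/Value
// access is faked with plain objects since those classes are not needed here.
public class NullCheckDemo {

    static void putValue(Map<String, Object> data, Object value) {
        // Only add the "value" entry when a value is actually present.
        if (value != null) {
            data.put("value", value.toString());
        }
    }

    public static void main(String[] args) {
        Map<String, Object> data = new HashMap<>();
        putValue(data, null);
        System.out.println(data.containsKey("value")); // false
        putValue(data, "value0");
        System.out.println(data.get("value"));         // value0
    }
}
```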








[GitHub] [nifi-minifi-cpp] fgerlits commented on a change in pull request #1037: MINIFICPP-1520 Fix PublishKafka properties to support expression language

2021-03-24 Thread GitBox


fgerlits commented on a change in pull request #1037:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1037#discussion_r600635867



##
File path: PROCESSORS.md
##
@@ -971,13 +971,13 @@ In the list below, the names of required properties appear in bold. Any other pr
 |**Known Brokers**|||A comma-separated list of known Kafka Brokers in the format `<host>:<port>`.<br/>**Supports Expression Language: true**|
 |Max Flow Segment Size|0 B||Maximum flow content payload segment size for the kafka record. 0 B means unlimited.|
 |Max Request Size|||Maximum Kafka protocol request message size|
-|Message Key Field|||The name of a field in the Input Records that should be used as the Key for the Kafka message.<br/>Supports Expression Language: true (will be evaluated using flow file attributes)|
-|Message Timeout|30 sec||The total time sending a message could take.<br/>**Supports Expression Language: true**|
+|Kafka Key|||The key to use for the message. If not specified, the UUID of the flow file is used as the message key.<br/>**Supports Expression Language: true**|

Review comment:
   fixed








[jira] [Reopened] (NIFI-8356) Add unit test for LongRunningTaskMonitor

2021-03-24 Thread David Handermann (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Handermann reopened NIFI-8356:


Based on additional review, it would be helpful to adjust the log verification 
assertions to avoid expecting exact log statements.

> Add unit test for LongRunningTaskMonitor
> 
>
> Key: NIFI-8356
> URL: https://issues.apache.org/jira/browse/NIFI-8356
> Project: Apache NiFi
>  Issue Type: Test
>  Components: Core Framework
>Reporter: Peter Turcsanyi
>Assignee: Peter Turcsanyi
>Priority: Major
> Fix For: 1.14.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-8356) Add unit test for LongRunningTaskMonitor

2021-03-24 Thread David Handermann (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Handermann updated NIFI-8356:
---
Fix Version/s: 1.14.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Add unit test for LongRunningTaskMonitor
> 
>
> Key: NIFI-8356
> URL: https://issues.apache.org/jira/browse/NIFI-8356
> Project: Apache NiFi
>  Issue Type: Test
>  Components: Core Framework
>Reporter: Peter Turcsanyi
>Assignee: Peter Turcsanyi
>Priority: Major
> Fix For: 1.14.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>






[jira] [Commented] (NIFI-8356) Add unit test for LongRunningTaskMonitor

2021-03-24 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17307932#comment-17307932
 ] 

ASF subversion and git services commented on NIFI-8356:
---

Commit 4473d23ccdd3bda782b20be84b48a19ebecc5564 in nifi's branch 
refs/heads/main from Peter Turcsanyi
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=4473d23 ]

NIFI-8356: Add unit test for LongRunningTaskMonitor.

This closes #4925

Signed-off-by: David Handermann 


> Add unit test for LongRunningTaskMonitor
> 
>
> Key: NIFI-8356
> URL: https://issues.apache.org/jira/browse/NIFI-8356
> Project: Apache NiFi
>  Issue Type: Test
>  Components: Core Framework
>Reporter: Peter Turcsanyi
>Assignee: Peter Turcsanyi
>Priority: Major
>  Time Spent: 40m
>  Remaining Estimate: 0h
>






[GitHub] [nifi] asfgit closed pull request #4925: NIFI-8356: Add unit test for LongRunningTaskMonitor.

2021-03-24 Thread GitBox


asfgit closed pull request #4925:
URL: https://github.com/apache/nifi/pull/4925


   






[jira] [Resolved] (NIFI-8143) GetFile: Keep Source File should be true by default

2021-03-24 Thread Joe Witt (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Witt resolved NIFI-8143.

Resolution: Won't Do

> GetFile: Keep Source File should be true by default
> ---
>
> Key: NIFI-8143
> URL: https://issues.apache.org/jira/browse/NIFI-8143
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: DEOM Damien
>Priority: Critical
>
> GetFile should not remove files by default
> It is very dangerous, even for experienced users (we basically spent 4 
> hours restoring a cluster because of this)
>  





[GitHub] [nifi] markap14 commented on a change in pull request #4931: NIFI-8283 Value handling in ScanAccumulo processor

2021-03-24 Thread GitBox


markap14 commented on a change in pull request #4931:
URL: https://github.com/apache/nifi/pull/4931#discussion_r600590158



##
File path: 
nifi-nar-bundles/nifi-accumulo-bundle/nifi-accumulo-processors/src/main/java/org/apache/nifi/accumulo/processors/ScanAccumulo.java
##
@@ -270,8 +281,9 @@ public void process(final InputStream in, final 
OutputStream out) throws IOExcep
 data.put("columnQualifier", 
key.getColumnQualifier().toString());
 data.put("columnVisibility", 
key.getColumnVisibility().toString());
 data.put("timestamp", 
key.getTimestamp());
+data.put("value", 
kv.getValue().toString());

Review comment:
   Is it possible here for the value to be null? I don't know much about 
Accumulo, but I would guess so. If so, we need to check for null and avoid 
calling `toString()` on a null reference.
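A null-safe version of that mapping could look like the sketch below. This is not the actual ScanAccumulo code; the Accumulo `Value` type is stubbed as a plain `Object`, and `toStringOrNull` is a hypothetical helper name:

```java
import java.util.HashMap;
import java.util.Map;

public class NullSafeValueDemo {
    // Hypothetical helper: convert a possibly-null value to a String
    // without risking a NullPointerException from toString().
    static String toStringOrNull(Object value) {
        return value == null ? null : value.toString();
    }

    public static void main(String[] args) {
        Map<String, Object> data = new HashMap<>();
        data.put("value", toStringOrNull(null));    // stays null, no NPE
        data.put("timestamp", toStringOrNull(42L)); // "42"
        System.out.println(data.get("timestamp"));  // prints "42"
    }
}
```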








[GitHub] [nifi] exceptionfactory commented on a change in pull request #4925: NIFI-8356: Add unit test for LongRunningTaskMonitor.

2021-03-24 Thread GitBox


exceptionfactory commented on a change in pull request #4925:
URL: https://github.com/apache/nifi/pull/4925#discussion_r600589912



##
File path: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/components/monitor/LongRunningTaskMonitor.java
##
@@ -72,6 +73,16 @@ public void run() {
 }
 }
 
-LOGGER.info("Active threads: {}; Long running threads: {}", 
activeThreadCount, longRunningThreadCount);
+getLogger().info("Active threads: {}; Long running threads: {}", 
activeThreadCount, longRunningThreadCount);
+}
+
+@VisibleForTesting

Review comment:
   Thanks for the explanation @turcsanyip, that makes sense.








[GitHub] [nifi] turcsanyip commented on a change in pull request #4925: NIFI-8356: Add unit test for LongRunningTaskMonitor.

2021-03-24 Thread GitBox


turcsanyip commented on a change in pull request #4925:
URL: https://github.com/apache/nifi/pull/4925#discussion_r600584036



##
File path: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/components/monitor/LongRunningTaskMonitor.java
##
@@ -72,6 +73,16 @@ public void run() {
 }
 }
 
-LOGGER.info("Active threads: {}; Long running threads: {}", 
activeThreadCount, longRunningThreadCount);
+getLogger().info("Active threads: {}; Long running threads: {}", 
activeThreadCount, longRunningThreadCount);
+}
+
+@VisibleForTesting

Review comment:
   The main purpose of this class is to log info about long-running (possibly 
stuck) tasks in the nifi log file to make troubleshooting easier. The nifi log 
is the only persistent storage of these messages because the bulletins 
disappear after a while. That's why I would assert these statements.
   
   Additionally, the messages are shown on the UI in two places: controller- 
and processor-level bulletins. `EventReport.reportEvent()` handles the 
controller-level bulletin, but the processor-level bulletin is triggered by log 
items written by the processor's logger. That's why we need to assert that log 
call.
   
   Asserting the last "Active threads:" summary log may be unnecessary, but 
after checking the other logs I added it too.
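One dependency-free way to assert such log statements is to capture them with a small recording logger. This is only a sketch of the idea, not the actual NiFi test (which uses mocking); the `RecordingLogger` type and its simplistic `{}` placeholder substitution are assumptions made for illustration:

```java
import java.util.ArrayList;
import java.util.List;

public class LogCaptureDemo {
    // Minimal fake logger that records formatted info messages.
    static class RecordingLogger {
        final List<String> messages = new ArrayList<>();

        void info(String template, Object... args) {
            // Naive SLF4J-style "{}" substitution; fine for simple demo args.
            String msg = template;
            for (Object arg : args) {
                msg = msg.replaceFirst("\\{\\}", String.valueOf(arg));
            }
            messages.add(msg);
        }
    }

    public static void main(String[] args) {
        RecordingLogger logger = new RecordingLogger();
        logger.info("Active threads: {}; Long running threads: {}", 5, 2);
        // Assert on a stable prefix rather than the exact full statement,
        // so the test survives minor wording tweaks.
        System.out.println(logger.messages.get(0).startsWith("Active threads: 5"));
    }
}
```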








[jira] [Commented] (NIFI-8143) GetFile: Keep Source File should be true by default

2021-03-24 Thread Joe Witt (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17307908#comment-17307908
 ] 

Joe Witt commented on NIFI-8143:


GetFile will likely go away whenever we do a NiFi 2.0 release.  The 
ListFile/FetchFile combination is far more powerful.

> GetFile: Keep Source File should be true by default
> ---
>
> Key: NIFI-8143
> URL: https://issues.apache.org/jira/browse/NIFI-8143
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: DEOM Damien
>Priority: Critical
>
> GetFile should not remove files by default
> It is very dangerous, even for experienced users (we basically spent 4 
> hours restoring a cluster because of this)
>  





[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a change in pull request #1037: MINIFICPP-1520 Fix PublishKafka properties to support expression language

2021-03-24 Thread GitBox


adamdebreceni commented on a change in pull request #1037:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1037#discussion_r600565978



##
File path: PROCESSORS.md
##
@@ -971,13 +971,13 @@ In the list below, the names of required properties 
appear in bold. Any other pr
 |**Known Brokers**|||A comma-separated list of known Kafka Brokers in the 
format :**Supports Expression Language: true**|
 |Max Flow Segment Size|0 B||Maximum flow content payload segment size for the 
kafka record. 0 B means unlimited.|
 |Max Request Size|||Maximum Kafka protocol request message size|
-|Message Key Field|||The name of a field in the Input Records that should be 
used as the Key for the Kafka message.
-Supports Expression Language: true (will be evaluated using flow file 
attributes)|
-|Message Timeout|30 sec||The total time sending a message could 
take**Supports Expression Language: true**|
+|Kafka Key|||The key to use for the message. If not specified, the UUID of the 
flow file is used as the message key.**Supports Expression Language: 
true**|

Review comment:
   nitpick: other properties seem to be in alphabetical order












[jira] [Resolved] (NIFI-8362) Set KEEP_SOURCE_FILE default to true for GetFile processor

2021-03-24 Thread dml (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

dml resolved NIFI-8362.
---
Resolution: Won't Do

https://github.com/apache/nifi/pull/4932/commits

> Set KEEP_SOURCE_FILE default to true for GetFile processor
> --
>
> Key: NIFI-8362
> URL: https://issues.apache.org/jira/browse/NIFI-8362
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: dml
>Priority: Major
>
> Currently the default value for the Keep Source File property on the GetFile 
> processor is false. It would be safer to default this to true, in case the 
> wrong directory path is entered and all files are removed from the directory.





[jira] [Commented] (NIFI-8362) Set KEEP_SOURCE_FILE default to true for GetFile processor

2021-03-24 Thread dml (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17307883#comment-17307883
 ] 

dml commented on NIFI-8362:
---

Pull Request [#4932|https://github.com/apache/nifi/pull/4932/commits] was 
rejected and closed. Closing this issue.

> Set KEEP_SOURCE_FILE default to true for GetFile processor
> --
>
> Key: NIFI-8362
> URL: https://issues.apache.org/jira/browse/NIFI-8362
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: dml
>Priority: Major
>
> Currently the default value for the Keep Source File property on the GetFile 
> processor is false. It would be safer to default this to true, in case the 
> wrong directory path is entered and all files are removed from the directory.





[GitHub] [nifi] dml872 commented on pull request #4932: changing default value for Keep Source File property on GetFile proce…

2021-03-24 Thread GitBox


dml872 commented on pull request #4932:
URL: https://github.com/apache/nifi/pull/4932#issuecomment-805878964


   Fair enough. I've often had an issue where a user mistakenly set the 
directory path to `./`, which ended up removing all files from the system, 
including all the configuration files for NiFi, causing NiFi to go down; hence 
my request for this change. Many thanks anyway.






[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #1037: MINIFICPP-1520 Fix PublishKafka properties to support expression language

2021-03-24 Thread GitBox


szaszm commented on a change in pull request #1037:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1037#discussion_r600542921



##
File path: extensions/librdkafka/PublishKafka.cpp
##
@@ -541,6 +543,11 @@ void PublishKafka::onSchedule(const 
std::shared_ptr 
   conn_ = utils::make_unique(key_);
   configureNewConnection(context);
 
+  std::string message_key_field;
+  if (context->getProperty(MessageKeyField.getName(), message_key_field)) {
+logger_->log_error("The %s property is set. This property is DEPRECATED 
and has no effect; please use Kafka Key instead.", MessageKeyField.getName());

Review comment:
   ok








[jira] [Resolved] (NIFI-6999) Encrypt Config Toolkit fails on very large flow.xml.gz files

2021-03-24 Thread Nathan Gough (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nathan Gough resolved NIFI-6999.

Fix Version/s: 1.13.0
   Resolution: Fixed

> Encrypt Config Toolkit fails on very large flow.xml.gz files
> 
>
> Key: NIFI-6999
> URL: https://issues.apache.org/jira/browse/NIFI-6999
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Tools and Build
>Affects Versions: 1.2.0, 1.10.0
>Reporter: Andy LoPresto
>Assignee: Nathan Gough
>Priority: Critical
>  Labels: documentation, encryption, heap, security, streaming, 
> toolkit
> Fix For: 1.13.0
>
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> A user reported failure when using the encrypt config toolkit to process 
> (encrypt) a large {{flow.xml.gz}}. The compressed file was 49 MB, but was 687 
> MB uncompressed. It contained 545 encrypted values, and approximately 90 
> templates. This caused the toolkit to fail during {{loadFlowXml()}} unless 
> the toolkit invocation set the heap to 8 GB via {{-Xms2g -Xmx8g}}. Even with 
> the expanded heap, the serialization of the newly-encrypted flow XML to the 
> file system fails with the following exception:
> {code}
> Exception in thread "main" java.lang.OutOfMemoryError: Requested array size 
> exceeds VM limit
> at java.lang.StringCoding.encode(StringCoding.java:350)
> at java.lang.String.getBytes(String.java:941)
> at org.apache.commons.io.IOUtils.write(IOUtils.java:1857)
> at org.apache.commons.io.IOUtils$write$0.call(Unknown Source)
> at 
> org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48)
> at 
> org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:113)
> at 
> org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:141)
> at 
> org.apache.nifi.properties.ConfigEncryptionTool$_writeFlowXmlToFile_closure5$_closure20.doCall(ConfigEncryptionTool.groovy:692)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:93)
> at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:325)
> at 
> org.codehaus.groovy.runtime.metaclass.ClosureMetaClass.invokeMethod(ClosureMetaClass.java:294)
> at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1019)
> at groovy.lang.Closure.call(Closure.java:426)
> at groovy.lang.Closure.call(Closure.java:442)
> at 
> org.codehaus.groovy.runtime.IOGroovyMethods.withCloseable(IOGroovyMethods.java:1622)
> at 
> org.codehaus.groovy.runtime.NioGroovyMethods.withCloseable(NioGroovyMethods.java:1754)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.codehaus.groovy.runtime.metaclass.ReflectionMetaMethod.invoke(ReflectionMetaMethod.java:54)
> at 
> org.codehaus.groovy.runtime.metaclass.NewInstanceMetaMethod.invoke(NewInstanceMetaMethod.java:56)
> at 
> org.codehaus.groovy.runtime.callsite.PojoMetaMethodSite$PojoMetaMethodSiteNoUnwrapNoCoerce.invoke(PojoMetaMethodSite.java:274)
> at 
> org.codehaus.groovy.runtime.callsite.PojoMetaMethodSite.call(PojoMetaMethodSite.java:56)
> at 
> org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48)
> at 
> org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:113)
> at 
> org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:125)
> at 
> org.apache.nifi.properties.ConfigEncryptionTool$_writeFlowXmlToFile_closure5.doCall(ConfigEncryptionTool.groovy:691)
> {code}
> The immediate fix was to remove the duplicated template definitions in the 
> flow definition, returning the file to a reasonable size. However, if run as 
> an inline replacement, this can cause the {{flow.xml.gz}} to be overwritten 
> with an empty file, potentially leading to data loss. The following steps 
> should be taken:
> # Guard against loading/operating on/serializing large files (log statements, 
> simple conditional checks)
> # Handle large files internally (change from direct {{String}} access to 
> {{BufferedInputStream}}, etc.)
> # Document the internal memory usage of the toolkit in the toolkit guide
> # Document best practices and steps to resolve issue in the toolkit guide




[jira] [Updated] (NIFI-8360) SplitContent does not find any 'splits' that occur after about 2 GB into FlowFile

2021-03-24 Thread Pierre Villard (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-8360:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> SplitContent does not find any 'splits' that occur after about 2 GB into 
> FlowFile
> -
>
> Key: NIFI-8360
> URL: https://issues.apache.org/jira/browse/NIFI-8360
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Major
> Fix For: 1.14.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> If we have a FlowFile greater than 2 GB, and we use SplitContent, anything 
> after about the 2 GB mark will not be detected as a Split.





[jira] [Commented] (NIFI-8360) SplitContent does not find any 'splits' that occur after about 2 GB into FlowFile

2021-03-24 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17307859#comment-17307859
 ] 

ASF subversion and git services commented on NIFI-8360:
---

Commit 91313a2e75b264601984c9c3ead68f9b75e3e478 in nifi's branch 
refs/heads/main from Mark Payne
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=91313a2 ]

NIFI-8360: Fixed an overflow issue where we used an integer to store the number 
of bytes encountered when reading data and searching for a given pattern

Signed-off-by: Pierre Villard 

This closes #4929.
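The root cause described in the commit message is a classic 32-bit counter overflow: an `int` byte counter wraps to a negative value once it passes 2^31 - 1 bytes (about 2 GB), so position tracking past that point misbehaves. A minimal illustration (not the actual NiFi code):

```java
public class CounterOverflowDemo {
    public static void main(String[] args) {
        final long twoGiB = 2147483648L; // 2^31 bytes, one past Integer.MAX_VALUE

        // An int counter wraps to a negative number after 2^31 - 1.
        int intBytesRead = Integer.MAX_VALUE;
        intBytesRead++; // overflows
        System.out.println(intBytesRead); // prints -2147483648

        // A long counter keeps counting correctly well past 2 GB.
        long longBytesRead = (long) Integer.MAX_VALUE + 1;
        System.out.println(longBytesRead == twoGiB); // prints true
    }
}
```

This is why the fix switches the byte count from `int` to `long`.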


> SplitContent does not find any 'splits' that occur after about 2 GB into 
> FlowFile
> -
>
> Key: NIFI-8360
> URL: https://issues.apache.org/jira/browse/NIFI-8360
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Major
> Fix For: 1.14.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> If we have a FlowFile greater than 2 GB, and we use SplitContent, anything 
> after about the 2 GB mark will not be detected as a Split.





[GitHub] [nifi] asfgit closed pull request #4929: NIFI-8360: Fixed an overflow issue where we used an integer to store …

2021-03-24 Thread GitBox


asfgit closed pull request #4929:
URL: https://github.com/apache/nifi/pull/4929


   






[GitHub] [nifi-minifi-cpp] arpadboda commented on a change in pull request #1037: MINIFICPP-1520 Fix PublishKafka properties to support expression language

2021-03-24 Thread GitBox


arpadboda commented on a change in pull request #1037:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1037#discussion_r600527153



##
File path: extensions/librdkafka/PublishKafka.cpp
##
@@ -541,6 +543,11 @@ void PublishKafka::onSchedule(const 
std::shared_ptr 
   conn_ = utils::make_unique(key_);
   configureNewConnection(context);
 
+  std::string message_key_field;
+  if (context->getProperty(MessageKeyField.getName(), message_key_field)) {
+logger_->log_error("The %s property is set. This property is DEPRECATED 
and has no effect; please use Kafka Key instead.", MessageKeyField.getName());

Review comment:
   I tend to agree with the error approach, especially as this is only going 
to be logged once, so it won't spam the log with error messages.








[GitHub] [nifi-minifi-cpp] fgerlits commented on a change in pull request #1037: MINIFICPP-1520 Fix PublishKafka properties to support expression language

2021-03-24 Thread GitBox


fgerlits commented on a change in pull request #1037:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1037#discussion_r600526296



##
File path: extensions/librdkafka/tests/CMakeLists.txt
##
@@ -29,10 +29,12 @@ FOREACH(testfile ${KAFKA_TESTS})
 createTests("${testfilename}")
 MATH(EXPR KAFKA_TEST_COUNT "${KAFKA_TEST_COUNT}+1")
 # The line below handles integration test
+target_include_directories(${testfilename} BEFORE PRIVATE "../..")

Review comment:
   I have removed the comment.








[GitHub] [nifi-minifi-cpp] fgerlits commented on a change in pull request #1037: MINIFICPP-1520 Fix PublishKafka properties to support expression language

2021-03-24 Thread GitBox


fgerlits commented on a change in pull request #1037:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1037#discussion_r600522506



##
File path: extensions/librdkafka/PublishKafka.cpp
##
@@ -541,6 +543,11 @@ void PublishKafka::onSchedule(const 
std::shared_ptr 
   conn_ = utils::make_unique(key_);
   configureNewConnection(context);
 
+  std::string message_key_field;
+  if (context->getProperty(MessageKeyField.getName(), message_key_field)) {
+logger_->log_error("The %s property is set. This property is DEPRECATED 
and has no effect; please use Kafka Key instead.", MessageKeyField.getName());

Review comment:
   I'm fine with logging it on either warning or error level, so we'll need 
a 4th vote to decide. :)
   
   My reasoning for an error log was that if the user has set the 
MessageKeyField property, then their flow is not working as intended.  E.g. 
they may have wanted to set the message key to the value of a particular flow 
file attribute, but it is set to the flow file UUID instead -- this can cause 
difficult-to-debug problems down the line.








[GitHub] [nifi] timeabarna commented on a change in pull request #4931: NIFI-8283 Value handling in ScanAccumulo processor

2021-03-24 Thread GitBox


timeabarna commented on a change in pull request #4931:
URL: https://github.com/apache/nifi/pull/4931#discussion_r600517501



##
File path: 
nifi-nar-bundles/nifi-accumulo-bundle/nifi-accumulo-processors/src/main/java/org/apache/nifi/accumulo/processors/ScanAccumulo.java
##
@@ -254,7 +258,12 @@ public void process(final InputStream in, final 
OutputStream out) throws IOExcep
 
 try{
 final RecordSchema writeSchema = 
writerFactory.getSchema(flowAttributes, new KeySchema());
-try (final RecordSetWriter writer = 
writerFactory.createWriter(getLogger(), writeSchema, out)) {
+List fieldList = new 
ArrayList<>();

Review comment:
   Renamed the variable to `recordSchemaFields`.








[GitHub] [nifi] timeabarna commented on a change in pull request #4931: NIFI-8283 Value handling in ScanAccumulo processor

2021-03-24 Thread GitBox


timeabarna commented on a change in pull request #4931:
URL: https://github.com/apache/nifi/pull/4931#discussion_r600516907



##
File path: 
nifi-nar-bundles/nifi-accumulo-bundle/nifi-accumulo-processors/src/main/java/org/apache/nifi/accumulo/processors/ScanAccumulo.java
##
@@ -254,7 +258,12 @@ public void process(final InputStream in, final 
OutputStream out) throws IOExcep
 
 try{
 final RecordSchema writeSchema = 
writerFactory.getSchema(flowAttributes, new KeySchema());
-try (final RecordSetWriter writer = 
writerFactory.createWriter(getLogger(), writeSchema, out)) {
+List fieldList = new 
ArrayList<>();
+fieldList.addAll(writeSchema.getFields());
+fieldList.add(new RecordField("value", 
RecordFieldType.STRING.getDataType()));

Review comment:
   Thanks for your help, exceptionfactory; String will cover most cases.
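The pattern in the diff above, copying an existing schema's fields and appending one more before creating the writer, can be sketched generically. Plain Java collections stand in here for NiFi's `RecordField`/`RecordSchema` types, and `withValueField` is a hypothetical helper name:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class SchemaExtendDemo {
    // Copy the existing field list (which may be fixed-size or immutable),
    // then append the new "value" field to the copy.
    static List<String> withValueField(List<String> baseFields) {
        List<String> fields = new ArrayList<>(baseFields);
        fields.add("value");
        return fields;
    }

    public static void main(String[] args) {
        List<String> base = Arrays.asList("row", "columnFamily", "timestamp");
        System.out.println(withValueField(base));
        // prints [row, columnFamily, timestamp, value]
    }
}
```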








[GitHub] [nifi] joewitt closed pull request #4932: changing default value for Keep Source File property on GetFile proce…

2021-03-24 Thread GitBox


joewitt closed pull request #4932:
URL: https://github.com/apache/nifi/pull/4932


   






[GitHub] [nifi] joewitt commented on pull request #4932: changing default value for Keep Source File property on GetFile proce…

2021-03-24 Thread GitBox


joewitt commented on pull request #4932:
URL: https://github.com/apache/nifi/pull/4932#issuecomment-805856940


   I am strongly opposed to this change. This default value has been in place 
for a very long time, and the decision then was to make the default match the 
most obvious intent. ListFile and FetchFile are a more powerful pair and 
decompose what GetFile does. We will likely remove GetFile in an eventual NiFi 
2.0.
   
   For future contributions, please follow the contribution guidance, which 
means a JIRA must exist and be linked in the commit message.
   
   I am closing this PR.






[GitHub] [nifi] timeabarna commented on a change in pull request #4931: NIFI-8283 Value handling in ScanAccumulo processor

2021-03-24 Thread GitBox


timeabarna commented on a change in pull request #4931:
URL: https://github.com/apache/nifi/pull/4931#discussion_r600515517



##
File path: 
nifi-nar-bundles/nifi-accumulo-bundle/nifi-accumulo-processors/src/main/java/org/apache/nifi/accumulo/processors/ScanAccumulo.java
##
@@ -270,8 +279,9 @@ public void process(final InputStream in, final 
OutputStream out) throws IOExcep
 data.put("columnQualifier", 
key.getColumnQualifier().toString());
 data.put("columnVisibility", 
key.getColumnVisibility().toString());
 data.put("timestamp", 
key.getTimestamp());
+data.put("value", kv.getValue());

Review comment:
   Thanks, Mark, for your help; modified to `kv.getValue().toString()`.








[jira] [Assigned] (NIFI-8206) Allow PropertyDescriptors to drive what type of input they accept

2021-03-24 Thread Mark Payne (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Payne reassigned NIFI-8206:


Assignee: Mark Payne

> Allow PropertyDescriptors to drive what type of input they accept
> -
>
> Key: NIFI-8206
> URL: https://issues.apache.org/jira/browse/NIFI-8206
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework, Extensions, Tools and Build
>Reporter: Matt Gilman
>Assignee: Mark Payne
>Priority: Major
>
> Currently, PropertyDescriptors can be configured with a Validator that 
> validates any proposed value. Rather than solely driving what's allowable 
> through the Validator which only exists at runtime, it may be beneficial if 
> the PropertyDescriptor can describe the type of input they support. For 
> instance a file, a list of files, a URL, etc.
> Using these details to populate the extension metadata generated at build 
> time would be helpful in use cases where the flow might be authored separate 
> from the runtime environment like Stateless NiFi.
> Having these details may even provide an opportunity to update the Processor 
> API to support default file-exists validators, URL validators, etc. It may 
> also allow the Processor API to offer capabilities around loading the 
> content from these configured values.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #1032: MINIFICPP-1504: Add Resource consumption data to heartbeats

2021-03-24 Thread GitBox


szaszm commented on a change in pull request #1032:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1032#discussion_r600511300



##
File path: libminifi/include/core/state/nodes/DeviceInformation.h
##
@@ -353,119 +355,104 @@ class DeviceInfoNode : public DeviceInformation {
   std::vector<SerializedResponseNode> serialize() {
 std::vector<SerializedResponseNode> serialized;
 
+serialized.push_back(serializeIdentifier());
+serialized.push_back(serializeSystemInfo());
+serialized.push_back(serializeNetworkInfo());
+
+return serialized;
+  }
+
+ protected:
+  SerializedResponseNode serializeIdentifier() {
 SerializedResponseNode identifier;
 identifier.name = "identifier";
 identifier.value = device_id_;
+return identifier;
+  }
 
-SerializedResponseNode systemInfo;
-systemInfo.name = "systemInfo";
-
-SerializedResponseNode vcores;
-vcores.name = "vCores";
-size_t ncpus = std::thread::hardware_concurrency();
-
-vcores.value = ncpus;
-
-systemInfo.children.push_back(vcores);
-
-SerializedResponseNode ostype;
-ostype.name = "operatingSystem";
-ostype.value = getOperatingSystem();
-
-systemInfo.children.push_back(ostype);
-#if defined(_SC_PHYS_PAGES) && defined(_SC_PAGESIZE)
-SerializedResponseNode mem;
-mem.name = "physicalMem";
-
-uint64_t mema = (size_t) sysconf(_SC_PHYS_PAGES) * (size_t) sysconf(_SC_PAGESIZE);
+  SerializedResponseNode serializeVCoreInfo() {
+SerializedResponseNode v_cores;
+v_cores.name = "vCores";
+v_cores.value = std::thread::hardware_concurrency();
+return v_cores;
+  }
 
-mem.value = mema;
+  SerializedResponseNode serializeOperatingSystemType() {
+SerializedResponseNode os_type;
+os_type.name = "operatingSystem";
+os_type.value = getOperatingSystem();
+return os_type;
+  }
 
-systemInfo.children.push_back(mem);
-#endif
-#ifndef WIN32
-SerializedResponseNode arch;
-arch.name = "machinearch";
+  SerializedResponseNode serializeTotalPhysicalMemoryInformation() {
+SerializedResponseNode total_physical_memory;
+total_physical_memory.name = "physicalMem";
+total_physical_memory.value = (uint64_t)utils::OsUtils::getSystemTotalPhysicalMemory();
+return total_physical_memory;
+  }
 
-utsname buf;
+  SerializedResponseNode serializePhysicalMemoryUsageInformation() {
+SerializedResponseNode used_physical_memory;
+used_physical_memory.name = "memoryUtilization";

Review comment:
   
[utilization](https://dictionary.cambridge.org/dictionary/english/utilization):
   > the amount of something available, produced, etc. compared with the total 
amount that exists or that could be produced
   
   Either we should return a percentage value here or rename this to something 
like memoryUsage.








[jira] [Resolved] (NIFI-8357) ConsumeKafka(Record)_2_0, ConsumeKafka(Record)_2_6 do not reconnect if using statically assigned partitions

2021-03-24 Thread Peter Turcsanyi (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Turcsanyi resolved NIFI-8357.
---
Resolution: Fixed

> ConsumeKafka(Record)_2_0, ConsumeKafka(Record)_2_6 do not reconnect if using 
> statically assigned partitions
> ---
>
> Key: NIFI-8357
> URL: https://issues.apache.org/jira/browse/NIFI-8357
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Critical
> Fix For: 1.14.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> If using statically assigned partitions in ConsumeKafka_2_0, 
> ConsumeKafkaRecord_2_0, ConsumeKafka_2_6, or ConsumeKafkaRecord_2_6 (via 
> adding {{partitions.}}-prefixed properties), when a client connection 
> fails, it recreates connections but does not properly assign the partitions. 
> As a result, the consumer stops consuming data from its partition(s), and the 
> Kafka client that gets created gets leaked. This can slowly build up to 
> leaking many of these connections and potentially could exhaust heap or cause 
> IOException: too many open files.
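A toy model of the failure mode described above (not the Kafka client API; all names below are invented for illustration): when a poisoned consumer is recreated, its static partition assignment must be re-applied, or the fresh consumer silently consumes nothing.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class StaticAssignmentSketch {
    // Invented stand-in for a consumer with statically assigned partitions.
    static final class ToyConsumer {
        private final Set<Integer> assigned = new HashSet<>();
        void assign(List<Integer> partitions) { assigned.addAll(partitions); }
        Set<Integer> assignment() { return assigned; }
    }

    // The fix described above: recreating the lease must re-apply the
    // static assignment, not merely open a fresh connection.
    static ToyConsumer recreatePoisoned(List<Integer> staticPartitions) {
        ToyConsumer fresh = new ToyConsumer();
        fresh.assign(staticPartitions);
        return fresh;
    }

    public static void main(String[] args) {
        ToyConsumer consumer = recreatePoisoned(Arrays.asList(0, 2));
        System.out.println(consumer.assignment().size()); // 2
    }
}
```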





[jira] [Commented] (NIFI-8357) ConsumeKafka(Record)_2_0, ConsumeKafka(Record)_2_6 do not reconnect if using statically assigned partitions

2021-03-24 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17307844#comment-17307844
 ] 

ASF subversion and git services commented on NIFI-8357:
---

Commit 74ea3840ac98c8deff1ab83f673cc8fcb7072bcd in nifi's branch 
refs/heads/main from Mark Payne
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=74ea384 ]

NIFI-8357: Updated Kafka 2.6 processors to automatically handle recreating 
Consumer Lease objects when an existing one is poisoned, even if using 
statically assigned partitions

This closes #4926.

Signed-off-by: Peter Turcsanyi 


> ConsumeKafka(Record)_2_0, ConsumeKafka(Record)_2_6 do not reconnect if using 
> statically assigned partitions
> ---
>
> Key: NIFI-8357
> URL: https://issues.apache.org/jira/browse/NIFI-8357
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Critical
> Fix For: 1.14.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> If using statically assigned partitions in ConsumeKafka_2_0, 
> ConsumeKafkaRecord_2_0, ConsumeKafka_2_6, or ConsumeKafkaRecord_2_6 (via 
> adding {{partitions.}}-prefixed properties), when a client connection 
> fails, it recreates connections but does not properly assign the partitions. 
> As a result, the consumer stops consuming data from its partition(s), and the 
> Kafka client that gets created gets leaked. This can slowly build up to 
> leaking many of these connections and potentially could exhaust heap or cause 
> IOException: too many open files.





[GitHub] [nifi] asfgit closed pull request #4926: NIFI-8357: Updated Kafka 2.0/2.6 processors to automatically handle recreating Consumer Lease objects when an existing one is poisoned, even if using

2021-03-24 Thread GitBox


asfgit closed pull request #4926:
URL: https://github.com/apache/nifi/pull/4926


   






[jira] [Commented] (NIFI-8357) ConsumeKafka(Record)_2_0, ConsumeKafka(Record)_2_6 do not reconnect if using statically assigned partitions

2021-03-24 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17307843#comment-17307843
 ] 

ASF subversion and git services commented on NIFI-8357:
---

Commit 2f08d1f466b9f6f0b0b8a7b5893341a0d1433a4e in nifi's branch 
refs/heads/main from Mark Payne
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=2f08d1f ]

NIFI-8357: Updated Kafka 2.0 processors to automatically handle recreating 
Consumer Lease objects when an existing one is poisoned, even if using 
statically assigned partitions

This closes #4926.

Signed-off-by: Peter Turcsanyi 


> ConsumeKafka(Record)_2_0, ConsumeKafka(Record)_2_6 do not reconnect if using 
> statically assigned partitions
> ---
>
> Key: NIFI-8357
> URL: https://issues.apache.org/jira/browse/NIFI-8357
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Critical
> Fix For: 1.14.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> If using statically assigned partitions in ConsumeKafka_2_0, 
> ConsumeKafkaRecord_2_0, ConsumeKafka_2_6, or ConsumeKafkaRecord_2_6 (via 
> adding {{partitions.}}-prefixed properties), when a client connection 
> fails, it recreates connections but does not properly assign the partitions. 
> As a result, the consumer stops consuming data from its partition(s), and the 
> Kafka client that gets created gets leaked. This can slowly build up to 
> leaking many of these connections and potentially could exhaust heap or cause 
> IOException: too many open files.





[GitHub] [nifi] dml872 opened a new pull request #4932: changing default value for Keep Source File property on GetFile proce…

2021-03-24 Thread GitBox


dml872 opened a new pull request #4932:
URL: https://github.com/apache/nifi/pull/4932


   …ssor
   
   Thank you for submitting a contribution to Apache NiFi.
   
   Please provide a short description of the PR here:
   
    Description of PR
   
   _Enables X functionality; fixes bug NIFI-._
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
in the commit message?
   
   - [ ] Does your PR title start with **NIFI-XXXX** where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [ ] Has your PR been rebased against the latest commit within the target 
branch (typically `main`)?
   
   - [ ] Is your initial contribution a single, squashed commit? _Additional 
commits in response to PR reviewer feedback should be made on this branch and 
pushed to allow change tracking. Do not `squash` or use `--force` when pushing 
to allow for clean monitoring of changes._
   
   ### For code changes:
   - [ ] Have you ensured that the full suite of tests is executed via `mvn 
-Pcontrib-check clean install` at the root `nifi` folder?
   - [ ] Have you written or updated unit tests to verify your changes?
   - [ ] Have you verified that the full build is successful on JDK 8?
   - [ ] Have you verified that the full build is successful on JDK 11?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
   - [ ] If applicable, have you updated the `LICENSE` file, including the main 
`LICENSE` file under `nifi-assembly`?
   - [ ] If applicable, have you updated the `NOTICE` file, including the main 
`NOTICE` file found under `nifi-assembly`?
   - [ ] If adding new Properties, have you added `.displayName` in addition to 
.name (programmatic access) for each of the new properties?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check GitHub Actions CI for 
build issues and submit an update to your PR as soon as possible.
   






[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #1037: MINIFICPP-1520 Fix PublishKafka properties to support expression language

2021-03-24 Thread GitBox


szaszm commented on a change in pull request #1037:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1037#discussion_r600491028



##
File path: extensions/librdkafka/PublishKafka.cpp
##
@@ -541,6 +543,11 @@ void PublishKafka::onSchedule(const std::shared_ptr
   conn_ = utils::make_unique(key_);
   configureNewConnection(context);
 
+  std::string message_key_field;
+  if (context->getProperty(MessageKeyField.getName(), message_key_field)) {
+logger_->log_error("The %s property is set. This property is DEPRECATED and has no effect; please use Kafka Key instead.", MessageKeyField.getName());

Review comment:
   Having the property set is likely a mistake, but it doesn't cause data 
loss or failures down the line. Seeing this message doesn't mean a change in 
behavior from the happy path, only that the flow is incorrect, but still 
runnable.








[GitHub] [nifi] exceptionfactory commented on a change in pull request #4931: NIFI-8283 Value handling in ScanAccumulo processor

2021-03-24 Thread GitBox


exceptionfactory commented on a change in pull request #4931:
URL: https://github.com/apache/nifi/pull/4931#discussion_r600474873



##
File path: 
nifi-nar-bundles/nifi-accumulo-bundle/nifi-accumulo-processors/src/main/java/org/apache/nifi/accumulo/processors/ScanAccumulo.java
##
@@ -254,7 +258,12 @@ public void process(final InputStream in, final OutputStream out) throws IOExcep
 
 try{
 final RecordSchema writeSchema = writerFactory.getSchema(flowAttributes, new KeySchema());
-try (final RecordSetWriter writer = writerFactory.createWriter(getLogger(), writeSchema, out)) {
+List<RecordField> fieldList = new ArrayList<>();

Review comment:
   Recommend declaring this variable and other variables as `final` 
following the general pattern of the Processor.  Renaming the variable to 
something like `writeSchemaFields` or `recordSchemaFields` would also be 
helpful.

##
File path: 
nifi-nar-bundles/nifi-accumulo-bundle/nifi-accumulo-processors/src/main/java/org/apache/nifi/accumulo/processors/ScanAccumulo.java
##
@@ -254,7 +258,12 @@ public void process(final InputStream in, final OutputStream out) throws IOExcep
 
 try{
 final RecordSchema writeSchema = writerFactory.getSchema(flowAttributes, new KeySchema());
-try (final RecordSetWriter writer = writerFactory.createWriter(getLogger(), writeSchema, out)) {
+List<RecordField> fieldList = new ArrayList<>();
+fieldList.addAll(writeSchema.getFields());
+fieldList.add(new RecordField("value", RecordFieldType.STRING.getDataType()));

Review comment:
   The Accumulo `Value` object contains a byte array which can be converted 
to UTF-8 String, but does conversion to String cover all use cases?

##
File path: 
nifi-nar-bundles/nifi-accumulo-bundle/nifi-accumulo-processors/src/main/java/org/apache/nifi/accumulo/processors/ScanAccumulo.java
##
@@ -270,8 +279,9 @@ public void process(final InputStream in, final OutputStream out) throws IOExcep
 data.put("columnQualifier", key.getColumnQualifier().toString());
 data.put("columnVisibility", key.getColumnVisibility().toString());
 data.put("timestamp", key.getTimestamp());
+data.put("value", kv.getValue());

Review comment:
   As mentioned in the RecordField definition, `Value.toString()` converts 
the underlying byte array to a UTF-8 encoded String.  If that covers all use 
cases, this should be changed to `kv.getValue().toString()`.  Alternatively, if 
the underlying byte array should be represented, this should be changed to 
`kv.getValue().get()` and the RecordFieldType should be changed,
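The two options discussed above can be sketched with a minimal stand-in for Accumulo's `Value` class (a hypothetical simplification here, to stay self-contained; the real class lives in `org.apache.accumulo.core.data` and likewise wraps a byte array that `toString()` decodes as UTF-8):

```java
import java.nio.charset.StandardCharsets;

public class ValueConversionSketch {
    // Minimal stand-in for Accumulo's Value: wraps a byte[] and, like the
    // real class, decodes it as UTF-8 in toString().
    static final class Value {
        private final byte[] data;
        Value(byte[] data) { this.data = data; }
        byte[] get() { return data; }                 // raw bytes
        @Override public String toString() {          // UTF-8 text
            return new String(data, StandardCharsets.UTF_8);
        }
    }

    public static void main(String[] args) {
        Value kvValue = new Value("hello".getBytes(StandardCharsets.UTF_8));
        // Option 1: STRING record field -> store the decoded text
        String asString = kvValue.toString();
        // Option 2: byte-oriented record field -> store the raw array
        byte[] asBytes = kvValue.get();
        System.out.println(asString);                 // hello
        System.out.println(asBytes.length);           // 5
    }
}
```

The trade-off is the one raised in the review: the String form is lossy for non-UTF-8 payloads, while the byte-array form requires changing the declared `RecordFieldType`.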








[GitHub] [nifi] ChrisSamo632 commented on pull request #4854: NIFI-7172: Trim whitespace from NiFi properties

2021-03-24 Thread GitBox


ChrisSamo632 commented on pull request #4854:
URL: https://github.com/apache/nifi/pull/4854#issuecomment-805824784


   > Thanks for making the adjustments @eaolson. Do you have any additional 
comments @ChrisSamo632?
   
   @exceptionfactory nothing more from me (I can't see how to resolve 
conversations, maybe I don't have access, nevermind)






[GitHub] [nifi] markap14 commented on a change in pull request #4931: NIFI-8283 Value handling in ScanAccumulo processor

2021-03-24 Thread GitBox


markap14 commented on a change in pull request #4931:
URL: https://github.com/apache/nifi/pull/4931#discussion_r600483300



##
File path: 
nifi-nar-bundles/nifi-accumulo-bundle/nifi-accumulo-processors/src/main/java/org/apache/nifi/accumulo/processors/ScanAccumulo.java
##
@@ -270,8 +279,9 @@ public void process(final InputStream in, final OutputStream out) throws IOExcep
 data.put("columnQualifier", key.getColumnQualifier().toString());
 data.put("columnVisibility", key.getColumnVisibility().toString());
 data.put("timestamp", key.getTimestamp());
+data.put("value", kv.getValue());

Review comment:
   The schema here indicates that this field is to be a `String` type. That 
means that the value added into the map needs to be a String also, but it's of 
type `Value`. Need to ensure that it's properly converted to a `String` before 
putting into the Map.








[jira] [Resolved] (NIFI-8296) Integration with API to retrieve all subjects associated with a schema id for Confluent Schema Registry v5.3.1+

2021-03-24 Thread Pierre Villard (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard resolved NIFI-8296.
--
Fix Version/s: 1.14.0
   Resolution: Fixed

> Integration with API to retrieve all subjects associated with a schema id for 
> Confluent Schema Registry v5.3.1+
> ---
>
> Key: NIFI-8296
> URL: https://issues.apache.org/jira/browse/NIFI-8296
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Dmitry Ibragimov
>Priority: Major
> Fix For: 1.14.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Confluent Schema Registry version 5.3.1+ implements a new API for retrieving 
> the schema name by id:
> [https://github.com/confluentinc/schema-registry/pull/1196]
> Currently NiFi loops over all schemas registered in the registry. For 
> registries with a large number of schemas, this can take a long time.
> [https://github.com/apache/nifi/blob/e2e137fced39f184e5998f7c4befe03e70f53016/nifi-nar-bundles/nifi-confluent-platform-bundle/nifi-confluent-schema-registry-service/src/main/java/org/apache/nifi/confluent/schemaregistry/client/RestSchemaRegistryClient.java#L111|https://github.com/apache/nifi/blob/ff93ec42c3730f1ba8acc4843e8504f2ff47a5cf/nifi-nar-bundles/nifi-confluent-platform-bundle/nifi-confluent-schema-registry-service/src/main/java/org/apache/nifi/confluent/schemaregistry/client/RestSchemaRegistryClient.java#L111]
> Implementing this feature will give us a huge performance increase when 
> consuming with a registry containing a large number of schemas.
>  
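For illustration, the lookup added by the linked Confluent PR can be exercised by building the per-id versions path; the endpoint shape used below (`/schemas/ids/{id}/versions`) is my reading of that PR and should be verified against the registry version in use:

```java
public class SchemaRegistryPathSketch {
    // Hypothetical helper: builds the per-id lookup path that avoids
    // scanning every subject (endpoint shape assumed from the linked PR).
    static String schemaVersionsPath(String baseUrl, int schemaId) {
        return baseUrl + "/schemas/ids/" + schemaId + "/versions";
    }

    public static void main(String[] args) {
        // An HTTP GET against this URL would return the subject/version
        // pairs for the schema id, replacing the full-registry scan.
        System.out.println(schemaVersionsPath("http://localhost:8081", 42));
        // -> http://localhost:8081/schemas/ids/42/versions
    }
}
```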





[jira] [Commented] (NIFI-8296) Integration with API to retrieve all subjects associated with a schema id for Confluent Schema Registry v5.3.1+

2021-03-24 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17307825#comment-17307825
 ] 

ASF subversion and git services commented on NIFI-8296:
---

Commit 057b4af48249b8c22f4f914aac4c7ac4c3f5693c in nifi's branch 
refs/heads/main from Dmitry Ibragimov
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=057b4af ]

NIFI-8296 - Use API to retrieve all subjects associated with a schema id for 
Confluent Schema Registry v5.4.0+

Signed-off-by: Pierre Villard 

This closes #4872.


> Integration with API to retrieve all subjects associated with a schema id for 
> Confluent Schema Registry v5.3.1+
> ---
>
> Key: NIFI-8296
> URL: https://issues.apache.org/jira/browse/NIFI-8296
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Dmitry Ibragimov
>Priority: Major
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Confluent Schema Registry version 5.3.1+ implements a new API for retrieving 
> the schema name by id:
> [https://github.com/confluentinc/schema-registry/pull/1196]
> Currently NiFi loops over all schemas registered in the registry. For 
> registries with a large number of schemas, this can take a long time.
> [https://github.com/apache/nifi/blob/e2e137fced39f184e5998f7c4befe03e70f53016/nifi-nar-bundles/nifi-confluent-platform-bundle/nifi-confluent-schema-registry-service/src/main/java/org/apache/nifi/confluent/schemaregistry/client/RestSchemaRegistryClient.java#L111|https://github.com/apache/nifi/blob/ff93ec42c3730f1ba8acc4843e8504f2ff47a5cf/nifi-nar-bundles/nifi-confluent-platform-bundle/nifi-confluent-schema-registry-service/src/main/java/org/apache/nifi/confluent/schemaregistry/client/RestSchemaRegistryClient.java#L111]
> Implementing this feature will give us a huge performance increase when 
> consuming with a registry containing a large number of schemas.
>  





[GitHub] [nifi] asfgit closed pull request #4872: NIFI-8296: Use API to retrieve all subjects associated with a schema id for Confluent Schema Registry v5.3.1+

2021-03-24 Thread GitBox


asfgit closed pull request #4872:
URL: https://github.com/apache/nifi/pull/4872


   






[GitHub] [nifi] pvillard31 commented on pull request #4872: NIFI-8296: Use API to retrieve all subjects associated with a schema id for Confluent Schema Registry v5.3.1+

2021-03-24 Thread GitBox


pvillard31 commented on pull request #4872:
URL: https://github.com/apache/nifi/pull/4872#issuecomment-805816849


   Merged to main, thanks @diarworld 






[GitHub] [nifi] exceptionfactory commented on pull request #4857: NIFI-8230 Removed default Sensitive Properties Key

2021-03-24 Thread GitBox


exceptionfactory commented on pull request #4857:
URL: https://github.com/apache/nifi/pull/4857#issuecomment-805814048


   Just to summarize the latest changes: after some additional discussion with 
@markap14, he suggested implementing a simplified command for setting the 
Sensitive Properties Key to make the migration path easier for current users 
with blank keys.
   
   The most recent commit includes an update to `nifi.sh` to support the 
following:
   
   `nifi.sh set-sensitive-properties-key `
   
   The script passes the new key to the command class, which reads the current 
key from `nifi.properties` and falls back to the internal default value if the 
existing value is blank.
   
   This approach required refactoring the PropertyEncryptor classes to a 
separate module named `nifi-property-encryptor`.  The new command class is 
located under `nifi-flow-encryptor`.  The existing `ConfigEncryptionTool` 
continues to work, but contained a lot of the same code, so this change 
provided an opportunity to refactor `ConfigEncryptionTool` to leverage the 
`PropertyEncryptor` interface.  This simplifies `ConfigEncryptionTool` and also 
removes code that diverged from the standard property encryption approach.
   
   With these changes, the Admin Guide now includes a new section describing 
the `set-sensitive-properties-key` command.






[jira] [Resolved] (NIFI-8348) Advanced UI not available when building with Java 11

2021-03-24 Thread Mark Payne (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Payne resolved NIFI-8348.
--
Fix Version/s: (was: 1.13.3)
   1.14.0
   Resolution: Fixed

> Advanced UI not available when building with Java 11
> 
>
> Key: NIFI-8348
> URL: https://issues.apache.org/jira/browse/NIFI-8348
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.11.4, 1.13.0, 1.12.1, 1.13.1, 1.13.2
>Reporter: Mark Bean
>Assignee: Mark Bean
>Priority: Critical
> Fix For: 1.14.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> If NiFi is built using Java 11, the Advanced UI of the UpdateAttribute 
> processor does not load the rules. The processor will still effectively 
> process flowfiles according to the rules (if they were created in a flow from 
> a different build, for example). However, the rules are not able to be 
> displayed nor edited in the UI.
> Workaround: build NiFi using Java 8. The resulting executable can be run 
> under either a Java 8 or Java 11 JVM without demonstrating the above issue.
> This problem affects versions 1.13.0, 1.13.1 and 1.13.2, at a minimum. 
> Earlier versions have not been verified yet.





[jira] [Commented] (NIFI-8348) Advanced UI not available when building with Java 11

2021-03-24 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17307822#comment-17307822
 ] 

ASF subversion and git services commented on NIFI-8348:
---

Commit 1719e3616591e96a39662d3a3684f27ef2505322 in nifi's branch 
refs/heads/main from Mark Bean
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=1719e36 ]

NIFI-8348: upgrade jersey version to one fully compatible with Java 11


> Advanced UI not available when building with Java 11
> 
>
> Key: NIFI-8348
> URL: https://issues.apache.org/jira/browse/NIFI-8348
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.11.4, 1.13.0, 1.12.1, 1.13.1, 1.13.2
>Reporter: Mark Bean
>Assignee: Mark Bean
>Priority: Critical
> Fix For: 1.13.3
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> If NiFi is built using Java 11, the Advanced UI of the UpdateAttribute 
> processor does not load the rules. The processor will still effectively 
> process flowfiles according to the rules (if they were created in a flow from 
> a different build, for example). However, the rules are not able to be 
> displayed nor edited in the UI.
> Workaround: build NiFi using Java 8. The resulting executable can be run 
> under either a Java 8 or Java 11 JVM without demonstrating the above issue.
> This problem affects versions 1.13.0, 1.13.1 and 1.13.2, at a minimum. 
> Earlier versions have not been verified yet.





[GitHub] [nifi] markap14 merged pull request #4919: NIFI-8348: upgrade jersey version to one fully compatible with Java 11

2021-03-24 Thread GitBox


markap14 merged pull request #4919:
URL: https://github.com/apache/nifi/pull/4919


   






[GitHub] [nifi] exceptionfactory commented on pull request #4854: NIFI-7172: Trim whitespace from NiFi properties

2021-03-24 Thread GitBox


exceptionfactory commented on pull request #4854:
URL: https://github.com/apache/nifi/pull/4854#issuecomment-805791573


   Thanks for making the adjustments @eaolson.  Do you have any additional 
comments @ChrisSamo632?






[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a change in pull request #1037: MINIFICPP-1520 Fix PublishKafka properties to support expression language

2021-03-24 Thread GitBox


adamdebreceni commented on a change in pull request #1037:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1037#discussion_r600432935



##
File path: extensions/librdkafka/PublishKafka.cpp
##
@@ -541,6 +543,11 @@ void PublishKafka::onSchedule(const std::shared_ptr
   conn_ = utils::make_unique(key_);
   configureNewConnection(context);
 
+  std::string message_key_field;
+  if (context->getProperty(MessageKeyField.getName(), message_key_field)) {
+logger_->log_error("The %s property is set. This property is DEPRECATED and has no effect; please use Kafka Key instead.", MessageKeyField.getName());

Review comment:
   I would go with error, as it uses a property that never worked and is 
only preserved to allow the processor to operate instead of fizzling out in 
onSchedule. Personally, I would even go for removing it, but I do not know how 
prevalent its usage is.








[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #1037: MINIFICPP-1520 Fix PublishKafka properties to support expression language

2021-03-24 Thread GitBox


szaszm commented on a change in pull request #1037:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1037#discussion_r600426575



##
File path: extensions/librdkafka/PublishKafka.cpp
##
@@ -541,6 +543,11 @@ void PublishKafka::onSchedule(const std::shared_ptr
   conn_ = utils::make_unique(key_);
   configureNewConnection(context);
 
+  std::string message_key_field;
+  if (context->getProperty(MessageKeyField.getName(), message_key_field)) {
+logger_->log_error("The %s property is set. This property is DEPRECATED and has no effect; please use Kafka Key instead.", MessageKeyField.getName());

Review comment:
   I would prefer this to be a warning, not an error

##
File path: extensions/librdkafka/tests/CMakeLists.txt
##
@@ -29,10 +29,12 @@ FOREACH(testfile ${KAFKA_TESTS})
 createTests("${testfilename}")
 MATH(EXPR KAFKA_TEST_COUNT "${KAFKA_TEST_COUNT}+1")
 # The line below handles integration test
+target_include_directories(${testfilename} BEFORE PRIVATE "../..")

Review comment:
   The comment above is outdated, I suggest removing it.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #1036: MINIFICPP-1352 Fix warnings in JNI and Sensors extensions

2021-03-24 Thread GitBox


szaszm commented on a change in pull request #1036:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1036#discussion_r600419189



##
File path: extensions/jni/jvm/JniReferenceObjects.h
##
@@ -142,7 +142,7 @@ class JniByteInputStream : public minifi::InputStreamCallback {
 int read = 0;
 do {
 
-  int actual = (int) stream_->read(buffer_, remaining <= buffer_size_ ? remaining : buffer_size_);
+  int actual = (int) stream_->read(buffer_, gsl::narrow(remaining) <= buffer_size_ ? remaining : buffer_size_);

Review comment:
   In either case, we need to add a precondition (`gsl_Expects`) that `size` must not be negative for the narrowing to be safe. I know that the behavior is identical, but crashing because of a narrowing error looks accidental, while a precondition clearly communicates the intent.
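   
   The precondition-plus-narrowing idea can be sketched in a self-contained way. The `narrow` template below is a hand-rolled stand-in for `gsl::narrow`, and `assert` stands in for `gsl_Expects`; the variable names are hypothetical, since the call site above is abbreviated:

   ```cpp
   #include <cassert>
   #include <cstdint>
   #include <stdexcept>

   // Stand-in for gsl::narrow: cast, then verify the value survived the round trip.
   template <typename To, typename From>
   To narrow(From value) {
     const To result = static_cast<To>(value);
     if (static_cast<From>(result) != value) {
       throw std::runtime_error("narrowing_error");
     }
     return result;
   }

   int main() {
     const std::int64_t remaining = 100;   // bytes left to read (hypothetical)
     const std::size_t buffer_size = 4096;

     // Precondition, in the spirit of gsl_Expects: a negative size would make
     // the signed-to-unsigned narrowing below silently wrap around.
     assert(remaining >= 0);

     // Narrow explicitly, then clamp the read size to the buffer capacity.
     const std::size_t to_read =
         narrow<std::size_t>(remaining) <= buffer_size
             ? static_cast<std::size_t>(remaining)
             : buffer_size;
     assert(to_read == 100);
     return 0;
   }
   ```

   The precondition documents intent up front; the checked narrowing is then a belt-and-suspenders guard rather than the primary error path.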
   
   




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a change in pull request #1032: MINIFICPP-1504: Add Resource consumption data to heartbeats

2021-03-24 Thread GitBox


adamdebreceni commented on a change in pull request #1032:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1032#discussion_r600418932



##
File path: libminifi/test/unit/ResponseNodeValueTests.cpp
##
@@ -0,0 +1,113 @@
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include 
+
+#include "../../include/core/state/Value.h"
+#include "../TestBase.h"
+
+template <typename T>
+bool canConvertToType(org::apache::nifi::minifi::state::response::ValueNode value_node) {
+  T conversion_target;
+  return value_node.getValue()->convertValue(conversion_target);
+}
+
+template <typename T>
+bool canConvertToType(org::apache::nifi::minifi::state::response::ValueNode value_node, const T& expected_result) {
+  T conversion_target;
+  bool canConvert = value_node.getValue()->convertValue(conversion_target);
+  return canConvert && (expected_result == conversion_target);
+}
+
+TEST_CASE("IntValueNodeTests", "[responsenodevaluetests]") {
+  org::apache::nifi::minifi::state::response::ValueNode value_node;
+
+  int positive_int_value = 6;
+  value_node = positive_int_value;
+  REQUIRE(value_node.getValue()->getTypeIndex() == org::apache::nifi::minifi::state::response::Value::INT_TYPE);
+  REQUIRE(canConvertToType(value_node));
+  REQUIRE(canConvertToType(value_node));
+  REQUIRE(canConvertToType(value_node));
+  REQUIRE(canConvertToType(value_node));
+  REQUIRE(canConvertToType (value_node));
+  REQUIRE(!canConvertToType(value_node));
+  REQUIRE(!canConvertToType(value_node));
+  REQUIRE(value_node.to_string() == "6");
+
+  int negative_int_value = -7;
+  value_node = negative_int_value;
+  REQUIRE(value_node.getValue()->getTypeIndex() == org::apache::nifi::minifi::state::response::Value::INT_TYPE);
+  REQUIRE(!canConvertToType(value_node));

Review comment:
   `REQUIRE_FALSE` might be used here




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] markobean commented on pull request #4919: NIFI-8348: upgrade jersey version to one fully compatible with Java 11

2021-03-24 Thread GitBox


markobean commented on pull request #4919:
URL: https://github.com/apache/nifi/pull/4919#issuecomment-805767465


   Thanks @markap14


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi-minifi-cpp] szaszm closed pull request #1039: MINIFICPP-1533

2021-03-24 Thread GitBox


szaszm closed pull request #1039:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1039


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a change in pull request #1032: MINIFICPP-1504: Add Resource consumption data to heartbeats

2021-03-24 Thread GitBox


adamdebreceni commented on a change in pull request #1032:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1032#discussion_r600410229



##
File path: libminifi/src/utils/SystemCPUUsageTracker.cpp
##
@@ -0,0 +1,186 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include "utils/SystemCPUUsageTracker.h"
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace utils {
+#ifdef __linux__
+
+SystemCPUUsageTracker::SystemCPUUsageTracker() :

Review comment:
   As the functionality here is inherently platform-specific, we could create separate `SystemCPUUsageTracker{Linux,Win,Apple}.cpp` files and include them here (they would need to be placed in a `platform` directory, as the `cpp` files are collected by a glob).
   
   Alternatively, we could also make the distinction at the CMake level.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi-minifi-cpp] szaszm closed pull request #979: MINIFICPP-1456 Introduce PutAzureBlobStorage processor

2021-03-24 Thread GitBox


szaszm closed pull request #979:
URL: https://github.com/apache/nifi-minifi-cpp/pull/979


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] Lehel44 commented on a change in pull request #4898: NIFI-8325: Improve SNMP processors

2021-03-24 Thread GitBox


Lehel44 commented on a change in pull request #4898:
URL: https://github.com/apache/nifi/pull/4898#discussion_r600401175



##
File path: nifi-nar-bundles/nifi-snmp-bundle/nifi-snmp-processors/src/test/java/org/apache/nifi/snmp/helper/SNMPTestUtil.java
##
@@ -0,0 +1,100 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.snmp.helper;
+
+import org.apache.nifi.flowfile.FlowFile;
+
+import java.net.ServerSocket;
+import java.util.HashMap;
+import java.util.Map;
+
+public class SNMPTestUtil {
+
+/**
+ * Will determine an available port.
+ */
+public static synchronized int availablePort() {
+    try (ServerSocket s = new ServerSocket(0)) {
+        s.setReuseAddress(true);
+        return s.getLocalPort();
+    } catch (Exception e) {
+        throw new IllegalStateException("Failed to discover available port.", e);
+    }
+}
+
+public static FlowFile createFlowFile(final long id, final long fileSize, final Map attributes) {

Review comment:
   No, I removed it.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (NIFI-8362) Set KEEP_SOURCE_FILE default to true for GetFile processor

2021-03-24 Thread dml (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

dml updated NIFI-8362:
--
Description: Currently the default value for the Keep Source File property 
on the GetFile processor is false. It would be safer to default this to true, 
in case the wrong directory path is entered and all files are removed from the 
directory.

> Set KEEP_SOURCE_FILE default to true for GetFile processor
> --
>
> Key: NIFI-8362
> URL: https://issues.apache.org/jira/browse/NIFI-8362
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: dml
>Priority: Major
>
> Currently the default value for the Keep Source File property on the GetFile 
> processor is false. It would be safer to default this to true, in case the 
> wrong directory path is entered and all files are removed from the directory.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a change in pull request #1032: MINIFICPP-1504: Add Resource consumption data to heartbeats

2021-03-24 Thread GitBox


adamdebreceni commented on a change in pull request #1032:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1032#discussion_r600388400



##
File path: libminifi/include/core/state/nodes/DeviceInformation.h
##
@@ -353,119 +355,104 @@ class DeviceInfoNode : public DeviceInformation {
   std::vector<SerializedResponseNode> serialize() {
 std::vector<SerializedResponseNode> serialized;
 
+serialized.push_back(serializeIdentifier());
+serialized.push_back(serializeSystemInfo());
+serialized.push_back(serializeNetworkInfo());
+
+return serialized;
+  }
+
+ protected:
+  SerializedResponseNode serializeIdentifier() {
 SerializedResponseNode identifier;
 identifier.name = "identifier";
 identifier.value = device_id_;
+return identifier;
+  }
 
-SerializedResponseNode systemInfo;
-systemInfo.name = "systemInfo";
-
-SerializedResponseNode vcores;
-vcores.name = "vCores";
-size_t ncpus = std::thread::hardware_concurrency();
-
-vcores.value = ncpus;
-
-systemInfo.children.push_back(vcores);
-
-SerializedResponseNode ostype;
-ostype.name = "operatingSystem";
-ostype.value = getOperatingSystem();
-
-systemInfo.children.push_back(ostype);
-#if defined(_SC_PHYS_PAGES) && defined(_SC_PAGESIZE)
-SerializedResponseNode mem;
-mem.name = "physicalMem";
-
-uint64_t mema = (size_t) sysconf(_SC_PHYS_PAGES) * (size_t) sysconf(_SC_PAGESIZE);
+  SerializedResponseNode serializeVCoreInfo() {
+SerializedResponseNode v_cores;
+v_cores.name = "vCores";
+v_cores.value = std::thread::hardware_concurrency();
+return v_cores;
+  }
 
-mem.value = mema;
+  SerializedResponseNode serializeOperatingSystemType() {
+SerializedResponseNode os_type;
+os_type.name = "operatingSystem";
+os_type.value = getOperatingSystem();
+return os_type;
+  }
 
-systemInfo.children.push_back(mem);
-#endif
-#ifndef WIN32
-SerializedResponseNode arch;
-arch.name = "machinearch";
+  SerializedResponseNode serializeTotalPhysicalMemoryInformation() {
+SerializedResponseNode total_physical_memory;
+total_physical_memory.name = "physicalMem";
+total_physical_memory.value = (uint64_t)utils::OsUtils::getSystemTotalPhysicalMemory();
+return total_physical_memory;
+  }
 
-utsname buf;
+  SerializedResponseNode serializePhysicalMemoryUsageInformation() {
+SerializedResponseNode used_physical_memory;
+used_physical_memory.name = "memoryUtilization";
+used_physical_memory.value = (uint64_t)utils::OsUtils::getSystemPhysicalMemoryUsage();

Review comment:
   ~`static_cast` is preferred (or better yet, `gsl::narrow` if need be)~
   
   There is no need for the cast here. If we really want to ensure that it is being called with `uint64_t`, we could go for
   
   ```suggestion
   used_physical_memory.value = uint64_t{utils::OsUtils::getSystemPhysicalMemoryUsage()};
   ```
   
   which will prevent, at compile time, a narrowing conversion from whatever the function returns to `uint64_t`.
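   
   The difference between the cast and the brace form can be demonstrated in isolation. The helper below is hypothetical and merely stands in for a function returning a narrower unsigned type:

   ```cpp
   #include <cassert>
   #include <cstdint>

   // Hypothetical stand-in for a memory-usage query returning a 32-bit value.
   static std::uint32_t physicalMemoryUsage() { return 123456u; }

   int main() {
     // Brace initialization permits only non-narrowing conversions, so this
     // widening compiles and is lossless.
     const std::uint64_t value{physicalMemoryUsage()};
     assert(value == 123456u);

     // The reverse direction would be rejected at compile time:
     //   const std::uint32_t bad{std::uint64_t{1} << 40};  // error: narrowing
     // whereas a C-style cast compiles and silently truncates:
     const std::uint32_t truncated = (std::uint32_t)(std::uint64_t{1} << 40);
     assert(truncated == 0);  // high bits discarded
     return 0;
   }
   ```

   So the brace form turns a potential silent truncation into a build failure, which is the point of the suggestion above.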




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (NIFI-8362) Set KEEP_SOURCE_FILE default to true for GetFile processor

2021-03-24 Thread dml (Jira)
dml created NIFI-8362:
-

 Summary: Set KEEP_SOURCE_FILE default to true for GetFile processor
 Key: NIFI-8362
 URL: https://issues.apache.org/jira/browse/NIFI-8362
 Project: Apache NiFi
  Issue Type: Improvement
Reporter: dml






--
This message was sent by Atlassian Jira
(v8.3.4#803005)

