[jira] [Commented] (NIFI-1706) Extend QueryDatabaseTable to support arbitrary queries
[ https://issues.apache.org/jira/browse/NIFI-1706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16431652#comment-16431652 ]

ASF GitHub Bot commented on NIFI-1706:
--------------------------------------

Github user patricker commented on the issue:

    https://github.com/apache/nifi/pull/2618

    Builds and unit tests for both QueryDatabaseTable and GenerateTableFetch were good. I ran a real test against an MS SQL table using the custom query function, and it worked as expected.

> Extend QueryDatabaseTable to support arbitrary queries
> ------------------------------------------------------
>
>              Key: NIFI-1706
>              URL: https://issues.apache.org/jira/browse/NIFI-1706
>          Project: Apache NiFi
>       Issue Type: Improvement
>       Components: Core Framework
> Affects Versions: 1.4.0
>         Reporter: Paul Bormans
>         Assignee: Peter Wicks
>         Priority: Major
>           Labels: features
>
> QueryDatabaseTable is able to observe a configured database table for new
> rows and yield them into flowfiles. The model of an RDBMS, however, is
> often (if not always) normalized, so you would need to join various tables in
> order to "flatten" the data into useful events for a processing pipeline, as
> can be built with NiFi or various tools within the Hadoop ecosystem.
> The request is to extend the processor to accept an arbitrary SQL query
> instead of specifying the table name and columns.
> In addition (this may be another issue?), it is desirable to limit the number of
> rows returned per run, not just because of bandwidth issues from the NiFi
> pipeline onwards, but mainly because huge databases may not be able to return
> so many records within a reasonable time.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
[jira] [Commented] (NIFI-5055) Need ability to un-penalize MockFlowFile
[ https://issues.apache.org/jira/browse/NIFI-5055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16431641#comment-16431641 ]

ASF GitHub Bot commented on NIFI-5055:
--------------------------------------

Github user markobean commented on the issue:

    https://github.com/apache/nifi/pull/2617

    Updated to use assertTrue and assertFalse. I had simply borrowed assertEquals from the previous unit test, so while updating, I changed that test to use assertTrue as well. Committed, squashed, and pushed.

> Need ability to un-penalize MockFlowFile
> ----------------------------------------
>
>              Key: NIFI-5055
>              URL: https://issues.apache.org/jira/browse/NIFI-5055
>          Project: Apache NiFi
>       Issue Type: Improvement
>       Components: Core Framework
> Affects Versions: 1.6.0
>         Reporter: Mark Bean
>         Assignee: Mark Bean
>         Priority: Major
>
> MockFlowFile has a method setPenalized() which sets the 'penalized'
> variable to true, and the isPenalized() method simply returns the value of
> 'penalized'. (In the real world, isPenalized() is time-based.) I believe an
> unsetPenalized() method may be needed.
[GitHub] nifi issue #2614: Added Apache Pulsar Processors and Controller Service
Github user MikeThomsen commented on the issue:

    https://github.com/apache/nifi/pull/2614

    @joewitt something changed with the way expression language support is declared in the `PropertyDescriptor.Builder`. TL;DR: when you use the deprecated `expressionLanguageSupported(boolean)` method, it doesn't seem to set up the new internal structure correctly.

---
[jira] [Commented] (NIFI-1706) Extend QueryDatabaseTable to support arbitrary queries
[ https://issues.apache.org/jira/browse/NIFI-1706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16431574#comment-16431574 ]

ASF GitHub Bot commented on NIFI-1706:
--------------------------------------

Github user patricker commented on the issue:

    https://github.com/apache/nifi/pull/2618

    Thanks @ijokarumawak for the fixes. I'm building and will test.
[GitHub] nifi pull request #2162: NIFI-1706 Extend QueryDatabaseTable to support arbi...
Github user patricker closed the pull request at:

    https://github.com/apache/nifi/pull/2162

---
[jira] [Commented] (NIFI-1706) Extend QueryDatabaseTable to support arbitrary queries
[ https://issues.apache.org/jira/browse/NIFI-1706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16431570#comment-16431570 ]

ASF GitHub Bot commented on NIFI-1706:
--------------------------------------

Github user patricker closed the pull request at:

    https://github.com/apache/nifi/pull/2162
[GitHub] nifi issue #2614: Added Apache Pulsar Processors and Controller Service
Github user joewitt commented on the issue:

    https://github.com/apache/nifi/pull/2614

    @MikeThomsen just wanted to clarify the pointers here. Did we break an API that existed prior to the 1.6 release that we should not have, or was this a newer thing being leveraged that just happened not to settle until the release itself and thus requires a little rework in the PR? Just want to make sure what we're putting @david-streamlio through here (and thanks for sticking with it, David) is isolated.

---
[jira] [Commented] (NIFI-4914) Implement record model processor for Pulsar, i.e. ConsumePulsarRecord, PublishPulsarRecord
[ https://issues.apache.org/jira/browse/NIFI-4914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16431542#comment-16431542 ]

ASF GitHub Bot commented on NIFI-4914:
--------------------------------------

Github user MikeThomsen commented on the issue:

    https://github.com/apache/nifi/pull/2614

    @david-streamlio you brought in a whole lot of commits from other people with that merge from upstream/master. I checked out your branch and did a rebase on it (`git rebase master`), and that seemed to clear it up. I would recommend doing that on your branch locally and verifying that the "merge commit" with the commit message `Merge remote-tracking branch 'upstream/master' into NIFI-4914` goes away.

    As a rule of thumb, this is how you want to do this sort of update:

    1. git checkout master
    2. git pull upstream master
    3. git checkout YOUR_BRANCH
    4. git rebase master

    Once you've done that, the last command will replay your commits on top of the most recent version of upstream/master.

> Implement record model processor for Pulsar, i.e. ConsumePulsarRecord, PublishPulsarRecord
> ------------------------------------------------------------------------------------------
>
>              Key: NIFI-4914
>              URL: https://issues.apache.org/jira/browse/NIFI-4914
>          Project: Apache NiFi
>       Issue Type: New Feature
>       Components: Extensions
> Affects Versions: 1.6.0
>         Reporter: David Kjerrumgaard
>         Priority: Minor
> Original Estimate: 168h
> Remaining Estimate: 168h
>
> Create record-based processors for Apache Pulsar
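The four-step update flow recommended above can be exercised end to end in a throwaway sandbox. This is a hedged sketch, not part of the original thread: a second local repository stands in for apache/nifi (the `upstream` remote), and the branch name `NIFI-4914` mirrors the PR branch, but any feature branch works the same way.

```shell
set -e
# Sandbox: a local repo simulates upstream (apache/nifi)
work=$(mktemp -d)
git init -q "$work/upstream"
git -C "$work/upstream" symbolic-ref HEAD refs/heads/master
git -C "$work/upstream" config user.name demo
git -C "$work/upstream" config user.email demo@example.com
git -C "$work/upstream" commit -q --allow-empty -m "base"

# Contributor's clone, with a feature branch carrying one commit
git clone -q "$work/upstream" "$work/fork"
git -C "$work/fork" config user.name demo
git -C "$work/fork" config user.email demo@example.com
git -C "$work/fork" remote add upstream "$work/upstream"
git -C "$work/fork" checkout -qb NIFI-4914
git -C "$work/fork" commit -q --allow-empty -m "NIFI-4914 feature work"

# Meanwhile, upstream/master moves ahead
git -C "$work/upstream" commit -q --allow-empty -m "other work"

# Steps 1-4 from the comment above:
cd "$work/fork"
git checkout -q master
git pull -q upstream master   # 1+2: fast-forward local master
git checkout -q NIFI-4914     # 3: back to the feature branch
git rebase -q master          # 4: replay the feature commit on top

# The branch history is now linear: no merge commits remain
git rev-list --merges --count master..NIFI-4914   # prints 0
```

Because the rebase rewrites the branch's history, updating an already-open PR afterwards needs a force push (e.g. `git push --force-with-lease origin NIFI-4914`).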
[jira] [Commented] (MINIFICPP-449) Allow cURL to be built and statically linked
[ https://issues.apache.org/jira/browse/MINIFICPP-449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16431532#comment-16431532 ]

ASF GitHub Bot commented on MINIFICPP-449:
------------------------------------------

Github user phrocker commented on a diff in the pull request:

    https://github.com/apache/nifi-minifi-cpp/pull/296#discussion_r180265512

    --- Diff: extensions/http-curl/CMakeLists.txt ---
    @@ -42,9 +42,9 @@ if(CMAKE_THREAD_LIBS_INIT)
     endif()

     if (CURL_FOUND)
    -include_directories(${CURL_INCLUDE_DIRS})
    -target_link_libraries (minifi-http-curl ${CURL_LIBRARIES})
    -endif(CURL_FOUND)
    +  include_directories(${CURL_INCLUDE_DIRS})
    +  target_link_libraries(minifi-http-curl ${CURL_LIBRARIES})
    --- End diff --

    ^ Even in the case of a system library, we could rely on the static library if it exists. We don't do that now, to keep the binary size small, but we should make it clear that while downloading the source and building manually is an option, in most cases a static curl lib likely already exists on the build system. I've just checked my CentOS 7 and 6, Ubuntu 16-18, Fedora, and SUSE VMs, and they all have static libs. There should be a little work here to ensure we use that IF the user wants it. Perhaps that necessitates an added option?

> Allow cURL to be built and statically linked
> --------------------------------------------
>
>              Key: MINIFICPP-449
>              URL: https://issues.apache.org/jira/browse/MINIFICPP-449
>          Project: NiFi MiNiFi C++
>       Issue Type: Improvement
>         Reporter: Andrew Christianson
>         Assignee: Andrew Christianson
>         Priority: Major
>
> Allowing cURL to be built as an external project and linked statically will
> help support certain embedded deployments and certain portability situations.
[jira] [Commented] (MINIFICPP-449) Allow cURL to be built and statically linked
[ https://issues.apache.org/jira/browse/MINIFICPP-449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16431533#comment-16431533 ]

ASF GitHub Bot commented on MINIFICPP-449:
------------------------------------------

Github user phrocker commented on a diff in the pull request:

    https://github.com/apache/nifi-minifi-cpp/pull/296#discussion_r180264700

    --- Diff: main/CMakeLists.txt ---
    @@ -66,7 +66,7 @@ else ()
     target_link_libraries (minifiexe -Wl,--whole-archive minifi -Wl,--no-whole-archive)
     endif ()

    -target_link_libraries(minifiexe yaml-cpp ${UUID_LIBRARIES} ${OPENSSL_LIBRARIES})
    +target_link_libraries(minifiexe yaml-cpp ${UUID_LIBRARIES} ${CURL_LIBRARIES} ${OPENSSL_LIBRARIES})
    --- End diff --

    This should not be included here. It should be passed transitively through minifi-http-curl, iff that is enabled.
[jira] [Commented] (MINIFICPP-449) Allow cURL to be built and statically linked
[ https://issues.apache.org/jira/browse/MINIFICPP-449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16431523#comment-16431523 ]

ASF GitHub Bot commented on MINIFICPP-449:
------------------------------------------

Github user phrocker commented on a diff in the pull request:

    https://github.com/apache/nifi-minifi-cpp/pull/296#discussion_r180264480

    --- Diff: CMakeLists.txt ---
    @@ -117,6 +118,49 @@ else ()
     message( FATAL_ERROR "OpenSSL was not found. Please install OpenSSL" )
     endif (OPENSSL_FOUND)

    +if(NOT USE_SYSTEM_CURL)
    --- End diff --

    This should be in http-curl. If curl is disabled, there is no point in adding it as an external project.
[jira] [Commented] (NIFI-4035) Implement record-based Solr processors
[ https://issues.apache.org/jira/browse/NIFI-4035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16431421#comment-16431421 ]

ASF GitHub Bot commented on NIFI-4035:
--------------------------------------

Github user abhinavrohatgi30 commented on the issue:

    https://github.com/apache/nifi/pull/2561

    Hi, I've looked at the comments and made the following changes as part of the latest commit, covering all of them:

    1. Fixed the issue with nested records (the issue came up because of the change in field names in the previous commit)
    2. Fixed the issue with arrays of records (it was generating an Object[] as opposed to the Record[] I was expecting, and as a result was storing the string representation of a Record)
    3. Trimming field names individually
    4. Added test cases for nested records, arrays of records, and record parser failure
    5. Using getLogger() later in the code
    6. Wrapping the JSONs in additionalDetails.html in a tag

    I hope the processor now works as expected; let me know if any further changes are needed. Thanks!

> Implement record-based Solr processors
> --------------------------------------
>
>              Key: NIFI-4035
>              URL: https://issues.apache.org/jira/browse/NIFI-4035
>          Project: Apache NiFi
>       Issue Type: Improvement
> Affects Versions: 1.2.0, 1.3.0
>         Reporter: Bryan Bende
>         Priority: Minor
>
> Now that we have record readers and writers, we should implement variants of
> the existing Solr processors that are record-based...
> Processors to consider:
> * PutSolrRecord - uses a configured record reader to read an incoming flow
>   file and insert records into Solr
> * GetSolrRecord - extracts records from Solr and uses a configured record
>   writer to write them to a flow file
[jira] [Assigned] (NIFI-5063) Add documentation for "primary node" processors
[ https://issues.apache.org/jira/browse/NIFI-5063?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Andrew Lim reassigned NIFI-5063:
--------------------------------

    Assignee: Andrew Lim

> Add documentation for "primary node" processors
> -----------------------------------------------
>
>              Key: NIFI-5063
>              URL: https://issues.apache.org/jira/browse/NIFI-5063
>          Project: Apache NiFi
>       Issue Type: Improvement
>       Components: Documentation & Website
>         Reporter: Andrew Lim
>         Assignee: Andrew Lim
>         Priority: Trivial
>
> Processors that have been configured for "primary node" only are now
> identified in the UI on both the canvas and in the Summary page. It would be
> helpful to document this in the User Guide.
[jira] [Updated] (NIFI-5063) Add documentation for "primary node" processors
[ https://issues.apache.org/jira/browse/NIFI-5063?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Andrew Lim updated NIFI-5063:
-----------------------------

    Description:
        Processors that have been configured for "primary node" only are now identified in the UI on both the canvas and in the Summary page. It would be helpful to document this in the User Guide.

    was:
        Processors that have been configured for "primary node" only are now identified in the UI on bot the canvas and in the Summary page. It would be helpful to document this in the User Guide.
[jira] [Created] (NIFI-5063) Add documentation for "primary node" processors
Andrew Lim created NIFI-5063:
-----------------------------

         Summary: Add documentation for "primary node" processors
             Key: NIFI-5063
             URL: https://issues.apache.org/jira/browse/NIFI-5063
         Project: Apache NiFi
      Issue Type: Improvement
      Components: Documentation & Website
        Reporter: Andrew Lim

Processors that have been configured for "primary node" only are now identified in the UI on bot the canvas and in the Summary page. It would be helpful to document this in the User Guide.
[GitHub] nifi pull request #2620: NIFI-4941 Updated nifi.sensitive.props.additional.k...
GitHub user andrewmlim opened a pull request:

    https://github.com/apache/nifi/pull/2620

    NIFI-4941 Updated nifi.sensitive.props.additional.keys description to…

    … refer to nifi.properties

    Also corrected formatting for a reference to the bootstrap.conf file.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/andrewmlim/nifi NIFI-4941

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/nifi/pull/2620.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #2620

commit a4c5bc5cd58cf39f715732750a813d227d261da7
Author: Andrew Lim
Date:   2018-04-09T20:12:37Z

    NIFI-4941 Updated nifi.sensitive.props.additional.keys description to refer to nifi.properties

---
[jira] [Commented] (NIFI-4941) Make description of "nifi.sensitive.props.additional.keys" property more explicit by referring to properties in nifi.properties
[ https://issues.apache.org/jira/browse/NIFI-4941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16431161#comment-16431161 ]

ASF GitHub Bot commented on NIFI-4941:
--------------------------------------

Github user scottyaslan commented on the issue:

    https://github.com/apache/nifi/pull/2620

    Will review...

> Make description of "nifi.sensitive.props.additional.keys" property more
> explicit by referring to properties in nifi.properties
> ------------------------------------------------------------------------
>
>              Key: NIFI-4941
>              URL: https://issues.apache.org/jira/browse/NIFI-4941
>          Project: Apache NiFi
>       Issue Type: Improvement
>       Components: Documentation & Website
>         Reporter: Andrew Lim
>         Assignee: Andrew Lim
>         Priority: Trivial
>
> The description in 1.5.0 is:
> "The comma separated list of properties to encrypt in addition to the default
> sensitive properties (see Encrypt-Config Tool)."
> It should clarify which "properties" can be encrypted.
[jira] [Updated] (MINIFICPP-445) Implement escape/unescape CSV functions in expression language
[ https://issues.apache.org/jira/browse/MINIFICPP-445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Aldrin Piri updated MINIFICPP-445:
----------------------------------

    Fix Version/s: 0.5.0

> Implement escape/unescape CSV functions in expression language
> ---------------------------------------------------------------
>
>              Key: MINIFICPP-445
>              URL: https://issues.apache.org/jira/browse/MINIFICPP-445
>          Project: NiFi MiNiFi C++
>       Issue Type: Improvement
>         Reporter: Andrew Christianson
>         Assignee: Andrew Christianson
>         Priority: Major
>          Fix For: 0.5.0
>
> * [escapeCsv|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#escapecsv]
> * [unescapeCsv|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#unescapecsv]
[jira] [Commented] (MINIFICPP-445) Implement escape/unescape CSV functions in expression language
[ https://issues.apache.org/jira/browse/MINIFICPP-445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16431151#comment-16431151 ]

ASF GitHub Bot commented on MINIFICPP-445:
------------------------------------------

Github user asfgit closed the pull request at:

    https://github.com/apache/nifi-minifi-cpp/pull/293
[jira] [Created] (NIFI-5062) Remove hbase-client from nifi-hbase-bundle pom
Bryan Bende created NIFI-5062:
------------------------------

         Summary: Remove hbase-client from nifi-hbase-bundle pom
             Key: NIFI-5062
             URL: https://issues.apache.org/jira/browse/NIFI-5062
         Project: Apache NiFi
      Issue Type: Improvement
Affects Versions: 1.6.0
        Reporter: Bryan Bende

Since the hbase-client dependency should come from the client service implementation, we shouldn't need to specify it here:

    https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-hbase-bundle/pom.xml#L43

We should also make hbase.version a property that can be overridden at build time, rather than hard-coding it here:

    https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-standard-services/nifi-hbase_1_1_2-client-service-bundle/nifi-hbase_1_1_2-client-service/pom.xml#L73
[jira] [Resolved] (MINIFICPP-445) Implement escape/unescape CSV functions in expression language
[ https://issues.apache.org/jira/browse/MINIFICPP-445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Aldrin Piri resolved MINIFICPP-445.
-----------------------------------

    Resolution: Fixed
[GitHub] nifi issue #2614: Added Apache Pulsar Processors and Controller Service
Github user david-streamlio commented on the issue:

    https://github.com/apache/nifi/pull/2614

    I don't see an upstream branch labeled 1.6.0. Which upstream branch should I merge into my local branch?

---
[jira] [Commented] (NIFI-543) Provide extensions a way to indicate that they can run only on primary node, if clustered
[ https://issues.apache.org/jira/browse/NIFI-543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16431140#comment-16431140 ]

ASF GitHub Bot commented on NIFI-543:
-------------------------------------

Github user markap14 commented on a diff in the pull request:

    https://github.com/apache/nifi/pull/2509#discussion_r180211928

    --- Diff: nifi-docs/src/main/asciidoc/developer-guide.adoc ---
    @@ -1751,6 +1751,12 @@ will handle your Processor:
     will always be set to `1`. This does *not*, however, mean that the Processor does not have to be thread-safe, as the
     thread that is executing `onTrigger` may change between invocations.

    +- `PrimaryNodeOnly`: Apache NiFi, when clustered, offers two modes of execution for Processors: "Primary Node" and
    +"All Nodes". Although running in all the nodes offers better parallelism, some Processors are known to cause unintended
    +behaviors when run in multiple nodes. For instance, some Processors lists or reads files from remote filesystems. If such
    --- End diff --

    Typo in the docs: I think it should read "some Processors list or read files" rather than "lists or reads".

> Provide extensions a way to indicate that they can run only on primary node,
> if clustered
> ----------------------------------------------------------------------------
>
>              Key: NIFI-543
>              URL: https://issues.apache.org/jira/browse/NIFI-543
>          Project: Apache NiFi
>       Issue Type: Sub-task
>       Components: Core Framework, Documentation & Website, Extensions
>         Reporter: Mark Payne
>         Assignee: Sivaprasanna Sethuraman
>         Priority: Major
>
> There are Processors that are known to be problematic if run from multiple
> nodes simultaneously. These processors should be able to use a
> @PrimaryNodeOnly annotation (or something similar) to indicate that they can
> be scheduled to run only on the primary node if run in a cluster.
[jira] [Commented] (NIFI-543) Provide extensions a way to indicate that they can run only on primary node, if clustered
[ https://issues.apache.org/jira/browse/NIFI-543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16431139#comment-16431139 ] ASF GitHub Bot commented on NIFI-543: - Github user markap14 commented on a diff in the pull request: https://github.com/apache/nifi/pull/2509#discussion_r180211409 --- Diff: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-ui/src/main/webapp/js/nf/canvas/nf-processor-configuration.js --- @@ -741,8 +742,8 @@ } }); -// show the execution node option if we're cluster or we're currently configured to run on the primary node only -if (nfClusterSummary.isClustered() || executionNode === 'PRIMARY') { +// show the execution node option if we're clustered and execution node is not restricted to run only in primary node +if (nfClusterSummary.isClustered() && executionNodeRestricted !== true) { --- End diff -- I think we still need the executionNode === 'PRIMARY' here: ``` if ((nfClusterSummary.isClustered() && executionNodeRestricted !== true) || executionNode === 'PRIMARY') { ``` This way, if running in standalone mode, but the processor is marked with an ExecutionNode of Primary Node (which may be the case if instantiating a template from a cluster, or if copying a flow.xml.gz over or something like that) we still have the ability to change it to 'All Nodes'. > Provide extensions a way to indicate that they can run only on primary node, > if clustered > - > > Key: NIFI-543 > URL: https://issues.apache.org/jira/browse/NIFI-543 > Project: Apache NiFi > Issue Type: Sub-task > Components: Core Framework, Documentation Website, Extensions >Reporter: Mark Payne >Assignee: Sivaprasanna Sethuraman >Priority: Major > > There are Processors that are known to be problematic if run from multiple > nodes simultaneously. These processors should be able to use a > @PrimaryNodeOnly annotation (or something similar) to indicate that they can > be scheduled to run only on primary node if run in a cluster. 
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-543) Provide extensions a way to indicate that they can run only on primary node, if clustered
[ https://issues.apache.org/jira/browse/NIFI-543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16431141#comment-16431141 ] ASF GitHub Bot commented on NIFI-543: - Github user markap14 commented on a diff in the pull request: https://github.com/apache/nifi/pull/2509#discussion_r180211507 --- Diff: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-ui/src/main/webapp/js/nf/nf-processor-details.js --- @@ -215,9 +215,10 @@ } var executionNode = details.config['executionNode']; +var executionNodeRestricted = details.executionNodeRestricted // only show the execution-node when applicable -if (nfClusterSummary.isClustered() || executionNode === 'PRIMARY') { +if (nfClusterSummary.isClustered() && executionNodeRestricted !== true) { --- End diff -- I think we need the executionNode === 'PRIMARY' to still be considered here as well. > Provide extensions a way to indicate that they can run only on primary node, > if clustered > - > > Key: NIFI-543 > URL: https://issues.apache.org/jira/browse/NIFI-543 > Project: Apache NiFi > Issue Type: Sub-task > Components: Core Framework, Documentation Website, Extensions >Reporter: Mark Payne >Assignee: Sivaprasanna Sethuraman >Priority: Major > > There are Processors that are known to be problematic if run from multiple > nodes simultaneously. These processors should be able to use a > @PrimaryNodeOnly annotation (or something similar) to indicate that they can > be scheduled to run only on primary node if run in a cluster. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
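The condition discussed in the review comments above is small but easy to get wrong. As a sketch (hypothetical class and method names, not NiFi source code), the rule markap14 suggests can be expressed as a pure function in Java:

```java
// Illustrative sketch, not NiFi source: the visibility rule suggested for the
// "Execution Node" option, expressed as a pure function.
public class ExecutionNodeOption {

    // Show the option when the instance is clustered and the processor is not
    // restricted to the primary node, OR when the processor is already
    // configured as PRIMARY (e.g. a template instantiated from a cluster into
    // a standalone instance), so the user can switch it back to "All Nodes".
    public static boolean shouldShow(boolean clustered, boolean executionNodeRestricted, String executionNode) {
        return (clustered && !executionNodeRestricted) || "PRIMARY".equals(executionNode);
    }
}
```

Without the trailing `"PRIMARY"` check, a standalone instance could never change a processor imported with Execution Node set to Primary Node back to "All Nodes", which is exactly the case the review calls out.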
[GitHub] nifi issue #2614: Added Apache Pulsar Processors and Controller Service
Github user MikeThomsen commented on the issue: https://github.com/apache/nifi/pull/2614 I think it made it into "1.6.0-SNAPSHOT" after RC3 was cut. You have a bunch of errors in there, but they look like this: > [ERROR] testSingleRecordSuccess(org.apache.nifi.processors.pulsar.pubsub.async.TestAsyncPublishPulsarRecord_1_X) Time elapsed: 0.011 s <<< FAILURE! java.lang.AssertionError: java.lang.IllegalStateException: Attempting to evaluate expression language for topic using flow file attributes but the scope evaluation is set to NONE. The proper scope should be set in the property descriptor using PropertyDescriptor.Builder.expressionLanguageSupported(ExpressionLanguageScope) at org.apache.nifi.processors.pulsar.pubsub.async.TestAsyncPublishPulsarRecord_1_X.testSingleRecordSuccess(TestAsyncPublishPulsarRecord_1_X.java:87) Caused by: java.lang.IllegalStateException: Attempting to evaluate expression language for topic using flow file attributes but the scope evaluation is set to NONE. The proper scope should be set in the property descriptor using PropertyDescriptor.Builder.expressionLanguageSupported(ExpressionLanguageScope) ---
[jira] [Commented] (MINIFICPP-449) Allow cURL to be built and statically linked
[ https://issues.apache.org/jira/browse/MINIFICPP-449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16431119#comment-16431119 ] ASF GitHub Bot commented on MINIFICPP-449: -- GitHub user achristianson opened a pull request: https://github.com/apache/nifi-minifi-cpp/pull/296 MINIFICPP-449 Add cURL external project build with static linking Thank you for submitting a contribution to Apache NiFi - MiNiFi C++. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken: ### For all changes: - [x] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? - [x] Does your PR title start with MINIFI- where is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character. - [x] Has your PR been rebased against the latest commit within the target branch (typically master)? - [x] Is your initial contribution a single, squashed commit? ### For code changes: - [x] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [x] If applicable, have you updated the LICENSE file? - [x] If applicable, have you updated the NOTICE file? ### For documentation related changes: - [x] Have you ensured that format looks appropriate for the output in which it is rendered? ### Note: Please ensure that once the PR is submitted, you check travis-ci for build issues and submit an update to your PR as soon as possible. 
You can merge this pull request into a Git repository by running: $ git pull https://github.com/achristianson/nifi-minifi-cpp MINIFICPP-449 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi-minifi-cpp/pull/296.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #296 commit da754443dfaabeae56796aa2e8d0528dc4b8a4d4 Author: Andrew I. Christianson Date: 2018-03-22T19:49:50Z MINIFICPP-449 Add cURL external project build with static linking > Allow cURL to be built and statically linked > > > Key: MINIFICPP-449 > URL: https://issues.apache.org/jira/browse/MINIFICPP-449 > Project: NiFi MiNiFi C++ > Issue Type: Improvement >Reporter: Andrew Christianson >Assignee: Andrew Christianson >Priority: Major > > Allowing cURL to be built as an external project and linked statically will > help support certain embedded deployments and certain portability situations. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] nifi issue #2614: Added Apache Pulsar Processors and Controller Service
Github user david-streamlio commented on the issue: https://github.com/apache/nifi/pull/2614 So, is this a change in the 1.7.x code base or is it already in the 1.6.0 code? I created my fork back on Feb 22nd based on the 1.6.0-SNAPSHOT branch, which does not have these enums. Should I create a new fork of 1.6.x? of 1.7.x? Please advise ---
[jira] [Commented] (NIFI-5055) Need ability to un-penalize MockFlowFile
[ https://issues.apache.org/jira/browse/NIFI-5055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16431096#comment-16431096 ] ASF GitHub Bot commented on NIFI-5055: -- Github user MikeThomsen commented on a diff in the pull request: https://github.com/apache/nifi/pull/2617#discussion_r180204173 --- Diff: nifi-mock/src/test/java/org/apache/nifi/util/TestMockProcessSession.java --- @@ -101,6 +101,17 @@ public void testKeepPenalizedStatusAfterPuttingAttribute(){ assertEquals(true, ff1.isPenalized()); } +@Test +public void testUnpenalizeFlowFile() { +final Processor processor = new PoorlyBehavedProcessor(); +final MockProcessSession session = new MockProcessSession(new SharedSessionState(processor, new AtomicLong(0L)), processor); +FlowFile ff1 = session.createFlowFile("hello, world".getBytes()); +ff1 = session.penalize(ff1); +assertEquals(true, ff1.isPenalized()); +ff1 = session.unpenalize(ff1); +assertEquals(false, ff1.isPenalized()); --- End diff -- Please change to `assertFalse` > Need ability to un-penalize MockFlowFile > > > Key: NIFI-5055 > URL: https://issues.apache.org/jira/browse/NIFI-5055 > Project: Apache NiFi > Issue Type: Improvement > Components: Core Framework >Affects Versions: 1.6.0 >Reporter: Mark Bean >Assignee: Mark Bean >Priority: Major > > The MockFlowFile has a method setPenalized() which sets the 'penalized' > variable to true. And, the isPenalized() method simply returns the value of > 'penalized'. (In the real world, isPenalized() is time-based.) I believe an > unsetPenalized() method may be needed. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-1295) Add UI option to interrupt a running processor
[ https://issues.apache.org/jira/browse/NIFI-1295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16431097#comment-16431097 ] ASF GitHub Bot commented on NIFI-1295: -- Github user markap14 commented on a diff in the pull request: https://github.com/apache/nifi/pull/2607#discussion_r180204274 --- Diff: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/FlowController.java --- @@ -3846,6 +3851,73 @@ public QueueSize getTotalFlowFileCount(final ProcessGroup group) { return new QueueSize(count, contentSize); } +public class GroupStatusCounts { --- End diff -- Ah yes, I overlooked that. It is a bit difficult to find these things in a Github Diff page :) > Add UI option to interrupt a running processor > -- > > Key: NIFI-1295 > URL: https://issues.apache.org/jira/browse/NIFI-1295 > Project: Apache NiFi > Issue Type: Sub-task > Components: Core UI >Affects Versions: 0.4.0 >Reporter: Oleg Zhurakousky >Assignee: Matt Gilman >Priority: Major > > Basically we need to expose an option to a user to kill Processors that can't be > shut down the usual way (see NIFI-78 for more details). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-5055) Need ability to un-penalize MockFlowFile
[ https://issues.apache.org/jira/browse/NIFI-5055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16431095#comment-16431095 ] ASF GitHub Bot commented on NIFI-5055: -- Github user MikeThomsen commented on a diff in the pull request: https://github.com/apache/nifi/pull/2617#discussion_r180204129 --- Diff: nifi-mock/src/test/java/org/apache/nifi/util/TestMockProcessSession.java --- @@ -101,6 +101,17 @@ public void testKeepPenalizedStatusAfterPuttingAttribute(){ assertEquals(true, ff1.isPenalized()); } +@Test +public void testUnpenalizeFlowFile() { +final Processor processor = new PoorlyBehavedProcessor(); +final MockProcessSession session = new MockProcessSession(new SharedSessionState(processor, new AtomicLong(0L)), processor); +FlowFile ff1 = session.createFlowFile("hello, world".getBytes()); +ff1 = session.penalize(ff1); +assertEquals(true, ff1.isPenalized()); --- End diff -- A little better if it's `assertTrue` > Need ability to un-penalize MockFlowFile > > > Key: NIFI-5055 > URL: https://issues.apache.org/jira/browse/NIFI-5055 > Project: Apache NiFi > Issue Type: Improvement > Components: Core Framework >Affects Versions: 1.6.0 >Reporter: Mark Bean >Assignee: Mark Bean >Priority: Major > > The MockFlowFile has a method setPenalized() which sets the 'penalized' > variable to true. And, the isPenalized() method simply returns the value of > 'penalized'. (In the real world, isPenalized() is time-based.) I believe an > unsetPenalized() method may be needed. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
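The behavior under review in the NIFI-5055 thread above is simple enough to model with a minimal stand-in (a hypothetical class, not the NiFi mock framework): `penalize()` sets a flag and the new `unpenalize()` clears it, which is what the proposed test exercises.

```java
// Minimal stand-in for illustration only; the real change is to
// MockFlowFile / MockProcessSession in nifi-mock, where isPenalized()
// simply returns the stored flag (in production code it is time-based).
public class PenalizableFlowFile {
    private boolean penalized;

    public PenalizableFlowFile penalize() {
        penalized = true;
        return this;
    }

    public PenalizableFlowFile unpenalize() {
        penalized = false;
        return this;
    }

    public boolean isPenalized() {
        return penalized;
    }
}
```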
[jira] [Commented] (NIFI-5060) UpdateRecord substringAfter and substringAfterLast only increments by 1
[ https://issues.apache.org/jira/browse/NIFI-5060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16431092#comment-16431092 ] Mark Payne commented on NIFI-5060: -- [~greenCee] thanks for reporting the bug. I think you may have copied & pasted the same value for the "Resulting record currently" and "Resulting record should be". What I believe the Resulting Record should be is: {code:java} [ { "value": "01230123", "example1": "230123", "example2": "0123", "example3": "23", "example4": "" } ]{code} Do you agree? > UpdateRecord substringAfter and substringAfterLast only increments by 1 > --- > > Key: NIFI-5060 > URL: https://issues.apache.org/jira/browse/NIFI-5060 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Affects Versions: 1.6.0 >Reporter: Chris Green >Priority: Major > Labels: easyfix, newbie > Attachments: Validate_substringafter_Behavior.xml > > > This is my first submitted issue, so please feel free to point me in the > correct direction if I make process mistakes. > Replication: > Drag a GenerateFlowFile onto the canvas and configure this property, and set > run schedule to some high value like 600 seconds > "Custom Text" \{"value": "01230123"} > Connect GenerateFlowFile with an UpdateAttribute set to add the attribute > "avro.schema" with a value of "{ "type": "record", > "name": "test", > "fields" : [\{"name": "value", "type": "string"}] > }" > > Connect UpdateAttribute to an UpdateRecord onto the canvas, Autoterminate > success and failure. Set the Record Reader to a new JSONTreeReader. On the > JsonTreeReader configure it to use the "Use 'Schema Text' Attribute". 
> Create a JsonRecordSetWriter and set the Schema Text to: > "{ "type": "record", > "name": "test", > "fields" : [\{"name": "value", "type": "string"}, > {"name": "example1", "type": "string"}, > {"name": "example2", "type": "string"}, > {"name": "example3", "type": "string"}, > {"name": "example4", "type": "string"}] > }" > Add the following properties to UpdateRecord > > ||Property||Value|| > |/example1|substringAfter(/value, "1") | > |/example2|substringAfter(/value, "123") | > |/example3|substringAfterLast(/value, "1")| > |/example4|substringAfterLast(/value, "123")| > > Resulting record currently: > [ { > "value" : "01230123", > "example1" : "230123", > "example2" : "30123", > "example3" : "23", > "example4" : "3" > } ] > > > Problem: > When using the UpdateRecord processor and the substringAfter() function, after the search phrase is found it will only increment the substring > returned by 1 rather than the length of the search term. > Based on XPath and other implementations of substringAfter functions I've > seen, the value returned should remove the search term rather than just the > first character of the search term. > > > Resulting record should be: > [ { > "value" : "01230123", > "example1" : "230123", > "example2" : "30123", > "example3" : "23", > "example4" : "3" > } ] > > I'm cleaning up a fix with test code that will change the increment from 1 to > the length of the search term. > It appears substringBefore is not impacted by this behavior, as it always returns > the index before the found search term, which is the expected behavior -- This message was sent by Atlassian JIRA (v7.6.3#76005)
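The off-by-(length - 1) nature of the bug can be sketched as follows (illustrative Java, not the actual NiFi record-path code, whose exact arithmetic may differ): the result should start after the whole search term, not one character past where the term begins.

```java
// Illustrative sketch of NIFI-5060; not the actual NiFi implementation.
public class SubstringAfterDemo {

    // Buggy shape: advances by 1 regardless of the search term's length.
    public static String substringAfterBuggy(String value, String searchTerm) {
        final int index = value.indexOf(searchTerm);
        return index < 0 ? value : value.substring(index + 1);
    }

    // Fixed shape: advances by the full length of the search term, matching
    // XPath-style substring-after semantics.
    public static String substringAfterFixed(String value, String searchTerm) {
        final int index = value.indexOf(searchTerm);
        return index < 0 ? value : value.substring(index + searchTerm.length());
    }
}
```

For a one-character search term the two shapes agree, which is why `/example1` looks correct while `/example2` does not.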
[jira] [Commented] (NIFIREG-158) Should be able to retrieve a flow by id without the bucket id
[ https://issues.apache.org/jira/browse/NIFIREG-158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16431070#comment-16431070 ] ASF GitHub Bot commented on NIFIREG-158: Github user asfgit closed the pull request at: https://github.com/apache/nifi-registry/pull/108 > Should be able to retrieve a flow by id without the bucket id > - > > Key: NIFIREG-158 > URL: https://issues.apache.org/jira/browse/NIFIREG-158 > Project: NiFi Registry > Issue Type: Improvement >Affects Versions: 0.1.0 >Reporter: Bryan Bende >Assignee: Bryan Bende >Priority: Minor > Fix For: 0.2.0 > > > Currently all the flow end-points are nested under a bucket, like: > {code:java} > /buckets/{bucketId}/flows/{flowId} > /buckets/{bucketId}/flows/{flowId}/versions/{version}{code} > The flow identifier is unique across all buckets, so once the flow is created > we should be able to support end-points without the bucket id: > {code:java} > /flows/{flowId} > /flows/{flowId}/versions/{version} > {code} > We still need to look at the bucket and authorize the bucket id when secure, > but we'll have the bucket id from retrieving the flow. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (NIFIREG-158) Should be able to retrieve a flow by id without the bucket id
[ https://issues.apache.org/jira/browse/NIFIREG-158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kevin Doran resolved NIFIREG-158. - Resolution: Done Fix Version/s: 0.2.0 > Should be able to retrieve a flow by id without the bucket id > - > > Key: NIFIREG-158 > URL: https://issues.apache.org/jira/browse/NIFIREG-158 > Project: NiFi Registry > Issue Type: Improvement >Affects Versions: 0.1.0 >Reporter: Bryan Bende >Assignee: Bryan Bende >Priority: Minor > Fix For: 0.2.0 > > > Currently all the flow end-points are nested under a bucket, like: > {code:java} > /buckets/{bucketId}/flows/{flowId} > /buckets/{bucketId}/flows/{flowId}/versions/{version}{code} > The flow identifier is unique across all buckets, so once the flow is created > we should be able to support end-points without the bucket id: > {code:java} > /flows/{flowId} > /flows/{flowId}/versions/{version} > {code} > We still need to look at the bucket and authorize the bucket id when secure, > but we'll have the bucket id from retrieving the flow. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] nifi issue #2614: Added Apache Pulsar Processors and Controller Service
Github user MikeThomsen commented on the issue: https://github.com/apache/nifi/pull/2614 > Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.20.1:test (default-test) on project nifi-pulsar-processors: There are test failures. Related to how you set up the expression language support. The boolean version of the method call is deprecated and replaced with one that uses an enum in 1.7. ---
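As a self-contained mimic of the API shape MikeThomsen describes (hypothetical classes; the real types are NiFi's `PropertyDescriptor.Builder` and `org.apache.nifi.expression.ExpressionLanguageScope`), the fix is to declare the evaluation scope explicitly instead of relying on the deprecated boolean overload:

```java
// Hypothetical mimic of the NiFi 1.7 API shape, for illustration only.
public class PropertySketch {

    // Mirrors the constants of org.apache.nifi.expression.ExpressionLanguageScope.
    public enum Scope { NONE, VARIABLE_REGISTRY, FLOWFILE_ATTRIBUTES }

    private final Scope scope;

    private PropertySketch(Scope scope) {
        this.scope = scope;
    }

    // Stands in for PropertyDescriptor.Builder.expressionLanguageSupported(scope).
    public static PropertySketch withScope(Scope scope) {
        return new PropertySketch(scope);
    }

    // The quoted test failure occurs when a property is evaluated against
    // flow file attributes but was declared with scope NONE.
    public boolean canEvaluateAgainstAttributes() {
        return scope == Scope.FLOWFILE_ATTRIBUTES;
    }
}
```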
[jira] [Updated] (NIFI-4857) Record components do not support String <-> byte[] conversions
[ https://issues.apache.org/jira/browse/NIFI-4857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Payne updated NIFI-4857: - Resolution: Fixed Fix Version/s: 1.7.0 Status: Resolved (was: Patch Available) > Record components do not support String <-> byte[] conversions > -- > > Key: NIFI-4857 > URL: https://issues.apache.org/jira/browse/NIFI-4857 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Reporter: Matt Burgess >Assignee: Matt Burgess >Priority: Major > Fix For: 1.7.0 > > > When trying to perform a conversion of a field between a String and a byte > array, various errors are reported (depending on where the conversion is > taking place). Here are some examples: > 1) CSVReader, if a column with String values is specified in the schema as > "bytes" > 2) ConvertRecord, if an input field is of type String and the output field is > of type "bytes" > 3) ConvertRecord, if an input field is of type "bytes" and the output field > is of type "String" > Many/most/all of the record components use utility methods to convert values; > I believe these methods need to be updated to support conversion between > String and byte[] values. > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
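The conversion itself is straightforward once a charset is fixed. The sketch below is illustrative only (not the NiFi utility methods, and it assumes UTF-8; the real NiFi utilities may take the charset as a parameter) and shows the round trip the record components need to support:

```java
import java.nio.charset.StandardCharsets;

// Illustrative sketch of the String <-> byte[] conversion added for
// NIFI-4857; hypothetical class and method names, UTF-8 assumed.
public class StringBytesConversion {

    public static byte[] stringToBytes(String value) {
        return value.getBytes(StandardCharsets.UTF_8);
    }

    public static String bytesToString(byte[] value) {
        return new String(value, StandardCharsets.UTF_8);
    }
}
```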
[jira] [Commented] (NIFI-4857) Record components do not support String <-> byte[] conversions
[ https://issues.apache.org/jira/browse/NIFI-4857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16431059#comment-16431059 ] ASF GitHub Bot commented on NIFI-4857: -- Github user markap14 commented on the issue: https://github.com/apache/nifi/pull/2570 @mattyb149 thanks for the update! Sorry about the delay in getting back to this. All looks good now from my POV. There was a checkstyle violation (unused import) but I addressed that and all else looks good so merged to master. > Record components do not support String <-> byte[] conversions > -- > > Key: NIFI-4857 > URL: https://issues.apache.org/jira/browse/NIFI-4857 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Reporter: Matt Burgess >Assignee: Matt Burgess >Priority: Major > > When trying to perform a conversion of a field between a String and a byte > array, various errors are reported (depending on where the conversion is > taking place). Here are some examples: > 1) CSVReader, if a column with String values is specified in the schema as > "bytes" > 2) ConvertRecord, if an input field is of type String and the output field is > of type "bytes" > 3) ConvertRecord, if an input field is of type "bytes" and the output field > is of type "String" > Many/most/all of the record components use utility methods to convert values; > I believe these methods need to be updated to support conversion between > String and byte[] values. > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-4857) Record components do not support String <-> byte[] conversions
[ https://issues.apache.org/jira/browse/NIFI-4857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16431058#comment-16431058 ] ASF GitHub Bot commented on NIFI-4857: -- Github user asfgit closed the pull request at: https://github.com/apache/nifi/pull/2570 > Record components do not support String <-> byte[] conversions > -- > > Key: NIFI-4857 > URL: https://issues.apache.org/jira/browse/NIFI-4857 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Reporter: Matt Burgess >Assignee: Matt Burgess >Priority: Major > > When trying to perform a conversion of a field between a String and a byte > array, various errors are reported (depending on where the conversion is > taking place). Here are some examples: > 1) CSVReader, if a column with String values is specified in the schema as > "bytes" > 2) ConvertRecord, if an input field is of type String and the output field is > of type "bytes" > 3) ConvertRecord, if an input field is of type "bytes" and the output field > is of type "String" > Many/most/all of the record components use utility methods to convert values; > I believe these methods need to be updated to support conversion between > String and byte[] values. > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-4857) Record components do not support String <-> byte[] conversions
[ https://issues.apache.org/jira/browse/NIFI-4857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16431056#comment-16431056 ] ASF subversion and git services commented on NIFI-4857: --- Commit b29304df79e78c5687b0c9411d5fab6cb93e6541 in nifi's branch refs/heads/master from [~ca9mbu] [ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=b29304d ] NIFI-4857: Support String<->byte[] conversion for records This closes #2570. Signed-off-by: Mark Payne > Record components do not support String <-> byte[] conversions > -- > > Key: NIFI-4857 > URL: https://issues.apache.org/jira/browse/NIFI-4857 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Reporter: Matt Burgess >Assignee: Matt Burgess >Priority: Major > > When trying to perform a conversion of a field between a String and a byte > array, various errors are reported (depending on where the conversion is > taking place). Here are some examples: > 1) CSVReader, if a column with String values is specified in the schema as > "bytes" > 2) ConvertRecord, if an input field is of type String and the output field is > of type "bytes" > 3) ConvertRecord, if an input field is of type "bytes" and the output field > is of type "String" > Many/most/all of the record components use utility methods to convert values; > I believe these methods need to be updated to support conversion between > String and byte[] values. > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (MINIFICPP-445) Implement escape/unescape CSV functions in expression language
[ https://issues.apache.org/jira/browse/MINIFICPP-445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16431053#comment-16431053 ] ASF GitHub Bot commented on MINIFICPP-445: -- Github user apiri commented on the issue: https://github.com/apache/nifi-minifi-cpp/pull/293 Code, tests, and build all look good. Thanks for adding these! Will merge. @phrocker I adjusted those lines that were affected by autoformat with no functional changes > Implement escape/unescape CSV functions in expression language > -- > > Key: MINIFICPP-445 > URL: https://issues.apache.org/jira/browse/MINIFICPP-445 > Project: NiFi MiNiFi C++ > Issue Type: Improvement >Reporter: Andrew Christianson >Assignee: Andrew Christianson >Priority: Major > > * > [escapeCsv|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#escapecsv] > * > [unescapeCsv|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#unescapecsv] -- This message was sent by Atlassian JIRA (v7.6.3#76005)
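For readers unfamiliar with the semantics of the linked EL functions, here is a minimal RFC 4180-style sketch. It is written in Java for illustration only (MINIFICPP-445 is a C++ implementation, and this is not its code): a value is quoted when it contains a delimiter, quote, or newline, and embedded quotes are doubled.

```java
public class CsvEscape {
    // Quote the value if it contains a comma, quote, or newline,
    // doubling any embedded quotes (RFC 4180 style).
    static String escapeCsv(String s) {
        if (s.contains(",") || s.contains("\"") || s.contains("\n") || s.contains("\r")) {
            return "\"" + s.replace("\"", "\"\"") + "\"";
        }
        return s;
    }

    // Inverse: strip surrounding quotes and collapse doubled quotes.
    static String unescapeCsv(String s) {
        if (s.length() >= 2 && s.startsWith("\"") && s.endsWith("\"")) {
            return s.substring(1, s.length() - 1).replace("\"\"", "\"");
        }
        return s;
    }

    public static void main(String[] args) {
        assert escapeCsv("a,b").equals("\"a,b\"");
        assert escapeCsv("say \"hi\"").equals("\"say \"\"hi\"\"\"");
        assert unescapeCsv(escapeCsv("a,b")).equals("a,b");
        System.out.println("csv escape ok");
    }
}
```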
[jira] [Created] (MINIFICPP-449) Allow cURL to be built and statically linked
Andrew Christianson created MINIFICPP-449: - Summary: Allow cURL to be built and statically linked Key: MINIFICPP-449 URL: https://issues.apache.org/jira/browse/MINIFICPP-449 Project: NiFi MiNiFi C++ Issue Type: Improvement Reporter: Andrew Christianson Assignee: Andrew Christianson Allowing cURL to be built as an external project and linked statically will help support certain embedded deployments and certain portability situations. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFIREG-158) Should be able to retrieve a flow by id without the bucket id
[ https://issues.apache.org/jira/browse/NIFIREG-158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16430982#comment-16430982 ] ASF GitHub Bot commented on NIFIREG-158: Github user bbende commented on the issue: https://github.com/apache/nifi-registry/pull/108 @kevdoran good idea, I pushed up another commit that wraps the exception > Should be able to retrieve a flow by id without the bucket id > - > > Key: NIFIREG-158 > URL: https://issues.apache.org/jira/browse/NIFIREG-158 > Project: NiFi Registry > Issue Type: Improvement >Affects Versions: 0.1.0 >Reporter: Bryan Bende >Assignee: Bryan Bende >Priority: Minor > > Currently all the flow end-points are nested under a bucket, like: > {code:java} > /buckets/{bucketId}/flows/{flowId} > /buckets/{bucketId}/flows/{flowId}/versions/{version}{code} > The flow identifier is unique across all buckets, so once the flow is created > we should be able to support end-points without the bucket id: > {code:java} > /flows/{flowId} > /flows/{flowId}/versions/{version} > {code} > We still need to look at the bucket and authorize the bucket id when secure, > but we'll have the bucket id from retrieving the flow. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (MINIFICPP-448) Allow uuid to be built & statically-linked
[ https://issues.apache.org/jira/browse/MINIFICPP-448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aldrin Piri resolved MINIFICPP-448. --- Resolution: Fixed > Allow uuid to be built & statically-linked > -- > > Key: MINIFICPP-448 > URL: https://issues.apache.org/jira/browse/MINIFICPP-448 > Project: NiFi MiNiFi C++ > Issue Type: Improvement >Reporter: Andrew Christianson >Assignee: Andrew Christianson >Priority: Major > Fix For: 0.5.0 > > > We already bundle uuid code in nifi-minifi-cpp, but there is no way to build > it. The CMakeLists.txt should be updated to allow building of this bundled > uuid and statically-linking it, making it easier to run in environments where > uuid is not available. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (MINIFICPP-448) Allow uuid to be built & statically-linked
[ https://issues.apache.org/jira/browse/MINIFICPP-448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16430975#comment-16430975 ] ASF GitHub Bot commented on MINIFICPP-448: -- Github user asfgit closed the pull request at: https://github.com/apache/nifi-minifi-cpp/pull/294 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (MINIFICPP-448) Allow uuid to be built & statically-linked
[ https://issues.apache.org/jira/browse/MINIFICPP-448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aldrin Piri updated MINIFICPP-448: -- Fix Version/s: 0.5.0 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-3753) ListenBeats: Compressed beats packets may cause: Error decoding Beats frame: Error decompressing frame: invalid distance too far back
[ https://issues.apache.org/jira/browse/NIFI-3753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16430972#comment-16430972 ] John Smith commented on NIFI-3753: -- Hi Nicolas, Turning off compression and setting bulk max size to 0 worked for us, and we're now able to use NiFi. However, this is of course far from ideal. John
> ListenBeats: Compressed beats packets may cause: Error decoding Beats frame: Error decompressing frame: invalid distance too far back
> ---
> Key: NIFI-3753
> URL: https://issues.apache.org/jira/browse/NIFI-3753
> Project: Apache NiFi
> Issue Type: Bug
> Reporter: Andre F de Miranda
> Priority: Critical
>
> 2017-04-28 02:03:37,153 ERROR [pool-106-thread-1] o.a.nifi.processors.beats.ListenBeats
> org.apache.nifi.processors.beats.frame.BeatsFrameException: Error decoding Beats frame: Error decompressing frame: invalid distance too far back
> at org.apache.nifi.processors.beats.frame.BeatsDecoder.process(BeatsDecoder.java:123) ~[nifi-beats-processors-1.2.0-SNAPSHOT.jar:1.2.0-SNAPSHOT]
> at org.apache.nifi.processors.beats.handler.BeatsSocketChannelHandler.processBuffer(BeatsSocketChannelHandler.java:71) ~[nifi-beats-processors-1.2.0-SNAPSHOT.jar:1.2.0-SNAPSHOT]
> at org.apache.nifi.processor.util.listen.handler.socket.StandardSocketChannelHandler.run(StandardSocketChannelHandler.java:76) [nifi-processor-utils-1.2.0-SNAPSHOT.jar:1.2.0-SNAPSHOT]
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_131]
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_131]
> at java.lang.Thread.run(Thread.java:748) [na:1.8.0_131]
> Caused by: org.apache.nifi.processors.beats.frame.BeatsFrameException: Error decompressing frame: invalid distance too far back
> at org.apache.nifi.processors.beats.frame.BeatsDecoder.processPAYLOAD(BeatsDecoder.java:292) ~[nifi-beats-processors-1.2.0-SNAPSHOT.jar:1.2.0-SNAPSHOT]
> at org.apache.nifi.processors.beats.frame.BeatsDecoder.process(BeatsDecoder.java:103) ~[nifi-beats-processors-1.2.0-SNAPSHOT.jar:1.2.0-SNAPSHOT]
> ... 5 common frames omitted
> Caused by: java.util.zip.ZipException: invalid distance too far back
> at java.util.zip.InflaterInputStream.read(InflaterInputStream.java:164) ~[na:1.8.0_131]
> at java.io.FilterInputStream.read(FilterInputStream.java:107) ~[na:1.8.0_131]
> at org.apache.nifi.processors.beats.frame.BeatsDecoder.processPAYLOAD(BeatsDecoder.java:277) ~[nifi-beats-processors-1.2.0-SNAPSHOT.jar:1.2.0-SNAPSHOT]
> ... 6 common frames omitted
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-3753) ListenBeats: Compressed beats packets may cause: Error decoding Beats frame: Error decompressing frame: invalid distance too far back
[ https://issues.apache.org/jira/browse/NIFI-3753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16430958#comment-16430958 ] Nicholas Carenza commented on NIFI-3753: [~jsmith1] did turning off compression and setting bulk max size to 0 in beats work for you, and are you able to use ListenBeats now in NiFi in its current state? -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (NIFI-5061) NiFi documentation incomplete/wrong for EL hierarchy.
Matthew Clarke created NIFI-5061: Summary: NiFi documentation incomplete/wrong for EL hierarchy. Key: NIFI-5061 URL: https://issues.apache.org/jira/browse/NIFI-5061 Project: Apache NiFi Issue Type: Bug Components: Documentation Website Affects Versions: 1.5.0 Environment: N/A Reporter: Matthew Clarke
Within the NiFi Expression Language Guide, under the "Structure of a NiFi Expression" section, the hierarchy of how NiFi searches for an attribute key is wrong (still based on the NiFi 0.x hierarchy) and incomplete. Current text: In this example, the value to be returned is the value of the "my attribute" value, if it exists. If that attribute does not exist, the Expression Language will then look for a System Environment Variable named "my attribute." If unable to find this, it will look for a JVM System Property named "my attribute." Finally, if none of these exists, the Expression Language will return a null value.
The current hierarchy in NiFi is as follows:
1. Search FlowFile for attribute/key
2. Search Process Group for attribute/key
3. Search Variable Registry file for attribute/key
4. Search NiFi JVM Properties for attribute/key
5. Search System Environment Variables for attribute/key
We should not only fix the above, but in doing so make this hierarchy more prominent in the documentation. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
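The lookup order described in the ticket is a first-match-wins search over ordered scopes. The sketch below is an illustration of that semantic only (not NiFi's real implementation); the scope names are taken from the ticket's list:

```java
import java.util.List;
import java.util.Map;

// First-match-wins resolution across ordered scopes, mirroring the
// five-level hierarchy in the ticket. Illustrative only.
public class AttributeLookup {
    static String resolve(String key, List<Map<String, String>> scopesInOrder) {
        for (Map<String, String> scope : scopesInOrder) {
            if (scope.containsKey(key)) {
                return scope.get(key);
            }
        }
        return null; // EL yields null when no scope defines the key
    }

    public static void main(String[] args) {
        Map<String, String> flowFileAttrs = Map.of("path", "/data/in");
        Map<String, String> jvmProps = Map.of("path", "/jvm", "user.name", "nifi");
        // A FlowFile attribute shadows a JVM property of the same name
        assert resolve("path", List.of(flowFileAttrs, jvmProps)).equals("/data/in");
        assert resolve("user.name", List.of(flowFileAttrs, jvmProps)).equals("nifi");
        assert resolve("missing", List.of(flowFileAttrs, jvmProps)) == null;
        System.out.println("lookup ok");
    }
}
```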
[jira] [Created] (NIFI-5060) UpdateRecord substringAfter and substringAfterLast only increments by 1
Chris Green created NIFI-5060: - Summary: UpdateRecord substringAfter and substringAfterLast only increments by 1 Key: NIFI-5060 URL: https://issues.apache.org/jira/browse/NIFI-5060 Project: Apache NiFi Issue Type: Bug Components: Extensions Affects Versions: 1.6.0 Reporter: Chris Green Attachments: Validate_substringafter_Behavior.xml
This is my first submitted issue, so please feel free to point me in the correct direction if I make process mistakes.
Replication: Drag a GenerateFlowFile onto the canvas, set the run schedule to some high value like 600 seconds, and configure this property: "Custom Text" \{"value": "01230123"}
Connect GenerateFlowFile to an UpdateAttribute set to add the attribute "avro.schema" with a value of "{ "type": "record", "name": "test", "fields" : [\{"name": "value", "type": "string"}] }"
Connect UpdateAttribute to an UpdateRecord. Auto-terminate success and failure. Set the Record Reader to a new JsonTreeReader and configure it to use the "Use 'Schema Text' Attribute" strategy. Create a JsonRecordSetWriter and set the Schema Text to: "{ "type": "record", "name": "test", "fields" : [\{"name": "value", "type": "string"}, {"name": "example1", "type": "string"}, {"name": "example2", "type": "string"}, {"name": "example3", "type": "string"}, {"name": "example4", "type": "string"}] }"
Add the following properties to UpdateRecord:
||Property||Value||
|/example1|substringAfter(/value, "1") |
|/example2|substringAfter(/value, "123") |
|/example3|substringAfterLast(/value, "1")|
|/example4|substringAfterLast(/value, "123")|
Resulting record currently: [ { "value" : "01230123", "example1" : "230123", "example2" : "30123", "example3" : "23", "example4" : "3" } ]
Problem: When using the UpdateRecord processor, once the search phrase is found, substringAfter() only advances past it by 1 character rather than by the length of the search term.
Based on XPath and other implementations of substringAfter functions I've seen, the value returned should remove the whole search term rather than just its first character, so the resulting record should instead reflect removal of the full term (e.g. example2 should be "0123" and example4 should be "").
I'm cleaning up a fix with test code that changes the increment from 1 to the length of the search term. It appears substringBefore is not impacted by this behavior, as it always returns the substring before the found search term, which is the expected behavior. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
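The off-by-one described above can be sketched side by side with the proposed fix. This is an illustration of the semantics, not NiFi's actual record-path code; `substringAfterBuggy` is one plausible form of the defect:

```java
public class SubstringAfterFix {
    // One plausible form of the reported bug: advance only 1 character
    // past the start of the match instead of past the whole search term.
    static String substringAfterBuggy(String value, String search) {
        int i = value.indexOf(search);
        return i < 0 ? value : value.substring(i + 1);
    }

    // Proposed fix: advance by the length of the search term, matching
    // XPath-style substring-after semantics.
    static String substringAfterFixed(String value, String search) {
        int i = value.indexOf(search);
        return i < 0 ? value : value.substring(i + search.length());
    }

    public static void main(String[] args) {
        assert substringAfterBuggy("01230123", "123").equals("230123");
        assert substringAfterFixed("01230123", "123").equals("0123");
        // For a single-character search term the two agree
        assert substringAfterFixed("01230123", "1").equals("230123");
        System.out.println("substringAfter ok");
    }
}
```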
[jira] [Commented] (NIFI-5059) MongoDBLookupService should be able to determine a schema or have one provided
[ https://issues.apache.org/jira/browse/NIFI-5059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16430735#comment-16430735 ] ASF GitHub Bot commented on NIFI-5059: -- GitHub user MikeThomsen opened a pull request: https://github.com/apache/nifi/pull/2619 NIFI-5059 Updated MongoDBLookupService to be able to detect record sc… …hemas or take one provided by the user. Thank you for submitting a contribution to Apache NiFi. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken: ### For all changes: - [ ] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? - [ ] Does your PR title start with NIFI- where is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character. - [ ] Has your PR been rebased against the latest commit within the target branch (typically master)? - [ ] Is your initial contribution a single, squashed commit? ### For code changes: - [ ] Have you ensured that the full suite of tests is executed via mvn -Pcontrib-check clean install at the root nifi folder? - [ ] Have you written or updated unit tests to verify your changes? - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the LICENSE file, including the main LICENSE file under nifi-assembly? - [ ] If applicable, have you updated the NOTICE file, including the main NOTICE file found under nifi-assembly? - [ ] If adding new Properties, have you added .displayName in addition to .name (programmatic access) for each of the new properties? ### For documentation related changes: - [ ] Have you ensured that format looks appropriate for the output in which it is rendered? 
### Note: Please ensure that once the PR is submitted, you check travis-ci for build issues and submit an update to your PR as soon as possible. You can merge this pull request into a Git repository by running: $ git pull https://github.com/MikeThomsen/nifi NIFI-5059 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi/pull/2619.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #2619 commit 40cbd11915e15e87c5b568a2cd918e57126bb7b4 Author: Mike Thomsen Date: 2018-04-09T11:28:40Z NIFI-5059 Updated MongoDBLookupService to be able to detect record schemas or take one provided by the user. > MongoDBLookupService should be able to determine a schema or have one provided > -- > > Key: NIFI-5059 > URL: https://issues.apache.org/jira/browse/NIFI-5059 > Project: Apache NiFi > Issue Type: Improvement >Reporter: Mike Thomsen >Assignee: Mike Thomsen >Priority: Major > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
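Detecting a schema from a sample document, as the PR title describes, can be illustrated without the MongoDB or NiFi record APIs. The sketch below maps a document's value types to type names using plain maps; it is an assumption-laden stand-in, not the lookup service's actual code:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Toy schema inference from one sample document: each field's Java type
// is mapped to a type name. Real inference must also handle nulls,
// nested documents, arrays, and conflicts across documents.
public class SchemaInference {
    static Map<String, String> inferSchema(Map<String, Object> sampleDoc) {
        Map<String, String> schema = new LinkedHashMap<>();
        for (Map.Entry<String, Object> e : sampleDoc.entrySet()) {
            Object v = e.getValue();
            String type;
            if (v instanceof Integer || v instanceof Long) {
                type = "long";
            } else if (v instanceof Double) {
                type = "double";
            } else if (v instanceof Boolean) {
                type = "boolean";
            } else {
                type = "string"; // fallback for strings and unknown types
            }
            schema.put(e.getKey(), type);
        }
        return schema;
    }

    public static void main(String[] args) {
        Map<String, Object> doc = new LinkedHashMap<>();
        doc.put("name", "nifi");
        doc.put("count", 42);
        Map<String, String> schema = inferSchema(doc);
        assert schema.get("name").equals("string");
        assert schema.get("count").equals("long");
        System.out.println("inferred: " + schema);
    }
}
```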
[jira] [Created] (NIFI-5059) MongoDBLookupService should be able to determine a schema or have one provided
Mike Thomsen created NIFI-5059: -- Summary: MongoDBLookupService should be able to determine a schema or have one provided Key: NIFI-5059 URL: https://issues.apache.org/jira/browse/NIFI-5059 Project: Apache NiFi Issue Type: Improvement Reporter: Mike Thomsen Assignee: Mike Thomsen -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-5015) Develop Azure Queue Storage processors
[ https://issues.apache.org/jira/browse/NIFI-5015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16430721#comment-16430721 ] ASF GitHub Bot commented on NIFI-5015: -- Github user zenfenan commented on the issue: https://github.com/apache/nifi/pull/2611 @pvillard31 Mind taking a look at this? > Develop Azure Queue Storage processors > -- > > Key: NIFI-5015 > URL: https://issues.apache.org/jira/browse/NIFI-5015 > Project: Apache NiFi > Issue Type: New Feature > Components: Extensions >Reporter: Sivaprasanna Sethuraman >Assignee: Sivaprasanna Sethuraman >Priority: Minor > > Develop NiFi processors bundle for Azure Queue Storage -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Closed] (NIFI-4862) Copy original FlowFile attributes to output FlowFiles at SelectHiveQL processor
[ https://issues.apache.org/jira/browse/NIFI-4862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jakub Leś closed NIFI-4862. --- The fix was applied > Copy original FlowFile attributes to output FlowFiles at SelectHiveQL > processor > --- > > Key: NIFI-4862 > URL: https://issues.apache.org/jira/browse/NIFI-4862 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Affects Versions: 1.5.0 >Reporter: Jakub Leś >Assignee: Matt Burgess >Priority: Minor > Fix For: 1.7.0 > > Attachments: > 0001-NIFI-4862-Add-Copy-original-attributtes-to-SelectHiv.patch > > > Hi, > Please add "Copy original attributes" to processor SelectHiveQL. Thanks to > that we can use HttpRequest and HttpResponse to synchronize fetching query > result in webservice. > > UPDATED: > SelectHiveQL creates new FlowFiles from Hive query result sets. When it also > has incoming FlowFiles, it should create new FlowFiles from the input > FlowFile, so that it can copy original FlowFile attributes to output > FlowFiles. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (NIFI-4862) Copy original FlowFile attributes to output FlowFiles at SelectHiveQL processor
[ https://issues.apache.org/jira/browse/NIFI-4862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jakub Leś updated NIFI-4862: Hi, The PR is ok for me. Thank you for your help! Best regards, Jakub -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-4196) *S3 processors do not expose Proxy Authentication settings
[ https://issues.apache.org/jira/browse/NIFI-4196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16430615#comment-16430615 ] ASF GitHub Bot commented on NIFI-4196: -- Github user ottobackwards commented on the issue: https://github.com/apache/nifi/pull/2016 in https://github.com/apache/nifi/pull/2588 I have done this as well, but only in my processor ( scope ). I would like to see this land. What can be done to move this forward? > *S3 processors do not expose Proxy Authentication settings > -- > > Key: NIFI-4196 > URL: https://issues.apache.org/jira/browse/NIFI-4196 > Project: Apache NiFi > Issue Type: Improvement >Reporter: Andre F de Miranda >Assignee: Andre F de Miranda >Priority: Major > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
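The S3 processors configure proxy credentials on the AWS client rather than on the JVM, so the sketch below only illustrates proxy authentication generically with the JDK's `Authenticator`; the host, port, and credentials are placeholders, and this is not the PR's code:

```java
import java.net.Authenticator;
import java.net.PasswordAuthentication;

// Generic JVM-level proxy authentication: register a default
// Authenticator that supplies credentials when a proxy challenges.
public class ProxyAuthExample {
    static PasswordAuthentication register(String user, char[] pass) {
        System.setProperty("http.proxyHost", "proxy.example.com"); // placeholder host
        System.setProperty("http.proxyPort", "8080");              // placeholder port
        Authenticator.setDefault(new Authenticator() {
            @Override
            protected PasswordAuthentication getPasswordAuthentication() {
                return new PasswordAuthentication(user, pass);
            }
        });
        // Exercise the authenticator directly to show what it returns
        return Authenticator.requestPasswordAuthentication(
                "proxy.example.com", null, 8080, "http", "proxy", "basic");
    }

    public static void main(String[] args) {
        PasswordAuthentication auth = register("proxyUser", "secret".toCharArray());
        assert auth.getUserName().equals("proxyUser");
        System.out.println("proxy authenticator registered for " + auth.getUserName());
    }
}
```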
[jira] [Commented] (NIFIREG-158) Should be able to retrieve a flow by id without the bucket id
[ https://issues.apache.org/jira/browse/NIFIREG-158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16430601#comment-16430601 ] ASF GitHub Bot commented on NIFIREG-158: Github user kevdoran commented on a diff in the pull request: https://github.com/apache/nifi-registry/pull/108#discussion_r180112539 --- Diff: nifi-registry-web-api/src/main/java/org/apache/nifi/registry/web/api/FlowResource.java --- @@ -62,4 +85,214 @@ public Response getAvailableFlowFields() { return Response.status(Response.Status.OK).entity(fields).build(); } +@GET +@Path("{flowId}") +@Consumes(MediaType.WILDCARD) +@Produces(MediaType.APPLICATION_JSON) +@ApiOperation( +value = "Gets a flow", +response = VersionedFlow.class, +extensions = { +@Extension(name = "access-policy", properties = { +@ExtensionProperty(name = "action", value = "read"), +@ExtensionProperty(name = "resource", value = "/buckets/{bucketId}") }) +} +) +@ApiResponses({ +@ApiResponse(code = 400, message = HttpStatusMessages.MESSAGE_400), +@ApiResponse(code = 401, message = HttpStatusMessages.MESSAGE_401), +@ApiResponse(code = 403, message = HttpStatusMessages.MESSAGE_403), +@ApiResponse(code = 404, message = HttpStatusMessages.MESSAGE_404), +@ApiResponse(code = 409, message = HttpStatusMessages.MESSAGE_409) }) +public Response getFlow( +@PathParam("flowId") +@ApiParam("The flow identifier") +final String flowId) { + +final VersionedFlow flow = registryService.getFlow(flowId); + +// this should never happen, but if somehow the back-end didn't populate the bucket id let's make sure the flow isn't returned +if (StringUtils.isBlank(flow.getBucketIdentifier())) { +throw new IllegalStateException("Unable to authorize access because bucket identifier is null or blank"); +} + +authorizeBucketAccess(RequestAction.READ, flow.getBucketIdentifier()); --- End diff -- When I try this endpoint with an unauthorized user, I get the following response back: ``` HTTP/1.1 403 Forbidden Connection: close Date: Mon, 09 Apr 2018 
14:08:27 GMT Content-Type: text/plain Content-Length: 101 Server: Jetty(9.4.3.v20170317) Unable to view Bucket with ID 6eaeae9c-dbdb-4af3-a98e-4f3b880a0fb2. Contact the system administrator. ``` The 403 status code is good, but I'm not sure about the error message in the response. If someone is attempting to access the flow through /flows/{id}, I don't think the server should return the bucket id containing the flow, as that's leaking information the user would not otherwise have access to. It's a fairly harmless piece of information on its own, but in a multi-tenant scenario it could reveal more than the owner of the bucket would like, especially if correlated with other information an attacker is able to obtain. It's probably not a huge issue, but if you agree, we could strip this by wrapping the call to authorizeBucketAccess() in a try catch that obscures the error message returned in the response body. We would have to do this for all the GET methods in the FlowResource. Something like: ``` try { authorizeBucketAccess(RequestAction.READ, flow.getBucketIdentifier()); } catch (AccessDeniedException e) { throw new AccessDeniedException("User not authorized to view the specified flow"); } ``` By adding the root throwable to the cause of a custom exception, we could even keep the root cause with the bucket id in the logs for easier admin troubleshooting. Now that I think about it, that approach for customizing HTTP response error messages to differ from internal/logged error messages would probably be a better approach than the solution that we came up with in PR #99, which inadvertently introduced a "log & throw" pattern in order to maintain resource ids in the logs. 
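The sanitizing rethrow discussed in the review comment above can be written out as a small, self-contained sketch. The `AccessDeniedException` here is a local stand-in for the Registry's own class, and the method bodies are invented for illustration: the detailed message (with the bucket id) is preserved as the cause for server logs, while the client sees only a generic message.

```java
// Sketch: wrap the bucket authorization so the HTTP response carries a
// generic message while the bucket id survives in the exception cause.
public class SanitizedAuthz {
    static class AccessDeniedException extends RuntimeException {
        AccessDeniedException(String msg) { super(msg); }
        AccessDeniedException(String msg, Throwable cause) { super(msg, cause); }
    }

    // Stand-in for the real check, which would consult the authorizer
    static void authorizeBucketAccess(String bucketId) {
        throw new AccessDeniedException("Unable to view Bucket with ID " + bucketId);
    }

    static void authorizeFlowAccess(String bucketId) {
        try {
            authorizeBucketAccess(bucketId);
        } catch (AccessDeniedException e) {
            // Generic message for the response body; bucket id stays in the cause
            throw new AccessDeniedException("User not authorized to view the specified flow", e);
        }
    }

    public static void main(String[] args) {
        try {
            authorizeFlowAccess("6eaeae9c-dbdb-4af3-a98e-4f3b880a0fb2");
        } catch (AccessDeniedException e) {
            assert e.getMessage().equals("User not authorized to view the specified flow");
            assert e.getCause().getMessage().contains("6eaeae9c");
            System.out.println("sanitized: " + e.getMessage());
        }
    }
}
```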
[GitHub] nifi-registry pull request #108: NIFIREG-158 Added ability to retrieve flow ...
Github user kevdoran commented on a diff in the pull request: https://github.com/apache/nifi-registry/pull/108#discussion_r180112539 --- Diff: nifi-registry-web-api/src/main/java/org/apache/nifi/registry/web/api/FlowResource.java --- @@ -62,4 +85,214 @@ public Response getAvailableFlowFields() { return Response.status(Response.Status.OK).entity(fields).build(); } +@GET +@Path("{flowId}") +@Consumes(MediaType.WILDCARD) +@Produces(MediaType.APPLICATION_JSON) +@ApiOperation( +value = "Gets a flow", +response = VersionedFlow.class, +extensions = { +@Extension(name = "access-policy", properties = { +@ExtensionProperty(name = "action", value = "read"), +@ExtensionProperty(name = "resource", value = "/buckets/{bucketId}") }) +} +) +@ApiResponses({ +@ApiResponse(code = 400, message = HttpStatusMessages.MESSAGE_400), +@ApiResponse(code = 401, message = HttpStatusMessages.MESSAGE_401), +@ApiResponse(code = 403, message = HttpStatusMessages.MESSAGE_403), +@ApiResponse(code = 404, message = HttpStatusMessages.MESSAGE_404), +@ApiResponse(code = 409, message = HttpStatusMessages.MESSAGE_409) }) +public Response getFlow( +@PathParam("flowId") +@ApiParam("The flow identifier") +final String flowId) { + +final VersionedFlow flow = registryService.getFlow(flowId); + +// this should never happen, but if somehow the back-end didn't populate the bucket id let's make sure the flow isn't returned +if (StringUtils.isBlank(flow.getBucketIdentifier())) { +throw new IllegalStateException("Unable to authorize access because bucket identifier is null or blank"); +} + +authorizeBucketAccess(RequestAction.READ, flow.getBucketIdentifier()); --- End diff -- When I try this endpoint with an unauthorized user, I get the following response back: ``` HTTP/1.1 403 Forbidden Connection: close Date: Mon, 09 Apr 2018 14:08:27 GMT Content-Type: text/plain Content-Length: 101 Server: Jetty(9.4.3.v20170317) Unable to view Bucket with ID 6eaeae9c-dbdb-4af3-a98e-4f3b880a0fb2. Contact the system administrator. 
```

The 403 status code is good, but I'm not sure about the error message in the response. If someone is attempting to access the flow through /flows/{id}, I don't think the server should return the id of the bucket containing the flow, as that's leaking information the user would not otherwise have access to. It's a fairly harmless piece of information on its own, but in a multi-tenant scenario it could reveal more than the owner of the bucket would like, especially if correlated with other information an attacker is able to obtain.

It's probably not a huge issue, but if you agree, we could strip this by wrapping the call to authorizeBucketAccess() in a try/catch that obscures the error message returned in the response body. We would have to do this for all the GET methods in FlowResource. Something like:

```
try {
    authorizeBucketAccess(RequestAction.READ, flow.getBucketIdentifier());
} catch (AccessDeniedException e) {
    throw new AccessDeniedException("User not authorized to view the specified flow");
}
```

By adding the root throwable as the cause of a custom exception, we could even keep the root cause (with the bucket id) in the logs for easier admin troubleshooting.

Now that I think about it, that approach of customizing HTTP response error messages so they differ from the internal/logged error messages would probably be a better approach than the solution we came up with in PR #99, which inadvertently introduced a "log & throw" pattern in order to maintain resource ids in the logs.

---
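A minimal, self-contained sketch of the pattern suggested above. All names here are hypothetical stand-ins, not the actual NiFi Registry classes: `AccessDeniedException` is a local class rather than the framework exception, and `authorizeBucketAccess` is simulated. The idea is that the HTTP response body gets a generic message, while the original exception naming the bucket id is preserved as the cause for server-side logging only.

```java
// Hypothetical sketch; AccessDeniedException here is a local stand-in for
// the framework exception, not the real NiFi Registry class.
public class AccessDeniedDemo {

    static class AccessDeniedException extends RuntimeException {
        AccessDeniedException(String message) { super(message); }
        AccessDeniedException(String message, Throwable cause) { super(message, cause); }
    }

    // Stand-in for the real authorization call, whose error message
    // includes the bucket id.
    static void authorizeBucketAccess(String bucketId) {
        throw new AccessDeniedException("Unable to view Bucket with ID " + bucketId
                + ". Contact the system administrator.");
    }

    // Wrap the call so the message sent to the client stays generic, while
    // the original exception (with the bucket id) survives as the cause.
    static void authorizeFlowAccess(String bucketId) {
        try {
            authorizeBucketAccess(bucketId);
        } catch (AccessDeniedException e) {
            throw new AccessDeniedException("User not authorized to view the specified flow", e);
        }
    }

    public static void main(String[] args) {
        try {
            authorizeFlowAccess("6eaeae9c-dbdb-4af3-a98e-4f3b880a0fb2");
        } catch (AccessDeniedException e) {
            System.out.println(e.getMessage());            // generic, safe for the response body
            System.out.println(e.getCause().getMessage()); // detailed, for server logs only
        }
    }
}
```

The design point: the response body never reveals the bucket id, but an admin can still troubleshoot from the logged cause chain.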
[jira] [Commented] (NIFI-5058) Create a "legacy" to Record based Processing Guide
[ https://issues.apache.org/jira/browse/NIFI-5058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16430569#comment-16430569 ] Joseph Witt commented on NIFI-5058: --- looks like a good outline/good idea. > Create a "legacy" to Record based Processing Guide > -- > > Key: NIFI-5058 > URL: https://issues.apache.org/jira/browse/NIFI-5058 > Project: Apache NiFi > Issue Type: New Feature >Reporter: Otto Fowler >Priority: Major > > There are many processors in NiFi proper and in the user community that would > benefit from having Record Based Processing versions. > There should be a guide around the considerations for such a conversion, and > the proper implementation as applicable within the NiFi codebase as well as > for external developers. > > An outline of such may be: > > * Why Record Based Processing? > * Things to consider when thinking about conversion to Record Base Processing > * Patterns of Conversion > * Sample Case ( perhaps from a nifi processor that has been converted ) > * Best Practices ( prefer, consider etc ) > * Other references and links > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (NIFI-5058) Create a "legacy" to Record based Processing Guide
[ https://issues.apache.org/jira/browse/NIFI-5058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Otto Fowler updated NIFI-5058: -- Description: There are many processors in NiFi proper and in the user community that would benefit from having Record Based Processing versions. There should be a guide around the considerations for such a conversion, and the proper implementation as applicable within the NiFi codebase as well as for external developers. An outline of such may be: * Why Record Based Processing? * Things to consider when thinking about conversion to Record Base Processing * Patterns of Conversion * Sample Case ( perhaps from a nifi processor that has been converted ) * Best Practices ( prefer, consider etc ) * Other references and links was: There are many processors in NiFi proper and in the user community that would benefit from having Record Based Processing versions. There should be a guide around the considerations for such a conversion, and the proper implementation so such, as applicable within the NiFi codebase as well as for external developers. An outline of such may be: * Why Record Based Processing? * Things to consider when thinking about conversion to Record Base Processing * Patterns of Conversion * Sample Case ( perhaps from a nifi processor that has been converted ) * Best Practices ( prefer, consider etc ) * Other references and links > Create a "legacy" to Record based Processing Guide > -- > > Key: NIFI-5058 > URL: https://issues.apache.org/jira/browse/NIFI-5058 > Project: Apache NiFi > Issue Type: New Feature >Reporter: Otto Fowler >Priority: Major > > There are many processors in NiFi proper and in the user community that would > benefit from having Record Based Processing versions. > There should be a guide around the considerations for such a conversion, and > the proper implementation as applicable within the NiFi codebase as well as > for external developers. 
> > An outline of such may be: > > * Why Record Based Processing? > * Things to consider when thinking about conversion to Record Base Processing > * Patterns of Conversion > * Sample Case ( perhaps from a nifi processor that has been converted ) > * Best Practices ( prefer, consider etc ) > * Other references and links > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (NIFI-5058) Create a "legacy" to Record based Processing Guide
Otto Fowler created NIFI-5058: - Summary: Create a "legacy" to Record based Processing Guide Key: NIFI-5058 URL: https://issues.apache.org/jira/browse/NIFI-5058 Project: Apache NiFi Issue Type: New Feature Reporter: Otto Fowler There are many processors in NiFi proper and in the user community that would benefit from having Record Based Processing versions. There should be a guide around the considerations for such a conversion, and the proper implementation of such, as applicable within the NiFi codebase as well as for external developers. An outline of such may be: * Why Record Based Processing? * Things to consider when thinking about conversion to Record Based Processing * Patterns of Conversion * Sample Case ( perhaps from a NiFi processor that has been converted ) * Best Practices ( prefer, consider, etc. ) * Other references and links -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-5042) Add section on using restricted components to the "Versioning a Dataflow" section of the User Guide
[ https://issues.apache.org/jira/browse/NIFI-5042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16430558#comment-16430558 ] ASF GitHub Bot commented on NIFI-5042: -- Github user asfgit closed the pull request at: https://github.com/apache/nifi/pull/2610 > Add section on using restricted components to the "Versioning a Dataflow" > section of the User Guide > --- > > Key: NIFI-5042 > URL: https://issues.apache.org/jira/browse/NIFI-5042 > Project: Apache NiFi > Issue Type: Improvement > Components: Documentation Website >Reporter: Andrew Lim >Assignee: Andrew Lim >Priority: Minor > Fix For: 1.7.0 > > > With granular component restrictions introduced in 1.6.0 (NIFI-4885), it > would be helpful to discuss how restricted components should be used in > versioned flows. > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (NIFI-5042) Add section on using restricted components to the "Versioning a Dataflow" section of the User Guide
[ https://issues.apache.org/jira/browse/NIFI-5042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bryan Bende resolved NIFI-5042. --- Resolution: Fixed Fix Version/s: 1.7.0 > Add section on using restricted components to the "Versioning a Dataflow" > section of the User Guide > --- > > Key: NIFI-5042 > URL: https://issues.apache.org/jira/browse/NIFI-5042 > Project: Apache NiFi > Issue Type: Improvement > Components: Documentation Website >Reporter: Andrew Lim >Assignee: Andrew Lim >Priority: Minor > Fix For: 1.7.0 > > > With granular component restrictions introduced in 1.6.0 (NIFI-4885), it > would be helpful to discuss how restricted components should be used in > versioned flows. > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] nifi pull request #2610: NIFI-5042 Added section Restricted Components in Ve...
Github user asfgit closed the pull request at: https://github.com/apache/nifi/pull/2610 ---
[jira] [Commented] (NIFI-5042) Add section on using restricted components to the "Versioning a Dataflow" section of the User Guide
[ https://issues.apache.org/jira/browse/NIFI-5042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16430556#comment-16430556 ] ASF subversion and git services commented on NIFI-5042: --- Commit 5f16f48a2728d6c279768e68e9833f0fa133a758 in nifi's branch refs/heads/master from [~andrewmlim] [ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=5f16f48 ] NIFI-5042 Added section Restricted Components in Versioned Flows and edited related section in Adding Components to the Canvas This closes #2610. Signed-off-by: Bryan Bende
> Add section on using restricted components to the "Versioning a Dataflow" > section of the User Guide > --- > > Key: NIFI-5042 > URL: https://issues.apache.org/jira/browse/NIFI-5042 > Project: Apache NiFi > Issue Type: Improvement > Components: Documentation Website >Reporter: Andrew Lim >Assignee: Andrew Lim >Priority: Minor > > With granular component restrictions introduced in 1.6.0 (NIFI-4885), it > would be helpful to discuss how restricted components should be used in > versioned flows. > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-5042) Add section on using restricted components to the "Versioning a Dataflow" section of the User Guide
[ https://issues.apache.org/jira/browse/NIFI-5042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16430555#comment-16430555 ] ASF GitHub Bot commented on NIFI-5042: -- Github user bbende commented on the issue: https://github.com/apache/nifi/pull/2610 +1 Looks good, will merge > Add section on using restricted components to the "Versioning a Dataflow" > section of the User Guide > --- > > Key: NIFI-5042 > URL: https://issues.apache.org/jira/browse/NIFI-5042 > Project: Apache NiFi > Issue Type: Improvement > Components: Documentation Website >Reporter: Andrew Lim >Assignee: Andrew Lim >Priority: Minor > > With granular component restrictions introduced in 1.6.0 (NIFI-4885), it > would be helpful to discuss how restricted components should be used in > versioned flows. > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] nifi issue #2610: NIFI-5042 Added section Restricted Components in Versioned...
Github user bbende commented on the issue: https://github.com/apache/nifi/pull/2610 +1 Looks good, will merge ---
[jira] [Commented] (NIFIREG-161) Unable to select bucket and flow UUIDs in flow details panel
[ https://issues.apache.org/jira/browse/NIFIREG-161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16430540#comment-16430540 ] ASF GitHub Bot commented on NIFIREG-161: Github user asfgit closed the pull request at: https://github.com/apache/nifi-registry/pull/111 > Unable to select bucket and flow UUIDs in flow details panel > > > Key: NIFIREG-161 > URL: https://issues.apache.org/jira/browse/NIFIREG-161 > Project: NiFi Registry > Issue Type: Bug >Affects Versions: 0.2.0 >Reporter: Bryan Bende >Assignee: Scott Aslan >Priority: Minor > Fix For: 0.2.0 > > > Using master it appears that you can no longer select the UUIDs for the flow > id and bucket id that were added to the flow details panel in the main grid. > This prevents easily copying/pasting the ids. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] nifi-registry pull request #111: [NIFIREG-161] add vendor prefix to user-sel...
Github user asfgit closed the pull request at: https://github.com/apache/nifi-registry/pull/111 ---
[jira] [Commented] (NIFIREG-161) Unable to select bucket and flow UUIDs in flow details panel
[ https://issues.apache.org/jira/browse/NIFIREG-161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16430538#comment-16430538 ] ASF GitHub Bot commented on NIFIREG-161: Github user pvillard31 commented on the issue: https://github.com/apache/nifi-registry/pull/111 +1, merging, thanks for the quick fix @scottyaslan > Unable to select bucket and flow UUIDs in flow details panel > > > Key: NIFIREG-161 > URL: https://issues.apache.org/jira/browse/NIFIREG-161 > Project: NiFi Registry > Issue Type: Bug >Affects Versions: 0.2.0 >Reporter: Bryan Bende >Assignee: Scott Aslan >Priority: Minor > Fix For: 0.2.0 > > > Using master it appears that you can no longer select the UUIDs for the flow > id and bucket id that were added to the flow details panel in the main grid. > This prevents easily copying/pasting the ids. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] nifi-registry issue #111: [NIFIREG-161] add vendor prefix to user-select sty...
Github user pvillard31 commented on the issue: https://github.com/apache/nifi-registry/pull/111 +1, merging, thanks for the quick fix @scottyaslan ---
[jira] [Commented] (NIFIREG-161) Unable to select bucket and flow UUIDs in flow details panel
[ https://issues.apache.org/jira/browse/NIFIREG-161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16430515#comment-16430515 ] ASF GitHub Bot commented on NIFIREG-161: GitHub user scottyaslan opened a pull request: https://github.com/apache/nifi-registry/pull/111 [NIFIREG-161] add vendor prefix to user-select style You can merge this pull request into a Git repository by running: $ git pull https://github.com/scottyaslan/nifi-registry NIFIREG-161 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi-registry/pull/111.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #111 commit 902295a50d38bc3b12539619932fe806bdd69bae Author: Scott Aslan Date: 2018-04-09T13:16:30Z [NIFIREG-161] add vendor prefix to user-select style > Unable to select bucket and flow UUIDs in flow details panel > > > Key: NIFIREG-161 > URL: https://issues.apache.org/jira/browse/NIFIREG-161 > Project: NiFi Registry > Issue Type: Bug >Affects Versions: 0.2.0 >Reporter: Bryan Bende >Assignee: Scott Aslan >Priority: Minor > Fix For: 0.2.0 > > > Using master it appears that you can no longer select the UUIDs for the flow > id and bucket id that were added to the flow details panel in the main grid. > This prevents easily copying/pasting the ids. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] nifi-registry pull request #111: [NIFIREG-161] add vendor prefix to user-sel...
GitHub user scottyaslan opened a pull request: https://github.com/apache/nifi-registry/pull/111 [NIFIREG-161] add vendor prefix to user-select style You can merge this pull request into a Git repository by running: $ git pull https://github.com/scottyaslan/nifi-registry NIFIREG-161 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi-registry/pull/111.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #111 commit 902295a50d38bc3b12539619932fe806bdd69bae Author: Scott Aslan Date: 2018-04-09T13:16:30Z [NIFIREG-161] add vendor prefix to user-select style ---
[jira] [Commented] (NIFI-1706) Extend QueryDatabaseTable to support arbitrary queries
[ https://issues.apache.org/jira/browse/NIFI-1706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16430406#comment-16430406 ] ASF GitHub Bot commented on NIFI-1706: -- Github user ijokarumawak commented on a diff in the pull request:

    https://github.com/apache/nifi/pull/2162#discussion_r180055717

--- Diff: nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/AbstractDatabaseFetchProcessor.java ---
@@ -291,20 +291,30 @@ public void setup(final ProcessContext context, boolean shouldCleanCache, FlowFi
         if (shouldCleanCache) {
             columnTypeMap.clear();
         }
+
+        final List<String> maxValueColumnNameList = Arrays.asList(maxValueColumnNames.toLowerCase().split(","));
+        final List<String> maxValueQualifiedColumnNameList = new ArrayList<>();
+
+        for (String maxValueColumn : maxValueColumnNameList) {
+            String colKey = getStateKey(tableName, maxValueColumn.trim());
+            maxValueQualifiedColumnNameList.add(colKey);
+        }
+
         for (int i = 1; i <= numCols; i++) {
             String colName = resultSetMetaData.getColumnName(i).toLowerCase();
             String colKey = getStateKey(tableName, colName);
+
+            // only include columns that are part of the maximum value tracking column list
+            if (!maxValueQualifiedColumnNameList.contains(colKey)) {
+                continue;
+            }
+
             int colType = resultSetMetaData.getColumnType(i);
             columnTypeMap.putIfAbsent(colKey, colType);
         }
-        List<String> maxValueColumnNameList = Arrays.asList(maxValueColumnNames.split(","));
-
-        for (String maxValueColumn : maxValueColumnNameList) {
-            String colKey = getStateKey(tableName, maxValueColumn.trim().toLowerCase());
-            if (!columnTypeMap.containsKey(colKey)) {
-                throw new ProcessException("Column not found in the table/query specified: " + maxValueColumn);
-            }
+        if (maxValueQualifiedColumnNameList.size() > 0 && columnTypeMap.size() != maxValueQualifiedColumnNameList.size()) {
--- End diff --

@patricker This check should be implemented as in the previous commit.
The size of columnTypeMap can differ for GenerateTableFetch when it is configured to resolve table and column names dynamically with FlowFile EL and deals with multiple tables.

> Extend QueryDatabaseTable to support arbitrary queries
> --
>
> Key: NIFI-1706
> URL: https://issues.apache.org/jira/browse/NIFI-1706
> Project: Apache NiFi
> Issue Type: Improvement
> Components: Core Framework
> Affects Versions: 1.4.0
> Reporter: Paul Bormans
> Assignee: Peter Wicks
> Priority: Major
> Labels: features
>
> The QueryDatabaseTable is able to observe a configured database table for new
> rows and yield these into the flowfile. The model of an rdbms however is
> often (if not always) normalized, so you would need to join various tables in
> order to "flatten" the data into useful events for a processing pipeline as
> can be built with NiFi or various tools within the Hadoop ecosystem.
> The request is to extend the processor to specify an arbitrary SQL query
> instead of specifying the table name + columns.
> In addition (this may be another issue?) it is desired to limit the number of
> rows returned per run. Not just because of bandwidth issues from the NiFi
> pipeline onwards, but mainly because huge databases may not be able to return
> so many records within a reasonable time.

-- This message was sent by Atlassian JIRA (v7.6.3#76005)
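To make the trade-off in the diff above concrete, here is a simplified, hypothetical sketch of the column-filtering logic. The state-key format, the `getStateKey` helper, and the plain-Map stand-in for ResultSetMetaData are all illustrative assumptions, not the actual AbstractDatabaseFetchProcessor code.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Simplified sketch of the reviewed logic: qualify the configured max-value
// columns, keep only matching columns from the table metadata, then verify
// every configured column was found.
public class MaxValueColumnFilter {

    // Stand-in for the processor's getStateKey(table, column); the "@!@"
    // separator is made up for this sketch.
    static String getStateKey(String tableName, String colName) {
        return tableName.toLowerCase() + "@!@" + colName.toLowerCase();
    }

    static Map<String, Integer> buildColumnTypeMap(String tableName,
                                                   String maxValueColumnNames,
                                                   Map<String, Integer> tableColumns) {
        // Qualify each configured max-value column with the table name.
        List<String> maxValueQualified = new ArrayList<>();
        for (String col : maxValueColumnNames.toLowerCase().split(",")) {
            maxValueQualified.add(getStateKey(tableName, col.trim()));
        }

        // Only track types for columns in the max-value tracking list.
        Map<String, Integer> columnTypeMap = new HashMap<>();
        for (Map.Entry<String, Integer> e : tableColumns.entrySet()) {
            String colKey = getStateKey(tableName, e.getKey());
            if (maxValueQualified.contains(colKey)) {
                columnTypeMap.put(colKey, e.getValue());
            }
        }

        // Size mismatch means some configured max-value column was not found.
        if (!maxValueQualified.isEmpty() && columnTypeMap.size() != maxValueQualified.size()) {
            throw new IllegalArgumentException(
                    "Column not found in the table/query specified: " + maxValueColumnNames);
        }
        return columnTypeMap;
    }
}
```

In this sketch the size comparison is safe because columnTypeMap is built locally from a single table. The reviewer's objection applies when the map is shared processor state: if GenerateTableFetch resolves table names via FlowFile EL, the map accumulates keys from multiple tables and its size no longer corresponds to one table's max-value list, which is why the per-column containsKey check from the previous commit is the more robust validation.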