[jira] [Commented] (NIFI-4111) NiFi does not shutdown gracefully
[ https://issues.apache.org/jira/browse/NIFI-4111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16075955#comment-16075955 ] ASF GitHub Bot commented on NIFI-4111: -- Github user ijokarumawak commented on the issue: https://github.com/apache/nifi/pull/1963

@pvillard31 I tried this PR, but I still see '2017-07-06 12:05:54,092 INFO [main] org.apache.nifi.bootstrap.Command Waiting for Apache NiFi to finish shutting down...' logs. When I take a thread dump, a few threads are sleeping as reported at NIFI-4111:

```
"Site-to-Site Worker Thread-2@11975" prio=5 tid=0x73 nid=NA sleeping
  java.lang.Thread.State: TIMED_WAITING
    at java.lang.Thread.sleep(Thread.java:-1)
    at java.lang.Thread.sleep(Thread.java:340)
    at java.util.concurrent.TimeUnit.sleep(TimeUnit.java:386)
    at org.apache.nifi.remote.io.socket.SocketChannelInputStream.read(SocketChannelInputStream.java:120)
    at org.apache.nifi.stream.io.ByteCountingInputStream.read(ByteCountingInputStream.java:51)
    at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
    at java.io.BufferedInputStream.read(BufferedInputStream.java:265)
    - locked (a org.apache.nifi.stream.io.BufferedInputStream)
    at org.apache.nifi.remote.io.InterruptableInputStream.read(InterruptableInputStream.java:39)
    at java.io.DataInputStream.readUnsignedShort(DataInputStream.java:337)
    at java.io.DataInputStream.readUTF(DataInputStream.java:589)
    at java.io.DataInputStream.readUTF(DataInputStream.java:564)
    at org.apache.nifi.remote.protocol.RequestType.readRequestType(RequestType.java:36)
    at org.apache.nifi.remote.protocol.socket.SocketFlowFileServerProtocol.getRequestType(SocketFlowFileServerProtocol.java:147)
    at org.apache.nifi.remote.SocketRemoteSiteListener$1$1.run(SocketRemoteSiteListener.java:249)
    at java.lang.Thread.run(Thread.java:745)
```

This PR doesn't do anything to change SocketRemoteSiteListener behavior; is that correct?
> NiFi does not shutdown gracefully
> ---------------------------------
>
> Key: NIFI-4111
> URL: https://issues.apache.org/jira/browse/NIFI-4111
> Project: Apache NiFi
> Issue Type: Bug
> Components: Core Framework
> Affects Versions: 1.3.0
> Reporter: Pierre Villard
> Assignee: Pierre Villard
> Priority: Minor
>
> I don't know exactly for how long we have this issue but NiFi is not able to shutdown gracefully anymore (standalone and cluster setups). It happens even if no processor/CS/RT is running in the instance:
> {noformat}
> 2017-06-22 23:47:40,448 INFO [main] org.apache.nifi.bootstrap.Command Apache NiFi has accepted the Shutdown Command and is shutting down now
> 2017-06-22 23:47:40,527 INFO [main] org.apache.nifi.bootstrap.Command Waiting for Apache NiFi to finish shutting down...
> 2017-06-22 23:47:42,540 INFO [main] org.apache.nifi.bootstrap.Command Waiting for Apache NiFi to finish shutting down...
> 2017-06-22 23:47:44,553 INFO [main] org.apache.nifi.bootstrap.Command Waiting for Apache NiFi to finish shutting down...
> 2017-06-22 23:47:46,569 INFO [main] org.apache.nifi.bootstrap.Command Waiting for Apache NiFi to finish shutting down...
> 2017-06-22 23:47:48,585 INFO [main] org.apache.nifi.bootstrap.Command Waiting for Apache NiFi to finish shutting down...
> 2017-06-22 23:47:50,601 INFO [main] org.apache.nifi.bootstrap.Command Waiting for Apache NiFi to finish shutting down...
> 2017-06-22 23:47:52,614 INFO [main] org.apache.nifi.bootstrap.Command Waiting for Apache NiFi to finish shutting down...
> 2017-06-22 23:47:54,626 INFO [main] org.apache.nifi.bootstrap.Command Waiting for Apache NiFi to finish shutting down...
> 2017-06-22 23:47:56,640 INFO [main] org.apache.nifi.bootstrap.Command Waiting for Apache NiFi to finish shutting down...
> 2017-06-22 23:47:58,655 INFO [main] org.apache.nifi.bootstrap.Command Waiting for Apache NiFi to finish shutting down...
> 2017-06-22 23:48:00,672 INFO [main] org.apache.nifi.bootstrap.Command Waiting for Apache NiFi to finish shutting down...
> 2017-06-22 23:48:00,681 WARN [main] org.apache.nifi.bootstrap.Command NiFi has not finished shutting down after 20 seconds. Killing process.
> 2017-06-22 23:48:00,714 INFO [main] org.apache.nifi.bootstrap.Command NiFi has finished shutting down.
> {noformat}
> Thanks to [~markap14], the problem seems to be with shutting down the following thread:
> {noformat}
> 2017-06-21 16:23:35,159 INFO [NiFi logging handler] org.apache.nifi.StdOut "Site-to-Site Worker Thread-1" #87 prio=5 os_prio=31 tid=0x7f9ec968c000 nid=0xeb03 waiting on condition [0x000137b4e000]
> 2017-06-21 16:23:35,159 INFO [NiFi logging handler] org.apache.nifi.StdOut >
[GitHub] nifi issue #1963: NIFI-4111 - NiFi shutdown
Github user ijokarumawak commented on the issue: https://github.com/apache/nifi/pull/1963

@pvillard31 I tried this PR, but I still see '2017-07-06 12:05:54,092 INFO [main] org.apache.nifi.bootstrap.Command Waiting for Apache NiFi to finish shutting down...' logs. When I take a thread dump, a few threads are sleeping as reported at NIFI-4111:

```
"Site-to-Site Worker Thread-2@11975" prio=5 tid=0x73 nid=NA sleeping
  java.lang.Thread.State: TIMED_WAITING
    at java.lang.Thread.sleep(Thread.java:-1)
    at java.lang.Thread.sleep(Thread.java:340)
    at java.util.concurrent.TimeUnit.sleep(TimeUnit.java:386)
    at org.apache.nifi.remote.io.socket.SocketChannelInputStream.read(SocketChannelInputStream.java:120)
    at org.apache.nifi.stream.io.ByteCountingInputStream.read(ByteCountingInputStream.java:51)
    at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
    at java.io.BufferedInputStream.read(BufferedInputStream.java:265)
    - locked (a org.apache.nifi.stream.io.BufferedInputStream)
    at org.apache.nifi.remote.io.InterruptableInputStream.read(InterruptableInputStream.java:39)
    at java.io.DataInputStream.readUnsignedShort(DataInputStream.java:337)
    at java.io.DataInputStream.readUTF(DataInputStream.java:589)
    at java.io.DataInputStream.readUTF(DataInputStream.java:564)
    at org.apache.nifi.remote.protocol.RequestType.readRequestType(RequestType.java:36)
    at org.apache.nifi.remote.protocol.socket.SocketFlowFileServerProtocol.getRequestType(SocketFlowFileServerProtocol.java:147)
    at org.apache.nifi.remote.SocketRemoteSiteListener$1$1.run(SocketRemoteSiteListener.java:249)
    at java.lang.Thread.run(Thread.java:745)
```

This PR doesn't do anything to change SocketRemoteSiteListener behavior; is that correct?

--- If your project is set up for it, you can reply to this email and have your reply appear on GitHub as well.
If your project does not have this feature enabled and wishes so, or if the feature is enabled but not working, please contact infrastructure at infrastruct...@apache.org or file a JIRA ticket with INFRA. ---
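The TIMED_WAITING frames in the thread dump above come from a read that polls a non-blocking socket and sleeps between attempts. The following is a minimal, hypothetical sketch of that general pattern (it is not the actual SocketChannelInputStream code; names and timings are illustrative). It shows why such a worker thread can survive shutdown: unless the channel is closed or the sleep loop observes an interrupt or shutdown flag, the thread just keeps cycling between read and sleep.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.ReadableByteChannel;

public class PollingReadSketch {

    // Read one byte, polling a (possibly non-blocking) channel and sleeping
    // between attempts. Returns the byte (0-255) or -1 on end of stream.
    static int pollingRead(ReadableByteChannel channel, long timeoutMillis) throws IOException {
        final ByteBuffer buf = ByteBuffer.allocate(1);
        final long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            buf.clear();
            final int read = channel.read(buf); // 0 means "no data yet" on a non-blocking channel
            if (read != 0) {
                return read < 0 ? -1 : (buf.get(0) & 0xFF);
            }
            try {
                Thread.sleep(10); // this is the sleep visible in the thread dump
            } catch (InterruptedException e) {
                // Honoring the interrupt here is what lets shutdown terminate the thread
                Thread.currentThread().interrupt();
                throw new IOException("Interrupted while polling for data");
            }
        }
        throw new IOException("Timed out waiting for data");
    }
}
```

If the InterruptedException were swallowed and the loop continued, the thread would stay in TIMED_WAITING through shutdown, which matches the symptom reported above.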
[GitHub] nifi pull request #1983: NiFi-2829: Add Date and Time Format Support for Put...
GitHub user yjhyjhyjh0 opened a pull request: https://github.com/apache/nifi/pull/1983 NiFi-2829: Add Date and Time Format Support for PutSQL

Fix unit test for Date and Time type time zone problem while testing PutSQL processor. @paulgibeault made the original PR #1073, #1468. @patricker added support of **DATE** and **TIME** in Epoch format for the PutSQL processor. I've fixed the unit test's different-time-zone problem. The details are listed below.

The original problem with the unit test happens because of different time zones. Internally, without specifying a time zone, java.sql.Date and java.sql.Time use the local time zone to parse the time. As a result, different time zones yield different format results for a given constant time value. This is mentioned by @mattyb149 in https://github.com/apache/nifi/pull/1524. The problem is currently solved by specifying a time zone (GMT) before the insert and parsing the result with the same time zone.

Currently builds and tests successfully against the newest NiFi version on GitHub, which is 1.4.0-SNAPSHOT.

### For all changes:
- [x] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message?
- [x] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character.
- [x] Has your PR been rebased against the latest commit within the target branch (typically master)?
- [x] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn -Pcontrib-check clean install at the root nifi folder?
- [x] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)?
- [ ] If applicable, have you updated the LICENSE file, including the main LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to .name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build issues and submit an update to your PR as soon as possible.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/yjhyjhyjh0/nifi NIFI-2829

Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi/pull/1983.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #1983

commit c8e916df1ba39e6be52ae9274804740eba1d4df4
Author: deonhuang
Date: 2017-07-03T06:00:22Z

    NiFi-2829: Add Date and Time Format Support for PutSQL
    Fix unit test for Date and Time type time zone problem while testing PutSQL processor
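The time-zone pitfall the PR describes can be reproduced with plain JDK classes. The following is an illustrative sketch, not code from the PR: the class and method names are hypothetical. It shows that parsing the same wall-clock string in different zones yields different epoch values, which is why the fix pins GMT on both the insert and the read-back.

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.TimeZone;

public class TimeZoneSketch {

    // Parse a HH:mm:ss string into epoch millis, using an explicit time zone
    // instead of relying on the JVM's default (the source of the flaky tests).
    static long parseEpoch(String time, String zone) throws ParseException {
        final SimpleDateFormat fmt = new SimpleDateFormat("HH:mm:ss");
        fmt.setTimeZone(TimeZone.getTimeZone(zone));
        return fmt.parse(time).getTime();
    }
}
```

Parsing "12:00:00" in GMT and in GMT+8 differs by exactly eight hours of epoch millis, so a test that formats in one zone and parses in the JVM's default zone only passes on machines in that default zone.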
[jira] [Commented] (NIFI-4135) RangerNiFiAuthorizer should support storing audit info to HDFS
[ https://issues.apache.org/jira/browse/NIFI-4135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16075823#comment-16075823 ] ASF GitHub Bot commented on NIFI-4135: -- Github user YolandaMDavis commented on the issue: https://github.com/apache/nifi/pull/1956 @bbende did add the change to order (couldn't recreate the issue). > RangerNiFiAuthorizer should support storing audit info to HDFS > -- > > Key: NIFI-4135 > URL: https://issues.apache.org/jira/browse/NIFI-4135 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Affects Versions: 1.3.0 >Reporter: Yolanda M. Davis >Assignee: Yolanda M. Davis > > When using Ranger to support authorization an option to log auditing > information to HDFS can be supported. The RangerNiFiAuthorizer should be > prepared to communicate with a hadoop cluster in order to support this > feature. In it's current implementation the authorizer does not have the > hadoop-client jars available as a dependency nor does it support the ability > to refer to the required *.site.xml files in order to communicate without > using the default configuration. Both of these changes are needed in order > to send audit info to HDFS. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[GitHub] nifi issue #1956: NIFI-4135 - added hadoop-client and enhanced Authorizers e...
Github user YolandaMDavis commented on the issue: https://github.com/apache/nifi/pull/1956 @bbende did add the change to order (couldn't recreate the issue).
[jira] [Commented] (NIFI-4154) fix windows line endings in source code
[ https://issues.apache.org/jira/browse/NIFI-4154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16075776#comment-16075776 ] ASF GitHub Bot commented on NIFI-4154: -- GitHub user trkurc opened a pull request: https://github.com/apache/nifi/pull/1982 NIFI-4154 - fix line endings in source code

Thank you for submitting a contribution to Apache NiFi. Note - did not squash commits for easier review. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message?
- [ ] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character.
- [ ] Has your PR been rebased against the latest commit within the target branch (typically master)?
- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn -Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)?
- [ ] If applicable, have you updated the LICENSE file, including the main LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to .name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build issues and submit an update to your PR as soon as possible.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/trkurc/nifi NIFI-4154

Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi/pull/1982.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #1982

commit dc24653601b6bac9586f52c9721cc4cda803c917
Author: Tony Kurc
Date: 2017-07-06T01:34:01Z

    NIFI-4154: fixing line endings in .java

commit 0bc2c5ffe67e26710cd515f963221de8231c53d3
Author: Tony Kurc
Date: 2017-07-06T01:36:53Z

    NIFI-4154: fixing line endings in .html

> fix windows line endings in source code
> ---------------------------------------
>
> Key: NIFI-4154
> URL: https://issues.apache.org/jira/browse/NIFI-4154
> Project: Apache NiFi
> Issue Type: Task
> Reporter: Tony Kurc
> Assignee: Tony Kurc
> Priority: Minor
>
> Looks like some windows line endings snuck into the source tree. This task is to correct that.
[GitHub] nifi pull request #1982: NIFI-4154 - fix line endings in source code
GitHub user trkurc opened a pull request: https://github.com/apache/nifi/pull/1982 NIFI-4154 - fix line endings in source code

Thank you for submitting a contribution to Apache NiFi. Note - did not squash commits for easier review. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message?
- [ ] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character.
- [ ] Has your PR been rebased against the latest commit within the target branch (typically master)?
- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn -Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)?
- [ ] If applicable, have you updated the LICENSE file, including the main LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to .name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build issues and submit an update to your PR as soon as possible.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/trkurc/nifi NIFI-4154

Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi/pull/1982.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #1982

commit dc24653601b6bac9586f52c9721cc4cda803c917
Author: Tony Kurc
Date: 2017-07-06T01:34:01Z

    NIFI-4154: fixing line endings in .java

commit 0bc2c5ffe67e26710cd515f963221de8231c53d3
Author: Tony Kurc
Date: 2017-07-06T01:36:53Z

    NIFI-4154: fixing line endings in .html
[jira] [Created] (NIFI-4154) fix windows line endings in source code
Tony Kurc created NIFI-4154: --- Summary: fix windows line endings in source code Key: NIFI-4154 URL: https://issues.apache.org/jira/browse/NIFI-4154 Project: Apache NiFi Issue Type: Task Reporter: Tony Kurc Assignee: Tony Kurc Priority: Minor Looks like some windows line endings snuck into the source tree. This task is to correct that.
[jira] [Updated] (NIFI-552) Improve LogAttribute property matching
[ https://issues.apache.org/jira/browse/NIFI-552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tony Kurc updated NIFI-552: --- Fix Version/s: 1.4.0

> Improve LogAttribute property matching
> --------------------------------------
>
> Key: NIFI-552
> URL: https://issues.apache.org/jira/browse/NIFI-552
> Project: Apache NiFi
> Issue Type: Improvement
> Components: Extensions
> Affects Versions: 0.0.2
> Reporter: Michael Moser
> Assignee: Michael Hogue
> Priority: Minor
> Fix For: 1.4.0
>
> The LogAttribute properties "Attributes to Log" and "Attributes to Ignore" currently accept a comma separated list of attributes. This becomes unwieldy when you want to ignore a long list of attributes.
> Modify these properties or create new properties allowing users to specify a regular expression. Any attribute which matches the regular expression will add that attribute to the appropriate "include" list or "ignore" list.
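The behavior the ticket asks for can be sketched with java.util.regex. This is an illustrative example of regex-based include/ignore filtering, not the actual LogAttribute implementation; the class and method names are hypothetical.

```java
import java.util.Map;
import java.util.TreeMap;
import java.util.regex.Pattern;

public class AttributeFilterSketch {

    // Keep an attribute when its name matches the include pattern (if any)
    // and does not match the ignore pattern (if any).
    static Map<String, String> filter(Map<String, String> attributes,
                                      String includeRegex, String ignoreRegex) {
        final Pattern include = includeRegex == null ? null : Pattern.compile(includeRegex);
        final Pattern ignore = ignoreRegex == null ? null : Pattern.compile(ignoreRegex);
        final Map<String, String> result = new TreeMap<>();
        for (Map.Entry<String, String> e : attributes.entrySet()) {
            final String name = e.getKey();
            if (include != null && !include.matcher(name).matches()) {
                continue; // not in the include list
            }
            if (ignore != null && ignore.matcher(name).matches()) {
                continue; // explicitly ignored
            }
            result.put(name, e.getValue());
        }
        return result;
    }
}
```

With include ".*" and ignore "my\\..*", every attribute is considered but anything under the "my." prefix is dropped, which is much shorter than enumerating a long comma-separated ignore list.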
[jira] [Resolved] (NIFI-552) Improve LogAttribute property matching
[ https://issues.apache.org/jira/browse/NIFI-552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tony Kurc resolved NIFI-552. Resolution: Fixed

> Improve LogAttribute property matching
> --------------------------------------
>
> Key: NIFI-552
> URL: https://issues.apache.org/jira/browse/NIFI-552
> Project: Apache NiFi
> Issue Type: Improvement
> Components: Extensions
> Affects Versions: 0.0.2
> Reporter: Michael Moser
> Assignee: Michael Hogue
> Priority: Minor
> Fix For: 1.4.0
>
> The LogAttribute properties "Attributes to Log" and "Attributes to Ignore" currently accept a comma separated list of attributes. This becomes unwieldy when you want to ignore a long list of attributes.
> Modify these properties or create new properties allowing users to specify a regular expression. Any attribute which matches the regular expression will add that attribute to the appropriate "include" list or "ignore" list.
[jira] [Commented] (NIFI-552) Improve LogAttribute property matching
[ https://issues.apache.org/jira/browse/NIFI-552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16075752#comment-16075752 ] ASF GitHub Bot commented on NIFI-552: - Github user asfgit closed the pull request at: https://github.com/apache/nifi/pull/1981

> Improve LogAttribute property matching
> --------------------------------------
>
> Key: NIFI-552
> URL: https://issues.apache.org/jira/browse/NIFI-552
> Project: Apache NiFi
> Issue Type: Improvement
> Components: Extensions
> Affects Versions: 0.0.2
> Reporter: Michael Moser
> Assignee: Michael Hogue
> Priority: Minor
>
> The LogAttribute properties "Attributes to Log" and "Attributes to Ignore" currently accept a comma separated list of attributes. This becomes unwieldy when you want to ignore a long list of attributes.
> Modify these properties or create new properties allowing users to specify a regular expression. Any attribute which matches the regular expression will add that attribute to the appropriate "include" list or "ignore" list.
[jira] [Commented] (NIFI-552) Improve LogAttribute property matching
[ https://issues.apache.org/jira/browse/NIFI-552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16075751#comment-16075751 ] ASF subversion and git services commented on NIFI-552: -- Commit e6b166a3a275cb0e4a088ed47607a43f6154df38 in nifi's branch refs/heads/master from m-hogue [ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=e6b166a ] NIFI-552: added regex properties for include and ignore filters in LogAttribute This closes #1981 Signed-off-by: Tony Kurc

> Improve LogAttribute property matching
> --------------------------------------
>
> Key: NIFI-552
> URL: https://issues.apache.org/jira/browse/NIFI-552
> Project: Apache NiFi
> Issue Type: Improvement
> Components: Extensions
> Affects Versions: 0.0.2
> Reporter: Michael Moser
> Assignee: Michael Hogue
> Priority: Minor
>
> The LogAttribute properties "Attributes to Log" and "Attributes to Ignore" currently accept a comma separated list of attributes. This becomes unwieldy when you want to ignore a long list of attributes.
> Modify these properties or create new properties allowing users to specify a regular expression. Any attribute which matches the regular expression will add that attribute to the appropriate "include" list or "ignore" list.
[GitHub] nifi pull request #1981: NIFI-552: Added include/ignore regex properties to ...
Github user asfgit closed the pull request at: https://github.com/apache/nifi/pull/1981
[jira] [Commented] (NIFI-552) Improve LogAttribute property matching
[ https://issues.apache.org/jira/browse/NIFI-552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16075713#comment-16075713 ] ASF GitHub Bot commented on NIFI-552: - Github user trkurc commented on the issue: https://github.com/apache/nifi/pull/1981 reviewing

> Improve LogAttribute property matching
> --------------------------------------
>
> Key: NIFI-552
> URL: https://issues.apache.org/jira/browse/NIFI-552
> Project: Apache NiFi
> Issue Type: Improvement
> Components: Extensions
> Affects Versions: 0.0.2
> Reporter: Michael Moser
> Assignee: Michael Hogue
> Priority: Minor
>
> The LogAttribute properties "Attributes to Log" and "Attributes to Ignore" currently accept a comma separated list of attributes. This becomes unwieldy when you want to ignore a long list of attributes.
> Modify these properties or create new properties allowing users to specify a regular expression. Any attribute which matches the regular expression will add that attribute to the appropriate "include" list or "ignore" list.
[GitHub] nifi issue #1981: NIFI-552: Added include/ignore regex properties to LogAttr...
Github user trkurc commented on the issue: https://github.com/apache/nifi/pull/1981 reviewing
[jira] [Commented] (NIFI-4057) Docker Image is twice as large as necessary
[ https://issues.apache.org/jira/browse/NIFI-4057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16075490#comment-16075490 ] Adam Taft commented on NIFI-4057: - [~aldrin] Can you please look this over and target this for 1.4.0? Since you've invested a lot in the NIFI docker image, I was hoping you could signoff on these changes? This will reduce the docker image size by half. We should get this in for the next minor release, if it checks out.

> Docker Image is twice as large as necessary
> -------------------------------------------
>
> Key: NIFI-4057
> URL: https://issues.apache.org/jira/browse/NIFI-4057
> Project: Apache NiFi
> Issue Type: Bug
> Components: Docker
> Affects Versions: 1.2.0, 1.3.0
> Reporter: Jordan Moore
> Priority: Minor
>
> By calling {{chown}} as a secondary {{RUN}} command, you effectively double the size of image by creating a Docker layer of the same size as the extracted binary.
> See GitHub discussion: https://github.com/apache/nifi/pull/1372#issuecomment-307592287
> *Expectation*
> The resultant Docker image should be no larger than the Base image + the size required by extracting the Nifi binaries.
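The layer doubling works like this: each Dockerfile instruction produces an immutable layer, and a chown issued in a later RUN rewrites every file it touches into a new layer of roughly the same size as the original. A hypothetical sketch of the problem and one way to avoid it (paths, archive names, and the nifi user are illustrative, not taken from the actual NiFi Dockerfile):

```dockerfile
# Anti-pattern: the extraction layer holds the binaries, then the chown
# layer duplicates all of them with different ownership metadata.
#   ADD nifi-bin.tar.gz /opt/nifi
#   RUN chown -R nifi:nifi /opt/nifi

# One fix: perform extraction, ownership change, and cleanup in a single
# RUN so only one layer ever contains the files.
RUN tar -xzf /tmp/nifi-bin.tar.gz -C /opt/nifi \
    && chown -R nifi:nifi /opt/nifi \
    && rm /tmp/nifi-bin.tar.gz
```

On Docker versions that support it, `COPY --chown=nifi:nifi` achieves the same result declaratively, since ownership is set when the layer is first written.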
[jira] [Commented] (NIFI-3281) Error on passing 'ftp.listing.user' from ListFTP to FetchSFTP
[ https://issues.apache.org/jira/browse/NIFI-3281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16075473#comment-16075473 ] ASF GitHub Bot commented on NIFI-3281: -- Github user joewitt commented on a diff in the pull request: https://github.com/apache/nifi/pull/1974#discussion_r125764994

--- Diff: nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/util/FTPTransfer.java ---
@@ -304,6 +304,12 @@ public void flush() throws IOException { }
 @Override
+public void flush(final FlowFile flowFile) throws IOException {
+final FTPClient client = getClient(flowFile);
+client.completePendingCommand();
--- End diff --

or at least in this method's case return that boolean and explain the same meaning so the caller can decide case by case how/if to handle. Good catch

> Error on passing 'ftp.listing.user' from ListFTP to FetchSFTP
> -------------------------------------------------------------
>
> Key: NIFI-3281
> URL: https://issues.apache.org/jira/browse/NIFI-3281
> Project: Apache NiFi
> Issue Type: Bug
> Components: Core Framework
> Affects Versions: 1.1.1
> Reporter: Jakhongir Ashrapov
> Assignee: Pierre Villard
> Priority: Minor
>
> Cannot get `ftp.listing.user` as EL in FetchFTP when listing files with ListFTP. Following exception is thrown:
> IOException: Could not login for user ''
[GitHub] nifi pull request #1974: NIFI-3281 - fix for (S)FTP processors when using EL...
Github user joewitt commented on a diff in the pull request: https://github.com/apache/nifi/pull/1974#discussion_r125764994

--- Diff: nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/util/FTPTransfer.java ---
@@ -304,6 +304,12 @@ public void flush() throws IOException { }
 @Override
+public void flush(final FlowFile flowFile) throws IOException {
+final FTPClient client = getClient(flowFile);
+client.completePendingCommand();
--- End diff --

or at least in this method's case return that boolean and explain the same meaning so the caller can decide case by case how/if to handle. Good catch
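The reviewer's suggestion is to surface the boolean from completePendingCommand() rather than discard it. A minimal sketch of that idea (the Client interface below is a stand-in for org.apache.commons.net.ftp.FTPClient so the example is self-contained; the real method signature is not confirmed by this thread):

```java
import java.io.IOException;

public class FlushSketch {

    // Stand-in for org.apache.commons.net.ftp.FTPClient's single relevant method.
    interface Client {
        boolean completePendingCommand() throws IOException;
    }

    // Return the completion status instead of swallowing it, so each caller
    // decides case by case whether "false" (command did not complete cleanly)
    // is an error for its use case.
    static boolean flush(Client client) throws IOException {
        return client.completePendingCommand();
    }
}
```

Compared with the diff above, the only change is the return type: the caller, not flush(), chooses how to handle an incomplete transfer.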
[jira] [Commented] (NIFI-552) Improve LogAttribute property matching
[ https://issues.apache.org/jira/browse/NIFI-552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16075445#comment-16075445 ] ASF GitHub Bot commented on NIFI-552: - GitHub user m-hogue opened a pull request: https://github.com/apache/nifi/pull/1981 NIFI-552: Added include/ignore regex properties to LogAttribute

Thank you for submitting a contribution to Apache NiFi. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message?
- [ ] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character.
- [ ] Has your PR been rebased against the latest commit within the target branch (typically master)?
- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn -Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)?
- [ ] If applicable, have you updated the LICENSE file, including the main LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to .name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build issues and submit an update to your PR as soon as possible.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/m-hogue/nifi NIFI-552

Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi/pull/1981.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #1981

commit 759d1369a9fbc85e9dbdc70e1861c2d4db89a40b
Author: m-hogue
Date: 2017-07-05T21:10:58Z

    NIFI-552: added regex properties for include and ignore filters in LogAttribute

> Improve LogAttribute property matching
> --------------------------------------
>
> Key: NIFI-552
> URL: https://issues.apache.org/jira/browse/NIFI-552
> Project: Apache NiFi
> Issue Type: Improvement
> Components: Extensions
> Affects Versions: 0.0.2
> Reporter: Michael Moser
> Assignee: Michael Hogue
> Priority: Minor
>
> The LogAttribute properties "Attributes to Log" and "Attributes to Ignore" currently accept a comma separated list of attributes. This becomes unwieldy when you want to ignore a long list of attributes.
> Modify these properties or create new properties allowing users to specify a regular expression. Any attribute which matches the regular expression will add that attribute to the appropriate "include" list or "ignore" list.
[jira] [Commented] (NIFI-552) Improve LogAttribute property matching
[ https://issues.apache.org/jira/browse/NIFI-552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16075441#comment-16075441 ] Michael Hogue commented on NIFI-552: I've elected to add properties to support regex matching rather than replace the existing ones. The way I've implemented it should be backward compatible and shouldn't break anyone who used the existing properties. > Improve LogAttribute property matching > -- > > Key: NIFI-552 > URL: https://issues.apache.org/jira/browse/NIFI-552 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Affects Versions: 0.0.2 >Reporter: Michael Moser >Assignee: Michael Hogue >Priority: Minor > > The LogAttribute properties "Attributes to Log" and "Attributes to Ignore" > currently accept a comma separated list of attributes. This becomes unwieldy > when you want to ignore a long list of attributes. > Modify these properties or create new properties allowing users to specify a > regular expression. Any attribute which matches the regular expression will > add that attribute to the appropriate "include" list or "ignore" list. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
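The backward compatible filtering described above can be sketched roughly as follows; the class, method, and parameter names here are illustrative stand-ins, not the properties actually added in the PR:

```java
import java.util.Map;
import java.util.Set;
import java.util.TreeSet;
import java.util.regex.Pattern;
import java.util.stream.Collectors;

public class AttributeRegexFilterSketch {

    // Returns the attribute keys that would be logged: keys must match the
    // include regex (if one is set) and must not match the ignore regex
    // (if one is set). Null regexes leave the existing behavior untouched,
    // which is what keeps the change backward compatible.
    static Set<String> attributesToLog(Map<String, String> attributes,
                                       String includeRegex, String ignoreRegex) {
        Pattern include = includeRegex == null ? null : Pattern.compile(includeRegex);
        Pattern ignore = ignoreRegex == null ? null : Pattern.compile(ignoreRegex);
        return attributes.keySet().stream()
                .filter(k -> include == null || include.matcher(k).matches())
                .filter(k -> ignore == null || !ignore.matcher(k).matches())
                .collect(Collectors.toCollection(TreeSet::new));
    }
}
```

With attributes {filename, path, uuid}, an include regex of `f.*|p.*` and an ignore regex of `path` would log only `filename`.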
[jira] [Commented] (NIFI-552) Improve LogAttribute property matching
[ https://issues.apache.org/jira/browse/NIFI-552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16075440#comment-16075440 ] Michael Hogue commented on NIFI-552: Work here: https://github.com/m-hogue/nifi/tree/NIFI-552 > Improve LogAttribute property matching > -- > > Key: NIFI-552 > URL: https://issues.apache.org/jira/browse/NIFI-552 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Affects Versions: 0.0.2 >Reporter: Michael Moser >Assignee: Michael Hogue >Priority: Minor > > The LogAttribute properties "Attributes to Log" and "Attributes to Ignore" > currently accept a comma separated list of attributes. This becomes unwieldy > when you want to ignore a long list of attributes. > Modify these properties or create new properties allowing users to specify a > regular expression. Any attribute which matches the regular expression will > add that attribute to the appropriate "include" list or "ignore" list. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (NIFI-4151) Slow response times when requesting Process Group Status
[ https://issues.apache.org/jira/browse/NIFI-4151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16075417#comment-16075417 ] ASF GitHub Bot commented on NIFI-4151: -- Github user mcgilman commented on the issue: https://github.com/apache/nifi/pull/1979 @markap14 Looks like the proposed changes are causing unit test failures. > Slow response times when requesting Process Group Status > > > Key: NIFI-4151 > URL: https://issues.apache.org/jira/browse/NIFI-4151 > Project: Apache NiFi > Issue Type: Bug >Reporter: Mark Payne >Assignee: Mark Payne > > I have a flow with > 1,000 Process Groups and 2500 Processors. A few thousand > connections and input/output ports as well. When I refresh stats it is taking > 3-4 seconds to pull back the status. And when I go to the Summary table, it's > taking about 8 seconds. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (NIFI-4153) Site-to-Site causing "There are too many outstanding HTTP requests" error message
[ https://issues.apache.org/jira/browse/NIFI-4153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16075407#comment-16075407 ] ASF GitHub Bot commented on NIFI-4153: -- Github user mcgilman commented on the issue: https://github.com/apache/nifi/pull/1980 Will review... > Site-to-Site causing "There are too many outstanding HTTP requests" error > message > - > > Key: NIFI-4153 > URL: https://issues.apache.org/jira/browse/NIFI-4153 > Project: Apache NiFi > Issue Type: Bug >Affects Versions: 1.2.0, 1.3.0 >Reporter: Mark Payne >Assignee: Mark Payne >Priority: Critical > Fix For: 1.4.0 > > > When using site-to-site in cluster, we sometimes see the following error > message showing up in the log: > {code} > 2017-07-05 16:11:12,452 INFO [NiFi Web Server-318] > o.a.n.w.a.c.IllegalStateExceptionMapper java.lang.IllegalStateException: > There are too many outstanding HTTP requests with a total 100 outstanding > requests. Returning Conflict response. > {code} > Once this occurs, it keeps occurring, sometimes making the UI unusable until > the nodes are restarted. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (NIFI-4153) Site-to-Site causing "There are too many outstanding HTTP requests" error message
[ https://issues.apache.org/jira/browse/NIFI-4153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Payne updated NIFI-4153: - Status: Patch Available (was: Open) > Site-to-Site causing "There are too many outstanding HTTP requests" error > message > - > > Key: NIFI-4153 > URL: https://issues.apache.org/jira/browse/NIFI-4153 > Project: Apache NiFi > Issue Type: Bug >Affects Versions: 1.3.0, 1.2.0 >Reporter: Mark Payne >Assignee: Mark Payne >Priority: Critical > Fix For: 1.4.0 > > > When using site-to-site in cluster, we sometimes see the following error > message showing up in the log: > {code} > 2017-07-05 16:11:12,452 INFO [NiFi Web Server-318] > o.a.n.w.a.c.IllegalStateExceptionMapper java.lang.IllegalStateException: > There are too many outstanding HTTP requests with a total 100 outstanding > requests. Returning Conflict response. > {code} > Once this occurs, it keeps occurring, sometimes making the UI unusable until > the nodes are restarted. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (NIFI-4153) Site-to-Site causing "There are too many outstanding HTTP requests" error message
[ https://issues.apache.org/jira/browse/NIFI-4153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16075380#comment-16075380 ] ASF GitHub Bot commented on NIFI-4153: -- GitHub user markap14 opened a pull request: https://github.com/apache/nifi/pull/1980 NIFI-4153: Use a LinkedBlockingQueue instead of a SynchronousQueue for Request Replicator's thread pool so that requests will queue when all threads are active, instead of throwing an Exception
You can merge this pull request into a Git repository by running: $ git pull https://github.com/markap14/nifi NIFI-4153 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi/pull/1980.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #1980 commit 9d3a7618395e78246a43a90ba479b3e47450fb19 Author: Mark Payne Date: 2017-07-05T20:35:04Z NIFI-4153: Use a LinkedBlockingQueue instead of a SynchronousQueue for Request Replicator's thread pool so that requests will queue when all threads are active, instead of throwing an Exception > Site-to-Site causing "There are too many outstanding HTTP requests" error > message > - > > Key: NIFI-4153 > URL: https://issues.apache.org/jira/browse/NIFI-4153 > Project: Apache NiFi > Issue Type: Bug >Affects Versions: 1.2.0, 1.3.0 >Reporter: Mark Payne >Assignee: Mark Payne >Priority: Critical > Fix For: 1.4.0 > > > When using site-to-site in cluster, we sometimes see the following error > message showing up in the log: > {code} > 2017-07-05 16:11:12,452 INFO [NiFi Web Server-318] > o.a.n.w.a.c.IllegalStateExceptionMapper java.lang.IllegalStateException: > There are too many outstanding HTTP requests with a total 100 outstanding > requests. Returning Conflict response. > {code} > Once this occurs, it keeps occurring, sometimes making the UI unusable until > the nodes are restarted. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
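The difference the commit above exploits can be seen in a minimal, self-contained sketch (class and method names are illustrative, not NiFi's): with a fixed-size pool, a SynchronousQueue rejects a submission when no thread is idle, while a LinkedBlockingQueue lets the extra task wait.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class ReplicatorQueueSketch {

    // Fixed-size pool whose excess tasks wait in an unbounded queue instead
    // of being rejected, mirroring the change made for the request replicator.
    static ThreadPoolExecutor queuingPool(int threads) {
        return new ThreadPoolExecutor(threads, threads, 60L, TimeUnit.SECONDS,
                new LinkedBlockingQueue<Runnable>());
    }

    public static void main(String[] args) throws InterruptedException {
        ThreadPoolExecutor pool = queuingPool(2);
        CountDownLatch release = new CountDownLatch(1);
        // Occupy both threads, then submit one more task than the pool can run.
        for (int i = 0; i < 3; i++) {
            pool.submit(() -> {
                try {
                    release.await();
                } catch (InterruptedException ignored) {
                }
            });
        }
        // The third task is queued rather than rejected with an exception.
        System.out.println("queued=" + pool.getQueue().size()); // prints queued=1
        release.countDown();
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```

With a SynchronousQueue in the same constructor, the third submit would instead fail with a RejectedExecutionException, which is the source of the "too many outstanding HTTP requests" responses described in the issue.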
[jira] [Updated] (NIFI-4153) Site-to-Site causing "There are too many outstanding HTTP requests" error message
[ https://issues.apache.org/jira/browse/NIFI-4153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Payne updated NIFI-4153: - Priority: Critical (was: Major) > Site-to-Site causing "There are too many outstanding HTTP requests" error > message > - > > Key: NIFI-4153 > URL: https://issues.apache.org/jira/browse/NIFI-4153 > Project: Apache NiFi > Issue Type: Bug >Affects Versions: 1.2.0, 1.3.0 >Reporter: Mark Payne >Assignee: Mark Payne >Priority: Critical > Fix For: 1.4.0 > > > When using site-to-site in cluster, we sometimes see the following error > message showing up in the log: > {code} > 2017-07-05 16:11:12,452 INFO [NiFi Web Server-318] > o.a.n.w.a.c.IllegalStateExceptionMapper java.lang.IllegalStateException: > There are too many outstanding HTTP requests with a total 100 outstanding > requests. Returning Conflict response. > {code} > Once this occurs, it keeps occurring, sometimes making the UI unusable until > the nodes are restarted. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (NIFI-4153) Site-to-Site causing "There are too many outstanding HTTP requests" error message
Mark Payne created NIFI-4153: Summary: Site-to-Site causing "There are too many outstanding HTTP requests" error message Key: NIFI-4153 URL: https://issues.apache.org/jira/browse/NIFI-4153 Project: Apache NiFi Issue Type: Bug Affects Versions: 1.3.0, 1.2.0 Reporter: Mark Payne Assignee: Mark Payne Fix For: 1.4.0 When using site-to-site in cluster, we sometimes see the following error message showing up in the log: {code} 2017-07-05 16:11:12,452 INFO [NiFi Web Server-318] o.a.n.w.a.c.IllegalStateExceptionMapper java.lang.IllegalStateException: There are too many outstanding HTTP requests with a total 100 outstanding requests. Returning Conflict response. {code} Once this occurs, it keeps occurring, sometimes making the UI unusable until the nodes are restarted. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (NIFI-4143) Make configurable maximum number of concurrent requests
[ https://issues.apache.org/jira/browse/NIFI-4143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16075303#comment-16075303 ] ASF GitHub Bot commented on NIFI-4143: -- Github user mcgilman commented on the issue: https://github.com/apache/nifi/pull/1962 @pvillard31 Just started reviewing this PR. I'm wondering if the property should be associated with the cluster section instead of the web properties. This property is only relevant when clustered and am concerned it may be confusing being colocated with the jetty thread configuration. Since that thread pool drives the size of the thread pool that Jetty uses it could be confused with the maximum number of concurrent requests. It might make sense to associate this property with other request replication properties. For instance the replication thread pool size: ``` nifi.cluster.node.protocol.threads=${nifi.cluster.node.protocol.threads} nifi.cluster.node.protocol.max.threads=${nifi.cluster.node.protocol.max.threads} ``` Or the replication timeouts: ``` nifi.cluster.node.connection.timeout=${nifi.cluster.node.connection.timeout} nifi.cluster.node.read.timeout=${nifi.cluster.node.read.timeout} ``` Thoughts? > Make configurable maximum number of concurrent requests > --- > > Key: NIFI-4143 > URL: https://issues.apache.org/jira/browse/NIFI-4143 > Project: Apache NiFi > Issue Type: Improvement > Components: Core Framework >Reporter: Pierre Villard >Assignee: Pierre Villard > > At the moment, the maximum number of concurrent requests is hard coded in > {{ThreadPoolRequestReplicator}} > https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-cluster/src/main/java/org/apache/nifi/cluster/coordination/http/replication/ThreadPoolRequestReplicator.java > The value is equal to 100. 
> In some situations where multiple factors are combined (large cluster, S2S to > load balance data in the cluster, multiple users accessing the UI), the limit > can be reached and the UI may become intermittently unavailable with the > message: "There are too many outstanding HTTP requests with a total 100 > outstanding requests". > This value should be configurable in nifi.properties allowing users to > increase the value. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
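Making the limit configurable with a fallback to the current hard coded value could look like the following sketch. The property key and class are hypothetical, since the actual name and placement were still under discussion in the review above:

```java
import java.util.Properties;

public class MaxRequestsConfigSketch {

    // Hypothetical property key, for illustration only; the real key and its
    // section in nifi.properties were still being decided in the PR review.
    static final String MAX_REQUESTS_PROP = "nifi.cluster.node.max.concurrent.requests";
    static final int DEFAULT_MAX_REQUESTS = 100; // the value currently hard coded

    // Falls back to the current hard coded limit when the property is absent,
    // preserving today's behavior for existing installations.
    static int maxConcurrentRequests(Properties props) {
        return Integer.parseInt(
                props.getProperty(MAX_REQUESTS_PROP, String.valueOf(DEFAULT_MAX_REQUESTS)));
    }

    public static void main(String[] args) {
        Properties props = new Properties();
        System.out.println(maxConcurrentRequests(props)); // prints 100
        props.setProperty(MAX_REQUESTS_PROP, "500");
        System.out.println(maxConcurrentRequests(props)); // prints 500
    }
}
```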
[jira] [Resolved] (NIFI-3193) Update ConsumeAMQP and PublishAMQP to retrieve username from certificate common name
[ https://issues.apache.org/jira/browse/NIFI-3193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tony Kurc resolved NIFI-3193. - Resolution: Fixed > Update ConsumeAMQP and PublishAMQP to retrieve username from certificate > common name > > > Key: NIFI-3193 > URL: https://issues.apache.org/jira/browse/NIFI-3193 > Project: Apache NiFi > Issue Type: New Feature > Components: Extensions >Affects Versions: 1.0.0, 1.1.0, 0.7.1 >Reporter: Brian >Assignee: Michael Hogue > Fix For: 1.4.0 > > > At the moment the NiFi AMQP processors can establish a SSL connection to > RabbitMQ but still user a user defined username and password to authenticate. > When using certificates RabbitMQ allows you to use to COMMON_NAME from the > certificate to authenticate instead of providing a username and password. > Unfortunately the NiFi processors do not support this so I would like to > request an update to the processors to enable this functionality. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (NIFI-3193) Update ConsumeAMQP and PublishAMQP to retrieve username from certificate common name
[ https://issues.apache.org/jira/browse/NIFI-3193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tony Kurc updated NIFI-3193: Fix Version/s: 1.4.0 > Update ConsumeAMQP and PublishAMQP to retrieve username from certificate > common name > > > Key: NIFI-3193 > URL: https://issues.apache.org/jira/browse/NIFI-3193 > Project: Apache NiFi > Issue Type: New Feature > Components: Extensions >Affects Versions: 1.0.0, 1.1.0, 0.7.1 >Reporter: Brian >Assignee: Michael Hogue > Fix For: 1.4.0 > > > At the moment the NiFi AMQP processors can establish a SSL connection to > RabbitMQ but still user a user defined username and password to authenticate. > When using certificates RabbitMQ allows you to use to COMMON_NAME from the > certificate to authenticate instead of providing a username and password. > Unfortunately the NiFi processors do not support this so I would like to > request an update to the processors to enable this functionality. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (NIFI-3193) Update ConsumeAMQP and PublishAMQP to retrieve username from certificate common name
[ https://issues.apache.org/jira/browse/NIFI-3193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16075277#comment-16075277 ] ASF GitHub Bot commented on NIFI-3193: -- Github user asfgit closed the pull request at: https://github.com/apache/nifi/pull/1971 > Update ConsumeAMQP and PublishAMQP to retrieve username from certificate > common name > > > Key: NIFI-3193 > URL: https://issues.apache.org/jira/browse/NIFI-3193 > Project: Apache NiFi > Issue Type: New Feature > Components: Extensions >Affects Versions: 1.0.0, 1.1.0, 0.7.1 >Reporter: Brian >Assignee: Michael Hogue > > At the moment the NiFi AMQP processors can establish a SSL connection to > RabbitMQ but still user a user defined username and password to authenticate. > When using certificates RabbitMQ allows you to use to COMMON_NAME from the > certificate to authenticate instead of providing a username and password. > Unfortunately the NiFi processors do not support this so I would like to > request an update to the processors to enable this functionality. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (NIFI-3193) Update ConsumeAMQP and PublishAMQP to retrieve username from certificate common name
[ https://issues.apache.org/jira/browse/NIFI-3193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16075276#comment-16075276 ] ASF subversion and git services commented on NIFI-3193: --- Commit 47eece57980782a245a13c693d0bffc2c78f5695 in nifi's branch refs/heads/master from m-hogue [ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=47eece5 ] NIFI-3193: added ability to authenticate using cert common names This closes #1971. Signed-off-by: Tony KurcAlso reviewed by Pierre Villard > Update ConsumeAMQP and PublishAMQP to retrieve username from certificate > common name > > > Key: NIFI-3193 > URL: https://issues.apache.org/jira/browse/NIFI-3193 > Project: Apache NiFi > Issue Type: New Feature > Components: Extensions >Affects Versions: 1.0.0, 1.1.0, 0.7.1 >Reporter: Brian >Assignee: Michael Hogue > > At the moment the NiFi AMQP processors can establish a SSL connection to > RabbitMQ but still user a user defined username and password to authenticate. > When using certificates RabbitMQ allows you to use to COMMON_NAME from the > certificate to authenticate instead of providing a username and password. > Unfortunately the NiFi processors do not support this so I would like to > request an update to the processors to enable this functionality. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
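The commit above wires certificate common-name authentication into the AMQP processors; the CN extraction itself can be sketched with plain JDK classes (the class and method names are illustrative, not NiFi's actual helper):

```java
import javax.naming.ldap.LdapName;
import javax.naming.ldap.Rdn;

public class CommonNameSketch {

    // Pulls the CN out of an X.500 subject DN, e.g. "CN=nifi-user, OU=NiFi".
    static String commonName(String subjectDn) throws Exception {
        for (Rdn rdn : new LdapName(subjectDn).getRdns()) {
            if ("CN".equalsIgnoreCase(rdn.getType())) {
                return rdn.getValue().toString();
            }
        }
        // No CN present; a caller could fall back to username/password here.
        return null;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(commonName("CN=nifi-user, OU=NiFi, O=Apache")); // prints nifi-user
    }
}
```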
[jira] [Assigned] (NIFI-552) Improve LogAttribute property matching
[ https://issues.apache.org/jira/browse/NIFI-552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Hogue reassigned NIFI-552: -- Assignee: Michael Hogue > Improve LogAttribute property matching > -- > > Key: NIFI-552 > URL: https://issues.apache.org/jira/browse/NIFI-552 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Affects Versions: 0.0.2 >Reporter: Michael Moser >Assignee: Michael Hogue >Priority: Minor > > The LogAttribute properties "Attributes to Log" and "Attributes to Ignore" > currently accept a comma separated list of attributes. This becomes unwieldy > when you want to ignore a long list of attributes. > Modify these properties or create new properties allowing users to specify a > regular expression. Any attribute which matches the regular expression will > add that attribute to the appropriate "include" list or "ignore" list. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (NIFI-3281) Error on passing 'ftp.listing.user' from ListFTP to FetchSFTP
[ https://issues.apache.org/jira/browse/NIFI-3281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16075217#comment-16075217 ] ASF GitHub Bot commented on NIFI-3281: -- Github user m-hogue commented on a diff in the pull request: https://github.com/apache/nifi/pull/1974#discussion_r125717086 --- Diff: nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/util/FTPTransfer.java --- @@ -304,6 +304,12 @@ public void flush() throws IOException { } @Override +public void flush(final FlowFile flowFile) throws IOException { +final FTPClient client = getClient(flowFile); +client.completePendingCommand(); --- End diff -- Should we handle the case where `client.completePendingCommand()` returns false? Per the javadocs [1], it returns false if the command couldn't be completed. [1] https://commons.apache.org/proper/commons-net/apidocs/org/apache/commons/net/ftp/FTPClient.html#completePendingCommand() > Error on passing 'ftp.listing.user' from ListFTP to FetchSFTP > - > > Key: NIFI-3281 > URL: https://issues.apache.org/jira/browse/NIFI-3281 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 1.1.1 >Reporter: Jakhongir Ashrapov >Assignee: Pierre Villard >Priority: Minor > > Cannot get `ftp.listing.user` as EL in FetchFTP when listing files with > ListFTP. Following exception is thrown: > IOException: Could not login for user '' -- This message was sent by Atlassian JIRA (v6.4.14#64029)
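The handling the review asks about could look like the sketch below; the nested interface is a minimal stand-in for the two commons-net FTPClient methods involved, not NiFi's actual types:

```java
import java.io.IOException;

public class CompletePendingSketch {

    // Minimal stand-in for the relevant slice of commons-net's FTPClient.
    interface PendingCommandClient {
        boolean completePendingCommand() throws IOException;
        String getReplyString();
    }

    // Surfaces a failed completion as an IOException instead of ignoring the
    // boolean return value, as the review comment suggests.
    static void flushOrThrow(PendingCommandClient client) throws IOException {
        if (!client.completePendingCommand()) {
            throw new IOException("Failed to complete pending command: "
                    + client.getReplyString());
        }
    }
}
```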
[GitHub] nifi-minifi-cpp pull request #116: Minifi 341 - Tailfile Delimiter for input
Github user asfgit closed the pull request at: https://github.com/apache/nifi-minifi-cpp/pull/116 --- If your project is set up for it, you can reply to this email and have your reply appear on GitHub as well. If your project does not have this feature enabled and wishes so, or if the feature is enabled but not working, please contact infrastructure at infrastruct...@apache.org or file a JIRA ticket with INFRA. ---
[GitHub] nifi issue #1971: NIFI-3193: added ability to authenticate with AMQP using c...
Github user trkurc commented on the issue: https://github.com/apache/nifi/pull/1971 looks good, I'll work on merging it in
[jira] [Commented] (NIFI-3193) Update ConsumeAMQP and PublishAMQP to retrieve username from certificate common name
[ https://issues.apache.org/jira/browse/NIFI-3193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16075130#comment-16075130 ] ASF GitHub Bot commented on NIFI-3193: -- Github user trkurc commented on the issue: https://github.com/apache/nifi/pull/1971 looks good, I'll work on merging it in > Update ConsumeAMQP and PublishAMQP to retrieve username from certificate > common name > > > Key: NIFI-3193 > URL: https://issues.apache.org/jira/browse/NIFI-3193 > Project: Apache NiFi > Issue Type: New Feature > Components: Extensions >Affects Versions: 1.0.0, 1.1.0, 0.7.1 >Reporter: Brian >Assignee: Michael Hogue > > At the moment the NiFi AMQP processors can establish an SSL connection to > RabbitMQ but still use a user-defined username and password to authenticate. > When using certificates RabbitMQ allows you to use the COMMON_NAME from the > certificate to authenticate instead of providing a username and password. > Unfortunately the NiFi processors do not support this, so I would like to > request an update to the processors to enable this functionality.
[jira] [Created] (NIFI-4152) Create TCP Record Processors
Bryan Bende created NIFI-4152: - Summary: Create TCP Record Processors Key: NIFI-4152 URL: https://issues.apache.org/jira/browse/NIFI-4152 Project: Apache NiFi Issue Type: Improvement Reporter: Bryan Bende Assignee: Bryan Bende Priority: Minor We should implement a ListenTCPRecord that can pass the underlying InputStream from a TCP connection to a record reader, and also a PutTCPRecord that can stream a large flow file of records over a TCP connection.
[jira] [Commented] (NIFI-4151) Slow response times when requesting Process Group Status
[ https://issues.apache.org/jira/browse/NIFI-4151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16075096#comment-16075096 ] ASF GitHub Bot commented on NIFI-4151: -- Github user mcgilman commented on the issue: https://github.com/apache/nifi/pull/1979 Will review... > Slow response times when requesting Process Group Status > > > Key: NIFI-4151 > URL: https://issues.apache.org/jira/browse/NIFI-4151 > Project: Apache NiFi > Issue Type: Bug >Reporter: Mark Payne >Assignee: Mark Payne > > I have a flow with > 1,000 Process Groups and 2500 Processors. A few thousand > connections and input/output ports as well. When I refresh stats it is taking > 3-4 seconds to pull back the status. And when I go to the Summary table, it's > taking about 8 seconds.
[GitHub] nifi issue #1979: NIFI-4151: Updated UpdateAttribute to only create JAXB Con...
Github user mcgilman commented on the issue: https://github.com/apache/nifi/pull/1979 Will review...
[jira] [Commented] (NIFI-1763) Provide an integration with 'Schema registry for Kafka'
[ https://issues.apache.org/jira/browse/NIFI-1763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16075060#comment-16075060 ] ASF GitHub Bot commented on NIFI-1763: -- Github user YolandaMDavis commented on the issue: https://github.com/apache/nifi/pull/1938 @markap14 I can try this out as well > Provide an integration with 'Schema registry for Kafka' > --- > > Key: NIFI-1763 > URL: https://issues.apache.org/jira/browse/NIFI-1763 > Project: Apache NiFi > Issue Type: Wish > Components: Extensions >Reporter: Joseph Witt >Assignee: Mark Payne >Priority: Minor > > Reported on a mailing list question on 13 April 2016 > https://github.com/confluentinc/schema-registry > The registry itself is an ASLv2 licensed codebase. It offers a REST-based > Web API for interaction. It would be good to support integration with it for > users of Kafka so it would register schemas if needed when writing to Kafka > and understand how to parse data based on the indicated schema when reading > from Kafka.
[GitHub] nifi issue #1938: NIFI-1763: Initial implementation of ConfluentSchemaRegist...
Github user YolandaMDavis commented on the issue: https://github.com/apache/nifi/pull/1938 @markap14 I can try this out as well
[GitHub] nifi-minifi-cpp pull request #117: MINIFI-338: Convert processor threads to ...
GitHub user phrocker opened a pull request: https://github.com/apache/nifi-minifi-cpp/pull/117 MINIFI-338: Convert processor threads to use thread pools Thank you for submitting a contribution to Apache NiFi - MiNiFi C++. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken: ### For all changes: - [ ] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? - [ ] Does your PR title start with MINIFI- where is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character. - [ ] Has your PR been rebased against the latest commit within the target branch (typically master)? - [ ] Is your initial contribution a single, squashed commit? ### For code changes: - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the LICENSE file? - [ ] If applicable, have you updated the NOTICE file? ### For documentation related changes: - [ ] Have you ensured that format looks appropriate for the output in which it is rendered? ### Note: Please ensure that once the PR is submitted, you check travis-ci for build issues and submit an update to your PR as soon as possible. 
You can merge this pull request into a Git repository by running: $ git pull https://github.com/phrocker/nifi-minifi-cpp MINIFI-338 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi-minifi-cpp/pull/117.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #117 commit 3388c068429b08bbb2995184f236a3a451a78dc7 Author: Marc Parisi Date: 2017-06-30T14:05:15Z MINIFI-338: Convert processor threads to use thread pools
[jira] [Commented] (NIFI-4060) Create a MergeRecord Processor
[ https://issues.apache.org/jira/browse/NIFI-4060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16074944#comment-16074944 ] ASF GitHub Bot commented on NIFI-4060: -- Github user markap14 commented on a diff in the pull request: https://github.com/apache/nifi/pull/1958#discussion_r125677757 --- Diff: nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/MergeRecord.java --- @@ -0,0 +1,350 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.nifi.processors.standard; + +import java.io.IOException; +import java.io.InputStream; +import java.util.ArrayList; +import java.util.HashSet; +import java.util.List; +import java.util.Optional; +import java.util.Set; +import java.util.concurrent.TimeUnit; +import java.util.concurrent.atomic.AtomicReference; + +import org.apache.nifi.annotation.behavior.InputRequirement; +import org.apache.nifi.annotation.behavior.InputRequirement.Requirement; +import org.apache.nifi.annotation.behavior.ReadsAttribute; +import org.apache.nifi.annotation.behavior.ReadsAttributes; +import org.apache.nifi.annotation.behavior.SideEffectFree; +import org.apache.nifi.annotation.behavior.TriggerWhenEmpty; +import org.apache.nifi.annotation.behavior.WritesAttribute; +import org.apache.nifi.annotation.behavior.WritesAttributes; +import org.apache.nifi.annotation.documentation.CapabilityDescription; +import org.apache.nifi.annotation.documentation.SeeAlso; +import org.apache.nifi.annotation.documentation.Tags; +import org.apache.nifi.annotation.lifecycle.OnStopped; +import org.apache.nifi.avro.AvroTypeUtil; +import org.apache.nifi.components.AllowableValue; +import org.apache.nifi.components.PropertyDescriptor; +import org.apache.nifi.flowfile.FlowFile; +import org.apache.nifi.flowfile.attributes.FragmentAttributes; +import org.apache.nifi.processor.AbstractSessionFactoryProcessor; +import org.apache.nifi.processor.DataUnit; +import org.apache.nifi.processor.ProcessContext; +import org.apache.nifi.processor.ProcessSession; +import org.apache.nifi.processor.ProcessSessionFactory; +import org.apache.nifi.processor.Relationship; +import org.apache.nifi.processor.exception.ProcessException; +import org.apache.nifi.processor.util.FlowFileFilters; +import org.apache.nifi.processor.util.StandardValidators; +import org.apache.nifi.processors.standard.merge.AttributeStrategyUtil; +import org.apache.nifi.processors.standard.merge.RecordBinManager; +import 
org.apache.nifi.schema.access.SchemaNotFoundException; +import org.apache.nifi.serialization.MalformedRecordException; +import org.apache.nifi.serialization.RecordReader; +import org.apache.nifi.serialization.RecordReaderFactory; +import org.apache.nifi.serialization.RecordSetWriterFactory; +import org.apache.nifi.serialization.record.RecordSchema; + + +@SideEffectFree +@TriggerWhenEmpty +@InputRequirement(Requirement.INPUT_REQUIRED) +@Tags({"merge", "record", "content", "correlation", "stream", "event"}) +@CapabilityDescription("This Processor merges together multiple record-oriented FlowFiles into a single FlowFile that contains all of the Records of the input FlowFiles. " ++ "This Processor works by creating 'bins' and then adding FlowFiles to these bins until they are full. Once a bin is full, all of the FlowFiles will be combined into " ++ "a single output FlowFile, and that FlowFile will be routed to the 'merged' Relationship. A bin will consist of potentially many 'like FlowFiles'. In order for two " ++ "FlowFiles to be considered 'like FlowFiles', they must have the same Schema (as identified by the Record Reader) and, if the property " ++ "is set, the same value for the specified attribute. See Processor Usage and Additional Details for more information.") +@ReadsAttributes({ +@ReadsAttribute(attribute
[GitHub] nifi pull request #1958: NIFI-4060: Initial implementation of MergeRecord
Github user markap14 commented on a diff in the pull request: https://github.com/apache/nifi/pull/1958#discussion_r125677757 --- Diff: nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/MergeRecord.java --- @@ -0,0 +1,350 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.nifi.processors.standard; + +import java.io.IOException; +import java.io.InputStream; +import java.util.ArrayList; +import java.util.HashSet; +import java.util.List; +import java.util.Optional; +import java.util.Set; +import java.util.concurrent.TimeUnit; +import java.util.concurrent.atomic.AtomicReference; + +import org.apache.nifi.annotation.behavior.InputRequirement; +import org.apache.nifi.annotation.behavior.InputRequirement.Requirement; +import org.apache.nifi.annotation.behavior.ReadsAttribute; +import org.apache.nifi.annotation.behavior.ReadsAttributes; +import org.apache.nifi.annotation.behavior.SideEffectFree; +import org.apache.nifi.annotation.behavior.TriggerWhenEmpty; +import org.apache.nifi.annotation.behavior.WritesAttribute; +import org.apache.nifi.annotation.behavior.WritesAttributes; +import org.apache.nifi.annotation.documentation.CapabilityDescription; +import org.apache.nifi.annotation.documentation.SeeAlso; +import org.apache.nifi.annotation.documentation.Tags; +import org.apache.nifi.annotation.lifecycle.OnStopped; +import org.apache.nifi.avro.AvroTypeUtil; +import org.apache.nifi.components.AllowableValue; +import org.apache.nifi.components.PropertyDescriptor; +import org.apache.nifi.flowfile.FlowFile; +import org.apache.nifi.flowfile.attributes.FragmentAttributes; +import org.apache.nifi.processor.AbstractSessionFactoryProcessor; +import org.apache.nifi.processor.DataUnit; +import org.apache.nifi.processor.ProcessContext; +import org.apache.nifi.processor.ProcessSession; +import org.apache.nifi.processor.ProcessSessionFactory; +import org.apache.nifi.processor.Relationship; +import org.apache.nifi.processor.exception.ProcessException; +import org.apache.nifi.processor.util.FlowFileFilters; +import org.apache.nifi.processor.util.StandardValidators; +import org.apache.nifi.processors.standard.merge.AttributeStrategyUtil; +import org.apache.nifi.processors.standard.merge.RecordBinManager; +import 
org.apache.nifi.schema.access.SchemaNotFoundException; +import org.apache.nifi.serialization.MalformedRecordException; +import org.apache.nifi.serialization.RecordReader; +import org.apache.nifi.serialization.RecordReaderFactory; +import org.apache.nifi.serialization.RecordSetWriterFactory; +import org.apache.nifi.serialization.record.RecordSchema; + + +@SideEffectFree +@TriggerWhenEmpty +@InputRequirement(Requirement.INPUT_REQUIRED) +@Tags({"merge", "record", "content", "correlation", "stream", "event"}) +@CapabilityDescription("This Processor merges together multiple record-oriented FlowFiles into a single FlowFile that contains all of the Records of the input FlowFiles. " ++ "This Processor works by creating 'bins' and then adding FlowFiles to these bins until they are full. Once a bin is full, all of the FlowFiles will be combined into " ++ "a single output FlowFile, and that FlowFile will be routed to the 'merged' Relationship. A bin will consist of potentially many 'like FlowFiles'. In order for two " ++ "FlowFiles to be considered 'like FlowFiles', they must have the same Schema (as identified by the Record Reader) and, if the property " ++ "is set, the same value for the specified attribute. See Processor Usage and Additional Details for more information.") +@ReadsAttributes({ +@ReadsAttribute(attribute = "fragment.identifier", description = "Applicable only if the property is set to Defragment. " ++ "All FlowFiles with the same value for this attribute will be bundled together."), +@ReadsAttribute(attribute =
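The binning rule in the @CapabilityDescription above — two FlowFiles are "like FlowFiles" only when they share the same schema and, if configured, the same correlation-attribute value — can be illustrated with a plain-Java sketch. `Entry` and `bin()` are invented stand-ins for this example, not NiFi's actual `RecordBinManager` API.

```java
import java.util.*;

// Illustrative sketch of the "like FlowFiles" grouping rule:
// bin key = schema identity + (optional) correlation-attribute value.
public class BinKeySketch {
    // Hypothetical minimal FlowFile stand-in: a schema id plus attributes.
    record Entry(String schemaId, Map<String, String> attributes) {}

    static Map<List<String>, List<Entry>> bin(List<Entry> entries, String correlationAttr) {
        Map<List<String>, List<Entry>> bins = new LinkedHashMap<>();
        for (Entry e : entries) {
            String corr = correlationAttr == null ? "" : e.attributes().getOrDefault(correlationAttr, "");
            // Entries sharing a schema and correlation value land in the same bin.
            bins.computeIfAbsent(List.of(e.schemaId(), corr), k -> new ArrayList<>()).add(e);
        }
        return bins;
    }

    public static void main(String[] args) {
        List<Entry> in = List.of(
            new Entry("avro-1", Map.of("host", "a")),
            new Entry("avro-1", Map.of("host", "a")),
            new Entry("avro-1", Map.of("host", "b")),
            new Entry("avro-2", Map.of("host", "a")));
        // Same schema + same 'host' value merge together; the rest get their own bins.
        System.out.println(bin(in, "host").size()); // prints 3
    }
}
```

In the real processor, a full bin is then written out through the configured Record Writer as a single merged FlowFile routed to 'merged'.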
[jira] [Updated] (NIFI-4151) Slow response times when requesting Process Group Status
[ https://issues.apache.org/jira/browse/NIFI-4151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Payne updated NIFI-4151: - Status: Patch Available (was: Open) > Slow response times when requesting Process Group Status > > > Key: NIFI-4151 > URL: https://issues.apache.org/jira/browse/NIFI-4151 > Project: Apache NiFi > Issue Type: Bug >Reporter: Mark Payne >Assignee: Mark Payne > > I have a flow with > 1,000 Process Groups and 2500 Processors. A few thousand > connections and input/output ports as well. When I refresh stats it is taking > 3-4 seconds to pull back the status. And when I go to the Summary table, it's > taking about 8 seconds.
[jira] [Commented] (NIFI-4151) Slow response times when requesting Process Group Status
[ https://issues.apache.org/jira/browse/NIFI-4151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16074934#comment-16074934 ] ASF GitHub Bot commented on NIFI-4151: -- GitHub user markap14 opened a pull request: https://github.com/apache/nifi/pull/1979 NIFI-4151: Updated UpdateAttribute to only create JAXB Context once; … …Minor performance tweaks to standard validators and StatusMerge.prettyPrint; updated AbstractConfiguredComponent to not create a new ValidationContext each time that validate is called but only when needed; updated FlowController, StandardControllerServiceProvider, and StandardProcessGroup so that component lookups can be performed using a ConcurrentMap at FlowController level instead of having to perform a depth-first search through all ProcessGroups when calling findProcessor(), findProcessGroup(), findXYZ() Thank you for submitting a contribution to Apache NiFi. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken: ### For all changes: - [ ] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? - [ ] Does your PR title start with NIFI- where is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character. - [ ] Has your PR been rebased against the latest commit within the target branch (typically master)? - [ ] Is your initial contribution a single, squashed commit? ### For code changes: - [ ] Have you ensured that the full suite of tests is executed via mvn -Pcontrib-check clean install at the root nifi folder? - [ ] Have you written or updated unit tests to verify your changes? - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the LICENSE file, including the main LICENSE file under nifi-assembly? 
- [ ] If applicable, have you updated the NOTICE file, including the main NOTICE file found under nifi-assembly? - [ ] If adding new Properties, have you added .displayName in addition to .name (programmatic access) for each of the new properties? ### For documentation related changes: - [ ] Have you ensured that format looks appropriate for the output in which it is rendered? ### Note: Please ensure that once the PR is submitted, you check travis-ci for build issues and submit an update to your PR as soon as possible. You can merge this pull request into a Git repository by running: $ git pull https://github.com/markap14/nifi NIFI-4151 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi/pull/1979.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #1979 commit 6d2adbd30719cf43d93c399772c282ca4df5cd4b Author: Mark Payne Date: 2017-07-05T15:24:01Z NIFI-4151: Updated UpdateAttribute to only create JAXB Context once; Minor performance tweaks to standard validators and StatusMerge.prettyPrint; updated AbstractConfiguredComponent to not create a new ValidationContext each time that validate is called but only when needed; updated FlowController, StandardControllerServiceProvider, and StandardProcessGroup so that component lookups can be performed using a ConcurrentMap at FlowController level instead of having to perform a depth-first search through all ProcessGroups when calling findProcessor(), findProcessGroup(), findXYZ() > Slow response times when requesting Process Group Status > > > Key: NIFI-4151 > URL: https://issues.apache.org/jira/browse/NIFI-4151 > Project: Apache NiFi > Issue Type: Bug >Reporter: Mark Payne >Assignee: Mark Payne > > I have a flow with > 1,000 Process Groups and 2500 Processors. A few thousand > connections and input/output ports as well.
When I refresh stats it is taking > 3-4 seconds to pull back the status. And when I go to the Summary table, it's > taking about 8 seconds.
[GitHub] nifi pull request #1979: NIFI-4151: Updated UpdateAttribute to only create J...
GitHub user markap14 opened a pull request: https://github.com/apache/nifi/pull/1979 NIFI-4151: Updated UpdateAttribute to only create JAXB Context once; … …Minor performance tweaks to standard validators and StatusMerge.prettyPrint; updated AbstractConfiguredComponent to not create a new ValidationContext each time that validate is called but only when needed; updated FlowController, StandardControllerServiceProvider, and StandardProcessGroup so that component lookups can be performed using a ConcurrentMap at FlowController level instead of having to perform a depth-first search through all ProcessGroups when calling findProcessor(), findProcessGroup(), findXYZ() Thank you for submitting a contribution to Apache NiFi. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken: ### For all changes: - [ ] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? - [ ] Does your PR title start with NIFI- where is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character. - [ ] Has your PR been rebased against the latest commit within the target branch (typically master)? - [ ] Is your initial contribution a single, squashed commit? ### For code changes: - [ ] Have you ensured that the full suite of tests is executed via mvn -Pcontrib-check clean install at the root nifi folder? - [ ] Have you written or updated unit tests to verify your changes? - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the LICENSE file, including the main LICENSE file under nifi-assembly? - [ ] If applicable, have you updated the NOTICE file, including the main NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to .name (programmatic access) for each of the new properties? ### For documentation related changes: - [ ] Have you ensured that format looks appropriate for the output in which it is rendered? ### Note: Please ensure that once the PR is submitted, you check travis-ci for build issues and submit an update to your PR as soon as possible. You can merge this pull request into a Git repository by running: $ git pull https://github.com/markap14/nifi NIFI-4151 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi/pull/1979.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #1979 commit 6d2adbd30719cf43d93c399772c282ca4df5cd4b Author: Mark Payne Date: 2017-07-05T15:24:01Z NIFI-4151: Updated UpdateAttribute to only create JAXB Context once; Minor performance tweaks to standard validators and StatusMerge.prettyPrint; updated AbstractConfiguredComponent to not create a new ValidationContext each time that validate is called but only when needed; updated FlowController, StandardControllerServiceProvider, and StandardProcessGroup so that component lookups can be performed using a ConcurrentMap at FlowController level instead of having to perform a depth-first search through all ProcessGroups when calling findProcessor(), findProcessGroup(), findXYZ()
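The lookup change the PR describes — replacing a depth-first walk through nested ProcessGroups with a flat ConcurrentMap kept at the FlowController level — can be sketched in miniature. All class and method names below are invented for illustration; this is not NiFi's actual FlowController code.

```java
import java.util.*;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative contrast between the old per-call depth-first search and the
// new flat id-to-component map maintained as components are registered.
public class ComponentRegistrySketch {
    static class Group {
        final Map<String, String> processors = new HashMap<>(); // id -> name
        final List<Group> children = new ArrayList<>();
    }

    // Old approach: every findProcessor() call walks the whole group tree.
    static String findByDfs(Group g, String id) {
        String hit = g.processors.get(id);
        if (hit != null) return hit;
        for (Group child : g.children) {
            String found = findByDfs(child, id);
            if (found != null) return found;
        }
        return null;
    }

    // New approach: one O(1) concurrent map, updated at registration time.
    static final ConcurrentHashMap<String, String> registry = new ConcurrentHashMap<>();

    static void register(Group g, String id, String name) {
        g.processors.put(id, name);
        registry.put(id, name);
    }

    public static void main(String[] args) {
        Group root = new Group();
        Group nested = new Group();
        root.children.add(nested);
        register(nested, "p-1", "UpdateAttribute");
        // Both paths find the processor; the map avoids walking the tree.
        System.out.println(findByDfs(root, "p-1")); // prints "UpdateAttribute"
        System.out.println(registry.get("p-1"));    // prints "UpdateAttribute"
    }
}
```

With thousands of groups, the per-lookup cost drops from O(groups) to O(1), which is consistent with the slow stats refresh reported in NIFI-4151.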
[jira] [Created] (NIFI-4151) Slow response times when requesting Process Group Status
Mark Payne created NIFI-4151: Summary: Slow response times when requesting Process Group Status Key: NIFI-4151 URL: https://issues.apache.org/jira/browse/NIFI-4151 Project: Apache NiFi Issue Type: Bug Reporter: Mark Payne Assignee: Mark Payne I have a flow with > 1,000 Process Groups and 2500 Processors. A few thousand connections and input/output ports as well. When I refresh stats it is taking 3-4 seconds to pull back the status. And when I go to the Summary table, it's taking about 8 seconds.
[jira] [Commented] (NIFI-3193) Update ConsumeAMQP and PublishAMQP to retrieve username from certificate common name
[ https://issues.apache.org/jira/browse/NIFI-3193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16074922#comment-16074922 ] ASF GitHub Bot commented on NIFI-3193: -- Github user m-hogue commented on the issue: https://github.com/apache/nifi/pull/1971 Fixed in cd66c877b706b1f5b3418301e7e3aed651036bce. Should be gtg. Let me know if there are any other changes needed. > Update ConsumeAMQP and PublishAMQP to retrieve username from certificate > common name > > > Key: NIFI-3193 > URL: https://issues.apache.org/jira/browse/NIFI-3193 > Project: Apache NiFi > Issue Type: New Feature > Components: Extensions >Affects Versions: 1.0.0, 1.1.0, 0.7.1 >Reporter: Brian >Assignee: Michael Hogue > > At the moment the NiFi AMQP processors can establish an SSL connection to > RabbitMQ but still use a user-defined username and password to authenticate. > When using certificates RabbitMQ allows you to use the COMMON_NAME from the > certificate to authenticate instead of providing a username and password. > Unfortunately the NiFi processors do not support this, so I would like to > request an update to the processors to enable this functionality.
[GitHub] nifi issue #1971: NIFI-3193: added ability to authenticate with AMQP using c...
Github user m-hogue commented on the issue: https://github.com/apache/nifi/pull/1971 Fixed in cd66c877b706b1f5b3418301e7e3aed651036bce. Should be gtg. Let me know if there are any other changes needed.
[jira] [Commented] (NIFI-3193) Update ConsumeAMQP and PublishAMQP to retrieve username from certificate common name
[ https://issues.apache.org/jira/browse/NIFI-3193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16074919#comment-16074919 ] ASF GitHub Bot commented on NIFI-3193: -- Github user trkurc commented on the issue: https://github.com/apache/nifi/pull/1971 @m-hogue if you can fix that description in the html, I can merge in. > Update ConsumeAMQP and PublishAMQP to retrieve username from certificate > common name > > > Key: NIFI-3193 > URL: https://issues.apache.org/jira/browse/NIFI-3193 > Project: Apache NiFi > Issue Type: New Feature > Components: Extensions >Affects Versions: 1.0.0, 1.1.0, 0.7.1 >Reporter: Brian >Assignee: Michael Hogue > > At the moment the NiFi AMQP processors can establish an SSL connection to > RabbitMQ but still use a user-defined username and password to authenticate. > When using certificates RabbitMQ allows you to use the COMMON_NAME from the > certificate to authenticate instead of providing a username and password. > Unfortunately the NiFi processors do not support this, so I would like to > request an update to the processors to enable this functionality.
[GitHub] nifi issue #1971: NIFI-3193: added ability to authenticate with AMQP using c...
Github user trkurc commented on the issue: https://github.com/apache/nifi/pull/1971 @m-hogue if you can fix that description in the html, I can merge in.
[jira] [Commented] (NIFI-3193) Update ConsumeAMQP and PublishAMQP to retrieve username from certificate common name
[ https://issues.apache.org/jira/browse/NIFI-3193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16074902#comment-16074902 ] ASF GitHub Bot commented on NIFI-3193: -- Github user trkurc commented on a diff in the pull request: https://github.com/apache/nifi/pull/1971#discussion_r125669424 --- Diff: nifi-nar-bundles/nifi-amqp-bundle/nifi-amqp-processors/src/main/resources/docs/org.apache.nifi.amqp.processors.ConsumeAMQP/additionalDetails.html --- @@ -63,6 +63,9 @@ Password - [REQUIRED] password to use with user name to connect to AMQP broker. Usually provided by the administrator. Defaults to 'guest'. +Cert Authentication - [OPTIONAL] whether or not to use the SSL certificate for authentication rather than user name and password. --- End diff -- Would it be possible to have this match the displayName text? > Update ConsumeAMQP and PublishAMQP to retrieve username from certificate > common name > > > Key: NIFI-3193 > URL: https://issues.apache.org/jira/browse/NIFI-3193 > Project: Apache NiFi > Issue Type: New Feature > Components: Extensions >Affects Versions: 1.0.0, 1.1.0, 0.7.1 >Reporter: Brian >Assignee: Michael Hogue > > At the moment the NiFi AMQP processors can establish an SSL connection to > RabbitMQ but still use a user-defined username and password to authenticate. > When using certificates RabbitMQ allows you to use the COMMON_NAME from the > certificate to authenticate instead of providing a username and password. > Unfortunately the NiFi processors do not support this, so I would like to > request an update to the processors to enable this functionality.
[jira] [Commented] (NIFI-3193) Update ConsumeAMQP and PublishAMQP to retrieve username from certificate common name
[ https://issues.apache.org/jira/browse/NIFI-3193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16074880#comment-16074880 ] ASF GitHub Bot commented on NIFI-3193: -- Github user m-hogue commented on the issue: https://github.com/apache/nifi/pull/1971 @pvillard31 : Thanks for the review! After wrestling with IntelliJ formatting rules following the import of the NiFi style config, I think it's good. Please let me know if you'd like any more changes made.
[GitHub] nifi issue #1977: NIFI-515 - DeleteSQS and PutSQS should offer batch process...
Github user jzonthemtn commented on the issue: https://github.com/apache/nifi/pull/1977 @pvillard31 That sounds great!
[jira] [Commented] (NIFI-515) DeleteSQS and PutSQS should offer batch processing
[ https://issues.apache.org/jira/browse/NIFI-515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16074832#comment-16074832 ] ASF GitHub Bot commented on NIFI-515: - Github user pvillard31 commented on the issue: https://github.com/apache/nifi/pull/1977 @jzonthemtn - I just pushed a commit to address your comment. Basically, I parse the request results to split failures and successes. I also added a unit test for that case. > DeleteSQS and PutSQS should offer batch processing > -- > > Key: NIFI-515 > URL: https://issues.apache.org/jira/browse/NIFI-515 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Reporter: Mark Payne >Assignee: Pierre Villard >Priority: Minor >
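The failure/success splitting Pierre describes can be sketched in a few lines. Note this is an illustrative, self-contained sketch: the actual PR works against the AWS SDK's `DeleteMessageBatchResult`, whose `getSuccessful()` and `getFailed()` lists already carry the split, and `BatchEntry`/`BatchRouter` below are hypothetical stand-in types, not NiFi or SDK classes.

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Illustrative stand-in for one entry of an SQS batch response. The real
// AWS SDK models successes and failures as separate entry types.
class BatchEntry {
    final String messageId;
    final boolean failed;

    BatchEntry(String messageId, boolean failed) {
        this.messageId = messageId;
        this.failed = failed;
    }
}

class BatchRouter {
    // Partition a batch response so failed entries can be routed to the
    // failure relationship and the rest to success.
    static Map<Boolean, List<BatchEntry>> splitByFailure(List<BatchEntry> entries) {
        return entries.stream().collect(Collectors.partitioningBy(e -> e.failed));
    }
}
```

In the processor, a split like this would decide which FlowFiles are transferred to the success versus failure relationship, which is the behavior the added unit test would exercise.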
[jira] [Commented] (NIFI-4135) RangerNiFiAuthorizer should support storing audit info to HDFS
[ https://issues.apache.org/jira/browse/NIFI-4135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16074799#comment-16074799 ] ASF GitHub Bot commented on NIFI-4135: -- Github user YolandaMDavis commented on the issue: https://github.com/apache/nifi/pull/1956 @bbende sorry for the delay (was away on vacation). Not a picky request at all; the weirdness I ran into was that the xsd could not resolve the property value if classpath was just before it. It is weird, but I think it has something to do with property being unbounded and classpath being not required. I'll give it a try to see if I can resolve or at least recreate it. > RangerNiFiAuthorizer should support storing audit info to HDFS > -- > > Key: NIFI-4135 > URL: https://issues.apache.org/jira/browse/NIFI-4135 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Affects Versions: 1.3.0 >Reporter: Yolanda M. Davis >Assignee: Yolanda M. Davis > > When using Ranger to support authorization an option to log auditing > information to HDFS can be supported. The RangerNiFiAuthorizer should be > prepared to communicate with a Hadoop cluster in order to support this > feature. In its current implementation the authorizer does not have the > hadoop-client jars available as a dependency nor does it support the ability > to refer to the required *.site.xml files in order to communicate without > using the default configuration. Both of these changes are needed in order > to send audit info to HDFS.
[jira] [Commented] (NIFI-4024) Create EvaluateRecordPath processor
[ https://issues.apache.org/jira/browse/NIFI-4024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16074769#comment-16074769 ] ASF GitHub Bot commented on NIFI-4024: -- Github user bbende commented on a diff in the pull request: https://github.com/apache/nifi/pull/1961#discussion_r125639967 --- Diff: nifi-nar-bundles/nifi-hbase-bundle/nifi-hbase-processors/pom.xml --- @@ -82,5 +90,15 @@ test + +org.apache.nifi +nifi-mock-record-utils +test + + +org.apache.hbase +hbase-common --- End diff -- We should keep all the HBase client dependencies behind HBaseClientService so that the version of the client doesn't leak into the processors NAR. It looks like the main reason for adding this was to use the `Bytes` class from HBase common, which we ran into once before, and we ended up exposing some `toBytes` methods on `HBaseClientService`. We should add the additional toBytes methods that we need, or we could possibly add a method like: ` byte[] asBytes(String field, RecordFieldType fieldType, Record record)` > Create EvaluateRecordPath processor > --- > > Key: NIFI-4024 > URL: https://issues.apache.org/jira/browse/NIFI-4024 > Project: Apache NiFi > Issue Type: New Feature >Reporter: Steve Champagne >Priority: Minor > > With the new RecordPath DSL, it would be nice if there was a processor that > could pull fields into attributes of the flowfile based on a RecordPath. This > would be similar to the EvaluateJsonPath processor that currently exists, > except it could be used to pull fields from arbitrary record formats. My > current use case for it would be pulling fields out of Avro records while > skipping the steps of having to convert Avro to JSON, evaluate JsonPath, and > then converting back to Avro. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
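The `asBytes` method suggested above could take roughly the following shape. This is a sketch under stated assumptions: `FieldType` and `SimpleRecord` are hypothetical stand-ins for NiFi's `RecordFieldType` and `Record`, and the conversions mimic what `org.apache.hadoop.hbase.util.Bytes.toBytes` produces (big-endian numerics, `0xFF`/`0x00` booleans), so the actual HBase dependency could stay behind `HBaseClientService`.

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.Map;

// Hypothetical stand-ins for NiFi's RecordFieldType and Record, used only
// to illustrate the shape of the proposed conversion method.
enum FieldType { STRING, INT, LONG, DOUBLE, BOOLEAN }

class SimpleRecord {
    private final Map<String, Object> values;

    SimpleRecord(Map<String, Object> values) {
        this.values = values;
    }

    Object getValue(String field) {
        return values.get(field);
    }
}

class ByteConversion {
    // Sketch of the proposed HBaseClientService#asBytes: convert one record
    // field to the byte[] form HBase expects, keyed off the declared type.
    // Numeric encodings are big-endian, matching HBase's Bytes.toBytes.
    static byte[] asBytes(String field, FieldType type, SimpleRecord record) {
        Object value = record.getValue(field);
        switch (type) {
            case INT:     return ByteBuffer.allocate(4).putInt(((Number) value).intValue()).array();
            case LONG:    return ByteBuffer.allocate(8).putLong(((Number) value).longValue()).array();
            case DOUBLE:  return ByteBuffer.allocate(8).putDouble(((Number) value).doubleValue()).array();
            case BOOLEAN: return new byte[] { (byte) (((Boolean) value) ? -1 : 0) };
            default:      return String.valueOf(value).getBytes(StandardCharsets.UTF_8);
        }
    }
}
```

With a method like this on the service interface, PutHBaseRecord could hand over the field name, declared type, and record, and never see an HBase class.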
[jira] [Commented] (NIFI-4024) Create EvaluateRecordPath processor
[ https://issues.apache.org/jira/browse/NIFI-4024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16074771#comment-16074771 ] ASF GitHub Bot commented on NIFI-4024: -- Github user bbende commented on a diff in the pull request: https://github.com/apache/nifi/pull/1961#discussion_r125645539 --- Diff: nifi-nar-bundles/nifi-hbase-bundle/nifi-hbase-processors/src/main/java/org/apache/nifi/hbase/PutHBaseRecord.java --- @@ -0,0 +1,316 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.apache.nifi.hbase; + +import org.apache.hadoop.hbase.util.Bytes; +import org.apache.nifi.annotation.behavior.EventDriven; +import org.apache.nifi.annotation.behavior.InputRequirement; +import org.apache.nifi.annotation.behavior.SupportsBatching; +import org.apache.nifi.annotation.documentation.CapabilityDescription; +import org.apache.nifi.annotation.documentation.Tags; +import org.apache.nifi.components.AllowableValue; +import org.apache.nifi.components.PropertyDescriptor; +import org.apache.nifi.flowfile.FlowFile; +import org.apache.nifi.hbase.put.PutColumn; +import org.apache.nifi.hbase.put.PutFlowFile; +import org.apache.nifi.processor.ProcessContext; +import org.apache.nifi.processor.ProcessSession; +import org.apache.nifi.processor.Relationship; +import org.apache.nifi.processor.exception.ProcessException; +import org.apache.nifi.processor.util.StandardValidators; +import org.apache.nifi.serialization.RecordReader; +import org.apache.nifi.serialization.RecordReaderFactory; +import org.apache.nifi.serialization.record.Record; +import org.apache.nifi.serialization.record.RecordFieldType; +import org.apache.nifi.serialization.record.RecordSchema; + +import java.util.ArrayList; +import java.util.Arrays; +import java.util.HashMap; +import java.util.HashSet; +import java.util.List; +import java.util.Map; +import java.util.Set; +import java.util.concurrent.TimeUnit; + +@EventDriven +@SupportsBatching +@InputRequirement(InputRequirement.Requirement.INPUT_REQUIRED) +@Tags({"hadoop", "hbase", "put", "record"}) +@CapabilityDescription("Adds rows to HBase based on the contents of a flowfile using a configured record reader.") +public class PutHBaseRecord extends AbstractPutHBase { + +protected static final PropertyDescriptor ROW_FIELD_NAME = new PropertyDescriptor.Builder() +.name("Row Identifier Field Name") +.description("Specifies the name of a JSON element whose value should be used as the row id for the given JSON document.") 
+.expressionLanguageSupported(true) +.addValidator(StandardValidators.NON_EMPTY_VALIDATOR) +.build(); + +protected static final String FAIL_VALUE = "Fail"; +protected static final String WARN_VALUE = "Warn"; +protected static final String IGNORE_VALUE = "Ignore"; +protected static final String TEXT_VALUE = "Text"; + +protected static final AllowableValue COMPLEX_FIELD_FAIL = new AllowableValue(FAIL_VALUE, FAIL_VALUE, "Route entire FlowFile to failure if any elements contain complex values."); +protected static final AllowableValue COMPLEX_FIELD_WARN = new AllowableValue(WARN_VALUE, WARN_VALUE, "Provide a warning and do not include field in row sent to HBase."); +protected static final AllowableValue COMPLEX_FIELD_IGNORE = new AllowableValue(IGNORE_VALUE, IGNORE_VALUE, "Silently ignore and do not include in row sent to HBase."); +protected static final AllowableValue COMPLEX_FIELD_TEXT = new AllowableValue(TEXT_VALUE, TEXT_VALUE, "Use the string representation of the complex field as the value of the given column."); + +static final PropertyDescriptor RECORD_READER_FACTORY = new PropertyDescriptor.Builder() +.name("record-reader") +.displayName("Record Reader") +.description("Specifies the Controller Service to
[jira] [Commented] (NIFI-4024) Create EvaluateRecordPath processor
[ https://issues.apache.org/jira/browse/NIFI-4024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16074768#comment-16074768 ] ASF GitHub Bot commented on NIFI-4024: -- Github user bbende commented on a diff in the pull request: https://github.com/apache/nifi/pull/1961#discussion_r125640198 --- Diff: nifi-nar-bundles/nifi-hbase-bundle/nifi-hbase-processors/src/main/java/org/apache/nifi/hbase/PutHBaseRecord.java --- @@ -0,0 +1,316 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.apache.nifi.hbase; + +import org.apache.hadoop.hbase.util.Bytes; +import org.apache.nifi.annotation.behavior.EventDriven; +import org.apache.nifi.annotation.behavior.InputRequirement; +import org.apache.nifi.annotation.behavior.SupportsBatching; +import org.apache.nifi.annotation.documentation.CapabilityDescription; +import org.apache.nifi.annotation.documentation.Tags; +import org.apache.nifi.components.AllowableValue; +import org.apache.nifi.components.PropertyDescriptor; +import org.apache.nifi.flowfile.FlowFile; +import org.apache.nifi.hbase.put.PutColumn; +import org.apache.nifi.hbase.put.PutFlowFile; +import org.apache.nifi.processor.ProcessContext; +import org.apache.nifi.processor.ProcessSession; +import org.apache.nifi.processor.Relationship; +import org.apache.nifi.processor.exception.ProcessException; +import org.apache.nifi.processor.util.StandardValidators; +import org.apache.nifi.serialization.RecordReader; +import org.apache.nifi.serialization.RecordReaderFactory; +import org.apache.nifi.serialization.record.Record; +import org.apache.nifi.serialization.record.RecordFieldType; +import org.apache.nifi.serialization.record.RecordSchema; + +import java.util.ArrayList; +import java.util.Arrays; +import java.util.HashMap; +import java.util.HashSet; +import java.util.List; +import java.util.Map; +import java.util.Set; +import java.util.concurrent.TimeUnit; + +@EventDriven +@SupportsBatching +@InputRequirement(InputRequirement.Requirement.INPUT_REQUIRED) +@Tags({"hadoop", "hbase", "put", "record"}) +@CapabilityDescription("Adds rows to HBase based on the contents of a flowfile using a configured record reader.") +public class PutHBaseRecord extends AbstractPutHBase { + +protected static final PropertyDescriptor ROW_FIELD_NAME = new PropertyDescriptor.Builder() +.name("Row Identifier Field Name") +.description("Specifies the name of a JSON element whose value should be used as the row id for the given JSON document.") --- End diff -- We 
should go through all the property descriptors, allowable values, etc. and make sure references to "JSON" are appropriately replaced with "Record".
[jira] [Commented] (NIFI-4024) Create EvaluateRecordPath processor
[ https://issues.apache.org/jira/browse/NIFI-4024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16074770#comment-16074770 ] ASF GitHub Bot commented on NIFI-4024: -- Github user bbende commented on a diff in the pull request: https://github.com/apache/nifi/pull/1961#discussion_r125645738 --- Diff: nifi-nar-bundles/nifi-hbase-bundle/nifi-hbase-processors/src/main/java/org/apache/nifi/hbase/PutHBaseRecord.java --- @@ -0,0 +1,316 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.apache.nifi.hbase; + +import org.apache.hadoop.hbase.util.Bytes; +import org.apache.nifi.annotation.behavior.EventDriven; +import org.apache.nifi.annotation.behavior.InputRequirement; +import org.apache.nifi.annotation.behavior.SupportsBatching; +import org.apache.nifi.annotation.documentation.CapabilityDescription; +import org.apache.nifi.annotation.documentation.Tags; +import org.apache.nifi.components.AllowableValue; +import org.apache.nifi.components.PropertyDescriptor; +import org.apache.nifi.flowfile.FlowFile; +import org.apache.nifi.hbase.put.PutColumn; +import org.apache.nifi.hbase.put.PutFlowFile; +import org.apache.nifi.processor.ProcessContext; +import org.apache.nifi.processor.ProcessSession; +import org.apache.nifi.processor.Relationship; +import org.apache.nifi.processor.exception.ProcessException; +import org.apache.nifi.processor.util.StandardValidators; +import org.apache.nifi.serialization.RecordReader; +import org.apache.nifi.serialization.RecordReaderFactory; +import org.apache.nifi.serialization.record.Record; +import org.apache.nifi.serialization.record.RecordFieldType; +import org.apache.nifi.serialization.record.RecordSchema; + +import java.util.ArrayList; +import java.util.Arrays; +import java.util.HashMap; +import java.util.HashSet; +import java.util.List; +import java.util.Map; +import java.util.Set; +import java.util.concurrent.TimeUnit; + +@EventDriven +@SupportsBatching +@InputRequirement(InputRequirement.Requirement.INPUT_REQUIRED) +@Tags({"hadoop", "hbase", "put", "record"}) +@CapabilityDescription("Adds rows to HBase based on the contents of a flowfile using a configured record reader.") +public class PutHBaseRecord extends AbstractPutHBase { + +protected static final PropertyDescriptor ROW_FIELD_NAME = new PropertyDescriptor.Builder() +.name("Row Identifier Field Name") +.description("Specifies the name of a JSON element whose value should be used as the row id for the given JSON document.") 
+.expressionLanguageSupported(true) +.addValidator(StandardValidators.NON_EMPTY_VALIDATOR) +.build(); + +protected static final String FAIL_VALUE = "Fail"; +protected static final String WARN_VALUE = "Warn"; +protected static final String IGNORE_VALUE = "Ignore"; +protected static final String TEXT_VALUE = "Text"; + +protected static final AllowableValue COMPLEX_FIELD_FAIL = new AllowableValue(FAIL_VALUE, FAIL_VALUE, "Route entire FlowFile to failure if any elements contain complex values."); +protected static final AllowableValue COMPLEX_FIELD_WARN = new AllowableValue(WARN_VALUE, WARN_VALUE, "Provide a warning and do not include field in row sent to HBase."); +protected static final AllowableValue COMPLEX_FIELD_IGNORE = new AllowableValue(IGNORE_VALUE, IGNORE_VALUE, "Silently ignore and do not include in row sent to HBase."); +protected static final AllowableValue COMPLEX_FIELD_TEXT = new AllowableValue(TEXT_VALUE, TEXT_VALUE, "Use the string representation of the complex field as the value of the given column."); + +static final PropertyDescriptor RECORD_READER_FACTORY = new PropertyDescriptor.Builder() +.name("record-reader") +.displayName("Record Reader") +.description("Specifies the Controller Service to
+ */ +package org.apache.nifi.hbase; + +import org.apache.hadoop.hbase.util.Bytes; +import org.apache.nifi.annotation.behavior.EventDriven; +import org.apache.nifi.annotation.behavior.InputRequirement; +import org.apache.nifi.annotation.behavior.SupportsBatching; +import org.apache.nifi.annotation.documentation.CapabilityDescription; +import org.apache.nifi.annotation.documentation.Tags; +import org.apache.nifi.components.AllowableValue; +import org.apache.nifi.components.PropertyDescriptor; +import org.apache.nifi.flowfile.FlowFile; +import org.apache.nifi.hbase.put.PutColumn; +import org.apache.nifi.hbase.put.PutFlowFile; +import org.apache.nifi.processor.ProcessContext; +import org.apache.nifi.processor.ProcessSession; +import org.apache.nifi.processor.Relationship; +import org.apache.nifi.processor.exception.ProcessException; +import org.apache.nifi.processor.util.StandardValidators; +import org.apache.nifi.serialization.RecordReader; +import org.apache.nifi.serialization.RecordReaderFactory; +import org.apache.nifi.serialization.record.Record; +import org.apache.nifi.serialization.record.RecordFieldType; +import org.apache.nifi.serialization.record.RecordSchema; + +import java.util.ArrayList; +import java.util.Arrays; +import java.util.HashMap; +import java.util.HashSet; +import java.util.List; +import java.util.Map; +import java.util.Set; +import java.util.concurrent.TimeUnit; + +@EventDriven +@SupportsBatching +@InputRequirement(InputRequirement.Requirement.INPUT_REQUIRED) +@Tags({"hadoop", "hbase", "put", "record"}) +@CapabilityDescription("Adds rows to HBase based on the contents of a flowfile using a configured record reader.") +public class PutHBaseRecord extends AbstractPutHBase { + +protected static final PropertyDescriptor ROW_FIELD_NAME = new PropertyDescriptor.Builder() +.name("Row Identifier Field Name") +.description("Specifies the name of a JSON element whose value should be used as the row id for the given JSON document.") --- End diff -- We 
should go through all the property descriptors, allowable values, etc. and make sure references to "JSON" are appropriately replaced with "Record". --- If your project is set up for it, you can reply to this email and have your reply appear on GitHub as well. If your project does not have this feature enabled and wishes so, or if the feature is enabled but not working, please contact infrastructure at infrastruct...@apache.org or file a JIRA ticket with INFRA. ---
[jira] [Updated] (NIFI-4127) Create a CompositeUserGroupProvider
[ https://issues.apache.org/jira/browse/NIFI-4127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt Gilman updated NIFI-4127: -- Status: Patch Available (was: Open) > Create a CompositeUserGroupProvider > --- > > Key: NIFI-4127 > URL: https://issues.apache.org/jira/browse/NIFI-4127 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Reporter: Matt Gilman >Assignee: Matt Gilman > > Create a CompositeUserGroupProvider to support loading users/groups from > multiple sources. This composite implementation should support > {noformat} > 0-1 ConfigurableUserGroupProvider > 0-n UserGroupProviders > {noformat} > Only a single ConfigurableUserGroupProvider can be supplied to keep these > sources/implementation details hidden from the end users. The > CompositeUserGroupProvider must be configured with at least 1 underlying > provider. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (NIFI-4127) Create a CompositeUserGroupProvider
[ https://issues.apache.org/jira/browse/NIFI-4127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16074765#comment-16074765 ] ASF GitHub Bot commented on NIFI-4127: -- GitHub user mcgilman opened a pull request: https://github.com/apache/nifi/pull/1978 NIFI-4127: Composite User Group Providers NIFI-4127: - Introducing composite ConfigurableUserGroupProvider and UserGroupProvider. - Adding appropriate unit tests. - Updating object model to support per resource (user/group/policy) configuration. - Updating UI to support per resource (user/group/policy) configuration. - Adding necessary documentation. You can merge this pull request into a Git repository by running: $ git pull https://github.com/mcgilman/nifi NIFI-4127 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi/pull/1978.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #1978 commit 0e679007e59bfea050f73b046f52b2a772a281ae Author: Matt GilmanDate: 2017-06-28T20:40:41Z NIFI-4127: - Introducing composite ConfigurableUserGroupProvider and UserGroupProvider. - Adding appropriate unit tests. - Updating object model to support per resource (user/group/policy) configuration. - Updating UI to support per resource (user/group/policy) configuration. - Adding necessary documentation. > Create a CompositeUserGroupProvider > --- > > Key: NIFI-4127 > URL: https://issues.apache.org/jira/browse/NIFI-4127 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Reporter: Matt Gilman >Assignee: Matt Gilman > > Create a CompositeUserGroupProvider to support loading users/groups from > multiple sources. 
This composite implementation should support > {noformat} > 0-1 ConfigurableUserGroupProvider > 0-n UserGroupProviders > {noformat} > Only a single ConfigurableUserGroupProvider can be supplied to keep these > sources/implementation details hidden from the end users. The > CompositeUserGroupProvider must be configured with at least 1 underlying > provider.
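The "0-1 configurable, 0-n read-only" composition described in the ticket can be sketched as follows. The interfaces below are minimal hypothetical stand-ins, not NiFi's actual `UserGroupProvider`/`ConfigurableUserGroupProvider` API (which carries many more methods); the sketch only illustrates the delegation order and the "at least 1 underlying provider" constraint.

```java
import java.util.List;

// Hypothetical minimal stand-ins for the NiFi provider interfaces.
interface UserGroupProvider {
    String getUser(String identity); // returns null when the identity is unknown
}

interface ConfigurableUserGroupProvider extends UserGroupProvider {
    void addUser(String identity);
}

// Composite that consults the single configurable provider first (if any),
// then each read-only provider in order, returning the first match.
class CompositeUserGroupProvider implements UserGroupProvider {
    private final ConfigurableUserGroupProvider configurable; // 0-1, may be null
    private final List<UserGroupProvider> providers;          // 0-n

    CompositeUserGroupProvider(ConfigurableUserGroupProvider configurable,
                               List<UserGroupProvider> providers) {
        if (configurable == null && providers.isEmpty()) {
            throw new IllegalArgumentException("At least 1 underlying provider is required");
        }
        this.configurable = configurable;
        this.providers = providers;
    }

    @Override
    public String getUser(String identity) {
        if (configurable != null) {
            String user = configurable.getUser(identity);
            if (user != null) {
                return user;
            }
        }
        for (UserGroupProvider provider : providers) {
            String user = provider.getUser(identity);
            if (user != null) {
                return user;
            }
        }
        return null;
    }
}
```

Keeping at most one configurable provider means callers never need to know which underlying source a write should go to, matching the ticket's goal of hiding the source/implementation details.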
[jira] [Commented] (NIFI-515) DeleteSQS and PutSQS should offer batch processing
[ https://issues.apache.org/jira/browse/NIFI-515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16074700#comment-16074700 ] ASF GitHub Bot commented on NIFI-515: - Github user pvillard31 commented on a diff in the pull request: https://github.com/apache/nifi/pull/1977#discussion_r125633660 --- Diff: nifi-nar-bundles/nifi-aws-bundle/nifi-aws-processors/src/main/java/org/apache/nifi/processors/aws/sqs/PutSQS.java --- @@ -108,43 +108,57 @@ public void onTrigger(final ProcessContext context, final ProcessSession session final String queueUrl = context.getProperty(QUEUE_URL).evaluateAttributeExpressions(flowFile).getValue(); request.setQueueUrl(queueUrl); +final int batchSize = context.getProperty(BATCH_SIZE).asInteger(); +final List flowFiles = session.get(new UrlFlowFileFilter(batchSize, queueUrl, context)); +flowFiles.add(flowFile); + final Set entries = new HashSet<>(); -final SendMessageBatchRequestEntry entry = new SendMessageBatchRequestEntry(); -entry.setId(flowFile.getAttribute("uuid")); -final ByteArrayOutputStream baos = new ByteArrayOutputStream(); -session.exportTo(flowFile, baos); -final String flowFileContent = baos.toString(); -entry.setMessageBody(flowFileContent); +for(FlowFile flowFileItem : flowFiles) { -final MapmessageAttributes = new HashMap<>(); +final SendMessageBatchRequestEntry entry = new SendMessageBatchRequestEntry(); +entry.setId(flowFileItem.getAttribute("uuid")); +final ByteArrayOutputStream baos = new ByteArrayOutputStream(); +session.exportTo(flowFileItem, baos); +final String flowFileContent = baos.toString(); +entry.setMessageBody(flowFileContent); -for (final PropertyDescriptor descriptor : userDefinedProperties) { -final MessageAttributeValue mav = new MessageAttributeValue(); -mav.setDataType("String"); - mav.setStringValue(context.getProperty(descriptor).evaluateAttributeExpressions(flowFile).getValue()); -messageAttributes.put(descriptor.getName(), mav); -} +final Map messageAttributes = new HashMap<>(); 
-entry.setMessageAttributes(messageAttributes); - entry.setDelaySeconds(context.getProperty(DELAY).asTimePeriod(TimeUnit.SECONDS).intValue()); -entries.add(entry); +for (final PropertyDescriptor descriptor : userDefinedProperties) { +final MessageAttributeValue mav = new MessageAttributeValue(); +mav.setDataType("String"); + mav.setStringValue(context.getProperty(descriptor).evaluateAttributeExpressions(flowFileItem).getValue()); +messageAttributes.put(descriptor.getName(), mav); +} + +entry.setMessageAttributes(messageAttributes); + entry.setDelaySeconds(context.getProperty(DELAY).asTimePeriod(TimeUnit.SECONDS).intValue()); +entries.add(entry); + +} request.setEntries(entries); try { client.sendMessageBatch(request); } catch (final Exception e) { -getLogger().error("Failed to send messages to Amazon SQS due to {}; routing to failure", new Object[]{e}); -flowFile = session.penalize(flowFile); -session.transfer(flowFile, REL_FAILURE); +getLogger().error("Failed to send {} messages to Amazon SQS due to {}; routing to failure", new Object[]{flowFiles.size(), e}); --- End diff -- You're right, I forgot that we have individual responses for each entry part of the request. Will update the PR. Thanks @jzonthemtn > DeleteSQS and PutSQS should offer batch processing > -- > > Key: NIFI-515 > URL: https://issues.apache.org/jira/browse/NIFI-515 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Reporter: Mark Payne >Assignee: Pierre Villard >Priority: Minor > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (NIFI-515) DeleteSQS and PutSQS should offer batch processing
[ https://issues.apache.org/jira/browse/NIFI-515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16074695#comment-16074695 ] ASF GitHub Bot commented on NIFI-515: - Github user jzonthemtn commented on a diff in the pull request: https://github.com/apache/nifi/pull/1977#discussion_r125632070 --- Diff: nifi-nar-bundles/nifi-aws-bundle/nifi-aws-processors/src/main/java/org/apache/nifi/processors/aws/sqs/PutSQS.java --- @@ -108,43 +108,57 @@ public void onTrigger(final ProcessContext context, final ProcessSession session final String queueUrl = context.getProperty(QUEUE_URL).evaluateAttributeExpressions(flowFile).getValue(); request.setQueueUrl(queueUrl); +final int batchSize = context.getProperty(BATCH_SIZE).asInteger(); +final List flowFiles = session.get(new UrlFlowFileFilter(batchSize, queueUrl, context)); +flowFiles.add(flowFile); + final Set entries = new HashSet<>(); -final SendMessageBatchRequestEntry entry = new SendMessageBatchRequestEntry(); -entry.setId(flowFile.getAttribute("uuid")); -final ByteArrayOutputStream baos = new ByteArrayOutputStream(); -session.exportTo(flowFile, baos); -final String flowFileContent = baos.toString(); -entry.setMessageBody(flowFileContent); +for(FlowFile flowFileItem : flowFiles) { -final MapmessageAttributes = new HashMap<>(); +final SendMessageBatchRequestEntry entry = new SendMessageBatchRequestEntry(); +entry.setId(flowFileItem.getAttribute("uuid")); +final ByteArrayOutputStream baos = new ByteArrayOutputStream(); +session.exportTo(flowFileItem, baos); +final String flowFileContent = baos.toString(); +entry.setMessageBody(flowFileContent); -for (final PropertyDescriptor descriptor : userDefinedProperties) { -final MessageAttributeValue mav = new MessageAttributeValue(); -mav.setDataType("String"); - mav.setStringValue(context.getProperty(descriptor).evaluateAttributeExpressions(flowFile).getValue()); -messageAttributes.put(descriptor.getName(), mav); -} +final Map messageAttributes = new HashMap<>(); 
-entry.setMessageAttributes(messageAttributes); - entry.setDelaySeconds(context.getProperty(DELAY).asTimePeriod(TimeUnit.SECONDS).intValue()); -entries.add(entry); +for (final PropertyDescriptor descriptor : userDefinedProperties) { +final MessageAttributeValue mav = new MessageAttributeValue(); +mav.setDataType("String"); + mav.setStringValue(context.getProperty(descriptor).evaluateAttributeExpressions(flowFileItem).getValue()); +messageAttributes.put(descriptor.getName(), mav); +} + +entry.setMessageAttributes(messageAttributes); + entry.setDelaySeconds(context.getProperty(DELAY).asTimePeriod(TimeUnit.SECONDS).intValue()); +entries.add(entry); + +} request.setEntries(entries); try { client.sendMessageBatch(request); } catch (final Exception e) { -getLogger().error("Failed to send messages to Amazon SQS due to {}; routing to failure", new Object[]{e}); -flowFile = session.penalize(flowFile); -session.transfer(flowFile, REL_FAILURE); +getLogger().error("Failed to send {} messages to Amazon SQS due to {}; routing to failure", new Object[]{flowFiles.size(), e}); --- End diff -- There could be a mix of successful/failed messages in the `SendMessageBatchResult`. Does that impact how the session is transferred? > DeleteSQS and PutSQS should offer batch processing > -- > > Key: NIFI-515 > URL: https://issues.apache.org/jira/browse/NIFI-515 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Reporter: Mark Payne >Assignee: Pierre Villard >Priority: Minor > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
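The reviewer's point is that `SendMessageBatchResult` can report per-entry success and failure, and since each batch entry's id is set to the FlowFile's `uuid` attribute, the failed entry ids identify exactly which FlowFiles should be penalized and routed to failure rather than failing the whole batch. A sketch of that mapping, using small stand-in classes that mirror the shape of the AWS SDK v1 types (`SendMessageBatchResult#getFailed`, `BatchResultErrorEntry#getId`) since the SDK itself is not on this sketch's classpath:

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

// Stand-in mirroring com.amazonaws.services.sqs.model.BatchResultErrorEntry.
class BatchResultErrorEntry {
    private final String id;
    BatchResultErrorEntry(String id) { this.id = id; }
    String getId() { return id; }
}

// Stand-in mirroring com.amazonaws.services.sqs.model.SendMessageBatchResult.
class SendMessageBatchResult {
    private final List<BatchResultErrorEntry> failed;
    SendMessageBatchResult(List<BatchResultErrorEntry> failed) { this.failed = failed; }
    List<BatchResultErrorEntry> getFailed() { return failed; }
}

class BatchRouter {
    // Each batch entry id was set from the FlowFile "uuid" attribute, so the
    // ids of the failed entries identify the FlowFiles to route to failure.
    static Set<String> failedUuids(SendMessageBatchResult result) {
        return result.getFailed().stream()
                .map(BatchResultErrorEntry::getId)
                .collect(Collectors.toCollection(HashSet::new));
    }
}
```

In `onTrigger` one would then partition the batch: FlowFiles whose `uuid` is in the failed set get penalized and transferred to `REL_FAILURE`, the rest to `REL_SUCCESS`.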
[jira] [Comment Edited] (NIFI-3320) Listen to SNMP Trap
[ https://issues.apache.org/jira/browse/NIFI-3320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16074501#comment-16074501 ] Lars Francke edited comment on NIFI-3320 at 7/5/17 9:45 AM: I'm looking into this issue. I have never written a "long-running" processor that actively listens for incoming connections though. I know there exist some of those processors and I found the {{AbstractListenEventBatchingProcessor}} and {{AbstractListenEventProcessor}} classes. Is this the way to go? Any other hints you may have for me would be great as well. was (Author: lars_francke): I'm looking into this issue. I have never written a "long-running" processor that actively listens for incoming connections though. I know there exist some of those processors and I found the `AbstractListenEventBatchingProcessor` class. Is this the way to go? Any other hints you may have for me would be great as well. > Listen to SNMP Trap > --- > > Key: NIFI-3320 > URL: https://issues.apache.org/jira/browse/NIFI-3320 > Project: Apache NiFi > Issue Type: New Feature >Affects Versions: 1.1.1 >Reporter: Balakrishnan R > Labels: SNMP, Traps > > As part of NIFI-1537 SNMP Get, Walk and Set were introduced. However SNMP Traps > sent to NIFI cannot be converted to a flowfile currently. This is a fairly > useful feature.
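The question about long-running listener processors boils down to a common pattern: a background receiver thread, started when the processor is scheduled and stopped when it is unscheduled, pushes received events onto a queue, and the framework-triggered `onTrigger()` merely drains that queue. `AbstractListenEventBatchingProcessor` packages this pattern for NiFi. The sketch below illustrates the pattern only; the class and method names are illustrative, not NiFi's or SNMP4J's API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Pattern sketch of a "long-running" listener: receiver thread produces,
// onTrigger-style poll() consumes.
class TrapListener {
    private final BlockingQueue<String> events = new LinkedBlockingQueue<>();
    private volatile boolean running;
    private Thread receiver;

    // Would be annotated @OnScheduled in a real NiFi processor.
    void start() {
        running = true;
        receiver = new Thread(() -> {
            while (running) {
                // A real SNMP trap listener would block on the transport
                // (e.g. a UDP socket) here and decode the incoming PDU.
                String trap = receiveOneTrap();
                if (trap != null) {
                    events.offer(trap);
                }
            }
        });
        receiver.setDaemon(true);
        receiver.start();
    }

    // Would be annotated @OnStopped.
    void stop() {
        running = false;
        try {
            receiver.join(1000);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    // Would be the body of onTrigger(): drain whatever arrived so far.
    List<String> poll() {
        List<String> batch = new ArrayList<>();
        events.drainTo(batch);
        return batch;
    }

    // Stub standing in for a blocking network receive; overridden in tests.
    protected String receiveOneTrap() {
        return null;
    }
}
```

The queue decouples network receive timing from NiFi's scheduling, which is exactly what makes session handling in such processors manageable.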
[jira] [Updated] (NIFI-515) DeleteSQS and PutSQS should offer batch processing
[ https://issues.apache.org/jira/browse/NIFI-515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pierre Villard updated NIFI-515: Issue Type: Improvement (was: Bug) > DeleteSQS and PutSQS should offer batch processing > -- > > Key: NIFI-515 > URL: https://issues.apache.org/jira/browse/NIFI-515 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Reporter: Mark Payne >Assignee: Pierre Villard >Priority: Minor >
[jira] [Updated] (NIFI-4144) Add minimum/maximum file age properties to ListHDFS
[ https://issues.apache.org/jira/browse/NIFI-4144?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pierre Villard updated NIFI-4144: - Affects Version/s: (was: 1.4.0) > Add minimum/maximum file age properties to ListHDFS > --- > > Key: NIFI-4144 > URL: https://issues.apache.org/jira/browse/NIFI-4144 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Reporter: Pierre Villard >Assignee: Pierre Villard >Priority: Minor > Fix For: 1.4.0 > > > In some situations it would be interesting to have the minimum/maximum file > age properties in ListHDFS as we have in GetHDFS.
[jira] [Updated] (NIFI-4144) Add minimum/maximum file age properties to ListHDFS
[ https://issues.apache.org/jira/browse/NIFI-4144?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pierre Villard updated NIFI-4144: - Fix Version/s: 1.4.0 > Add minimum/maximum file age properties to ListHDFS > --- > > Key: NIFI-4144 > URL: https://issues.apache.org/jira/browse/NIFI-4144 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Reporter: Pierre Villard >Assignee: Pierre Villard >Priority: Minor > Fix For: 1.4.0 > > > In some situations it would be interesting to have the minimum/maximum file > age properties in ListHDFS as we have in GetHDFS.
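The min/max file age filtering proposed for ListHDFS (mirroring GetHDFS's Minimum/Maximum File Age properties) reduces to a simple window check on each file's modification time. A sketch with illustrative names, not the actual processor code:

```java
import java.util.List;
import java.util.stream.Collectors;

// Keep a file only when its age (now - modificationTime) falls inside the
// configured window. maxAgeMillis is Long.MAX_VALUE when no maximum is set.
class FileAgeFilter {
    private final long minAgeMillis;
    private final long maxAgeMillis;

    FileAgeFilter(long minAgeMillis, long maxAgeMillis) {
        this.minAgeMillis = minAgeMillis;
        this.maxAgeMillis = maxAgeMillis;
    }

    boolean accept(long modificationTime, long now) {
        long age = now - modificationTime;
        return age >= minAgeMillis && age <= maxAgeMillis;
    }

    List<Long> filter(List<Long> modificationTimes, long now) {
        return modificationTimes.stream()
                .filter(t -> accept(t, now))
                .collect(Collectors.toList());
    }
}
```

A minimum age is useful to skip files that may still be written to; a maximum age lets a listing ignore stale files.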
[jira] [Updated] (NIFI-515) DeleteSQS and PutSQS should offer batch processing
[ https://issues.apache.org/jira/browse/NIFI-515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pierre Villard updated NIFI-515: Assignee: Pierre Villard Status: Patch Available (was: Open) > DeleteSQS and PutSQS should offer batch processing > -- > > Key: NIFI-515 > URL: https://issues.apache.org/jira/browse/NIFI-515 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Reporter: Mark Payne >Assignee: Pierre Villard >Priority: Minor > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (NIFI-515) DeleteSQS and PutSQS should offer batch processing
[ https://issues.apache.org/jira/browse/NIFI-515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16074417#comment-16074417 ] ASF GitHub Bot commented on NIFI-515: - GitHub user pvillard31 opened a pull request: https://github.com/apache/nifi/pull/1977 NIFI-515 - DeleteSQS and PutSQS should offer batch processing Thank you for submitting a contribution to Apache NiFi. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken: ### For all changes: - [ ] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? - [ ] Does your PR title start with NIFI- where is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character. - [ ] Has your PR been rebased against the latest commit within the target branch (typically master)? - [ ] Is your initial contribution a single, squashed commit? ### For code changes: - [ ] Have you ensured that the full suite of tests is executed via mvn -Pcontrib-check clean install at the root nifi folder? - [ ] Have you written or updated unit tests to verify your changes? - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the LICENSE file, including the main LICENSE file under nifi-assembly? - [ ] If applicable, have you updated the NOTICE file, including the main NOTICE file found under nifi-assembly? - [ ] If adding new Properties, have you added .displayName in addition to .name (programmatic access) for each of the new properties? ### For documentation related changes: - [ ] Have you ensured that format looks appropriate for the output in which it is rendered? ### Note: Please ensure that once the PR is submitted, you check travis-ci for build issues and submit an update to your PR as soon as possible. 
You can merge this pull request into a Git repository by running: $ git pull https://github.com/pvillard31/nifi NIFI-515-temp Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi/pull/1977.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #1977 commit 87812b4dbcd6f7caec8d70e8805c53c59bb81a5b Author: Pierre Villard Date: 2017-07-03T15:10:53Z NIFI-515 - DeleteSQS and PutSQS should offer batch processing > DeleteSQS and PutSQS should offer batch processing > -- > > Key: NIFI-515 > URL: https://issues.apache.org/jira/browse/NIFI-515 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Reporter: Mark Payne >Priority: Minor >
[jira] [Commented] (NIFI-1613) ConvertJSONToSQL Drops Type Information
[ https://issues.apache.org/jira/browse/NIFI-1613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16074342#comment-16074342 ]

ASF GitHub Bot commented on NIFI-1613:
---------------------------------------

Github user ijokarumawak commented on the issue:

    https://github.com/apache/nifi/pull/293

    I've taken over the proposed change into #1976

> ConvertJSONToSQL Drops Type Information
> ---------------------------------------
>
>              Key: NIFI-1613
>              URL: https://issues.apache.org/jira/browse/NIFI-1613
>          Project: Apache NiFi
>       Issue Type: Bug
>       Components: Core Framework
> Affects Versions: 0.4.1, 0.5.1
>      Environment: Ubuntu 14.04 LTS
>         Reporter: Aaron Stephens
>         Assignee: Toivo Adams
>           Labels: ConvertJSONToSQL, Phoenix, SQL
>
> It appears that the ConvertJSONToSQL processor is turning Boolean (and
> possibly Integer and Float) values into Strings. This is okay for some
> drivers (like PostgreSQL) which can coerce a String back into a Boolean, but
> it causes issues for others (specifically Phoenix in my case).
> {noformat}
> org.apache.phoenix.schema.ConstraintViolationException: org.apache.phoenix.schema.TypeMismatchException: ERROR 203 (22005): Type mismatch. VARCHAR cannot be coerced to BOOLEAN
>         at org.apache.phoenix.schema.types.PDataType.throwConstraintViolationException(PDataType.java:282) ~[na:na]
>         at org.apache.phoenix.schema.types.PBoolean.toObject(PBoolean.java:136) ~[na:na]
>         at org.apache.phoenix.jdbc.PhoenixPreparedStatement.setObject(PhoenixPreparedStatement.java:442) ~[na:na]
>         at org.apache.commons.dbcp.DelegatingPreparedStatement.setObject(DelegatingPreparedStatement.java:166) ~[na:na]
>         at org.apache.commons.dbcp.DelegatingPreparedStatement.setObject(DelegatingPreparedStatement.java:166) ~[na:na]
>         at org.apache.nifi.processors.standard.PutSQL.setParameter(PutSQL.java:728) ~[na:na]
>         at org.apache.nifi.processors.standard.PutSQL.setParameters(PutSQL.java:606) ~[na:na]
>         at org.apache.nifi.processors.standard.PutSQL.onTrigger(PutSQL.java:223) ~[na:na]
>         at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27) ~[nifi-api-0.4.1.jar:0.4.1]
>         at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1146) ~[nifi-framework-core-0.4.1.jar:0.4.1]
>         at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:139) [nifi-framework-core-0.4.1.jar:0.4.1]
>         at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:49) [nifi-framework-core-0.4.1.jar:0.4.1]
>         at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:119) [nifi-framework-core-0.4.1.jar:0.4.1]
>         at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) [na:1.7.0_79]
>         at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304) [na:1.7.0_79]
>         at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178) [na:1.7.0_79]
>         at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) [na:1.7.0_79]
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [na:1.7.0_79]
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [na:1.7.0_79]
>         at java.lang.Thread.run(Thread.java:745) [na:1.7.0_79]
> Caused by: org.apache.phoenix.schema.TypeMismatchException: ERROR 203 (22005): Type mismatch. VARCHAR cannot be coerced to BOOLEAN
>         at org.apache.phoenix.exception.SQLExceptionCode$1.newException(SQLExceptionCode.java:71) ~[na:na]
>         at org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:145) ~[na:na]
>         ... 20 common frames omitted
> {noformat}
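The stack trace above fails inside PhoenixPreparedStatement.setObject because a String is bound where the column is BOOLEAN, and Phoenix (unlike PostgreSQL) refuses to coerce VARCHAR into BOOLEAN. The general shape of a fix is to choose the bind value's Java type from the target column's java.sql.Types code before calling setObject. A hedged sketch of that idea only (names are illustrative, not NiFi's actual implementation):

```java
import java.math.BigDecimal;
import java.sql.Types;

// Illustrative sketch: convert a JSON field's string value into a properly
// typed Java object based on the destination column's java.sql.Types code,
// so that PreparedStatement.setObject binds a Boolean/Integer/etc. rather
// than always binding a String.
public class TypedBinder {
    public static Object coerce(String value, int sqlType) {
        switch (sqlType) {
            case Types.BOOLEAN:
            case Types.BIT:
                return Boolean.valueOf(value);   // a Boolean, not the text "true"
            case Types.TINYINT:
            case Types.SMALLINT:
            case Types.INTEGER:
                return Integer.valueOf(value);
            case Types.BIGINT:
                return Long.valueOf(value);
            case Types.FLOAT:
            case Types.DOUBLE:
                return Double.valueOf(value);
            case Types.DECIMAL:
            case Types.NUMERIC:
                return new BigDecimal(value);    // preserves full precision
            default:
                return value;                    // bind as VARCHAR
        }
    }
}
```

The coerced object would then be passed to setObject; drivers that are strict about types, such as Phoenix, accept it because the bind value's class already matches the column.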
[jira] [Commented] (NIFI-1613) ConvertJSONToSQL Drops Type Information
[ https://issues.apache.org/jira/browse/NIFI-1613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16074337#comment-16074337 ]

ASF GitHub Bot commented on NIFI-1613:
---------------------------------------

Github user ijokarumawak commented on the issue:

    https://github.com/apache/nifi/pull/1976

    This PR contains commits from #293, which was closed due to inactivity, but the change itself was reasonable. I've cherry-picked the commits from that PR; the 1st commit in this PR is a squash of the ones made by @ToivoAdams. Thanks for your contribution Toivo! I also added a 2nd commit to bring the proposed change up to date with the latest codebase, along with a few modifications. We now have JsonTreeReader and PutDatabaseRecord, which handle type coercion better, so I don't think it's necessary to handle DATE/TIME conversion in ConvertJSONToSQL. However, without this fix the processor can wrongly truncate numeric values and lose part of the user's data, and that at least should be fixed. This PR is ready for review, thanks!
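The numeric truncation the comment warns about is easy to reproduce outside NiFi: if a JSON integer is routed through a double on its way to a BIGINT column, values above 2^53 lose their low bits, whereas parsing against the actual column type keeps them exact. A small, self-contained demonstration (class and method names are ours, for illustration):

```java
// Demonstrates why converting numeric JSON values through double can silently
// corrupt data: a double has 53 bits of mantissa, so not every long above
// 2^53 (9007199254740992) is representable exactly.
public class TruncationDemo {
    public static long viaDouble(String s) {
        return (long) Double.parseDouble(s); // lossy for very large integers
    }

    public static long viaLong(String s) {
        return Long.parseLong(s);            // exact for any value in long range
    }
}
```

For the input "9007199254740993" (2^53 + 1), viaLong returns the value unchanged while viaDouble rounds it to the nearest representable double and returns a different number.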
[jira] [Commented] (NIFI-1613) ConvertJSONToSQL Drops Type Information
[ https://issues.apache.org/jira/browse/NIFI-1613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16074331#comment-16074331 ]

ASF GitHub Bot commented on NIFI-1613:
---------------------------------------

GitHub user ijokarumawak opened a pull request:

    https://github.com/apache/nifi/pull/1976

    NIFI-1613: Make use of column type correctly at ConvertJSONToSQL

    Thank you for submitting a contribution to Apache NiFi. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken:

    ### For all changes:
    - [x] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message?
    - [x] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character.
    - [x] Has your PR been rebased against the latest commit within the target branch (typically master)?
    - [ ] Is your initial contribution a single, squashed commit?

    ### For code changes:
    - [ ] Have you ensured that the full suite of tests is executed via mvn -Pcontrib-check clean install at the root nifi folder?
    - [X] Have you written or updated unit tests to verify your changes?
    - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)?
    - [ ] If applicable, have you updated the LICENSE file, including the main LICENSE file under nifi-assembly?
    - [ ] If applicable, have you updated the NOTICE file, including the main NOTICE file found under nifi-assembly?
    - [ ] If adding new Properties, have you added .displayName in addition to .name (programmatic access) for each of the new properties?

    ### For documentation related changes:
    - [ ] Have you ensured that format looks appropriate for the output in which it is rendered?

    ### Note:
    Please ensure that once the PR is submitted, you check travis-ci for build issues and submit an update to your PR as soon as possible.
You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/ijokarumawak/nifi nifi-1613

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/nifi/pull/1976.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #1976

commit 07ed78cd130c632f63e1357d2407f946e6f5f45a
Author: Toivo Adams
Date:   2016-03-20T19:13:15Z

    NIFI-1613 Initial version, try to improve conversion for different SQL types. New test and refactored existing test to reuse DBCP service.

    nifi-1613 Adding numeric and Date/time types conversion and test.

commit 81e8391e3bc8e26413c0f1e38a41bc01b88159e1
Author: Koji Kawamura
Date:   2017-07-05T05:49:32Z

    NIFI-1613: ConvertJSONToSQL truncates numeric value wrongly.

    - Changed boolean value conversion to use Boolean.valueOf.
    - Updated comments in source code to reflect the current situation more clearly.
    - Updated tests that have been added since the original commits were made.
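The second commit's switch to Boolean.valueOf is worth a note: Boolean.valueOf(String) returns true only for a case-insensitive "true", and returns false for every other input, including "1", "yes", and null. A quick check of that behavior (the wrapper class here is ours, for illustration only):

```java
// Illustrates the semantics of java.lang.Boolean.valueOf(String): it is
// case-insensitive for "true", and anything else (including null) is false.
public class BooleanCoercionDemo {
    public static boolean toBool(String s) {
        // The String overload of valueOf is null-safe: null maps to false.
        return Boolean.valueOf(s);
    }
}
```

So JSON sources that encode booleans as 0/1 would silently map to false under this conversion; callers needing that mapping have to handle it explicitly.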