[GitHub] nifi issue #1312: NIFI-3147 CCDA Processor
Github user kedarchitale commented on the issue: https://github.com/apache/nifi/pull/1312 @joewitt, Thanks for the appreciation and early feedback. Will look into adding documentation to 'additionalDetails'. The properties file contains mappings for CCDA, which would not change once all sections are mapped; however, I wanted to externalize these from the Java code, so I kept them in the bundle as resources. These properties would not be changed by a NiFi user (at least I don't foresee a need) and ideally should be changed and tested by contributors mapping any additional sections of CCDA. Considering this, the options of having it as a single processor property or externalizing it from the bundle were not suitable. Will post on the dev list seeking feedback from the community. Thanks again for the feedback! --- If your project is set up for it, you can reply to this email and have your reply appear on GitHub as well. If your project does not have this feature enabled and wishes so, or if the feature is enabled but not working, please contact infrastructure at infrastruct...@apache.org or file a JIRA ticket with INFRA. ---
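The approach described in the comment, shipping the section mappings as bundle resources rather than as processor properties, can be sketched in plain Java. The file name, key format, and the `code@codeSystemName` entry below are hypothetical, chosen only to illustrate loading a mapping table from classpath resources:

```java
import java.io.IOException;
import java.io.StringReader;
import java.io.UncheckedIOException;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

public class CcdaMappingLoader {

    // Parse mapping properties into a lookup table a processor could use to
    // name FlowFile attributes. The key/value format is assumed for illustration.
    static Map<String, String> parse(String propertiesText) {
        Properties props = new Properties();
        try {
            props.load(new StringReader(propertiesText));
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        Map<String, String> mappings = new HashMap<>();
        for (String key : props.stringPropertyNames()) {
            mappings.put(key, props.getProperty(key));
        }
        return mappings;
    }

    public static void main(String[] args) {
        // In the real bundle the text would come from something like
        // getClass().getResourceAsStream("/ccda-mapping.properties") (name assumed).
        Map<String, String> m = parse("code@codeSystemName=code.codeSystemName\n");
        System.out.println(m.get("code@codeSystemName")); // prints code.codeSystemName
    }
}
```

Because the mappings ship inside the bundle rather than as an editable file on disk, every node in a cluster sees the same table, which also addresses the cluster-friendliness concern raised in the review.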
[jira] [Commented] (NIFI-3147) Build processor to parse CCDA into attributes and JSON
[ https://issues.apache.org/jira/browse/NIFI-3147?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15737288#comment-15737288 ] ASF GitHub Bot commented on NIFI-3147: -- Github user kedarchitale commented on the issue: https://github.com/apache/nifi/pull/1312 @joewitt, Thanks for the appreciation and early feedback. Will look into adding documentation to 'additionalDetails'. The properties file contains mappings for CCDA, which would not change once all sections are mapped; however, I wanted to externalize these from the Java code, so I kept them in the bundle as resources. These properties would not be changed by a NiFi user (at least I don't foresee a need) and ideally should be changed and tested by contributors mapping any additional sections of CCDA. Considering this, the options of having it as a single processor property or externalizing it from the bundle were not suitable. Will post on the dev list seeking feedback from the community. Thanks again for the feedback! > Build processor to parse CCDA into attributes and JSON > -- > > Key: NIFI-3147 > URL: https://issues.apache.org/jira/browse/NIFI-3147 > Project: Apache NiFi > Issue Type: New Feature > Components: Extensions > Reporter: Kedar Chitale > Labels: attributes, ccda, healthcare, json, parser > Original Estimate: 336h > Remaining Estimate: 336h > > Accept a CCDA document, parse the document to create JSON text and individual > attributes, for example code.codeSystemName=LOINC -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (NIFI-3147) Build processor to parse CCDA into attributes and JSON
[ https://issues.apache.org/jira/browse/NIFI-3147?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15737238#comment-15737238 ] ASF GitHub Bot commented on NIFI-3147: -- Github user joewitt commented on the issue: https://github.com/apache/nifi/pull/1312 @kedarchitale I've done a quick scan through the contrib, so not a thorough evaluation. But I want to immediately say thank you for what is quite obviously a very thoughtful and detailed contribution. You appear to have been very thoughtful about license and notice and about following convention, which is extremely appreciated. Nice! I also noticed you provided excellent documentation for the processor. Instead of that being expressed in the readme file, could you take a look at some of the example processors that take advantage of 'additionalDetails'? That way this wonderful information becomes part of the automated documentation and is available to the user through the application. I noticed there was a properties file in src/main/resources. I haven't looked into how that ties in, but if this is something you'll want the user to be able to edit, perhaps we can consider some alternative approaches. One challenge with properties files is that if one is meant to be user-editable, it can become cluster-unfriendly. So perhaps those things could be expressed as processor properties instead, or even as a single processor property. Not sure, and perhaps you've already thought through that. Just wanted to mention it. Anyway - great stuff and thanks for contributing!
> Build processor to parse CCDA into attributes and JSON > -- > > Key: NIFI-3147 > URL: https://issues.apache.org/jira/browse/NIFI-3147 > Project: Apache NiFi > Issue Type: New Feature > Components: Extensions > Reporter: Kedar Chitale > Labels: attributes, ccda, healthcare, json, parser > Original Estimate: 336h > Remaining Estimate: 336h > > Accept a CCDA document, parse the document to create JSON text and individual > attributes, for example code.codeSystemName=LOINC -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[GitHub] nifi issue #1312: NIFI-3147 CCDA Processor
Github user joewitt commented on the issue: https://github.com/apache/nifi/pull/1312 @kedarchitale I've done a quick scan through the contrib, so not a thorough evaluation. But I want to immediately say thank you for what is quite obviously a very thoughtful and detailed contribution. You appear to have been very thoughtful about license and notice and about following convention, which is extremely appreciated. Nice! I also noticed you provided excellent documentation for the processor. Instead of that being expressed in the readme file, could you take a look at some of the example processors that take advantage of 'additionalDetails'? That way this wonderful information becomes part of the automated documentation and is available to the user through the application. I noticed there was a properties file in src/main/resources. I haven't looked into how that ties in, but if this is something you'll want the user to be able to edit, perhaps we can consider some alternative approaches. One challenge with properties files is that if one is meant to be user-editable, it can become cluster-unfriendly. So perhaps those things could be expressed as processor properties instead, or even as a single processor property. Not sure, and perhaps you've already thought through that. Just wanted to mention it. Anyway - great stuff and thanks for contributing!
[jira] [Commented] (NIFI-3147) Build processor to parse CCDA into attributes and JSON
[ https://issues.apache.org/jira/browse/NIFI-3147?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15737165#comment-15737165 ] ASF GitHub Bot commented on NIFI-3147: -- Github user kedarchitale commented on the issue: https://github.com/apache/nifi/pull/1312 The CI build passed, but the AppVeyor build is failing due to an unrelated issue:
Running org.apache.nifi.processors.standard.TestListFile
Tests run: 12, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 4.367 sec <<< FAILURE! - in org.apache.nifi.processors.standard.TestListFile
testAttributesSet(org.apache.nifi.processors.standard.TestListFile) Time elapsed: 0.203 sec <<< FAILURE!
java.lang.AssertionError: null
 at org.junit.Assert.fail(Assert.java:86)
 at org.junit.Assert.assertTrue(Assert.java:41)
 at org.junit.Assert.assertTrue(Assert.java:52)
 at org.apache.nifi.processors.standard.TestListFile.testAttributesSet(TestListFile.java:675)
> Build processor to parse CCDA into attributes and JSON > -- > > Key: NIFI-3147 > URL: https://issues.apache.org/jira/browse/NIFI-3147 > Project: Apache NiFi > Issue Type: New Feature > Components: Extensions > Reporter: Kedar Chitale > Labels: attributes, ccda, healthcare, json, parser > Original Estimate: 336h > Remaining Estimate: 336h > > Accept a CCDA document, parse the document to create JSON text and individual > attributes, for example code.codeSystemName=LOINC -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[GitHub] nifi issue #1312: NIFI-3147 CCDA Processor
Github user kedarchitale commented on the issue: https://github.com/apache/nifi/pull/1312 The CI build passed, but the AppVeyor build is failing due to an unrelated issue:
Running org.apache.nifi.processors.standard.TestListFile
Tests run: 12, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 4.367 sec <<< FAILURE! - in org.apache.nifi.processors.standard.TestListFile
testAttributesSet(org.apache.nifi.processors.standard.TestListFile) Time elapsed: 0.203 sec <<< FAILURE!
java.lang.AssertionError: null
 at org.junit.Assert.fail(Assert.java:86)
 at org.junit.Assert.assertTrue(Assert.java:41)
 at org.junit.Assert.assertTrue(Assert.java:52)
 at org.apache.nifi.processors.standard.TestListFile.testAttributesSet(TestListFile.java:675)
[jira] [Created] (NIFI-3183) Add "command" and "arguments" attributes to FlowFiles generated by ExecuteProcess and ExecuteStreamCommand
Randy Gelhausen created NIFI-3183: - Summary: Add "command" and "arguments" attributes to FlowFiles generated by ExecuteProcess and ExecuteStreamCommand Key: NIFI-3183 URL: https://issues.apache.org/jira/browse/NIFI-3183 Project: Apache NiFi Issue Type: Improvement Reporter: Randy Gelhausen It's common to use ExecuteProcess/ExecuteStreamCommand to generate new data sources which are routed and handled downstream dynamically based on the command executed (and arguments passed). Adding the given command and arguments as attributes of FlowFiles generated by these processors makes this pattern easier to implement. See MiNiFi-161/MiNiFi-166 for additional detail. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
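The improvement amounts to stamping each generated FlowFile with the command that produced it. A minimal sketch of building that attribute map; the attribute names below are assumptions, since the ticket does not fix them:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class CommandAttributes {

    // Build the attributes the ticket proposes; inside a processor this map
    // would be applied with session.putAllAttributes(flowFile, attrs).
    static Map<String, String> forCommand(String command, String arguments) {
        Map<String, String> attrs = new LinkedHashMap<>();
        attrs.put("command", command);             // assumed attribute name
        attrs.put("command.arguments", arguments); // assumed attribute name
        return attrs;
    }

    public static void main(String[] args) {
        System.out.println(forCommand("/bin/ls", "-la /tmp"));
    }
}
```

Downstream, a RouteOnAttribute processor could then match on `${command}` to route per data source, which is the pattern the ticket describes.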
[jira] [Commented] (NIFI-3031) Support Multi-Statement Scripts in the PutHiveQL Processor
[ https://issues.apache.org/jira/browse/NIFI-3031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15736965#comment-15736965 ] ASF GitHub Bot commented on NIFI-3031: -- GitHub user dstreev opened a pull request: https://github.com/apache/nifi/pull/1316 NIFI-3031 Support Multi-Statement Scripts in the PutHiveQL Processor
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/dstreev/nifi-1 NIFI-3031
Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi/pull/1316.patch
To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #1316
commit 7a18054dad40e3c21a9b8c7dd760a8e283f12287 Author: David W. Streever Date: 2016-11-04T15:03:17Z
PutHiveQL and SelectHiveQL Processor enhancements. Added support for multiple statements in a script. Options for delimiters, quotes, escaping, include header and alternate header. Add support in SelectHiveQL to get script content from the Flow File to bring consistency with patterns used for PutHiveQL and support extra query management. Changed behavior of using Flowfile to match ExecuteSQL. Handle query delimiter when embedded. Added test case for embedded delimiter. Formatting and License Header. Removing dead code.
commit 31efc23963428c389ca0d6ae01b64ad1e025040e Author: David W. Streever Date: 2016-12-10T01:42:35Z
Comments to Clarify test case.
> Support Multi-Statement Scripts in the PutHiveQL Processor > -- > > Key: NIFI-3031 > URL: https://issues.apache.org/jira/browse/NIFI-3031 > Project: Apache NiFi > Issue Type: Improvement > Reporter: Matt Burgess > > Trying to use the PutHiveQL processor to execute a HiveQL script that > contains multiple statements. > I.e.: > USE my_database; > FROM my_database_src.base_table > INSERT OVERWRITE refined_table > SELECT *; > -- or -- > use my_database; > create temporary table WORKING as > select a,b,c from RAW; > FROM RAW > INSERT OVERWRITE refined_table > SELECT *; > The current implementation doesn't even like it when you have a semicolon at > the end of a single statement. > Either use a default delimiter like a semicolon to mark the boundaries of a > statement within the file or allow users to define their own. > This enables the building of pipelines that are testable by not embedding > HiveQL into a product; rather sourcing them from files. And the scripts can > be complex. Each statement should run in a linear manner and be part of the > same JDBC session to ensure things like "temporary" tables will work. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
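The delimiter handling the ticket asks for needs to ignore delimiters inside quoted literals (a semicolon inside 'a;b' must not split the statement). A minimal sketch of such a splitter, not the PR's actual implementation:

```java
import java.util.ArrayList;
import java.util.List;

public class ScriptSplitter {

    // Split a script into statements on the delimiter, skipping delimiters
    // that appear inside single- or double-quoted strings.
    static List<String> split(String script, char delimiter) {
        List<String> statements = new ArrayList<>();
        StringBuilder current = new StringBuilder();
        char quote = 0; // 0 = not currently inside a quoted string
        for (char c : script.toCharArray()) {
            if (quote != 0) {
                current.append(c);
                if (c == quote) quote = 0;      // closing quote
            } else if (c == '\'' || c == '"') {
                quote = c;                       // opening quote
                current.append(c);
            } else if (c == delimiter) {
                String stmt = current.toString().trim();
                if (!stmt.isEmpty()) statements.add(stmt);
                current.setLength(0);
            } else {
                current.append(c);
            }
        }
        String tail = current.toString().trim();
        if (!tail.isEmpty()) statements.add(tail);
        return statements;
    }

    public static void main(String[] args) {
        List<String> stmts = split("USE my_database; SELECT 'a;b' FROM t;", ';');
        System.out.println(stmts); // [USE my_database, SELECT 'a;b' FROM t]
    }
}
```

Running each resulting statement on the same JDBC connection, in order, is what preserves session state such as temporary tables, per the ticket's last requirement.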
[GitHub] nifi pull request #1316: NIFI-3031 Support Multi-Statement Scripts in the Pu...
GitHub user dstreev opened a pull request: https://github.com/apache/nifi/pull/1316 NIFI-3031 Support Multi-Statement Scripts in the PutHiveQL Processor
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/dstreev/nifi-1 NIFI-3031
Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi/pull/1316.patch
To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #1316
commit 7a18054dad40e3c21a9b8c7dd760a8e283f12287 Author: David W. Streever Date: 2016-11-04T15:03:17Z
PutHiveQL and SelectHiveQL Processor enhancements. Added support for multiple statements in a script. Options for delimiters, quotes, escaping, include header and alternate header. Add support in SelectHiveQL to get script content from the Flow File to bring consistency with patterns used for PutHiveQL and support extra query management. Changed behavior of using Flowfile to match ExecuteSQL. Handle query delimiter when embedded. Added test case for embedded delimiter. Formatting and License Header. Removing dead code.
commit 31efc23963428c389ca0d6ae01b64ad1e025040e Author: David W. Streever Date: 2016-12-10T01:42:35Z
Comments to Clarify test case.
[jira] [Commented] (NIFI-3031) Support Multi-Statement Scripts in the PutHiveQL Processor
[ https://issues.apache.org/jira/browse/NIFI-3031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15736793#comment-15736793 ] ASF GitHub Bot commented on NIFI-3031: -- Github user dstreev closed the pull request at: https://github.com/apache/nifi/pull/1217 > Support Multi-Statement Scripts in the PutHiveQL Processor > -- > > Key: NIFI-3031 > URL: https://issues.apache.org/jira/browse/NIFI-3031 > Project: Apache NiFi > Issue Type: Improvement > Reporter: Matt Burgess > > Trying to use the PutHiveQL processor to execute a HiveQL script that > contains multiple statements. > I.e.: > USE my_database; > FROM my_database_src.base_table > INSERT OVERWRITE refined_table > SELECT *; > -- or -- > use my_database; > create temporary table WORKING as > select a,b,c from RAW; > FROM RAW > INSERT OVERWRITE refined_table > SELECT *; > The current implementation doesn't even like it when you have a semicolon at > the end of a single statement. > Either use a default delimiter like a semicolon to mark the boundaries of a > statement within the file or allow users to define their own. > This enables the building of pipelines that are testable by not embedding > HiveQL into a product; rather sourcing them from files. And the scripts can > be complex. Each statement should run in a linear manner and be part of the > same JDBC session to ensure things like "temporary" tables will work. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (NIFI-3182) PublishKafka and PublishKafka_0_10 hang indefinitely, continually sending the same message, if delimiter ends at 8 KB mark
[ https://issues.apache.org/jira/browse/NIFI-3182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Payne resolved NIFI-3182. -- Resolution: Won't Fix Marking as Won't Fix since the issue was already resolved by NIFI-2865. > PublishKafka and PublishKafka_0_10 hang indefinitely, continually sending the > same message, if delimiter ends at 8 KB mark > -- > > Key: NIFI-3182 > URL: https://issues.apache.org/jira/browse/NIFI-3182 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Affects Versions: 1.0.0 >Reporter: Mark Payne > > If PublishKafka is configured to send using a delimiter, and it receives a > FlowFile whose content contains the delimiter, and that delimiter ends at the > 8 KB mark (8192 bytes into the FlowFile Content), then PublishKafka will get > stuck in an infinite loop, sending the same message over and over. > To replicate, create a simple Flow: > GenerateFlowFile -> ReplaceText -> PublishKafka > For GenerateFlowFile, generate a message that is 8187 bytes. > With ReplaceText, configure it to always append the value "HELLO" (without > quotes). > Configure PublishKafka to use a demarcator of "HELLO" (without quotes). > When PublishKafka attempts to send the data, it will continually send the > randomly generated data over and over and over, never relinquishing its > thread. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (NIFI-3182) PublishKafka and PublishKafka_0_10 hang indefinitely, continually sending the same message, if delimiter ends at 8 KB mark
Mark Payne created NIFI-3182: Summary: PublishKafka and PublishKafka_0_10 hang indefinitely, continually sending the same message, if delimiter ends at 8 KB mark Key: NIFI-3182 URL: https://issues.apache.org/jira/browse/NIFI-3182 Project: Apache NiFi Issue Type: Bug Components: Extensions Reporter: Mark Payne If PublishKafka is configured to send using a delimiter, and it receives a FlowFile whose content contains the delimiter, and that delimiter ends at the 8 KB mark (8192 bytes into the FlowFile Content), then PublishKafka will get stuck in an infinite loop, sending the same message over and over. To replicate, create a simple Flow: GenerateFlowFile -> ReplaceText -> PublishKafka For GenerateFlowFile, generate a message that is 8187 bytes. With ReplaceText, configure it to always append the value "HELLO" (without quotes). Configure PublishKafka to use a demarcator of "HELLO" (without quotes). When PublishKafka attempts to send the data, it will continually send the randomly generated data over and over and over, never relinquishing its thread. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
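The reproduction hinges on the demarcator ending exactly at the 8192-byte mark, which suggests the processor's buffered scan failed to advance in that case. An index-based scan that always moves past each matched demarcator cannot stall; the sketch below (not NiFi's actual code) splits the exact content from the ticket's recipe and terminates:

```java
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class DemarcatorSplit {

    // Split content into messages on the demarcator. The index always advances,
    // so a demarcator ending at any buffer-size boundary cannot cause a loop.
    static List<byte[]> split(byte[] content, byte[] demarcator) {
        List<byte[]> messages = new ArrayList<>();
        int start = 0;
        int i = 0;
        while (i <= content.length - demarcator.length) {
            if (regionMatches(content, i, demarcator)) {
                messages.add(Arrays.copyOfRange(content, start, i));
                i += demarcator.length; // always step past the demarcator
                start = i;
            } else {
                i++;
            }
        }
        if (start < content.length) {
            messages.add(Arrays.copyOfRange(content, start, content.length));
        }
        return messages;
    }

    private static boolean regionMatches(byte[] data, int offset, byte[] pattern) {
        for (int j = 0; j < pattern.length; j++) {
            if (data[offset + j] != pattern[j]) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        // The ticket's sizing: an 8187-byte message plus "HELLO" = exactly 8192 bytes.
        byte[] demarcator = "HELLO".getBytes(StandardCharsets.UTF_8);
        byte[] content = new byte[8192];
        Arrays.fill(content, (byte) 'x');
        System.arraycopy(demarcator, 0, content, 8187, demarcator.length);
        List<byte[]> messages = split(content, demarcator);
        System.out.println(messages.size() + " message(s), first is "
                + messages.get(0).length + " bytes"); // 1 message(s), first is 8187 bytes
    }
}
```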
[jira] [Created] (NIFI-3181) Add more options to right-click menu options when root process group is selected
Andrew Lim created NIFI-3181: Summary: Add more options to right-click menu options when root process group is selected Key: NIFI-3181 URL: https://issues.apache.org/jira/browse/NIFI-3181 Project: Apache NiFi Issue Type: Improvement Components: Core UI Reporter: Andrew Lim Priority: Minor When the root process group is selected, right-clicking on the canvas provides "Refresh" as the only option. It would be very useful to add other menu selections. At minimum, we should add the selections available in the Operate Palette (Configure, Start, Stop, Create Template, Upload Template). This should greatly reduce the amount of scrolling required if the user is actively in a part of the canvas that is far away from the Operate Palette. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (NIFI-3180) unable to import a flow template to NIFI 1.2 SNAPSHOT with "null" error
Haimo Liu created NIFI-3180: --- Summary: unable to import a flow template to NIFI 1.2 SNAPSHOT with "null" error Key: NIFI-3180 URL: https://issues.apache.org/jira/browse/NIFI-3180 Project: Apache NiFi Issue Type: Bug Components: Core Framework Affects Versions: 1.2.0 Reporter: Haimo Liu Priority: Critical I exported a flow template from NIFI 1.1 (attached), but was unable to import it into a NIFI 1.2 SNAPSHOT build. After clicking "upload", I got the following error message via the UI: Unable to import the specified template: null, which obviously isn't very helpful. I looked at nifi-user.log and nifi-app.log; nothing interesting there either. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (NIFI-3035) URL to display a particular process group in UI
[ https://issues.apache.org/jira/browse/NIFI-3035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15736358#comment-15736358 ] Andrew Lim commented on NIFI-3035: -- I have also thought about this ability to provide "deep linking" in NiFi. Besides a process group, we could also provide exact URLs to other areas of the application. For example in the Data provenance window, a URL could show results for a specific search with sorting applied. > URL to display a particular process group in UI > --- > > Key: NIFI-3035 > URL: https://issues.apache.org/jira/browse/NIFI-3035 > Project: Apache NiFi > Issue Type: New Feature > Components: Core UI >Reporter: Christine Draper > > Our use case has multiple teams of users working on specific process groups. > We would like to be able to give them a URL that will launch the UI on the > specific group they are working on, rather than them having to navigate to it > from the root group. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (NIFI-3179) MergeContent extracts demarcator property value bytes without specifying charset encoding
Oleg Zhurakousky created NIFI-3179: -- Summary: MergeContent extracts demarcator property value bytes without specifying charset encoding Key: NIFI-3179 URL: https://issues.apache.org/jira/browse/NIFI-3179 Project: Apache NiFi Issue Type: Bug Reporter: Oleg Zhurakousky Assignee: Oleg Zhurakousky This may cause byte-translation issues depending on the machine's default encoding. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
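The problem described is the charset-less `String.getBytes()` overload, which uses the JVM's platform default charset. A demarcator containing non-ASCII characters can then serialize differently across machines; passing an explicit charset removes the ambiguity:

```java
import java.nio.charset.StandardCharsets;

public class DemarcatorBytes {

    // Explicit charset: the same bytes on every machine, regardless of
    // the platform default encoding.
    static byte[] toBytes(String demarcator) {
        return demarcator.getBytes(StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        // "\u00A7" (the section sign) is 2 bytes in UTF-8 but 1 byte in
        // ISO-8859-1, so demarcator.getBytes() with no argument is
        // machine-dependent; the explicit overload is not.
        System.out.println(toBytes("\u00A7").length); // prints 2
    }
}
```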
[jira] [Assigned] (NIFI-96) Consider click to align for components on canvas
[ https://issues.apache.org/jira/browse/NIFI-96?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Scott Aslan reassigned NIFI-96: --- Assignee: Scott Aslan > Consider click to align for components on canvas > > > Key: NIFI-96 > URL: https://issues.apache.org/jira/browse/NIFI-96 > Project: Apache NiFi > Issue Type: Improvement > Components: Core UI >Reporter: Matt Gilman >Assignee: Scott Aslan >Priority: Minor > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (NIFI-2585) Add attributes to track where a flow file came from when receiving over site-to-site
[ https://issues.apache.org/jira/browse/NIFI-2585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15736129#comment-15736129 ] ASF GitHub Bot commented on NIFI-2585: -- Github user randerzander commented on the issue: https://github.com/apache/nifi/pull/1307 This may be an artifact of Docker generating hostnames that don't parse well. Here's what I see in my testing:
monitor.dev_1 | 2016-12-09 19:41:34,127 INFO [Site-to-Site Worker Thread-6] o.a.nifi.remote.SocketRemoteSiteListener Received connection from techops_web-service.dev_1.techops_dev/172.18.0.5, User DN: null
monitor.dev_1 | 2016-12-09 19:41:34,127 ERROR [Site-to-Site Worker Thread-6] o.a.nifi.remote.SocketRemoteSiteListener Handshake failed when communicating with nifi://techops_web-service.dev_1.techops_dev:42626; closing connection. Reason for failure: java.lang.IllegalStateException: Unable to get host or port from peer URL nifi://techops_web-service.dev_1.techops_dev:42626
> Add attributes to track where a flow file came from when receiving over > site-to-site > > > Key: NIFI-2585 > URL: https://issues.apache.org/jira/browse/NIFI-2585 > Project: Apache NiFi > Issue Type: Improvement > Reporter: Bryan Bende > Assignee: Randy Gelhausen > Priority: Minor > > With MiNiFi starting to be used to send data to a central NiFi, it would be > helpful if information about the sending host and port was added to each flow > file received over site-to-site. Currently this information is available and > used to generate the transit URI in the RECEIVE event, but this information > isn't available to downstream processors that might want to make routing > decisions.
> For reference: > https://github.com/apache/nifi/blob/e23b2356172e128086585fe2c425523c3628d0e7/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-site-to-site/src/main/java/org/apache/nifi/remote/protocol/AbstractFlowFileServerProtocol.java#L452 > A possible approach might be to add two attributes to each flow file, > something like "remote.host" and "remote.address" where remote.host has only > the sending hostname, and remote.address has the sending host and port. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[GitHub] nifi issue #1307: NIFI-2585: Add attributes to track where a flow file came ...
Github user randerzander commented on the issue: https://github.com/apache/nifi/pull/1307 This may be an artifact of Docker generating hostnames that don't parse well. Here's what I see in my testing:
monitor.dev_1 | 2016-12-09 19:41:34,127 INFO [Site-to-Site Worker Thread-6] o.a.nifi.remote.SocketRemoteSiteListener Received connection from techops_web-service.dev_1.techops_dev/172.18.0.5, User DN: null
monitor.dev_1 | 2016-12-09 19:41:34,127 ERROR [Site-to-Site Worker Thread-6] o.a.nifi.remote.SocketRemoteSiteListener Handshake failed when communicating with nifi://techops_web-service.dev_1.techops_dev:42626; closing connection. Reason for failure: java.lang.IllegalStateException: Unable to get host or port from peer URL nifi://techops_web-service.dev_1.techops_dev:42626
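The "Unable to get host or port" failure is consistent with how `java.net.URI` parses authorities: underscores are not permitted in an RFC 2396 hostname, so for Docker-style names such as `techops_web-service.dev_1.techops_dev` the authority is treated as registry-based and `getHost()` returns null (and `getPort()` returns -1). A small demonstration of the JDK behavior only; how NiFi consumes the result is not shown here:

```java
import java.net.URI;
import java.net.URISyntaxException;

public class PeerUrlParsing {

    // Returns java.net.URI's parsed host, which is null whenever the authority
    // is not a valid RFC 2396 hostname (underscores are not allowed there).
    static String hostOf(String url) {
        try {
            return new URI(url).getHost();
        } catch (URISyntaxException e) {
            throw new IllegalArgumentException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(hostOf("nifi://nifi-node-1.example.com:42626"));             // nifi-node-1.example.com
        System.out.println(hostOf("nifi://techops_web-service.dev_1.techops_dev:42626")); // null
    }
}
```

Avoiding underscores in container hostnames (Docker's default service names often contain them) sidesteps the parse failure entirely.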
[jira] [Updated] (NIFI-3160) Updating the PG name of the current PG does not get syndicated
[ https://issues.apache.org/jira/browse/NIFI-3160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Scott Aslan updated NIFI-3160: -- Status: Patch Available (was: In Progress) > Updating the PG name of the current PG does not get syndicated > -- > > Key: NIFI-3160 > URL: https://issues.apache.org/jira/browse/NIFI-3160 > Project: Apache NiFi > Issue Type: Bug > Components: Core UI >Affects Versions: 1.1.0 >Reporter: Scott Aslan >Assignee: Scott Aslan > Fix For: 1.2.0 > > Attachments: Operate Rename before canvas refresh.png, Rename PG.png, > acess policies before canvas refresh.png > > > When attempting to update the name of the current PG the new name does not > get syndicated in the operate palette, the breadcrumbs, or the access > policies until a refresh of the canvas has occurred. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (NIFI-3160) Updating the PG name of the current PG does not get syndicated
[ https://issues.apache.org/jira/browse/NIFI-3160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15735965#comment-15735965 ] ASF GitHub Bot commented on NIFI-3160: -- GitHub user scottyaslan opened a pull request: https://github.com/apache/nifi/pull/1315 [NIFI-3160] reload canvas when updating PG name of current PG
Thank you for submitting a contribution to Apache NiFi. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken:
### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message?
- [ ] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character.
- [ ] Has your PR been rebased against the latest commit within the target branch (typically master)?
- [ ] Is your initial contribution a single, squashed commit?
### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn -Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)?
- [ ] If applicable, have you updated the LICENSE file, including the main LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to .name (programmatic access) for each of the new properties?
### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in which it is rendered?
### Note:
Please ensure that once the PR is submitted, you check travis-ci for build issues and submit an update to your PR as soon as possible.
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/scottyaslan/nifi NIFI-3160
Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi/pull/1315.patch
To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #1315
commit 0fa5cc609abd31529b67c46f8639677fa7675059 Author: Scott Aslan Date: 2016-12-09T18:29:39Z
[NIFI-3160] reload canvas when updating PG name of current PG
> Updating the PG name of the current PG does not get syndicated > -- > > Key: NIFI-3160 > URL: https://issues.apache.org/jira/browse/NIFI-3160 > Project: Apache NiFi > Issue Type: Bug > Components: Core UI > Affects Versions: 1.1.0 > Reporter: Scott Aslan > Assignee: Scott Aslan > Fix For: 1.2.0 > > Attachments: Operate Rename before canvas refresh.png, Rename PG.png, > acess policies before canvas refresh.png > > > When attempting to update the name of the current PG the new name does not > get syndicated in the operate palette, the breadcrumbs, or the access > policies until a refresh of the canvas has occurred. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[GitHub] nifi pull request #1315: [NIFI-3160] reload canvas when updating PG name of ...
GitHub user scottyaslan opened a pull request: https://github.com/apache/nifi/pull/1315 [NIFI-3160] reload canvas when updating PG name of current PG
Thank you for submitting a contribution to Apache NiFi. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken:
### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message?
- [ ] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character.
- [ ] Has your PR been rebased against the latest commit within the target branch (typically master)?
- [ ] Is your initial contribution a single, squashed commit?
### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn -Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)?
- [ ] If applicable, have you updated the LICENSE file, including the main LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to .name (programmatic access) for each of the new properties?
### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in which it is rendered?
### Note:
Please ensure that once the PR is submitted, you check travis-ci for build issues and submit an update to your PR as soon as possible.
You can merge this pull request into a Git repository by running: $ git pull https://github.com/scottyaslan/nifi NIFI-3160 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi/pull/1315.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #1315 commit 0fa5cc609abd31529b67c46f8639677fa7675059 Author: Scott Aslan Date: 2016-12-09T18:29:39Z [NIFI-3160] reload canvas when updating PG name of current PG --- If your project is set up for it, you can reply to this email and have your reply appear on GitHub as well. If your project does not have this feature enabled and wishes so, or if the feature is enabled but not working, please contact infrastructure at infrastruct...@apache.org or file a JIRA ticket with INFRA. ---
[jira] [Commented] (NIFI-3178) Missing images in User Admin Guide for Operate Palette buttons
[ https://issues.apache.org/jira/browse/NIFI-3178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15735902#comment-15735902 ] ASF GitHub Bot commented on NIFI-3178: -- GitHub user andrewmlim opened a pull request: https://github.com/apache/nifi/pull/1314 NIFI-3178 Corrected missing Operate Palette button images in User Adm… …in Guide You can merge this pull request into a Git repository by running: $ git pull https://github.com/andrewmlim/nifi NIFI-3178 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi/pull/1314.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #1314 commit b9934edf3df7f561de7356519ac52780a7e6543a Author: Andrew Lim Date: 2016-12-09T17:51:38Z NIFI-3178 Corrected missing Operate Palette button images in User Admin Guide > Missing images in User Admin Guide for Operate Palette buttons > -- > > Key: NIFI-3178 > URL: https://issues.apache.org/jira/browse/NIFI-3178 > Project: Apache NiFi > Issue Type: Bug >Affects Versions: 1.2.0 >Reporter: Andrew Lim >Assignee: Andrew Lim >Priority: Minor > > In my PR for NIFI-3143, I updated the User Guide to reference the following > new images: > buttonStart.png > buttonStop.png > buttonDisable.png > buttonEnable.png > However, these images were mistakenly left out of the PR.
[GitHub] nifi pull request #1314: NIFI-3178 Corrected missing Operate Palette button ...
GitHub user andrewmlim opened a pull request: https://github.com/apache/nifi/pull/1314 NIFI-3178 Corrected missing Operate Palette button images in User Adm… …in Guide You can merge this pull request into a Git repository by running: $ git pull https://github.com/andrewmlim/nifi NIFI-3178 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi/pull/1314.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #1314 commit b9934edf3df7f561de7356519ac52780a7e6543a Author: Andrew Lim Date: 2016-12-09T17:51:38Z NIFI-3178 Corrected missing Operate Palette button images in User Admin Guide
[jira] [Assigned] (NIFI-3178) Missing images in User Admin Guide for Operate Palette buttons
[ https://issues.apache.org/jira/browse/NIFI-3178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Lim reassigned NIFI-3178: Assignee: Andrew Lim > Missing images in User Admin Guide for Operate Palette buttons > -- > > Key: NIFI-3178 > URL: https://issues.apache.org/jira/browse/NIFI-3178 > Project: Apache NiFi > Issue Type: Bug >Affects Versions: 1.2.0 >Reporter: Andrew Lim >Assignee: Andrew Lim >Priority: Minor > > In my PR for NIFI-3143, I updated the User Guide to reference the following > new images: > buttonStart.png > buttonStop.png > buttonDisable.png > buttonEnable.png > However, these images were mistakenly left out of the PR.
[jira] [Created] (NIFI-3178) Missing images in User Admin Guide for Operate Palette buttons
Andrew Lim created NIFI-3178: Summary: Missing images in User Admin Guide for Operate Palette buttons Key: NIFI-3178 URL: https://issues.apache.org/jira/browse/NIFI-3178 Project: Apache NiFi Issue Type: Bug Affects Versions: 1.2.0 Reporter: Andrew Lim Priority: Minor In my PR for NIFI-3143, I updated the User Guide to reference the following new images: buttonStart.png buttonStop.png buttonDisable.png buttonEnable.png However, these images were mistakenly left out of the PR.
[jira] [Commented] (NIFI-3090) Should add new flow election cluster properties to Admin Guide property tables
[ https://issues.apache.org/jira/browse/NIFI-3090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15735728#comment-15735728 ] ASF GitHub Bot commented on NIFI-3090: -- GitHub user andrewmlim opened a pull request: https://github.com/apache/nifi/pull/1313 NIFI-3090 Added new flow election cluster properties to Admin Guide p… …roperty tables You can merge this pull request into a Git repository by running: $ git pull https://github.com/andrewmlim/nifi NIFI-3090 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi/pull/1313.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #1313 commit 1d9265252ee9b32c108591676ac6d353b5341ba8 Author: Andrew Lim Date: 2016-12-09T16:33:15Z NIFI-3090 Added new flow election cluster properties to Admin Guide property tables > Should add new flow election cluster properties to Admin Guide property tables > -- > > Key: NIFI-3090 > URL: https://issues.apache.org/jira/browse/NIFI-3090 > Project: Apache NiFi > Issue Type: Improvement > Components: Documentation & Website >Affects Versions: 1.1.0 >Reporter: Andrew Lim >Assignee: Andrew Lim >Priority: Minor > > https://issues.apache.org/jira/browse/NIFI-1966 added two new properties: > nifi.cluster.flow.election.max.wait.time > nifi.cluster.flow.election.max.candidates > Documentation was added within the Admin Guide "Flow Election" section, but > these properties should also be added to the "Cluster Node Properties" table > at the end of the doc.
[GitHub] nifi pull request #1313: NIFI-3090 Added new flow election cluster propertie...
GitHub user andrewmlim opened a pull request: https://github.com/apache/nifi/pull/1313 NIFI-3090 Added new flow election cluster properties to Admin Guide p… …roperty tables You can merge this pull request into a Git repository by running: $ git pull https://github.com/andrewmlim/nifi NIFI-3090 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi/pull/1313.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #1313 commit 1d9265252ee9b32c108591676ac6d353b5341ba8 Author: Andrew Lim Date: 2016-12-09T16:33:15Z NIFI-3090 Added new flow election cluster properties to Admin Guide property tables
[jira] [Assigned] (NIFI-3160) Updating the PG name of the current PG does not get syndicated
[ https://issues.apache.org/jira/browse/NIFI-3160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Scott Aslan reassigned NIFI-3160: - Assignee: Scott Aslan > Updating the PG name of the current PG does not get syndicated > -- > > Key: NIFI-3160 > URL: https://issues.apache.org/jira/browse/NIFI-3160 > Project: Apache NiFi > Issue Type: Bug > Components: Core UI >Affects Versions: 1.1.0 >Reporter: Scott Aslan >Assignee: Scott Aslan > Fix For: 1.2.0 > > Attachments: Operate Rename before canvas refresh.png, Rename PG.png, > acess policies before canvas refresh.png > > > When attempting to update the name of the current PG the new name does not > get syndicated in the operate palette, the breadcrumbs, or the access > policies until a refresh of the canvas has occurred.
[GitHub] nifi issue #1307: NIFI-2585: Add attributes to track where a flow file came ...
Github user bbende commented on the issue: https://github.com/apache/nifi/pull/1307 @randerzander I haven't been able to reproduce this yet. Running a simple GetFile -> RPG on minifi-cpp to a regular NiFi with InputPort -> LogAttribute, I am seeing the two attributes come through correctly. I'm trying to look at the rest of the s2s code to see how a Peer instance could be created and not get the correct host or port. Would you be able to add the following code to the end of the Peer constructor? https://github.com/apache/nifi/blob/master/nifi-commons/nifi-site-to-site-client/src/main/java/org/apache/nifi/remote/Peer.java#L51 ``` if (this.host == null || this.port == -1) { throw new IllegalStateException("Unable to get host or port from peerUrl" + peerUrl); } ``` My theory is that somehow the parsing of the URI is considered successful, but it didn't actually find a host or port, since the Javadocs of uri.getPort() say it can return -1. Let me know if you can get that stacktrace from that added code to see what the peerUrl is when this happens. Thanks.
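[Editor's note] The theory above can be checked outside NiFi. The sketch below is illustrative only (not NiFi code, and `PeerUrlDemo` is a hypothetical class name): it shows that `java.net.URI` can parse a URL without throwing, yet still report a null host and a port of -1 — for instance when no port is given, or when the hostname contains an underscore, which makes Java treat the authority as registry-based.

```java
import java.net.URI;
import java.net.URISyntaxException;

// Demonstrates how a peer URL can parse "successfully" while getHost()
// returns null and/or getPort() returns -1, matching the theory above.
public class PeerUrlDemo {
    public static void main(String[] args) throws URISyntaxException {
        // Well-formed authority: host and port are both recovered.
        URI ok = new URI("nifi://host.example.com:8080/nifi");
        System.out.println(ok.getHost() + ":" + ok.getPort()); // host.example.com:8080

        // No port in the URL: getPort() returns -1, as its Javadoc allows.
        URI noPort = new URI("nifi://host.example.com/nifi");
        System.out.println(noPort.getPort()); // -1

        // Underscore in the hostname: parsing succeeds, but the authority is
        // registry-based rather than server-based, so getHost() is null
        // and getPort() is -1 even though a port appears in the string.
        URI underscore = new URI("nifi://my_host:8080/nifi");
        System.out.println(underscore.getHost() + ":" + underscore.getPort()); // null:-1
    }
}
```

If the added IllegalStateException fires, the peerUrl in the stacktrace should reveal which of these shapes is being produced.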
[jira] [Commented] (NIFI-2585) Add attributes to track where a flow file came from when receiving over site-to-site
[ https://issues.apache.org/jira/browse/NIFI-2585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15735481#comment-15735481 ] ASF GitHub Bot commented on NIFI-2585: -- Github user bbende commented on the issue: https://github.com/apache/nifi/pull/1307 @randerzander I haven't been able to reproduce this yet. Running a simple GetFile -> RPG on minifi-cpp to a regular NiFi with InputPort -> LogAttribute, I am seeing the two attributes come through correctly. I'm trying to look at the rest of the s2s code to see how a Peer instance could be created and not get the correct host or port. Would you be able to add the following code to the end of the Peer constructor? https://github.com/apache/nifi/blob/master/nifi-commons/nifi-site-to-site-client/src/main/java/org/apache/nifi/remote/Peer.java#L51 ``` if (this.host == null || this.port == -1) { throw new IllegalStateException("Unable to get host or port from peerUrl" + peerUrl); } ``` My theory is that it somehow the parsing of the URI is considered successful, but it didn't actually find a host or port, since the Javadocs of uri.getPort() say it can return -1. Let me know if you can get that stacktrace from that added code to see what the peerUrl is when this happens. Thanks. > Add attributes to track where a flow file came from when receiving over > site-to-site > > > Key: NIFI-2585 > URL: https://issues.apache.org/jira/browse/NIFI-2585 > Project: Apache NiFi > Issue Type: Improvement >Reporter: Bryan Bende >Assignee: Randy Gelhausen >Priority: Minor > > With MiNiFi starting be used to send data to a central NiFi, it would be > helpful if information about the sending host and port was added to each flow > file received over site-to-site. Currently this information is available and > used to generate the transit URI in the RECEIVE event, but this information > isn't available to downstream processors that might want to make routing > decisions. 
> For reference: > https://github.com/apache/nifi/blob/e23b2356172e128086585fe2c425523c3628d0e7/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-site-to-site/src/main/java/org/apache/nifi/remote/protocol/AbstractFlowFileServerProtocol.java#L452 > A possible approach might be to add two attributes to each flow file, > something like "remote.host" and "remote.address" where remote.host has only > the sending hostname, and remote.address has the sending host and port.
[jira] [Commented] (NIFI-3031) Support Multi-Statement Scripts in the PutHiveQL Processor
[ https://issues.apache.org/jira/browse/NIFI-3031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15735446#comment-15735446 ] ASF GitHub Bot commented on NIFI-3031: -- Github user mattyb149 commented on a diff in the pull request: https://github.com/apache/nifi/pull/1217#discussion_r91723825 --- Diff: nifi-nar-bundles/nifi-hive-bundle/nifi-hive-processors/src/main/java/org/apache/nifi/processors/hive/SelectHiveQL.java --- @@ -90,11 +98,59 @@ .name("hive-query") .displayName("HiveQL Select Query") .description("HiveQL SELECT query to execute") +.required(false) +.addValidator(StandardValidators.NON_EMPTY_VALIDATOR) +.expressionLanguageSupported(true) +.build(); + +public static final PropertyDescriptor HIVEQL_CSV_HEADER = new PropertyDescriptor.Builder() +.name("csv-header") +.displayName("CSV Header") +.description("Include Header in Output") +.required(true) +.allowableValues("true", "false") +.defaultValue("true") +.addValidator(StandardValidators.BOOLEAN_VALIDATOR) +.build(); + +public static final PropertyDescriptor HIVEQL_CSV_ALT_HEADER = new PropertyDescriptor.Builder() +.name("csv-alt-header") +.displayName("Alternate CSV Header") +.description("Comma separated list of header fields") --- End diff -- Makes sense, I'm thinking we should make that kind of point right in the documentation > Support Multi-Statement Scripts in the PutHiveQL Processor > -- > > Key: NIFI-3031 > URL: https://issues.apache.org/jira/browse/NIFI-3031 > Project: Apache NiFi > Issue Type: Improvement >Reporter: Matt Burgess > > Trying to use the PutHiveQL processor to execute a HiveQL script that > contains multiple statements. 
> IE: > USE my_database; > FROM my_database_src.base_table > INSERT OVERWRITE refined_table > SELECT *; > -- or -- > use my_database; > create temporary table WORKING as > select a,b,c from RAW; > FROM RAW > INSERT OVERWRITE refined_table > SELECT *; > The current implementation doesn't even like it when you have a semicolon at > the end of the single statement. > Either use a default delimiter like a semi-colon to mark the boundaries of a > statement within the file or allow them to define their own. > This enables the building of pipelines that are testable by not embedding > HiveQL into a product; rather sourcing them from files. And the scripts can > be complex. Each statement should run in a linear manner and be part of the > same JDBC session to ensure things like "temporary" tables will work.
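[Editor's note] The delimiter-based splitting proposed in the ticket can be sketched as follows. This is illustrative only — not the code from this PR, and `HiveQlSplitter` is a hypothetical name. It drops blank fragments so a trailing semicolon does not yield an empty statement; a production version would also need to ignore delimiters inside string literals and comments.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Pattern;

// Splits a HiveQL script into individual statements on a configurable delimiter.
public class HiveQlSplitter {

    public static List<String> split(final String script, final String delimiter) {
        final List<String> statements = new ArrayList<>();
        for (final String fragment : script.split(Pattern.quote(delimiter))) {
            final String trimmed = fragment.trim();
            if (!trimmed.isEmpty()) {       // skip the empty tail after a trailing delimiter
                statements.add(trimmed);
            }
        }
        return statements;
    }

    public static void main(String[] args) {
        final String script = "USE my_database;\n"
                + "FROM my_database_src.base_table\nINSERT OVERWRITE refined_table\nSELECT *;";
        // Two statements; the trailing ';' does not produce a third, empty one.
        System.out.println(split(script, ";").size()); // 2
    }
}
```

Each resulting statement would then be executed in order on the same JDBC connection, preserving session state such as temporary tables.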
[GitHub] nifi pull request #1217: NIFI-3031 - Multi-Statement Script support for PutH...
Github user mattyb149 commented on a diff in the pull request: https://github.com/apache/nifi/pull/1217#discussion_r91723825 --- Diff: nifi-nar-bundles/nifi-hive-bundle/nifi-hive-processors/src/main/java/org/apache/nifi/processors/hive/SelectHiveQL.java --- @@ -90,11 +98,59 @@ .name("hive-query") .displayName("HiveQL Select Query") .description("HiveQL SELECT query to execute") +.required(false) +.addValidator(StandardValidators.NON_EMPTY_VALIDATOR) +.expressionLanguageSupported(true) +.build(); + +public static final PropertyDescriptor HIVEQL_CSV_HEADER = new PropertyDescriptor.Builder() +.name("csv-header") +.displayName("CSV Header") +.description("Include Header in Output") +.required(true) +.allowableValues("true", "false") +.defaultValue("true") +.addValidator(StandardValidators.BOOLEAN_VALIDATOR) +.build(); + +public static final PropertyDescriptor HIVEQL_CSV_ALT_HEADER = new PropertyDescriptor.Builder() +.name("csv-alt-header") +.displayName("Alternate CSV Header") +.description("Comma separated list of header fields") --- End diff -- Makes sense, I'm thinking we should make that kind of point right in the documentation --- If your project is set up for it, you can reply to this email and have your reply appear on GitHub as well. If your project does not have this feature enabled and wishes so, or if the feature is enabled but not working, please contact infrastructure at infrastruct...@apache.org or file a JIRA ticket with INFRA. ---
[jira] [Commented] (NIFI-3031) Support Multi-Statement Scripts in the PutHiveQL Processor
[ https://issues.apache.org/jira/browse/NIFI-3031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15735442#comment-15735442 ] ASF GitHub Bot commented on NIFI-3031: -- Github user dstreev commented on a diff in the pull request: https://github.com/apache/nifi/pull/1217#discussion_r91723463 --- Diff: nifi-nar-bundles/nifi-hive-bundle/nifi-hive-processors/src/main/java/org/apache/nifi/processors/hive/SelectHiveQL.java --- @@ -90,11 +98,59 @@ .name("hive-query") .displayName("HiveQL Select Query") .description("HiveQL SELECT query to execute") +.required(false) +.addValidator(StandardValidators.NON_EMPTY_VALIDATOR) +.expressionLanguageSupported(true) +.build(); + +public static final PropertyDescriptor HIVEQL_CSV_HEADER = new PropertyDescriptor.Builder() +.name("csv-header") +.displayName("CSV Header") +.description("Include Header in Output") +.required(true) +.allowableValues("true", "false") +.defaultValue("true") +.addValidator(StandardValidators.BOOLEAN_VALIDATOR) +.build(); + +public static final PropertyDescriptor HIVEQL_CSV_ALT_HEADER = new PropertyDescriptor.Builder() +.name("csv-alt-header") +.displayName("Alternate CSV Header") +.description("Comma separated list of header fields") --- End diff -- I've had a few instances where the declared fieldname, say '_date' or 'date_' , doesn't work well in the header. So you want the option to replace it with 'date'. > Support Multi-Statement Scripts in the PutHiveQL Processor > -- > > Key: NIFI-3031 > URL: https://issues.apache.org/jira/browse/NIFI-3031 > Project: Apache NiFi > Issue Type: Improvement >Reporter: Matt Burgess > > Trying to use the PutHiveQL processor to execute a HiveQL script that > contains multiple statements. 
> IE: > USE my_database; > FROM my_database_src.base_table > INSERT OVERWRITE refined_table > SELECT *; > -- or -- > use my_database; > create temporary table WORKING as > select a,b,c from RAW; > FROM RAW > INSERT OVERWRITE refined_table > SELECT *; > The current implementation doesn't even like it when you have a semicolon at > the end of the single statement. > Either use a default delimiter like a semi-colon to mark the boundaries of a > statement within the file or allow them to define their own. > This enables the building of pipelines that are testable by not embedding > HiveQL into a product; rather sourcing them from files. And the scripts can > be complex. Each statement should run in a linear manner and be part of the > same JDBC session to ensure things like "temporary" tables will work.
[GitHub] nifi pull request #1217: NIFI-3031 - Multi-Statement Script support for PutH...
Github user dstreev commented on a diff in the pull request: https://github.com/apache/nifi/pull/1217#discussion_r91723463 --- Diff: nifi-nar-bundles/nifi-hive-bundle/nifi-hive-processors/src/main/java/org/apache/nifi/processors/hive/SelectHiveQL.java --- @@ -90,11 +98,59 @@ .name("hive-query") .displayName("HiveQL Select Query") .description("HiveQL SELECT query to execute") +.required(false) +.addValidator(StandardValidators.NON_EMPTY_VALIDATOR) +.expressionLanguageSupported(true) +.build(); + +public static final PropertyDescriptor HIVEQL_CSV_HEADER = new PropertyDescriptor.Builder() +.name("csv-header") +.displayName("CSV Header") +.description("Include Header in Output") +.required(true) +.allowableValues("true", "false") +.defaultValue("true") +.addValidator(StandardValidators.BOOLEAN_VALIDATOR) +.build(); + +public static final PropertyDescriptor HIVEQL_CSV_ALT_HEADER = new PropertyDescriptor.Builder() +.name("csv-alt-header") +.displayName("Alternate CSV Header") +.description("Comma separated list of header fields") --- End diff -- I've had a few instances where the declared fieldname, say '_date' or 'date_' , doesn't work well in the header. So you want the option to replace it with 'date'. --- If your project is set up for it, you can reply to this email and have your reply appear on GitHub as well. If your project does not have this feature enabled and wishes so, or if the feature is enabled but not working, please contact infrastructure at infrastruct...@apache.org or file a JIRA ticket with INFRA. ---
[jira] [Commented] (NIFI-3031) Support Multi-Statement Scripts in the PutHiveQL Processor
[ https://issues.apache.org/jira/browse/NIFI-3031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15735425#comment-15735425 ] ASF GitHub Bot commented on NIFI-3031: -- Github user mattyb149 commented on a diff in the pull request: https://github.com/apache/nifi/pull/1217#discussion_r91722035 --- Diff: nifi-nar-bundles/nifi-hive-bundle/nifi-hive-processors/src/main/java/org/apache/nifi/processors/hive/SelectHiveQL.java --- @@ -135,42 +197,100 @@ return relationships; } +@OnScheduled +public void setup(ProcessContext context) { +// If the query is not set, then an incoming flow file is needed. Otherwise fail the initialization +if (!context.getProperty(HIVEQL_SELECT_QUERY).isSet() && !context.hasIncomingConnection()) { +final String errorString = "Either the Select Query must be specified or there must be an incoming connection " ++ "providing flowfile(s) containing a SQL select query"; +getLogger().error(errorString); +throw new ProcessException(errorString); +} +} + @Override public void onTrigger(final ProcessContext context, final ProcessSession session) throws ProcessException { -FlowFile fileToProcess = null; -if (context.hasIncomingConnection()) { -fileToProcess = session.get(); +final FlowFile fileToProcess = (context.hasIncomingConnection()? session.get():null); +FlowFile flowfile = null; -// If we have no FlowFile, and all incoming connections are self-loops then we can continue on. -// However, if we have no FlowFile and we have connections coming from other Processors, then -// we know that we should run only if we have a FlowFile. +// If we have no FlowFile, and all incoming connections are self-loops then we can continue on. +// However, if we have no FlowFile and we have connections coming from other Processors, then +// we know that we should run only if we have a FlowFile. 
+if (context.hasIncomingConnection()) { if (fileToProcess == null && context.hasNonLoopConnection()) { return; } } final ComponentLog logger = getLogger(); final HiveDBCPService dbcpService = context.getProperty(HIVE_DBCP_SERVICE).asControllerService(HiveDBCPService.class); -final String selectQuery = context.getProperty(HIVEQL_SELECT_QUERY).evaluateAttributeExpressions(fileToProcess).getValue(); +final Charset charset = Charset.forName(context.getProperty(CHARSET).getValue()); + +final boolean flowbased = !(context.getProperty(HIVEQL_SELECT_QUERY).isSet()); + +// Source the SQL +final String selectQuery; + +if (context.getProperty(HIVEQL_SELECT_QUERY).isSet()) { +selectQuery = context.getProperty(HIVEQL_SELECT_QUERY).evaluateAttributeExpressions(fileToProcess).getValue(); +} else { +// If the query is not set, then an incoming flow file is required, and expected to contain a valid SQL select query. +// If there is no incoming connection, onTrigger will not be called as the processor will fail when scheduled. 
+final StringBuilder queryContents = new StringBuilder(); +session.read(fileToProcess, new InputStreamCallback() { +@Override +public void process(InputStream in) throws IOException { +queryContents.append(IOUtils.toString(in)); +} +}); +selectQuery = queryContents.toString(); +} + + final String outputFormat = context.getProperty(HIVEQL_OUTPUT_FORMAT).getValue(); final StopWatch stopWatch = new StopWatch(true); +final boolean header = context.getProperty(HIVEQL_CSV_HEADER).asBoolean(); +final String altHeader = context.getProperty(HIVEQL_CSV_ALT_HEADER).evaluateAttributeExpressions(fileToProcess).getValue(); +final String delimiter = context.getProperty(HIVEQL_CSV_DELIMITER).evaluateAttributeExpressions(fileToProcess).getValue(); +final boolean quote = context.getProperty(HIVEQL_CSV_QUOTE).asBoolean(); +final boolean escape = context.getProperty(HIVEQL_CSV_HEADER).asBoolean(); try (final Connection con = dbcpService.getConnection(); - final Statement st = con.createStatement()) { + final Statement st = ( flowbased ? con.prepareStatement(selectQuery): con.createStatement()) --- End diff -- Isn't it possible to specify a parameterized query in the Select Query property, expecting that each flow file has the appropriate attributes set? I'm wondering if there's a better check to know when to call prepareStatement vs createStatement.
[GitHub] nifi pull request #1217: NIFI-3031 - Multi-Statement Script support for PutH...
Github user mattyb149 commented on a diff in the pull request: https://github.com/apache/nifi/pull/1217#discussion_r91722035 --- Diff: nifi-nar-bundles/nifi-hive-bundle/nifi-hive-processors/src/main/java/org/apache/nifi/processors/hive/SelectHiveQL.java --- @@ -135,42 +197,100 @@ return relationships; } +@OnScheduled +public void setup(ProcessContext context) { +// If the query is not set, then an incoming flow file is needed. Otherwise fail the initialization +if (!context.getProperty(HIVEQL_SELECT_QUERY).isSet() && !context.hasIncomingConnection()) { +final String errorString = "Either the Select Query must be specified or there must be an incoming connection " ++ "providing flowfile(s) containing a SQL select query"; +getLogger().error(errorString); +throw new ProcessException(errorString); +} +} + @Override public void onTrigger(final ProcessContext context, final ProcessSession session) throws ProcessException { -FlowFile fileToProcess = null; -if (context.hasIncomingConnection()) { -fileToProcess = session.get(); +final FlowFile fileToProcess = (context.hasIncomingConnection()? session.get():null); +FlowFile flowfile = null; -// If we have no FlowFile, and all incoming connections are self-loops then we can continue on. -// However, if we have no FlowFile and we have connections coming from other Processors, then -// we know that we should run only if we have a FlowFile. +// If we have no FlowFile, and all incoming connections are self-loops then we can continue on. +// However, if we have no FlowFile and we have connections coming from other Processors, then +// we know that we should run only if we have a FlowFile. 
+if (context.hasIncomingConnection()) { if (fileToProcess == null && context.hasNonLoopConnection()) { return; } } final ComponentLog logger = getLogger(); final HiveDBCPService dbcpService = context.getProperty(HIVE_DBCP_SERVICE).asControllerService(HiveDBCPService.class); -final String selectQuery = context.getProperty(HIVEQL_SELECT_QUERY).evaluateAttributeExpressions(fileToProcess).getValue(); +final Charset charset = Charset.forName(context.getProperty(CHARSET).getValue()); + +final boolean flowbased = !(context.getProperty(HIVEQL_SELECT_QUERY).isSet()); + +// Source the SQL +final String selectQuery; + +if (context.getProperty(HIVEQL_SELECT_QUERY).isSet()) { +selectQuery = context.getProperty(HIVEQL_SELECT_QUERY).evaluateAttributeExpressions(fileToProcess).getValue(); +} else { +// If the query is not set, then an incoming flow file is required, and expected to contain a valid SQL select query. +// If there is no incoming connection, onTrigger will not be called as the processor will fail when scheduled. 
+final StringBuilder queryContents = new StringBuilder(); +session.read(fileToProcess, new InputStreamCallback() { +@Override +public void process(InputStream in) throws IOException { +queryContents.append(IOUtils.toString(in)); +} +}); +selectQuery = queryContents.toString(); +} + + final String outputFormat = context.getProperty(HIVEQL_OUTPUT_FORMAT).getValue(); final StopWatch stopWatch = new StopWatch(true); +final boolean header = context.getProperty(HIVEQL_CSV_HEADER).asBoolean(); +final String altHeader = context.getProperty(HIVEQL_CSV_ALT_HEADER).evaluateAttributeExpressions(fileToProcess).getValue(); +final String delimiter = context.getProperty(HIVEQL_CSV_DELIMITER).evaluateAttributeExpressions(fileToProcess).getValue(); +final boolean quote = context.getProperty(HIVEQL_CSV_QUOTE).asBoolean(); +final boolean escape = context.getProperty(HIVEQL_CSV_HEADER).asBoolean(); try (final Connection con = dbcpService.getConnection(); - final Statement st = con.createStatement()) { + final Statement st = ( flowbased ? con.prepareStatement(selectQuery): con.createStatement()) --- End diff -- Isn't it possible to specify a parameterized query in the Select Query property, expecting that each flow file has the appropriate attributes set? I'm wondering if there's a better check to know when to call prepareStatement vs createStatement. --- If your
[jira] [Created] (NIFI-3177) (Optionally?) Don't duplicate controller services on template instantiation
Brandon DeVries created NIFI-3177: - Summary: (Optionally?) Don't duplicate controller services on template instantiation Key: NIFI-3177 URL: https://issues.apache.org/jira/browse/NIFI-3177 Project: Apache NiFi Issue Type: Improvement Reporter: Brandon DeVries Priority: Minor When placing a template onto the graph, all included / required controller services are also created... even if an exact duplicate already exists (possibly even from a previous instance of the same template). It is definitely possible you might not want to reuse an existing controller service, so that shouldn't be the default behavior. However, it is also very likely that you *do* want to reuse a controller service. Could we add a check on template instantiation that checks for controller service matches (name & configuration), and prompt with the option to either create new or use existing?
[jira] [Commented] (NIFI-3167) DFM should be able to delete a node using UI
[ https://issues.apache.org/jira/browse/NIFI-3167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15735367#comment-15735367 ] Matt Gilman commented on NIFI-3167: --- What buttons are available in the actions column? Can you post a screenshot? Looking at the code, it should render the connect button and the delete button for any node that is currently DISCONNECTED. > DFM should be able to delete a node using UI > > > Key: NIFI-3167 > URL: https://issues.apache.org/jira/browse/NIFI-3167 > Project: Apache NiFi > Issue Type: Bug >Affects Versions: 1.1.0 >Reporter: Andre > Attachments: Screen Shot 2016-12-08 at 9.31.15 AM.png > > > Although I am currently able to delete cluster nodes using curl, it seems the > current UI doesn't expose a mechanism for a DFM to call the > {code} > DELETE /controller/cluster/nodes/{id} > {code} > REST API endpoint
[jira] [Created] (NIFI-3176) TailFile struggles to process files that are enormous upon start
Andre created NIFI-3176: --- Summary: TailFile struggles to process files that are enormous upon start Key: NIFI-3176 URL: https://issues.apache.org/jira/browse/NIFI-3176 Project: Apache NiFi Issue Type: Bug Reporter: Andre Assignee: Andre While testing MiNiFi I have noticed a behaviour where TailFile seems to try to hash a full file upon discovery. This may result in significant delays when starting the processor at the end of the day on folders with a large number of large log files (which fits the typical log concentrator -> NiFi cluster scenario). I suspect it should be safe to allow TailFile to skip hashing upon discovery (or at least make this behaviour configurable).
[jira] [Commented] (NIFI-3147) Build processor to parse CCDA into attributes and JSON
[ https://issues.apache.org/jira/browse/NIFI-3147?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15734934#comment-15734934 ]

ASF GitHub Bot commented on NIFI-3147:
--------------------------------------

GitHub user kedarchitale opened a pull request:

    https://github.com/apache/nifi/pull/1312

    NIFI-3147 CCDA Processor

    Thank you for submitting a contribution to Apache NiFi. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken:

    ### For all changes:
    - [Y] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message?
    - [Y] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character.
    - [Y] Has your PR been rebased against the latest commit within the target branch (typically master)?
    - [Y] Is your initial contribution a single, squashed commit?

    ### For code changes:
    - [N/A] Have you ensured that the full suite of tests is executed via mvn -Pcontrib-check clean install at the root nifi folder?
    - [Y] Have you written or updated unit tests to verify your changes?
    - [Y] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)?
    - [Y] If applicable, have you updated the LICENSE file, including the main LICENSE file under nifi-assembly?
    - [Y] If applicable, have you updated the NOTICE file, including the main NOTICE file found under nifi-assembly?
    - [Y] If adding new Properties, have you added .displayName in addition to .name (programmatic access) for each of the new properties?

    ### For documentation related changes:
    - [Y] Have you ensured that format looks appropriate for the output in which it is rendered?

    ### Note:
    Please ensure that once the PR is submitted, you check travis-ci for build issues and submit an update to your PR as soon as possible.
    https://issues.apache.org/jira/browse/NIFI-3147

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/kedarchitale/nifi NIFI-3147

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/nifi/pull/1312.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #1312

----
commit 4cc6062fc23b62fd2528d1722cb2475824e1bf8a
Author: kedarchitale
Date:   2016-12-09T10:15:19Z

    NIFI-3147 CCDA Processor
    https://issues.apache.org/jira/browse/NIFI-3147

> Build processor to parse CCDA into attributes and JSON
> ------------------------------------------------------
>
>                 Key: NIFI-3147
>                 URL: https://issues.apache.org/jira/browse/NIFI-3147
>             Project: Apache NiFi
>          Issue Type: New Feature
>          Components: Extensions
>            Reporter: Kedar Chitale
>              Labels: attributes, ccda, healthcare, json, parser
>   Original Estimate: 336h
>  Remaining Estimate: 336h
>
> Accept a CCDA document, parse the document to create JSON text and individual
> attributes, for example code.codeSystemName=LOINC
[jira] [Commented] (NIFI-3166) SocketRemoteSiteListener NPE when calling NifiProperties.getRemoteInputHttpPort
[ https://issues.apache.org/jira/browse/NIFI-3166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15734694#comment-15734694 ]

Andre commented on NIFI-3166:
-----------------------------

Mark,

Thanks for the workaround. It does work indeed.

> SocketRemoteSiteListener NPE when calling
> NifiProperties.getRemoteInputHttpPort
> -------------------------------------------------------------------------------
>
>                 Key: NIFI-3166
>                 URL: https://issues.apache.org/jira/browse/NIFI-3166
>             Project: Apache NiFi
>          Issue Type: Bug
>          Components: Core Framework
>    Affects Versions: 1.1.0
>            Reporter: Andre
>
> When using the following properties:
> nifi.remote.input.host=node1.textbed.internal
> nifi.remote.input.secure=true
> nifi.remote.input.socket.port=54321
> nifi.remote.input.http.enabled=false
> nifi.remote.input.http.transaction.ttl=30 sec
> I hit
> {code}
> 2016-12-09 00:15:26,456 ERROR [Site-to-Site Worker Thread-145] o.a.nifi.remote.SocketRemoteSiteListener
> java.lang.NullPointerException: null
>         at org.apache.nifi.remote.SocketRemoteSiteListener$1$1.run(SocketRemoteSiteListener.java:280) ~[nifi-site-to-site-1.1.0.jar:1.1.0]
>         at java.lang.Thread.run(Thread.java:745) [na:1.8.0_101]
> {code}