[jira] [Updated] (NIFI-5622) Test certificates require SAN values
[ https://issues.apache.org/jira/browse/NIFI-5622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andy LoPresto updated NIFI-5622: Status: Patch Available (was: In Progress) > Test certificates require SAN values > > > Key: NIFI-5622 > URL: https://issues.apache.org/jira/browse/NIFI-5622 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework, Tools and Build > Affects Versions: 1.7.1 > Reporter: Andy LoPresto > Assignee: Andy LoPresto > Priority: Major > Labels: certificate, security, test > > During the update of OkHttp from 3.6.0 to 3.11.0 in [NIFI-4806|https://issues.apache.org/jira/browse/NIFI-4806], it was discovered that {{SubjectAlternativeName}} checking is now required, as described in RFC 6125. The test resource keystores and truststores need to be updated to provide SAN values. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
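For context on what the SAN requirement means in practice: under RFC 6125, a client matches the requested hostname against the certificate's SubjectAlternativeName dNSName entries rather than the Subject CN. A minimal, simplified sketch of that matching rule (illustrative only; not NiFi's or OkHttp's actual implementation):

```python
# Simplified RFC 6125-style dNSName matching: a certificate is accepted for a
# hostname only if one of its SubjectAlternativeName dNSName entries matches.
# Wildcards are honored in the leftmost label only (e.g. "*.example.com"),
# and a wildcard covers exactly one label.
def san_matches(hostname: str, san_dns_names: list) -> bool:
    host_labels = hostname.lower().split(".")
    for pattern in san_dns_names:
        pat_labels = pattern.lower().split(".")
        if len(pat_labels) != len(host_labels):
            continue
        first, rest = pat_labels[0], pat_labels[1:]
        if (first == "*" or first == host_labels[0]) and rest == host_labels[1:]:
            return True
    return False
```

A test certificate generated with no SAN entries matches nothing under this rule regardless of its CN, which is why the test keystores and truststores had to be regenerated.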
[jira] [Commented] (NIFI-5622) Test certificates require SAN values
[ https://issues.apache.org/jira/browse/NIFI-5622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16624442#comment-16624442 ] ASF GitHub Bot commented on NIFI-5622: -- Github user alopresto commented on the issue: https://github.com/apache/nifi/pull/3018 I believe @joewitt backed out the OkHttp changes from [NIFI-4806](https://issues.apache.org/jira/browse/NIFI-4806) (my previous note in the Jira) and is instead doing them in [NIFI-5623](https://issues.apache.org/jira/browse/NIFI-5623), but these changes will be necessary then nonetheless.
[GitHub] nifi issue #3018: NIFI-5622 Updated test resource keystores and truststores ...
Github user alopresto commented on the issue: https://github.com/apache/nifi/pull/3018 I believe @joewitt backed out the OkHttp changes from [NIFI-4806](https://issues.apache.org/jira/browse/NIFI-4806) (my previous note in the Jira) and is instead doing them in [NIFI-5623](https://issues.apache.org/jira/browse/NIFI-5623), but these changes will be necessary then nonetheless. ---
[jira] [Commented] (NIFI-5622) Test certificates require SAN values
[ https://issues.apache.org/jira/browse/NIFI-5622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16624441#comment-16624441 ] ASF GitHub Bot commented on NIFI-5622: -- GitHub user alopresto opened a pull request: https://github.com/apache/nifi/pull/3018 NIFI-5622 Updated test resource keystores and truststores with Subjec… …tAlternativeNames to be compliant with RFC 6125. Refactored some test code to be clearer. Renamed some resources to be consistent across modules. Changed passwords to meet new minimum length requirements. Thank you for submitting a contribution to Apache NiFi. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken:

### For all changes:
- [x] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message?
- [x] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character.
- [x] Has your PR been rebased against the latest commit within the target branch (typically master)?
- [x] Is your initial contribution a single, squashed commit?

### For code changes:
- [x] Have you ensured that the full suite of tests is executed via mvn -Pcontrib-check clean install at the root nifi folder?
- [x] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)?
- [ ] If applicable, have you updated the LICENSE file, including the main LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to .name (programmatic access) for each of the new properties?
### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build issues and submit an update to your PR as soon as possible. You can merge this pull request into a Git repository by running: $ git pull https://github.com/alopresto/nifi NIFI-5622 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi/pull/3018.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #3018 commit 9b03aa73969ffb95e126873eac5594feb293 Author: Andy LoPresto Date: 2018-09-22T02:26:10Z NIFI-5622 Updated test resource keystores and truststores with SubjectAlternativeNames to be compliant with RFC 6125. Refactored some test code to be clearer. Renamed some resources to be consistent across modules. Changed passwords to meet new minimum length requirements.
[GitHub] nifi pull request #3018: NIFI-5622 Updated test resource keystores and trust...
GitHub user alopresto opened a pull request: https://github.com/apache/nifi/pull/3018 NIFI-5622 Updated test resource keystores and truststores with Subjec… …tAlternativeNames to be compliant with RFC 6125. Refactored some test code to be clearer. Renamed some resources to be consistent across modules. Changed passwords to meet new minimum length requirements. Thank you for submitting a contribution to Apache NiFi. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken:

### For all changes:
- [x] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message?
- [x] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character.
- [x] Has your PR been rebased against the latest commit within the target branch (typically master)?
- [x] Is your initial contribution a single, squashed commit?

### For code changes:
- [x] Have you ensured that the full suite of tests is executed via mvn -Pcontrib-check clean install at the root nifi folder?
- [x] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)?
- [ ] If applicable, have you updated the LICENSE file, including the main LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to .name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in which it is rendered?
### Note: Please ensure that once the PR is submitted, you check travis-ci for build issues and submit an update to your PR as soon as possible. You can merge this pull request into a Git repository by running: $ git pull https://github.com/alopresto/nifi NIFI-5622 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi/pull/3018.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #3018 commit 9b03aa73969ffb95e126873eac5594feb293 Author: Andy LoPresto Date: 2018-09-22T02:26:10Z NIFI-5622 Updated test resource keystores and truststores with SubjectAlternativeNames to be compliant with RFC 6125. Refactored some test code to be clearer. Renamed some resources to be consistent across modules. Changed passwords to meet new minimum length requirements. ---
[jira] [Updated] (NIFI-5599) Bump Kafka versions
[ https://issues.apache.org/jira/browse/NIFI-5599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andy LoPresto updated NIFI-5599: Affects Version/s: 1.7.1 > Bump Kafka versions > --- > > Key: NIFI-5599 > URL: https://issues.apache.org/jira/browse/NIFI-5599 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Affects Versions: 1.7.1 >Reporter: Pierre Villard >Assignee: Pierre Villard >Priority: Major > Labels: kafka, security > Fix For: 1.8.0 > > > I'd like to bump versions for the existing Kafka processors in order to > prevent CVE-2018-1288 > http://mail-archives.apache.org/mod_mbox/kafka-dev/201807.mbox/%3CCAOJcB3_j1XqXK3TnJaqZrga0d13=taYOVoG9cGG0og5Zf+=l...@mail.gmail.com%3E -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (NIFI-5599) Bump Kafka versions
[ https://issues.apache.org/jira/browse/NIFI-5599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andy LoPresto updated NIFI-5599: Labels: kafka security (was: )
[GitHub] nifi-registry pull request #143: NIFIREG-201 Refactoring project structure t...
Github user asfgit closed the pull request at: https://github.com/apache/nifi-registry/pull/143 ---
[jira] [Commented] (NIFIREG-201) nifi-registry-extensions impacted by top-level dependency management
[ https://issues.apache.org/jira/browse/NIFIREG-201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16624429#comment-16624429 ] ASF GitHub Bot commented on NIFIREG-201: Github user asfgit closed the pull request at: https://github.com/apache/nifi-registry/pull/143 > nifi-registry-extensions impacted by top-level dependency management > > > Key: NIFIREG-201 > URL: https://issues.apache.org/jira/browse/NIFIREG-201 > Project: NiFi Registry > Issue Type: Improvement > Affects Versions: 0.3.0 > Reporter: Bryan Bende > Assignee: Bryan Bende > Priority: Major > Fix For: 0.3.0 > > > While reviewing NIFIREG-200, I noticed that when building with the include-ranger profile, the JARs in ext/ranger/lib ended up being affected by the dependency management section in the root pom. > For example, the versions of Jetty and Jackson JARs were being forced to the versions registry needs, but may not be versions that are compatible with the ranger client. > To deal with this, I propose restructuring the repository to something like the following: > _nifi-registry-core (everything that used to be at the root, except the assembly and extensions)_ > _nifi-registry-extensions_ > _nifi-registry-assembly_ > The dependency management can then be moved to the pom of nifi-registry-core so that it does not impact the modules under nifi-registry-extensions.
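The fix works because Maven propagates `dependencyManagement` down the `<parent>` chain, not across aggregated modules. A minimal illustration of the proposed layout (the coordinates, version numbers, and the Jackson example below are illustrative placeholders, not the actual registry poms):

```xml
<!-- nifi-registry-core/pom.xml (illustrative sketch): dependencyManagement
     moved here from the root pom. Maven inherits dependencyManagement through
     the <parent> chain, so modules under nifi-registry-extensions, which keep
     the plain aggregator root as their parent, no longer receive these
     pinned versions. -->
<project>
  <parent>
    <groupId>org.apache.nifi.registry</groupId>
    <artifactId>nifi-registry</artifactId>
    <version>0.3.0-SNAPSHOT</version>
  </parent>
  <artifactId>nifi-registry-core</artifactId>
  <packaging>pom</packaging>
  <dependencyManagement>
    <dependencies>
      <dependency>
        <groupId>com.fasterxml.jackson.core</groupId>
        <artifactId>jackson-databind</artifactId>
        <!-- hypothetical: the version the registry needs, no longer
             forced onto the Ranger extension's classpath -->
        <version>2.9.7</version>
      </dependency>
    </dependencies>
  </dependencyManagement>
</project>
```

With the root pom reduced to a plain aggregator, the Ranger extension's own Jetty and Jackson versions survive into ext/ranger/lib.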
[jira] [Created] (NIFIREG-202) Release Manager - Release 0.3.0
Kevin Doran created NIFIREG-202: --- Summary: Release Manager - Release 0.3.0 Key: NIFIREG-202 URL: https://issues.apache.org/jira/browse/NIFIREG-202 Project: NiFi Registry Issue Type: Task Reporter: Kevin Doran Assignee: Kevin Doran Fix For: 0.3.0 Perform release manager activities for 0.3.0 release. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Comment Edited] (NIFI-5612) org.apache.avro.UnresolvedUnionException: Not in union ["null","int"]: 0
[ https://issues.apache.org/jira/browse/NIFI-5612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16624178#comment-16624178 ] Colin Dean edited comment on NIFI-5612 at 9/21/18 9:06 PM: --- I'm 90% confident that I've deduced the problem to be with fields that are {{int(x) unsigned default '0'}}, where {{x in (1,2)}}. Some are NOT NULL. I distilled the SQL schema of the database we're crawling down to just the eight tables we seemingly cannot retrieve because of the Avro type error. I then filtered out any column from that schema that isn't a smallint, tinyint, or int with a precision less than 9:

{code}
cat problem_tab...@mack-649.sql | \
  grep -v varchar | \
  grep -v enum | \
  grep -v datetime | \
  grep -v KEY | \
  grep -v text | \
  grep -v date | \
  grep -v time | \
  grep -v char\( | \
  grep -v decimal | \
  grep -v double\( | \
  grep -v int\(11 | \
  grep -v int\(16 | \
  grep -v int\(10 > int-columns-only.sql
{code}

(I'm sure there was a more efficient way to do that. I don't have access to the database right now.) I went down the list of columns, searching for each column's type in its entirety, e.g. {{int(4) NOT NULL default '0'}}, and noting the number of hits in the whole SQL schema. Any time that number was really low (under 30) I could more reasonably look at each search hit, look at its table, and determine whether that table is in the whitelist. I eventually tossed this and started searching regardless of hits after a full pass at everything under 50. Whitelist? We're crawling databases using a whitelist of tables, but the SQL schema dump we have is from the whole database. Yikes, I know: 2.2 MB of schema!

* If the type is found in a table that is in our whitelist _and_ was crawled successfully, I strike it out because *that type works fine*.
* If the type is found in a table that is in the whitelist _and_ was not crawled successfully, I continue on.
* If the type isn't found in a table that succeeded, or is only found in tables that _didn't succeed_, I mark it as suspect and move to the next type to search.

I'd mark other fields in the eight tables so I didn't duplicate a search. Three outcomes of each check: 1. Type not suspect, found in a working table. 2. Type suspect because it was not found in a working table. 3. Type already examined. The dead giveaway was that one table had only five relevant fields, only one of which was _not_ found in a working table. Each table that isn't working has:

A, B, G, H - {{int(1) unsigned NOT NULL default '0'}}
C - {{int(1) unsigned default '0'}}
D, F - {{int(2) unsigned NOT NULL default '0'}}
E - {{int(2) unsigned NOT NULL default '1'}}

B also has {{int(6) unsigned NOT NULL default 'x'}}, where {{x in (0,1)}}. Both of these types were only used in tables that didn't work. My next step will be to run ExecuteSQL against a database with a table with columns of just these types to see what happens.
[jira] [Commented] (NIFI-5612) org.apache.avro.UnresolvedUnionException: Not in union ["null","int"]: 0
[ https://issues.apache.org/jira/browse/NIFI-5612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16624178#comment-16624178 ] Colin Dean commented on NIFI-5612: -- I'm 90% confident that I've deduced the problem to be with fields that are {{int(x) unsigned default '0'}}, where {{x in (1,2)}}. Some are NOT NULL. I distilled the SQL schema of the database we're crawling down to just the eight tables we seemingly cannot retrieve because of the Avro type error. I then filtered out any column from that schema that isn't a smallint, tinyint, or int with a precision less than 9:

{code}
cat problem_tab...@mack-649.sql | \
  grep -v varchar | \
  grep -v enum | \
  grep -v datetime | \
  grep -v KEY | \
  grep -v text | \
  grep -v date | \
  grep -v time | \
  grep -v char\( | \
  grep -v decimal | \
  grep -v double\( | \
  grep -v int\(11 | \
  grep -v int\(16 | \
  grep -v int\(10 > int-columns-only.sql
{code}

(I'm sure there was a more efficient way to do that. I don't have access to the database right now.) I went down the list of columns, searching for each column's type in its entirety, e.g. {{int(4) NOT NULL default '0'}}, and noting the number of hits in the whole SQL schema. Any time that number was really low (under 30) I could more reasonably look at each search hit, look at its table, and determine whether that table is in the whitelist. I eventually tossed this and started searching regardless of hits after a full pass at everything under 50. Whitelist? We're crawling databases using a whitelist of tables, but the SQL schema dump we have is from the whole database. Yikes, I know: 2.2 MB of schema!

* If the type is found in a table that is in our whitelist _and_ was crawled successfully, I strike it out because *that type works fine*.
* If the type is found in a table that is in the whitelist _and_ was not crawled successfully, I continue on.
* If the type isn't found in a table that succeeded, or is only found in tables that _didn't succeed_, I mark it as suspect and move to the next type to search.

I'd mark other fields in the eight tables so I didn't duplicate a search. Three outcomes of each check: 1. Type not suspect, found in a working table. 2. Type suspect because it was not found in a working table. 3. Type already examined. The dead giveaway was that one table had only five relevant fields, only one of which was _not_ found in a working table. Each table that isn't working has:

A, B, G, H - {{int(1) unsigned NOT NULL default '0'}}
C - {{int(1) unsigned default '0'}}
D, F - {{int(2) unsigned NOT NULL default '0'}}
E - {{int(2) unsigned NOT NULL default '1'}}

B also has {{int(6) unsigned NOT NULL default 'x'}}, where {{x in (0,1)}}. Both of these types were only used in tables that didn't work. My next step will be to run ExecuteSQL against a database with a table with columns of just these types to see what happens.

> org.apache.avro.UnresolvedUnionException: Not in union ["null","int"]: 0 > > > Key: NIFI-5612 > URL: https://issues.apache.org/jira/browse/NIFI-5612 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework > Affects Versions: 1.5.0, 1.6.0, 1.7.1 > Environment: Microsoft Windows, MySQL Enterprise 5.0.80 > Reporter: Colin Dean > Priority: Major > Labels: ExecuteSQL, avro, nifi > > I'm seeing this when I execute {{SELECT * FROM }} on a few tables but not on dozens of others in the same database.
> {code}
> 2018-09-13 01:11:31,434 WARN [Timer-Driven Process Thread-8] o.a.n.controller.tasks.ConnectableTask Administratively Yielding ExecuteSQL[id=cf5c0996-eddf-3e05-25a3-c407c5edf990] due to uncaught Exception: org.apache.avro.file.DataFileWriter$AppendWriteException: org.apache.avro.UnresolvedUnionException: Not in union ["null","int"]: 0
> org.apache.avro.file.DataFileWriter$AppendWriteException: org.apache.avro.UnresolvedUnionException: Not in union ["null","int"]: 0
> at org.apache.avro.file.DataFileWriter.append(DataFileWriter.java:308)
> at org.apache.nifi.processors.standard.util.JdbcCommon.convertToAvroStream(JdbcCommon.java:462)
> at org.apache.nifi.processors.standard.ExecuteSQL.lambda$onTrigger$1(ExecuteSQL.java:252)
> at org.apache.nifi.controller.repository.StandardProcessSession.write(StandardProcessSession.java:2625)
> at org.apache.nifi.processors.standard.ExecuteSQL.onTrigger(ExecuteSQL.java:242)
> at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
> at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1165)
> at org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:203)
> at
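The chain of `grep -v` calls in the comment above can be inverted into a single positive filter. A rough Python restatement of the same idea (my own sketch, operating on schema lines in memory rather than the reporter's actual dump file):

```python
import re

# Positive restatement of the reporter's chain of `grep -v` calls: keep only
# lines declaring a small-precision integer column (tinyint, smallint, or
# int(N) with N < 9), which is where the suspect int(1)/int(2)
# unsigned-with-default columns live. int(10), int(11), int(16) are excluded
# because [1-8] does not match a two-digit precision.
SMALL_INT = re.compile(r"\b(tinyint|smallint|int\([1-8]\))", re.IGNORECASE)

def small_int_lines(schema_lines):
    return [line for line in schema_lines if SMALL_INT.search(line)]
```

This keeps column definitions such as `` `flag` int(1) unsigned NOT NULL default '0' `` while dropping varchar, enum, datetime, and wide-int declarations in one pass.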
[GitHub] nifi pull request #3010: [WIP] NIFI-5585
Github user jtstorck commented on a diff in the pull request: https://github.com/apache/nifi/pull/3010#discussion_r219612683 --- Diff: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/StandardFlowService.java --- @@ -662,6 +682,39 @@ private void handleReconnectionRequest(final ReconnectionRequestMessage request) } } +private void handleDecommissionRequest(final DecommissionMessage request) throws InterruptedException { +logger.info("Received decommission request message from manager with explanation: " + request.getExplanation()); --- End diff -- I replaced all occurrences of "from manager" with "from cluster coordinator". ---
[jira] [Commented] (NIFI-5585) Decommission Nodes from Cluster
[ https://issues.apache.org/jira/browse/NIFI-5585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16624102#comment-16624102 ] ASF GitHub Bot commented on NIFI-5585: -- Github user jtstorck commented on a diff in the pull request: https://github.com/apache/nifi/pull/3010#discussion_r219612683 --- Diff: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/StandardFlowService.java --- @@ -662,6 +682,39 @@ private void handleReconnectionRequest(final ReconnectionRequestMessage request) } } +private void handleDecommissionRequest(final DecommissionMessage request) throws InterruptedException { +logger.info("Received decommission request message from manager with explanation: " + request.getExplanation()); --- End diff -- I replaced all occurrences of "from manager" with "from cluster coordinator". > Decommission Nodes from Cluster > -- > > Key: NIFI-5585 > URL: https://issues.apache.org/jira/browse/NIFI-5585 > Project: Apache NiFi > Issue Type: Improvement > Components: Core Framework > Affects Versions: 1.7.1 > Reporter: Jeff Storck > Assignee: Jeff Storck > Priority: Major > > Allow a node in the cluster to be decommissioned, rebalancing flowfiles on the node to be decommissioned to the other active nodes. This work depends on NIFI-5516. > Similar to the client sending a PUT request with a DISCONNECTING message to cluster/nodes/\{id}, a DECOMMISSIONING message can be sent as a PUT request to the same URI to initiate a DECOMMISSION for a DISCONNECTED node. The DECOMMISSIONING request will be idempotent. > The steps to decommission a node and remove it from the cluster are: > # Send request to disconnect the node > # Once disconnect completes, send request to decommission the node. > # Once decommission completes, send request to delete node.
> When an error occurs and the node cannot complete decommissioning, the user can: > # Send request to delete the node from the cluster > # Diagnose why the node had issues with the decommission (out of memory, no network connection, etc.) and address the issue > # Restart NiFi on the node so that it will reconnect to the cluster > # Go through the steps to decommission and remove a node > Toolkit CLI commands for retrieving a list of nodes and disconnecting/decommissioning/deleting nodes have been added. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
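The disconnect → decommission → delete sequence described in the ticket can be sketched as three REST calls. Everything concrete below (host, port, node id, JSON payload shape) is a hypothetical placeholder; only the PUT/PUT/DELETE sequence against cluster/nodes/{id} follows the ticket's description:

```python
import json
import urllib.request

# Hypothetical base URL and node id; real values depend on the deployment.
BASE = "https://nifi.example.com:8443/nifi-api/cluster/nodes"

def node_request(node_id, status=None, method="PUT"):
    """Build a request against cluster/nodes/{id}. The JSON body shape is an
    illustrative guess, not a documented NiFi payload."""
    data = None
    if status is not None:
        data = json.dumps({"node": {"nodeId": node_id, "status": status}}).encode("utf-8")
    req = urllib.request.Request("%s/%s" % (BASE, node_id), data=data, method=method)
    req.add_header("Content-Type", "application/json")
    return req

# 1. disconnect the node, 2. decommission it, 3. delete it from the cluster
steps = [
    node_request("node-1", status="DISCONNECTING"),
    node_request("node-1", status="DECOMMISSIONING"),
    node_request("node-1", method="DELETE"),
]
```

Each request would be sent with `urllib.request.urlopen(req)`, and a real client would poll for completion between steps, since per the ticket decommissioning only starts once the disconnect has finished.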
[jira] [Commented] (NIFI-5585) Decommission Nodes from Cluster
[ https://issues.apache.org/jira/browse/NIFI-5585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16624086#comment-16624086 ] ASF GitHub Bot commented on NIFI-5585: -- Github user jtstorck commented on a diff in the pull request: https://github.com/apache/nifi/pull/3010#discussion_r219607209 --- Diff: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-cluster/src/main/java/org/apache/nifi/cluster/coordination/http/replication/ThreadPoolRequestReplicator.java --- @@ -180,6 +181,15 @@ public AsyncClusterResponse replicate(NiFiUser user, String method, URI uri, Obj } } +final List decommissioning = stateMap.get(NodeConnectionState.DECOMMISSIONING); --- End diff -- I agree. If requests were replicated to nodes other than decommissioned nodes, then the decommissioned node would be out of sync with the rest of the cluster and would not be able to rejoin the cluster. I added a check for the decommissioned state.
[GitHub] nifi pull request #3010: [WIP] NIFI-5585
Github user jtstorck commented on a diff in the pull request: https://github.com/apache/nifi/pull/3010#discussion_r219607209 --- Diff: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-cluster/src/main/java/org/apache/nifi/cluster/coordination/http/replication/ThreadPoolRequestReplicator.java --- @@ -180,6 +181,15 @@ public AsyncClusterResponse replicate(NiFiUser user, String method, URI uri, Obj } } +final List decommissioning = stateMap.get(NodeConnectionState.DECOMMISSIONING); --- End diff -- I agree. If requests were replicated to nodes other than decommissioned nodes, then the decommissioned node would be out of sync with the rest of the cluster and would not be able to rejoin the cluster. I added a check for the decommissioned state. ---
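The review point above — replicated requests must not be fanned out to decommissioning nodes, or their flow state drifts from the rest of the cluster — can be sketched as a simple eligibility filter. The class and state names below are illustrative, not NiFi's actual ThreadPoolRequestReplicator code:

```python
from enum import Enum

class NodeConnectionState(Enum):
    CONNECTED = "CONNECTED"
    DISCONNECTED = "DISCONNECTED"
    DECOMMISSIONING = "DECOMMISSIONING"
    DECOMMISSIONED = "DECOMMISSIONED"

# A request replicator should fan out only to fully connected nodes.
# Decommissioning/decommissioned nodes are excluded so that their state does
# not drift from the cluster, which would prevent them from ever rejoining
# (mirroring the reasoning in the review comment above).
def replication_targets(node_states):
    return [node for node, state in node_states.items()
            if state == NodeConnectionState.CONNECTED]
```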
[jira] [Commented] (MINIFICPP-617) Introduce previously created python example
[ https://issues.apache.org/jira/browse/MINIFICPP-617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16624082#comment-16624082 ] ASF GitHub Bot commented on MINIFICPP-617: -- Github user phrocker commented on a diff in the pull request: https://github.com/apache/nifi-minifi-cpp/pull/404#discussion_r219606415

--- Diff: python/getFile.py ---
@@ -0,0 +1,51 @@
+#!/usr/bin/env python
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from minifi import *
+
+from argparse import ArgumentParser
+from ctypes import cdll
+import ctypes
+
+parser = ArgumentParser()
+parser.add_argument("-s", "--dll", dest="dll_file", help="DLL filename", metavar="FILE")
+parser.add_argument("-n", "--nifi", dest="nifi_instance", help="NiFi Instance")
+parser.add_argument("-i", "--input", dest="input_port", help="NiFi Input Port")
+parser.add_argument("-d", "--dir", dest="dir", help="GetFile Dir to monitor", metavar="FILE")
+
+args = parser.parse_args()
+
+""" dll_file is the path to the shared object """
+minifi = MiNiFi(dll_file=args.dll_file, url=args.nifi_instance.encode('utf-8'), port=args.input_port.encode('utf-8'))
--- End diff --

probably no need to do the encoding here

> Introduce previously created python example > --- > > Key: MINIFICPP-617 > URL: https://issues.apache.org/jira/browse/MINIFICPP-617 > Project: NiFi MiNiFi C++ > Issue Type: New Feature > Reporter: Mr TheSegfault > Assignee: Mr TheSegfault > Priority: Major > > I've had this sitting around a while. I've re-based and tested to ensure it still works. It has some bugs but is a simple proof of concept. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] nifi-minifi-cpp pull request #404: MINIFICPP-617: Create simple python examp...
GitHub user phrocker opened a pull request: https://github.com/apache/nifi-minifi-cpp/pull/404 MINIFICPP-617: Create simple python example Thank you for submitting a contribution to Apache NiFi - MiNiFi C++. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken: ### For all changes: - [ ] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? - [ ] Does your PR title start with MINIFI- where is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character. - [ ] Has your PR been rebased against the latest commit within the target branch (typically master)? - [ ] Is your initial contribution a single, squashed commit? ### For code changes: - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the LICENSE file? - [ ] If applicable, have you updated the NOTICE file? ### For documentation related changes: - [ ] Have you ensured that format looks appropriate for the output in which it is rendered? ### Note: Please ensure that once the PR is submitted, you check travis-ci for build issues and submit an update to your PR as soon as possible. You can merge this pull request into a Git repository by running: $ git pull https://github.com/phrocker/nifi-minifi-cpp MINIFICPP-617 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi-minifi-cpp/pull/404.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #404 commit 6c46186c4a1fc566bedd32f208126161419c06ff Author: Marc Parisi Date: 2018-09-12T15:05:25Z MINIFICPP-617: Create simple python example ---
[jira] [Created] (MINIFICPP-617) Introduce previously created python example
Mr TheSegfault created MINIFICPP-617: Summary: Introduce previously created python example Key: MINIFICPP-617 URL: https://issues.apache.org/jira/browse/MINIFICPP-617 Project: NiFi MiNiFi C++ Issue Type: New Feature Reporter: Mr TheSegfault Assignee: Mr TheSegfault I've had this sitting around a while. I've rebased and tested to ensure it still works. It has some bugs but is a simple proof of concept. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-5585) Decommission Nodes from Cluster
[ https://issues.apache.org/jira/browse/NIFI-5585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16624066#comment-16624066 ] ASF GitHub Bot commented on NIFI-5585: -- Github user jtstorck commented on a diff in the pull request: https://github.com/apache/nifi/pull/3010#discussion_r219602056
--- Diff: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-cluster/src/test/java/org/apache/nifi/cluster/coordination/heartbeat/TestAbstractHeartbeatMonitor.java ---
@@ -244,11 +245,26 @@ public synchronized void finishNodeConnection(NodeIdentifier nodeId) {
     statuses.put(nodeId, new NodeConnectionStatus(nodeId, NodeConnectionState.CONNECTED));
 }
+
+@Override
+public synchronized void finishNodeDecommission(NodeIdentifier nodeId) {
+    statuses.put(nodeId, new NodeConnectionStatus(nodeId, NodeConnectionState.DECOMMISSIONED));
+}
+
+@Override
+public synchronized void requestNodeDecommission(NodeIdentifier nodeId, DecommissionCode decommissionCode, String explanation) {
+    statuses.put(nodeId, new NodeConnectionStatus(nodeId, NodeConnectionState.DECOMMISSIONED));
+}
+
 @Override
 public synchronized void requestNodeDisconnect(NodeIdentifier nodeId, DisconnectionCode disconnectionCode, String explanation) {
     statuses.put(nodeId, new NodeConnectionStatus(nodeId, NodeConnectionState.DISCONNECTED));
 }
+//@Override
--- End diff --
Done.
> Decommission Nodes from Cluster
> --
>
> Key: NIFI-5585
> URL: https://issues.apache.org/jira/browse/NIFI-5585
> Project: Apache NiFi
> Issue Type: Improvement
> Components: Core Framework
> Affects Versions: 1.7.1
> Reporter: Jeff Storck
> Assignee: Jeff Storck
> Priority: Major
>
> Allow a node in the cluster to be decommissioned, rebalancing flowfiles on the node to be decommissioned to the other active nodes. This work depends on NIFI-5516.
> Similar to the client sending a DISCONNECTING message as a PUT request to cluster/nodes/\{id}, a DECOMMISSIONING message can be sent as a PUT request to the same URI to initiate a DECOMMISSION for a DISCONNECTED node. The DECOMMISSIONING request will be idempotent.
> The steps to decommission a node and remove it from the cluster are:
> # Send request to disconnect the node
> # Once disconnect completes, send request to decommission the node.
> # Once decommission completes, send request to delete the node.
> When an error occurs and the node cannot complete decommissioning, the user can:
> # Send request to delete the node from the cluster
> # Diagnose why the node had issues with the decommission (out of memory, no network connection, etc.) and address the issue
> # Restart NiFi on the node so that it will reconnect to the cluster
> # Go through the steps to decommission and remove a node
> Toolkit CLI commands for retrieving a list of nodes and disconnecting/decommissioning/deleting nodes have been added.
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
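The disconnect/decommission/delete sequence quoted in the issue description can be sketched as plain request builders. The cluster/nodes/\{id} path comes from the description itself; the /nifi-api prefix, the JSON body shape, and the function names are assumptions for illustration, not the actual NiFi REST API:

```python
import json

def build_node_state_request(node_id, state):
    """Build a (method, uri, body) triple for a cluster node state change.

    The cluster/nodes/{id} path is taken from the issue description; the
    /nifi-api prefix and JSON body shape are guesses for illustration only.
    """
    uri = "/nifi-api/controller/cluster/nodes/%s" % node_id
    body = json.dumps({"node": {"nodeId": node_id, "status": state}})
    return ("PUT", uri, body)

def decommission_sequence(node_id):
    """The three steps from the description: disconnect, decommission, delete."""
    return [
        build_node_state_request(node_id, "DISCONNECTING"),
        build_node_state_request(node_id, "DECOMMISSIONING"),
        ("DELETE", "/nifi-api/controller/cluster/nodes/%s" % node_id, None),
    ]
```

Because the PUT-based DECOMMISSIONING request is described as idempotent, a client could safely re-send the second step after a timeout without tracking whether the first attempt was applied.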
[jira] [Commented] (NIFI-5585) Decommission Nodes from Cluster
[ https://issues.apache.org/jira/browse/NIFI-5585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16624064#comment-16624064 ] ASF GitHub Bot commented on NIFI-5585: -- Github user jtstorck commented on a diff in the pull request: https://github.com/apache/nifi/pull/3010#discussion_r219601868
--- Diff: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-cluster/src/main/java/org/apache/nifi/cluster/coordination/node/NodeClusterCoordinator.java ---
@@ -841,6 +900,34 @@ void notifyOthersOfNodeStatusChange(final NodeConnectionStatus updatedStatus, fi
     senderListener.notifyNodeStatusChange(nodesToNotify, message);
 }
+
+private void decommissionAsynchronously(final DecommissionMessage request, final int attempts, final int retrySeconds) {
+    final Thread decommissionThread = new Thread(new Runnable() {
+        @Override
+        public void run() {
+            final NodeIdentifier nodeId = request.getNodeId();
+            for (int i = 0; i < attempts; i++) {
+                try {
+                    senderListener.decommission(request);
+                    reportEvent(nodeId, Severity.INFO, "Node was decommissioned due to " + request.getExplanation());
+                    return;
+                } catch (final Exception e) {
+                    logger.error("Failed to notify {} that it has been decommissioned from the cluster due to {}", request.getNodeId(), request.getExplanation());
--- End diff --
Done.
> Decommission Nodes from Cluster
> --
>
> Key: NIFI-5585
> URL: https://issues.apache.org/jira/browse/NIFI-5585
> Project: Apache NiFi
> Issue Type: Improvement
> Components: Core Framework
> Affects Versions: 1.7.1
> Reporter: Jeff Storck
> Assignee: Jeff Storck
> Priority: Major
>
> Allow a node in the cluster to be decommissioned, rebalancing flowfiles on the node to be decommissioned to the other active nodes. This work depends on NIFI-5516.
> Similar to the client sending a DISCONNECTING message as a PUT request to cluster/nodes/\{id}, a DECOMMISSIONING message can be sent as a PUT request to the same URI to initiate a DECOMMISSION for a DISCONNECTED node. The DECOMMISSIONING request will be idempotent.
> The steps to decommission a node and remove it from the cluster are:
> # Send request to disconnect the node
> # Once disconnect completes, send request to decommission the node.
> # Once decommission completes, send request to delete the node.
> When an error occurs and the node cannot complete decommissioning, the user can:
> # Send request to delete the node from the cluster
> # Diagnose why the node had issues with the decommission (out of memory, no network connection, etc.) and address the issue
> # Restart NiFi on the node so that it will reconnect to the cluster
> # Go through the steps to decommission and remove a node
> Toolkit CLI commands for retrieving a list of nodes and disconnecting/decommissioning/deleting nodes have been added.
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
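The retry loop in the `decommissionAsynchronously` diff above (try up to `attempts` times, sleeping `retrySeconds` between failures) can be sketched in a few lines. The names here are hypothetical, not taken from any NiFi API:

```python
import time

def notify_with_retries(send, attempts, retry_seconds, sleep=time.sleep):
    """Call send() up to `attempts` times, sleeping between failed tries.

    Mirrors the retry loop in the diff: return True on the first success,
    False once all attempts are exhausted. The injectable `sleep` makes
    the retry delay testable without real waiting.
    """
    for i in range(attempts):
        try:
            send()
            return True
        except Exception:
            # Skip the sleep after the final failed attempt.
            if i < attempts - 1:
                sleep(retry_seconds)
    return False
```

Returning a boolean (rather than silently swallowing the failure) lets the caller report the event or escalate, matching the `reportEvent(...)` / `logger.error(...)` split in the Java code.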
[GitHub] nifi pull request #3010: [WIP] NIFI-5585
Github user jtstorck commented on a diff in the pull request: https://github.com/apache/nifi/pull/3010#discussion_r219599828
--- Diff: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-cluster/src/main/java/org/apache/nifi/cluster/coordination/node/NodeClusterCoordinator.java ---
@@ -821,7 +878,9 @@ void notifyOthersOfNodeStatusChange(final NodeConnectionStatus updatedStatus, fi
 // Otherwise, get the active coordinator (or wait for one to become active) and then notify the coordinator.
 final Set<NodeIdentifier> nodesToNotify;
 if (notifyAllNodes) {
-    nodesToNotify = getNodeIdentifiers(NodeConnectionState.CONNECTED, NodeConnectionState.CONNECTING);
+    // TODO notify all nodes
--- End diff --
Done.
---
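The `getNodeIdentifiers(CONNECTED, CONNECTING)` call being replaced in the diff above filters the cluster's nodes by connection state. A minimal sketch of that filtering, with illustrative names and plain strings standing in for the `NodeConnectionState` enum:

```python
def get_node_identifiers(statuses, *states):
    """Return the node ids whose connection state is in `states`.

    `statuses` maps node id -> state string, loosely mirroring the
    coordinator's internal status map; names are illustrative only.
    """
    wanted = set(states)
    return {node_id for node_id, state in statuses.items() if state in wanted}
```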
[GitHub] nifi pull request #3010: [WIP] NIFI-5585
Github user jtstorck commented on a diff in the pull request: https://github.com/apache/nifi/pull/3010#discussion_r219599522
--- Diff: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-cluster/src/main/java/org/apache/nifi/cluster/coordination/node/NodeClusterCoordinator.java ---
@@ -526,6 +579,10 @@ public void removeNode(final NodeIdentifier nodeId, final String userDn) {
     storeState();
 }
+
+private void onNodeDecommissioned(final NodeIdentifier nodeId) {
+    eventListeners.stream().forEach(listener -> listener.onNodeDecommissioned(nodeId));
+}
+
 private void onNodeRemoved(final NodeIdentifier nodeId) {
     eventListeners.stream().forEach(listener -> listener.onNodeRemoved(nodeId));
--- End diff --
Done.
---
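The `onNodeDecommissioned` method in the diff above fans a single event out to every registered listener. A minimal Python rendition of that listener fan-out pattern, with hypothetical names rather than the NiFi API:

```python
class ClusterEventNotifier:
    """Minimal sketch of the fan-out in onNodeDecommissioned: every
    registered listener is invoked with the node id. Hypothetical names,
    not the NiFi ClusterCoordinator API."""

    def __init__(self):
        self.listeners = []

    def add_listener(self, listener):
        self.listeners.append(listener)

    def on_node_decommissioned(self, node_id):
        # Invoke each listener in registration order, like the
        # eventListeners.stream().forEach(...) call in the Java diff.
        for listener in self.listeners:
            listener(node_id)
```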
[jira] [Commented] (NIFI-5585) Decommission Nodes from Cluster
[ https://issues.apache.org/jira/browse/NIFI-5585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16624048#comment-16624048 ] ASF GitHub Bot commented on NIFI-5585: -- Github user jtstorck commented on a diff in the pull request: https://github.com/apache/nifi/pull/3010#discussion_r219599484
--- Diff: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-cluster/src/main/java/org/apache/nifi/cluster/coordination/node/NodeClusterCoordinator.java ---
@@ -526,6 +579,10 @@ public void removeNode(final NodeIdentifier nodeId, final String userDn) {
     storeState();
 }
+
+private void onNodeDecommissioned(final NodeIdentifier nodeId) {
+    eventListeners.stream().forEach(listener -> listener.onNodeDecommissioned(nodeId));
--- End diff --
Done.
> Decommission Nodes from Cluster
> --
>
> Key: NIFI-5585
> URL: https://issues.apache.org/jira/browse/NIFI-5585
> Project: Apache NiFi
> Issue Type: Improvement
> Components: Core Framework
> Affects Versions: 1.7.1
> Reporter: Jeff Storck
> Assignee: Jeff Storck
> Priority: Major
>
> Allow a node in the cluster to be decommissioned, rebalancing flowfiles on the node to be decommissioned to the other active nodes. This work depends on NIFI-5516.
> Similar to the client sending a DISCONNECTING message as a PUT request to cluster/nodes/\{id}, a DECOMMISSIONING message can be sent as a PUT request to the same URI to initiate a DECOMMISSION for a DISCONNECTED node. The DECOMMISSIONING request will be idempotent.
> The steps to decommission a node and remove it from the cluster are:
> # Send request to disconnect the node
> # Once disconnect completes, send request to decommission the node.
> # Once decommission completes, send request to delete the node.
> When an error occurs and the node cannot complete decommissioning, the user can:
> # Send request to delete the node from the cluster
> # Diagnose why the node had issues with the decommission (out of memory, no network connection, etc.) and address the issue
> # Restart NiFi on the node so that it will reconnect to the cluster
> # Go through the steps to decommission and remove a node
> Toolkit CLI commands for retrieving a list of nodes and disconnecting/decommissioning/deleting nodes have been added.
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] nifi pull request #3010: [WIP] NIFI-5585
Github user jtstorck commented on a diff in the pull request: https://github.com/apache/nifi/pull/3010#discussion_r219599165
--- Diff: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-cluster/src/main/java/org/apache/nifi/cluster/coordination/node/NodeClusterCoordinator.java ---
@@ -494,6 +539,14 @@ public void requestNodeDisconnect(final NodeIdentifier nodeId, final Disconnecti
     disconnectAsynchronously(request, 10, 5);
 }
+
+//@Override
--- End diff --
Done.
---
[jira] [Commented] (NIFI-5585) Decommission Nodes from Cluster
[ https://issues.apache.org/jira/browse/NIFI-5585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16624044#comment-16624044 ] ASF GitHub Bot commented on NIFI-5585: -- Github user jtstorck commented on a diff in the pull request: https://github.com/apache/nifi/pull/3010#discussion_r219598024
--- Diff: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-cluster-protocol/src/main/java/org/apache/nifi/cluster/coordination/ClusterCoordinator.java ---
@@ -72,6 +91,16 @@
  */
 void requestNodeDisconnect(NodeIdentifier nodeId, DisconnectionCode disconnectionCode, String explanation);
+
+///**
--- End diff --
Done.
> Decommission Nodes from Cluster
> --
>
> Key: NIFI-5585
> URL: https://issues.apache.org/jira/browse/NIFI-5585
> Project: Apache NiFi
> Issue Type: Improvement
> Components: Core Framework
> Affects Versions: 1.7.1
> Reporter: Jeff Storck
> Assignee: Jeff Storck
> Priority: Major
>
> Allow a node in the cluster to be decommissioned, rebalancing flowfiles on the node to be decommissioned to the other active nodes. This work depends on NIFI-5516.
> Similar to the client sending a DISCONNECTING message as a PUT request to cluster/nodes/\{id}, a DECOMMISSIONING message can be sent as a PUT request to the same URI to initiate a DECOMMISSION for a DISCONNECTED node. The DECOMMISSIONING request will be idempotent.
> The steps to decommission a node and remove it from the cluster are:
> # Send request to disconnect the node
> # Once disconnect completes, send request to decommission the node.
> # Once decommission completes, send request to delete the node.
> When an error occurs and the node cannot complete decommissioning, the user can:
> # Send request to delete the node from the cluster
> # Diagnose why the node had issues with the decommission (out of memory, no network connection, etc.) and address the issue
> # Restart NiFi on the node so that it will reconnect to the cluster
> # Go through the steps to decommission and remove a node
> Toolkit CLI commands for retrieving a list of nodes and disconnecting/decommissioning/deleting nodes have been added.
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFIREG-201) nifi-registry-extensions impacted by top-level dependency management
[ https://issues.apache.org/jira/browse/NIFIREG-201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16623974#comment-16623974 ] ASF GitHub Bot commented on NIFIREG-201: Github user kevdoran commented on the issue: https://github.com/apache/nifi-registry/pull/143 Will review...
> nifi-registry-extensions impacted by top-level dependency management
>
> Key: NIFIREG-201
> URL: https://issues.apache.org/jira/browse/NIFIREG-201
> Project: NiFi Registry
> Issue Type: Improvement
> Affects Versions: 0.3.0
> Reporter: Bryan Bende
> Assignee: Bryan Bende
> Priority: Major
> Fix For: 0.3.0
>
> While reviewing NIFIREG-200, I noticed that when building with the include-ranger profile, the JARs in ext/ranger/lib ended up being affected by the dependency management section in the root pom.
> For example, the versions of the Jetty and Jackson JARs were being forced to the versions registry needs, but may not be versions that are compatible with the ranger client.
> To deal with this, I propose restructuring the repository to something like the following:
> _nifi-registry-core (everything that used to be at the root, except the assembly and extensions)_
> _nifi-registry-extensions_
> _nifi-registry-assembly_
> The dependency management can then be moved to the pom of nifi-registry-core so that it does not impact the modules under nifi-registry-extensions.
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (NIFI-5624) Improve stability of EncryptedWriteAheadProvenanceRepositoryTest
[ https://issues.apache.org/jira/browse/NIFI-5624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andy LoPresto updated NIFI-5624: Affects Version/s: 1.7.1 > Improve stability of EncryptedWriteAheadProvenanceRepositoryTest > > > Key: NIFI-5624 > URL: https://issues.apache.org/jira/browse/NIFI-5624 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework, Tools and Build >Affects Versions: 1.7.1 >Reporter: Joseph Witt >Assignee: Mark Payne >Priority: Major > > Tests run: 3, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 3.164 s <<< > FAILURE! - in > org.apache.nifi.provenance.EncryptedWriteAheadProvenanceRepositoryTest > testShouldRegisterAndGetEvent(org.apache.nifi.provenance.EncryptedWriteAheadProvenanceRepositoryTest) > Time elapsed: 0.021 s <<< ERROR! > java.io.FileNotFoundException: Cannot create TOC Reader because the file > target/storage/cbb4db1c-9cee-43e5-907f-8137a8178197/toc/0.toc does not exist > at > org.apache.nifi.provenance.EncryptedWriteAheadProvenanceRepositoryTest.testShouldRegisterAndGetEvent(EncryptedWriteAheadProvenanceRepositoryTest.groovy:277) > Wasn't sure if this was better to flag for [~alopresto] or [~markap14] so > just a heads up. In NIFI-4806 i'm going to ignore it for now and this JIRA > will be to improve it. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-5624) Improve stability of EncryptedWriteAheadProvenanceRepositoryTest
[ https://issues.apache.org/jira/browse/NIFI-5624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16623967#comment-16623967 ] Andy LoPresto commented on NIFI-5624: - Hi [~markap14], I believe you'll be better suited to fix this, as I don't recall the EWAPR doing any disk interaction. It simply intercepts the data being serialized/deserialized and encrypts/decrypts it, and then delegates the file read/write to WAPR. Thanks. > Improve stability of EncryptedWriteAheadProvenanceRepositoryTest > > > Key: NIFI-5624 > URL: https://issues.apache.org/jira/browse/NIFI-5624 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework, Tools and Build >Affects Versions: 1.7.1 >Reporter: Joseph Witt >Assignee: Andy LoPresto >Priority: Major > > Tests run: 3, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 3.164 s <<< > FAILURE! - in > org.apache.nifi.provenance.EncryptedWriteAheadProvenanceRepositoryTest > testShouldRegisterAndGetEvent(org.apache.nifi.provenance.EncryptedWriteAheadProvenanceRepositoryTest) > Time elapsed: 0.021 s <<< ERROR! > java.io.FileNotFoundException: Cannot create TOC Reader because the file > target/storage/cbb4db1c-9cee-43e5-907f-8137a8178197/toc/0.toc does not exist > at > org.apache.nifi.provenance.EncryptedWriteAheadProvenanceRepositoryTest.testShouldRegisterAndGetEvent(EncryptedWriteAheadProvenanceRepositoryTest.groovy:277) > Wasn't sure if this was better to flag for [~alopresto] or [~markap14] so > just a heads up. In NIFI-4806 i'm going to ignore it for now and this JIRA > will be to improve it. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
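The failure above is a timing race: the test opens the TOC file before the repository's background writer has created it. A common stabilization pattern is to poll for the file with a deadline before asserting on its contents. This is an illustrative Python sketch, not NiFi's Groovy test code; `wait_for_file` is a hypothetical helper:

```python
import os
import time

def wait_for_file(path, timeout_s=5.0, poll_interval_s=0.05):
    """Poll until `path` exists or the timeout elapses.

    Returns True if the file appeared in time, False otherwise.
    A test calls this before opening a file that a background writer
    (e.g. a repository flush thread) creates asynchronously, instead
    of assuming the file already exists when the test thread runs.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if os.path.exists(path):
            return True
        time.sleep(poll_interval_s)
    return os.path.exists(path)
```

The equivalent in the Groovy test would be a bounded spin-wait on the `0.toc` file before constructing the TOC reader, which removes the dependence on writer scheduling without changing what the test asserts.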
[jira] [Updated] (NIFI-5624) Improve stability of EncryptedWriteAheadProvenanceRepositoryTest
[ https://issues.apache.org/jira/browse/NIFI-5624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andy LoPresto updated NIFI-5624: Labels: encryption provenance repository security test (was: ) > Improve stability of EncryptedWriteAheadProvenanceRepositoryTest > > > Key: NIFI-5624 > URL: https://issues.apache.org/jira/browse/NIFI-5624 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework, Tools and Build >Affects Versions: 1.7.1 >Reporter: Joseph Witt >Assignee: Mark Payne >Priority: Major > Labels: encryption, provenance, repository, security, test > > Tests run: 3, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 3.164 s <<< > FAILURE! - in > org.apache.nifi.provenance.EncryptedWriteAheadProvenanceRepositoryTest > testShouldRegisterAndGetEvent(org.apache.nifi.provenance.EncryptedWriteAheadProvenanceRepositoryTest) > Time elapsed: 0.021 s <<< ERROR! > java.io.FileNotFoundException: Cannot create TOC Reader because the file > target/storage/cbb4db1c-9cee-43e5-907f-8137a8178197/toc/0.toc does not exist > at > org.apache.nifi.provenance.EncryptedWriteAheadProvenanceRepositoryTest.testShouldRegisterAndGetEvent(EncryptedWriteAheadProvenanceRepositoryTest.groovy:277) > Wasn't sure if this was better to flag for [~alopresto] or [~markap14] so > just a heads up. In NIFI-4806 i'm going to ignore it for now and this JIRA > will be to improve it. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (NIFI-5624) Improve stability of EncryptedWriteAheadProvenanceRepositoryTest
[ https://issues.apache.org/jira/browse/NIFI-5624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andy LoPresto updated NIFI-5624: Component/s: Tools and Build Core Framework > Improve stability of EncryptedWriteAheadProvenanceRepositoryTest > > > Key: NIFI-5624 > URL: https://issues.apache.org/jira/browse/NIFI-5624 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework, Tools and Build >Affects Versions: 1.7.1 >Reporter: Joseph Witt >Assignee: Mark Payne >Priority: Major > Labels: encryption, provenance, repository, security, test > > Tests run: 3, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 3.164 s <<< > FAILURE! - in > org.apache.nifi.provenance.EncryptedWriteAheadProvenanceRepositoryTest > testShouldRegisterAndGetEvent(org.apache.nifi.provenance.EncryptedWriteAheadProvenanceRepositoryTest) > Time elapsed: 0.021 s <<< ERROR! > java.io.FileNotFoundException: Cannot create TOC Reader because the file > target/storage/cbb4db1c-9cee-43e5-907f-8137a8178197/toc/0.toc does not exist > at > org.apache.nifi.provenance.EncryptedWriteAheadProvenanceRepositoryTest.testShouldRegisterAndGetEvent(EncryptedWriteAheadProvenanceRepositoryTest.groovy:277) > Wasn't sure if this was better to flag for [~alopresto] or [~markap14] so > just a heads up. In NIFI-4806 i'm going to ignore it for now and this JIRA > will be to improve it. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (NIFI-5624) Improve stability of EncryptedWriteAheadProvenanceRepositoryTest
[ https://issues.apache.org/jira/browse/NIFI-5624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andy LoPresto reassigned NIFI-5624: --- Assignee: Mark Payne (was: Andy LoPresto) > Improve stability of EncryptedWriteAheadProvenanceRepositoryTest > > > Key: NIFI-5624 > URL: https://issues.apache.org/jira/browse/NIFI-5624 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework, Tools and Build >Affects Versions: 1.7.1 >Reporter: Joseph Witt >Assignee: Mark Payne >Priority: Major > > Tests run: 3, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 3.164 s <<< > FAILURE! - in > org.apache.nifi.provenance.EncryptedWriteAheadProvenanceRepositoryTest > testShouldRegisterAndGetEvent(org.apache.nifi.provenance.EncryptedWriteAheadProvenanceRepositoryTest) > Time elapsed: 0.021 s <<< ERROR! > java.io.FileNotFoundException: Cannot create TOC Reader because the file > target/storage/cbb4db1c-9cee-43e5-907f-8137a8178197/toc/0.toc does not exist > at > org.apache.nifi.provenance.EncryptedWriteAheadProvenanceRepositoryTest.testShouldRegisterAndGetEvent(EncryptedWriteAheadProvenanceRepositoryTest.groovy:277) > Wasn't sure if this was better to flag for [~alopresto] or [~markap14] so > just a heads up. In NIFI-4806 i'm going to ignore it for now and this JIRA > will be to improve it. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFIREG-201) nifi-registry-extensions impacted by top-level dependency management
[ https://issues.apache.org/jira/browse/NIFIREG-201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16623964#comment-16623964 ] ASF GitHub Bot commented on NIFIREG-201: GitHub user bbende opened a pull request: https://github.com/apache/nifi-registry/pull/143 NIFIREG-201 Refactoring project structure to better isolate extensions You can merge this pull request into a Git repository by running: $ git pull https://github.com/bbende/nifi-registry NIFIREG-201 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi-registry/pull/143.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #143 commit a5dfe1f3901b4e89d87dab6df641fe5a53be147f Author: Bryan Bende Date: 2018-09-21T17:52:53Z NIFIREG-201 Refactoring project structure to better isolate extensions > nifi-registry-extensions impacted by top-level dependency management > > > Key: NIFIREG-201 > URL: https://issues.apache.org/jira/browse/NIFIREG-201 > Project: NiFi Registry > Issue Type: Improvement >Affects Versions: 0.3.0 >Reporter: Bryan Bende >Assignee: Bryan Bende >Priority: Major > Fix For: 0.3.0 > > > While reviewing NIFIREG-200, I noticed that when building with the > include-ranger profile, the JARs in ext/ranger/lib ended up being affected by > the dependency management section in the root pom. > For example, the versions of Jetty and Jackson JARs were being forced to the > versions registry needs, but may not be versions that are compatible with the > ranger client. 
> To deal with this, I propose restructuring the repository to something > like the following: > _nifi-registry-core (everything that used to be at the root, except the > assembly and extensions)_ > _nifi-registry-extensions_ > _nifi-registry-assembly_ > The dependency management can then be moved to the pom of nifi-registry-core > so that it does not impact the modules under nifi-registry-extensions. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
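The proposed restructuring relies on how Maven inheritance works: `dependencyManagement` is inherited from whichever pom a module declares as its parent, so moving it out of the root aggregator stops the version pinning from reaching the extension modules. A hypothetical sketch of the layout (module names are from the proposal; coordinates and the Jetty version are illustrative assumptions, abbreviated to omit `modelVersion`, `groupId`, etc.):

```xml
<!-- root pom.xml: aggregator only, no dependencyManagement, so
     nifi-registry-extensions modules no longer inherit forced versions -->
<project>
  <modules>
    <module>nifi-registry-core</module>
    <module>nifi-registry-extensions</module>
    <module>nifi-registry-assembly</module>
  </modules>
</project>

<!-- nifi-registry-core/pom.xml: pins Jetty/Jackson for core modules only -->
<project>
  <dependencyManagement>
    <dependencies>
      <dependency>
        <groupId>org.eclipse.jetty</groupId>
        <artifactId>jetty-server</artifactId>
        <version>9.4.11.v20180605</version>
      </dependency>
      <!-- Jackson and other registry-specific pins move here as well -->
    </dependencies>
  </dependencyManagement>
</project>
```

With this split, the Ranger client JARs bundled under ext/ranger/lib resolve their own transitive versions rather than the ones the registry webapp needs.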
[GitHub] nifi-registry pull request #143: NIFIREG-201 Refactoring project structure t...
GitHub user bbende opened a pull request: https://github.com/apache/nifi-registry/pull/143 NIFIREG-201 Refactoring project structure to better isolate extensions You can merge this pull request into a Git repository by running: $ git pull https://github.com/bbende/nifi-registry NIFIREG-201 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi-registry/pull/143.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #143 commit a5dfe1f3901b4e89d87dab6df641fe5a53be147f Author: Bryan Bende Date: 2018-09-21T17:52:53Z NIFIREG-201 Refactoring project structure to better isolate extensions ---
[jira] [Created] (NIFI-5624) Improve stability of EncryptedWriteAheadProvenanceRepositoryTest
Joseph Witt created NIFI-5624: - Summary: Improve stability of EncryptedWriteAheadProvenanceRepositoryTest Key: NIFI-5624 URL: https://issues.apache.org/jira/browse/NIFI-5624 Project: Apache NiFi Issue Type: Bug Reporter: Joseph Witt Assignee: Andy LoPresto Tests run: 3, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 3.164 s <<< FAILURE! - in org.apache.nifi.provenance.EncryptedWriteAheadProvenanceRepositoryTest testShouldRegisterAndGetEvent(org.apache.nifi.provenance.EncryptedWriteAheadProvenanceRepositoryTest) Time elapsed: 0.021 s <<< ERROR! java.io.FileNotFoundException: Cannot create TOC Reader because the file target/storage/cbb4db1c-9cee-43e5-907f-8137a8178197/toc/0.toc does not exist at org.apache.nifi.provenance.EncryptedWriteAheadProvenanceRepositoryTest.testShouldRegisterAndGetEvent(EncryptedWriteAheadProvenanceRepositoryTest.groovy:277) Wasn't sure if this was better to flag for [~alopresto] or [~markap14] so just a heads up. In NIFI-4806 I'm going to ignore it for now and this JIRA will be to improve it. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-4806) Upgrade to Tika/Tika-Parsers 1.19 to resolve TIKA-2535 and CVEs reported against all releases prior to 1.19
[ https://issues.apache.org/jira/browse/NIFI-4806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16623936#comment-16623936 ] Joseph Witt commented on NIFI-4806: --- also updating a few other deps that need updating. Leaving many alone for other JIRAs. Also changing old prov to integration tests and ignoring an unstable encrypted prov repo test. > Upgrade to Tika/Tika-Parsers 1.19 to resolve TIKA-2535 and CVEs reported > against all releases prior to 1.19 > --- > > Key: NIFI-4806 > URL: https://issues.apache.org/jira/browse/NIFI-4806 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Affects Versions: 1.5.0 >Reporter: Joseph Witt >Assignee: Joseph Witt >Priority: Blocker > Fix For: 1.8.0 > > > The nifi-media-processors depend on Tika-Parsers 1.16 as of now. They need > to be upgraded to at least Tika-Parsers 1.18 to resolve the licensing problem > identified to the Tika team and which they resolved in > https://issues.apache.org/jira/browse/TIKA-2535 > > We aren't susceptible to the licensing problem at this time because when we > did the last update for Tika-Parsers this was flagged and excluded though we > could be exposing bugs for certain datatypes we'd do mime detection on > (maybe). I have a comment about this in our pom. > > This Jira is to upgrade, ensure no invalid libs are used, and clean up the > comments and move on. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (NIFI-5623) Update all usage of Okhttp to latest library
Joseph Witt created NIFI-5623: - Summary: Update all usage of Okhttp to latest library Key: NIFI-5623 URL: https://issues.apache.org/jira/browse/NIFI-5623 Project: Apache NiFi Issue Type: Bug Reporter: Joseph Witt In looking at existing dep usage and available versions we see [INFO] com.squareup.okhttp3:mockwebserver ... 3.6.0 -> 3.11.0 [INFO] com.squareup.okhttp3:okhttp . 3.10.0 -> 3.11.0 [INFO] com.squareup.okhttp3:okhttp .. 3.3.1 -> 3.11.0 [INFO] com.squareup.okhttp3:okhttp .. 3.6.0 -> 3.11.0 [INFO] com.squareup.okhttp3:okhttp .. 3.8.1 -> 3.11.0 But in changing to 3.11.0 it became clear we have work to do so we can better leverage okhttp and its improving/tightening features around things like TLS setup/cert validation/hostname verification. Several tests started failing after the update. We need to use certs with updated SANs and we probably need to improve things as called out in https://issues.apache.org/jira/browse/NIFI-1478. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
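The "tightening" around hostname verification refers to RFC 6125-style matching: the client compares the server hostname against the certificate's SubjectAlternativeName dNSName entries, with a wildcard honored only as the entire left-most label, and a certificate with no matching SAN fails even if its CN matches. A simplified, self-contained sketch of that matching rule (illustrative only, not OkHttp's actual implementation):

```python
def hostname_matches(hostname, san_dns_names):
    """Return True if `hostname` matches any SAN dNSName entry.

    Simplified RFC 6125 matching: labels compare case-insensitively,
    and a wildcard is accepted only when it is the entire left-most
    label of the presented identifier (so "*.example.com" matches
    "nifi.example.com" but not "a.b.example.com").
    """
    host_labels = hostname.lower().split(".")
    for pattern in san_dns_names:
        labels = pattern.lower().split(".")
        if len(labels) != len(host_labels):
            continue  # a wildcard never spans multiple labels
        if labels[1:] != host_labels[1:]:
            continue  # labels right of the first must match exactly
        if labels[0] == "*" or labels[0] == host_labels[0]:
            return True
    return False
```

This is why the test certificates need explicit SAN entries for `localhost` (and any other hostnames the tests connect to) rather than relying on the CN alone.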
[jira] [Assigned] (NIFI-5623) Update all usage of Okhttp to latest library
[ https://issues.apache.org/jira/browse/NIFI-5623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joseph Witt reassigned NIFI-5623: - Assignee: Andy LoPresto > Update all usage of Okhttp to latest library > > > Key: NIFI-5623 > URL: https://issues.apache.org/jira/browse/NIFI-5623 > Project: Apache NiFi > Issue Type: Bug >Reporter: Joseph Witt >Assignee: Andy LoPresto >Priority: Major > > In looking at existing dep usage and available versions we see > [INFO] com.squareup.okhttp3:mockwebserver ... 3.6.0 -> > 3.11.0 > [INFO] com.squareup.okhttp3:okhttp . 3.10.0 -> > 3.11.0 > [INFO] com.squareup.okhttp3:okhttp .. 3.3.1 -> > 3.11.0 > [INFO] com.squareup.okhttp3:okhttp .. 3.6.0 -> > 3.11.0 > [INFO] com.squareup.okhttp3:okhttp .. 3.8.1 -> > 3.11.0 > But in changing to 3.11.0 it became clear we have work to do so we can better > leverage okhttp and its improving/tightening features around things like TLS > setup/cert validation/hostname verification. Several tests started failing > after the update. We need to use certs with updated SANs and we probably need > to improve things as called out in > https://issues.apache.org/jira/browse/NIFI-1478. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (NIFI-5622) Test certificates require SAN values
[ https://issues.apache.org/jira/browse/NIFI-5622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andy LoPresto updated NIFI-5622: Fix Version/s: (was: 0.5.0) > Test certificates require SAN values > > > Key: NIFI-5622 > URL: https://issues.apache.org/jira/browse/NIFI-5622 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework, Tools and Build >Affects Versions: 1.7.1 >Reporter: Andy LoPresto >Assignee: Andy LoPresto >Priority: Major > Labels: certificate, security, test > > During the update of OkHttp from 3.6.0 to 3.11.0 in [NIFI-|], it was > discovered that {{SubjectAlternativeName}} checking is now required, as > described in RFC 6125. The test resource keystore and truststores need to be > updated to provide SAN values. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (NIFI-5622) Test certificates require SAN values
[ https://issues.apache.org/jira/browse/NIFI-5622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andy LoPresto updated NIFI-5622: Description: During the update of OkHttp from 3.6.0 to 3.11.0 in [NIFI-4806|https://issues.apache.org/jira/browse/NIFI-4806], it was discovered that {{SubjectAlternativeName}} checking is now required, as described in RFC 6125. The test resource keystore and truststores need to be updated to provide SAN values.(was: During the update of OkHttp from 3.6.0 to 3.11.0 in [NIFI-|], it was discovered that {{SubjectAlternativeName}} checking is now required, as described in RFC 6125. The test resource keystore and truststores need to be updated to provide SAN values. ) > Test certificates require SAN values > > > Key: NIFI-5622 > URL: https://issues.apache.org/jira/browse/NIFI-5622 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework, Tools and Build >Affects Versions: 1.7.1 >Reporter: Andy LoPresto >Assignee: Andy LoPresto >Priority: Major > Labels: certificate, security, test > > During the update of OkHttp from 3.6.0 to 3.11.0 in > [NIFI-4806|https://issues.apache.org/jira/browse/NIFI-4806], it was > discovered that {{SubjectAlternativeName}} checking is now required, as > described in RFC 6125. The test resource keystore and truststores need to be > updated to provide SAN values. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (NIFI-5622) Test certificates require SAN values
[ https://issues.apache.org/jira/browse/NIFI-5622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andy LoPresto updated NIFI-5622: Description: During the update of OkHttp from 3.6.0 to 3.11.0 in [NIFI-|], it was discovered that {{SubjectAlternativeName}} checking is now required, as described in RFC 6125. The test resource keystore and truststores need to be updated to provide SAN values.(was: [~JDP10101] was working to upgrade a library dependency (com.squareup.okhttp.okhttp:2.5.0 to 2.6.0). During this attempt, the TestInvokeHttpSSL tests began failing. With the help of a Square engineer[1], it was determined that the TLS cipher suite in use during tests against Jetty server was restricted to only `TLS_DHE_DSS_WITH_AES_128_CBC_SHA`. This is an obsolete cipher suite and it was deprecated in OkHttp:2.6.0. While there is a workaround (code below) to override the OkHttp connector to use this obsolete cipher suite, the real issue was that Jetty should not be restricted to allowing that single cipher suite for incoming connections. Further investigation revealed that the test keystore[2] and truststore[3] in use did not have any valid RSA or DSA keys. Because of this, Jetty could not rely on any RSA/DSA-dependent cipher suites, and the removal of `TLS_DHE_DSS_WITH_AES_128_CBC_SHA` in the client library meant that no compatible cipher suites were available. The DSA key issued under alias `mykey` in the keystore expired in 2014. I will temporarily add a new key (valid for 1 year) into the keystore and truststore and commit. I will raise another Jira to allow for dynamic code-generated keys to avoid this problem in the future. 
[1] http://stackoverflow.com/questions/34498023/okhttp-upgrading-from-2-5-to-2-6-breaks-https-tests?noredirect=1#comment56840249_34498023 [2] https://github.com/alopresto/nifi/blob/aa99884782e54c54ee138f5609b3be84628e96f9/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/test/resources/localhost-ks.jks [3] https://github.com/alopresto/nifi/blob/aa99884782e54c54ee138f5609b3be84628e96f9/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/test/resources/localhost-ts.jks) > Test certificates require SAN values > > > Key: NIFI-5622 > URL: https://issues.apache.org/jira/browse/NIFI-5622 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework, Tools and Build >Affects Versions: 1.7.1 >Reporter: Andy LoPresto >Assignee: Andy LoPresto >Priority: Major > Labels: certificate, security, test > Fix For: 0.5.0 > > > During the update of OkHttp from 3.6.0 to 3.11.0 in [NIFI-|], it was > discovered that {{SubjectAlternativeName}} checking is now required, as > described in RFC 6125. The test resource keystore and truststores need to be > updated to provide SAN values. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (NIFI-5622) Test certificates require SAN values
[ https://issues.apache.org/jira/browse/NIFI-5622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andy LoPresto updated NIFI-5622: Component/s: Tools and Build > Test certificates require SAN values > > > Key: NIFI-5622 > URL: https://issues.apache.org/jira/browse/NIFI-5622 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework, Tools and Build >Affects Versions: 1.7.1 >Reporter: Andy LoPresto >Assignee: Andy LoPresto >Priority: Major > Labels: certificate, security, test > Fix For: 0.5.0 > > > During the update of OkHttp from 3.6.0 to 3.11.0 in [NIFI-|], it was > discovered that {{SubjectAlternativeName}} checking is now required, as > described in RFC 6125. The test resource keystore and truststores need to be > updated to provide SAN values. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (NIFI-5622) Test certificates require SAN values
[ https://issues.apache.org/jira/browse/NIFI-5622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andy LoPresto updated NIFI-5622: Affects Version/s: (was: 0.4.1) (was: 0.4.0) 1.7.1 > Test certificates require SAN values > > > Key: NIFI-5622 > URL: https://issues.apache.org/jira/browse/NIFI-5622 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework, Tools and Build >Affects Versions: 1.7.1 >Reporter: Andy LoPresto >Assignee: Andy LoPresto >Priority: Major > Labels: certificate, security, test > Fix For: 0.5.0 > > > During the update of OkHttp from 3.6.0 to 3.11.0 in [NIFI-|], it was > discovered that {{SubjectAlternativeName}} checking is now required, as > described in RFC 6125. The test resource keystore and truststores need to be > updated to provide SAN values. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (NIFI-5622) Test certificates require SAN values
Andy LoPresto created NIFI-5622: --- Summary: Test certificates require SAN values Key: NIFI-5622 URL: https://issues.apache.org/jira/browse/NIFI-5622 Project: Apache NiFi Issue Type: Bug Components: Core Framework Affects Versions: 0.4.0, 0.4.1 Reporter: Andy LoPresto Assignee: Andy LoPresto Fix For: 0.5.0 [~JDP10101] was working to upgrade a library dependency (com.squareup.okhttp.okhttp:2.5.0 to 2.6.0). During this attempt, the TestInvokeHttpSSL tests began failing. With the help of a Square engineer[1], it was determined that the TLS cipher suite in use during tests against Jetty server was restricted to only `TLS_DHE_DSS_WITH_AES_128_CBC_SHA`. This is an obsolete cipher suite and it was deprecated in OkHttp:2.6.0. While there is a workaround (code below) to override the OkHttp connector to use this obsolete cipher suite, the real issue was that Jetty should not be restricted to allowing that single cipher suite for incoming connections. Further investigation revealed that the test keystore[2] and truststore[3] in use did not have any valid RSA or DSA keys. Because of this, Jetty could not rely on any RSA/DSA-dependent cipher suites, and the removal of `TLS_DHE_DSS_WITH_AES_128_CBC_SHA` in the client library meant that no compatible cipher suites were available. The DSA key issued under alias `mykey` in the keystore expired in 2014. I will temporarily add a new key (valid for 1 year) into the keystore and truststore and commit. I will raise another Jira to allow for dynamic code-generated keys to avoid this problem in the future. 
[1] http://stackoverflow.com/questions/34498023/okhttp-upgrading-from-2-5-to-2-6-breaks-https-tests?noredirect=1#comment56840249_34498023 [2] https://github.com/alopresto/nifi/blob/aa99884782e54c54ee138f5609b3be84628e96f9/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/test/resources/localhost-ks.jks [3] https://github.com/alopresto/nifi/blob/aa99884782e54c54ee138f5609b3be84628e96f9/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/test/resources/localhost-ts.jks -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-5560) Sub directory(symbolic link to directory) files are not getting listed in ListSFTP(ListSFTP does not Follow symbolic links)
[ https://issues.apache.org/jira/browse/NIFI-5560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16623872#comment-16623872 ] ASF GitHub Bot commented on NIFI-5560: -- Github user hemantha-kumara commented on a diff in the pull request: https://github.com/apache/nifi/pull/3000#discussion_r219562808 --- Diff: nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/util/FileTransfer.java --- @@ -141,6 +141,13 @@ default String getAbsolutePath(FlowFile flowFile, String remotePath) throws IOEx .defaultValue("false") .allowableValues("true", "false") .build(); +public static final PropertyDescriptor FOLLOW_SYMLINK = new PropertyDescriptor.Builder() --- End diff -- Submitted changes with commit 7fede6760b4ecc5980125f19d89588827690909b > Sub directory(symbolic link to directory) files are not getting listed in > ListSFTP(ListSFTP does not Follow symbolic links) > --- > > Key: NIFI-5560 > URL: https://issues.apache.org/jira/browse/NIFI-5560 > Project: Apache NiFi > Issue Type: Bug >Affects Versions: 1.7.1 >Reporter: Hemantha kumara M S >Assignee: Hemantha kumara M S >Priority: Major > > *Here is the configuration* > > *SFTP Server side:* > -bash-4.2$ mkdir -p /tmp/testData > -bash-4.2$ > -bash-4.2$ mkdir -p /tmp/toRead > -bash-4.2$ ln -s /tmp/testData /tmp/toRead/data1 > -bash-4.2$ touch /tmp/testData/1.txt > -bash-4.2$ touch /tmp/testData/2.txt > -bash-4.2$ touch /tmp/toRead/t.txt > -bash-4.2$ mkdir /tmp/toRead/data2 > -bash-4.2$ touch /tmp/toRead/data2/22.txt > -bash-4.2$ cd /tmp/toRead/ > -bash-4.2$ tree > . > ├── data1 -> /tmp/testData > ├── data2 > │ └── 22.txt > └── t.txt > 2 directories, 2 files > -bash-4.2$ pwd > /tmp/toRead > -bash-4.2$ tree > . 
> ├── data1 -> /tmp/testData > ├── data2 > │ └── 22.txt > └── t.txt > 2 directories, 2 files > -bash-4.2$ touch data > data1/ data2/ > -bash-4.2$ touch data2/22.txt > -bash-4.2$ touch t.txt > -bash-4.2$ tree /tmp/testData > /tmp/testData > ├── 1.txt > └── 2.txt > 0 directories, 2 files > > *Nifi:* > Configured ListSFTP +Remote Path+ to +/tmp/toRead/+ and +Search Recursively+ > to +true+ > > *+Expected result:+* > Should list 4 files(1.txt, 2.txt, t.txt, data2/22.txt) > *+Actual result:+* > listed only two files(t.txt, data2/22.txt) -- This message was sent by Atlassian JIRA (v7.6.3#76005)
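The behavior in the report can be reproduced outside SFTP with any recursive lister that does not traverse symlinked directories. A small Python sketch (illustrative; `list_files` is a hypothetical stand-in for ListSFTP's recursive listing, and the proposed Follow Symlink property corresponds to the `follow_symlinks` flag):

```python
import os
import tempfile

def list_files(root, follow_symlinks):
    """Recursively list regular files under `root`, optionally
    descending into symlinked directories."""
    found = []
    for dirpath, dirnames, filenames in os.walk(root, followlinks=follow_symlinks):
        for name in filenames:
            found.append(os.path.join(dirpath, name))
    return found

# Recreate the reported layout: toRead/data1 is a symlink to testData
base = tempfile.mkdtemp()
test_data = os.path.join(base, "testData")
to_read = os.path.join(base, "toRead")
os.makedirs(test_data)
os.makedirs(os.path.join(to_read, "data2"))
for f in ("1.txt", "2.txt"):
    open(os.path.join(test_data, f), "w").close()
open(os.path.join(to_read, "t.txt"), "w").close()
open(os.path.join(to_read, "data2", "22.txt"), "w").close()
os.symlink(test_data, os.path.join(to_read, "data1"))

without = list_files(to_read, follow_symlinks=False)   # t.txt, data2/22.txt
with_links = list_files(to_read, follow_symlinks=True)  # plus 1.txt, 2.txt
```

With `follow_symlinks=False` only the two directly reachable files are listed (the actual result reported); following links also yields the two files behind `data1`, matching the expected result of four.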
[GitHub] nifi pull request #3000: NIFI-5560 Added Follow SYMLINK support for ListFTP ...
Github user hemantha-kumara commented on a diff in the pull request: https://github.com/apache/nifi/pull/3000#discussion_r219562808 --- Diff: nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/util/FileTransfer.java --- @@ -141,6 +141,13 @@ default String getAbsolutePath(FlowFile flowFile, String remotePath) throws IOEx .defaultValue("false") .allowableValues("true", "false") .build(); +public static final PropertyDescriptor FOLLOW_SYMLINK = new PropertyDescriptor.Builder() --- End diff -- Submitted changes with commit 7fede6760b4ecc5980125f19d89588827690909b ---
[jira] [Commented] (NIFI-5585) Decommission Nodes from Cluster
[ https://issues.apache.org/jira/browse/NIFI-5585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16623836#comment-16623836 ] ASF GitHub Bot commented on NIFI-5585: -- Github user markap14 commented on the issue: https://github.com/apache/nifi/pull/3010 @jtstorck I did just run into an issue that we will need to address. If we decommission 2 nodes at the same time (say Node 2 and Node 3) then we can end up in a state where Node 2 is trying to send to Node 3 (or vice versa) but the node is not connected. As a result, it ends up leaving the data in the queue. > Decommission Nodes from Cluster > -- > > Key: NIFI-5585 > URL: https://issues.apache.org/jira/browse/NIFI-5585 > Project: Apache NiFi > Issue Type: Improvement > Components: Core Framework >Affects Versions: 1.7.1 >Reporter: Jeff Storck >Assignee: Jeff Storck >Priority: Major > > Allow a node in the cluster to be decommissioned, rebalancing flowfiles on > the node to be decommissioned to the other active nodes. This work depends > on NIFI-5516. > Similar to the client sending a PUT request with a DISCONNECTING message to > cluster/nodes/\{id}, a DECOMMISSIONING message can be sent as a PUT request > to the same URI to initiate a DECOMMISSION for a DISCONNECTED node. The > DECOMMISSIONING request will be idempotent. > The steps to decommission a node and remove it from the cluster are: > # Send request to disconnect the node > # Once disconnect completes, send request to decommission the node. > # Once decommission completes, send request to delete node. 
> When an error occurs and the node cannot complete decommissioning, the user > can: > # Send request to delete the node from the cluster > # Diagnose why the node had issues with the decommission (out of memory, no > network connection, etc) and address the issue > # Restart NiFi on the node so that it will reconnect to the cluster > # Go through the steps to decommission and remove a node > Toolkit CLI commands for retrieving a list of nodes and > disconnecting/decommissioning/deleting nodes have been added. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
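The lifecycle described above — disconnect, then decommission, then delete, with an idempotent DECOMMISSIONING request — can be modeled as a small state machine. A hypothetical Python sketch (state names follow the ticket; the transition table itself is an assumption for illustration, not NiFi's actual implementation):

```python
# Allowed next states for each node state; a DECOMMISSIONING request is
# only legal once a node has reached DISCONNECTED.
VALID_TRANSITIONS = {
    "CONNECTED": {"DISCONNECTING"},
    "DISCONNECTING": {"DISCONNECTED"},
    "DISCONNECTED": {"DECOMMISSIONING", "CONNECTING"},
    "DECOMMISSIONING": {"DECOMMISSIONED"},
    "DECOMMISSIONED": set(),
    "CONNECTING": {"CONNECTED"},
}

def request_state(current, requested):
    """Handle a PUT state-change request for a node.

    Repeating the current state is a no-op rather than an error,
    which is what makes the DECOMMISSIONING request idempotent.
    """
    if requested == current:
        return current
    if requested in VALID_TRANSITIONS[current]:
        return requested
    raise ValueError(f"illegal transition {current} -> {requested}")
```

Under this model, re-sending DECOMMISSIONING to a node already in that state simply returns the same state, while sending it to a CONNECTED node is rejected until a disconnect completes.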
[GitHub] nifi issue #3010: [WIP] NIFI-5585
Github user markap14 commented on the issue: https://github.com/apache/nifi/pull/3010 @jtstorck I did just run into an issue that we will need to address. If we decommission 2 nodes at the same time (say Node 2 and Node 3) then we can end up in a state where Node 2 is trying to send to Node 3 (or vice versa) but the node is not connected. As a result, it ends up leaving the data in the queue. ---
[jira] [Commented] (NIFIREG-200) Upgrade version of Jetty
[ https://issues.apache.org/jira/browse/NIFIREG-200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16623830#comment-16623830 ] ASF GitHub Bot commented on NIFIREG-200: Github user asfgit closed the pull request at: https://github.com/apache/nifi-registry/pull/142 > Upgrade version of Jetty > > > Key: NIFIREG-200 > URL: https://issues.apache.org/jira/browse/NIFIREG-200 > Project: NiFi Registry > Issue Type: Improvement >Affects Versions: 0.2.0 >Reporter: Andy LoPresto >Assignee: Kevin Doran >Priority: Blocker > Labels: jetty > Fix For: 0.3.0 > > > Spoke with Kevin off-list; he will make this change. > Please upgrade Jetty to version 9.4.11.x. > See [NIFI-5479|https://issues.apache.org/jira/browse/NIFI-5479]. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (NIFIREG-200) Upgrade version of Jetty
[ https://issues.apache.org/jira/browse/NIFIREG-200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bryan Bende resolved NIFIREG-200. - Resolution: Fixed > Upgrade version of Jetty > > > Key: NIFIREG-200 > URL: https://issues.apache.org/jira/browse/NIFIREG-200 > Project: NiFi Registry > Issue Type: Improvement >Affects Versions: 0.2.0 >Reporter: Andy LoPresto >Assignee: Kevin Doran >Priority: Blocker > Labels: jetty > Fix For: 0.3.0 > > > Spoke with Kevin off-list; he will make this change. > Please upgrade Jetty to version 9.4.11.x. > See [NIFI-5479|https://issues.apache.org/jira/browse/NIFI-5479]. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] nifi-registry pull request #142: NIFIREG-200 Update dependencies
Github user asfgit closed the pull request at: https://github.com/apache/nifi-registry/pull/142 ---
[jira] [Commented] (NIFIREG-200) Upgrade version of Jetty
[ https://issues.apache.org/jira/browse/NIFIREG-200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16623829#comment-16623829 ] ASF GitHub Bot commented on NIFIREG-200: Github user bbende commented on the issue: https://github.com/apache/nifi-registry/pull/142 This looks good and I'll merge to master. Something I noticed while reviewing this (not a result of anything in this PR), is that the nifi-registry-extensions modules are impacted by the top-level dependency management, which could have unforeseen consequences. I created this JIRA with further information: https://issues.apache.org/jira/browse/NIFIREG-201 > Upgrade version of Jetty > > > Key: NIFIREG-200 > URL: https://issues.apache.org/jira/browse/NIFIREG-200 > Project: NiFi Registry > Issue Type: Improvement >Affects Versions: 0.2.0 >Reporter: Andy LoPresto >Assignee: Kevin Doran >Priority: Blocker > Labels: jetty > Fix For: 0.3.0 > > > Spoke with Kevin off-list; he will make this change. > Please upgrade Jetty to version 9.4.11.x. > See [NIFI-5479|https://issues.apache.org/jira/browse/NIFI-5479]. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] nifi-registry issue #142: NIFIREG-200 Update dependencies
Github user bbende commented on the issue: https://github.com/apache/nifi-registry/pull/142 This looks good and I'll merge to master. Something I noticed while reviewing this (not a result of anything in this PR), is that the nifi-registry-extensions modules are impacted by the top-level dependency management, which could have unforeseen consequences. I created this JIRA with further information: https://issues.apache.org/jira/browse/NIFIREG-201 ---
[jira] [Created] (NIFIREG-201) nifi-registry-extensions impacted by top-level dependency management
Bryan Bende created NIFIREG-201: --- Summary: nifi-registry-extensions impacted by top-level dependency management Key: NIFIREG-201 URL: https://issues.apache.org/jira/browse/NIFIREG-201 Project: NiFi Registry Issue Type: Improvement Affects Versions: 0.3.0 Reporter: Bryan Bende Assignee: Bryan Bende Fix For: 0.3.0 While reviewing NIFIREG-200, I noticed that when building with the include-ranger profile, the JARs in ext/ranger/lib ended up being affected by the dependency management section in the root pom. For example, the versions of the Jetty and Jackson JARs were being forced to the versions the registry needs, but these may not be versions that are compatible with the Ranger client. To deal with this, I propose restructuring the repository to something like the following: _nifi-registry-core (everything that used to be at the root, except the assembly and extensions)_ _nifi-registry-extensions_ _nifi-registry-assembly_ The dependency management can then be moved to the pom of nifi-registry-core so that it does not impact the modules under nifi-registry-extensions. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-5612) org.apache.avro.UnresolvedUnionException: Not in union ["null","int"]: 0
[ https://issues.apache.org/jira/browse/NIFI-5612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16623794#comment-16623794 ] Colin Dean commented on NIFI-5612: -- It's worth noting that {{DATE}} and {{TIME}} SQL types can also include an Avro {{int}} type, but only when using logical types. I continue to see this error regardless of the logical types setting. > org.apache.avro.UnresolvedUnionException: Not in union ["null","int"]: 0 > > > Key: NIFI-5612 > URL: https://issues.apache.org/jira/browse/NIFI-5612 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 1.5.0, 1.6.0, 1.7.1 > Environment: Microsoft Windows, MySQL Enterprise 5.0.80 >Reporter: Colin Dean >Priority: Major > Labels: ExecuteSQL, avro, nifi > > I'm seeing this when I execute {{SELECT * FROM }} on a few tables > but not on dozens of others in the same database. > {code} > 2018-09-13 01:11:31,434 WARN [Timer-Driven Process Thread-8] > o.a.n.controller.tasks.ConnectableTask Administratively Yielding > ExecuteSQL[id=cf5c0996-eddf-3e05-25a3-c407c5edf990] due to uncaught > Exception: org.apache.avro.file.DataFileWriter$AppendWriteException: > org.apache.avro.UnresolvedUnionException: Not in union ["null","int"]: 0 > org.apache.avro.file.DataFileWriter$AppendWriteException: > org.apache.avro.UnresolvedUnionException: Not in union ["null","int"]: 0 > at org.apache.avro.file.DataFileWriter.append(DataFileWriter.java:308) > at > org.apache.nifi.processors.standard.util.JdbcCommon.convertToAvroStream(JdbcCommon.java:462) > at > org.apache.nifi.processors.standard.ExecuteSQL.lambda$onTrigger$1(ExecuteSQL.java:252) > at > org.apache.nifi.controller.repository.StandardProcessSession.write(StandardProcessSession.java:2625) > at > org.apache.nifi.processors.standard.ExecuteSQL.onTrigger(ExecuteSQL.java:242) > at > org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27) > at > 
org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1165) > at > org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:203) > at > org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:117) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) > at > java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) > at > java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > at java.lang.Thread.run(Thread.java:745) > Caused by: org.apache.avro.UnresolvedUnionException: Not in union > ["null","int"]: 0 > at > org.apache.avro.generic.GenericData.resolveUnion(GenericData.java:709) > at > org.apache.avro.generic.GenericDatumWriter.resolveUnion(GenericDatumWriter.java:192) > at > org.apache.avro.generic.GenericDatumWriter.writeWithoutConversion(GenericDatumWriter.java:110) > at > org.apache.avro.generic.GenericDatumWriter.write(GenericDatumWriter.java:73) > at > org.apache.avro.generic.GenericDatumWriter.writeField(GenericDatumWriter.java:153) > at > org.apache.avro.generic.GenericDatumWriter.writeRecord(GenericDatumWriter.java:143) > at > org.apache.avro.generic.GenericDatumWriter.writeWithoutConversion(GenericDatumWriter.java:105) > at > org.apache.avro.generic.GenericDatumWriter.write(GenericDatumWriter.java:73) > at > org.apache.avro.generic.GenericDatumWriter.write(GenericDatumWriter.java:60) > at org.apache.avro.file.DataFileWriter.append(DataFileWriter.java:302) > ... 
15 common frames omitted > {code} > I don't know if I can share the database schema – still working with my team > on that – but looking at it, I think it has something to do with the > signedness of int(1) or tinyint(1), because those two are the only numerical > types common to all of the tables. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
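Avro raises UnresolvedUnionException when the runtime type of the value matches no branch of the union schema, so even a value of 0 can fail against ["null","int"] if the JDBC layer hands Avro a Long instead of an Integer (one plausible cause consistent with the signedness suspicion above, since JDBC drivers commonly widen unsigned columns). The following is a simplified Python model of that branch-matching behavior, not the actual org.apache.avro implementation; Java type names are passed explicitly to mimic runtime-type dispatch.

```python
# Simplified model of Avro's GenericData.resolveUnion: a value matches a
# union branch by its *runtime type*, so a Long never matches an "int"
# branch even when the value itself (e.g. 0) would fit in 32 bits.
# This is an illustration, not the real org.apache.avro code.

def resolve_union(branches, java_type, value):
    """Return the matching branch for a (java_type, value) pair, or raise
    mimicking Avro's UnresolvedUnionException message."""
    mapping = {"null": "NoneType", "int": "Integer", "long": "Long"}
    for branch in branches:
        if mapping.get(branch) == java_type:
            return branch
    raise ValueError(f"Not in union {branches}: {value}")

print(resolve_union(["null", "int"], "Integer", 0))      # int
print(resolve_union(["null", "int"], "NoneType", None))  # null

# A Long 0 -- e.g. an unsigned column widened by the JDBC driver --
# reproduces the reported failure even though the value is tiny:
try:
    resolve_union(["null", "int"], "Long", 0)
except ValueError as err:
    print(err)
```

If this model matches the real cause, widening the generated schema branch to "long" (or narrowing the JDBC value) would resolve the mismatch.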
[jira] [Commented] (MINIFICPP-616) Update appveyor.yml to remove restricted building on a branch
[ https://issues.apache.org/jira/browse/MINIFICPP-616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16623795#comment-16623795 ] ASF GitHub Bot commented on MINIFICPP-616: -- Github user asfgit closed the pull request at: https://github.com/apache/nifi-minifi-cpp/pull/403 > Update appveyor.yml to remove restricted building on a branch > - > > Key: MINIFICPP-616 > URL: https://issues.apache.org/jira/browse/MINIFICPP-616 > Project: NiFi MiNiFi C++ > Issue Type: Bug >Reporter: Aldrin Piri >Assignee: Aldrin Piri >Priority: Major > > Appveyor is currently configured to only build off of a certain named branch > from when the associated functionality was introduced. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] nifi-minifi-cpp pull request #403: MINIFICPP-616 Run appveyor on all branche...
Github user asfgit closed the pull request at: https://github.com/apache/nifi-minifi-cpp/pull/403 ---
[jira] [Commented] (MINIFICPP-616) Update appveyor.yml to remove restricted building on a branch
[ https://issues.apache.org/jira/browse/MINIFICPP-616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16623793#comment-16623793 ] ASF GitHub Bot commented on MINIFICPP-616: -- Github user phrocker commented on the issue: https://github.com/apache/nifi-minifi-cpp/pull/403 @apiri sorry for leaving that in there. +1 Will merge. The travis failure is network connectivity. > Update appveyor.yml to remove restricted building on a branch > - > > Key: MINIFICPP-616 > URL: https://issues.apache.org/jira/browse/MINIFICPP-616 > Project: NiFi MiNiFi C++ > Issue Type: Bug >Reporter: Aldrin Piri >Assignee: Aldrin Piri >Priority: Major > > Appveyor is currently configured to only build off of a certain named branch > from when the associated functionality was introduced. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] nifi-minifi-cpp issue #403: MINIFICPP-616 Run appveyor on all branches
Github user phrocker commented on the issue: https://github.com/apache/nifi-minifi-cpp/pull/403 @apiri sorry for leaving that in there. +1 Will merge. The travis failure is network connectivity. ---
[jira] [Updated] (MINIFICPP-616) Update appveyor.yml to remove restricted building on a branch
[ https://issues.apache.org/jira/browse/MINIFICPP-616?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aldrin Piri updated MINIFICPP-616: -- Status: Patch Available (was: Open) > Update appveyor.yml to remove restricted building on a branch > - > > Key: MINIFICPP-616 > URL: https://issues.apache.org/jira/browse/MINIFICPP-616 > Project: NiFi MiNiFi C++ > Issue Type: Bug >Reporter: Aldrin Piri >Assignee: Aldrin Piri >Priority: Major > > Appveyor is currently configured to only build off of a certain named branch > from when the associated functionality was introduced. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFIREG-200) Upgrade version of Jetty
[ https://issues.apache.org/jira/browse/NIFIREG-200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16623696#comment-16623696 ] ASF GitHub Bot commented on NIFIREG-200: Github user bbende commented on the issue: https://github.com/apache/nifi-registry/pull/142 Reviewing.. > Upgrade version of Jetty > > > Key: NIFIREG-200 > URL: https://issues.apache.org/jira/browse/NIFIREG-200 > Project: NiFi Registry > Issue Type: Improvement >Affects Versions: 0.2.0 >Reporter: Andy LoPresto >Assignee: Kevin Doran >Priority: Blocker > Labels: jetty > Fix For: 0.3.0 > > > Spoke with Kevin off-list; he will make this change. > Please upgrade Jetty to version 9.4.11.x. > See [NIFI-5479|https://issues.apache.org/jira/browse/NIFI-5479]. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] nifi-registry issue #142: NIFIREG-200 Update dependencies
Github user bbende commented on the issue: https://github.com/apache/nifi-registry/pull/142 Reviewing.. ---
[jira] [Commented] (MINIFICPP-603) Fill gaps in C2 responses for Windows
[ https://issues.apache.org/jira/browse/MINIFICPP-603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16623633#comment-16623633 ] ASF GitHub Bot commented on MINIFICPP-603: -- Github user asfgit closed the pull request at: https://github.com/apache/nifi-minifi-cpp/pull/402 > Fill gaps in C2 responses for Windows > - > > Key: MINIFICPP-603 > URL: https://issues.apache.org/jira/browse/MINIFICPP-603 > Project: NiFi MiNiFi C++ > Issue Type: Sub-task >Reporter: Mr TheSegfault >Assignee: Mr TheSegfault >Priority: Major > Fix For: 0.6.0 > > > C2 responses aren't functionally complete in windows. This ticket is meant to > fill those gaps. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (NIFI-5621) Create a Connection Pooling service implementation to be used in Cassandra processors
Sivaprasanna Sethuraman created NIFI-5621: - Summary: Create a Connection Pooling service implementation to be used in Cassandra processors Key: NIFI-5621 URL: https://issues.apache.org/jira/browse/NIFI-5621 Project: Apache NiFi Issue Type: Improvement Components: Extensions Reporter: Sivaprasanna Sethuraman Assignee: Sivaprasanna Sethuraman Like how the Relational Database processors leverage 'DBCPConnectionPool' controller service, there should be one that could be used by the processors from Cassandra bundle. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (MINIFICPP-616) Update appveyor.yml to remove restricted building on a branch
[ https://issues.apache.org/jira/browse/MINIFICPP-616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16623523#comment-16623523 ] ASF GitHub Bot commented on MINIFICPP-616: -- GitHub user apiri opened a pull request: https://github.com/apache/nifi-minifi-cpp/pull/403 MINIFICPP-616 Run appveyor on all branches Thank you for submitting a contribution to Apache NiFi - MiNiFi C++. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken: ### For all changes: - [X] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? - [X ] Does your PR title start with MINIFI- where is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character. - [X ] Has your PR been rebased against the latest commit within the target branch (typically master)? - [X ] Is your initial contribution a single, squashed commit? ### For code changes: - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the LICENSE file? - [ ] If applicable, have you updated the NOTICE file? ### For documentation related changes: - [ ] Have you ensured that format looks appropriate for the output in which it is rendered? ### Note: Please ensure that once the PR is submitted, you check travis-ci for build issues and submit an update to your PR as soon as possible. 
You can merge this pull request into a Git repository by running: $ git pull https://github.com/apiri/nifi-minifi-cpp MINIFICPP-616 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi-minifi-cpp/pull/403.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #403 commit ee57fcaca4e61bc7d9b91730870f5e223ec59c11 Author: Aldrin Piri Date: 2018-09-21T12:08:45Z MINIFICPP-616 Run appveyor on all branches > Update appveyor.yml to remove restricted building on a branch > - > > Key: MINIFICPP-616 > URL: https://issues.apache.org/jira/browse/MINIFICPP-616 > Project: NiFi MiNiFi C++ > Issue Type: Bug >Reporter: Aldrin Piri >Assignee: Aldrin Piri >Priority: Major > > Appveyor is currently configured to only build off of a certain named branch > from when the associated functionality was introduced. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] nifi-minifi-cpp pull request #403: MINIFICPP-616 Run appveyor on all branche...
GitHub user apiri opened a pull request: https://github.com/apache/nifi-minifi-cpp/pull/403 MINIFICPP-616 Run appveyor on all branches Thank you for submitting a contribution to Apache NiFi - MiNiFi C++. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken: ### For all changes: - [X] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? - [X ] Does your PR title start with MINIFI- where is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character. - [X ] Has your PR been rebased against the latest commit within the target branch (typically master)? - [X ] Is your initial contribution a single, squashed commit? ### For code changes: - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the LICENSE file? - [ ] If applicable, have you updated the NOTICE file? ### For documentation related changes: - [ ] Have you ensured that format looks appropriate for the output in which it is rendered? ### Note: Please ensure that once the PR is submitted, you check travis-ci for build issues and submit an update to your PR as soon as possible. You can merge this pull request into a Git repository by running: $ git pull https://github.com/apiri/nifi-minifi-cpp MINIFICPP-616 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi-minifi-cpp/pull/403.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #403 commit ee57fcaca4e61bc7d9b91730870f5e223ec59c11 Author: Aldrin Piri Date: 2018-09-21T12:08:45Z MINIFICPP-616 Run appveyor on all branches ---
[jira] [Commented] (NIFI-5608) PutDatabaseRecord will remove _s in update keys if translate columns is true
[ https://issues.apache.org/jira/browse/NIFI-5608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16623489#comment-16623489 ] Pierre Villard commented on NIFI-5608: -- Had a quick look and it seems to be on purpose [1]. [~mattyb149] ? [1] https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/PutDatabaseRecord.java#L980 > PutDatabaseRecord will remove _s in update keys if translate columns is true > > > Key: NIFI-5608 > URL: https://issues.apache.org/jira/browse/NIFI-5608 > Project: Apache NiFi > Issue Type: Bug >Reporter: eric twilegar >Priority: Major > > I had a table where the column names were all defined lower case. > In the NiFi records the names were mixed case and sort of all over the place. > Translation was working well, but then I added a column with an "_" in it. > So a column like my_id, which was part of the primary key, was used as the > WHERE clause in the update statement. So the where clause was "WHERE > myid = 5" vs "WHERE my_id = 5" -- This message was sent by Atlassian JIRA (v7.6.3#76005)
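The code linked in the comment normalizes names before matching record fields against table columns, and the normalized form is what ends up in the generated SQL. A simplified model of that behavior (the exact NiFi normalization may differ) shows how the underscore disappears from the update key:

```python
def normalize(name, translate=True):
    """Simplified model of PutDatabaseRecord's column-name translation:
    compare names case-insensitively with underscores stripped. The exact
    normalization in NiFi may differ; this illustrates the reported symptom."""
    return name.upper().replace("_", "") if translate else name

# Field MY_ID and column my_id now match each other, but the normalized
# key is what leaks into the generated statement, yielding a clause like
# "WHERE myid = 5" instead of "WHERE my_id = 5".
print(normalize("my_id"))                   # MYID
print(normalize("MY_ID"))                   # MYID
print(normalize("my_id", translate=False))  # my_id
```

This is why disabling column-name translation works around the bug at the cost of losing case-insensitive matching.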
[jira] [Commented] (MINIFICPP-603) Fill gaps in C2 responses for Windows
[ https://issues.apache.org/jira/browse/MINIFICPP-603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16623482#comment-16623482 ] ASF GitHub Bot commented on MINIFICPP-603: -- Github user phrocker commented on the issue: https://github.com/apache/nifi-minifi-cpp/pull/402 @apiri sorry made a force push. good to go, though > Fill gaps in C2 responses for Windows > - > > Key: MINIFICPP-603 > URL: https://issues.apache.org/jira/browse/MINIFICPP-603 > Project: NiFi MiNiFi C++ > Issue Type: Sub-task >Reporter: Mr TheSegfault >Assignee: Mr TheSegfault >Priority: Major > Fix For: 0.6.0 > > > C2 responses aren't functionally complete in windows. This ticket is meant to > fill those gaps. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] nifi-minifi-cpp issue #402: MINIFICPP-603: Add updates for windows C2 respon...
Github user phrocker commented on the issue: https://github.com/apache/nifi-minifi-cpp/pull/402 @apiri sorry made a force push. good to go, though ---
[jira] [Created] (MINIFICPP-616) Update appveyor.yml to remove restricted building on a branch
Aldrin Piri created MINIFICPP-616: - Summary: Update appveyor.yml to remove restricted building on a branch Key: MINIFICPP-616 URL: https://issues.apache.org/jira/browse/MINIFICPP-616 Project: NiFi MiNiFi C++ Issue Type: Bug Reporter: Aldrin Piri Appveyor is currently configured to only build off of a certain named branch from when the associated functionality was introduced. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-4914) Implement record model processor for Pulsar, i.e. ConsumePulsarRecord, PublishPulsarRecord
[ https://issues.apache.org/jira/browse/NIFI-4914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16623477#comment-16623477 ] ASF GitHub Bot commented on NIFI-4914: -- Github user pvillard31 commented on a diff in the pull request: https://github.com/apache/nifi/pull/2882#discussion_r218858305
--- Diff: nifi-nar-bundles/nifi-pulsar-bundle/nifi-pulsar-processors/pom.xml ---
@@ -0,0 +1,78 @@
+<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+    <modelVersion>4.0.0</modelVersion>
+    <parent>
+        <groupId>org.apache.nifi</groupId>
+        <artifactId>nifi-pulsar-bundle</artifactId>
+        <version>1.8.0-SNAPSHOT</version>
+    </parent>
+    <artifactId>nifi-pulsar-processors</artifactId>
+    <packaging>jar</packaging>
+    <dependencies>
+        <dependency>
+            <groupId>org.apache.nifi</groupId>
+            <artifactId>nifi-api</artifactId>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.nifi</groupId>
+            <artifactId>nifi-record-serialization-service-api</artifactId>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.nifi</groupId>
+            <artifactId>nifi-record</artifactId>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.nifi</groupId>
+            <artifactId>nifi-utils</artifactId>
+            <version>1.8.0-SNAPSHOT</version>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.nifi</groupId>
+            <artifactId>nifi-ssl-context-service-api</artifactId>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.nifi</groupId>
+            <artifactId>nifi-pulsar-client-service-api</artifactId>
+            <version>1.8.0-SNAPSHOT</version>
+            <scope>provided</scope>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.pulsar</groupId>
+            <artifactId>pulsar-client</artifactId>
+            <version>2.0.0-rc1-incubating</version>
--- End diff --
can we set the pulsar version as a property in the root pom of the bundle and reference that version? Also upgrade to 2.1.1 if possible? > Implement record model processor for Pulsar, i.e. ConsumePulsarRecord, > PublishPulsarRecord > -- > > Key: NIFI-4914 > URL: https://issues.apache.org/jira/browse/NIFI-4914 > Project: Apache NiFi > Issue Type: New Feature > Components: Extensions >Affects Versions: 1.6.0 >Reporter: David Kjerrumgaard >Priority: Minor > Original Estimate: 168h > Remaining Estimate: 168h > > Create record-based processors for Apache Pulsar -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-4914) Implement record model processor for Pulsar, i.e. ConsumePulsarRecord, PublishPulsarRecord
[ https://issues.apache.org/jira/browse/NIFI-4914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16623473#comment-16623473 ] ASF GitHub Bot commented on NIFI-4914: -- Github user pvillard31 commented on a diff in the pull request: https://github.com/apache/nifi/pull/2882#discussion_r218850426
--- Diff: nifi-nar-bundles/nifi-pulsar-bundle/nifi-pulsar-client-service-api/pom.xml ---
@@ -0,0 +1,40 @@
+<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+    <modelVersion>4.0.0</modelVersion>
+    <parent>
+        <groupId>org.apache.nifi</groupId>
+        <artifactId>nifi-pulsar-bundle</artifactId>
+        <version>1.8.0-SNAPSHOT</version>
+    </parent>
+    <artifactId>nifi-pulsar-client-service-api</artifactId>
+    <packaging>jar</packaging>
+    <dependencies>
+        <dependency>
+            <groupId>org.apache.nifi</groupId>
+            <artifactId>nifi-api</artifactId>
+            <scope>provided</scope>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.pulsar</groupId>
+            <artifactId>pulsar-client</artifactId>
+            <version>2.0.1-incubating</version>
--- End diff --
2.1.1-incubating has been released 2 days ago - should be available in mvn repo shortly > Implement record model processor for Pulsar, i.e. ConsumePulsarRecord, > PublishPulsarRecord > -- > > Key: NIFI-4914 > URL: https://issues.apache.org/jira/browse/NIFI-4914 > Project: Apache NiFi > Issue Type: New Feature > Components: Extensions >Affects Versions: 1.6.0 >Reporter: David Kjerrumgaard >Priority: Minor > Original Estimate: 168h > Remaining Estimate: 168h > > Create record-based processors for Apache Pulsar -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-4914) Implement record model processor for Pulsar, i.e. ConsumePulsarRecord, PublishPulsarRecord
[ https://issues.apache.org/jira/browse/NIFI-4914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16623480#comment-16623480 ] ASF GitHub Bot commented on NIFI-4914: -- Github user pvillard31 commented on a diff in the pull request: https://github.com/apache/nifi/pull/2882#discussion_r218857973 --- Diff: nifi-nar-bundles/nifi-pulsar-bundle/nifi-pulsar-processors/.gitignore --- @@ -0,0 +1 @@ +/target/ --- End diff -- same here > Implement record model processor for Pulsar, i.e. ConsumePulsarRecord, > PublishPulsarRecord > -- > > Key: NIFI-4914 > URL: https://issues.apache.org/jira/browse/NIFI-4914 > Project: Apache NiFi > Issue Type: New Feature > Components: Extensions >Affects Versions: 1.6.0 >Reporter: David Kjerrumgaard >Priority: Minor > Original Estimate: 168h > Remaining Estimate: 168h > > Create record-based processors for Apache Pulsar -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-4914) Implement record model processor for Pulsar, i.e. ConsumePulsarRecord, PublishPulsarRecord
[ https://issues.apache.org/jira/browse/NIFI-4914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16623475#comment-16623475 ] ASF GitHub Bot commented on NIFI-4914: -- Github user pvillard31 commented on a diff in the pull request: https://github.com/apache/nifi/pull/2882#discussion_r218849480 --- Diff: nifi-nar-bundles/nifi-pulsar-bundle/nifi-pulsar-client-service-api/.gitignore --- @@ -0,0 +1 @@ +/target/ --- End diff -- we probably don't want that file, no? > Implement record model processor for Pulsar, i.e. ConsumePulsarRecord, > PublishPulsarRecord > -- > > Key: NIFI-4914 > URL: https://issues.apache.org/jira/browse/NIFI-4914 > Project: Apache NiFi > Issue Type: New Feature > Components: Extensions >Affects Versions: 1.6.0 >Reporter: David Kjerrumgaard >Priority: Minor > Original Estimate: 168h > Remaining Estimate: 168h > > Create record-based processors for Apache Pulsar -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-4914) Implement record model processor for Pulsar, i.e. ConsumePulsarRecord, PublishPulsarRecord
[ https://issues.apache.org/jira/browse/NIFI-4914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16623479#comment-16623479 ] ASF GitHub Bot commented on NIFI-4914: -- Github user pvillard31 commented on a diff in the pull request: https://github.com/apache/nifi/pull/2882#discussion_r218860810
--- Diff: nifi-nar-bundles/nifi-pulsar-bundle/nifi-pulsar-processors/src/main/java/org/apache/nifi/processors/pulsar/AbstractPulsarConsumerProcessor.java ---
@@ -0,0 +1,412 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.pulsar;
+
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Set;
+import java.util.concurrent.ExecutorCompletionService;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+import java.util.concurrent.RejectedExecutionException;
+import java.util.concurrent.TimeUnit;
+
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.annotation.lifecycle.OnStopped;
+import org.apache.nifi.annotation.lifecycle.OnUnscheduled;
+import org.apache.nifi.components.AllowableValue;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.ValidationContext;
+import org.apache.nifi.components.ValidationResult;
+import org.apache.nifi.expression.ExpressionLanguageScope;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.AbstractProcessor;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.pulsar.PulsarClientService;
+import org.apache.nifi.pulsar.cache.LRUCache;
+import org.apache.pulsar.client.api.Consumer;
+import org.apache.pulsar.client.api.ConsumerBuilder;
+import org.apache.pulsar.client.api.ConsumerCryptoFailureAction;
+import org.apache.pulsar.client.api.Message;
+import org.apache.pulsar.client.api.PulsarClientException;
+import org.apache.pulsar.client.api.SubscriptionType;
+
+public abstract class AbstractPulsarConsumerProcessor extends AbstractProcessor {
+
+    static final AllowableValue EXCLUSIVE = new AllowableValue("Exclusive", "Exclusive", "There can be only 1 consumer on the same topic with the same subscription name");
+    static final AllowableValue SHARED = new AllowableValue("Shared", "Shared",
+        "Multiple consumer will be able to use the same subscription name and the messages");
+    static final AllowableValue FAILOVER = new AllowableValue("Failover", "Failover", "Multiple consumer will be able to use the same subscription name but only 1 consumer "
+        + "will receive the messages. If that consumer disconnects, one of the other connected consumers will start receiving messages");
+
+    static final AllowableValue CONSUME = new AllowableValue(ConsumerCryptoFailureAction.CONSUME.name(), "Consume",
+        "Mark the message as consumed despite being unable to decrypt the contents");
+    static final AllowableValue DISCARD = new AllowableValue(ConsumerCryptoFailureAction.DISCARD.name(), "Discard",
+        "Discard the message and don't perform any addtional processing on the message");
+    static final AllowableValue FAIL = new AllowableValue(ConsumerCryptoFailureAction.FAIL.name(), "Fail",
+        "Report a failure condition, and the route the message contents to the FAILED relationship.");
+
+    public static final Relationship REL_SUCCESS = new Relationship.Builder()
+        .name("success")
+        .description("FlowFiles for which all content was consumed from Pulsar.")
+        .build();
+
+    public static final
[jira] [Commented] (NIFI-4914) Implement record model processor for Pulsar, i.e. ConsumePulsarRecord, PublishPulsarRecord
[ https://issues.apache.org/jira/browse/NIFI-4914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16623476#comment-16623476 ] ASF GitHub Bot commented on NIFI-4914: -- Github user pvillard31 commented on a diff in the pull request: https://github.com/apache/nifi/pull/2882#discussion_r218858965

--- Diff: nifi-nar-bundles/nifi-pulsar-bundle/nifi-pulsar-processors/src/main/java/org/apache/nifi/processors/pulsar/AbstractPulsarConsumerProcessor.java ---
@@ -0,0 +1,412 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.pulsar;
+
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Set;
+import java.util.concurrent.ExecutorCompletionService;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+import java.util.concurrent.RejectedExecutionException;
+import java.util.concurrent.TimeUnit;
+
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.annotation.lifecycle.OnStopped;
+import org.apache.nifi.annotation.lifecycle.OnUnscheduled;
+import org.apache.nifi.components.AllowableValue;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.ValidationContext;
+import org.apache.nifi.components.ValidationResult;
+import org.apache.nifi.expression.ExpressionLanguageScope;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.AbstractProcessor;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.pulsar.PulsarClientService;
+import org.apache.nifi.pulsar.cache.LRUCache;
+import org.apache.pulsar.client.api.Consumer;
+import org.apache.pulsar.client.api.ConsumerBuilder;
+import org.apache.pulsar.client.api.ConsumerCryptoFailureAction;
+import org.apache.pulsar.client.api.Message;
+import org.apache.pulsar.client.api.PulsarClientException;
+import org.apache.pulsar.client.api.SubscriptionType;
+
+public abstract class AbstractPulsarConsumerProcessor extends AbstractProcessor {
+
+    static final AllowableValue EXCLUSIVE = new AllowableValue("Exclusive", "Exclusive", "There can be only 1 consumer on the same topic with the same subscription name");
+    static final AllowableValue SHARED = new AllowableValue("Shared", "Shared",
+        "Multiple consumer will be able to use the same subscription name and the messages");
+    static final AllowableValue FAILOVER = new AllowableValue("Failover", "Failover", "Multiple consumer will be able to use the same subscription name but only 1 consumer "
+        + "will receive the messages. If that consumer disconnects, one of the other connected consumers will start receiving messages");
+
+    static final AllowableValue CONSUME = new AllowableValue(ConsumerCryptoFailureAction.CONSUME.name(), "Consume",
+        "Mark the message as consumed despite being unable to decrypt the contents");
+    static final AllowableValue DISCARD = new AllowableValue(ConsumerCryptoFailureAction.DISCARD.name(), "Discard",
+        "Discard the message and don't perform any addtional processing on the message");
+    static final AllowableValue FAIL = new AllowableValue(ConsumerCryptoFailureAction.FAIL.name(), "Fail",
+        "Report a failure condition, and the route the message contents to the FAILED relationship.");
--- End diff --

typo: "and then route the message content"
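The Exclusive / Shared / Failover semantics described by these allowable values can be illustrated with a small self-contained sketch. This is plain Java with no Pulsar dependency; the class and method names are invented for illustration and are not the NiFi or Pulsar API:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative model of Failover subscription delivery: only the first
// connected consumer receives messages; when it disconnects, the next
// connected consumer takes over.
public class SubscriptionSketch {
    static final Deque<String> consumers = new ArrayDeque<>();

    // the head of the deque models the currently active consumer
    static String deliverFailover(String msg) {
        return consumers.peekFirst() + " <- " + msg;
    }

    public static void main(String[] args) {
        consumers.add("consumer-1");
        consumers.add("consumer-2");
        System.out.println(deliverFailover("m1")); // consumer-1 is active
        consumers.removeFirst();                   // consumer-1 disconnects
        System.out.println(deliverFailover("m2")); // consumer-2 takes over
    }
}
```

In the real processor, the selected AllowableValue would be mapped to a Pulsar SubscriptionType and applied when building the consumer through the Pulsar client's ConsumerBuilder.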
[jira] [Commented] (NIFI-4914) Implement record model processor for Pulsar, i.e. ConsumePulsarRecord, PublishPulsarRecord
[ https://issues.apache.org/jira/browse/NIFI-4914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16623478#comment-16623478 ] ASF GitHub Bot commented on NIFI-4914: -- Github user pvillard31 commented on a diff in the pull request: https://github.com/apache/nifi/pull/2882#discussion_r218857914 --- Diff: nifi-nar-bundles/nifi-pulsar-bundle/nifi-pulsar-nar/.gitignore --- @@ -0,0 +1,2 @@ +/target/ --- End diff -- same comment here > Implement record model processor for Pulsar, i.e. ConsumePulsarRecord, > PublishPulsarRecord > -- > > Key: NIFI-4914 > URL: https://issues.apache.org/jira/browse/NIFI-4914 > Project: Apache NiFi > Issue Type: New Feature > Components: Extensions >Affects Versions: 1.6.0 >Reporter: David Kjerrumgaard >Priority: Minor > Original Estimate: 168h > Remaining Estimate: 168h > > Create record-based processors for Apache Pulsar -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (MINIFICPP-616) Update appveyor.yml to remove restricted building on a branch
[ https://issues.apache.org/jira/browse/MINIFICPP-616?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aldrin Piri reassigned MINIFICPP-616: - Assignee: Aldrin Piri > Update appveyor.yml to remove restricted building on a branch > - > > Key: MINIFICPP-616 > URL: https://issues.apache.org/jira/browse/MINIFICPP-616 > Project: NiFi MiNiFi C++ > Issue Type: Bug >Reporter: Aldrin Piri >Assignee: Aldrin Piri >Priority: Major > > Appveyor is currently configured to only build off of a certain named branch > from when the associated functionality was introduced. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-4914) Implement record model processor for Pulsar, i.e. ConsumePulsarRecord, PublishPulsarRecord
[ https://issues.apache.org/jira/browse/NIFI-4914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16623474#comment-16623474 ] ASF GitHub Bot commented on NIFI-4914: -- Github user pvillard31 commented on a diff in the pull request: https://github.com/apache/nifi/pull/2882#discussion_r218851139 --- Diff: nifi-nar-bundles/nifi-pulsar-bundle/nifi-pulsar-client-service/.gitignore --- @@ -0,0 +1 @@ +/target/ --- End diff -- same here > Implement record model processor for Pulsar, i.e. ConsumePulsarRecord, > PublishPulsarRecord > -- > > Key: NIFI-4914 > URL: https://issues.apache.org/jira/browse/NIFI-4914 > Project: Apache NiFi > Issue Type: New Feature > Components: Extensions >Affects Versions: 1.6.0 >Reporter: David Kjerrumgaard >Priority: Minor > Original Estimate: 168h > Remaining Estimate: 168h > > Create record-based processors for Apache Pulsar -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] nifi pull request #2882: NIFI-4914
Github user pvillard31 commented on a diff in the pull request: https://github.com/apache/nifi/pull/2882#discussion_r218857973 --- Diff: nifi-nar-bundles/nifi-pulsar-bundle/nifi-pulsar-processors/.gitignore --- @@ -0,0 +1 @@ +/target/ --- End diff -- same here ---
[GitHub] nifi pull request #2882: NIFI-4914
Github user pvillard31 commented on a diff in the pull request: https://github.com/apache/nifi/pull/2882#discussion_r218858305

--- Diff: nifi-nar-bundles/nifi-pulsar-bundle/nifi-pulsar-processors/pom.xml ---
@@ -0,0 +1,78 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+    <modelVersion>4.0.0</modelVersion>
+    <parent>
+        <groupId>org.apache.nifi</groupId>
+        <artifactId>nifi-pulsar-bundle</artifactId>
+        <version>1.8.0-SNAPSHOT</version>
+    </parent>
+    <artifactId>nifi-pulsar-processors</artifactId>
+    <packaging>jar</packaging>
+    <dependencies>
+        <dependency>
+            <groupId>org.apache.nifi</groupId>
+            <artifactId>nifi-api</artifactId>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.nifi</groupId>
+            <artifactId>nifi-record-serialization-service-api</artifactId>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.nifi</groupId>
+            <artifactId>nifi-record</artifactId>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.nifi</groupId>
+            <artifactId>nifi-utils</artifactId>
+            <version>1.8.0-SNAPSHOT</version>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.nifi</groupId>
+            <artifactId>nifi-ssl-context-service-api</artifactId>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.nifi</groupId>
+            <artifactId>nifi-pulsar-client-service-api</artifactId>
+            <version>1.8.0-SNAPSHOT</version>
+            <scope>provided</scope>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.pulsar</groupId>
+            <artifactId>pulsar-client</artifactId>
+            <version>2.0.0-rc1-incubating</version>
--- End diff --

can we set the pulsar version as a property in the root pom of the bundle and reference that version? Also upgrade to 2.1.1 if possible?

---
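The version-as-a-property refactoring suggested here would look roughly like the following sketch (the property name `pulsar.version` is illustrative, not taken from the PR):

```xml
<!-- in the bundle root pom (nifi-pulsar-bundle/pom.xml) -->
<properties>
    <pulsar.version>2.1.1-incubating</pulsar.version>
</properties>

<!-- in each child module, the dependency then references the property -->
<dependency>
    <groupId>org.apache.pulsar</groupId>
    <artifactId>pulsar-client</artifactId>
    <version>${pulsar.version}</version>
</dependency>
```

This way a single edit in the root pom bumps the Pulsar version for every module in the bundle.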
[GitHub] nifi pull request #2882: NIFI-4914
Github user pvillard31 commented on a diff in the pull request: https://github.com/apache/nifi/pull/2882#discussion_r218860810

--- Diff: nifi-nar-bundles/nifi-pulsar-bundle/nifi-pulsar-processors/src/main/java/org/apache/nifi/processors/pulsar/AbstractPulsarConsumerProcessor.java ---
@@ -0,0 +1,412 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.pulsar;
+
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Set;
+import java.util.concurrent.ExecutorCompletionService;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+import java.util.concurrent.RejectedExecutionException;
+import java.util.concurrent.TimeUnit;
+
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.annotation.lifecycle.OnStopped;
+import org.apache.nifi.annotation.lifecycle.OnUnscheduled;
+import org.apache.nifi.components.AllowableValue;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.ValidationContext;
+import org.apache.nifi.components.ValidationResult;
+import org.apache.nifi.expression.ExpressionLanguageScope;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.AbstractProcessor;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.pulsar.PulsarClientService;
+import org.apache.nifi.pulsar.cache.LRUCache;
+import org.apache.pulsar.client.api.Consumer;
+import org.apache.pulsar.client.api.ConsumerBuilder;
+import org.apache.pulsar.client.api.ConsumerCryptoFailureAction;
+import org.apache.pulsar.client.api.Message;
+import org.apache.pulsar.client.api.PulsarClientException;
+import org.apache.pulsar.client.api.SubscriptionType;
+
+public abstract class AbstractPulsarConsumerProcessor extends AbstractProcessor {
+
+    static final AllowableValue EXCLUSIVE = new AllowableValue("Exclusive", "Exclusive", "There can be only 1 consumer on the same topic with the same subscription name");
+    static final AllowableValue SHARED = new AllowableValue("Shared", "Shared",
+        "Multiple consumer will be able to use the same subscription name and the messages");
+    static final AllowableValue FAILOVER = new AllowableValue("Failover", "Failover", "Multiple consumer will be able to use the same subscription name but only 1 consumer "
+        + "will receive the messages. If that consumer disconnects, one of the other connected consumers will start receiving messages");
+
+    static final AllowableValue CONSUME = new AllowableValue(ConsumerCryptoFailureAction.CONSUME.name(), "Consume",
+        "Mark the message as consumed despite being unable to decrypt the contents");
+    static final AllowableValue DISCARD = new AllowableValue(ConsumerCryptoFailureAction.DISCARD.name(), "Discard",
+        "Discard the message and don't perform any addtional processing on the message");
+    static final AllowableValue FAIL = new AllowableValue(ConsumerCryptoFailureAction.FAIL.name(), "Fail",
+        "Report a failure condition, and the route the message contents to the FAILED relationship.");
+
+    public static final Relationship REL_SUCCESS = new Relationship.Builder()
+        .name("success")
+        .description("FlowFiles for which all content was consumed from Pulsar.")
+        .build();
+
+    public static final Relationship REL_FAILURE = new Relationship.Builder()
--- End diff --

If the consumer does not allow input, in what case do we route flow files to that relationship, and what will be the content/attributes of those flow files?

---
[GitHub] nifi pull request #2882: NIFI-4914
Github user pvillard31 commented on a diff in the pull request: https://github.com/apache/nifi/pull/2882#discussion_r218850426

--- Diff: nifi-nar-bundles/nifi-pulsar-bundle/nifi-pulsar-client-service-api/pom.xml ---
@@ -0,0 +1,40 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+    <modelVersion>4.0.0</modelVersion>
+    <parent>
+        <groupId>org.apache.nifi</groupId>
+        <artifactId>nifi-pulsar-bundle</artifactId>
+        <version>1.8.0-SNAPSHOT</version>
+    </parent>
+    <artifactId>nifi-pulsar-client-service-api</artifactId>
+    <packaging>jar</packaging>
+    <dependencies>
+        <dependency>
+            <groupId>org.apache.nifi</groupId>
+            <artifactId>nifi-api</artifactId>
+            <scope>provided</scope>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.pulsar</groupId>
+            <artifactId>pulsar-client</artifactId>
+            <version>2.0.1-incubating</version>
--- End diff --

2.1.1-incubating has been released 2 days ago - should be available in mvn repo shortly

---
[GitHub] nifi pull request #2882: NIFI-4914
Github user pvillard31 commented on a diff in the pull request: https://github.com/apache/nifi/pull/2882#discussion_r218849480 --- Diff: nifi-nar-bundles/nifi-pulsar-bundle/nifi-pulsar-client-service-api/.gitignore --- @@ -0,0 +1 @@ +/target/ --- End diff -- we probably don't want that file, no? ---
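The reviewer's suggestion of dropping the per-module .gitignore files works because a single root-level `target/` pattern already covers every module's build directory. A minimal check of that behavior (temporary repo and paths are made up for illustration):

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
# one root-level pattern matches target/ directories at any depth,
# so per-module .gitignore files are unnecessary
printf 'target/\n' > .gitignore
mkdir -p nifi-pulsar-client-service/target
touch nifi-pulsar-client-service/target/out.jar
# git check-ignore prints the path (and exits 0) when it is ignored
result=$(git check-ignore nifi-pulsar-client-service/target/out.jar)
echo "$result"
```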
[jira] [Commented] (MINIFICPP-603) Fill gaps in C2 responses for Windows
[ https://issues.apache.org/jira/browse/MINIFICPP-603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16623470#comment-16623470 ] ASF GitHub Bot commented on MINIFICPP-603: -- Github user apiri commented on the issue: https://github.com/apache/nifi-minifi-cpp/pull/402 reviewing > Fill gaps in C2 responses for Windows > - > > Key: MINIFICPP-603 > URL: https://issues.apache.org/jira/browse/MINIFICPP-603 > Project: NiFi MiNiFi C++ > Issue Type: Sub-task >Reporter: Mr TheSegfault >Assignee: Mr TheSegfault >Priority: Major > Fix For: 0.6.0 > > > C2 responses aren't functionally complete on Windows. This ticket is meant to > fill those gaps. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-5560) Sub directory(symbolic link to directory) files are not getting listed in ListSFTP(ListSFTP does not Follow symbolic links)
[ https://issues.apache.org/jira/browse/NIFI-5560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16623322#comment-16623322 ] ASF GitHub Bot commented on NIFI-5560: -- Github user pvillard31 commented on a diff in the pull request: https://github.com/apache/nifi/pull/3000#discussion_r219435622

--- Diff: nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/util/FileTransfer.java ---
@@ -141,6 +141,13 @@ default String getAbsolutePath(FlowFile flowFile, String remotePath) throws IOEx
         .defaultValue("false")
         .allowableValues("true", "false")
         .build();
+    public static final PropertyDescriptor FOLLOW_SYMLINK = new PropertyDescriptor.Builder()
--- End diff --

Yep, you can choose a better name, like just `follow-symlink`. The idea is: for new properties we're adding to existing/new processors, we want to use both `.name()` and `.displayName()`. The latter is what is actually displayed in the UI and can be changed without breaking backward compatibility, while the name is what is used to uniquely reference the property; we cannot change the name without breaking existing flows.

> Sub directory(symbolic link to directory) files are not getting listed in
> ListSFTP(ListSFTP does not Follow symbolic links)
> ---
>
> Key: NIFI-5560
> URL: https://issues.apache.org/jira/browse/NIFI-5560
> Project: Apache NiFi
> Issue Type: Bug
> Affects Versions: 1.7.1
> Reporter: Hemantha kumara M S
> Assignee: Hemantha kumara M S
> Priority: Major
>
> *Here is the configuration*
>
> *SFTP Server side:*
> -bash-4.2$ mkdir -p /tmp/testData
> -bash-4.2$
> -bash-4.2$ mkdir -p /tmp/toRead
> -bash-4.2$ ln -s /tmp/testData /tmp/toRead/data1
> -bash-4.2$ touch /tmp/testData/1.txt
> -bash-4.2$ touch /tmp/testData/2.txt
> -bash-4.2$ touch /tmp/toRead/t.txt
> -bash-4.2$ mkdir /tmp/toRead/data2
> -bash-4.2$ touch /tmp/toRead/data2/22.txt
> -bash-4.2$ cd /tmp/toRead/
> -bash-4.2$ tree
> .
> ├── data1 -> /tmp/testData > ├── data2 > │ └── 22.txt > └── t.txt > 2 directories, 2 files > -bash-4.2$ pwd > /tmp/toRead > -bash-4.2$ tree > . > ├── data1 -> /tmp/testData > ├── data2 > │ └── 22.txt > └── t.txt > 2 directories, 2 files > -bash-4.2$ touch data > data1/ data2/ > -bash-4.2$ touch data2/22.txt > -bash-4.2$ touch t.txt > -bash-4.2$ tree /tmp/testData > /tmp/testData > ├── 1.txt > └── 2.txt > 0 directories, 2 files > > *Nifi:* > Configured ListSFTP +Remote Path+ to +/tmp/toRead/+ and +Search Recursively+ > to +true+ > > *+Expected result:+* > Should list 4 files(1.txt, 2.txt, t.txt, data2/22.txt) > *+Actual result:+* > listed only two files(t.txt, data2/22.txt) -- This message was sent by Atlassian JIRA (v7.6.3#76005)
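The `.name()` vs `.displayName()` distinction explained in the review comment above can be sketched with a plain-Java stand-in (this is not the real org.apache.nifi.components.PropertyDescriptor API; it only models why the two fields have different stability guarantees):

```java
import java.util.HashMap;
import java.util.Map;

// Stand-in for NiFi's PropertyDescriptor: `name` is the stable machine key
// that saved flows reference, while `displayName` is only a UI label and
// may be reworded later without breaking backward compatibility.
public class PropertySketch {
    final String name;        // stable key; changing it breaks existing flows
    final String displayName; // UI label; safe to change between releases

    PropertySketch(String name, String displayName) {
        this.name = name;
        this.displayName = displayName;
    }

    public static void main(String[] args) {
        PropertySketch v1 = new PropertySketch("follow-symlink", "Follow Symlinks");
        // a persisted flow stores configured values keyed by `name` only
        Map<String, String> savedFlow = new HashMap<>();
        savedFlow.put(v1.name, "true");
        // a later release rewords the label; the saved flow still resolves
        PropertySketch v2 = new PropertySketch("follow-symlink", "Follow symbolic links");
        System.out.println(savedFlow.get(v2.name)); // prints true
    }
}
```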