[GitHub] [nifi] AnujJain7 commented on pull request #2724: NIFI-5133: Implemented Google Cloud PubSub Processors
AnujJain7 commented on pull request #2724: URL: https://github.com/apache/nifi/pull/2724#issuecomment-632985702 @zenfenan, Thanks for the code fix. I have raised this issue with the NiFi development team as well, and they have concerns about testing (as you said, these processors were created long ago). I am using version 1.9.2 of this processor, so I suggested they make the change in a newer version, such as 1.9.3, so it will not impact the latest versions and we can check it in our environment. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Created] (MINIFICPP-1238) Implement features to prevent working with incomplete flow
Arpad Boda created MINIFICPP-1238: - Summary: Implement features to prevent working with incomplete flow Key: MINIFICPP-1238 URL: https://issues.apache.org/jira/browse/MINIFICPP-1238 Project: Apache NiFi MiNiFi C++ Issue Type: Epic Affects Versions: 0.7.0 Reporter: Arpad Boda We need to introduce a config property to enable these to maintain backward compatibility. When this is on, we should do the following: * Schedulers shouldn't schedule processors with non-terminated relationships * Connections with invalid source or target should not be created * Such events should be properly logged * We should define a part of C2 heartbeats that include such errors to be able to report them * Event driven processors without incoming connections shouldn't be scheduled Further items might be added to the list above. Created as an epic to be able to gather multiple changes related to this topic. -- This message was sent by Atlassian Jira (v8.3.4#803005)
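The checks listed in the epic can be sketched as a single validation pass over the flow config. The following is a minimal, hypothetical model (plain Java stand-ins, not the MiNiFi C++ API) illustrating three of the proposed rules: every relationship must be connected or auto-terminated, and event-driven processors need an incoming connection.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

class FlowValidator {
    // Simplified stand-in for a processor node in the flow config.
    static class Proc {
        final String name;
        final boolean eventDriven;
        final Set<String> relationships;
        final Set<String> autoTerminated = new HashSet<>();
        final Set<String> connected = new HashSet<>();
        boolean hasIncomingConnection;

        Proc(String name, boolean eventDriven, String... rels) {
            this.name = name;
            this.eventDriven = eventDriven;
            this.relationships = new HashSet<>(Arrays.asList(rels));
        }
    }

    // Collects config errors instead of silently scheduling a broken flow;
    // the same list could feed both the log and a C2 heartbeat section.
    static List<String> validate(Collection<Proc> flow) {
        List<String> errors = new ArrayList<>();
        for (Proc p : flow) {
            for (String rel : p.relationships) {
                if (!p.autoTerminated.contains(rel) && !p.connected.contains(rel)) {
                    errors.add(p.name + ": relationship '" + rel
                            + "' is neither connected nor auto-terminated");
                }
            }
            if (p.eventDriven && !p.hasIncomingConnection) {
                errors.add(p.name + ": event-driven processor has no incoming connection");
            }
        }
        return errors;
    }
}
```

A scheduler guarded by such a pass would simply refuse to schedule any processor that contributed an error, matching the epic's backward-compatibility switch by running the pass only when the config property is enabled.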
[jira] [Created] (MINIFICPP-1237) Event driven scheduler shouldn't schedule processors without incoming connection
Arpad Boda created MINIFICPP-1237: - Summary: Event driven scheduler shouldn't schedule processors without incoming connection Key: MINIFICPP-1237 URL: https://issues.apache.org/jira/browse/MINIFICPP-1237 Project: Apache NiFi MiNiFi C++ Issue Type: Improvement Affects Versions: 0.7.0 Reporter: Arpad Boda Assignee: Arpad Boda Fix For: 0.8.0 Scheduling these makes no sense, as they cannot receive flowfiles to work with. This indicates a flawed flow config, so proper logging should be applied.
[jira] [Comment Edited] (NIFIREG-147) Add Keycloak authentication method
[ https://issues.apache.org/jira/browse/NIFIREG-147?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17114352#comment-17114352 ] Christian Englert edited comment on NIFIREG-147 at 5/22/20, 7:22 PM: - Currently working on a [KeyCloak|https://www.keycloak.org/] UserGroupProvider [https://github.com/ChrisEnglert/nifi-addons#keycloak-nifi-registry-nifi-registry-keycloak] This uses a KeyCloak Admin API Client to read the users and groups of a KeyCloak realm and provide them for NiFi-Registry (readonly) I'm having some classloader issues that should be resolved by https://issues.apache.org/jira/browse/NIFIREG-394 In conjunction with OIDC Password login I can use a KeyCloak realm to manage NiFi-Registry users: [https://github.com/ChrisEnglert/nifi-addons#nifi-registry-oidc-nifi-registry-oidc] was (Author: chriseng): Currently working on a [KeyCloak|https://www.keycloak.org/] UserGroupProvider [https://github.com/ChrisEnglert/nifi-addons#keycloak-nifi-registry-nifi-registry-keycloak] This uses a KeyCloak Admin API Client to read the users and groups of a KeyCloak realm ad provide them for NiFi-Registry (readonly) I'm having some classloader issues that should be resolved by https://issues.apache.org/jira/browse/NIFIREG-394 In conjunction with OIDC Password login I can use a KeyCloak realm to manage NiFi-Registry users: [https://github.com/ChrisEnglert/nifi-addons#nifi-registry-oidc-nifi-registry-oidc] > Add Keycloak authentication method > -- > > Key: NIFIREG-147 > URL: https://issues.apache.org/jira/browse/NIFIREG-147 > Project: NiFi Registry > Issue Type: Improvement >Reporter: Gregory Reshetniak >Priority: Major > > Keycloak does implement a lot of related functionality, including groups, > users and such. It would be great to have first-class integration available. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (NIFIREG-147) Add Keycloak authentication method
[ https://issues.apache.org/jira/browse/NIFIREG-147?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17114352#comment-17114352 ] Christian Englert commented on NIFIREG-147: --- Currently working on a [KeyCloak|https://www.keycloak.org/] UserGroupProvider [https://github.com/ChrisEnglert/nifi-addons#keycloak-nifi-registry-nifi-registry-keycloak] This uses a KeyCloak Admin API Client to read the users and groups of a KeyCloak realm and provide them for NiFi-Registry (readonly) I'm having some classloader issues that should be resolved by https://issues.apache.org/jira/browse/NIFIREG-394 In conjunction with OIDC Password login I can use a KeyCloak realm to manage NiFi-Registry users: [https://github.com/ChrisEnglert/nifi-addons#nifi-registry-oidc-nifi-registry-oidc]
[jira] [Commented] (NIFI-7477) Get the details in ValidateRecord as an attribute
[ https://issues.apache.org/jira/browse/NIFI-7477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17114333#comment-17114333 ] Jairo Henao commented on NIFI-7477: --- [~markap14] "You have hit the nail on the head". My scenario: I receive a JSON file and I want to give users details about what is wrong with its content. I understand the concern about memory; it's valid. But what about the ExtractText processor, where I can load the entire FlowFile content as an attribute? (Sure, we shouldn't, but at least it lets me control how much I'm willing to load via the "Maximum Capture Group Length" property.) How about we do something similar? Currently I have to make a call to the NiFi REST API from within the same NiFi integration (that's funny). > Get the details in ValidateRecord as an attribute > - > > Key: NIFI-7477 > URL: https://issues.apache.org/jira/browse/NIFI-7477 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Affects Versions: 1.11.4 >Reporter: Jairo Henao >Priority: Minor > Labels: features > > When validation fails in ValidateRecord, the details are not easy to access. > Details are sent as an event to Provenance. To obtain them, we should invoke > the NIFI REST-API or export them via the Site-to-Site Reporting Task. > The ValidateRecord processor should optionally allow us to configure an > attribute to leave the details text here.
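The ExtractText analogy suggests a simple shape for the proposed feature: write the validation details into an attribute, capped at a user-configured maximum length. This is a hypothetical sketch only; the attribute name and the cap property are illustrative, not an existing NiFi API.

```java
import java.util.HashMap;
import java.util.Map;

class ValidationDetailCapture {
    // Truncates the details to maxLength characters before exposing them as
    // a FlowFile attribute, bounding the memory held per FlowFile -- the same
    // safeguard ExtractText's "Maximum Capture Group Length" provides.
    static Map<String, String> detailAttribute(String details, int maxLength) {
        String value = details.length() > maxLength
                ? details.substring(0, maxLength)
                : details;
        Map<String, String> attrs = new HashMap<>();
        // Attribute name is an assumption for illustration.
        attrs.put("validaterecord.invalid.details", value);
        return attrs;
    }
}
```

With such a cap, users who want the full details can raise the limit knowingly, while the default keeps heap usage bounded.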
[jira] [Updated] (NIFI-7482) InvokeHTTP should not be final
[ https://issues.apache.org/jira/browse/NIFI-7482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andy LoPresto updated NIFI-7482: Fix Version/s: 1.12.0 Resolution: Fixed Status: Resolved (was: Patch Available) > InvokeHTTP should not be final > -- > > Key: NIFI-7482 > URL: https://issues.apache.org/jira/browse/NIFI-7482 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Affects Versions: 1.11.4 >Reporter: Andy LoPresto >Assignee: Andy LoPresto >Priority: Major > Labels: InvokeHTTP, custom_processor, extension > Fix For: 1.12.0 > > Time Spent: 0.5h > Remaining Estimate: 0h > > The {{InvokeHTTP}} processor is defined as {{final}}, which means custom > processors cannot extend it. I see no reason this class should be final, as > other standard processors are not defined as such with no visible problems. > {{InvokeHTTP}} is a processor that users frequently want to modify, and > allowing custom extensions is much easier than duplicating all native > behavior in a new processor. -- This message was sent by Atlassian Jira (v8.3.4#803005)
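The fix amounts to dropping the `final` modifier from the class declaration. In plain Java terms (stand-in classes, not the real NiFi sources, with a hypothetical override point), the difference is:

```java
// Previously `final class InvokeHTTP { ... }` -- the subclass below would
// then be a compile error ("cannot inherit from final InvokeHTTP").
class InvokeHTTP {
    String buildUserAgent() {
        return "nifi-invokehttp";
    }
}

// With `final` removed, a custom processor can extend and override selected
// behavior instead of duplicating the whole implementation.
class CustomInvokeHTTP extends InvokeHTTP {
    @Override
    String buildUserAgent() {
        return super.buildUserAgent() + "-custom";
    }
}
```

This is why removing `final` is low-risk for the base class itself: existing behavior is unchanged, and only subclasses opt in to differences.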
[GitHub] [nifi] alopresto closed pull request #4291: NIFI-7482 Changed InvokeHTTP to be extensible
alopresto closed pull request #4291: URL: https://github.com/apache/nifi/pull/4291
[jira] [Commented] (NIFI-7482) InvokeHTTP should not be final
[ https://issues.apache.org/jira/browse/NIFI-7482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17114324#comment-17114324 ] ASF subversion and git services commented on NIFI-7482: --- Commit 97a919a3be37c8e78b623ffacf9b1f22db534644 in nifi's branch refs/heads/master from Andy LoPresto [ https://gitbox.apache.org/repos/asf?p=nifi.git;h=97a919a ] NIFI-7482 Changed InvokeHTTP to be extensible. Added unit test. This closes #4291. Signed-off-by: Arpad Boda
[jira] [Created] (MINIFICPP-1236) GetFile processor's "Input Directory" property shouldn't have default value
Arpad Boda created MINIFICPP-1236: - Summary: GetFile processor's "Input Directory" property shouldn't have default value Key: MINIFICPP-1236 URL: https://issues.apache.org/jira/browse/MINIFICPP-1236 Project: Apache NiFi MiNiFi C++ Issue Type: Improvement Affects Versions: 0.7.0 Reporter: Arpad Boda Assignee: Ádám Markovics Fix For: 0.8.0 Currently the default value is ".", which is a loaded gun: with the default config, the processor runs and removes MiNiFi's own config, logs, and repos. If someone wants to uninstall MiNiFi using MiNiFi, that's fine, but in that case they have to set the value of the property explicitly.
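The safer behavior the ticket asks for can be sketched as follows. This is an illustrative stand-in (not the MiNiFi C++ sources): with no default, a missing "Input Directory" becomes an explicit configuration error instead of silently falling back to "." -- the agent's own working directory, whose contents GetFile would then pick up and delete.

```java
class GetFileConfig {
    // Fails fast when the property is unset rather than defaulting to ".".
    static String resolveInputDirectory(String configured) {
        if (configured == null || configured.trim().isEmpty()) {
            throw new IllegalStateException(
                    "GetFile: 'Input Directory' must be set explicitly");
        }
        return configured;
    }
}
```

Making the property required shifts the failure from silent data loss at runtime to a visible error at flow-load time.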
[GitHub] [nifi] alopresto commented on pull request #4291: NIFI-7482 Changed InvokeHTTP to be extensible
alopresto commented on pull request #4291: URL: https://github.com/apache/nifi/pull/4291#issuecomment-632848422 Thanks Arpad. Merging.
[GitHub] [nifi] esecules commented on a change in pull request #4265: NIFI-7434: Endpoint suffix property in AzureStorageAccount NIFI processors
esecules commented on a change in pull request #4265: URL: https://github.com/apache/nifi/pull/4265#discussion_r429399731 ## File path: nifi-nar-bundles/nifi-azure-bundle/nifi-azure-processors/src/test/java/org/apache/nifi/services/azure/storage/TestAzureStorageCredentialsControllerServiceLookup.java ## @@ -71,28 +73,32 @@ public void testLookupServiceA() { final AzureStorageCredentialsDetails storageCredentialsDetails = lookupService.getStorageCredentialsDetails(attributes); assertNotNull(storageCredentialsDetails); assertEquals("Account_A", storageCredentialsDetails.getStorageAccountName()); +assertEquals("accountsuffix.core.windows.net", storageCredentialsDetails.getStorageSuffix()); } @Test public void testLookupServiceB() { Review comment: What is the default behavior when the suffix is null? In testLookupServiceB, or in a new test, can we verify that we're maintaining the current behavior?
[GitHub] [nifi] esecules commented on a change in pull request #4265: NIFI-7434: Endpoint suffix property in AzureStorageAccount NIFI processors
esecules commented on a change in pull request #4265: URL: https://github.com/apache/nifi/pull/4265#discussion_r429399731 ## File path: nifi-nar-bundles/nifi-azure-bundle/nifi-azure-processors/src/test/java/org/apache/nifi/services/azure/storage/TestAzureStorageCredentialsControllerServiceLookup.java ## @@ -71,28 +73,32 @@ public void testLookupServiceA() { final AzureStorageCredentialsDetails storageCredentialsDetails = lookupService.getStorageCredentialsDetails(attributes); assertNotNull(storageCredentialsDetails); assertEquals("Account_A", storageCredentialsDetails.getStorageAccountName()); +assertEquals("accountsuffix.core.windows.net", storageCredentialsDetails.getStorageSuffix()); } @Test public void testLookupServiceB() { Review comment: What is the default behavior when the suffix is null? Can this test verify that we're maintaining the current behavior?
[GitHub] [nifi] esecules commented on pull request #4286: NIFI-7386: Azurite emulator support
esecules commented on pull request #4286: URL: https://github.com/apache/nifi/pull/4286#issuecomment-632842869 LGTM! What about you, @jfrazee?
[GitHub] [nifi] esecules commented on a change in pull request #4286: NIFI-7386: Azurite emulator support
esecules commented on a change in pull request #4286: URL: https://github.com/apache/nifi/pull/4286#discussion_r429396411 ## File path: nifi-nar-bundles/nifi-azure-bundle/nifi-azure-processors/src/test/java/org/apache/nifi/services/azure/storage/TestAzureStorageEmulatorCredentialsControllerService.java ## @@ -0,0 +1,56 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.nifi.services.azure.storage; + +import org.apache.nifi.reporting.InitializationException; +import org.apache.nifi.util.NoOpProcessor; +import org.apache.nifi.util.TestRunner; +import org.apache.nifi.util.TestRunners; +import org.junit.Before; +import org.junit.Test; + +public class TestAzureStorageEmulatorCredentialsControllerService { Review comment: Does the test environment for the NiFi package support making integration tests where you actually spin up an instance of azurite and try to connect to it? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (NIFI-7422) Support aws_s3_pseudo_dir in Atlas reporting task
[ https://issues.apache.org/jira/browse/NIFI-7422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Turcsanyi updated NIFI-7422: -- Status: Patch Available (was: In Progress) > Support aws_s3_pseudo_dir in Atlas reporting task > - > > Key: NIFI-7422 > URL: https://issues.apache.org/jira/browse/NIFI-7422 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Reporter: Peter Turcsanyi >Assignee: Peter Turcsanyi >Priority: Major > Time Spent: 10m > Remaining Estimate: 0h >
[jira] [Commented] (MINIFICPP-1235) Warning message on GetFile
[ https://issues.apache.org/jira/browse/MINIFICPP-1235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17114301#comment-17114301 ] Arpad Boda commented on MINIFICPP-1235: --- This warning makes no sense, and I also don't think this property should be queried in the onTrigger call. This is not a property that might depend on a FlowFile attribute, so it should simply be moved to onSchedule, where we could also validate that the directory exists. > Warning message on GetFile > -- > > Key: MINIFICPP-1235 > URL: https://issues.apache.org/jira/browse/MINIFICPP-1235 > Project: Apache NiFi MiNiFi C++ > Issue Type: Bug >Affects Versions: 0.7.0 >Reporter: Pierre Villard >Priority: Minor > > I have my MiNiFi C++ agents running and I see this warning: > {code:java} > [2020-05-22 17:38:06.836] [org::apache::nifi::minifi::processors::GetFile] > [warning] Resolved missing Input Directory property value{code} > However the input directory value is correctly set (I have 2 GetFile > processors): > {code:java} > /opt/minifi/nifi-minifi-cpp-0.7.0 $ grep "Input" > /opt/minifi/minifi-current/conf/config.yml > Input Directory: /data/input > Input Directory: /data/input{code} > Not sure to understand the warning message.
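The suggested fix can be sketched with a simplified stand-in (plain Java, not the MiNiFi C++ API): resolve and validate the "Input Directory" property once in onSchedule, so onTrigger never re-queries it and never emits the spurious warning.

```java
import java.util.Map;

class GetFileSketch {
    private final Map<String, String> properties;
    private String inputDirectory;   // cached at schedule time

    GetFileSketch(Map<String, String> properties) {
        this.properties = properties;
    }

    // Property resolution happens once, where a missing value can be
    // reported as a real configuration error instead of a per-trigger warning.
    void onSchedule() {
        inputDirectory = properties.get("Input Directory");
        if (inputDirectory == null) {
            throw new IllegalStateException("Input Directory is not set");
        }
        // The directory's existence could also be checked here, once.
    }

    String onTrigger() {
        return inputDirectory;   // per-trigger work uses the cached value
    }
}
```

Caching at schedule time is safe precisely because, as the comment notes, this property cannot depend on a FlowFile attribute.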
[jira] [Assigned] (MINIFICPP-1235) Warning message on GetFile
[ https://issues.apache.org/jira/browse/MINIFICPP-1235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arpad Boda reassigned MINIFICPP-1235: - Assignee: Ádám Markovics
[GitHub] [nifi] turcsanyip opened a new pull request #4292: NIFI-7422: Support aws_s3_pseudo_dir in Atlas reporting task
turcsanyip opened a new pull request #4292: URL: https://github.com/apache/nifi/pull/4292 https://issues.apache.org/jira/browse/NIFI-7422 Thank you for submitting a contribution to Apache NiFi. Please provide a short description of the PR here: Description of PR _Enables X functionality; fixes bug NIFI-._ In order to streamline the review of the contribution we ask you to ensure the following steps have been taken: ### For all changes: - [ ] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? - [ ] Does your PR title start with **NIFI-XXXX** where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character. - [ ] Has your PR been rebased against the latest commit within the target branch (typically `master`)? - [ ] Is your initial contribution a single, squashed commit? _Additional commits in response to PR reviewer feedback should be made on this branch and pushed to allow change tracking. Do not `squash` or use `--force` when pushing to allow for clean monitoring of changes._ ### For code changes: - [ ] Have you ensured that the full suite of tests is executed via `mvn -Pcontrib-check clean install` at the root `nifi` folder? - [ ] Have you written or updated unit tests to verify your changes? - [ ] Have you verified that the full build is successful on both JDK 8 and JDK 11? - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the `LICENSE` file, including the main `LICENSE` file under `nifi-assembly`? - [ ] If applicable, have you updated the `NOTICE` file, including the main `NOTICE` file found under `nifi-assembly`? - [ ] If adding new Properties, have you added `.displayName` in addition to .name (programmatic access) for each of the new properties?
### For documentation related changes: - [ ] Have you ensured that format looks appropriate for the output in which it is rendered? ### Note: Please ensure that once the PR is submitted, you check GitHub Actions CI for build issues and submit an update to your PR as soon as possible.
[jira] [Updated] (NIFI-7482) InvokeHTTP should not be final
[ https://issues.apache.org/jira/browse/NIFI-7482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andy LoPresto updated NIFI-7482: Status: Patch Available (was: In Progress)
[GitHub] [nifi] alopresto opened a new pull request #4291: NIFI-7482 Changed InvokeHTTP to be extensible
alopresto opened a new pull request #4291: URL: https://github.com/apache/nifi/pull/4291 Thank you for submitting a contribution to Apache NiFi. Please provide a short description of the PR here: Description of PR _Previously the `InvokeHTTP` processor could not be extended in a custom processor. Now it can._ In order to streamline the review of the contribution we ask you to ensure the following steps have been taken: ### For all changes: - [x] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? - [x] Does your PR title start with **NIFI-XXXX** where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character. - [x] Has your PR been rebased against the latest commit within the target branch (typically `master`)? - [x] Is your initial contribution a single, squashed commit? _Additional commits in response to PR reviewer feedback should be made on this branch and pushed to allow change tracking. Do not `squash` or use `--force` when pushing to allow for clean monitoring of changes._ ### For code changes: - [ ] Have you ensured that the full suite of tests is executed via `mvn -Pcontrib-check clean install` at the root `nifi` folder? - [x] Have you written or updated unit tests to verify your changes? - [ ] Have you verified that the full build is successful on both JDK 8 and JDK 11? - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the `LICENSE` file, including the main `LICENSE` file under `nifi-assembly`? - [ ] If applicable, have you updated the `NOTICE` file, including the main `NOTICE` file found under `nifi-assembly`? - [ ] If adding new Properties, have you added `.displayName` in addition to .name (programmatic access) for each of the new properties?
### For documentation related changes: - [ ] Have you ensured that format looks appropriate for the output in which it is rendered? ### Note: Please ensure that once the PR is submitted, you check GitHub Actions CI for build issues and submit an update to your PR as soon as possible.
[jira] [Updated] (MINIFICPP-1235) Warning message on GetFile
[ https://issues.apache.org/jira/browse/MINIFICPP-1235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pierre Villard updated MINIFICPP-1235: -- Description: I have my MiNiFi C++ agents running and I see this warning: {code:java} [2020-05-22 17:38:06.836] [org::apache::nifi::minifi::processors::GetFile] [warning] Resolved missing Input Directory property value{code} However the input directory value is correctly set (I have 2 GetFile processors): {code:java} /opt/minifi/nifi-minifi-cpp-0.7.0 $ grep "Input" /opt/minifi/minifi-current/conf/config.yml Input Directory: /data/input Input Directory: /data/input{code} Not sure to understand the warning message. was: I have my MiNiFi C++ agents running and I see this warning: [2020-05-22 17:38:06.836] [org::apache::nifi::minifi::processors::GetFile] [warning] Resolved missing Input Directory property value However the input directory value is correctly set (I have 2 GetFile processors): /opt/minifi/nifi-minifi-cpp-0.7.0 $ grep "Input" /opt/minifi/minifi-current/conf/config.yml Input Directory: /data/input Input Directory: /data/input Not sure to understand the warning message.
[GitHub] [nifi] sjyang18 commented on a change in pull request #4286: NIFI-7386: Azurite emulator support
sjyang18 commented on a change in pull request #4286: URL: https://github.com/apache/nifi/pull/4286#discussion_r429378686 ## File path: nifi-nar-bundles/nifi-azure-bundle/nifi-azure-processors/src/main/java/org/apache/nifi/services/azure/storage/AzureStorageEmulatorCrendentialsControllerService.java ## @@ -0,0 +1,88 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.apache.nifi.services.azure.storage; + +import java.util.ArrayList; +import java.util.Arrays; +import java.util.Collection; +import java.util.Collections; +import java.util.List; +import java.util.Map; + +import org.apache.nifi.annotation.documentation.CapabilityDescription; +import org.apache.nifi.annotation.documentation.Tags; +import org.apache.nifi.annotation.lifecycle.OnEnabled; +import org.apache.nifi.components.PropertyDescriptor; +import org.apache.nifi.components.ValidationContext; +import org.apache.nifi.components.ValidationResult; +import org.apache.nifi.controller.AbstractControllerService; +import org.apache.nifi.controller.ConfigurationContext; +import org.apache.nifi.processor.util.StandardValidators; + +/** + * Implementation of AbstractControllerService interface + * + * @see AbstractControllerService + */ +@Tags({ "azure", "microsoft", "emulator", "storage", "blob", "queue", "credentials" }) +@CapabilityDescription("Defines credentials for Azure Storage processors that connects to Azurite emulator. ") +public class AzureStorageEmulatorCrendentialsControllerService extends AbstractControllerService implements AzureStorageCredentialsService { + + +public static final PropertyDescriptor DEVELOPMENT_STORAGE_PROXY_URI = new PropertyDescriptor.Builder() +.name("azurite-proxy-uri") +.displayName("Azurite Proxy URI") Review comment: changed as you suggested. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Created] (MINIFICPP-1235) Warning message on GetFile
Pierre Villard created MINIFICPP-1235: - Summary: Warning message on GetFile Key: MINIFICPP-1235 URL: https://issues.apache.org/jira/browse/MINIFICPP-1235 Project: Apache NiFi MiNiFi C++ Issue Type: Bug Affects Versions: 0.7.0 Reporter: Pierre Villard I have my MiNiFi C++ agents running and I see this warning: [2020-05-22 17:38:06.836] [org::apache::nifi::minifi::processors::GetFile] [warning] Resolved missing Input Directory property value However the input directory value is correctly set (I have 2 GetFile processors): /opt/minifi/nifi-minifi-cpp-0.7.0 $ grep "Input" /opt/minifi/minifi-current/conf/config.yml Input Directory: /data/input Input Directory: /data/input Not sure I understand the warning message.
[GitHub] [nifi] sjyang18 commented on a change in pull request #4286: NIFI-7386: Azurite emulator support
sjyang18 commented on a change in pull request #4286: URL: https://github.com/apache/nifi/pull/4286#discussion_r429378543 ## File path: nifi-nar-bundles/nifi-azure-bundle/nifi-azure-processors/src/main/java/org/apache/nifi/services/azure/storage/AzureStorageEmulatorCrendentialsControllerService.java ## @@ -0,0 +1,88 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.apache.nifi.services.azure.storage; + +import java.util.ArrayList; +import java.util.Arrays; +import java.util.Collection; +import java.util.Collections; +import java.util.List; +import java.util.Map; + +import org.apache.nifi.annotation.documentation.CapabilityDescription; +import org.apache.nifi.annotation.documentation.Tags; +import org.apache.nifi.annotation.lifecycle.OnEnabled; +import org.apache.nifi.components.PropertyDescriptor; +import org.apache.nifi.components.ValidationContext; +import org.apache.nifi.components.ValidationResult; +import org.apache.nifi.controller.AbstractControllerService; +import org.apache.nifi.controller.ConfigurationContext; +import org.apache.nifi.processor.util.StandardValidators; + +/** + * Implementation of AbstractControllerService interface + * + * @see AbstractControllerService + */ +@Tags({ "azure", "microsoft", "emulator", "storage", "blob", "queue", "credentials" }) +@CapabilityDescription("Defines credentials for Azure Storage processors that connects to Azurite emulator. ") +public class AzureStorageEmulatorCrendentialsControllerService extends AbstractControllerService implements AzureStorageCredentialsService { + + +public static final PropertyDescriptor DEVELOPMENT_STORAGE_PROXY_URI = new PropertyDescriptor.Builder() +.name("azurite-proxy-uri") +.displayName("Azurite Proxy URI") +.description("Default null will connect to http://127.0.0.1. Otherwise, overwrite this value with your proxy url.") Review comment: changed as you suggested. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Created] (NIFI-7482) InvokeHTTP should not be final
Andy LoPresto created NIFI-7482: --- Summary: InvokeHTTP should not be final Key: NIFI-7482 URL: https://issues.apache.org/jira/browse/NIFI-7482 Project: Apache NiFi Issue Type: Improvement Components: Extensions Affects Versions: 1.11.4 Reporter: Andy LoPresto Assignee: Andy LoPresto The {{InvokeHTTP}} processor is defined as {{final}}, which means custom processors cannot extend it. I see no reason this class should be final, as other standard processors are not declared final and show no visible problems. {{InvokeHTTP}} is a processor that users frequently want to modify, and allowing custom extensions is much easier than duplicating all native behavior in a new processor.
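InvokeHTTP itself is Java, but the cost of a `final` class is language-agnostic. A minimal sketch of the extension pattern the ticket wants to enable, written in C++ (the names `HttpProcessor`, `CustomHttpProcessor`, and `buildUrl` are hypothetical illustrations, not NiFi APIs): a non-final base class lets a subclass reuse the existing plumbing and override only the behavior it needs to change.

```cpp
#include <string>

// Hypothetical stand-in for a non-final processor base class: subclasses
// inherit all the plumbing and may override selected hooks.
class HttpProcessor {
 public:
  virtual ~HttpProcessor() = default;
  virtual std::string buildUrl(const std::string& path) const {
    return "http://localhost:8080" + path;
  }
};

// A custom extension, possible only because the base class is not final.
// It overrides one hook instead of re-implementing the whole processor.
class CustomHttpProcessor : public HttpProcessor {
 public:
  std::string buildUrl(const std::string& path) const override {
    return "https://proxy.example" + path;  // hypothetical customization
  }
};
```

Marking the base class `final` would turn the `CustomHttpProcessor` definition into a compile error, which is exactly the situation the ticket describes for Java's `InvokeHTTP`.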
[jira] [Commented] (NIFI-7477) Get the details in ValidateRecord as an attribute
[ https://issues.apache.org/jira/browse/NIFI-7477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17114275#comment-17114275 ] Mark Payne commented on NIFI-7477: -- [~mattyb149] I think the idea here is to add the details of the Provenance Event as an attribute. Those details are in a format that's something like: The following 2 fields were missing: abc, xyz The following 8 fields were present in the Record but not in the schema: aaa, bbb, ccc, ddd, eee, fff, ggg, hhh etc. So I am not terribly concerned about the heap utilization. However this information is not in any sort of readily parseable format. Is the intent here just for information purposes, so that it could be sent via an email or something to that effect? > Get the details in ValidateRecord as an attribute > - > > Key: NIFI-7477 > URL: https://issues.apache.org/jira/browse/NIFI-7477 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Affects Versions: 1.11.4 >Reporter: Jairo Henao >Priority: Minor > Labels: features > > When validation fails in ValidateRecord, the details are not easy to access. > Details are sent as an event to Provenance. To obtain them, we should invoke > the NIFI REST-API or export them via the Site-to-Site Reporting Task. > The ValidateRecord processor should optionally allow us to configure an > attribute to leave the details text here.
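To illustrate why "not in any sort of readily parseable format" matters: recovering the field names from the detail text Mark quotes would require ad-hoc string scraping along these lines. This is a C++ sketch; the exact detail-line format is assumed from the comment above, and the function name is hypothetical.

```cpp
#include <sstream>
#include <string>
#include <vector>

// Ad-hoc parser for a human-readable detail line such as
// "The following 2 fields were missing: abc, xyz" (format assumed from the
// comment above). Consumers would have to screen-scrape prose like this
// instead of reading a structured attribute -- which is the ticket's point.
std::vector<std::string> parseFieldList(const std::string& detail_line) {
  std::vector<std::string> fields;
  const std::size_t colon = detail_line.find(": ");
  if (colon == std::string::npos) return fields;  // unrecognized format
  std::istringstream list(detail_line.substr(colon + 2));
  std::string field;
  while (std::getline(list, field, ',')) {
    // trim the single leading space left after each comma
    if (!field.empty() && field.front() == ' ') field.erase(0, 1);
    fields.push_back(field);
  }
  return fields;
}
```

A dedicated attribute (or a machine-readable format such as JSON) would make this kind of fragile parsing unnecessary.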
[jira] [Created] (NIFI-7481) Improve logging around Versioned Flow version changes
Mark Payne created NIFI-7481: Summary: Improve logging around Versioned Flow version changes Key: NIFI-7481 URL: https://issues.apache.org/jira/browse/NIFI-7481 Project: Apache NiFi Issue Type: Improvement Reporter: Mark Payne Assignee: Mark Payne When a user changes the version of a flow, NiFi must perform several actions: * Stop all affected processors * Disable affected controller services * Update the flow * Re-Enable the affected controller services * Restart the affected processors There is info-level logging that indicates when each of these steps is completed. However, if there is a problem stopping processors, for instance, there is no debug-level information about which processor has not stopped, etc.
[jira] [Commented] (NIFI-7480) Allow SplitXML processor to generate XML fragments without loading entire XML into memory
[ https://issues.apache.org/jira/browse/NIFI-7480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17114267#comment-17114267 ] Mark Payne commented on NIFI-7480: -- Quickly glancing at the processor, it looks like its documentation is incorrect. The documentation claims that the entire XML document is loaded into memory as a DOM object. However, this is not the case. The XML is parsed using a SAX (streaming) parser, so it does not need to load the entire document into memory. The documentation should be fixed. That said, the processor can potentially generate a lot of FlowFiles, which can also take a huge amount of memory, so a 2-phase approach may be necessary if splitting at a level deeper than 1. However, it is generally best to avoid splitting XML documents and instead use Record-based processors if at all possible. Splitting the data apart puts dramatically more stress on the NiFi framework, and as a result, record-based processors tend to perform about 10x better. > Allow SplitXML processor to generate XML fragments without loading entire XML > into memory > - > > Key: NIFI-7480 > URL: https://issues.apache.org/jira/browse/NIFI-7480 > Project: Apache NiFi > Issue Type: Improvement >Reporter: Swarup Karavadi >Priority: Minor > > The current behaviour of the SplitXML processor (as documented > [here|http://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.11.4/org.apache.nifi.processors.standard.SplitXml/index.html]) > is to load the entire XML file in memory and then split the document into > fragments. This can get very memory intensive when processing large files. > I was wondering if it is possible to stream the file and construct XML > fragments (based on split depth).
I understand there might be some issues > around this - > * setting the fragment.count attribute for the flow file containing the XML > fragment * recovering from failures (ie., at what point during the processing should > checkpoints be committed, etc) > Thought it was worth bringing this up to see if this is something worth > picking up or even possible at all on the NiFi platform.
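As a rough illustration of the depth-based streaming split the ticket asks about, a single pass over the input can track element depth and emit fragments without ever building a DOM. This is a toy sketch, not SplitXML: it assumes well-formed input with no attributes containing angle brackets, no comments, no CDATA, and no self-closing tags.

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Stream through the XML once, tracking element depth. Elements that open
// at depth (split_depth + 1) are collected as fragments; everything at or
// above split_depth is skipped. Memory use is bounded by one fragment, not
// by the whole document.
std::vector<std::string> splitAtDepth(const std::string& xml, int split_depth) {
  std::vector<std::string> fragments;
  std::string current;
  int depth = 0;
  std::size_t i = 0;
  while (i < xml.size()) {
    if (xml[i] == '<') {
      const std::size_t end = xml.find('>', i);
      if (end == std::string::npos) break;  // malformed input; stop
      const std::string tag = xml.substr(i, end - i + 1);
      const bool closing = tag.size() > 1 && tag[1] == '/';
      if (!closing) {
        ++depth;
        if (depth == split_depth + 1) current.clear();  // fragment starts
      }
      if (depth > split_depth) current += tag;
      if (closing) {
        if (depth == split_depth + 1) fragments.push_back(current);
        --depth;
      }
      i = end + 1;
    } else {
      if (depth > split_depth) current += xml[i];  // text inside a fragment
      ++i;
    }
  }
  return fragments;
}
```

The checkpointing and fragment.count concerns raised above remain: in a streaming pass, fragment.count is only known once the whole input has been consumed.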
[GitHub] [nifi-minifi-cpp] fgerlits commented on a change in pull request #791: MINIFICPP-1177 Improvements to the TailFile processor
fgerlits commented on a change in pull request #791: URL: https://github.com/apache/nifi-minifi-cpp/pull/791#discussion_r429359617 ## File path: extensions/standard-processors/processors/TailFile.cpp ## @@ -403,102 +403,141 @@ void TailFile::onTrigger(const std::shared_ptr , c state_file_ = st_file + "." + getUUIDStr(); } if (!this->state_recovered_) { -state_recovered_ = true; -// recover the state if we have not done so this->recoverState(context); +state_recovered_ = true; + } + + if (tail_mode_ == Mode::MULTIPLE) { +checkForRemovedFiles(); +checkForNewFiles(); } - /** - * iterate over file states. may modify them - */ + bool did_something = false; + + // iterate over file states. may modify them for (auto : tail_states_) { -auto fileLocation = state.second.path_; - -checkRollOver(context, state.second, state.first); -std::string fullPath = fileLocation + utils::file::FileUtils::get_separator() + state.second.current_file_name_; -struct stat statbuf; - -logger_->log_debug("Tailing file %s from %llu", fullPath, state.second.currentTailFilePosition_); -if (stat(fullPath.c_str(), ) == 0) { - if ((uint64_t) statbuf.st_size <= state.second.currentTailFilePosition_) { -logger_->log_trace("Current pos: %llu", state.second.currentTailFilePosition_); -logger_->log_trace("%s", "there are no new input for the current tail file"); -context->yield(); -return; +did_something |= processFile(context, session, state.first, state.second); + } + + if (!did_something) { +yield(); + } +} + + bool TailFile::processFile(const std::shared_ptr , + const std::shared_ptr , + const std::string , + TailState ) { +std::string full_file_name = state.fileNameWithPath(); + +bool did_something = false; + +if (utils::file::FileUtils::file_size(full_file_name) < state.position_) { + std::vector rotated_file_states = findRotatedFiles(state); + for (TailState _state : rotated_file_states) { +did_something |= processSingleFile(context, session, file_state.file_name_, file_state); } - std::size_t 
found = state.first.find_last_of("."); - std::string baseName = state.first.substr(0, found); - std::string extension = state.first.substr(found + 1); - - if (!delimiter_.empty()) { -char delim = delimiter_.c_str()[0]; -if (delim == '\\') { - if (delimiter_.size() > 1) { -switch (delimiter_.c_str()[1]) { - case 'r': -delim = '\r'; -break; - case 't': -delim = '\t'; -break; - case 'n': -delim = '\n'; -break; - case '\\': -delim = '\\'; -break; - default: -// previous behavior -break; -} - } -} -logger_->log_debug("Looking for delimiter 0x%X", delim); -std::vector> flowFiles; -session->import(fullPath, flowFiles, state.second.currentTailFilePosition_, delim); -logger_->log_info("%u flowfiles were received from TailFile input", flowFiles.size()); - -for (auto ffr : flowFiles) { - logger_->log_info("TailFile %s for %u bytes", state.first, ffr->getSize()); - std::string logName = baseName + "." + std::to_string(state.second.currentTailFilePosition_) + "-" + std::to_string(state.second.currentTailFilePosition_ + ffr->getSize()) + "." + extension; - ffr->updateKeyedAttribute(PATH, fileLocation); - ffr->addKeyedAttribute(ABSOLUTE_PATH, fullPath); - ffr->updateKeyedAttribute(FILENAME, logName); - session->transfer(ffr, Success); - state.second.currentTailFilePosition_ += ffr->getSize() + 1; - storeState(context); -} + state.position_ = 0; + state.checksum_ = 0; +} - } else { -std::shared_ptr flowFile = std::static_pointer_cast(session->create()); -if (flowFile) { - flowFile->updateKeyedAttribute(PATH, fileLocation); - flowFile->addKeyedAttribute(ABSOLUTE_PATH, fullPath); - session->import(fullPath, flowFile, true, state.second.currentTailFilePosition_); - session->transfer(flowFile, Success); - logger_->log_info("TailFile %s for %llu bytes", state.first, flowFile->getSize()); - std::string logName = baseName + "." + std::to_string(state.second.currentTailFilePosition_) + "-" + std::to_string(state.second.currentTailFilePosition_ + flowFile->getSize()) + "." 
- + extension; - flowFile->updateKeyedAttribute(FILENAME, logName); - state.second.currentTailFilePosition_ += flowFile->getSize(); - storeState(context); -} +
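The delimiter handling visible in the removed onTrigger code above (which the PR factors out into a parseDelimiter() helper) maps a two-character escape sequence in the Delimiter property to the control character it names. A standalone sketch of that logic, with the function name and exact semantics of the real helper assumed:

```cpp
#include <string>

// Sketch of the escape handling from the old onTrigger code: "\n", "\r",
// "\t", and "\\" in the property value become the corresponding single
// character; any other value falls back to its first byte ("previous
// behavior" in the original switch). Not the actual MiNiFi helper.
char parseDelimiterSketch(const std::string& value) {
  if (value.empty()) return '\0';
  char delim = value[0];
  if (delim == '\\' && value.size() > 1) {
    switch (value[1]) {
      case 'r': return '\r';
      case 't': return '\t';
      case 'n': return '\n';
      case '\\': return '\\';
      default: break;  // unknown escape: keep the backslash, as before
    }
  }
  return delim;
}
```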
[jira] [Updated] (NIFI-7480) Allow SplitXML processor to generate XML fragments without loading entire XML into memory
[ https://issues.apache.org/jira/browse/NIFI-7480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Swarup Karavadi updated NIFI-7480: -- Description: The current behaviour of the SplitXML processor (as documented [here|http://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.11.4/org.apache.nifi.processors.standard.SplitXml/index.html]) is to load the entire XML file in memory and then split the document into fragments. This can get very memory intensive when processing large files. I was wondering if it is possible to stream the file and construct XML fragments (based on split depth). I understand there might be some issues around this - * setting the fragment.count attribute for the flow file containing the XML fragment * recovering from failures (ie., at what point during the processing should checkpoints be committed, etc) Thought it was worth bringing this up to see if this is something worth picking up or even possible at all on the NiFi platform. > Allow SplitXML processor to generate XML fragments without loading entire XML > into memory > - > > Key: NIFI-7480 > URL: https://issues.apache.org/jira/browse/NIFI-7480 > Project: Apache NiFi > Issue Type: Improvement >Reporter: Swarup Karavadi >Priority: Minor > > The current behaviour of the SplitXML processor (as documented > [here|http://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.11.4/org.apache.nifi.processors.standard.SplitXml/index.html]) > is to load the entire XML file in memory and then split the document into > fragments. This can get very memory intensive when processing large files. > I was wondering if it is possible to stream the file and construct XML > fragments (based on split depth).
I understand there might be some issues > around this - > * setting the fragment.count attribute for the flow file containing the XML > fragment * recovering from failures (ie., at what point during the processing should > checkpoints be committed, etc) > Thought it was worth bringing this up to see if this is something worth > picking up or even possible at all on the NiFi platform.
[jira] [Updated] (NIFI-7480) Allow SplitXML processor to generate XML fragments without loading entire XML into memory
[ https://issues.apache.org/jira/browse/NIFI-7480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Swarup Karavadi updated NIFI-7480: -- Priority: Minor (was: Major) > Allow SplitXML processor to generate XML fragments without loading entire XML > into memory > - > > Key: NIFI-7480 > URL: https://issues.apache.org/jira/browse/NIFI-7480 > Project: Apache NiFi > Issue Type: Improvement >Reporter: Swarup Karavadi >Priority: Minor >
[jira] [Created] (NIFI-7480) Allow SplitXML processor to generate XML fragments without loading entire XML into memory
Swarup Karavadi created NIFI-7480: - Summary: Allow SplitXML processor to generate XML fragments without loading entire XML into memory Key: NIFI-7480 URL: https://issues.apache.org/jira/browse/NIFI-7480 Project: Apache NiFi Issue Type: Improvement Reporter: Swarup Karavadi
[GitHub] [nifi] zenfenan opened a new pull request #4290: NIFI-6701: Fix Future execution handling
zenfenan opened a new pull request #4290: URL: https://github.com/apache/nifi/pull/4290 Thank you for submitting a contribution to Apache NiFi. Please provide a short description of the PR here: Description of PR Fixes bug NIFI-6701 In order to streamline the review of the contribution we ask you to ensure the following steps have been taken: ### For all changes: - [X] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? - [X] Does your PR title start with **NIFI-XXXX** where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character. - [X] Has your PR been rebased against the latest commit within the target branch (typically `master`)? - [X] Is your initial contribution a single, squashed commit? _Additional commits in response to PR reviewer feedback should be made on this branch and pushed to allow change tracking. Do not `squash` or use `--force` when pushing to allow for clean monitoring of changes._ ### For code changes: - [ ] Have you ensured that the full suite of tests is executed via `mvn -Pcontrib-check clean install` at the root `nifi` folder? - [ ] Have you written or updated unit tests to verify your changes? - [ ] Have you verified that the full build is successful on both JDK 8 and JDK 11? - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the `LICENSE` file, including the main `LICENSE` file under `nifi-assembly`? - [ ] If applicable, have you updated the `NOTICE` file, including the main `NOTICE` file found under `nifi-assembly`? - [ ] If adding new Properties, have you added `.displayName` in addition to `.name` (programmatic access) for each of the new properties? ### For documentation related changes: - [ ] Have you ensured that format looks appropriate for the output in which it is rendered?
### Note: Please ensure that once the PR is submitted, you check GitHub Actions CI for build issues and submit an update to your PR as soon as possible.
[GitHub] [nifi-minifi-cpp] fgerlits commented on a change in pull request #791: MINIFICPP-1177 Improvements to the TailFile processor
fgerlits commented on a change in pull request #791: URL: https://github.com/apache/nifi-minifi-cpp/pull/791#discussion_r429328320 ## File path: extensions/standard-processors/processors/TailFile.cpp ## @@ -120,56 +158,42 @@ void TailFile::onSchedule(const std::shared_ptr , std::string value; if (context->getProperty(Delimiter.getName(), value)) { -delimiter_ = value; +delimiter_ = parseDelimiter(value); + } + + if (!context->getProperty(FileName.getName(), file_to_tail_)) { +throw minifi::Exception(ExceptionType::PROCESSOR_EXCEPTION, "File to Tail is a required property"); } std::string mode; context->getProperty(TailMode.getName(), mode); - std::string file = ""; - if (!context->getProperty(FileName.getName(), file)) { -throw minifi::Exception(ExceptionType::PROCESSOR_EXCEPTION, "File to Tail is a required property"); - } if (mode == "Multiple file") { -// file is a regex -std::string base_dir; -if (!context->getProperty(BaseDirectory.getName(), base_dir)) { +tail_mode_ = Mode::MULTIPLE; + +if (!context->getProperty(BaseDirectory.getName(), base_dir_)) { throw minifi::Exception(ExceptionType::PROCESSOR_EXCEPTION, "Base directory is required for multiple tail mode."); } -auto fileRegexSelect = [&](const std::string& path, const std::string& filename) -> bool { - if (acceptFile(file, filename)) { -tail_states_.insert(std::make_pair(filename, TailState {path, filename, 0, 0})); - } - return true; -}; - -utils::file::FileUtils::list_dir(base_dir, fileRegexSelect, logger_, false); +// in multiple mode, we check for new/removed files in every onTrigger } else { +tail_mode_ = Mode::SINGLE; + std::string fileLocation, fileName; -if (utils::file::PathUtils::getFileNameAndPath(file, fileLocation, fileName)) { - tail_states_.insert(std::make_pair(fileName, TailState { fileLocation, fileName, 0, 0 })); +if (utils::file::PathUtils::getFileNameAndPath(file_to_tail_, fileLocation, fileName)) { + tail_states_.emplace(fileName, TailState{fileLocation, fileName, 0, 0, 0, 0}); 
Review comment: I have added a comment
[GitHub] [nifi-minifi-cpp] fgerlits commented on a change in pull request #791: MINIFICPP-1177 Improvements to the TailFile processor
fgerlits commented on a change in pull request #791: URL: https://github.com/apache/nifi-minifi-cpp/pull/791#discussion_r429326546 ## File path: extensions/standard-processors/processors/TailFile.cpp ## @@ -204,52 +230,65 @@ void TailFile::parseStateFileLine(char *buf) { } std::string value = equal; - key = trimRight(key); - value = trimRight(value); + key = utils::StringUtils::trimRight(key); + value = utils::StringUtils::trimRight(value); if (key == "FILENAME") { std::string fileLocation, fileName; if (utils::file::PathUtils::getFileNameAndPath(value, fileLocation, fileName)) { logger_->log_debug("State migration received path %s, file %s", fileLocation, fileName); - tail_states_.insert(std::make_pair(fileName, TailState { fileLocation, fileName, 0, 0 })); + state.insert(std::make_pair(fileName, TailState{fileLocation, fileName, 0, 0, 0, 0})); } else { - tail_states_.insert(std::make_pair(value, TailState { fileLocation, value, 0, 0 })); + state.insert(std::make_pair(value, TailState{fileLocation, value, 0, 0, 0, 0})); } } if (key == "POSITION") { // for backwards compatibility -if (tail_states_.size() != 1) { +if (tail_states_.size() != (std::size_t) 1) { throw minifi::Exception(ExceptionType::PROCESSOR_EXCEPTION, "Incompatible state file types"); } const auto position = std::stoull(value); logger_->log_debug("Received position %d", position); -tail_states_.begin()->second.currentTailFilePosition_ = position; +state.begin()->second.position_ = position; } if (key.find(CURRENT_STR) == 0) { const auto file = key.substr(strlen(CURRENT_STR)); std::string fileLocation, fileName; if (utils::file::PathUtils::getFileNameAndPath(value, fileLocation, fileName)) { - tail_states_[file].path_ = fileLocation; - tail_states_[file].current_file_name_ = fileName; + state[file].path_ = fileLocation; + state[file].file_name_ = fileName; } else { throw minifi::Exception(ExceptionType::PROCESSOR_EXCEPTION, "State file contains an invalid file name"); } } if (key.find(POSITION_STR) 
== 0) { const auto file = key.substr(strlen(POSITION_STR)); -tail_states_[file].currentTailFilePosition_ = std::stoull(value); +state[file].position_ = std::stoull(value); } } +bool TailFile::recoverState(const std::shared_ptr& context) { + std::map new_tail_states; + bool state_load_success = getStateFromStateManager(new_tail_states) || +getStateFromLegacyStateFile(new_tail_states); + if (!state_load_success) { +return false; + } + logger_->log_debug("load state succeeded"); -bool TailFile::recoverState(const std::shared_ptr& context) { - bool state_load_success = false; + tail_states_ = std::move(new_tail_states); + + // Save the state to the state manager + storeState(context); Review comment: I have just made some changes here to fix a bug (in Single file mode, the old state overwrote the File to Tail property if the property changed but the state wasn't cleared), so it makes more sense now than before. :) In any case, `storeState()` is not expensive, as it only updates the StateManager, which has its own schedule for how often to write changes to the DB. Also, `recoverState()` runs only once during the lifetime of the processor. So I think this is fine. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [nifi-minifi-cpp] fgerlits commented on a change in pull request #791: MINIFICPP-1177 Improvements to the TailFile processor
fgerlits commented on a change in pull request #791: URL: https://github.com/apache/nifi-minifi-cpp/pull/791#discussion_r429323720 ## File path: extensions/standard-processors/processors/TailFile.cpp ## @@ -120,56 +158,42 @@ void TailFile::onSchedule(const std::shared_ptr , std::string value; if (context->getProperty(Delimiter.getName(), value)) { -delimiter_ = value; +delimiter_ = parseDelimiter(value); + } + + if (!context->getProperty(FileName.getName(), file_to_tail_)) { +throw minifi::Exception(ExceptionType::PROCESSOR_EXCEPTION, "File to Tail is a required property"); } std::string mode; context->getProperty(TailMode.getName(), mode); - std::string file = ""; - if (!context->getProperty(FileName.getName(), file)) { -throw minifi::Exception(ExceptionType::PROCESSOR_EXCEPTION, "File to Tail is a required property"); - } if (mode == "Multiple file") { -// file is a regex -std::string base_dir; -if (!context->getProperty(BaseDirectory.getName(), base_dir)) { +tail_mode_ = Mode::MULTIPLE; + +if (!context->getProperty(BaseDirectory.getName(), base_dir_)) { throw minifi::Exception(ExceptionType::PROCESSOR_EXCEPTION, "Base directory is required for multiple tail mode."); Review comment: done
[GitHub] [nifi-minifi-cpp] fgerlits commented on a change in pull request #791: MINIFICPP-1177 Improvements to the TailFile processor
fgerlits commented on a change in pull request #791: URL: https://github.com/apache/nifi-minifi-cpp/pull/791#discussion_r429323576 ## File path: extensions/standard-processors/processors/TailFile.cpp ## @@ -120,56 +158,42 @@ void TailFile::onSchedule(const std::shared_ptr , std::string value; if (context->getProperty(Delimiter.getName(), value)) { -delimiter_ = value; +delimiter_ = parseDelimiter(value); + } + + if (!context->getProperty(FileName.getName(), file_to_tail_)) { +throw minifi::Exception(ExceptionType::PROCESSOR_EXCEPTION, "File to Tail is a required property"); Review comment: done
[GitHub] [nifi] zenfenan commented on pull request #2724: NIFI-5133: Implemented Google Cloud PubSub Processors
zenfenan commented on pull request #2724: URL: https://github.com/apache/nifi/pull/2724#issuecomment-632761026 @AnujJain7 Yes, it seems to be a miss from my side. I will raise a PR to get this addressed but a word of caution, these processors were created long back and the SDK used must have been outdated by now, I guess. I cannot work on getting these processors revamped since I no longer have any running GCP subscriptions.
[GitHub] [nifi-minifi-cpp] hunyadi-dev commented on a change in pull request #776: MINIFICPP-1202 - Handle C2 requests/responses using MinifiConcurrentQueue
hunyadi-dev commented on a change in pull request #776: URL: https://github.com/apache/nifi-minifi-cpp/pull/776#discussion_r429312813 ## File path: libminifi/src/c2/C2Agent.cpp ## @@ -75,54 +78,55 @@ C2Agent::C2Agent(const std::shared_ptr lock(request_mutex, std::adopt_lock); -if (!requests.empty()) { - int count = 0; - do { -const C2Payload payload(std::move(requests.back())); -requests.pop_back(); -try { - C2Payload && response = protocol_.load()->consumePayload(payload); - enqueue_c2_server_response(std::move(response)); -} -catch (const std::exception& e) { - logger_->log_error("Exception occurred while consuming payload. error: %s", e.what()); -} -catch (...) { - logger_->log_error("Unknown exception occurred while consuming payload."); -} - } while (!requests.empty() && ++count < max_c2_responses); +if (protocol_.load() != nullptr) { + std::vector<C2Payload> payload_batch; + payload_batch.reserve(max_c2_responses); + auto getRequestPayload = [&payload_batch] (C2Payload&& payload) { payload_batch.emplace_back(std::move(payload)); }; + const std::chrono::system_clock::time_point timeout_point = std::chrono::system_clock::now() + std::chrono::milliseconds(1); + for (std::size_t attempt_num = 0; attempt_num < max_c2_responses; ++attempt_num) { +if (!requests.consumeWaitUntil(getRequestPayload, timeout_point)) { + break; } } - try { -performHeartBeat(); - } - catch (const std::exception& e) { -logger_->log_error("Exception occurred while performing heartbeat. error: %s", e.what()); - } - catch (...) { -logger_->log_error("Unknown exception occurred while performing heartbeat."); - } + std::for_each( +std::make_move_iterator(payload_batch.begin()), +std::make_move_iterator(payload_batch.end()), +[&] (C2Payload&& payload) { + try { +C2Payload && response = protocol_.load()->consumePayload(std::move(payload)); +enqueue_c2_server_response(std::move(response)); + } + catch (const std::exception& e) { +logger_->log_error("Exception occurred while consuming payload. error: %s", e.what()); + } + catch (...) { +logger_->log_error("Unknown exception occurred while consuming payload."); + } +}); - checkTriggers(); +try { + performHeartBeat(); +} +catch (const std::exception& e) { + logger_->log_error("Exception occurred while performing heartbeat. error: %s", e.what()); +} +catch (...) { + logger_->log_error("Unknown exception occurred while performing heartbeat."); +} +} + +checkTriggers(); + +return utils::TaskRescheduleInfo::RetryIn(std::chrono::milliseconds(heart_beat_period_)); + }; - return utils::TaskRescheduleInfo::RetryIn(std::chrono::milliseconds(heart_beat_period_)); -}; functions_.push_back(c2_producer_); - c2_consumer_ = [&]() { -if ( queue_mutex.try_lock_for(std::chrono::seconds(1)) ) { - C2Payload payload(Operation::HEARTBEAT); - { -std::lock_guard<std::timed_mutex> lock(queue_mutex, std::adopt_lock); -if (responses.empty()) { - return utils::TaskRescheduleInfo::RetryIn(std::chrono::milliseconds(C2RESPONSE_POLL_MS)); -} -payload = std::move(responses.back()); -responses.pop_back(); + c2_consumer_ = [&] { +if (responses.size()) { + if (!responses.consumeWaitFor([this](C2Payload&& e) { extractPayload(std::move(e)); }, std::chrono::seconds(1))) { Review comment: Corrected as requested. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
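The batched consumeWaitUntil() pattern in the diff above can be sketched in isolation. ConditionQueue below is a minimal stand-in for MiNiFi's ConditionConcurrentQueue — the real API differs — and drain_batch mirrors the C2 producer's bounded drain loop (up to max_c2_responses elements, with a short deadline).

```cpp
#include <cassert>
#include <chrono>
#include <condition_variable>
#include <deque>
#include <mutex>
#include <string>
#include <vector>

// Minimal analogue of consumeWaitUntil(): pop one element if available
// before the deadline, apply a consumer functor to it, and report
// whether anything was consumed.
template <typename T>
class ConditionQueue {
 public:
  void enqueue(T item) {
    {
      std::lock_guard<std::mutex> lock(mtx_);
      items_.push_back(std::move(item));
    }
    cv_.notify_one();
  }

  template <typename Consumer, typename Clock, typename Dur>
  bool consumeWaitUntil(Consumer consumer,
                        std::chrono::time_point<Clock, Dur> deadline) {
    std::unique_lock<std::mutex> lock(mtx_);
    if (!cv_.wait_until(lock, deadline, [this] { return !items_.empty(); }))
      return false;  // deadline hit with the queue still empty
    T item = std::move(items_.front());
    items_.pop_front();
    lock.unlock();  // run the consumer outside the lock
    consumer(std::move(item));
    return true;
  }

 private:
  std::mutex mtx_;
  std::condition_variable cv_;
  std::deque<T> items_;
};

// Drain up to max_batch elements into a vector, as the C2 producer does.
std::vector<std::string> drain_batch(ConditionQueue<std::string>& q,
                                     std::size_t max_batch) {
  std::vector<std::string> batch;
  batch.reserve(max_batch);
  auto deadline =
      std::chrono::steady_clock::now() + std::chrono::milliseconds(1);
  for (std::size_t i = 0; i < max_batch; ++i) {
    if (!q.consumeWaitUntil(
            [&batch](std::string&& s) { batch.emplace_back(std::move(s)); },
            deadline))
      break;  // queue drained (or deadline passed): stop early
  }
  return batch;
}
```

Draining into a batch first, then processing outside the queue's lock, is what lets the heartbeat work in the diff proceed without holding the request queue hostage.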
[GitHub] [nifi-minifi-cpp] hunyadi-dev commented on a change in pull request #776: MINIFICPP-1202 - Handle C2 requests/responses using MinifiConcurrentQueue
hunyadi-dev commented on a change in pull request #776: URL: https://github.com/apache/nifi-minifi-cpp/pull/776#discussion_r429312475 ## File path: libminifi/include/c2/C2Agent.h ## @@ -41,8 +41,15 @@ namespace org { namespace apache { namespace nifi { namespace minifi { + +namespace utils { +template +class ConditionConcurrentQueue; +} Review comment: Replaced with include. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
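The "replaced with include" exchange above turns on C++'s rules for forward-declared class templates: a forward declaration is enough to form pointers and references, but any use that needs a complete type requires the definition — hence the #include. A minimal illustration with toy names loosely modeled on the discussion:

```cpp
#include <cassert>

// Forward declaration of a class template: enough for pointers and
// references, not for member access or by-value use.
template <typename T>
class ConditionConcurrentQueue;

struct AgentHeader {
  // OK with only the forward declaration: pointer to an incomplete type.
  ConditionConcurrentQueue<int>* requests = nullptr;
};

// Any code that actually touches the queue needs the full definition,
// which is why the review replaced the forward declaration with an
// #include in the .cpp/.h that uses it.
template <typename T>
class ConditionConcurrentQueue {
 public:
  void push(T) { ++count_; }
  int size() const { return count_; }

 private:
  int count_ = 0;
};

int use(ConditionConcurrentQueue<int>& q) {  // requires the complete type
  q.push(1);
  return q.size();
}
```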
[jira] [Commented] (NIFI-7477) Get the details in ValidateRecord as an attribute
[ https://issues.apache.org/jira/browse/NIFI-7477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17114146#comment-17114146 ] Matt Burgess commented on NIFI-7477: The record-based processors (including ValidateRecord) are capable of handling many thousands or millions of records in a single flow file. Would the attribute contain only the first validation error, the first N errors up to a certain size, or something else? We don't want to allow putting every single error in an attribute as this could cause NiFi to run out of memory quickly. > Get the details in ValidateRecord as an attribute > - > > Key: NIFI-7477 > URL: https://issues.apache.org/jira/browse/NIFI-7477 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Affects Versions: 1.11.4 >Reporter: Jairo Henao >Priority: Minor > Labels: features > > When validation fails in ValidateRecord, the details are not easy to access. > Details are sent as an event to Provenance. To obtain them, we should invoke > the NIFI REST-API or export them via the Site-to-Site Reporting Task. > The ValidateRecord processor should optionally allow us to configure an > attribute to leave the details text here. -- This message was sent by Atlassian Jira (v8.3.4#803005)
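The capping strategy Matt describes — first N errors up to a size limit, never every error — could look like the sketch below. All names are hypothetical, not NiFi's record API; it only shows bounding both the error count and the attribute's byte size so a flow file with millions of bad records cannot exhaust memory.

```cpp
#include <cassert>
#include <cstddef>
#include <string>
#include <vector>

// Hypothetical sketch: keep at most max_errors validation messages and
// cap the rendered attribute at max_bytes, while still counting how
// many errors were seen in total.
class BoundedErrorCollector {
 public:
  BoundedErrorCollector(std::size_t max_errors, std::size_t max_bytes)
      : max_errors_(max_errors), max_bytes_(max_bytes) {}

  void add(const std::string& error) {
    ++total_;
    if (kept_.size() >= max_errors_ || bytes_ + error.size() > max_bytes_)
      return;  // drop the message, but keep counting
    bytes_ += error.size();
    kept_.push_back(error);
  }

  // Render the attribute value, noting how many errors were suppressed.
  std::string attributeValue() const {
    std::string out;
    for (const auto& e : kept_) {
      if (!out.empty()) out += "; ";
      out += e;
    }
    std::size_t dropped = total_ - kept_.size();
    if (dropped > 0) out += " (+" + std::to_string(dropped) + " more)";
    return out;
  }

 private:
  std::size_t max_errors_, max_bytes_;
  std::size_t total_ = 0, bytes_ = 0;
  std::vector<std::string> kept_;
};
```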
[jira] [Created] (NIFI-7479) Listening Port property on HandleHttpRequest doesn't work with parameters
David Malament created NIFI-7479: Summary: Listening Port property on HandleHttpRequest doesn't work with parameters Key: NIFI-7479 URL: https://issues.apache.org/jira/browse/NIFI-7479 Project: Apache NiFi Issue Type: Bug Affects Versions: 1.11.4 Reporter: David Malament Attachments: image-2020-05-22-10-29-01-827.png The Listening Port property on the HandleHttpRequest processor clearly indicates that parameters are supported (see screenshot) and the processor starts up successfully, but any requests to the configured port give a "connection refused" error. Switching the property to a hard-coded value or a variable instead of a parameter restores functionality. !image-2020-05-22-10-29-01-827.png! -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (NIFIREG-368) Registry breaks when key password and keystore password differ
[ https://issues.apache.org/jira/browse/NIFIREG-368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17114100#comment-17114100 ] Nathan Gough commented on NIFIREG-368: -- Hi Justin, I just submitted a PR which should resolve this issue. > Registry breaks when key password and keystore password differ > -- > > Key: NIFIREG-368 > URL: https://issues.apache.org/jira/browse/NIFIREG-368 > Project: NiFi Registry > Issue Type: Bug >Affects Versions: 0.5.0 >Reporter: Justin Rittenhouse >Assignee: Nathan Gough >Priority: Major > Time Spent: 10m > Remaining Estimate: 0h > > (Running via Docker) > If nifi.registry.security.keystorePasswd and nifi.registry.security.keyPasswd > differ, the registry fails to boot. Running via Docker, the container shuts > down. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [nifi-registry] thenatog opened a new pull request #282: NIFIREG-368 - Fixed a transposition of the key password and keystore …
thenatog opened a new pull request #282: URL: https://github.com/apache/nifi-registry/pull/282 …password. Simplified the use of these variables a little bit. Added some unit tests. NIFIREG-368 - Added license header. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
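The intent behind the fix — the key and keystore passwords must not be transposed, and by JKS convention an unset key password falls back to the keystore password — can be sketched as below. Names are illustrative, not the actual NiFi Registry code.

```cpp
#include <cassert>
#include <string>

// Illustrative config holder (hypothetical names, not NiFi Registry's).
struct KeystoreConfig {
  std::string keystore_password;
  std::string key_password;  // may be empty
};

// An unset key password defaults to the keystore password; a set one
// must be used as-is. Swapping the two (the bug in NIFIREG-368) makes
// the keystore unreadable whenever the passwords differ.
std::string effectiveKeyPassword(const KeystoreConfig& cfg) {
  return cfg.key_password.empty() ? cfg.keystore_password
                                  : cfg.key_password;
}
```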
[jira] [Assigned] (MINIFICPP-1231) MergeContent processor doesn't properly validate properties
[ https://issues.apache.org/jira/browse/MINIFICPP-1231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Debreceni reassigned MINIFICPP-1231: - Assignee: Adam Debreceni > MergeContent processor doesn't properly validate properties > --- > > Key: MINIFICPP-1231 > URL: https://issues.apache.org/jira/browse/MINIFICPP-1231 > Project: Apache NiFi MiNiFi C++ > Issue Type: Bug >Reporter: Arpad Boda >Assignee: Adam Debreceni >Priority: Major > Fix For: 0.8.0 > > > Properties that require selecting a value ( such as MergeStrategy, > MergeFormat, KeepPath, etc) should have proper validation and allowable > values should be included in manifest. > Property validators should be used. -- This message was sent by Atlassian Jira (v8.3.4#803005)
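The validation the ticket asks for amounts to an allowable-values check whose value set can also be emitted into the agent manifest. A minimal sketch with illustrative names — this is not MiNiFi's actual property API:

```cpp
#include <algorithm>
#include <cassert>
#include <string>
#include <vector>

// A property such as "Merge Format" accepts only a fixed set of values;
// the same set feeds the manifest so UIs can render a dropdown.
class AllowableValuesValidator {
 public:
  explicit AllowableValuesValidator(std::vector<std::string> allowed)
      : allowed_(std::move(allowed)) {}

  bool validate(const std::string& value) const {
    return std::find(allowed_.begin(), allowed_.end(), value) !=
           allowed_.end();
  }

  // Values to list in the manifest.
  const std::vector<std::string>& allowableValues() const {
    return allowed_;
  }

 private:
  std::vector<std::string> allowed_;
};
```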
[jira] [Created] (MINIFICPP-1234) Enable all archive tests in Windows.
Adam Debreceni created MINIFICPP-1234: - Summary: Enable all archive tests in Windows. Key: MINIFICPP-1234 URL: https://issues.apache.org/jira/browse/MINIFICPP-1234 Project: Apache NiFi MiNiFi C++ Issue Type: Bug Reporter: Adam Debreceni Assignee: Adam Debreceni Turn on all archive tests on Windows. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a change in pull request #788: MINIFICPP-1229 - Fix and enable CompressContentTests
adamdebreceni commented on a change in pull request #788: URL: https://github.com/apache/nifi-minifi-cpp/pull/788#discussion_r429183995 ## File path: libminifi/test/archive-tests/CompressContentTests.cpp ## @@ -91,851 +91,826 @@ class ReadCallback: public org::apache::nifi::minifi::InputStreamCallback { int archive_buffer_size_; }; -TEST_CASE("CompressFileGZip", "[compressfiletest1]") { - try { -std::ofstream expectfile; -expectfile.open(EXPECT_COMPRESS_CONTENT); +class CompressDecompressionTestController : public TestController { +protected: + static std::string tempDir; + static std::string raw_content_path; + static std::string compressed_content_path; + static TestController global_controller; +public: + class RawContent { +std::string content_; +RawContent(std::string&& content_): content_(std::move(content_)) {} +friend class CompressDecompressionTestController; + public: +bool operator==(const std::string& actual) const noexcept { + return content_ == actual; +} +bool operator!=(const std::string& actual) const noexcept { + return content_ != actual; +} + }; -std::mt19937 gen(std::random_device { }()); + std::string rawContentPath() const { +return raw_content_path; + } + + std::string compressedPath() const { +return compressed_content_path; + } + + RawContent getRawContent() const { +std::ifstream file; +file.open(raw_content_path, std::ios::binary); +std::string contents((std::istreambuf_iterator<char>(file)), std::istreambuf_iterator<char>()); +file.close(); +return {std::move(contents)}; + } + + virtual ~CompressDecompressionTestController() = 0; +}; + +CompressDecompressionTestController::~CompressDecompressionTestController() {} Review comment: added comment This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (NIFI-7478) support bloom filter in orc format output
[ https://issues.apache.org/jira/browse/NIFI-7478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17113939#comment-17113939 ] dingli123 commented on NIFI-7478: - in {color:#6f42c1}NiFiOrcUtils.java:{color} {code:java} return new OrcFlowFileWriter(flowFileOutputStream, path, conf, inspector, stripeSize, compress, bufferSize, rowIndexStride, getMemoryManager(conf), addBlockPadding, versionValue, null, // no callback encodingStrategy, compressionStrategy, paddingTolerance, blockSizeValue, null, // no Bloom Filter column names bloomFilterFpp); {code} {color:#6f42c1}it seems to ignore the bloom filter column config{color} > support bloom filter in orc format output > - > > Key: NIFI-7478 > URL: https://issues.apache.org/jira/browse/NIFI-7478 > Project: Apache NiFi > Issue Type: Improvement >Reporter: dingli123 >Priority: Major > > The current ORC output doesn't support creating a bloom filter for a column. > In Hive, creating an ORC table can set a bloom filter config, but if Hive > creates an external table over ORC files produced by NiFi, there is no bloom > filter available to speed up queries. > Please add bloom filter support to the ORC file output. -- This message was sent by Atlassian Jira (v8.3.4#803005)
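For background on what the requested feature buys: a bloom filter is a compact probabilistic set that can answer "definitely absent", which lets an ORC reader skip whole stripes during point lookups. A toy illustration — not the ORC library's implementation, and the sizes are arbitrary:

```cpp
#include <bitset>
#include <cassert>
#include <cstddef>
#include <functional>
#include <string>

// Toy bloom filter: k bit positions per key; a clear bit at any of the
// key's positions proves the key was never added.
class BloomFilter {
 public:
  void add(const std::string& key) {
    for (std::size_t i = 0; i < kHashes; ++i) bits_.set(indexFor(key, i));
  }

  // False means certainly absent; true means "maybe present" (false
  // positives are possible, false negatives are not).
  bool mightContain(const std::string& key) const {
    for (std::size_t i = 0; i < kHashes; ++i)
      if (!bits_.test(indexFor(key, i))) return false;
    return true;
  }

 private:
  static constexpr std::size_t kBits = 4096;
  static constexpr std::size_t kHashes = 3;

  // Derive k positions from one std::hash by mixing in a seed suffix.
  static std::size_t indexFor(const std::string& key, std::size_t seed) {
    return std::hash<std::string>{}(key + '#' + std::to_string(seed)) % kBits;
  }

  std::bitset<kBits> bits_;
};
```

In ORC, a per-column, per-row-group filter like this is what allows a predicate such as `col = 'alice'` to skip row groups without decoding them — which is exactly the speedup the reporter is missing when the column list is passed as null.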
[jira] [Created] (NIFI-7478) support bloom filter in orc format output
dingli123 created NIFI-7478: --- Summary: support bloom filter in orc format output Key: NIFI-7478 URL: https://issues.apache.org/jira/browse/NIFI-7478 Project: Apache NiFi Issue Type: Improvement Reporter: dingli123 The current ORC output doesn't support creating a bloom filter for a column. In Hive, creating an ORC table can set a bloom filter config, but if Hive creates an external table over ORC files produced by NiFi, there is no bloom filter available to speed up queries. Please add bloom filter support to the ORC file output. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #788: MINIFICPP-1229 - Fix and enable CompressContentTests
szaszm commented on a change in pull request #788: URL: https://github.com/apache/nifi-minifi-cpp/pull/788#discussion_r429137142 ## File path: libminifi/test/archive-tests/CompressContentTests.cpp ## (quotes the same CompressDecompressionTestController hunk shown earlier in this digest) Review comment: Thanks for teaching me something new. :) This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #788: MINIFICPP-1229 - Fix and enable CompressContentTests
szaszm commented on a change in pull request #788: URL: https://github.com/apache/nifi-minifi-cpp/pull/788#discussion_r429131467 ## File path: libminifi/test/archive-tests/CompressContentTests.cpp ## (quotes the same CompressDecompressionTestController hunk shown earlier in this digest) Review comment: You're right. I did some research and these are indeed the rules. I have never seen this technique before, but I'm fine with it now. Could you write a code comment describing why you want to make this class abstract without actually enforcing the overriding of any behavior? What is the intention behind this design choice?
This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #784: MINIFICPP-1206 - Rework and test ExecutePythonProcessor, add in-place script support
szaszm commented on a change in pull request #784: URL: https://github.com/apache/nifi-minifi-cpp/pull/784#discussion_r425930634 ## File path: libminifi/test/script-tests/PythonExecuteScriptTests.cpp ## @@ -29,6 +29,15 @@ #include "processors/GetFile.h" #include "processors/PutFile.h" +// [ASCII-art "Disclaimer" box:] +// This file contains tests for the "ExecuteScript" processor, +// not for the "ExecutePython" processor. Review comment: I'm against ASCII art in the code. https://isocpp.github.io/CppCoreGuidelines/CppCoreGuidelines#nl3-keep-comments-crisp ## File path: extensions/script/python/ExecutePythonProcessor.cpp ## @@ -46,144 +50,177 @@ core::Relationship ExecutePythonProcessor::Failure("failure", "Script failures") void ExecutePythonProcessor::initialize() { // initialization requires that we do a little leg work prior to onSchedule // so that we can provide manifest our processor identity - std::set<core::Property> properties; - - std::string prop; - getProperty(ScriptFile.getName(), prop); - - properties.insert(ScriptFile); - properties.insert(ModuleDirectory); - setSupportedProperties(properties); - - std::set<core::Relationship> relationships; - relationships.insert(Success); - relationships.insert(Failure); - setSupportedRelationships(std::move(relationships)); - setAcceptAllProperties(); - if (!prop.empty()) { -setProperty(ScriptFile, prop); -std::shared_ptr engine; -python_logger_ = logging::LoggerFactory::getAliasedLogger(getName()); + if (getProperties().empty()) { +setSupportedProperties({ ScriptFile, ScriptBody, ModuleDirectory }); +setAcceptAllProperties(); +setSupportedRelationships({ Success, Failure }); +valid_init_ = false; +return; + } -engine = createEngine(); + python_logger_ = logging::LoggerFactory::getAliasedLogger(getName()); -if (engine == nullptr) { - throw std::runtime_error("No script engine available"); -} + getProperty(ModuleDirectory.getName(), module_directory_); -try { - engine->evalFile(prop); - auto me = shared_from_this(); - triggerDescribe(engine, me); - triggerInitialize(engine, me); + valid_init_ = false; + appendPathForImportModules(); + loadScript(); + try { +if ("" != script_to_exec_) { + std::shared_ptr engine = getScriptEngine(); + engine->eval(script_to_exec_); + auto shared_this = shared_from_this(); + engine->describe(shared_this); + engine->onInitialize(shared_this); + handleEngineNoLongerInUse(std::move(engine)); valid_init_ = true; -} catch (std::exception& exception) { - logger_->log_error("Caught Exception %s", exception.what()); - engine = nullptr; - std::rethrow_exception(std::current_exception()); - valid_init_ = false; -} catch (...) { - logger_->log_error("Caught Exception"); - engine = nullptr; - std::rethrow_exception(std::current_exception()); - valid_init_ = false; } - + } + catch (const std::exception& exception) { +logger_->log_error("Caught Exception: %s", exception.what()); +std::rethrow_exception(std::current_exception()); + } + catch (...) { +logger_->log_error("Caught Exception"); +std::rethrow_exception(std::current_exception()); } } void ExecutePythonProcessor::onSchedule(const std::shared_ptr<core::ProcessContext> &context, const std::shared_ptr<core::ProcessSessionFactory> &sessionFactory) { if (!valid_init_) { -throw std::runtime_error("Could not correctly in initialize " + getName()); - } - context->getProperty(ScriptFile.getName(), script_file_); - context->getProperty(ModuleDirectory.getName(), module_directory_); - if (script_file_.empty() && script_engine_.empty()) { -logger_->log_error("Script File must be defined"); -return; +throw std::runtime_error("Could not correctly initialize " + getName()); } - try { -std::shared_ptr engine; - -// Use an existing engine, if one is available -if (script_engine_q_.try_dequeue(engine)) { - logger_->log_debug("Using available %s script engine instance", script_engine_); -} else { - logger_->log_info("Creating new %s script instance", script_engine_); - logger_->log_info("Approximately %d %s script instances created for this processor", script_engine_q_.size_approx(), script_engine_); - - engine = createEngine(); - - if (engine == nullptr) { -throw std::runtime_error("No script engine available"); - } - - if (!script_file_.empty()) { -engine->evalFile(script_file_); - } else
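The engine-pooling pattern visible in the onTrigger diff — reuse an idle script engine if one can be dequeued, otherwise create a new one, and return it to the pool afterwards — reduces to the sketch below. Names are illustrative; the real code uses a lock-free queue (try_dequeue) rather than std::queue.

```cpp
#include <cassert>
#include <memory>
#include <queue>
#include <string>

// Toy engine: counts evaluations so reuse is observable.
struct ScriptEngine {
  int evaluations = 0;
  void eval(const std::string& /*script*/) { ++evaluations; }
};

class EnginePool {
 public:
  std::shared_ptr<ScriptEngine> acquire() {
    if (!idle_.empty()) {
      auto engine = idle_.front();  // reuse an existing idle engine
      idle_.pop();
      return engine;
    }
    ++created_;  // otherwise create a fresh one (expensive in practice)
    return std::make_shared<ScriptEngine>();
  }

  void release(std::shared_ptr<ScriptEngine> engine) {
    idle_.push(std::move(engine));  // hand the engine back for reuse
  }

  int enginesCreated() const { return created_; }

 private:
  std::queue<std::shared_ptr<ScriptEngine>> idle_;
  int created_ = 0;
};
```

Pooling matters for ExecutePythonProcessor because standing up a Python interpreter per trigger would dominate the processing cost.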
[GitHub] [nifi-minifi-cpp] adamdebreceni opened a new pull request #792: MINIFICPP-1230 - Enable on Win and refactor MergeFileTests
adamdebreceni opened a new pull request #792: URL: https://github.com/apache/nifi-minifi-cpp/pull/792 Thank you for submitting a contribution to Apache NiFi - MiNiFi C++. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken: ### For all changes: - [ ] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? - [ ] Does your PR title start with MINIFICPP-XXXX where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character. - [ ] Has your PR been rebased against the latest commit within the target branch (typically master)? - [ ] Is your initial contribution a single, squashed commit? ### For code changes: - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the LICENSE file? - [ ] If applicable, have you updated the NOTICE file? ### For documentation related changes: - [ ] Have you ensured that format looks appropriate for the output in which it is rendered? ### Note: Please ensure that once the PR is submitted, you check travis-ci for build issues and submit an update to your PR as soon as possible. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a change in pull request #788: MINIFICPP-1229 - Fix and enable CompressContentTests
adamdebreceni commented on a change in pull request #788: URL: https://github.com/apache/nifi-minifi-cpp/pull/788#discussion_r429070044 ## File path: libminifi/test/archive-tests/CompressContentTests.cpp ## (quotes the same CompressDecompressionTestController hunk shown earlier in this digest) Review comment: having a pure virtual method (either declared in the class or inherited) only means that the class cannot be instantiated; you can have an out-of-class definition for such methods and reach them through static dispatch This is an automated message from the Apache Git Service. 
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
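The rule discussed in this review thread — a pure virtual destructor makes the class abstract, yet still needs an out-of-class definition because derived destructors call it non-virtually — compiles as follows:

```cpp
#include <cassert>
#include <string>

// Declaring the destructor pure virtual makes the class abstract even
// though no other behavior must be overridden.
class TestControllerBase {
 public:
  virtual ~TestControllerBase() = 0;  // class cannot be instantiated
  std::string name() const { return "base"; }
};

// Out-of-class definition: required, because every derived destructor
// invokes it through static dispatch as part of destruction.
TestControllerBase::~TestControllerBase() {}

// Derived classes are concrete without overriding anything.
class ConcreteController : public TestControllerBase {};
```

Without the out-of-class definition, destroying any derived object would fail to link.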