[GitHub] [nifi] sjyang18 commented on a change in pull request #4249: NIFI-7409: Azure managed identity support to Azure Datalake processors
sjyang18 commented on a change in pull request #4249: URL: https://github.com/apache/nifi/pull/4249#discussion_r427045233

## File path: nifi-nar-bundles/nifi-azure-bundle/nifi-azure-processors/src/main/java/org/apache/nifi/processors/azure/AbstractAzureDataLakeStorageProcessor.java

## @@ -118,9 +136,14 @@
         .build();

     private static final List<PropertyDescriptor> PROPERTIES = Collections.unmodifiableList(
-            Arrays.asList(AbstractAzureDataLakeStorageProcessor.ACCOUNT_NAME, AbstractAzureDataLakeStorageProcessor.ACCOUNT_KEY,
-                    AbstractAzureDataLakeStorageProcessor.SAS_TOKEN, AbstractAzureDataLakeStorageProcessor.FILESYSTEM,
-                    AbstractAzureDataLakeStorageProcessor.DIRECTORY, AbstractAzureDataLakeStorageProcessor.FILE));
+            Arrays.asList(AbstractAzureDataLakeStorageProcessor.ACCOUNT_NAME,
+                    AbstractAzureDataLakeStorageProcessor.ACCOUNT_KEY,
+                    AbstractAzureDataLakeStorageProcessor.SAS_TOKEN,
+                    AbstractAzureDataLakeStorageProcessor.ENDPOINT_SUFFIX,

Review comment: I have changed the order. I will follow the general rule next time.

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [nifi] sjyang18 commented on a change in pull request #4249: NIFI-7409: Azure managed identity support to Azure Datalake processors
sjyang18 commented on a change in pull request #4249: URL: https://github.com/apache/nifi/pull/4249#discussion_r427044249

## File path: nifi-nar-bundles/nifi-azure-bundle/nifi-azure-processors/src/main/java/org/apache/nifi/processors/azure/AbstractAzureDataLakeStorageProcessor.java

## @@ -134,17 +157,35 @@
     public static Collection<ValidationResult> validateCredentialProperties(final ValidationContext validationContext) {
         final List<ValidationResult> results = new ArrayList<>();
+
+        final boolean useManagedIdentity = validationContext.getProperty(USE_MANAGED_IDENTITY).asBoolean();
         final String accountName = validationContext.getProperty(ACCOUNT_NAME).getValue();
-        final String accountKey = validationContext.getProperty(ACCOUNT_KEY).getValue();
-        final String sasToken = validationContext.getProperty(SAS_TOKEN).getValue();
-
-        if (StringUtils.isNotBlank(accountName)
-                && ((StringUtils.isNotBlank(accountKey) && StringUtils.isNotBlank(sasToken)) || (StringUtils.isBlank(accountKey) && StringUtils.isBlank(sasToken)))) {
-            results.add(new ValidationResult.Builder().subject("Azure Storage Credentials").valid(false)
-                    .explanation("either " + ACCOUNT_NAME.getDisplayName() + " with " + ACCOUNT_KEY.getDisplayName() +
-                            " or " + ACCOUNT_NAME.getDisplayName() + " with " + SAS_TOKEN.getDisplayName() +
-                            " must be specified, not both")
+        final boolean accountKeyIsSet = validationContext.getProperty(ACCOUNT_KEY).isSet();
+        final boolean sasTokenIsSet = validationContext.getProperty(SAS_TOKEN).isSet();
+
+        if (useManagedIdentity) {
+            if (accountKeyIsSet || sasTokenIsSet) {
+                final String msg = String.format(
+                        "('%s') and ('%s' or '%s') fields cannot be set at the same time.",
+                        USE_MANAGED_IDENTITY.getDisplayName(),
+                        ACCOUNT_KEY.getDisplayName(),
+                        SAS_TOKEN.getDisplayName()
+                );
+                results.add(new ValidationResult.Builder().subject("Credentials config").valid(false).explanation(msg).build());
+            }
+        } else {
+            final String accountKey = validationContext.getProperty(ACCOUNT_KEY).getValue();
+            final String sasToken = validationContext.getProperty(SAS_TOKEN).getValue();
+            if (StringUtils.isNotBlank(accountName) && ((StringUtils.isNotBlank(accountKey) && StringUtils.isNotBlank(sasToken))
+                    || (StringUtils.isBlank(accountKey) && StringUtils.isBlank(sasToken)))) {
+                final String msg = String.format("either " + ACCOUNT_NAME.getDisplayName() + " with " + ACCOUNT_KEY.getDisplayName() +
+                        " or " + ACCOUNT_NAME.getDisplayName() + " with " + SAS_TOKEN.getDisplayName() +
+                        " must be specified, not both"
+                );
+                results.add(new ValidationResult.Builder().subject("Credentials Config").valid(false)
+                        .explanation(msg)
                         .build());
+            }

Review comment: changed the logic as you suggested.
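The mutual-exclusion rule discussed in this review can be sketched in isolation. The following is a simplified, hypothetical re-implementation (not the actual NiFi code, which works against `ValidationContext` and `ValidationResult`): managed identity excludes Account Key and SAS Token, and otherwise exactly one of Account Key or SAS Token must accompany the account name.

```java
import java.util.ArrayList;
import java.util.List;

// Simplified stand-in for the credential validation rule under review.
class CredentialValidation {
    static List<String> validate(boolean useManagedIdentity,
                                 String accountName,
                                 String accountKey,
                                 String sasToken) {
        List<String> problems = new ArrayList<>();
        boolean keySet = accountKey != null && !accountKey.trim().isEmpty();
        boolean sasSet = sasToken != null && !sasToken.trim().isEmpty();
        if (useManagedIdentity) {
            // Managed identity replaces explicit credentials entirely.
            if (keySet || sasSet) {
                problems.add("'Use Managed Identity' cannot be combined with 'Account Key' or 'SAS Token'");
            }
        } else if (accountName != null && !accountName.trim().isEmpty() && keySet == sasSet) {
            // Both set, or both blank: exactly one credential type is required.
            problems.add("either 'Account Name' with 'Account Key' or 'Account Name' with 'SAS Token' must be specified, not both");
        }
        return problems;
    }

    public static void main(String[] args) {
        System.out.println(CredentialValidation.validate(true, "acct", "key", null).size());  // 1: conflict
        System.out.println(CredentialValidation.validate(false, "acct", "key", null).size()); // 0: valid
        System.out.println(CredentialValidation.validate(false, "acct", null, null).size());  // 1: neither credential set
    }
}
```

The `keySet == sasSet` comparison compactly captures "both set or both blank", which is the condition the original nested boolean expression spells out.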
[GitHub] [nifi] alopresto commented on pull request #4263: NIFI-7407 Refactored SSL context generation throughout framework and extensions.
alopresto commented on pull request #4263: URL: https://github.com/apache/nifi/pull/4263#issuecomment-630562444

Thanks for finding all the edge cases @thenatog & @markap14. I think this is ready for your +1. I'll then merge.
[jira] [Updated] (NIFI-7331) Grammatical and syntactic errors in log and error messages
[ https://issues.apache.org/jira/browse/NIFI-7331?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

M Tien updated NIFI-7331:
-------------------------
    Status: Patch Available  (was: Open)

> Grammatical and syntactic errors in log and error messages
> ----------------------------------------------------------
>
>                 Key: NIFI-7331
>                 URL: https://issues.apache.org/jira/browse/NIFI-7331
>             Project: Apache NiFi
>          Issue Type: Bug
>          Components: Core Framework
>    Affects Versions: 1.11.4
>            Reporter: Andy LoPresto
>            Assignee: M Tien
>            Priority: Trivial
>              Labels: logging, messaging
>          Time Spent: 10m
>  Remaining Estimate: 0h
>
> Improve the quality of the log and error messages in the OIDC identity provider.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
[GitHub] [nifi-registry] scottyaslan opened a new pull request #279: [NIFIREG-389] remove npx and use npm-force-resolutions
scottyaslan opened a new pull request #279: URL: https://github.com/apache/nifi-registry/pull/279
[GitHub] [nifi] mtien-apache opened a new pull request #4283: NIFI-7331 Fixed grammatical errors in log output.
mtien-apache opened a new pull request #4283: URL: https://github.com/apache/nifi/pull/4283

Thank you for submitting a contribution to Apache NiFi.

Please provide a short description of the PR here:

Description of PR

_Improve log output in OIDC identity provider._

In order to streamline the review of the contribution we ask you to ensure the following steps have been taken:

### For all changes:
- [x] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message?
- [x] Does your PR title start with **NIFI-XXXX** where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character.
- [x] Has your PR been rebased against the latest commit within the target branch (typically `master`)?
- [x] Is your initial contribution a single, squashed commit? _Additional commits in response to PR reviewer feedback should be made on this branch and pushed to allow change tracking. Do not `squash` or use `--force` when pushing to allow for clean monitoring of changes._

### For code changes:
- [x] Have you ensured that the full suite of tests is executed via `mvn -Pcontrib-check clean install` at the root `nifi` folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] Have you verified that the full build is successful on both JDK 8 and JDK 11?
- [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)?
- [ ] If applicable, have you updated the `LICENSE` file, including the main `LICENSE` file under `nifi-assembly`?
- [ ] If applicable, have you updated the `NOTICE` file, including the main `NOTICE` file found under `nifi-assembly`?
- [ ] If adding new Properties, have you added `.displayName` in addition to `.name` (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check GitHub Actions CI for build issues and submit an update to your PR as soon as possible.
[jira] [Created] (NIFI-7468) Improve internal handling of SSL channels
Andy LoPresto created NIFI-7468:
-----------------------------------

             Summary: Improve internal handling of SSL channels
                 Key: NIFI-7468
                 URL: https://issues.apache.org/jira/browse/NIFI-7468
             Project: Apache NiFi
          Issue Type: Bug
          Components: Core Framework, Extensions
    Affects Versions: 1.11.4
            Reporter: Andy LoPresto
            Assignee: Andy LoPresto

While refactoring the TLS protocol version issue in NIFI-7407, I discovered that some processors make use of NiFi custom implementations of {{SSLSocketChannel}}, {{SSLCommsSession}}, and {{SSLSocketChannelInputStream}}. These implementations break on TLSv1.3. Further investigation is needed to determine why these custom implementations were provided originally, whether they are still required, and why they do not handle TLSv1.3 successfully.

Diagnostic error:

{code}
Error reading from channel due to Tag mismatch!: javax.net.ssl.SSLException: Tag mismatch!
{code}
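A first diagnostic step for issues like this is to check which TLS versions the running JVM actually enables, since a custom channel wrapper must handle every one of them. A minimal sketch (plain JSSE, not NiFi code; on Java 11+ the defaults include TLSv1.3):

```java
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLEngine;
import java.security.NoSuchAlgorithmException;
import java.util.Arrays;
import java.util.List;

// Lists the TLS protocol versions enabled by default in this JVM.
// A wrapper such as SSLSocketChannel must cope with each of these,
// including TLSv1.3's different handshake/close semantics.
class ListTlsProtocols {
    static List<String> enabledProtocols() {
        try {
            SSLEngine engine = SSLContext.getDefault().createSSLEngine();
            return Arrays.asList(engine.getEnabledProtocols());
        } catch (NoSuchAlgorithmException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        // e.g. [TLSv1.3, TLSv1.2] on Java 11+ (exact list is JVM-dependent)
        System.out.println(enabledProtocols());
    }
}
```

If TLSv1.3 appears here but the custom channel only implements the TLSv1.2 record flow, failures like the "Tag mismatch!" above are plausible, since TLSv1.3 uses AEAD ciphers whose authentication tags fail on misframed records.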
[GitHub] [nifi] alopresto commented on pull request #4263: NIFI-7407 Refactored SSL context generation throughout framework and extensions.
alopresto commented on pull request #4263: URL: https://github.com/apache/nifi/pull/4263#issuecomment-630485289

Running a full build this time because some of the tests were failing on ordering.
[GitHub] [nifi] sjyang18 commented on a change in pull request #4265: NIFI-7434: Endpoint suffix property in AzureStorageAccount NIFI processors
sjyang18 commented on a change in pull request #4265: URL: https://github.com/apache/nifi/pull/4265#discussion_r426940750

## File path: nifi-nar-bundles/nifi-azure-bundle/nifi-azure-processors/src/main/java/org/apache/nifi/processors/azure/storage/utils/AzureStorageUtils.java

## @@ -85,6 +85,22 @@
         .sensitive(true)
         .build();
+
+    public static final PropertyDescriptor ENDPOINT_SUFFIX = new PropertyDescriptor.Builder()
+            .name("storage-endpoint-suffix")
+            .displayName("Storage Endpoint Suffix")
+            .description(
+                    "Storage accounts in public Azure always use a common FQDN suffix. " +
+                    "Override this endpoint suffix with a different suffix in certain circumstances (like Azure Stack or non-public Azure regions). " +
+                    "The preferred way is to configure them through a controller service specified in the Storage Credentials property. " +
+                    "The controller service can provide a common/shared configuration for multiple/all Azure processors. Furthermore, the credentials " +
+                    "can also be looked up dynamically with the 'Lookup' version of the service.")
+            .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)

Review comment: Given https://docs.microsoft.com/en-us/azure/storage/common/storage-configure-connection-string#specify-an-http-proxy, the property for NIFI-7386 would be 'DevelopmentStorageProxyUri' or 'development-storage-proxy-uri' in my opinion, and we should call the correct API in this case (i.e. CloudStorageAccount.getDevelopmentStorageAccount(final URI proxyUri)). Also, "EndpointSuffix" is one of the connection properties shown in the Azure portal, and thus it is the obvious configuration in a non-public Azure cloud environment.
[jira] [Updated] (NIFI-7446) Fail when the specified path is a directory in FetchAzureDataLakeStorage
[ https://issues.apache.org/jira/browse/NIFI-7446?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Peter Turcsanyi updated NIFI-7446:
----------------------------------
    Resolution: Fixed
        Status: Resolved  (was: Patch Available)

> Fail when the specified path is a directory in FetchAzureDataLakeStorage
> -------------------------------------------------------------------------
>
>                 Key: NIFI-7446
>                 URL: https://issues.apache.org/jira/browse/NIFI-7446
>             Project: Apache NiFi
>          Issue Type: Improvement
>          Components: Extensions
>            Reporter: Peter Turcsanyi
>            Assignee: Peter Gyori
>            Priority: Major
>              Labels: azure
>          Time Spent: 2h
>  Remaining Estimate: 0h
>
> FetchAzureDataLakeStorage currently returns an empty FlowFile without error when the specified path points to a directory on ADLS (instead of a file). FetchAzureDataLakeStorage should fail in this case.
> PathProperties.isDirectory() can be used to check if the retrieved entity is a directory or a file (available from azure-storage-file-datalake 12.1.x).
[jira] [Commented] (NIFI-7446) Fail when the specified path is a directory in FetchAzureDataLakeStorage
[ https://issues.apache.org/jira/browse/NIFI-7446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17110644#comment-17110644 ]

ASF subversion and git services commented on NIFI-7446:
-------------------------------------------------------

Commit 9aae58f1178cacb6ee4814f26288bf8ca5150d71 in nifi's branch refs/heads/master from Peter Gyori
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=9aae58f ]

NIFI-7446: FetchAzureDataLakeStorage processor now throws exception when the specified path points to a directory
A newer version (12.1.1) of azure-storage-file-datalake is imported.

This closes #4273.

Signed-off-by: Peter Turcsanyi

> Fail when the specified path is a directory in FetchAzureDataLakeStorage
> -------------------------------------------------------------------------
>
>                 Key: NIFI-7446
>                 URL: https://issues.apache.org/jira/browse/NIFI-7446
>             Project: Apache NiFi
>          Issue Type: Improvement
>          Components: Extensions
>            Reporter: Peter Turcsanyi
>            Assignee: Peter Gyori
>            Priority: Major
>              Labels: azure
>          Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> FetchAzureDataLakeStorage currently returns an empty FlowFile without error when the specified path points to a directory on ADLS (instead of a file). FetchAzureDataLakeStorage should fail in this case.
> PathProperties.isDirectory() can be used to check if the retrieved entity is a directory or a file (available from azure-storage-file-datalake 12.1.x).
[GitHub] [nifi] asfgit closed pull request #4273: NIFI-7446: Fail when the specified path is a directory in FetchAzureDataLakeStorage
asfgit closed pull request #4273: URL: https://github.com/apache/nifi/pull/4273
[GitHub] [nifi] turcsanyip commented on a change in pull request #4273: NIFI-7446: Fail when the specified path is a directory in FetchAzureDataLakeStorage
turcsanyip commented on a change in pull request #4273: URL: https://github.com/apache/nifi/pull/4273#discussion_r426910774

## File path: nifi-nar-bundles/nifi-azure-bundle/nifi-azure-processors/src/main/java/org/apache/nifi/processors/azure/storage/FetchAzureDataLakeStorage.java

## @@ -67,6 +67,10 @@ public void onTrigger(ProcessContext context, ProcessSession session) throws Pro
     final DataLakeDirectoryClient directoryClient = dataLakeFileSystemClient.getDirectoryClient(directory);
     final DataLakeFileClient fileClient = directoryClient.getFileClient(fileName);
+
+    if (fileClient.getProperties().isDirectory()) {

Review comment: Thanks, merging to master.
[GitHub] [nifi] MuazmaZ commented on a change in pull request #4273: NIFI-7446: Fail when the specified path is a directory in FetchAzureDataLakeStorage
MuazmaZ commented on a change in pull request #4273: URL: https://github.com/apache/nifi/pull/4273#discussion_r426909105

## File path: nifi-nar-bundles/nifi-azure-bundle/nifi-azure-processors/src/main/java/org/apache/nifi/processors/azure/storage/FetchAzureDataLakeStorage.java

## @@ -67,6 +67,10 @@ public void onTrigger(ProcessContext context, ProcessSession session) throws Pro
     final DataLakeDirectoryClient directoryClient = dataLakeFileSystemClient.getDirectoryClient(directory);
     final DataLakeFileClient fileClient = directoryClient.getFileClient(fileName);
+
+    if (fileClient.getProperties().isDirectory()) {

Review comment: Looks good to me after rebuild. +1

Failure to fetch file from Azure Data Lake Storage: org.apache.nifi.processor.exception.ProcessException: File Name (xyz) points to a directory. Full path: xyz/xyz
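The guard under review rejects paths that resolve to directories instead of files. A stripped-down illustration of the same control flow (hypothetical names; the real processor consults `fileClient.getProperties().isDirectory()` and throws NiFi's `ProcessException`):

```java
// Simplified stand-in for the PR's directory check (illustrative only).
class FetchGuard {
    // "isDirectory" here stands in for the value returned by
    // fileClient.getProperties().isDirectory() in the real processor.
    static String fetch(String fileName, boolean isDirectory) {
        if (isDirectory) {
            // Fail loudly instead of emitting an empty FlowFile.
            throw new IllegalStateException(
                    "File Name (" + fileName + ") points to a directory.");
        }
        return "contents of " + fileName;
    }

    public static void main(String[] args) {
        System.out.println(fetch("testFile.txt", false));
        try {
            fetch("xyz", true);
        } catch (IllegalStateException e) {
            System.out.println("Failed: " + e.getMessage());
        }
    }
}
```

The key design point matches the Jira description: silently returning an empty FlowFile hides misconfiguration, so the processor now routes such FlowFiles to failure with an explicit message.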
[GitHub] [nifi] sjyang18 commented on a change in pull request #4265: NIFI-7434: Endpoint suffix property in AzureStorageAccount NIFI processors
sjyang18 commented on a change in pull request #4265: URL: https://github.com/apache/nifi/pull/4265#discussion_r426906518

## File path: nifi-nar-bundles/nifi-azure-bundle/nifi-azure-processors/src/main/java/org/apache/nifi/processors/azure/storage/utils/AzureStorageUtils.java

## @@ -85,6 +85,22 @@
         .sensitive(true)
         .build();
+
+    public static final PropertyDescriptor ENDPOINT_SUFFIX = new PropertyDescriptor.Builder()
+            .name("storage-endpoint-suffix")
+            .displayName("Storage Endpoint Suffix")
+            .description(
+                    "Storage accounts in public Azure always use a common FQDN suffix. " +
+                    "Override this endpoint suffix with a different suffix in certain circumstances (like Azure Stack or non-public Azure regions). " +
+                    "The preferred way is to configure them through a controller service specified in the Storage Credentials property. " +
+                    "The controller service can provide a common/shared configuration for multiple/all Azure processors. Furthermore, the credentials " +
+                    "can also be looked up dynamically with the 'Lookup' version of the service.")
+            .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)

Review comment: This PR addresses the need to support Azure Stack and Azure Gov cloud. This PR binds to the CloudStorageAccount(final StorageCredentials storageCredentials, final boolean useHttps, final String endpointSuffix, String accountName) API, which has an equivalent common pattern and endpoint suffix support in both the Datalake SDK and the Storage SDK. Meanwhile, to bind to the development environment, I see a different SDK API: CloudStorageAccount.getDevelopmentStorageAccount(final URI proxyUri), which uses the fixed DEVSTORE_ACCOUNT_NAME and DEVSTORE_ACCOUNT_KEY:

private static final String DEVSTORE_ACCOUNT_KEY = "Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==";
private static final String DEVSTORE_ACCOUNT_NAME = "devstoreaccount1";

This means that you don't even need to set the account name and key to bind to the development environment.

@esecules and @jfrazee, how critical is it to address NIFI-7386 in this PR? We could address the problem with 'development-environment-uri' rather than endpoint-suffix. Using one property for the endpoint suffix in one case and for an endpoint URI in another case would make it confusing, in my opinion.
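For context, the endpoint suffix only changes the FQDN from which service endpoints are built. A minimal sketch of that derivation (illustrative helper, not the Azure SDK; the endpoint pattern follows Azure's documented connection-string format, where `core.windows.net` is the public-cloud default):

```java
// Illustrates how an EndpointSuffix changes the derived blob service endpoint.
class EndpointSuffixDemo {
    static String blobEndpoint(String accountName, String endpointSuffix) {
        String suffix = (endpointSuffix == null || endpointSuffix.isEmpty())
                ? "core.windows.net"        // public-Azure default suffix
                : endpointSuffix;           // e.g. Azure Gov or Azure Stack
        return "https://" + accountName + ".blob." + suffix;
    }

    public static void main(String[] args) {
        // prints https://myaccount.blob.core.windows.net
        System.out.println(blobEndpoint("myaccount", null));
        // prints https://myaccount.blob.core.usgovcloudapi.net
        System.out.println(blobEndpoint("myaccount", "core.usgovcloudapi.net"));
    }
}
```

This is why a single suffix property works for Azure Gov and Azure Stack, while the development emulator, which uses a full proxy URI and fixed well-known credentials, does not fit the same shape, supporting the comment's argument for a separate property.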
[GitHub] [nifi] thenatog commented on pull request #4263: NIFI-7407 Refactored SSL context generation throughout framework and extensions.
thenatog commented on pull request #4263: URL: https://github.com/apache/nifi/pull/4263#issuecomment-630447798

Looks like there might be a small issue with JDK11 tests; if we can fix that, I'll +1. Thanks for the huge contribution, Andy! Definitely valuable changes here - the SSL Contexts have needed this improvement for a long time.
[GitHub] [nifi] alopresto commented on pull request #4263: NIFI-7407 Refactored SSL context generation throughout framework and extensions.
alopresto commented on pull request #4263: URL: https://github.com/apache/nifi/pull/4263#issuecomment-630447908

There was another Java 11 unit test failure. Resolved that.
[GitHub] [nifi] alopresto commented on pull request #4263: NIFI-7407 Refactored SSL context generation throughout framework and extensions.
alopresto commented on pull request #4263: URL: https://github.com/apache/nifi/pull/4263#issuecomment-630446175

I decided to do the S2S refactor in a separate Jira as it grew larger than anticipated. @thenatog if you're satisfied with what's here, please give a +1 and I'll merge. Thanks.
[jira] [Created] (NIFI-7467) Improve S2S peer retrieval process
Andy LoPresto created NIFI-7467:
-----------------------------------

             Summary: Improve S2S peer retrieval process
                 Key: NIFI-7467
                 URL: https://issues.apache.org/jira/browse/NIFI-7467
             Project: Apache NiFi
          Issue Type: Bug
          Components: Core Framework, Security
    Affects Versions: 1.11.4
            Reporter: Andy LoPresto
            Assignee: Andy LoPresto

During investigation for NIFI-7407, [~thenatog] and I discovered a scenario where site-to-site peer retrieval was sub-optimal. Some of this was related to hosting a secure cluster with multiple nodes on the same physical/virtual server, introducing hostname and SAN resolution problems. In other instances, the retrieval has a nested {{NullPointerException}}.

{code}
2020-05-14 18:44:39,140 INFO [Clustering Tasks Thread-2] o.a.n.c.c.ClusterProtocolHeartbeater Heartbeat created at 2020-05-14 18:44:39,124 and sent to node3.nifi:11443 at 2020-05-14 18:44:39,140; send took 15 millis
2020-05-14 18:44:41,789 WARN [Http Site-to-Site PeerSelector] o.apache.nifi.remote.client.PeerSelector Could not communicate with node1.nifi:9443 to determine which nodes exist in the remote NiFi cluster, due to javax.net.ssl.SSLPeerUnverifiedException: Certificate for doesn't match any of the subject alternative names: [node3.nifi]
2020-05-14 18:44:41,789 WARN [Http Site-to-Site PeerSelector] o.apache.nifi.remote.client.PeerSelector org.apache.nifi.remote.client.PeerSelector@57dfcccd Unable to refresh Remote Group's peers due to Unable to communicate with remote NiFi cluster in order to determine which nodes exist in the remote cluster
2020-05-14 18:44:44,159 INFO [Clustering Tasks Thread-2] o.a.n.c.c.ClusterProtocolHeartbeater Heartbeat created at 2020-05-14 18:44:44,146 and sent to node3.nifi:11443 at 2020-05-14 18:44:44,159; send took 13 millis
2020-05-14 18:44:46,791 WARN [Timer-Driven Process Thread-10] o.apache.nifi.remote.client.PeerSelector Could not communicate with node1.nifi:9443 to determine which nodes exist in the remote NiFi cluster, due to javax.net.ssl.SSLPeerUnverifiedException: Certificate for doesn't match any of the subject alternative names: [node3.nifi]
2020-05-14 18:44:46,791 WARN [Timer-Driven Process Thread-10] o.apache.nifi.remote.client.PeerSelector org.apache.nifi.remote.client.PeerSelector@57dfcccd Unable to refresh Remote Group's peers due to Unable to communicate with remote NiFi cluster in order to determine which nodes exist in the remote cluster
2020-05-14 18:44:46,791 INFO [Timer-Driven Process Thread-10] o.a.nifi.remote.client.http.HttpClient Couldn't find a valid peer to communicate with.
2020-05-14 18:44:46,817 WARN [Http Site-to-Site PeerSelector] o.apache.nifi.remote.client.PeerSelector Could not communicate with node1.nifi:9443 to determine which nodes exist in the remote NiFi cluster, due to javax.net.ssl.SSLPeerUnverifiedException: Certificate for doesn't match any of the subject alternative names: [node3.nifi]
2020-05-14 18:44:46,817 WARN [Http Site-to-Site PeerSelector] o.apache.nifi.remote.client.PeerSelector org.apache.nifi.remote.client.PeerSelector@57dfcccd Unable to refresh Remote Group's peers due to Unable to communicate with remote NiFi cluster in order to determine which nodes exist in the remote cluster
2020-05-14 18:44:49,178 INFO [Clustering Tasks Thread-2] o.a.n.c.c.ClusterProtocolHeartbeater Heartbeat created at 2020-05-14 18:44:49,164 and sent to node3.nifi:11443 at 2020-05-14 18:44:49,178; send took 13 millis
2020-05-14 18:44:51,332 INFO [Timer-Driven Process Thread-6] o.a.n.remote.StandardRemoteProcessGroup Successfully refreshed Flow Contents for RemoteProcessGroup[https://node1.nifi:9441/nifi]; updated to reflect 1 Input Ports [InputPort[name=From Self, targetId=15f64e5b-0172-1000--f134169a]] and 0 Output Ports [OutputPort[name=From Self, targetId=15f64e5b-0172-1000--f134169a]]
2020-05-14 18:44:51,833 WARN [Http Site-to-Site PeerSelector] o.apache.nifi.remote.client.PeerSelector Could not communicate with node1.nifi:9443 to determine which nodes exist in the remote NiFi cluster, due to javax.net.ssl.SSLPeerUnverifiedException: Certificate for doesn't match any of the subject alternative names: [node3.nifi]
2020-05-14 18:44:51,833 WARN [Http Site-to-Site PeerSelector] o.apache.nifi.remote.client.PeerSelector org.apache.nifi.remote.client.PeerSelector@57dfcccd Unable to refresh Remote Group's peers due to Unable to communicate with remote NiFi cluster in order to determine which nodes exist in the remote cluster
{code}
[GitHub] [nifi] turcsanyip commented on a change in pull request #4272: NIFI-7336: Add tests for DeleteAzureDataLakeStorage
turcsanyip commented on a change in pull request #4272: URL: https://github.com/apache/nifi/pull/4272#discussion_r426894009

## File path: nifi-nar-bundles/nifi-azure-bundle/nifi-azure-processors/src/test/java/org/apache/nifi/processors/azure/storage/ITDeleteAzureDataLakeStorage.java

## @@ -16,44 +16,388 @@
  */
 package org.apache.nifi.processors.azure.storage;

+import com.azure.storage.file.datalake.DataLakeDirectoryClient;
+import com.azure.storage.file.datalake.DataLakeFileClient;
+import com.azure.storage.file.datalake.models.DataLakeStorageException;
 import org.apache.nifi.processor.Processor;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.provenance.ProvenanceEventType;
 import org.apache.nifi.util.MockFlowFile;
-import org.junit.Before;
-import org.junit.Ignore;
 import org.junit.Test;

-import java.util.List;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.Map;

-public class ITDeleteAzureDataLakeStorage extends AbstractAzureBlobStorageIT {
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
+
+public class ITDeleteAzureDataLakeStorage extends AbstractAzureDataLakeStorageIT {

     @Override
     protected Class<? extends Processor> getProcessorClass() {
         return DeleteAzureDataLakeStorage.class;
     }

-    @Before
-    public void setUp() {
-        runner.setProperty(DeleteAzureDataLakeStorage.FILE, TEST_FILE_NAME);
+    @Test
+    public void testDeleteFileFromRoot() {
+        // GIVEN
+        String directory = "";
+        String filename = "testFile.txt";
+        String fileContent = "AzureFileContent";
+        String inputFlowFileContent = "InputFlowFileContent";
+
+        uploadFile(directory, filename, fileContent);
+
+        // WHEN
+        // THEN
+        testSuccessfulDelete(fileSystemName, directory, filename, inputFlowFileContent, inputFlowFileContent);
+    }
+
+    @Test
+    public void testDeleteFileFromDirectory() {
+        // GIVEN
+        String directory = "TestDirectory";
+        String filename = "testFile.txt";
+        String fileContent = "AzureFileContent";
+        String inputFlowFileContent = "InputFlowFileContent";
+
+        createDirectoryAndUploadFile(directory, filename, fileContent);
+
+        // WHEN
+        // THEN
+        testSuccessfulDelete(fileSystemName, directory, filename, inputFlowFileContent, inputFlowFileContent);
+    }
+
+    @Test
+    public void testDeleteFileFromDeepDirectory() {
+        // GIVEN
+        String directory = "Directory01/Directory02/Directory03/Directory04/Directory05/Directory06/Directory07/"
+                + "Directory08/Directory09/Directory10/Directory11/Directory12/Directory13/Directory14/Directory15/"
+                + "Directory16/Directory17/Directory18/Directory19/Directory20/TestDirectory";
+        String filename = "testFile.txt";
+        String fileContent = "AzureFileContent";
+        String inputFlowFileContent = "InputFlowFileContent";
+
+        createDirectoryAndUploadFile(directory, filename, fileContent);
+
+        // WHEN
+        // THEN
+        testSuccessfulDelete(fileSystemName, directory, filename, inputFlowFileContent, inputFlowFileContent);
+    }
+
+    @Test
+    public void testDeleteFileWithWhitespaceInFilename() {
+        // GIVEN
+        String directory = "TestDirectory";
+        String filename = "A test file.txt";
+        String fileContent = "AzureFileContent";
+        String inputFlowFileContent = "InputFlowFileContent";
+
+        createDirectoryAndUploadFile(directory, filename, fileContent);
+
+        // WHEN
+        // THEN
+        testSuccessfulDelete(fileSystemName, directory, filename, inputFlowFileContent, inputFlowFileContent);
+    }
+
+    @Test
+    public void testDeleteFileWithWhitespaceInDirectoryName() {
+        // GIVEN
+        String directory = "A Test Directory";
+        String filename = "testFile.txt";
+        String fileContent = "AzureFileContent";
+        String inputFlowFileContent = "InputFlowFileContent";
+
+        createDirectoryAndUploadFile(directory, filename, fileContent);
+
+        // WHEN
+        // THEN
+        testSuccessfulDelete(fileSystemName, directory, filename, inputFlowFileContent, inputFlowFileContent);
+    }
+
+    @Test
+    public void testDeleteEmptyDirectory() {
+        // GIVEN
+        String parentDirectory = "ParentDirectory";
+        String childDirectory = "ChildDirectory";
+        String inputFlowFileContent = "InputFlowFileContent";
+
+        fileSystemClient.createDirectory(parentDirectory + "/" + childDirectory);
+
+        // WHEN
+        // THEN
+        testSuccessfulDelete(fileSystemName, parentDirectory, childDirectory, inputFlowFileContent, inputFlowFileContent);
+    }
+
+    @Test
+    public void testDeleteFileCaseSensitiveFilename() {
+        // GIVEN
+        String directory = "TestDirectory";
+
[GitHub] [nifi-minifi-cpp] arpadboda closed pull request #787: MINIFICPP-1226 improve C2 heartbeat performance
arpadboda closed pull request #787: URL: https://github.com/apache/nifi-minifi-cpp/pull/787
[jira] [Commented] (NIFI-6633) Allow user to copy a parameter context
[ https://issues.apache.org/jira/browse/NIFI-6633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17110584#comment-17110584 ] Andrew Grande commented on NIFI-6633: - I don't think it's the use case I'm after here. A NiFi cluster talking to both environments for sync, bridging, reconciliation purposes etc. has a need for multiple contexts. A developer experience would also call for a quick way to clone the context and experiment, as this entity is not necessarily versioned in the registry. > Allow user to copy a parameter context > -- > > Key: NIFI-6633 > URL: https://issues.apache.org/jira/browse/NIFI-6633 > Project: Apache NiFi > Issue Type: New Feature > Components: Core UI >Reporter: Rob Fellows >Priority: Major > > It would be nice to be able to copy/duplicate an existing parameter context. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (NIFI-6633) Allow user to copy a parameter context
[ https://issues.apache.org/jira/browse/NIFI-6633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17110580#comment-17110580 ] Pierre Villard commented on NIFI-6633: -- The expected workflow would be to version (in the NiFi Registry) the flow you created in your development environment and check it out in the other environment. You'd have your parameter context coming with your versioned flow, where you can change the values based on your target environment. If someone implements this, I'd recommend that duplicated sensitive parameters have empty values.
[GitHub] [nifi] markap14 closed pull request #4264: NIFI-7380 - fix for controller service validation in NiFi Stateless
markap14 closed pull request #4264: URL: https://github.com/apache/nifi/pull/4264
[jira] [Updated] (NIFI-7380) NiFi Stateless does not validate CS correctly
[ https://issues.apache.org/jira/browse/NIFI-7380?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Payne updated NIFI-7380: - Fix Version/s: 1.12.0 Resolution: Fixed Status: Resolved (was: Patch Available) > NiFi Stateless does not validate CS correctly > - > > Key: NIFI-7380 > URL: https://issues.apache.org/jira/browse/NIFI-7380 > Project: Apache NiFi > Issue Type: Bug > Components: NiFi Stateless >Reporter: Pierre Villard >Assignee: Pierre Villard >Priority: Critical > Labels: nifi-stateless, stateless > Fix For: 1.12.0 > > Attachments: nifi-7380-confluent-schema-registry-exception.txt, > nifi-7380-exception.txt, nifi-7380-flow-config.json, > nifi-7380-jolt-exception.txt, stateless.json > > Time Spent: 50m > Remaining Estimate: 0h > > When the flow executed with the NiFi Stateless running mode contains a > Controller Service with required properties, it'll fail, as it does not take > the configuration into account when performing the validation of the > component. > In *StatelessControllerServiceLookup*, the method > {code:java} > public void enableControllerServices(final VariableRegistry variableRegistry) > {code} > first validates the configured controller services and calls > {code:java} > public Collection validate(...){code} > This will create a *StatelessProcessContext* object and a > *StatelessValidationContext* object. Then the method *validate* is called on > the controller service, passing the validation context as an argument. It will > go through the properties of the controller service and will retrieve the > configured value of the properties as set in the *StatelessProcessContext* > object. The problem is that the *properties* map in the Stateless Process > Context that is supposed to contain the configured values is never set. As such, any > required property in a Controller Service is considered to be configured with a > null value if there is no default value. This will cause the component > validation to fail, and the flow won't be executed. > I opened a PR with a solution that solves this issue. However, I'm not > sure this issue doesn't affect other scenarios, and a better approach may be > necessary (more in line with what is done in NiFi core).
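The failure mode described above (a required property with no default being seen as null because the context's properties map was never populated) can be sketched with a minimal, self-contained example. The class and method names below are hypothetical stand-ins, not NiFi's actual `PropertyDescriptor`/`ValidationContext` API:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal stand-ins for a property descriptor and a validation context.
// Illustrative only; NiFi's real classes are considerably richer.
class MiniProperty {
    final String name;
    final boolean required;
    final String defaultValue;

    MiniProperty(String name, boolean required, String defaultValue) {
        this.name = name;
        this.required = required;
        this.defaultValue = defaultValue;
    }
}

class MiniContext {
    // The bug described in NIFI-7380: this map was never filled
    // with the values configured in the flow.
    final Map<String, String> properties = new HashMap<>();

    String getValue(MiniProperty p) {
        // Fall back to the default value when nothing is configured.
        return properties.getOrDefault(p.name, p.defaultValue);
    }

    boolean isValid(MiniProperty p) {
        // A required property with a null effective value fails validation.
        return !p.required || getValue(p) != null;
    }
}
```

With an unpopulated map, `isValid` returns false for every required property that lacks a default value, which is exactly the symptom reported; populating the map from the flow configuration restores validation.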
[GitHub] [nifi] markap14 commented on pull request #4264: NIFI-7380 - fix for controller service validation in NiFi Stateless
markap14 commented on pull request #4264: URL: https://github.com/apache/nifi/pull/4264#issuecomment-630410521 Thanks for the PR @matcauf ! Code looks good. Was able to verify validation was taking place now. +1 merged to master!
[jira] [Commented] (NIFI-7380) NiFi Stateless does not validate CS correctly
[ https://issues.apache.org/jira/browse/NIFI-7380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17110579#comment-17110579 ] ASF subversion and git services commented on NIFI-7380: --- Commit 179675f0b42e61ce125932fcf57086024d5e6f57 in nifi's branch refs/heads/master from Matthieu Cauffiez [ https://gitbox.apache.org/repos/asf?p=nifi.git;h=179675f ] NIFI-7380 - fix for controller service validation in NiFi Stateless This closes #4264. Signed-off-by: Matthieu Cauffiez Signed-off-by: Mark Payne
[jira] [Commented] (NIFI-6633) Allow user to copy a parameter context
[ https://issues.apache.org/jira/browse/NIFI-6633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17110575#comment-17110575 ] Andrew Grande commented on NIFI-6633: - Hi, this would be a great usability improvement. As a flow designer, I'd crafted a parameter context with a dozen values. Now, if I want to quickly test the flow against a different env (e.g. different data center in a remote geo), the natural motion is to clone a parameter context and update a few settings which need to change. Expected: some kind of parameter context browser and/or an action button to clone it (prompt for a new name). A user can then proceed and associate this new context with a PG as needed (same PG or a cloned one, for example). Actual: at the moment there's no experience like that, one can only create a new param context, meaning having to re-do all the work and enter every param again. Very error prone when done in the UI, too.
[GitHub] [nifi] MuazmaZ commented on a change in pull request #4273: NIFI-7446: Fail when the specified path is a directory in FetchAzureDataLakeStorage
MuazmaZ commented on a change in pull request #4273: URL: https://github.com/apache/nifi/pull/4273#discussion_r426854519

## File path: nifi-nar-bundles/nifi-azure-bundle/nifi-azure-processors/src/main/java/org/apache/nifi/processors/azure/storage/FetchAzureDataLakeStorage.java
## @@ -67,6 +67,10 @@ public void onTrigger(ProcessContext context, ProcessSession session) throws Pro
 final DataLakeDirectoryClient directoryClient = dataLakeFileSystemClient.getDirectoryClient(directory);
 final DataLakeFileClient fileClient = directoryClient.getFileClient(fileName);
+if (fileClient.getProperties().isDirectory()) {

Review comment: @turcsanyip re-testing it right now. I will update shortly.
[jira] [Created] (NIFI-7466) UX - move a part of a flow in a process group
Pierre Villard created NIFI-7466: Summary: UX - move a part of a flow in a process group Key: NIFI-7466 URL: https://issues.apache.org/jira/browse/NIFI-7466 Project: Apache NiFi Issue Type: Improvement Components: Core UI Reporter: Pierre Villard Let's say I have a flow: {code:java} InputProcessor -> a flow doing things -> OutputProcessor1 \-> another flow doing things -> OutputProcessor2 {code} I'd like to be able to select the "flow doing things" and have it in a process group without deleting the relationships with the InputProcessor and the OutputProcessor1 and recreating everything afterwards (adding input port, output port, and adding four relationships). It'd automatically create the process group with the input port(s) and the output port(s) (with default name being like __, or something simpler like input/output_) and would create the appropriate relationships. *That would save a lot of manual operations.*
[jira] [Assigned] (NIFI-7333) OIDC provider should use NiFi keystore & truststore
[ https://issues.apache.org/jira/browse/NIFI-7333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] M Tien reassigned NIFI-7333: Assignee: M Tien (was: Troy Melhase) > OIDC provider should use NiFi keystore & truststore > --- > > Key: NIFI-7333 > URL: https://issues.apache.org/jira/browse/NIFI-7333 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework, Security >Affects Versions: 1.11.4 >Reporter: Andy LoPresto >Assignee: M Tien >Priority: Major > Labels: keystore, oidc, security, tls > > The OIDC provider uses generic HTTPS requests to the OIDC IdP, but does not > configure these requests to use the NiFi keystore or truststore. Rather, it > uses the default JVM keystore and truststore, which leads to difficulty > debugging PKIX and other TLS negotiation errors. It should be switched to use > the NiFi keystore and truststore as other NiFi framework services do.
[jira] [Assigned] (NIFI-7332) Improve communication to user when OIDC response does not contain usable claims
[ https://issues.apache.org/jira/browse/NIFI-7332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] M Tien reassigned NIFI-7332: Assignee: M Tien (was: Troy Melhase) > Improve communication to user when OIDC response does not contain usable > claims > --- > > Key: NIFI-7332 > URL: https://issues.apache.org/jira/browse/NIFI-7332 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework, Security >Affects Versions: 1.11.4 >Reporter: Andy LoPresto >Assignee: M Tien >Priority: Major > Labels: oidc, security > > The messaging displayed to the user/admin does not clearly indicate the > problem if the OIDC response does not contain a claim that NiFi is configured > to use (i.e. NiFi expects an {{email}} claim but the user does not have an > email configured on the OIDC IdP).
[jira] [Assigned] (NIFI-7331) Grammatical and syntactic errors in log and error messages
[ https://issues.apache.org/jira/browse/NIFI-7331?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] M Tien reassigned NIFI-7331: Assignee: M Tien (was: Troy Melhase) > Grammatical and syntactic errors in log and error messages > -- > > Key: NIFI-7331 > URL: https://issues.apache.org/jira/browse/NIFI-7331 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 1.11.4 >Reporter: Andy LoPresto >Assignee: M Tien >Priority: Trivial > Labels: logging, messaging > > Improve the quality of the log and error messages in the OIDC identity > provider.
[GitHub] [nifi] markap14 commented on a change in pull request #4280: NIFI-7462: QueryRecord cast functions for Choice datatypes
markap14 commented on a change in pull request #4280: URL: https://github.com/apache/nifi/pull/4280#discussion_r426851027

## File path: nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/QueryRecord.java
## @@ -396,7 +396,7 @@ public void process(final OutputStream out) throws IOException {
 session.remove(createdFlowFiles);
 session.transfer(original, REL_FAILURE);
 } catch (final Exception e) {
-getLogger().error("Unable to query {} due to {}", new Object[] {original, e});
+getLogger().error("Unable to query {} due to {}", new Object[] {original, e.getCause() == null ? e : e.getCause()});

Review comment: Yeah unfortunately for this processor, when an Exception is thrown it's kind of difficult to figure out without the logs. But if we were to unwrap the cause, that could make even the logs confusing, which is even worse.
[GitHub] [nifi] turcsanyip commented on a change in pull request #4273: NIFI-7446: Fail when the specified path is a directory in FetchAzureDataLakeStorage
turcsanyip commented on a change in pull request #4273: URL: https://github.com/apache/nifi/pull/4273#discussion_r426847987

## File path: nifi-nar-bundles/nifi-azure-bundle/nifi-azure-processors/src/main/java/org/apache/nifi/processors/azure/storage/FetchAzureDataLakeStorage.java
## @@ -67,6 +67,10 @@ public void onTrigger(ProcessContext context, ProcessSession session) throws Pro
 final DataLakeDirectoryClient directoryClient = dataLakeFileSystemClient.getDirectoryClient(directory);
 final DataLakeFileClient fileClient = directoryClient.getFileClient(fileName);
+if (fileClient.getProperties().isDirectory()) {

Review comment: @pgyori, @MuazmaZ Thanks for the clarification and the feedback. Then it is fine as it is now. LGTM from my side. @MuazmaZ Did you manage to make it work on your side or still NoSuchMethodError?
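The guard discussed above fails the flow file early when the configured path resolves to a directory rather than a file. A minimal, self-contained sketch of that fail-fast pattern follows; `RemoteEntry` and `FetchGuard` are hypothetical stand-ins, not the Azure Data Lake client API:

```java
// Illustrative only: a stand-in for the "fail when the path is a directory"
// guard added in NIFI-7446. RemoteEntry mimics the properties object a
// storage client would return for a path.
class RemoteEntry {
    private final boolean directory;

    RemoteEntry(boolean directory) { this.directory = directory; }

    boolean isDirectory() { return directory; }
}

class FetchGuard {
    // Throws before any download is attempted if the entry is a directory,
    // so the processor can route the flow file to its failure relationship.
    static void requireFile(RemoteEntry entry, String path) {
        if (entry.isDirectory()) {
            throw new IllegalArgumentException(
                    "Path '" + path + "' points to a directory, not a file");
        }
    }
}
```

Checking before the transfer starts keeps the error message specific (directory vs. missing file) instead of surfacing a confusing download failure later.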
[GitHub] [nifi] pcgrenier commented on a change in pull request #4280: NIFI-7462: QueryRecord cast functions for Choice datatypes
pcgrenier commented on a change in pull request #4280: URL: https://github.com/apache/nifi/pull/4280#discussion_r426841301

## File path: nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/QueryRecord.java
## @@ -396,7 +396,7 @@ public void process(final OutputStream out) throws IOException {
 session.remove(createdFlowFiles);
 session.transfer(original, REL_FAILURE);
 } catch (final Exception e) {
-getLogger().error("Unable to query {} due to {}", new Object[] {original, e});
+getLogger().error("Unable to query {} due to {}", new Object[] {original, e.getCause() == null ? e : e.getCause()});

Review comment: Yeah, this one was a strange one. Since the calcite cast function throwing the "cannot handle Object" exception was getting trapped here and the message was super vague, I had to dig into the app logs to see the stack trace.
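The diff under discussion peels exactly one wrapper (`e.getCause() == null ? e : e.getCause()`). For comparison, a common generalization walks the whole cause chain to the root; this is a generic sketch, not code from the PR:

```java
final class RootCause {
    private RootCause() {
    }

    // Walk the cause chain to its end. The self-reference check guards
    // against the (legal but unusual) case of a cycle in the chain.
    static Throwable unwrap(Throwable t) {
        Throwable current = t;
        while (current.getCause() != null && current.getCause() != current) {
            current = current.getCause();
        }
        return current;
    }
}
```

The single-level unwrap in the PR is a deliberate middle ground: it surfaces the Calcite error hidden one layer down without discarding all of the wrapping context, which (as noted in the review) full unwrapping could make the logs harder to correlate with the stack trace.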
[jira] [Updated] (NIFI-6497) Allow FreeFormTextRecordSetWriter to access FlowFile Attributes
[ https://issues.apache.org/jira/browse/NIFI-6497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Payne updated NIFI-6497: - Fix Version/s: 1.12.0 Resolution: Fixed Status: Resolved (was: Patch Available) > Allow FreeFormTextRecordSetWriter to access FlowFile Attributes > --- > > Key: NIFI-6497 > URL: https://issues.apache.org/jira/browse/NIFI-6497 > Project: Apache NiFi > Issue Type: Improvement >Reporter: DamienDEOM >Assignee: Matt Burgess >Priority: Major > Fix For: 1.12.0 > > Time Spent: 1h > Remaining Estimate: 0h > > > I'm trying to convert json records to database insert statements using the > Splitrecords processor > To do so, I use FreeFormTextRecordSetWriter controller with following text: > {{INSERT INTO p17128.bookmark_users values ('${username}', > '${firstname:urlEncode()}', '${user_id}', '${accountnumber}', > '${lastname:urlEncode()}', '${nominal_time}'}}) > The resulting statement values are valid for all fields contained in Record > reader. > Now I'd like to add a field that is a flowfile attribute ( ${nominal_time} ), > but I always get an empty string in the output.
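The behavior requested in this ticket is that `${name}` references in the free-form template resolve against record fields first and then fall back to FlowFile attributes. A toy substitution sketch illustrates that lookup order; this is illustrative only, with hypothetical names, and NiFi's real Expression Language is far more capable (functions like `urlEncode()`, chaining, etc.):

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Toy ${name} substitution: record fields win, FlowFile attributes are the
// fallback (the behavior NIFI-6497 asked for), then empty string.
class ToyTemplate {
    private static final Pattern VAR = Pattern.compile("\\$\\{([^}]+)}");

    static String render(String template, Map<String, String> recordFields,
                         Map<String, String> flowFileAttributes) {
        Matcher m = VAR.matcher(template);
        StringBuffer out = new StringBuffer();
        while (m.find()) {
            String key = m.group(1);
            String value = recordFields.getOrDefault(key,
                    flowFileAttributes.getOrDefault(key, ""));
            // quoteReplacement prevents '$' or '\' in values from being
            // interpreted as regex group references.
            m.appendReplacement(out, Matcher.quoteReplacement(value));
        }
        m.appendTail(out);
        return out.toString();
    }
}
```

Before the fix, the attribute map was effectively invisible to the writer, which is why `${nominal_time}` rendered as an empty string in the reporter's INSERT statements.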
[GitHub] [nifi] markap14 commented on pull request #4275: NIFI-6497: Allow FreeFormTextRecordSetWriter to access FlowFile Attributes
markap14 commented on pull request #4275: URL: https://github.com/apache/nifi/pull/4275#issuecomment-630383140 Thanks @mattyb149 . Code looks good to me, all tests pass. And thanks @ottobackwards for verifying behavior! +1 have merged to master.
[jira] [Commented] (NIFI-6497) Allow FreeFormTextRecordSetWriter to access FlowFile Attributes
[ https://issues.apache.org/jira/browse/NIFI-6497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17110547#comment-17110547 ] ASF subversion and git services commented on NIFI-6497: --- Commit a3cc2c58ff05e8d0008a5953c976d7ae3169c2d7 in nifi's branch refs/heads/master from Matt Burgess [ https://gitbox.apache.org/repos/asf?p=nifi.git;h=a3cc2c5 ] NIFI-6497: Allow FreeFormTextRecordSetWriter to access FlowFile Attributes This closes #4275. Signed-off-by: Mark Payne
[GitHub] [nifi] markap14 closed pull request #4275: NIFI-6497: Allow FreeFormTextRecordSetWriter to access FlowFile Attributes
markap14 closed pull request #4275: URL: https://github.com/apache/nifi/pull/4275
[jira] [Updated] (NIFI-7437) UI is slow when nifi.analytics.predict.enabled is true
[ https://issues.apache.org/jira/browse/NIFI-7437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt Burgess updated NIFI-7437: --- Fix Version/s: 1.12.0 Resolution: Fixed Status: Resolved (was: Patch Available) > UI is slow when nifi.analytics.predict.enabled is true > -- > > Key: NIFI-7437 > URL: https://issues.apache.org/jira/browse/NIFI-7437 > Project: Apache NiFi > Issue Type: Bug > Components: Core UI, Extensions >Affects Versions: 1.10.0, 1.11.4 > Environment: Java11, CentOS8 >Reporter: Dmitry Ibragimov >Assignee: Yolanda M. Davis >Priority: Critical > Labels: features, performance > Fix For: 1.12.0 > > Time Spent: 1h 20m > Remaining Estimate: 0h > > We faced an issue when nifi.analytics.predict.enabled is true after a cluster > upgrade to 1.11.4 > We have about 4000 processors in the development environment, but most of them are > in a disabled state: 256 running, 1263 stopped, 2543 disabled > After upgrading from 1.9.2 to 1.11.4 we decided to test the back-pressure > prediction feature and enable it in configuration: > {code:java} > nifi.analytics.predict.enabled=true > nifi.analytics.predict.interval=3 mins > nifi.analytics.query.interval=5 mins > nifi.analytics.connection.model.implementation=org.apache.nifi.controller.status.analytics.models.OrdinaryLeastSquares > nifi.analytics.connection.model.score.name=rSquared > nifi.analytics.connection.model.score.threshold=.90 > {code} > And we faced terrible UI performance degradation. The root page opens in 20 > seconds instead of 200-500ms. About ~100 times slower. I've tested it with > different environments centos7/8, java8/11, clustered secured, clustered > unsecured, standalone unsecured - all the same. 
> In debug log for ThreadPoolRequestReplicator: > {code:java} > 2020-05-09 08:03:34,459 DEBUG [Replicate Request Thread-2] > o.a.n.c.c.h.r.ThreadPoolRequestReplicator For GET > /nifi-api/flow/process-groups/root (Request ID > c144196f-d4cb-4053-8828-70e06f7c5100), minimum response time = 19548, max = > 20625, average = 20161.0 ms > 2020-05-09 08:03:34,459 DEBUG [Replicate Request Thread-2] > o.a.n.c.c.h.r.ThreadPoolRequestReplicator Node Responses for GET > /nifi-api/flow/process-groups/root (Request ID > c144196f-d4cb-4053-8828-70e06f7c5100): > newnifi01:8080: 19548 millis > newnifi02:8080: 20625 millis > newnifi03:8080: 20310 millis{code} > More deep debug: > > {code:java} > 2020-05-09 10:31:13,252 DEBUG [NiFi Web Server-21] > org.eclipse.jetty.server.HttpChannel REQUEST for > //newnifi01:8080/nifi-api/flow/process-groups/root on > HttpChannelOverHttp@68d3e945{r=1,c=false,c=false/false,a=IDLE,uri=//newnifi01:8080/nifi-api/flow/process-groups/root,age=0} > GET //newnifi01:8080/nifi-api/flow/process-groups/root HTTP/1.1 > Host: newnifi01:8080 > ... > 2020-05-09 10:31:13,256 DEBUG [NiFi Web Server-21] > o.a.n.c.s.a.ConnectionStatusAnalytics Model is not valid for calculating time > back pressure by content size in bytes. Returning -1 > 2020-05-09 10:31:13,257 DEBUG [NiFi Web Server-21] > o.a.n.c.s.a.ConnectionStatusAnalytics Model is not valid for calculating time > to back pressure by object count. Returning -1 > 2020-05-09 10:31:13,257 DEBUG [NiFi Web Server-21] > o.a.n.c.s.a.ConnectionStatusAnalytics Model is not valid for predicting > content size in bytes for next interval. Returning -1 > 2020-05-09 10:31:13,257 DEBUG [NiFi Web Server-21] > o.a.n.c.s.a.ConnectionStatusAnalytics Model is not valid for predicting > object count for next interval. Returning -1 > 2020-05-09 10:31:13,258 DEBUG [NiFi Web Server-21] > o.a.n.c.s.a.ConnectionStatusAnalytics Model is not valid for predicting > object count for next interval. 
Returning -1 > 2020-05-09 10:31:13,259 DEBUG [NiFi Web Server-21] > o.a.n.c.s.a.ConnectionStatusAnalytics Model is not valid for predicting > content size in bytes for next interval. Returning -1 > 2020-05-09 10:31:13,259 DEBUG [NiFi Web Server-21] > o.a.n.c.s.a.ConnectionStatusAnalytics Prediction model for connection id > eb602b2a-016f-1000--2767192a: nextIntervalPercentageUseCount=-1 > 2020-05-09 10:31:13,259 DEBUG [NiFi Web Server-21] > o.a.n.c.s.a.ConnectionStatusAnalytics Prediction model for connection id > eb602b2a-016f-1000--2767192a: nextIntervalBytes=-1 > 2020-05-09 10:31:13,259 DEBUG [NiFi Web Server-21] > o.a.n.c.s.a.ConnectionStatusAnalytics Prediction model for connection id > eb602b2a-016f-1000--2767192a: timeToBytesBackpressureMillis=-1 > 2020-05-09 10:31:13,259 DEBUG [NiFi Web Server-21] > o.a.n.c.s.a.ConnectionStatusAnalytics Prediction model for connection id > eb602b2a-016f-1000--2767192a: nextIntervalCount=-1 > 2020-05-09 10:31:13,259 DEBUG [NiFi Web Server-21] >
[jira] [Commented] (NIFI-7437) UI is slow when nifi.analytics.predict.enabled is true
[ https://issues.apache.org/jira/browse/NIFI-7437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17110534#comment-17110534 ] ASF subversion and git services commented on NIFI-7437: --- Commit 13418ccb91e8f41e98287d7839d12500c3734052 in nifi's branch refs/heads/master from Yolanda M. Davis [ https://gitbox.apache.org/repos/asf?p=nifi.git;h=13418cc ] NIFI-7437 - created separate thread for preloading predictions, refactors for performance NIFI-7437 - reduced scheduler to 15 seconds, change cache to expire after no access vs expire after write Signed-off-by: Matthew Burgess This closes #4274
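Part of the fix above switched the prediction cache from expire-after-write to expire-after-access semantics, so entries that the UI keeps reading stay warm while idle ones age out. The following is a minimal, hand-rolled illustration of that semantic difference, not NiFi's actual implementation (which uses a real cache library); time is passed in explicitly to keep the sketch deterministic:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal expire-after-access cache: an entry's clock resets on every read,
// so frequently requested predictions stay cached while idle ones expire.
class AccessExpiringCache<K, V> {
    private static final class Entry<V> {
        volatile long lastAccess;
        final V value;

        Entry(V value, long now) {
            this.value = value;
            this.lastAccess = now;
        }
    }

    private final Map<K, Entry<V>> map = new ConcurrentHashMap<>();
    private final long ttlMillis;

    AccessExpiringCache(long ttlMillis) {
        this.ttlMillis = ttlMillis;
    }

    void put(K key, V value, long now) {
        map.put(key, new Entry<>(value, now));
    }

    V get(K key, long now) {
        Entry<V> e = map.get(key);
        if (e == null) {
            return null;
        }
        if (now - e.lastAccess > ttlMillis) {
            map.remove(key);   // idle too long: expire
            return null;
        }
        e.lastAccess = now;    // reading refreshes the entry
        return e.value;
    }
}
```

Combined with a background thread that preloads predictions on a schedule, requests for `/flow/process-groups/root` read warm cache entries instead of computing models inline, which is what removed the ~20-second page loads.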
[jira] [Commented] (NIFI-7437) UI is slow when nifi.analytics.predict.enabled is true
[ https://issues.apache.org/jira/browse/NIFI-7437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17110535#comment-17110535 ] ASF subversion and git services commented on NIFI-7437: --- Commit 13418ccb91e8f41e98287d7839d12500c3734052 in nifi's branch refs/heads/master from Yolanda M. Davis [ https://gitbox.apache.org/repos/asf?p=nifi.git;h=13418cc ] NIFI-7437 - created separate thread for preloading predictions, refactors for performance NIFI-7437 - reduced scheduler to 15 seconds, change cache to expire after no access vs expire after write Signed-off-by: Matthew Burgess This closes #4274 > UI is slow when nifi.analytics.predict.enabled is true > -- > > Key: NIFI-7437 > URL: https://issues.apache.org/jira/browse/NIFI-7437 > Project: Apache NiFi > Issue Type: Bug > Components: Core UI, Extensions >Affects Versions: 1.10.0, 1.11.4 > Environment: Java11, CentOS8 >Reporter: Dmitry Ibragimov >Assignee: Yolanda M. Davis >Priority: Critical > Labels: features, performance > Time Spent: 1h 20m > Remaining Estimate: 0h > > We faced with issue when nifi.analytics.predict.enabled is true after cluster > upgrade to 1.11.4 > We have about 4000 processors in development enviroment, but most of them is > in disabled state: 256 running, 1263 stopped, 2543 disabled > After upgrade version from 1.9.2 to 1.11.4 we deicded to test back-pressure > prediction feature and enable it in configuration: > {code:java} > nifi.analytics.predict.enabled=true > nifi.analytics.predict.interval=3 mins > nifi.analytics.query.interval=5 mins > nifi.analytics.connection.model.implementation=org.apache.nifi.controller.status.analytics.models.OrdinaryLeastSquares > nifi.analytics.connection.model.score.name=rSquared > nifi.analytics.connection.model.score.threshold=.90 > {code} > And we faced with terrible UI performance degradataion. Root page opens in 20 > seconds instead of 200-500ms. About ~100 times slower. 
I've tested it with > different environments centos7/8, java8/11, clustered secured, clustered > unsecured, standalone unsecured - all the same. > In debug log for ThreadPoolRequestReplicator: > {code:java} > 2020-05-09 08:03:34,459 DEBUG [Replicate Request Thread-2] > o.a.n.c.c.h.r.ThreadPoolRequestReplicator For GET > /nifi-api/flow/process-groups/root (Request ID > c144196f-d4cb-4053-8828-70e06f7c5100), minimum response time = 19548, max = > 20625, average = 20161.0 ms > 2020-05-09 08:03:34,459 DEBUG [Replicate Request Thread-2] > o.a.n.c.c.h.r.ThreadPoolRequestReplicator Node Responses for GET > /nifi-api/flow/process-groups/root (Request ID > c144196f-d4cb-4053-8828-70e06f7c5100): > newnifi01:8080: 19548 millis > newnifi02:8080: 20625 millis > newnifi03:8080: 20310 millis{code} > Deeper debug: > > {code:java} > 2020-05-09 10:31:13,252 DEBUG [NiFi Web Server-21] > org.eclipse.jetty.server.HttpChannel REQUEST for > //newnifi01:8080/nifi-api/flow/process-groups/root on > HttpChannelOverHttp@68d3e945{r=1,c=false,c=false/false,a=IDLE,uri=//newnifi01:8080/nifi-api/flow/process-groups/root,age=0} > GET //newnifi01:8080/nifi-api/flow/process-groups/root HTTP/1.1 > Host: newnifi01:8080 > ... > 2020-05-09 10:31:13,256 DEBUG [NiFi Web Server-21] > o.a.n.c.s.a.ConnectionStatusAnalytics Model is not valid for calculating time > back pressure by content size in bytes. Returning -1 > 2020-05-09 10:31:13,257 DEBUG [NiFi Web Server-21] > o.a.n.c.s.a.ConnectionStatusAnalytics Model is not valid for calculating time > to back pressure by object count. Returning -1 > 2020-05-09 10:31:13,257 DEBUG [NiFi Web Server-21] > o.a.n.c.s.a.ConnectionStatusAnalytics Model is not valid for predicting > content size in bytes for next interval. Returning -1 > 2020-05-09 10:31:13,257 DEBUG [NiFi Web Server-21] > o.a.n.c.s.a.ConnectionStatusAnalytics Model is not valid for predicting > object count for next interval. 
Returning -1 > 2020-05-09 10:31:13,258 DEBUG [NiFi Web Server-21] > o.a.n.c.s.a.ConnectionStatusAnalytics Model is not valid for predicting > object count for next interval. Returning -1 > 2020-05-09 10:31:13,259 DEBUG [NiFi Web Server-21] > o.a.n.c.s.a.ConnectionStatusAnalytics Model is not valid for predicting > content size in bytes for next interval. Returning -1 > 2020-05-09 10:31:13,259 DEBUG [NiFi Web Server-21] > o.a.n.c.s.a.ConnectionStatusAnalytics Prediction model for connection id > eb602b2a-016f-1000--2767192a: nextIntervalPercentageUseCount=-1 > 2020-05-09 10:31:13,259 DEBUG [NiFi Web Server-21] > o.a.n.c.s.a.ConnectionStatusAnalytics Prediction model for connection id > eb602b2a-016f-1000--2767192a: nextIntervalBytes=-1 > 2020-05-09 10:31:13,259 DEBUG [NiFi Web Server-21] >
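The fix referenced in the commit above moved prediction loading onto a separate background thread and switched the prediction cache from expire-after-write to expire-after-access. The following plain-Java sketch illustrates only the difference between those two eviction policies; the class and field names are illustrative stand-ins, not NiFi code (NiFi delegates this to a caching library rather than tracking timestamps by hand):

```java
// Plain-Java sketch (illustrative names, not NiFi classes) of the two
// eviction policies mentioned in the commit message: expire-after-write
// evicts a fixed time after creation regardless of reads, while
// expire-after-access keeps frequently-read entries (hot predictions) alive.
final class ExpiringEntry {
    final String value;
    final long writtenAt;    // set once at creation
    long lastAccessedAt;     // refreshed on every read

    ExpiringEntry(String value, long now) {
        this.value = value;
        this.writtenAt = now;
        this.lastAccessedAt = now;
    }
}

public class CacheExpiryDemo {
    static final long TTL_MS = 100;

    // expire-after-write: age is measured from creation; reads don't help
    static boolean expiredAfterWrite(ExpiringEntry e, long now) {
        return now - e.writtenAt > TTL_MS;
    }

    // expire-after-access: every read pushes the expiry deadline out
    static boolean expiredAfterAccess(ExpiringEntry e, long now) {
        return now - e.lastAccessedAt > TTL_MS;
    }

    public static void main(String[] args) {
        ExpiringEntry e = new ExpiringEntry("prediction", 0);
        e.lastAccessedAt = 80;  // entry read at t=80
        // at t=160: 160ms since write (expired), but only 80ms since access
        System.out.println(expiredAfterWrite(e, 160));   // true
        System.out.println(expiredAfterAccess(e, 160));  // false
    }
}
```

Under expire-after-access, predictions the UI keeps requesting stay cached, which is why the change helps the slow root-page load described in the issue.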
[GitHub] [nifi] mattyb149 closed pull request #4274: NIFI-7437 - created separate thread for preloading predictions, refac…
mattyb149 closed pull request #4274: URL: https://github.com/apache/nifi/pull/4274 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [nifi] markap14 commented on pull request #4280: NIFI-7462: QueryRecord cast functions for Choice datatypes
markap14 commented on pull request #4280: URL: https://github.com/apache/nifi/pull/4280#issuecomment-630356879 @pcgrenier I created a new PR based off of yours - https://github.com/apache/nifi/pull/4282. Can you review that and make sure that it gives you what you need? This will allow the table schema to be more intelligent for CHOICE types and in cases like this it will allow you to use proper SQL CAST functions and also allows you to perform functions like SUM() without the need to explicitly cast them, which is important if your data does have mixed data types.
[GitHub] [nifi] markap14 opened a new pull request #4282: NIFI-7462: Update to allow FlowFile Table's schema to be more intelligent when using CHOICE types
markap14 opened a new pull request #4282: URL: https://github.com/apache/nifi/pull/4282 Thank you for submitting a contribution to Apache NiFi. Please provide a short description of the PR here: Description of PR _Enables X functionality; fixes bug NIFI-XXXX._ In order to streamline the review of the contribution we ask you to ensure the following steps have been taken: ### For all changes: - [ ] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? - [ ] Does your PR title start with **NIFI-XXXX** where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character. - [ ] Has your PR been rebased against the latest commit within the target branch (typically `master`)? - [ ] Is your initial contribution a single, squashed commit? _Additional commits in response to PR reviewer feedback should be made on this branch and pushed to allow change tracking. Do not `squash` or use `--force` when pushing to allow for clean monitoring of changes._ ### For code changes: - [ ] Have you ensured that the full suite of tests is executed via `mvn -Pcontrib-check clean install` at the root `nifi` folder? - [ ] Have you written or updated unit tests to verify your changes? - [ ] Have you verified that the full build is successful on both JDK 8 and JDK 11? - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the `LICENSE` file, including the main `LICENSE` file under `nifi-assembly`? - [ ] If applicable, have you updated the `NOTICE` file, including the main `NOTICE` file found under `nifi-assembly`? - [ ] If adding new Properties, have you added `.displayName` in addition to `.name` (programmatic access) for each of the new properties? 
### For documentation related changes: - [ ] Have you ensured that format looks appropriate for the output in which it is rendered? ### Note: Please ensure that once the PR is submitted, you check GitHub Actions CI for build issues and submit an update to your PR as soon as possible.
[GitHub] [nifi] MuazmaZ commented on a change in pull request #4273: NIFI-7446: Fail when the specified path is a directory in FetchAzureDataLakeStorage
MuazmaZ commented on a change in pull request #4273: URL: https://github.com/apache/nifi/pull/4273#discussion_r426801050 ## File path: nifi-nar-bundles/nifi-azure-bundle/nifi-azure-processors/src/main/java/org/apache/nifi/processors/azure/storage/FetchAzureDataLakeStorage.java ## @@ -67,6 +67,10 @@ public void onTrigger(ProcessContext context, ProcessSession session) throws Pro final DataLakeDirectoryClient directoryClient = dataLakeFileSystemClient.getDirectoryClient(directory); final DataLakeFileClient fileClient = directoryClient.getFileClient(fileName); +if (fileClient.getProperties().isDirectory()) { Review comment: I agree @pgyori about the flow, and for large flows that would mean higher memory consumption. Also, a valid flowfile with empty content could be a real scenario: I have seen customers whose processes generate empty files to trigger other flows.
[GitHub] [nifi] markap14 commented on a change in pull request #4280: NIFI-7462: QueryRecord cast functions for Choice datatypes
markap14 commented on a change in pull request #4280: URL: https://github.com/apache/nifi/pull/4280#discussion_r426796947 ## File path: nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/QueryRecord.java ## @@ -396,7 +396,7 @@ public void process(final OutputStream out) throws IOException { session.remove(createdFlowFiles); session.transfer(original, REL_FAILURE); } catch (final Exception e) { -getLogger().error("Unable to query {} due to {}", new Object[] {original, e}); +getLogger().error("Unable to query {} due to {}", new Object[] {original, e.getCause() == null ? e : e.getCause()}); Review comment: I don't think we want to be getting the 'cause' for the general Exception - we should log the exception itself. We get the cause for SQLException only because we know that it's generally wrapping another exception.
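The distinction in the review comment — unwrap the cause only for `SQLException` (which typically wraps the real failure) while logging any other exception directly — can be sketched in isolation. The `toLog` helper below is an illustrative stand-in, not NiFi code:

```java
import java.sql.SQLException;

// Sketch of the logging pattern discussed above: SQLException is usually a
// wrapper, so its cause is the interesting exception; everything else is
// logged as-is rather than blindly unwrapped.
public class ErrorUnwrapDemo {
    static Throwable toLog(Exception e) {
        if (e instanceof SQLException && e.getCause() != null) {
            return e.getCause(); // SQLException generally wraps the real error
        }
        return e; // general exceptions: log the exception itself
    }

    public static void main(String[] args) {
        Exception plain = new IllegalStateException("boom");
        Exception wrapped = new SQLException("query failed", new NumberFormatException("bad cast"));
        System.out.println(toLog(plain).getClass().getSimpleName());   // IllegalStateException
        System.out.println(toLog(wrapped).getClass().getSimpleName()); // NumberFormatException
    }
}
```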
[jira] [Assigned] (NIFI-7465) SNMPTrap
[ https://issues.apache.org/jira/browse/NIFI-7465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Stefan reassigned NIFI-7465: Assignee: Stefan > SNMPTrap > > > Key: NIFI-7465 > URL: https://issues.apache.org/jira/browse/NIFI-7465 > Project: Apache NiFi > Issue Type: Improvement >Reporter: Stefan >Assignee: Stefan >Priority: Trivial > Attachments: snmptrap.patch > > Original Estimate: 24h > Remaining Estimate: 24h > > NiFi has two excellent SNMP processors SetSNMP and GetSNMP. However, an SNMP > Trap cannot be sent to a monitoring tool such as SiteScope for example. > Based on the existing two processors, using 99% of existing code, I managed to > send an SNMP Trap by creating a new SNMPTrap processor. > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [nifi] markap14 commented on pull request #4280: NIFI-7462: QueryRecord cast functions for Choice datatypes
markap14 commented on pull request #4280: URL: https://github.com/apache/nifi/pull/4280#issuecomment-630338743 Hey @pcgrenier thanks for putting this PR up. I do think that it addresses the case when all of the values coming in have the same type. But if your schema says that field 'num' can be either an int or a float, this solution forces the user to cast all values to either int or float. It stands to reason that with such a schema, you could have a mix of both ints and floats. And depending on the query, we may want to keep these values as such. I also would like to avoid adding our own custom "cast" functions when cast functions already exist in SQL. I definitely feel like that's a red flag that we're not doing something quite right. So I have a proposal that I think addresses this well but does so very differently. In the `FlowFileTable` class, we define the data types that Calcite should use internally for each of the types. For the CHOICE data type, we just blindly return `typeFactory.createJavaType(Object.class)` and that's why it fails - you can't cast an Object to a float, double, int, etc. So I think the better solution is to refine what we return for CHOICE types. Specifically, if we have a choice between two types of numbers where one is wider than the other, we should return the wider type (e.g., for a double/float we should return double). For two numbers where one is not wider than the other (e.g., double/int) we should return a String, as String is considered wider than both of these. For other cases (e.g., double/Record) we should return Object as we do now.
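The widening rule proposed above can be sketched standalone. The type names below are simplified string stand-ins for NiFi's `RecordFieldType`, and the widening table contains only the example pairs from the comment, not a complete type lattice:

```java
import java.util.Map;
import java.util.Set;

// Hedged sketch of the CHOICE-resolution rule from the comment: if one type
// in the choice is strictly wider, use it; two numbers where neither is wider
// fall back to String (wider than any number); everything else stays Object.
public class ChoiceWideningDemo {
    // explicit "is strictly wider than" pairs from the comment; a real
    // implementation would derive this from the record type system
    private static final Map<String, Set<String>> WIDER_THAN = Map.of(
            "DOUBLE", Set.of("FLOAT"),
            "LONG", Set.of("INT"));

    static boolean isNumeric(String t) {
        return Set.of("INT", "LONG", "FLOAT", "DOUBLE").contains(t);
    }

    static String resolveChoice(String a, String b) {
        if (WIDER_THAN.getOrDefault(a, Set.of()).contains(b)) return a;
        if (WIDER_THAN.getOrDefault(b, Set.of()).contains(a)) return b;
        // neither is wider: String can represent any number exactly
        if (isNumeric(a) && isNumeric(b)) return "STRING";
        return "OBJECT"; // e.g. double/Record: no useful common SQL type
    }

    public static void main(String[] args) {
        System.out.println(resolveChoice("DOUBLE", "FLOAT"));  // DOUBLE
        System.out.println(resolveChoice("DOUBLE", "INT"));    // STRING
        System.out.println(resolveChoice("DOUBLE", "RECORD")); // OBJECT
    }
}
```

Returning a concrete SQL type for numeric choices is what lets Calcite apply standard `CAST` and aggregate functions like `SUM()` without custom cast functions.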
[GitHub] [nifi] pgyori commented on a change in pull request #4273: NIFI-7446: Fail when the specified path is a directory in FetchAzureDataLakeStorage
pgyori commented on a change in pull request #4273: URL: https://github.com/apache/nifi/pull/4273#discussion_r426788175 ## File path: nifi-nar-bundles/nifi-azure-bundle/nifi-azure-processors/src/main/java/org/apache/nifi/processors/azure/storage/FetchAzureDataLakeStorage.java ## @@ -67,6 +67,10 @@ public void onTrigger(ProcessContext context, ProcessSession session) throws Pro final DataLakeDirectoryClient directoryClient = dataLakeFileSystemClient.getDirectoryClient(directory); final DataLakeFileClient fileClient = directoryClient.getFileClient(fileName); +if (fileClient.getProperties().isDirectory()) { Review comment: Unfortunately things get quite complicated in that case. If we call isDirectory() after session.write(), then the flowfile is already overwritten with the empty content, which means that after throwing the ProcessException, the flowfile that goes to the error output has no content (instead of the content of the original input flowfile). To avoid losing the content of the input flowfile, we would need to copy and store its content (before calling session.write()), and if isDirectory() returns true, we would need to load back this content to the original flowfile before throwing the exception. This would result in higher memory consumption in case of large input flowfiles.
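The ordering problem described above is why the diff checks `isDirectory()` before `session.write()`: validating first means a failure leaves the incoming FlowFile's content untouched. The sketch below illustrates that check-before-write pattern; the `Target` interface is a hypothetical stand-in for the Azure `DataLakeFileClient`, not a real NiFi or Azure API:

```java
// Hypothetical stand-in for the remote file client (not the Azure SDK).
interface Target {
    boolean isDirectory();
    byte[] read();
}

public class FetchOrderingDemo {
    // Validate before writing: if the path is a directory, the caller's
    // original content is still intact when the exception propagates.
    static byte[] fetchInto(Target target, byte[] flowFileContent) {
        if (target.isDirectory()) {
            throw new IllegalArgumentException("path points to a directory, not a file");
        }
        return target.read(); // only a real file replaces the FlowFile content
    }

    public static void main(String[] args) {
        Target directory = new Target() {
            public boolean isDirectory() { return true; }
            public byte[] read() { return new byte[0]; }
        };
        byte[] content = "original".getBytes();
        try {
            content = fetchInto(directory, content);
        } catch (IllegalArgumentException e) {
            // fetch failed before any write: content routed to failure intact
        }
        System.out.println(new String(content)); // prints "original"
    }
}
```

Checking first avoids the copy-and-restore workaround (and its memory cost for large FlowFiles) that would be needed if the check ran after the write.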
[GitHub] [nifi-minifi-cpp] phrocker commented on pull request #781: MINIFICPP-1214: Converts H2O Processors to use ALv2 compliant H20-3 library
phrocker commented on pull request #781: URL: https://github.com/apache/nifi-minifi-cpp/pull/781#issuecomment-630321686 > I have a concern regarding the contribution of these processors. Myself and probably most minifi c++ contributors are unfamiliar with H2O, which makes it problematic to maintain the new processors. > > The first problem I ran into while reviewing #784 is that I couldn't easily verify whether those changes would break these new processors. > > This problem (and others) could be mostly solved if you could submit integration tests covering these processors. Could you do that? It took me a while to get around to testing this. My hope on the original merge would be that I would create a ticket and create a test for this, but I'm coming around to the fact that I'm too busy to do such things. I agree with what @szaszm is saying, especially in regards to having regression tests that would make this easier to merge. I think this solves the licensing issue; however, some tests would be super useful if possible. pytest-mock would be a useful testing framework to explore and should make this a pretty low water mark to solve this.
[jira] [Created] (NIFIREG-389) libnpx not found
Scott Aslan created NIFIREG-389: --- Summary: libnpx not found Key: NIFIREG-389 URL: https://issues.apache.org/jira/browse/NIFIREG-389 Project: NiFi Registry Issue Type: Bug Reporter: Scott Aslan Assignee: Scott Aslan Fix For: 1.0.0 Users who do not have npx installed globally can run into a build issue.
[GitHub] [nifi-minifi-cpp] szaszm commented on pull request #781: MINIFICPP-1214: Converts H2O Processors to use ALv2 compliant H20-3 library
szaszm commented on pull request #781: URL: https://github.com/apache/nifi-minifi-cpp/pull/781#issuecomment-630301959 I have a concern regarding the contribution of these processors. Myself and probably most minifi c++ contributors are unfamiliar with H2O, which makes it problematic to maintain the new processors. The first problem I ran into while reviewing #784 is that I couldn't easily verify whether those changes would break these new processors. This problem (and others) could be mostly solved if you could submit integration tests covering these processors. Could you do that?
[GitHub] [nifi] jfrazee commented on a change in pull request #4265: NIFI-7434: Endpoint suffix property in AzureStorageAccount NIFI processors
jfrazee commented on a change in pull request #4265: URL: https://github.com/apache/nifi/pull/4265#discussion_r426742656 ## File path: nifi-nar-bundles/nifi-azure-bundle/nifi-azure-processors/src/main/java/org/apache/nifi/processors/azure/storage/utils/AzureStorageUtils.java ## @@ -85,6 +85,22 @@ .sensitive(true) .build(); +public static final PropertyDescriptor ENDPOINT_SUFFIX = new PropertyDescriptor.Builder() +.name("storage-endpoint-suffix") +.displayName("Storage Endpoint Suffix") +.description( +"Storage accounts in public Azure always use a common FQDN suffix. " + +"Override this endpoint suffix with a different suffix in certain circumstances (like Azure Stack or non-public Azure regions). " + +"The preferred way is to configure them through a controller service specified in the Storage Credentials property. " + +"The controller service can provide a common/shared configuration for multiple/all Azure processors. Furthermore, the credentials " + +"can also be looked up dynamically with the 'Lookup' version of the service.") +.addValidator(StandardValidators.NON_EMPTY_VALIDATOR) Review comment: I think this is a good observation. So the question turns into whether to assume a full URL, just the suffix, or both (easy enough to detect the difference).
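One way to "detect the difference" mentioned above — accepting either a bare endpoint suffix or a full URL and normalizing both — can be sketched as follows. The `toSuffix` helper is hypothetical, not an `AzureStorageUtils` method, and the account-name stripping is a simplifying assumption:

```java
import java.net.URI;

// Hypothetical normalizer: a value containing "://" is treated as a full
// endpoint URL and reduced to its suffix; anything else is assumed to
// already be a bare suffix like "core.windows.net".
public class EndpointSuffixDemo {
    static String toSuffix(String value) {
        if (value.contains("://")) {
            String host = URI.create(value).getHost();
            // assumption: the first host label is the storage account name,
            // so everything after the first dot is the endpoint suffix
            int firstDot = host.indexOf('.');
            return host.substring(firstDot + 1);
        }
        return value; // already a bare suffix
    }

    public static void main(String[] args) {
        System.out.println(toSuffix("core.windows.net"));
        System.out.println(toSuffix("https://myaccount.blob.core.windows.net"));
        // -> core.windows.net
        // -> blob.core.windows.net
    }
}
```

Detecting the scheme separator keeps the property forgiving for users who paste a full endpoint URL from the Azure portal instead of just the suffix.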
[GitHub] [nifi] markap14 commented on pull request #4280: NIFI-7462: QueryRecord cast functions for Choice datatypes
markap14 commented on pull request #4280: URL: https://github.com/apache/nifi/pull/4280#issuecomment-630263219 Will review
[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #786: MINIFICPP-1225 - Fix flaky HTTP tests.
szaszm commented on a change in pull request #786: URL: https://github.com/apache/nifi-minifi-cpp/pull/786#discussion_r426653075 ## File path: libminifi/include/io/CRCStream.h ## @@ -145,17 +145,15 @@ class CRCStream : public BaseStream { protected: /** - * Creates a vector and returns the vector using the provided - * type name. + * Populates the vector using the provided type name. + * @param buf output buffer * @param t incoming object - * @returns vector. + * @returns number of bytes read. */ template - std::vector readBuffer(const K& t) { -std::vector buf; + int readBuffer(std::vector& buf, const K& t) { Review comment: This leaves the option of a new overload with proper error handling. If existing 3rd party extensions happened to use the old overload, it should work similarly, even if without error checking. If you have a better idea, feel free to propose it. I have a few more as well, but they also highly deviate from the style of the codebase with limited benefit.
[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #786: MINIFICPP-1225 - Fix flaky HTTP tests.
szaszm commented on a change in pull request #786: URL: https://github.com/apache/nifi-minifi-cpp/pull/786#discussion_r426648842 ## File path: libminifi/include/io/CRCStream.h ## @@ -145,17 +145,15 @@ class CRCStream : public BaseStream { protected: /** - * Creates a vector and returns the vector using the provided - * type name. + * Populates the vector using the provided type name. + * @param buf output buffer * @param t incoming object - * @returns vector. + * @returns number of bytes read. */ template - std::vector readBuffer(const K& t) { -std::vector buf; + int readBuffer(std::vector& buf, const K& t) { Review comment: In theory everything in libminifi headers is API. In practice we sometimes break non-core APIs if it fixes important issues or only breaks arcane uses. IIRC the stream APIs are considered "core" (important for 3rd party extension development), but @arpadboda will be able to confirm/correct me.
[jira] [Created] (MINIFICPP-1228) MergeContent doesn't respect min entry count.
Adam Debreceni created MINIFICPP-1228: - Summary: MergeContent doesn't respect min entry count. Key: MINIFICPP-1228 URL: https://issues.apache.org/jira/browse/MINIFICPP-1228 Project: Apache NiFi MiNiFi C++ Issue Type: Bug Reporter: Adam Debreceni The MergeContent processor's "Minimum Number of Entries" property is not taken into account when files are merged.
[GitHub] [nifi-minifi-cpp] arpadboda closed pull request #777: MINIFICPP-1216 - Controller Services Integration test is unstable
arpadboda closed pull request #777: URL: https://github.com/apache/nifi-minifi-cpp/pull/777
[GitHub] [nifi-minifi-cpp] arpadboda closed pull request #782: MINIFICPP-1217 - RPG should configure http client with reasonable tim…
arpadboda closed pull request #782: URL: https://github.com/apache/nifi-minifi-cpp/pull/782
[jira] [Updated] (NIFI-7465) SNMPTrap
[ https://issues.apache.org/jira/browse/NIFI-7465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Stefan updated NIFI-7465: - Attachment: snmptrap.patch Status: Patch Available (was: Open) > SNMPTrap > > > Key: NIFI-7465 > URL: https://issues.apache.org/jira/browse/NIFI-7465 > Project: Apache NiFi > Issue Type: Improvement >Reporter: Stefan >Priority: Trivial > Attachments: snmptrap.patch > > Original Estimate: 24h > Remaining Estimate: 24h > > NiFi has two excellent SNMP processors SetSNMP and GetSNMP. However, an SNMP > Trap cannot be sent to a monitoring tool such as SiteScope for example. > Based on the existing two processors, using 99% of existing code, I managed to > send an SNMP Trap by creating a new SNMPTrap processor. >
[jira] [Created] (NIFI-7465) SNMPTrap
Stefan created NIFI-7465: Summary: SNMPTrap Key: NIFI-7465 URL: https://issues.apache.org/jira/browse/NIFI-7465 Project: Apache NiFi Issue Type: Improvement Reporter: Stefan NiFi has two excellent SNMP processors SetSNMP and GetSNMP. However, an SNMP Trap cannot be sent to a monitoring tool such as SiteScope for example. Based on the existing two processors, using 99% of existing code, I managed to send an SNMP Trap by creating a new SNMPTrap processor.
[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a change in pull request #786: MINIFICPP-1225 - Fix flaky HTTP tests.
adamdebreceni commented on a change in pull request #786: URL: https://github.com/apache/nifi-minifi-cpp/pull/786#discussion_r426404826 ## File path: libminifi/include/io/CRCStream.h ## @@ -145,17 +145,15 @@ class CRCStream : public BaseStream { protected: /** - * Creates a vector and returns the vector using the provided - * type name. + * Populates the vector using the provided type name. + * @param buf output buffer * @param t incoming object - * @returns vector. + * @returns number of bytes read. */ template - std::vector readBuffer(const K& t) { -std::vector buf; + int readBuffer(std::vector& buf, const K& t) { Review comment: the problem is that every other read*/write* method conforms to the "return the number of read/wrote bytes or a negative number on error" convention; throwing would deviate from that significantly. Do we have a list of dependent projects, or a guide on what we consider part of the API?
[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a change in pull request #786: MINIFICPP-1225 - Fix flaky HTTP tests.
adamdebreceni commented on a change in pull request #786: URL: https://github.com/apache/nifi-minifi-cpp/pull/786#discussion_r426403568 ## File path: extensions/http-curl/tests/HTTPHandlers.h ## @@ -206,25 +207,36 @@ class FlowFileResponder : public CivetHandler { minifi::io::CivetStream civet_stream(conn); minifi::io::CRCStream < minifi::io::CivetStream > stream(_stream); uint32_t num_attributes; + int read; uint64_t total_size = 0; - total_size += stream.read(num_attributes); + read = stream.read(num_attributes); + if(!isServerRunning())return false; Review comment: apparently they do not segfault, but simply indicate error in their return value [mg_read](https://github.com/civetweb/civetweb/blob/master/docs/api/mg_read.md)