[jira] [Commented] (NIFI-8423) Timezone wrong in UI for an 8 node cluster
[ https://issues.apache.org/jira/browse/NIFI-8423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17327105#comment-17327105 ] Josef Zahner commented on NIFI-8423: All our data sources have timestamps which we have to modify (eg. localtime to UTC). So I'll try to isolate the custom / scripted processors. This will take a while as we have more than just a few of them. I'll give feedback later. Yes I can confirm that all servers use the same OS and the same JDK - we use ansible to deploy everything from the OS as well as NiFi. > Timezone wrong in UI for an 8 node cluster > -- > > Key: NIFI-8423 > URL: https://issues.apache.org/jira/browse/NIFI-8423 > Project: Apache NiFi > Issue Type: Bug > Components: Core UI >Affects Versions: 1.13.2 > Environment: 8 Node NiFi Cluster on CentOS 7 > OpenJDK 1.8.0_282 > Local timezone: Europe/Zurich (CEST or UTC+2h) >Reporter: Josef Zahner >Priority: Critical > Labels: centos, cluster, openjdk, timezone > Attachments: Screenshot 2021-04-20 at 14.24.17.png, Screenshot > 2021-04-20 at 14.34.36.png, Screenshot 2021-04-20 at 15.00.22.png, > image-2021-04-13-15-14-02-162.png, image-2021-04-13-15-14-06-930.png, > image-2021-04-13-15-14-56-690.png, manual_configured_timezone_gui_output.png, > nifi-app_log.png > > > We just upgraded to NiFi 1.13.2 and Java 1.8.0_282 > On our 8 node NiFi 1.13.2 cluster with timezone Europe/Zurich (CEST/CET), we > have the issue that the UI does display the correct timezone (CEST, so UTC > +2h), but in fact the time is displayed as UTC. NTP is enabled and working. > The OS configuration/location is everywhere the same (doesn't matter if > single or cluster NiFi). My tests below are all done at around 15:xx:xx local > time (CEST). > As you can see below, the timezone seems to be correct, but the time itself > within NiFi is 2h behind (so in fact UTC) compared to Windows. 
In earlier NiFi/Java versions it was enough to restart the cluster multiple times, but on the newest versions this doesn't help anymore. It shows CEST with the wrong time most of the time, or directly UTC. > !image-2021-04-13-15-14-06-930.png! > > The single NiFi instances and the 2 node clusters are always fine. The issue > exists only on our 8 node cluster. > NiFi single node screenshot, which is fine (CEST, so UTC + 2h): > !image-2021-04-13-15-14-02-162.png! > > If we set -Duser.timezone to "Europe/Zurich" in bootstrap.conf, the UI > shows no summer time, so only GMT+1 instead of GMT+2. Also not what we want. > {code:java} > java.arg.20=-Duser.timezone="Europe/Zurich"{code} > !image-2021-04-13-15-14-56-690.png! > > What Matt suggested below has been verified: all servers (single nodes as > well as clusters) are reporting the same time/timezone. > [https://community.cloudera.com/t5/Support-Questions/NiFi-clock-is-off-by-one-hour-daylight-savings-problem/td-p/192942] > > So the question remains: where does the UI time on a NiFi cluster come from, > and what could cause it to be wrong? Sometimes I get UTC, sometimes I get > CEST but the time is still UTC instead of CEST... I really need to have the > correct time in the UI as I don't know what the impact could be on our > dataflows. > > Any help would be really appreciated. -- This message was sent by Atlassian Jira (v8.3.4#803005)
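For context, the two-hour gap described in the ticket is exactly the CEST offset. Below is a minimal java.time sketch, independent of NiFi, showing how one and the same instant renders in UTC versus Europe/Zurich; a JVM whose default zone resolves incorrectly at startup would display the UTC wall-clock value while still labeling it CEST. The instant is chosen to match the reporter's ~15:xx CEST test window.

```java
import java.time.Instant;
import java.time.ZoneId;
import java.time.ZonedDateTime;

class ZoneOffsetDemo {
    // Renders the hour of the same instant in a given zone. 2021-04-13 falls
    // within CEST (UTC+2), matching the reporter's local-time tests.
    static int hourIn(final Instant instant, final String zone) {
        return ZonedDateTime.ofInstant(instant, ZoneId.of(zone)).getHour();
    }

    public static void main(final String[] args) {
        final Instant t = Instant.parse("2021-04-13T13:14:00Z");
        System.out.println("UTC hour:    " + hourIn(t, "UTC"));           // 13
        System.out.println("Zurich hour: " + hourIn(t, "Europe/Zurich")); // 15
    }
}
```

Checking `System.getProperty("user.timezone")` and `java.util.TimeZone.getDefault()` on each node at startup is one way to see which zone each JVM actually resolved.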
[GitHub] [nifi] granthenke commented on a change in pull request #5020: NIFI-8435 Added Kudu Client Worker Count property
granthenke commented on a change in pull request #5020: URL: https://github.com/apache/nifi/pull/5020#discussion_r618059261 ## File path: nifi-nar-bundles/nifi-kudu-bundle/nifi-kudu-processors/src/main/java/org/apache/nifi/processors/kudu/AbstractKuduProcessor.java ## @@ -184,10 +200,25 @@ protected KuduClient buildClient(final ProcessContext context) { final String masters = context.getProperty(KUDU_MASTERS).evaluateAttributeExpressions().getValue(); final int operationTimeout = context.getProperty(KUDU_OPERATION_TIMEOUT_MS).evaluateAttributeExpressions().asTimePeriod(TimeUnit.MILLISECONDS).intValue(); final int adminOperationTimeout = context.getProperty(KUDU_KEEP_ALIVE_PERIOD_TIMEOUT_MS).evaluateAttributeExpressions().asTimePeriod(TimeUnit.MILLISECONDS).intValue(); +final int workerCount = context.getProperty(WORKER_COUNT).asInteger(); + +// Create Executor following approach of Executors.newCachedThreadPool() using worker count as maximum pool size +final int corePoolSize = 0; +final long threadKeepAliveTime = 60; +final Executor nioExecutor = new ThreadPoolExecutor( Review comment: Is this required vs just setting the workerCount on the client? IIUC the client uses an unbounded cached threadpool, but the NioEventLoopGroup will never use more than workerCount threads. 
## File path: nifi-nar-bundles/nifi-kudu-bundle/nifi-kudu-processors/src/main/java/org/apache/nifi/processors/kudu/AbstractKuduProcessor.java ## @@ -184,10 +200,25 @@ protected KuduClient buildClient(final ProcessContext context) { final String masters = context.getProperty(KUDU_MASTERS).evaluateAttributeExpressions().getValue(); final int operationTimeout = context.getProperty(KUDU_OPERATION_TIMEOUT_MS).evaluateAttributeExpressions().asTimePeriod(TimeUnit.MILLISECONDS).intValue(); final int adminOperationTimeout = context.getProperty(KUDU_KEEP_ALIVE_PERIOD_TIMEOUT_MS).evaluateAttributeExpressions().asTimePeriod(TimeUnit.MILLISECONDS).intValue(); +final int workerCount = context.getProperty(WORKER_COUNT).asInteger(); + +// Create Executor following approach of Executors.newCachedThreadPool() using worker count as maximum pool size +final int corePoolSize = 0; +final long threadKeepAliveTime = 60; +final Executor nioExecutor = new ThreadPoolExecutor( +corePoolSize, +workerCount, +threadKeepAliveTime, +TimeUnit.SECONDS, +new SynchronousQueue<>(), +new ClientThreadFactory(getIdentifier()) +); return new KuduClient.KuduClientBuilder(masters) .defaultOperationTimeoutMs(operationTimeout) -.defaultSocketReadTimeoutMs(adminOperationTimeout) +.defaultAdminOperationTimeoutMs(adminOperationTimeout) Review comment: This code looks suspect to me: it's called adminOperationTimeout, but the property is called KUDU_KEEP_ALIVE_PERIOD_TIMEOUT_MS. Not sure what this is and it probably should be cleaned up, but maybe in a separate change. 
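The executor construction in the diff follows Executors.newCachedThreadPool() but with a bounded maximum pool size. A self-contained sketch of that pattern, using the same values as the diff (core size 0, 60-second keep-alive, SynchronousQueue handoff); the PR additionally supplies a custom ClientThreadFactory, which is omitted here to keep the sketch NiFi-free:

```java
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

class BoundedCachedPool {
    // Mirrors Executors.newCachedThreadPool(), but caps the pool at maxThreads
    // instead of Integer.MAX_VALUE: zero core threads, 60s idle keep-alive,
    // and a direct handoff via SynchronousQueue.
    static ThreadPoolExecutor create(final int maxThreads) {
        return new ThreadPoolExecutor(
                0, maxThreads,
                60L, TimeUnit.SECONDS,
                new SynchronousQueue<>());
    }
}
```

One design note: with a SynchronousQueue and a bounded maximum, a submission made while all maxThreads workers are busy is rejected rather than queued. That is only safe here because the consumer (Netty's NioEventLoopGroup) never requests more than workerCount threads, which is the point raised in the review comment.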
## File path: nifi-nar-bundles/nifi-kudu-bundle/nifi-kudu-processors/src/main/java/org/apache/nifi/processors/kudu/PutKudu.java ## @@ -342,38 +346,73 @@ public void onTrigger(final ProcessContext context, final ProcessSession session final KerberosUser user = getKerberosUser(); if (user == null) { -executeOnKuduClient(kuduClient -> trigger(context, session, flowFiles, kuduClient)); +executeOnKuduClient(kuduClient -> processFlowFiles(context, session, flowFiles, kuduClient)); return; } final PrivilegedExceptionAction privilegedAction = () -> { -executeOnKuduClient(kuduClient -> trigger(context, session, flowFiles, kuduClient)); +executeOnKuduClient(kuduClient -> processFlowFiles(context, session, flowFiles, kuduClient)); return null; }; final KerberosAction action = new KerberosAction<>(user, privilegedAction, getLogger()); action.execute(); } -private void trigger(final ProcessContext context, final ProcessSession session, final List flowFiles, KuduClient kuduClient) throws ProcessException { -final RecordReaderFactory recordReaderFactory = context.getProperty(RECORD_READER).asControllerService(RecordReaderFactory.class); +private void processFlowFiles(final ProcessContext context, final ProcessSession session, final List flowFiles, final KuduClient kuduClient) { +final Map processedRecords = new HashMap<>(); +final Map flowFileFailures = new HashMap<>(); +final Map operationFlowFileMap = new HashMap<>(); +final List pendingRowErrors = new ArrayList<>(); final KuduSession kuduSession = createKuduSession(kuduClient); +try {
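The diff above is truncated at the opening `try`; the refactoring's intent, per the PR description, is that the Kudu session is always closed after each trigger, including failure paths. A schematic, NiFi-free sketch of that close-in-finally pattern — the `Session` interface below is an illustrative stand-in, not the actual Kudu API:

```java
import java.util.List;

class SessionClosePattern {
    // Illustrative stand-in for KuduSession; not the real Kudu client API.
    interface Session extends AutoCloseable {
        void apply(String operation) throws Exception;
        @Override
        void close();
    }

    // Mirrors the shape of the refactored processFlowFiles(): all work happens
    // inside try/catch, and the session is closed in finally on every
    // invocation, even when an operation fails.
    static void process(final Session session, final List<String> operations) {
        try {
            for (final String operation : operations) {
                session.apply(operation);
            }
        } catch (final Exception e) {
            // In the real processor, failures are collected per FlowFile and
            // routed to the failure relationship rather than swallowed here.
        } finally {
            session.close();
        }
    }
}
```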
[jira] [Updated] (NIFI-8454) InvokeHTTP does not report final URL after following redirects
[ https://issues.apache.org/jira/browse/NIFI-8454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Handermann updated NIFI-8454: --- Fix Version/s: 1.14.0 Resolution: Fixed Status: Resolved (was: Patch Available) > InvokeHTTP does not report final URL after following redirects > -- > > Key: NIFI-8454 > URL: https://issues.apache.org/jira/browse/NIFI-8454 > Project: Apache NiFi > Issue Type: Improvement >Affects Versions: 1.13.2 >Reporter: Paul Kelly >Assignee: Paul Kelly >Priority: Minor > Labels: InvokeHTTP > Fix For: 1.14.0 > > Time Spent: 1h > Remaining Estimate: 0h > > If InvokeHTTP is set to follow redirects, there currently is no way to > retrieve the final URL which was the ultimate target of any redirects that > were followed. > I propose adding a new attribute "invokehttp.response.url" to include the > final URL when InvokeHTTP is set to follow redirects. The current attribute > "invokehttp.request.url" contains the URL which was originally requested, and > this new attribute will contain the URL that was ultimately retrieved. > For example, if the URL "http://bitly.com/1sNZMwL" is retrieved, > invokehttp.request.url will continue to contain "http://bitly.com/1sNZMwL" > and invokehttp.response.url will contain > "https://en.wikipedia.org/wiki/Bitly". -- This message was sent by Atlassian Jira (v8.3.4#803005)
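The proposed behavior can be summarized in a small helper. The attribute names come from the ticket, and the conditional on follow-redirects matches the PR's diff; the method itself is hypothetical, not InvokeHTTP's actual code:

```java
import java.util.HashMap;
import java.util.Map;

class ResponseUrlAttributes {
    // Hypothetical sketch of the attribute behavior NIFI-8454 proposes:
    // invokehttp.request.url always holds the originally requested URL, and
    // invokehttp.response.url (the final URL after redirects) is only added
    // when the processor is configured to follow redirects.
    static Map<String, String> statusAttributes(final String requestUrl,
                                                final String finalUrl,
                                                final boolean followRedirects) {
        final Map<String, String> attributes = new HashMap<>();
        attributes.put("invokehttp.request.url", requestUrl);
        if (followRedirects) {
            attributes.put("invokehttp.response.url", finalUrl);
        }
        return attributes;
    }
}
```

Using the ticket's own example, a request for "http://bitly.com/1sNZMwL" that redirects to "https://en.wikipedia.org/wiki/Bitly" would yield both attributes when redirects are followed, and only the request URL otherwise.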
[jira] [Commented] (NIFI-8454) InvokeHTTP does not report final URL after following redirects
[ https://issues.apache.org/jira/browse/NIFI-8454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17327005#comment-17327005 ] ASF subversion and git services commented on NIFI-8454: --- Commit aedacdf86f33ef944c17f3926656e35b7742bdff in nifi's branch refs/heads/main from Paul Kelly [ https://gitbox.apache.org/repos/asf?p=nifi.git;h=aedacdf ] NIFI-8454: Allow InvokeHTTP to output final URL from response request property This closes #5016 Signed-off-by: David Handermann > InvokeHTTP does not report final URL after following redirects > -- > > Key: NIFI-8454 > URL: https://issues.apache.org/jira/browse/NIFI-8454 > Project: Apache NiFi > Issue Type: Improvement >Affects Versions: 1.13.2 >Reporter: Paul Kelly >Assignee: Paul Kelly >Priority: Minor > Labels: InvokeHTTP > Time Spent: 50m > Remaining Estimate: 0h > > If InvokeHTTP is set to follow redirects, there currently is no way to > retrieve the final URL which was the ultimate target of any redirects that > were followed. > I propose adding a new attribute "invokehttp.response.url" to include the > final URL when InvokeHTTP is set to follow redirects. The current attribute > "invokehttp.request.url" contains the URL which was originally requested, and > this new attribute will contain the URL that was ultimately retrieved. > For example, if the URL "http://bitly.com/1sNZMwL" is retrieved, > invokehttp.request.url will continue to contain "http://bitly.com/1sNZMwL" > and invokehttp.response.url will contain > "https://en.wikipedia.org/wiki/Bitly". -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [nifi] asfgit closed pull request #5016: NIFI-8454: Allow InvokeHTTP to output final URL when following redirects
asfgit closed pull request #5016: URL: https://github.com/apache/nifi/pull/5016 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (NIFI-8435) PutKudu 1.13.2 Memory Leak
[ https://issues.apache.org/jira/browse/NIFI-8435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17326985#comment-17326985 ] David Handermann commented on NIFI-8435: [~jzahner] I put together PR 5020 to add a {{Kudu Client Worker Count}} property for {{PutKudu}}. The default value uses the same approach as {{KuduClient}}, but for your use case, it seems like a smaller number, matching the number of Concurrent Tasks configured for PutKudu, would be better. The PR also ensures that {{KuduSession}} is always closed. The custom worker count number also provided the opportunity to set a custom ThreadFactory so that thread names now include the Processor Identifier, which should be helpful for troubleshooting. Kudu changes from Netty 3 to Netty 4 have definitely impacted the memory footprint, so I do not expect these changes to be a complete resolution. However, if you have any feedback on the PR, feel free to add comments. > PutKudu 1.13.2 Memory Leak > -- > > Key: NIFI-8435 > URL: https://issues.apache.org/jira/browse/NIFI-8435 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Affects Versions: 1.13.2 > Environment: NiFi 1.13.2, 8-Node Cluster running on CentOS 7, Kudu > 1.10.0 >Reporter: Josef Zahner >Assignee: Peter Gyori >Priority: Critical > Labels: kudu, nifi, oom > Attachments: Screenshot 2021-04-20 at 14.27.11.png, > grafana_heap_overview.png, kudu_inserts_per_sec.png, > putkudu_processor_config.png, visualvm_bytes_detail_view.png, > visualvm_total_bytes_used.png > > Time Spent: 10m > Remaining Estimate: 0h > > We just upgraded from NiFi 1.11.4 to 1.13.2 and faced a huge issue with > PutKudu. > PutKudu on the 1.13.2 eats up all the heap memory and garbage collection > can't anymore free up the memory. We allow Java to use 31GB memory and as you > can see with NiFi 1.11.4 it will be used like it should with GC. However with > NiFi 1.13.2 with our actual load it fills up the memory relatively fast. 
> Manual GC via visualvm tool didn't help at all to free up memory. > !grafana_heap_overview.png! > > Visual VM shows the following culprit: !visualvm_total_bytes_used.png! > !visualvm_bytes_detail_view.png! > The bytes array shows millions of char data which isn't cleaned up. In fact > here 14,9GB memory (heapdump has been taken after a while of full load). If > we check the same on NiFi 1.11.4, the bytes array is nearly empty, around a > few hundred MBs. > As you could imagine we can't upload the heap dump as currently we have only > productive data on the system. But don't hesitate to ask questions about the > heapdump if you need more information. > I haven't done any screenshot of the processor config, but I can do that if > you wish (we are back to NiFi 1.11.4 at the moment). -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [nifi] exceptionfactory opened a new pull request #5020: NIFI-8435 Added Kudu Client Worker Count property
exceptionfactory opened a new pull request #5020: URL: https://github.com/apache/nifi/pull/5020 Description of PR NIFI-8435 Adds the `Kudu Client Worker Count` property to `PutKudu` with a default value that matches the value used in `KuduClient`. The property sets the worker count for `KuduClient` as well as the maximum pool size for the `ThreadPoolExecutor` that `KuduClient` uses to configure the internal Netty `NioEventLoopGroup`. This maximum pool size ensures that the thread pool cannot continue to grow unbounded in certain edge case scenarios. The custom `ThreadFactory` also names client threads using the Processor Identifier to assist with runtime troubleshooting. Updates also include refactoring processing methods to ensure that the `KuduSession` is always closed after each `onTrigger` invocation. Under the previous implementation, the Processor called `KuduSession.close()` only when remaining buffered records were found. `PutKudu` is also annotated with `SystemResource.MEMORY` to indicate that multiple instances increase memory usage based on pooled byte buffering present in Netty version 4. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken: ### For all changes: - [X] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? - [X] Does your PR title start with **NIFI-XXXX** where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character. - [X] Has your PR been rebased against the latest commit within the target branch (typically `main`)? - [X] Is your initial contribution a single, squashed commit? _Additional commits in response to PR reviewer feedback should be made on this branch and pushed to allow change tracking. 
Do not `squash` or use `--force` when pushing to allow for clean monitoring of changes._ ### For code changes: - [X] Have you ensured that the full suite of tests is executed via `mvn -Pcontrib-check clean install` at the root `nifi` folder? - [X] Have you written or updated unit tests to verify your changes? - [X] Have you verified that the full build is successful on JDK 8? - [X] Have you verified that the full build is successful on JDK 11? - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the `LICENSE` file, including the main `LICENSE` file under `nifi-assembly`? - [ ] If applicable, have you updated the `NOTICE` file, including the main `NOTICE` file found under `nifi-assembly`? - [X] If adding new Properties, have you added `.displayName` in addition to .name (programmatic access) for each of the new properties? ### For documentation related changes: - [X] Have you ensured that format looks appropriate for the output in which it is rendered? ### Note: Please ensure that once the PR is submitted, you check GitHub Actions CI for build issues and submit an update to your PR as soon as possible. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [nifi] greyp9 commented on a change in pull request #5018: NIFI-3580 - configure TLS cipher suites
greyp9 commented on a change in pull request #5018: URL: https://github.com/apache/nifi/pull/5018#discussion_r617924217 ## File path: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-jetty/src/main/java/org/apache/nifi/web/server/JettyServer.java ## @@ -1012,6 +1016,28 @@ protected static void configureSslContextFactory(SslContextFactory.Server contex contextFactory.setIncludeProtocols(TlsConfiguration.getCurrentSupportedTlsProtocolVersions()); contextFactory.setExcludeProtocols("TLS", "TLSv1", "TLSv1.1", "SSL", "SSLv2", "SSLv2Hello", "SSLv3"); +// on configuration, replace default application cipher suites with those configured +final String includeCipherSuitesProps = props.getProperty(NiFiProperties.WEB_HTTPS_CIPHERSUITES_INCLUDE); +if (StringUtils.isNotEmpty(includeCipherSuitesProps)) { +final String[] includeCipherSuitesRuntime = contextFactory.getIncludeCipherSuites(); +final String[] includeCipherSuites = includeCipherSuitesProps.split(REGEX_SPLIT_PROPERTY); +logger.info("Replacing include cipher suites with configuration; runtime = {}, raw property = {}, parsed property = {}.", Review comment: I'm worried about edge cases, but may be overly cautious. Will change to logging just the parsed value. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [nifi] exceptionfactory commented on a change in pull request #5018: NIFI-3580 - configure TLS cipher suites
exceptionfactory commented on a change in pull request #5018: URL: https://github.com/apache/nifi/pull/5018#discussion_r617916108 ## File path: nifi-docs/src/main/asciidoc/administration-guide.adoc ## @@ -3494,6 +3516,14 @@ Providing three total network interfaces, including `nifi.web.http.network.inte |`nifi.web.https.host`|The HTTPS host. It is blank by default. |`nifi.web.https.port`|The HTTPS port. It is blank by default. When configuring NiFi to run securely, this port should be configured. |`nifi.web.https.port.forwarding`|Same as `nifi.web.http.port.forwarding`, but with HTTPS for secure communication. It is blank by default. +|`nifi.web.https.includeciphersuites`|Cipher suites used to initialize the SSLContext of the Jetty https port. If unspecified, the runtime SSLContext defaults are used. +|`nifi.web.https.excludeciphersuites`|Cipher suites that may not be used by an SSL client to establish a connection to Jetty. If unspecified, the runtime SSLContext defaults are used. Review comment: This property name also needs to be updated: ```suggestion |`nifi.web.https.ciphersuites.exclude`|Cipher suites that may not be used by an SSL client to establish a connection to Jetty. If unspecified, the runtime SSLContext defaults are used. ``` ## File path: nifi-docs/src/main/asciidoc/administration-guide.adoc ## @@ -3494,6 +3516,14 @@ Providing three total network interfaces, including `nifi.web.http.network.inte |`nifi.web.https.host`|The HTTPS host. It is blank by default. |`nifi.web.https.port`|The HTTPS port. It is blank by default. When configuring NiFi to run securely, this port should be configured. |`nifi.web.https.port.forwarding`|Same as `nifi.web.http.port.forwarding`, but with HTTPS for secure communication. It is blank by default. +|`nifi.web.https.includeciphersuites`|Cipher suites used to initialize the SSLContext of the Jetty https port. If unspecified, the runtime SSLContext defaults are used. 
Review comment: Looks like this property name needs to be updated, also recommend changing `https` to uppercase `HTTPS`: ```suggestion |`nifi.web.https.ciphersuites.include`|Cipher suites used to initialize the SSLContext of the Jetty HTTPS port. If unspecified, the runtime SSLContext defaults are used. ``` ## File path: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-jetty/src/main/java/org/apache/nifi/web/server/JettyServer.java ## @@ -1012,6 +1016,28 @@ protected static void configureSslContextFactory(SslContextFactory.Server contex contextFactory.setIncludeProtocols(TlsConfiguration.getCurrentSupportedTlsProtocolVersions()); contextFactory.setExcludeProtocols("TLS", "TLSv1", "TLSv1.1", "SSL", "SSLv2", "SSLv2Hello", "SSLv3"); +// on configuration, replace default application cipher suites with those configured +final String includeCipherSuitesProps = props.getProperty(NiFiProperties.WEB_HTTPS_CIPHERSUITES_INCLUDE); +if (StringUtils.isNotEmpty(includeCipherSuitesProps)) { +final String[] includeCipherSuitesRuntime = contextFactory.getIncludeCipherSuites(); +final String[] includeCipherSuites = includeCipherSuitesProps.split(REGEX_SPLIT_PROPERTY); +logger.info("Replacing include cipher suites with configuration; runtime = {}, raw property = {}, parsed property = {}.", Review comment: Do you think it is likely that the raw property will be that different from the parsed property? It seems like the only difference should be the number of spaces, in which case, logging both doesn't seem very useful. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
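The include/exclude properties discussed above carry a list of cipher suite names in a single string, which `JettyServer` splits with `REGEX_SPLIT_PROPERTY` before handing the arrays to the `SslContextFactory`. A small sketch of that parsing step — the split pattern below (commas with optional surrounding whitespace) is an assumption for illustration; the actual `REGEX_SPLIT_PROPERTY` constant in `JettyServer` may differ:

```java
class CipherSuiteParser {
    // Assumed split pattern: comma-separated values with optional whitespace.
    // The real REGEX_SPLIT_PROPERTY in JettyServer may be defined differently.
    static final String REGEX_SPLIT_PROPERTY = "\\s*,\\s*";

    // Turns a raw property value into the String[] form that
    // SslContextFactory.setIncludeCipherSuites / setExcludeCipherSuites expect.
    static String[] parse(final String property) {
        return property.trim().split(REGEX_SPLIT_PROPERTY);
    }
}
```

As the review comments note, the raw and parsed values differ only in whitespace under this scheme, which is why logging just the parsed array is sufficient.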
[GitHub] [nifi] markap14 opened a new pull request #5019: NIFI-8457: Fixed bug in load balanced connections that can result in …
markap14 opened a new pull request #5019: URL: https://github.com/apache/nifi/pull/5019 …the node never completing OFFLOAD action. Also fixed issue in which data destined for a disconnected/offloaded node was never rebalanced even for partitioning strategies that call for rebalancing on failure Thank you for submitting a contribution to Apache NiFi. Please provide a short description of the PR here: Description of PR _Enables X functionality; fixes bug NIFI-._ In order to streamline the review of the contribution we ask you to ensure the following steps have been taken: ### For all changes: - [ ] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? - [ ] Does your PR title start with **NIFI-XXXX** where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character. - [ ] Has your PR been rebased against the latest commit within the target branch (typically `main`)? - [ ] Is your initial contribution a single, squashed commit? _Additional commits in response to PR reviewer feedback should be made on this branch and pushed to allow change tracking. Do not `squash` or use `--force` when pushing to allow for clean monitoring of changes._ ### For code changes: - [ ] Have you ensured that the full suite of tests is executed via `mvn -Pcontrib-check clean install` at the root `nifi` folder? - [ ] Have you written or updated unit tests to verify your changes? - [ ] Have you verified that the full build is successful on JDK 8? - [ ] Have you verified that the full build is successful on JDK 11? - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the `LICENSE` file, including the main `LICENSE` file under `nifi-assembly`? 
- [ ] If applicable, have you updated the `NOTICE` file, including the main `NOTICE` file found under `nifi-assembly`? - [ ] If adding new Properties, have you added `.displayName` in addition to .name (programmatic access) for each of the new properties? ### For documentation related changes: - [ ] Have you ensured that format looks appropriate for the output in which it is rendered? ### Note: Please ensure that once the PR is submitted, you check GitHub Actions CI for build issues and submit an update to your PR as soon as possible. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Created] (NIFI-8457) When Load-balanced connections are used, offloading a node may never complete
Mark Payne created NIFI-8457: Summary: When Load-balanced connections are used, offloading a node may never complete Key: NIFI-8457 URL: https://issues.apache.org/jira/browse/NIFI-8457 Project: Apache NiFi Issue Type: Bug Components: Core Framework Reporter: Mark Payne Assignee: Mark Payne Fix For: 1.14.0 If NiFi has load-balanced connections and a node is offloaded, the offloaded node may sometimes persistently show itself as having data in a queue, resulting in the offload never completing. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (NIFI-8454) InvokeHTTP does not report final URL after following redirects
[ https://issues.apache.org/jira/browse/NIFI-8454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Paul Kelly updated NIFI-8454: - Status: Patch Available (was: In Progress) > InvokeHTTP does not report final URL after following redirects > -- > > Key: NIFI-8454 > URL: https://issues.apache.org/jira/browse/NIFI-8454 > Project: Apache NiFi > Issue Type: Improvement >Affects Versions: 1.13.2 >Reporter: Paul Kelly >Assignee: Paul Kelly >Priority: Minor > Labels: InvokeHTTP > Time Spent: 50m > Remaining Estimate: 0h > > If InvokeHTTP is set to follow redirects, there currently is no way to > retrieve the final URL which was the ultimate target of any redirects that > were followed. > I propose adding a new attribute "invokehttp.response.url" to include the > final URL when InvokeHTTP is set to follow redirects. The current attribute > "invokehttp.request.url" contains the URL which was originally requested, and > this new attribute will contain the URL that was ultimately retrieved. > For example, if the URL "http://bitly.com/1sNZMwL" is retrieved, > invokehttp.request.url will continue to contain "http://bitly.com/1sNZMwL" > and invokehttp.response.url will contain > "https://en.wikipedia.org/wiki/Bitly". -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (NIFI-8456) Reduce load of NiFi PRs/Commits on Github
[ https://issues.apache.org/jira/browse/NIFI-8456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joe Witt resolved NIFI-8456. Fix Version/s: 1.14.0 Assignee: Joe Witt Resolution: Fixed > Reduce load of NiFi PRs/Commits on Github > - > > Key: NIFI-8456 > URL: https://issues.apache.org/jira/browse/NIFI-8456 > Project: Apache NiFi > Issue Type: Task >Reporter: Joe Witt >Assignee: Joe Witt >Priority: Major > Fix For: 1.14.0 > > Time Spent: 20m > Remaining Estimate: 0h > > Currently we have 4 builds that run, each taking at least 50 minutes. This is > useful for us but excessive. It appears we show up among the top 13 consumers > across all of the ASF. > So while it would be nice to keep it as is, let's reduce. Will make FR be done > in the Windows build and will drop the other 1.8 build. So keep one Java 8 > build on OSX with JP, one Java 11 build on Ubuntu with EN, one Java 8 build > on Windows with FR. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [nifi] asfgit closed pull request #5017: NIFI-8456
asfgit closed pull request #5017: URL: https://github.com/apache/nifi/pull/5017 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (NIFI-8456) Reduce load of NiFi PRs/Commits on Github
[ https://issues.apache.org/jira/browse/NIFI-8456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17326945#comment-17326945 ] ASF subversion and git services commented on NIFI-8456: --- Commit 8207c9db202cca571344fea5852c749617e10854 in nifi's branch refs/heads/main from Joe Witt [ https://gitbox.apache.org/repos/asf?p=nifi.git;h=8207c9d ] NIFI-8456 This closes #5017. Ensure we have only three builds and we maximally cover zulu vs adopt, linux vs windows vs osx, and java 8 vs 11 and EN vs FR vs JP Signed-off-by: Joe Witt > Reduce load of NiFi PRs/Commits on Github > - > > Key: NIFI-8456 > URL: https://issues.apache.org/jira/browse/NIFI-8456 > Project: Apache NiFi > Issue Type: Task >Reporter: Joe Witt >Priority: Major > Time Spent: 20m > Remaining Estimate: 0h > > Currently we have 4 builds that run, each taking at least 50 minutes. This is > useful for us but excessive. It appears we show up among the top 13 consumers > across all of the ASF. > So while it would be nice to keep it as is, let's reduce. Will make FR be done > in the Windows build and will drop the other 1.8 build. So keep one Java 8 > build on OSX with JP, one Java 11 build on Ubuntu with EN, one Java 8 build > on Windows with FR. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [nifi] markap14 commented on pull request #5017: NIFI-8456
markap14 commented on pull request #5017: URL: https://github.com/apache/nifi/pull/5017#issuecomment-824369646 Thanks @joewitt this should help with some of the long build times. +1 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Comment Edited] (NIFI-2892) Implement AWS Kinesis Stream Get Processor
[ https://issues.apache.org/jira/browse/NIFI-2892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17310962#comment-17310962 ] Chris Sampson edited comment on NIFI-2892 at 4/21/21, 9:15 PM: --- Updated based on PR comments: * added (optional) Record Reader/Writer * better Thread/Session handling for the use of Kinesis Client Library * Reverted to KCL version 1.13.3 due to [amazon-kinesis-client#796|https://github.com/awslabs/amazon-kinesis-client/issues/796] * removed double-validation of dynamic properties (introduced by NIFI-8266; raised as NIFI-8431) Updated template/flow definition attachments. was (Author: chris s): Updated based on PR comments: * added (optional) Record Reader/Writer * better Thread/Session handling for the use of Kinesis Client Library * Reverted to KCL version 1.13.3 due to [amazon-kinesis-client#796|https://github.com/awslabs/amazon-kinesis-client/issues/796] * removed double-validation of dynamic properties (introduced by NIFI-8266) Updated template/flow definition attachments. > Implement AWS Kinesis Stream Get Processor > -- > > Key: NIFI-2892 > URL: https://issues.apache.org/jira/browse/NIFI-2892 > Project: Apache NiFi > Issue Type: New Feature >Affects Versions: 1.0.0 >Reporter: Stephane Maarek >Assignee: Chris Sampson >Priority: Major > Attachments: NIFI-2892.xml, NiFi-2892.json > > Time Spent: 10h 40m > Remaining Estimate: 0h > > As the Kinesis Put Processor was just implemented in #1540, it would be great > to have a Kinesis Get Processor. > The main challenges are making sure the SDKs we use for this have a license > that authorise NiFi to bundle them with the main binaries. > I also know there are two models to read from a Kinesis Stream (push vs > pull), and we may want to implement either or both, as part of one or > multiple processors, for performance purposes. Open for discussions -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [nifi] pkelly-nifi commented on a change in pull request #5016: NIFI-8454: Allow InvokeHTTP to output final URL when following redirects
pkelly-nifi commented on a change in pull request #5016: URL: https://github.com/apache/nifi/pull/5016#discussion_r617842956 ## File path: nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/InvokeHTTP.java ## @@ -857,6 +859,9 @@ public void onTrigger(ProcessContext context, ProcessSession session) throws Pro statusAttributes.put(STATUS_MESSAGE, statusMessage); statusAttributes.put(REQUEST_URL, url.toExternalForm()); statusAttributes.put(TRANSACTION_ID, txId.toString()); +if(context.getProperty(PROP_FOLLOW_REDIRECTS).asBoolean()) { Review comment: Just added the test. Thank you for your help. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [nifi] greyp9 commented on pull request #5018: NIFI-3580 - configure TLS cipher suites
greyp9 commented on pull request #5018: URL: https://github.com/apache/nifi/pull/5018#issuecomment-824324614 Bad upstream rebase in the previous PR: https://github.com/apache/nifi/pull/5008
[GitHub] [nifi] greyp9 opened a new pull request #5018: NIFI-3580 - configure TLS cipher suites
greyp9 opened a new pull request #5018: URL: https://github.com/apache/nifi/pull/5018 Thank you for submitting a contribution to Apache NiFi. Please provide a short description of the PR here: Description of PR _Enables X functionality; fixes bug NIFI-XXXX._ In order to streamline the review of the contribution we ask you to ensure the following steps have been taken: ### For all changes: - [ ] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? - [ ] Does your PR title start with **NIFI-XXXX** where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character. - [ ] Has your PR been rebased against the latest commit within the target branch (typically `main`)? - [ ] Is your initial contribution a single, squashed commit? _Additional commits in response to PR reviewer feedback should be made on this branch and pushed to allow change tracking. Do not `squash` or use `--force` when pushing to allow for clean monitoring of changes._ ### For code changes: - [ ] Have you ensured that the full suite of tests is executed via `mvn -Pcontrib-check clean install` at the root `nifi` folder? - [ ] Have you written or updated unit tests to verify your changes? - [ ] Have you verified that the full build is successful on JDK 8? - [ ] Have you verified that the full build is successful on JDK 11? - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the `LICENSE` file, including the main `LICENSE` file under `nifi-assembly`? - [ ] If applicable, have you updated the `NOTICE` file, including the main `NOTICE` file found under `nifi-assembly`? - [ ] If adding new Properties, have you added `.displayName` in addition to `.name` (programmatic access) for each of the new properties?
### For documentation related changes: - [ ] Have you ensured that format looks appropriate for the output in which it is rendered? ### Note: Please ensure that once the PR is submitted, you check GitHub Actions CI for build issues and submit an update to your PR as soon as possible. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [nifi] joewitt opened a new pull request #5017: NIFI-8456
joewitt opened a new pull request #5017: URL: https://github.com/apache/nifi/pull/5017 (unfilled Apache NiFi pull-request template omitted; identical to the #5018 template above)
[GitHub] [nifi] gresockj commented on pull request #5007: [WIP] Introduce notion of an asynchronous session commit
gresockj commented on pull request #5007: URL: https://github.com/apache/nifi/pull/5007#issuecomment-824303600 Neat concept; it's easy to see the pattern of putting "clean-up" code that used to follow the commit() call inside the new commitAsync consumer.
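The pattern described above (clean-up that used to follow a blocking commit() moving into the callback passed to commitAsync) can be sketched with a plain-JDK stand-in. The Session class below is hypothetical and only mimics the shape of the API, not NiFi's actual ProcessSession:

```java
import java.util.concurrent.CompletableFuture;

public class CommitAsyncDemo {
    // Hypothetical stand-in for a session that commits asynchronously.
    static class Session {
        void commitAsync(Runnable onSuccess) {
            CompletableFuture.runAsync(() -> { /* persist session state */ })
                    .thenRun(onSuccess)
                    .join(); // joined here only so the demo is deterministic
        }
    }

    public static void main(String[] args) {
        Session session = new Session();
        // Clean-up that used to follow a blocking commit() now runs as the callback.
        session.commitAsync(() -> System.out.println("clean-up after successful commit"));
    }
}
```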
[jira] [Created] (NIFI-8456) Reduce load of NiFi PRs/Commits on Github
Joe Witt created NIFI-8456: -- Summary: Reduce load of NiFi PRs/Commits on Github Key: NIFI-8456 URL: https://issues.apache.org/jira/browse/NIFI-8456 Project: Apache NiFi Issue Type: Task Reporter: Joe Witt Currently we have 4 builds that run, each taking at least 50 minutes. This is useful for us but excessive. It appears we show up among the top 13 consumers in the report covering all of ASF. So while it would be nice to keep it as is, let's reduce. FR will be done in the Windows build and the other Java 1.8 build will be dropped. So: keep one Java 8 build on OSX with JP, one Java 11 build on Ubuntu with EN, and one Java 8 build on Windows with FR.
[GitHub] [nifi] greyp9 closed pull request #5008: NIFI-3580 - configure TLS cipher suites
greyp9 closed pull request #5008: URL: https://github.com/apache/nifi/pull/5008
[GitHub] [nifi] driesva commented on pull request #4065: NIFI-4239 - Adding CaptureChangePostgreSQL processor to capture data changes (INSERT/UPDATE/DELETE) in PostgreSQL tables via Logical Replicatio
driesva commented on pull request #4065: URL: https://github.com/apache/nifi/pull/4065#issuecomment-824251308 Hello @davyam Well, we wanted to work together... That's why @mathiasbosman initially opened a PR against @gerdansantos' fork, but it did not get merged. Given the original PR is not accessible to us, we could not push our changes to that PR, and because there was no reaction, we created a new PR against main including the initial code and our improvements. Meanwhile that PR is also out of date...
[jira] [Resolved] (NIFI-8451) KeyStoreUtils Test Failures on Java 1.8.0 Update 292
[ https://issues.apache.org/jira/browse/NIFI-8451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nathan Gough resolved NIFI-8451. Fix Version/s: 1.13.3 1.14.0 Resolution: Fixed > KeyStoreUtils Test Failures on Java 1.8.0 Update 292 > > > Key: NIFI-8451 > URL: https://issues.apache.org/jira/browse/NIFI-8451 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 1.13.2 > Environment: OpenJDK Runtime Environment (Zulu 8.54.0.21-CA-macosx) > (build 1.8.0_292-b10) >Reporter: David Handermann >Assignee: David Handermann >Priority: Major > Fix For: 1.14.0, 1.13.3 > > Time Spent: 40m > Remaining Estimate: 0h > > Azul Zulu JDK 8 Update 292 introduced changes resulting in unit test failures > for {{KeyStoreUtils}} tests related to PKCS12. > {quote}java.security.KeyStoreException: Key protection algorithm not found: > java.security.UnrecoverableKeyException: Encrypt Private Key failed: > unrecognized algorithm name: PBEWithSHA1AndDESede > at sun.security.pkcs12.PKCS12KeyStore.setKeyEntry(PKCS12KeyStore.java:694) > at > sun.security.pkcs12.PKCS12KeyStore.engineSetKeyEntry(PKCS12KeyStore.java:594) > at java.security.KeyStore.setKeyEntry(KeyStore.java:1140) > at > org.apache.nifi.security.util.KeyStoreUtilsTest.testKeyStoreRoundTrip(KeyStoreUtilsTest.java:124) > at > org.apache.nifi.security.util.KeyStoreUtilsTest.testPkcs12KeyStoreRoundTripBcReload(KeyStoreUtilsTest.java:79) > Caused by: java.security.UnrecoverableKeyException: Encrypt Private Key > failed: unrecognized algorithm name: PBEWithSHA1AndDESede > at > sun.security.pkcs12.PKCS12KeyStore.encryptPrivateKey(PKCS12KeyStore.java:938) > at sun.security.pkcs12.PKCS12KeyStore.setKeyEntry(PKCS12KeyStore.java:631) > ... 
33 more > Caused by: java.security.NoSuchAlgorithmException: unrecognized algorithm > name: PBEWithSHA1AndDESede > at sun.security.x509.AlgorithmId.get(AlgorithmId.java:448) > at > sun.security.pkcs12.PKCS12KeyStore.mapPBEAlgorithmToOID(PKCS12KeyStore.java:955) > at > sun.security.pkcs12.PKCS12KeyStore.encryptPrivateKey(PKCS12KeyStore.java:912) > {quote} > The unit tests use {{KeyStore.getInstance()}} without specifying the > provider, causing a mismatch between the KeyStore instances created using > {{KeyStoreUtils.getKeyStore()}}, which determines the provider based on > internal configuration. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (NIFI-8451) KeyStoreUtils Test Failures on Java 1.8.0 Update 292
[ https://issues.apache.org/jira/browse/NIFI-8451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17326765#comment-17326765 ] ASF subversion and git services commented on NIFI-8451: --- Commit ed6d5bacba45e4a8015cf224366845b16b82b13e in nifi's branch refs/heads/main from David Handermann [ https://gitbox.apache.org/repos/asf?p=nifi.git;h=ed6d5ba ] NIFI-8451 Updated KeyStoreUtils to use KeyStore.getInstance() with provider - Refactored and consolidated KeyStoreUtils unit tests - Corrected KeyStoreUtils.loadEmptyKeyStore() to use KeyStoreUtils.getKeyStore() Signed-off-by: Nathan Gough This closes #5015. > KeyStoreUtils Test Failures on Java 1.8.0 Update 292 > > > Key: NIFI-8451 > URL: https://issues.apache.org/jira/browse/NIFI-8451 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 1.13.2 > Environment: OpenJDK Runtime Environment (Zulu 8.54.0.21-CA-macosx) > (build 1.8.0_292-b10) >Reporter: David Handermann >Assignee: David Handermann >Priority: Major > Time Spent: 40m > Remaining Estimate: 0h > > Azul Zulu JDK 8 Update 292 introduced changes resulting in unit test failures > for {{KeyStoreUtils}} tests related to PKCS12. 
> {quote}java.security.KeyStoreException: Key protection algorithm not found: > java.security.UnrecoverableKeyException: Encrypt Private Key failed: > unrecognized algorithm name: PBEWithSHA1AndDESede > at sun.security.pkcs12.PKCS12KeyStore.setKeyEntry(PKCS12KeyStore.java:694) > at > sun.security.pkcs12.PKCS12KeyStore.engineSetKeyEntry(PKCS12KeyStore.java:594) > at java.security.KeyStore.setKeyEntry(KeyStore.java:1140) > at > org.apache.nifi.security.util.KeyStoreUtilsTest.testKeyStoreRoundTrip(KeyStoreUtilsTest.java:124) > at > org.apache.nifi.security.util.KeyStoreUtilsTest.testPkcs12KeyStoreRoundTripBcReload(KeyStoreUtilsTest.java:79) > Caused by: java.security.UnrecoverableKeyException: Encrypt Private Key > failed: unrecognized algorithm name: PBEWithSHA1AndDESede > at > sun.security.pkcs12.PKCS12KeyStore.encryptPrivateKey(PKCS12KeyStore.java:938) > at sun.security.pkcs12.PKCS12KeyStore.setKeyEntry(PKCS12KeyStore.java:631) > ... 33 more > Caused by: java.security.NoSuchAlgorithmException: unrecognized algorithm > name: PBEWithSHA1AndDESede > at sun.security.x509.AlgorithmId.get(AlgorithmId.java:448) > at > sun.security.pkcs12.PKCS12KeyStore.mapPBEAlgorithmToOID(PKCS12KeyStore.java:955) > at > sun.security.pkcs12.PKCS12KeyStore.encryptPrivateKey(PKCS12KeyStore.java:912) > {quote} > The unit tests use {{KeyStore.getInstance()}} without specifying the > provider, causing a mismatch between the KeyStore instances created using > {{KeyStoreUtils.getKeyStore()}}, which determines the provider based on > internal configuration. -- This message was sent by Atlassian Jira (v8.3.4#803005)
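The fix described in the commit above (passing an explicit provider to KeyStore.getInstance() so that keystore creation and reload agree) can be illustrated with a small JDK-only sketch. This is an illustration of the idea, not NiFi's actual KeyStoreUtils code:

```java
import java.security.KeyStore;
import java.security.Provider;

public class ProviderPinningDemo {
    public static void main(String[] args) throws Exception {
        // Default lookup: whichever registered provider offers PKCS12 first wins.
        KeyStore defaultStore = KeyStore.getInstance("PKCS12");
        Provider provider = defaultStore.getProvider();

        // Explicit lookup pins the provider, so a keystore written with one
        // implementation is reloaded with the same one.
        KeyStore pinnedStore = KeyStore.getInstance("PKCS12", provider.getName());
        System.out.println("pinned provider matches: "
                + pinnedStore.getProvider().getName().equals(provider.getName()));
    }
}
```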
[GitHub] [nifi] thenatog closed pull request #5015: NIFI-8451 Corrected KeyStoreUtils usage of KeyStore.getInstance()
thenatog closed pull request #5015: URL: https://github.com/apache/nifi/pull/5015
[GitHub] [nifi] thenatog commented on pull request #5015: NIFI-8451 Corrected KeyStoreUtils usage of KeyStore.getInstance()
thenatog commented on pull request #5015: URL: https://github.com/apache/nifi/pull/5015#issuecomment-824241631 Looks good to me, will merge.
[GitHub] [nifi] exceptionfactory commented on a change in pull request #5016: NIFI-8454: Allow InvokeHTTP to output final URL when following redirects
exceptionfactory commented on a change in pull request #5016: URL: https://github.com/apache/nifi/pull/5016#discussion_r617729422 ## File path: nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/InvokeHTTP.java ## @@ -857,6 +859,9 @@ public void onTrigger(ProcessContext context, ProcessSession session) throws Pro statusAttributes.put(STATUS_MESSAGE, statusMessage); statusAttributes.put(REQUEST_URL, url.toExternalForm()); statusAttributes.put(TRANSACTION_ID, txId.toString()); +if(context.getProperty(PROP_FOLLOW_REDIRECTS).asBoolean()) { Review comment: Thanks for the response and updates @pkelly-nifi, the changes look good. Could you also update the InvokeHTTPTest to check for the existence of the Response URL attribute? The `assertStatusCodeEquals()` method has a check for the InvokeHTTP.REQUEST_URL attribute, so just adding a line to check for the Response URL attribute should be sufficient.
[GitHub] [nifi] exceptionfactory commented on pull request #5015: NIFI-8451 Corrected KeyStoreUtils usage of KeyStore.getInstance()
exceptionfactory commented on pull request #5015: URL: https://github.com/apache/nifi/pull/5015#issuecomment-824181474 > The KeystoreUtils changes look good, I'm just curious about the unit test refactoring. Can you expand on your reasoning for the refactoring there? For example, I'm wondering why testKeyStoreRoundTrip is no longer being used, etc. Thanks for the feedback @gresockj. The core concept of the round trip methods remains, but I removed the unnecessary KeyStoreSupplier interface and streamlined the methods. The `assertKeyEntryStoredLoaded` and `assertCertificateEntryStoredLoaded` methods follow the same approach as before by populating a source KeyStore with an entry, writing it to an OutputStream, and then reading it into a destination KeyStore instance. I wrapped all of it in a loop that tests all values in the `KeystoreType` enum. Previously there were individual test methods for each KeystoreType, so this approach ensures that any changes to `KeystoreType` will automatically be tested. Let me know if you have any other questions.
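The consolidated test approach described above (populate a source KeyStore, write it to a stream, reload it into a destination instance, looping over every store type) can be sketched with JDK classes only. The store types, alias, and password below are illustrative, not the actual KeyStoreUtilsTest code:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.security.KeyStore;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

public class KeyStoreRoundTripDemo {
    public static void main(String[] args) throws Exception {
        char[] password = "changeit".toCharArray();
        SecretKey key = KeyGenerator.getInstance("AES").generateKey();

        // Loop over each store type, mirroring a test that iterates an enum.
        // (JKS is omitted because it cannot hold secret keys.)
        for (String type : new String[] {"PKCS12", "JCEKS"}) {
            KeyStore source = KeyStore.getInstance(type);
            source.load(null, null);
            source.setEntry("entry", new KeyStore.SecretKeyEntry(key),
                    new KeyStore.PasswordProtection(password));

            // Round trip: store to a stream, then reload into a new instance.
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            source.store(out, password);
            KeyStore destination = KeyStore.getInstance(type);
            destination.load(new ByteArrayInputStream(out.toByteArray()), password);

            System.out.println(type + " reloaded entry: " + destination.isKeyEntry("entry"));
        }
    }
}
```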
[jira] [Commented] (NIFI-8431) Redundant validation of Dynamic Properties
[ https://issues.apache.org/jira/browse/NIFI-8431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17326659#comment-17326659 ] Chris Sampson commented on NIFI-8431: - FYI I stumbled upon this when testing my changes for NIFI-2892 - included a change to [AbstractConfigurableComponent|https://github.com/apache/nifi/pull/4822/files#diff-d3c2e8ceca3ff4fb750d6cd36e94a9123c0d00c33f06cc8bf61e1c72d55a0f6fL129], although that's part of a much larger PR than would be needed to address just this ticket. > Redundant validation of Dynamic Properties > -- > > Key: NIFI-8431 > URL: https://issues.apache.org/jira/browse/NIFI-8431 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 1.13.2 >Reporter: Mark Bean >Priority: Major > > The validate method of AbstractConfigurableComponent was modified recently. > Where it used to loop through only supported property descriptors (via > getSupportedPropertyDescriptors method call), it now loops through all > descriptors in the context (via ValidationContext.getProperties.keySet() > call). This includes dynamic properties. Therefore, the subsequent validation > of dynamic properties is redundant, and can produce duplicate validation > errors.
[GitHub] [nifi] pkelly-nifi commented on a change in pull request #5016: NIFI-8454: Allow InvokeHTTP to output final URL when following redirects
pkelly-nifi commented on a change in pull request #5016: URL: https://github.com/apache/nifi/pull/5016#discussion_r617670684 ## File path: nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/InvokeHTTP.java ## @@ -857,6 +859,9 @@ public void onTrigger(ProcessContext context, ProcessSession session) throws Pro statusAttributes.put(STATUS_MESSAGE, statusMessage); statusAttributes.put(REQUEST_URL, url.toExternalForm()); statusAttributes.put(TRANSACTION_ID, txId.toString()); +if(context.getProperty(PROP_FOLLOW_REDIRECTS).asBoolean()) { Review comment: Thank you for your feedback. I debated the same thing but went this way to avoid duplication, since if redirects are disabled the "response" URL would always be identical to the "request" URL. I'd be happy to always add it as well. It might make some flows cleaner if they can rely on the "response" attribute always being set. Let me submit that change.
[GitHub] [nifi] phrocker edited a comment on pull request #4973: NIFI-8373 Add Kerberos support to Accumulo processors
phrocker edited a comment on pull request #4973: URL: https://github.com/apache/nifi/pull/4973#issuecomment-824166125 @pvillard31 I will try and find some time to get this tested. The code looks good, so I will see if I can spin an instance up to run this. Thanks.
[GitHub] [nifi] phrocker commented on pull request #4973: NIFI-8373 Add Kerberos support to Accumulo processors
phrocker commented on pull request #4973: URL: https://github.com/apache/nifi/pull/4973#issuecomment-824166125 @pvillard31 I will try and find some time to get this tested.
[GitHub] [nifi] exceptionfactory commented on a change in pull request #5016: NIFI-8454: Allow InvokeHTTP to output final URL when following redirects
exceptionfactory commented on a change in pull request #5016: URL: https://github.com/apache/nifi/pull/5016#discussion_r617664623 ## File path: nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/InvokeHTTP.java ## @@ -857,6 +859,9 @@ public void onTrigger(ProcessContext context, ProcessSession session) throws Pro statusAttributes.put(STATUS_MESSAGE, statusMessage); statusAttributes.put(REQUEST_URL, url.toExternalForm()); statusAttributes.put(TRANSACTION_ID, txId.toString()); +if(context.getProperty(PROP_FOLLOW_REDIRECTS).asBoolean()) { Review comment: Rather than making the response.url attribute conditional on the Follow Redirects property, it seems like it would be useful to always add the attribute. What do you think?
[jira] [Commented] (NIFI-8453) HandleHttpRequest does not receive HTTP request
[ https://issues.apache.org/jira/browse/NIFI-8453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17326649#comment-17326649 ] Belkacem D commented on NIFI-8453: -- For more details, here is the stack of the blocked HandleHttpRequest thread: {code:java} "Timer-Driven Process Thread-183" Id=232 WAITING on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@22cd9c3b at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) at org.apache.nifi.controller.repository.FileSystemRepository$ContainerState.waitForArchiveExpiration(FileSystemRepository.java:1660) at org.apache.nifi.controller.repository.FileSystemRepository.create(FileSystemRepository.java:609) at org.apache.nifi.controller.repository.claim.StandardContentClaimWriteCache.getContentClaim(StandardContentClaimWriteCache.java:63) at org.apache.nifi.controller.repository.StandardProcessSession.write(StandardProcessSession.java:2658) at org.apache.nifi.processors.standard.HandleHttpRequest.onTrigger(HandleHttpRequest.java:695) at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27) at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1173) at org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:214) at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:117) at org.apache.nifi.engine.FlowEngine$2.run(FlowEngine.java:110) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) Number of Locked Synchronizers: 1 - java.util.concurrent.ThreadPoolExecutor$Worker@48c35d7b {code} > HandleHttpRequest does not receive HTTP request > --- > > Key: NIFI-8453 > URL: https://issues.apache.org/jira/browse/NIFI-8453 > Project: Apache NiFi > Issue Type: Bug >Affects Versions: 1.13.2 > Environment: Linux CentOS 7.5 >Reporter: Belkacem D >Priority: Major > > After starting Nifi, HandleHttpRequest thread remains blocked and does not > receive HTTP requests. > A listening TCP socket is opened, a new request opens a TCP socket and data > is buffered. > But HandleHttpRequest does not receive the request and the process thread is > blocked. > No errors are raised in Nifi logs. > Restarting Nifi solves the issue. > Finally, note that this bug was not encountered with Nifi 1.13.0.
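The stack above shows the processor blocked in FileSystemRepository.waitForArchiveExpiration(), i.e. waiting for content-repository archive cleanup before it is allowed to write. One avenue to investigate (an assumption, not a confirmed fix for this ticket) is the archive configuration in nifi.properties; the values below are the documented defaults:

```properties
# Content repository archive settings (nifi.properties)
nifi.content.repository.archive.enabled=true
# Archived content is purged after this retention period...
nifi.content.repository.archive.max.retention.period=12 hours
# ...or once repository disk usage exceeds this percentage; writes can stall
# while cleanup runs if the disk stays above the threshold
nifi.content.repository.archive.max.usage.percentage=50%
```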
[jira] [Comment Edited] (NIFI-8454) InvokeHTTP does not report final URL after following redirects
[ https://issues.apache.org/jira/browse/NIFI-8454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17326644#comment-17326644 ] David Handermann edited comment on NIFI-8454 at 4/21/21, 3:37 PM: -- This sounds like a useful feature [~pkelly.nifi]! The OkHttp Response object has an associated [request|https://square.github.io/okhttp/4.x/okhttp/okhttp3/-response/request/] property that contains the associated request details, which appears to contain the final URL. I will take a look at the associated PR. was (Author: exceptionfactory): This sounds like a useful feature [~pkelly.nifi]! The OkHttp Response object has an associated [request|https://square.github.io/okhttp/4.x/okhttp/okhttp3/-response/request/] property that contains the associated request details, which appears to contain the final URL. > InvokeHTTP does not report final URL after following redirects > -- > > Key: NIFI-8454 > URL: https://issues.apache.org/jira/browse/NIFI-8454 > Project: Apache NiFi > Issue Type: Improvement >Affects Versions: 1.13.2 >Reporter: Paul Kelly >Assignee: Paul Kelly >Priority: Minor > Labels: InvokeHTTP > Time Spent: 10m > Remaining Estimate: 0h > > If InvokeHTTP is set to follow redirects, there currently is no way to > retrieve the final URL which was the ultimate target of any redirects that > were followed. > I propose adding a new attribute "invokehttp.response.url" to include the > final URL when InvokeHTTP is set to follow redirects. The current attribute > "invokehttp.request.url" contains the URL which was originally requested, and > this new attribute will contain the URL that was ultimately retrieved. > For example, if the URL "http://bitly.com/1sNZMwL" is retrieved, > invokehttp.request.url will continue to contain "http://bitly.com/1sNZMwL" > and invokehttp.response.url will contain > "https://en.wikipedia.org/wiki/Bitly".
[jira] [Commented] (NIFI-8454) InvokeHTTP does not report final URL after following redirects
[ https://issues.apache.org/jira/browse/NIFI-8454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17326644#comment-17326644 ] David Handermann commented on NIFI-8454: This sounds like a useful feature [~pkelly.nifi]! The OkHttp Response object has an associated [request|https://square.github.io/okhttp/4.x/okhttp/okhttp3/-response/request/] property that contains the associated request details, which appears to contain the final URL. > InvokeHTTP does not report final URL after following redirects > -- > > Key: NIFI-8454 > URL: https://issues.apache.org/jira/browse/NIFI-8454 > Project: Apache NiFi > Issue Type: Improvement >Affects Versions: 1.13.2 >Reporter: Paul Kelly >Assignee: Paul Kelly >Priority: Minor > Labels: InvokeHTTP > Time Spent: 10m > Remaining Estimate: 0h > > If InvokeHTTP is set to follow redirects, there currently is no way to > retrieve the final URL which was the ultimate target of any redirects that > were followed. > I propose adding a new attribute "invokehttp.response.url" to include the > final URL when InvokeHTTP is set to follow redirects. The current attribute > "invokehttp.request.url" contains the URL which was originally requested, and > this new attribute will contain the URL that was ultimately retrieved. > For example, if the URL "http://bitly.com/1sNZMwL" is retrieved, > invokehttp.request.url will continue to contain "http://bitly.com/1sNZMwL" > and invokehttp.response.url will contain > "https://en.wikipedia.org/wiki/Bitly".
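The behaviour proposed in NIFI-8454 (a response URL that differs from the request URL once redirects are followed) can be demonstrated with the JDK 11+ HttpClient against a local test server. This is a self-contained illustration of the concept, not InvokeHTTP's OkHttp-based implementation; the /old and /final paths are made up for the demo:

```java
import com.sun.net.httpserver.HttpServer;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RedirectUrlDemo {
    public static void main(String[] args) throws Exception {
        // Local server: /old issues a 302 redirect to /final.
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/old", exchange -> {
            exchange.getResponseHeaders().add("Location", "/final");
            exchange.sendResponseHeaders(302, -1);
            exchange.close();
        });
        server.createContext("/final", exchange -> {
            byte[] body = "ok".getBytes();
            exchange.sendResponseHeaders(200, body.length);
            exchange.getResponseBody().write(body);
            exchange.close();
        });
        server.start();

        HttpClient client = HttpClient.newBuilder()
                .followRedirects(HttpClient.Redirect.NORMAL)
                .build();
        URI requestUri = URI.create("http://localhost:"
                + server.getAddress().getPort() + "/old");
        HttpResponse<String> response = client.send(
                HttpRequest.newBuilder(requestUri).build(),
                HttpResponse.BodyHandlers.ofString());

        // The original request URL and the final (post-redirect) URL differ,
        // mirroring invokehttp.request.url vs. the proposed invokehttp.response.url.
        System.out.println("request.url path: " + requestUri.getPath());
        System.out.println("response.url path: " + response.uri().getPath());
        server.stop(0);
    }
}
```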
[GitHub] [nifi] pkelly-nifi opened a new pull request #5016: NIFI-8454: Allow InvokeHTTP to output final URL when following redirects
pkelly-nifi opened a new pull request #5016: URL: https://github.com/apache/nifi/pull/5016 Thank you for submitting a contribution to Apache NiFi. Please provide a short description of the PR here: Description of PR _Writes an attribute with the final URL requested when following redirects (NIFI-8454)_ In order to streamline the review of the contribution we ask you to ensure the following steps have been taken: ### For all changes: - [X] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? - [X] Does your PR title start with **NIFI-XXXX** where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character. - [ ] Has your PR been rebased against the latest commit within the target branch (typically `main`)? - [X] Is your initial contribution a single, squashed commit? _Additional commits in response to PR reviewer feedback should be made on this branch and pushed to allow change tracking. Do not `squash` or use `--force` when pushing to allow for clean monitoring of changes._ ### For code changes: - [X] Have you ensured that the full suite of tests is executed via `mvn -Pcontrib-check clean install` at the root `nifi` folder? - [ ] Have you written or updated unit tests to verify your changes? - [X] Have you verified that the full build is successful on JDK 8? - [X] Have you verified that the full build is successful on JDK 11? - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the `LICENSE` file, including the main `LICENSE` file under `nifi-assembly`? - [ ] If applicable, have you updated the `NOTICE` file, including the main `NOTICE` file found under `nifi-assembly`? - [ ] If adding new Properties, have you added `.displayName` in addition to `.name` (programmatic access) for each of the new properties?
### For documentation related changes: - [ ] Have you ensured that format looks appropriate for the output in which it is rendered? ### Note: Please ensure that once the PR is submitted, you check GitHub Actions CI for build issues and submit an update to your PR as soon as possible. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (NIFI-8435) PutKudu 1.13.2 Memory Leak
[ https://issues.apache.org/jira/browse/NIFI-8435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17326617#comment-17326617 ] David Handermann commented on NIFI-8435: Thanks for the confirmation [~jzahner]. In attempting to reproduce the issue, I observed that each KuduClient instance can spawn a number of threads in some circumstances. On a small NiFi instance, stopping the PutKudu processor temporarily increases the number of threads. More recent versions of the Kudu client library incorporated an upgrade from Netty 3 to Netty 4, which included changes to how Netty handles pooled byte buffering. Initial testing with Netty leak detection enabled did not yield any definite findings, but memory usage is clearly connected to the number of PutKudu Processor instances and the associated KuduClient instances. The KuduClient supports configuring the number of worker threads, so this may be an area of improvement, although it does not appear to be a complete answer to the problem. > PutKudu 1.13.2 Memory Leak > -- > > Key: NIFI-8435 > URL: https://issues.apache.org/jira/browse/NIFI-8435 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Affects Versions: 1.13.2 > Environment: NiFi 1.13.2, 8-Node Cluster running on CentOS 7, Kudu > 1.10.0 >Reporter: Josef Zahner >Assignee: Peter Gyori >Priority: Critical > Labels: kudu, nifi, oom > Attachments: Screenshot 2021-04-20 at 14.27.11.png, > grafana_heap_overview.png, kudu_inserts_per_sec.png, > putkudu_processor_config.png, visualvm_bytes_detail_view.png, > visualvm_total_bytes_used.png > > > We just upgraded from NiFi 1.11.4 to 1.13.2 and faced a huge issue with > PutKudu. > PutKudu on 1.13.2 eats up all the heap memory and garbage collection > can't free the memory anymore. We allow Java to use 31GB memory and as you > can see with NiFi 1.11.4 the memory is used as it should be, with GC. However with > NiFi 1.13.2 with our actual load it fills up the memory relatively fast. 
> Manual GC via visualvm tool didn't help at all to free up memory. > !grafana_heap_overview.png! > > Visual VM shows the following culprit: !visualvm_total_bytes_used.png! > !visualvm_bytes_detail_view.png! > The bytes array shows millions of char data which isn't cleaned up. In fact > here 14,9GB memory (heapdump has been taken after a while of full load). If > we check the same on NiFi 1.11.4, the bytes array is nearly empty, around a > few hundred MBs. > As you could imagine we can't upload the heap dump as currently we have only > productive data on the system. But don't hesitate to ask questions about the > heapdump if you need more information. > I haven't done any screenshot of the processor config, but I can do that if > you wish (we are back to NiFi 1.11.4 at the moment). -- This message was sent by Atlassian Jira (v8.3.4#803005)
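The thread growth described in the comment above can be checked from inside the JVM without any Kudu dependency. The sketch below counts live threads by name; the `kudu` marker and the simulated thread name are assumptions — substitute whatever prefix the KuduClient worker threads actually show in a thread dump of the affected node.

```java
public class ThreadCount {
    // Count live threads whose name contains the given marker (case-insensitive).
    static long countThreads(String marker) {
        return Thread.getAllStackTraces().keySet().stream()
                .filter(t -> t.getName().toLowerCase().contains(marker.toLowerCase()))
                .count();
    }

    public static void main(String[] args) throws Exception {
        // Simulate a client worker thread; real KuduClient worker threads
        // appear under their own names in a thread dump.
        Thread worker = new Thread(() -> {
            try { Thread.sleep(500); } catch (InterruptedException ignored) { }
        }, "kudu-worker-1");
        worker.start();
        System.out.println("kudu-named threads: " + countThreads("kudu"));
        worker.join();
    }
}
```

Running this periodically (or polling the same count via JMX) would make it easy to see whether thread counts track the number of PutKudu processor instances, as suggested in the comment.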
[GitHub] [nifi] exceptionfactory opened a new pull request #5015: NIFI-8451 Corrected KeyStoreUtils usage of KeyStore.getInstance()
exceptionfactory opened a new pull request #5015: URL: https://github.com/apache/nifi/pull/5015 Description of PR NIFI-8451 Corrects `KeyStoreUtils` and related unit tests usage of `java.security.KeyStore.getInstance()` to ensure that a Security Provider is always specified. Java 8 Update 292 uncovered an issue with `KeyStoreUtils.loadEmptyKeyStore()` and a unit test method where the Security Provider was not supplied when instantiating a PKCS12 KeyStore. This resulted in test failures when attempting to store and load PrivateKey entries between the BouncyCastle PKCS12 KeyStore implementation and the JDK PKCS12 implementation. Updates include refactoring KeyStoreUtils unit tests into a single test class, removing ignored test methods, and iterating over all KeystoreType values in place of separate test methods. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken: ### For all changes: - [X] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? - [X] Does your PR title start with **NIFI-XXXX** where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character. - [X] Has your PR been rebased against the latest commit within the target branch (typically `main`)? - [X] Is your initial contribution a single, squashed commit? _Additional commits in response to PR reviewer feedback should be made on this branch and pushed to allow change tracking. Do not `squash` or use `--force` when pushing to allow for clean monitoring of changes._ ### For code changes: - [ ] Have you ensured that the full suite of tests is executed via `mvn -Pcontrib-check clean install` at the root `nifi` folder? - [X] Have you written or updated unit tests to verify your changes? - [X] Have you verified that the full build is successful on JDK 8? - [ ] Have you verified that the full build is successful on JDK 11? 
- [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the `LICENSE` file, including the main `LICENSE` file under `nifi-assembly`? - [ ] If applicable, have you updated the `NOTICE` file, including the main `NOTICE` file found under `nifi-assembly`? - [ ] If adding new Properties, have you added `.displayName` in addition to .name (programmatic access) for each of the new properties? ### For documentation related changes: - [ ] Have you ensured that format looks appropriate for the output in which it is rendered? ### Note: Please ensure that once the PR is submitted, you check GitHub Actions CI for build issues and submit an update to your PR as soon as possible. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
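The core of the fix described in this PR — always passing an explicit Security Provider to `KeyStore.getInstance()` — can be illustrated with JDK-only calls. This is a sketch of the general technique, not NiFi's actual `KeyStoreUtils` code; the provider chosen here is simply whichever one backs the default lookup, whereas NiFi selects it based on internal configuration.

```java
import java.security.KeyStore;
import java.security.Provider;

public class ExplicitProviderKeyStore {
    // KeyStore.getInstance("PKCS12") binds to whichever registered provider
    // supports PKCS12 and is found first; passing the Provider explicitly
    // pins both sides of a store/load round trip to one implementation.
    static KeyStore pkcs12WithExplicitProvider() throws Exception {
        Provider provider = KeyStore.getInstance("PKCS12").getProvider();
        return KeyStore.getInstance("PKCS12", provider);
    }

    public static void main(String[] args) throws Exception {
        KeyStore keyStore = pkcs12WithExplicitProvider();
        System.out.println("PKCS12 provider: " + keyStore.getProvider().getName());
    }
}
```

When two providers (e.g. the JDK's and BouncyCastle's) both offer PKCS12, pinning the provider is what prevents the round-trip mismatch this PR corrects.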
[jira] [Resolved] (NIFI-8450) Add Kerberos support as an authentication type for accumulo
[ https://issues.apache.org/jira/browse/NIFI-8450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Marc Parisi resolved NIFI-8450. --- Resolution: Duplicate Duplicate of NIFI-8373 > Add Kerberos support as an authentication type for accumulo > --- > > Key: NIFI-8450 > URL: https://issues.apache.org/jira/browse/NIFI-8450 > Project: Apache NiFi > Issue Type: New Feature > Components: Extensions >Affects Versions: 1.12.1 >Reporter: Marc Parisi >Priority: Minor > Original Estimate: 336h > Remaining Estimate: 336h > > We should add Kerberos support for the Accumulo client. This could possibly be done > by way of adding the service principal and a link to a keytab file that may be > provided via the configuration. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (NIFI-8450) Add Kerberos support as an authentication type for accumulo
[ https://issues.apache.org/jira/browse/NIFI-8450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17326602#comment-17326602 ] Marc Parisi commented on NIFI-8450: --- [~bbende] wow, my search foo is getting really bad. Thanks. I'll close this. I see Josh reviewed it and appears good with this. > Add Kerberos support as an authentication type for accumulo > --- > > Key: NIFI-8450 > URL: https://issues.apache.org/jira/browse/NIFI-8450 > Project: Apache NiFi > Issue Type: New Feature > Components: Extensions >Affects Versions: 1.12.1 >Reporter: Marc Parisi >Priority: Minor > Original Estimate: 336h > Remaining Estimate: 336h > > We should add Kerberos support for the Accumulo client. This could possibly be done > by way of adding the service principal and a link to a keytab file that may be > provided via the configuration. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (NIFI-8339) Input Threads get Interrupted and stuck indefinitely
[ https://issues.apache.org/jira/browse/NIFI-8339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Payne updated NIFI-8339: - Fix Version/s: 1.14.0
> Input Threads get Interrupted and stuck indefinitely
>
> Key: NIFI-8339
> URL: https://issues.apache.org/jira/browse/NIFI-8339
> Project: Apache NiFi
> Issue Type: Bug
> Affects Versions: 1.13.0
> Reporter: Rene Weidlinger
> Priority: Major
> Fix For: 1.14.0
>
> Attachments: firefox_Yf6NUeQe5X.png, nifi-app.log, nifi.properties, td1.txt
>
> After some seconds we see this stack trace in nifi on one of our inputs:
> {noformat}
> 2021-03-18 07:33:34,703 ERROR [NiFi Web Server-18] o.a.nifi.web.api.ApplicationResource Unexpected exception occurred. portId=c4d93fb6-5e5b-1382-b39b-66fbc04660f0
> 2021-03-18 07:33:34,703 ERROR [NiFi Web Server-18] o.a.nifi.web.api.ApplicationResource Exception detail: org.apache.nifi.processor.exception.ProcessException: org.apache.nifi.processor.exception.ProcessException: Interrupted while waiting for site-to-site request to be serviced
> at org.apache.nifi.remote.StandardPublicPort.receiveFlowFiles(StandardPublicPort.java:588)
> at org.apache.nifi.web.api.DataTransferResource.receiveFlowFiles(DataTransferResource.java:277)
> at sun.reflect.GeneratedMethodAccessor198.invoke(Unknown Source)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.glassfish.jersey.server.model.internal.ResourceMethodInvocationHandlerFactory.lambda$static$0(ResourceMethodInvocationHandlerFactory.java:76)
> at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher$1.run(AbstractJavaResourceMethodDispatcher.java:148)
> at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.invoke(AbstractJavaResourceMethodDispatcher.java:191)
> at org.glassfish.jersey.server.model.internal.JavaResourceMethodDispatcherProvider$ResponseOutInvoker.doDispatch(JavaResourceMethodDispatcherProvider.java:200)
> at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.dispatch(AbstractJavaResourceMethodDispatcher.java:103)
> at org.glassfish.jersey.server.model.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:493)
> at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:415)
> at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:104)
> at org.glassfish.jersey.server.ServerRuntime$1.run(ServerRuntime.java:277)
> at org.glassfish.jersey.internal.Errors$1.call(Errors.java:272)
> at org.glassfish.jersey.internal.Errors$1.call(Errors.java:268)
> at org.glassfish.jersey.internal.Errors.process(Errors.java:316)
> at org.glassfish.jersey.internal.Errors.process(Errors.java:298)
> at org.glassfish.jersey.internal.Errors.process(Errors.java:268)
> at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:289)
> at org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:256)
> at org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:703)
> at org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:416)
> at org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:370)
> at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:389)
> at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:342)
> at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:229)
> at org.eclipse.jetty.servlet.ServletHolder$NotAsync.service(ServletHolder.java:1452)
> at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:791)
> at org.eclipse.jetty.servlet.ServletHandler$ChainEnd.doFilter(ServletHandler.java:1626)
> at org.apache.nifi.web.filter.RequestLogger.doFilter(RequestLogger.java:66)
> at org.eclipse.jetty.servlet.FilterHolder.doFilter(FilterHolder.java:193)
> at org.eclipse.jetty.servlet.ServletHandler$Chain.doFilter(ServletHandler.java:1601)
> at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:317)
> at org.springframework.security.web.access.intercept.FilterSecurityInterceptor.invoke(FilterSecurityInterceptor.java:127)
> at
[jira] [Resolved] (NIFI-8339) Input Threads get Interrupted and stuck indefinitely
[ https://issues.apache.org/jira/browse/NIFI-8339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Payne resolved NIFI-8339. -- Resolution: Duplicate > Input Threads get Interrupted and stuck indefinitely > > > Key: NIFI-8339 > URL: https://issues.apache.org/jira/browse/NIFI-8339 > Project: Apache NiFi > Issue Type: Bug >Affects Versions: 1.13.0 >Reporter: Rene Weidlinger >Priority: Major > Attachments: firefox_Yf6NUeQe5X.png, nifi-app.log, nifi.properties, td1.txt > > > After some seconds we see this stack trace in nifi on one of our inputs (stack trace omitted here; it is identical to the one quoted in the update notification for NIFI-8339 above).
[jira] [Commented] (NIFI-8339) Input Threads get Interrupted and stuck indefinitely
[ https://issues.apache.org/jira/browse/NIFI-8339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17326585#comment-17326585 ] Mark Payne commented on NIFI-8339: -- Thanks for confirming. Will do. > Input Threads get Interrupted and stuck indefinitely > > > Key: NIFI-8339 > URL: https://issues.apache.org/jira/browse/NIFI-8339 > Project: Apache NiFi > Issue Type: Bug >Affects Versions: 1.13.0 >Reporter: Rene Weidlinger >Priority: Major > Attachments: firefox_Yf6NUeQe5X.png, nifi-app.log, nifi.properties, td1.txt > > > After some seconds we see this stack trace in nifi on one of our inputs (stack trace omitted here; it is identical to the one quoted in the update notification for NIFI-8339 above).
[jira] [Resolved] (NIFI-8455) HandleHttpRequest does not receive HTTP request
[ https://issues.apache.org/jira/browse/NIFI-8455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Belkacem D resolved NIFI-8455. -- Resolution: Duplicate > HandleHttpRequest does not receive HTTP request > --- > > Key: NIFI-8455 > URL: https://issues.apache.org/jira/browse/NIFI-8455 > Project: Apache NiFi > Issue Type: Bug > Environment: Linux CentOS 7.5 >Reporter: Belkacem D >Priority: Major > Fix For: 1.13.2 > > > After starting Nifi, HandleHttpRequest thread remains blocked and does not > receive HTTP requests. > Nifi is listening on a TCP port, a new request opens a TCP socket and data is > buffered. > But HandleHttpRequest does not receive the request and the process thread is > blocked. > No errors are raised in Nifi logs. > Restarting Nifi solves the issue. > Finally, note that this bug was not encountered with Nifi 1.13.0. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (NIFI-8455) HandleHttpRequest does not receive HTTP request
Belkacem D created NIFI-8455: Summary: HandleHttpRequest does not receive HTTP request Key: NIFI-8455 URL: https://issues.apache.org/jira/browse/NIFI-8455 Project: Apache NiFi Issue Type: Bug Environment: Linux CentOS 7.5 Reporter: Belkacem D Fix For: 1.13.2 After starting Nifi, HandleHttpRequest thread remains blocked and does not receive HTTP requests. Nifi is listening on a TCP port, a new request opens a TCP socket and data is buffered. But HandleHttpRequest does not receive the request and the process thread is blocked. No errors are raised in Nifi logs. Restarting Nifi solves the issue. Finally, note that this bug was not encountered with Nifi 1.13.0. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (NIFI-8454) InvokeHTTP does not report final URL after following redirects
Paul Kelly created NIFI-8454: Summary: InvokeHTTP does not report final URL after following redirects Key: NIFI-8454 URL: https://issues.apache.org/jira/browse/NIFI-8454 Project: Apache NiFi Issue Type: Improvement Affects Versions: 1.13.2 Reporter: Paul Kelly Assignee: Paul Kelly If InvokeHTTP is set to follow redirects, there currently is no way to retrieve the final URL which was the ultimate target of any redirects that were followed. I propose adding a new attribute "invokehttp.response.url" to include the final URL when InvokeHTTP is set to follow redirects. The current attribute "invokehttp.request.url" contains the URL which was originally requested, and this new attribute will contain the URL that was ultimately retrieved. For example, if the URL "http://bitly.com/1sNZMwL" is retrieved, invokehttp.request.url will continue to contain "http://bitly.com/1sNZMwL" and invokehttp.response.url will contain "https://en.wikipedia.org/wiki/Bitly". -- This message was sent by Atlassian Jira (v8.3.4#803005)
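Downstream of the proposed attribute, a flow could detect whether any redirect was followed simply by comparing the two attributes. The sketch below uses a plain `Map` as a stand-in for flowfile attributes; the attribute names come from the ticket, but the helper method itself is illustrative, not part of NiFi.

```java
import java.util.Map;

public class RedirectCheck {
    // True when the proposed invokehttp.response.url differs from
    // invokehttp.request.url, i.e. at least one redirect was followed.
    static boolean wasRedirected(Map<String, String> attributes) {
        String requested = attributes.get("invokehttp.request.url");
        String resolved = attributes.get("invokehttp.response.url");
        return resolved != null && !resolved.equals(requested);
    }

    public static void main(String[] args) {
        // The ticket's own example: a bitly short link resolving to Wikipedia.
        Map<String, String> attrs = Map.of(
                "invokehttp.request.url", "http://bitly.com/1sNZMwL",
                "invokehttp.response.url", "https://en.wikipedia.org/wiki/Bitly");
        System.out.println("redirected: " + wasRedirected(attrs)); // prints "redirected: true"
    }
}
```

In a real flow this comparison would live in a RouteOnAttribute expression rather than Java code; the point is that keeping both attributes makes the redirect history recoverable.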
[jira] [Created] (NIFI-8453) HandleHttpRequest does not receive HTTP request
Belkacem D created NIFI-8453: Summary: HandleHttpRequest does not receive HTTP request Key: NIFI-8453 URL: https://issues.apache.org/jira/browse/NIFI-8453 Project: Apache NiFi Issue Type: Bug Affects Versions: 1.13.2 Environment: Linux CentOS 7.5 Reporter: Belkacem D After starting Nifi, HandleHttpRequest thread remains blocked and does not receive HTTP requests. A listening TCP socket is opened, a new request opens a TCP socket and data is buffered. But HandleHttpRequest does not receive the request and the process thread is blocked. No errors are raised in Nifi logs. Restarting Nifi solves the issue. Finally, note that this bug was not encountered with Nifi 1.13.0. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Assigned] (MINIFICPP-1510) Register and fix InvokeHTTPTests
[ https://issues.apache.org/jira/browse/MINIFICPP-1510?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gabor Gyimesi reassigned MINIFICPP-1510: Assignee: Gabor Gyimesi > Register and fix InvokeHTTPTests > > > Key: MINIFICPP-1510 > URL: https://issues.apache.org/jira/browse/MINIFICPP-1510 > Project: Apache NiFi MiNiFi C++ > Issue Type: Task >Reporter: Amina Dinari >Assignee: Gabor Gyimesi >Priority: Minor > Labels: MiNiFi-CPP-Hygiene > > InvokeHTTPTests is not registered in ctest and produces an error in case it's > made to run separately. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [nifi-minifi-cpp] adamdebreceni opened a new pull request #1058: MINIFICPP-1547 - Change default c2 protocol
adamdebreceni opened a new pull request #1058: URL: https://github.com/apache/nifi-minifi-cpp/pull/1058 Thank you for submitting a contribution to Apache NiFi - MiNiFi C++. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken: ### For all changes: - [ ] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? - [ ] Does your PR title start with MINIFICPP-XXXX where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character. - [ ] Has your PR been rebased against the latest commit within the target branch (typically main)? - [ ] Is your initial contribution a single, squashed commit? ### For code changes: - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the LICENSE file? - [ ] If applicable, have you updated the NOTICE file? ### For documentation related changes: - [ ] Have you ensured that format looks appropriate for the output in which it is rendered? ### Note: Please ensure that once the PR is submitted, you check GitHub Actions CI results for build issues and submit an update to your PR as soon as possible. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [nifi-minifi-cpp] fgerlits commented on a change in pull request #1052: MINIFICPP-1244 Support the Initial Start Position property in TailFile
fgerlits commented on a change in pull request #1052: URL: https://github.com/apache/nifi-minifi-cpp/pull/1052#discussion_r617430402 ## File path: extensions/standard-processors/processors/TailFile.cpp ## @@ -653,29 +690,56 @@ void TailFile::onTrigger(const std::shared_ptr<core::ProcessContext> &, const std::shared_ptr<core::ProcessSession> &session) { if (!session->existsFlowFileInRelationship(Success)) { yield(); } + + first_trigger_ = false; +} + +bool TailFile::isOldFileInitiallyRead(TailState &state) const { + // This is our initial processing and no stored state was found + return first_trigger_ && state.checksum_ == 0; Review comment: Checking `state.last_read_time_ == 0` instead of `state.checksum_ == 0` would be safer, as there is a tiny chance the checksum happens to be zero on a non-new file, but the timestamp can't be from 1970. ## File path: extensions/standard-processors/processors/TailFile.h ## @@ -147,13 +161,27 @@ class TailFile : public core::Processor { std::string rolling_filename_pattern_; + std::string initial_start_position_; Review comment: I think it would be nicer to store this as an enum rather than a string. You could use @adamdebreceni's `SMART_ENUM`, which has a `values()` method that could be used instead of `INITIAL_START_POSITIONS`. 
## File path: extensions/standard-processors/tests/unit/TailFileTests.cpp ## @@ -1536,3 +1514,197 @@ TEST_CASE("TailFile interprets the lookup frequency property correctly", "[multi REQUIRE(LogTestController::getInstance().contains("Logged 1 flow files")); } } + +TEST_CASE("TailFile reads from a single file when Initial Start Position is set", "[initialStartPosition]") { + TestController testController; + LogTestController::getInstance().setTrace(); + LogTestController::getInstance().setDebug(); + + std::shared_ptr<TestPlan> plan = testController.createPlan(); + std::shared_ptr<core::Processor> tailfile = plan->addProcessor("TailFile", "tailfileProc"); + std::shared_ptr<core::Processor> logattribute = plan->addProcessor("LogAttribute", "logattribute", core::Relationship("success", "description"), true); + + auto dir = minifi::utils::createTempDir(); + createTempFile(dir, ROLLED_OVER_TMP_FILE, ROLLED_OVER_TAIL_DATA); + auto temp_file_path = createTempFile(dir, TMP_FILE, NEWLINE_FILE); + + plan->setProperty(logattribute, org::apache::nifi::minifi::processors::LogAttribute::FlowFilesToLog.getName(), "0"); + plan->setProperty(tailfile, org::apache::nifi::minifi::processors::TailFile::FileName.getName(), temp_file_path); + plan->setProperty(tailfile, org::apache::nifi::minifi::processors::TailFile::Delimiter.getName(), "\n"); + + SECTION("Initial Start Position is set to Beginning of File") { +plan->setProperty(tailfile, org::apache::nifi::minifi::processors::TailFile::InitialStartPosition.getName(), "Beginning of File"); + +testController.runSession(plan); + +REQUIRE(LogTestController::getInstance().contains("Logged 1 flow files")); +REQUIRE(LogTestController::getInstance().contains("Size:" + std::to_string(NEWLINE_FILE.find_first_of('\n') + 1) + " Offset:0")); + +plan->reset(true); + LogTestController::getInstance().resetStream(LogTestController::getInstance().log_output); + +appendTempFile(dir, TMP_FILE, NEW_TAIL_DATA); + +testController.runSession(plan); + 
+REQUIRE(LogTestController::getInstance().contains("Logged 1 flow files")); +REQUIRE(LogTestController::getInstance().contains("Size:" + std::to_string(NEWLINE_FILE.size() - NEWLINE_FILE.find_first_of('\n') + NEW_TAIL_DATA.find_first_of('\n')) + " Offset:0")); + } + + SECTION("Initial Start Position is set to Beginning of Time") { +plan->setProperty(tailfile, org::apache::nifi::minifi::processors::TailFile::InitialStartPosition.getName(), "Beginning of Time"); + +testController.runSession(plan); + +REQUIRE(LogTestController::getInstance().contains("Logged 2 flow files")); +REQUIRE(LogTestController::getInstance().contains("Size:" + std::to_string(NEWLINE_FILE.find_first_of('\n') + 1) + " Offset:0")); +REQUIRE(LogTestController::getInstance().contains("Size:" + std::to_string(ROLLED_OVER_TAIL_DATA.find_first_of('\n') + 1) + " Offset:0")); + +plan->reset(true); + LogTestController::getInstance().resetStream(LogTestController::getInstance().log_output); + +appendTempFile(dir, TMP_FILE, NEW_TAIL_DATA); + +testController.runSession(plan); + +REQUIRE(LogTestController::getInstance().contains("Logged 1 flow files")); +REQUIRE(LogTestController::getInstance().contains("Size:" + std::to_string(NEWLINE_FILE.size() - NEWLINE_FILE.find_first_of('\n') + NEW_TAIL_DATA.find_first_of('\n')) + " Offset:0")); + } + + SECTION("Initial Start Position is set to Current Time") { +plan->setProperty(tailfile, org::apache::nifi::minifi::processors::TailFile::InitialStartPosition.getName(), "Current Time"); + +testController.runSession(plan); + +
[jira] [Commented] (NIFI-8452) Extend FetchGridFS with NIFI-5916 behavior
[ https://issues.apache.org/jira/browse/NIFI-8452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17326523#comment-17326523 ] Damien T commented on NIFI-8452: Improvement created as requested (https://github.com/apache/nifi/pull/3315) > Extend FetchGridFS with NIFI-5916 behavior > --- > > Key: NIFI-8452 > URL: https://issues.apache.org/jira/browse/NIFI-8452 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Affects Versions: 1.13.2 >Reporter: Damien T >Priority: Major > > The behavior of NIFI-5916 in GetMongo processor was not extended to > FetchGridFS. > Is it possible to add it? > Thanks -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (NIFI-8452) Extend FetchGridFS with NIFI-5916 behavior
Damien T created NIFI-8452: -- Summary: Extend FetchGridFS with NIFI-5916 behavior Key: NIFI-8452 URL: https://issues.apache.org/jira/browse/NIFI-8452 Project: Apache NiFi Issue Type: Improvement Components: Extensions Affects Versions: 1.13.2 Reporter: Damien T The behavior of NIFI-5916 in GetMongo processor was not extended to FetchGridFS. Is it possible to add it? Thanks -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (NIFI-8451) KeyStoreUtils Test Failures on Java 1.8.0 Update 292
[ https://issues.apache.org/jira/browse/NIFI-8451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Handermann updated NIFI-8451: --- Description: Azul Zulu JDK 8 Update 292 introduced changes resulting in unit test failures for {{KeyStoreUtils}} tests related to PKCS12. {quote}java.security.KeyStoreException: Key protection algorithm not found: java.security.UnrecoverableKeyException: Encrypt Private Key failed: unrecognized algorithm name: PBEWithSHA1AndDESede at sun.security.pkcs12.PKCS12KeyStore.setKeyEntry(PKCS12KeyStore.java:694) at sun.security.pkcs12.PKCS12KeyStore.engineSetKeyEntry(PKCS12KeyStore.java:594) at java.security.KeyStore.setKeyEntry(KeyStore.java:1140) at org.apache.nifi.security.util.KeyStoreUtilsTest.testKeyStoreRoundTrip(KeyStoreUtilsTest.java:124) at org.apache.nifi.security.util.KeyStoreUtilsTest.testPkcs12KeyStoreRoundTripBcReload(KeyStoreUtilsTest.java:79) Caused by: java.security.UnrecoverableKeyException: Encrypt Private Key failed: unrecognized algorithm name: PBEWithSHA1AndDESede at sun.security.pkcs12.PKCS12KeyStore.encryptPrivateKey(PKCS12KeyStore.java:938) at sun.security.pkcs12.PKCS12KeyStore.setKeyEntry(PKCS12KeyStore.java:631) ... 33 more Caused by: java.security.NoSuchAlgorithmException: unrecognized algorithm name: PBEWithSHA1AndDESede at sun.security.x509.AlgorithmId.get(AlgorithmId.java:448) at sun.security.pkcs12.PKCS12KeyStore.mapPBEAlgorithmToOID(PKCS12KeyStore.java:955) at sun.security.pkcs12.PKCS12KeyStore.encryptPrivateKey(PKCS12KeyStore.java:912) {quote} The unit tests use {{KeyStore.getInstance()}} without specifying the provider, causing a mismatch between the KeyStore instances created using {{KeyStoreUtils.getKeyStore()}}, which determines the provider based on internal configuration. was: Azul Zulu JDK 8 Update 292 introduced changes resulting in unit test failures for {{KeyStoreUtils}} tests related to PKCS12. 
{quote}java.security.KeyStoreException: Key protection algorithm not found: java.security.UnrecoverableKeyException: Encrypt Private Key failed: unrecognized algorithm name: PBEWithSHA1AndDESede at sun.security.pkcs12.PKCS12KeyStore.setKeyEntry(PKCS12KeyStore.java:694) at sun.security.pkcs12.PKCS12KeyStore.engineSetKeyEntry(PKCS12KeyStore.java:594) at java.security.KeyStore.setKeyEntry(KeyStore.java:1140) at org.apache.nifi.security.util.KeyStoreUtilsTest.testKeyStoreRoundTrip(KeyStoreUtilsTest.java:124) at org.apache.nifi.security.util.KeyStoreUtilsTest.testPkcs12KeyStoreRoundTripBcReload(KeyStoreUtilsTest.java:79) Caused by: java.security.UnrecoverableKeyException: Encrypt Private Key failed: unrecognized algorithm name: PBEWithSHA1AndDESede at sun.security.pkcs12.PKCS12KeyStore.encryptPrivateKey(PKCS12KeyStore.java:938) at sun.security.pkcs12.PKCS12KeyStore.setKeyEntry(PKCS12KeyStore.java:631) ... 33 more Caused by: java.security.NoSuchAlgorithmException: unrecognized algorithm name: PBEWithSHA1AndDESede at sun.security.x509.AlgorithmId.get(AlgorithmId.java:448) at sun.security.pkcs12.PKCS12KeyStore.mapPBEAlgorithmToOID(PKCS12KeyStore.java:955) at sun.security.pkcs12.PKCS12KeyStore.encryptPrivateKey(PKCS12KeyStore.java:912){quote} The unit tests use {{KeyStore.getInstance()}} without specifying the provider, causing a mismatch between the KeyStore instances created using {{KeyStoreUtils.getInstance()}}, which determines the provider based on internal configuration. 
> KeyStoreUtils Test Failures on Java 1.8.0 Update 292 > > > Key: NIFI-8451 > URL: https://issues.apache.org/jira/browse/NIFI-8451 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 1.13.2 > Environment: OpenJDK Runtime Environment (Zulu 8.54.0.21-CA-macosx) > (build 1.8.0_292-b10) >Reporter: David Handermann >Assignee: David Handermann >Priority: Major > > Azul Zulu JDK 8 Update 292 introduced changes resulting in unit test failures > for {{KeyStoreUtils}} tests related to PKCS12. > {quote}java.security.KeyStoreException: Key protection algorithm not found: > java.security.UnrecoverableKeyException: Encrypt Private Key failed: > unrecognized algorithm name: PBEWithSHA1AndDESede > at sun.security.pkcs12.PKCS12KeyStore.setKeyEntry(PKCS12KeyStore.java:694) > at > sun.security.pkcs12.PKCS12KeyStore.engineSetKeyEntry(PKCS12KeyStore.java:594) > at java.security.KeyStore.setKeyEntry(KeyStore.java:1140) > at > org.apache.nifi.security.util.KeyStoreUtilsTest.testKeyStoreRoundTrip(KeyStoreUtilsTest.java:124) > at > org.apache.nifi.security.util.KeyStoreUtilsTest.testPkcs12KeyStoreRoundTripBcReload(KeyStoreUtilsTest.java:79) > Caused by:
[jira] [Created] (NIFI-8451) KeyStoreUtils Test Failures on Java 1.8.0 Update 292
David Handermann created NIFI-8451: -- Summary: KeyStoreUtils Test Failures on Java 1.8.0 Update 292 Key: NIFI-8451 URL: https://issues.apache.org/jira/browse/NIFI-8451 Project: Apache NiFi Issue Type: Bug Components: Core Framework Affects Versions: 1.13.2 Environment: OpenJDK Runtime Environment (Zulu 8.54.0.21-CA-macosx) (build 1.8.0_292-b10) Reporter: David Handermann Assignee: David Handermann Azul Zulu JDK 8 Update 292 introduced changes resulting in unit test failures for {{KeyStoreUtils}} tests related to PKCS12. {quote}java.security.KeyStoreException: Key protection algorithm not found: java.security.UnrecoverableKeyException: Encrypt Private Key failed: unrecognized algorithm name: PBEWithSHA1AndDESede at sun.security.pkcs12.PKCS12KeyStore.setKeyEntry(PKCS12KeyStore.java:694) at sun.security.pkcs12.PKCS12KeyStore.engineSetKeyEntry(PKCS12KeyStore.java:594) at java.security.KeyStore.setKeyEntry(KeyStore.java:1140) at org.apache.nifi.security.util.KeyStoreUtilsTest.testKeyStoreRoundTrip(KeyStoreUtilsTest.java:124) at org.apache.nifi.security.util.KeyStoreUtilsTest.testPkcs12KeyStoreRoundTripBcReload(KeyStoreUtilsTest.java:79) Caused by: java.security.UnrecoverableKeyException: Encrypt Private Key failed: unrecognized algorithm name: PBEWithSHA1AndDESede at sun.security.pkcs12.PKCS12KeyStore.encryptPrivateKey(PKCS12KeyStore.java:938) at sun.security.pkcs12.PKCS12KeyStore.setKeyEntry(PKCS12KeyStore.java:631) ... 
33 more Caused by: java.security.NoSuchAlgorithmException: unrecognized algorithm name: PBEWithSHA1AndDESede at sun.security.x509.AlgorithmId.get(AlgorithmId.java:448) at sun.security.pkcs12.PKCS12KeyStore.mapPBEAlgorithmToOID(PKCS12KeyStore.java:955) at sun.security.pkcs12.PKCS12KeyStore.encryptPrivateKey(PKCS12KeyStore.java:912){quote} The unit tests use {{KeyStore.getInstance()}} without specifying the provider, causing a mismatch between the KeyStore instances created using {{KeyStoreUtils.getInstance()}}, which determines the provider based on internal configuration. -- This message was sent by Atlassian Jira (v8.3.4#803005)
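The mismatch described above — tests calling {{KeyStore.getInstance()}} while {{KeyStoreUtils}} resolves a provider internally — can be avoided by resolving the provider once and passing it explicitly. The sketch below is hypothetical (it is not NiFi's test code) and uses only standard {{java.security}} APIs:

```java
import java.security.KeyStore;
import java.security.KeyStoreException;
import java.security.Provider;

public class KeyStoreProviderDemo {
    // Create a PKCS12 KeyStore pinned to the same provider the implicit
    // lookup would pick. Keeping creation and reload on one provider avoids
    // "unrecognized algorithm name" failures that can appear when two
    // providers with different PBE algorithm support are mixed.
    static String pinnedProviderName() throws KeyStoreException {
        KeyStore implicit = KeyStore.getInstance("PKCS12");          // JDK picks the provider
        Provider provider = implicit.getProvider();
        KeyStore pinned = KeyStore.getInstance("PKCS12", provider);  // provider fixed explicitly
        return pinned.getProvider().getName();
    }

    public static void main(String[] args) throws Exception {
        System.out.println("PKCS12 provider: " + pinnedProviderName());
    }
}
```

The same idea applies to tests: obtain the KeyStore through the utility under test (or its provider) rather than through a bare {{KeyStore.getInstance()}}.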
[jira] [Commented] (NIFI-8423) Timezone wrong in UI for an 8 node cluster
[ https://issues.apache.org/jira/browse/NIFI-8423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17326501#comment-17326501 ] Joseph Gresock commented on NIFI-8423: -- I suppose a custom processor could affect this, depending on what it's doing. You could try isolating the problem by enabling parts of your flow until the problem hits. Also, can you confirm that all of your nodes are running the same JDK and OS? > Timezone wrong in UI for an 8 node cluster > -- > > Key: NIFI-8423 > URL: https://issues.apache.org/jira/browse/NIFI-8423 > Project: Apache NiFi > Issue Type: Bug > Components: Core UI >Affects Versions: 1.13.2 > Environment: 8 Node NiFi Cluster on CentOS 7 > OpenJDK 1.8.0_282 > Local timezone: Europe/Zurich (CEST or UTC+2h) >Reporter: Josef Zahner >Priority: Critical > Labels: centos, cluster, openjdk, timezone > Attachments: Screenshot 2021-04-20 at 14.24.17.png, Screenshot > 2021-04-20 at 14.34.36.png, Screenshot 2021-04-20 at 15.00.22.png, > image-2021-04-13-15-14-02-162.png, image-2021-04-13-15-14-06-930.png, > image-2021-04-13-15-14-56-690.png, manual_configured_timezone_gui_output.png, > nifi-app_log.png > > > We just upgraded to NiFi 1.13.2 and Java 1.8.0_282 > On our 8 node NiFi 1.13.2 cluster with timezone Europe/Zurich (CEST/CET), we > have the issue that the UI does display the correct timezone (CEST, so UTC > +2h), but in fact the time is displayed as UTC. NTP is enabled and working. > The OS configuration/location is everywhere the same (doesn't matter if > single or cluster NiFi). My tests below are all done at around 15:xx:xx local > time (CEST). > As you can see below, the timezone seems to be correct, but the time itself > within NiFi is 2h behind (so in fact UTC) compared to Windows. In earlier > NiFi/java versions it was enough to multiple times restart the cluster, but > on the newest versions this doesn't help anymore. It shows most of the time > CEST with the wrong time or directly UTC. 
> !image-2021-04-13-15-14-06-930.png! > > The single NiFi instances or the 2 node clusters are always fine. The issue > exists only on our 8 node cluster. > NiFi Single Node Screenshot, which is fine (CEST, so UTC + 2h): > !image-2021-04-13-15-14-02-162.png! > > If we set the -Duser.timezone to "Europe/Zurich" in bootstrap.conf, the UI > shows no summer time, so only GMT+1 instead of GMT+2. As well not what we > want. > {code:java} > java.arg.20=-Duser.timezone="Europe/Zurich"{code} > !image-2021-04-13-15-14-56-690.png! > > What Matt below suggested has been verified, all servers (single nodes as > well as clusters) are reporting the same time/timezone. > [https://community.cloudera.com/t5/Support-Questions/NiFi-clock-is-off-by-one-hour-daylight-savings-problem/td-p/192942] > > So the question remains, where on a NiFi cluster comes the time from the UI > and what could cause it that it is wrong? Sometimes I get UTC, sometimes I'm > getting CEST but the time is anyhow UTC instead of CEST... I really need to > have the correct time in the UI as I don't know what the impact could be on > our dataflows. > > Any help would be really appreciated. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Comment Edited] (NIFI-8423) Timezone wrong in UI for an 8 node cluster
[ https://issues.apache.org/jira/browse/NIFI-8423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17326478#comment-17326478 ] Josef Zahner edited comment on NIFI-8423 at 4/21/21, 12:34 PM: --- Hi Joe, after the screenshot above (where p-li-nifi-05 & p-li-nifi-10 showed the wrong time) I've just restarted p-li-nifi-05. Then every single node showed the same timestamp. Very unpredictable. Today I've restarted all NiFi nodes (not the OS, just the application) multiple times and the result is still shocking, even though I've configured the timezone manually in bootstrap.conf. What I get in the UI on top right: * UTC keyword, but time is in CEST in the UI * CEST keyword, but time is in UTC in the UI * CEST keyword and time is really CEST Additionally the log messages from the nodes directly on the PGs/Processors can show another time than the UI on the top right. What I found is that if I stop all my ListSFTP processors (and any other processor/input which could load/generate flowfiles) and the cluster doesn't use any thread, most of the time the cluster UI on the top right shows the correct timezone and time. So if there is no load on the cluster, it's very likely that the UI time & timezone are correct. If everything is up and running, it's nearly impossible to get a correct timezone after restarting all NiFi nodes at the same time. To sum up, there is clearly a huge bug which leads to this behavior, and in our case it seems to be load dependent. I have screenshots for all the cases above. And in my eyes it's not possible that just one node or the OS is causing the issue, as the location of the issue looks random to me. Question: I'm always restarting everything at the same time. What is the best practice to restart a cluster? Node by node or with a big bang? EDIT: we are using multiple custom processors. Could one of the custom processors lead to this bug? 
was (Author: jzahner): Hi Joe, after the screenshot above (where p-li-nifi-05 & p-li-nifi-10 showed the wrong time) I've just restarted p-li-nifi-05. Then every single node showed the same timestamp. Very unpredictable. Today I've restarted all NiFi nodes (not the OS, just the application) multiple times and the result is still shocking, even though I've configured the timezone manually in bootstrap.conf. What I get in the UI on top right: * UTC keyword, but time is in CEST in the UI * CEST keyword, but time is in UTC in the UI * CEST keyword and time is really CEST Additionally the logmessages from the nodes directly on the PGs/Processors can show another time than the UI on the top right. What I found is that if I'm stopping all my ListSFTP (and any other processor/input which could load/generate flowfiles) and the cluster doesn't use any thread, most of the time the cluster UI on the top right is showing the correct timezone and time. So if there is no load on the cluster, it's very likely the the UI time & timezone is correct. If everything is up and running it's nearly impossible after a restart of all NiFi at the same time, to get a correct timezone. To sum up, there is clearly huge bug which leads to this behavior and in our case it seems to be load dependent. I have screenshots for all the cases above. And it's not possible in my eyes that just one node or the OS is causing the issue as it looks random to me where the issue is. Question, I'm restarting always everything at the same time. What is best practices to restart a cluster? Node by node or with a big bang? 
> Timezone wrong in UI for an 8 node cluster > -- > > Key: NIFI-8423 > URL: https://issues.apache.org/jira/browse/NIFI-8423 > Project: Apache NiFi > Issue Type: Bug > Components: Core UI >Affects Versions: 1.13.2 > Environment: 8 Node NiFi Cluster on CentOS 7 > OpenJDK 1.8.0_282 > Local timezone: Europe/Zurich (CEST or UTC+2h) >Reporter: Josef Zahner >Priority: Critical > Labels: centos, cluster, openjdk, timezone > Attachments: Screenshot 2021-04-20 at 14.24.17.png, Screenshot > 2021-04-20 at 14.34.36.png, Screenshot 2021-04-20 at 15.00.22.png, > image-2021-04-13-15-14-02-162.png, image-2021-04-13-15-14-06-930.png, > image-2021-04-13-15-14-56-690.png, manual_configured_timezone_gui_output.png, > nifi-app_log.png > > > We just upgraded to NiFi 1.13.2 and Java 1.8.0_282 > On our 8 node NiFi 1.13.2 cluster with timezone Europe/Zurich (CEST/CET), we > have the issue that the UI does display the correct timezone (CEST, so UTC > +2h), but in fact the time is displayed as UTC. NTP is enabled and working. > The OS configuration/location is everywhere the same (doesn't matter if > single or cluster NiFi). My tests below are all done at around 15:xx:xx local > time (CEST). > As you can see below, the
[jira] [Commented] (NIFI-8423) Timezone wrong in UI for an 8 node cluster
[ https://issues.apache.org/jira/browse/NIFI-8423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17326478#comment-17326478 ] Josef Zahner commented on NIFI-8423: Hi Joe, after the screenshot above (where p-li-nifi-05 & p-li-nifi-10 showed the wrong time I've just restarted p-li-nifi-05. Then every single node showed the same timestamp. Very unpredictable. Today I've restarted all NiFi nodes (not the OS, just the application) multiple times and the result is still shocking, even though I've configured the timezone manually in bootstrap.conf. What I get in the UI on top right: * UTC keyword, but time is in CEST in the UI * CEST keyword, but time is in UTC in the UI * CEST keyword and time is really CEST Additionally the logmessages from the nodes directly on the PGs/Processors can show another time than the UI on the top right. What I found is that if I'm stopping all my ListSFTP (and any other processor/input which could load/generate flowfiles) and the cluster doesn't use any thread, most of the time the cluster UI on the top right is showing the correct timezone and time. So if there is no load on the cluster, it's very likely the the UI time & timezone is correct. If everything is up and running it's nearly impossible after a restart of all NiFi at the same time, to get a correct timezone. To sum up, there is clearly huge bug which leads to this behavior and in our case it seems to be load dependent. I have screenshots for all the cases above. And it's not possible in my eyes that just one node or the OS is causing the issue as it looks random to me where the issue is. Question, I'm restarting always everything at the same time. What is best practices to restart a cluster? Node by node or with a big bang? 
> Timezone wrong in UI for an 8 node cluster > -- > > Key: NIFI-8423 > URL: https://issues.apache.org/jira/browse/NIFI-8423 > Project: Apache NiFi > Issue Type: Bug > Components: Core UI >Affects Versions: 1.13.2 > Environment: 8 Node NiFi Cluster on CentOS 7 > OpenJDK 1.8.0_282 > Local timezone: Europe/Zurich (CEST or UTC+2h) >Reporter: Josef Zahner >Priority: Critical > Labels: centos, cluster, openjdk, timezone > Attachments: Screenshot 2021-04-20 at 14.24.17.png, Screenshot > 2021-04-20 at 14.34.36.png, Screenshot 2021-04-20 at 15.00.22.png, > image-2021-04-13-15-14-02-162.png, image-2021-04-13-15-14-06-930.png, > image-2021-04-13-15-14-56-690.png, manual_configured_timezone_gui_output.png, > nifi-app_log.png > > > We just upgraded to NiFi 1.13.2 and Java 1.8.0_282 > On our 8 node NiFi 1.13.2 cluster with timezone Europe/Zurich (CEST/CET), we > have the issue that the UI does display the correct timezone (CEST, so UTC > +2h), but in fact the time is displayed as UTC. NTP is enabled and working. > The OS configuration/location is everywhere the same (doesn't matter if > single or cluster NiFi). My tests below are all done at around 15:xx:xx local > time (CEST). > As you can see below, the timezone seems to be correct, but the time itself > within NiFi is 2h behind (so in fact UTC) compared to Windows. In earlier > NiFi/java versions it was enough to multiple times restart the cluster, but > on the newest versions this doesn't help anymore. It shows most of the time > CEST with the wrong time or directly UTC. > !image-2021-04-13-15-14-06-930.png! > > The single NiFi instances or the 2 node clusters are always fine. The issue > exists only on our 8 node cluster. > NiFi Single Node Screenshot, which is fine (CEST, so UTC + 2h): > !image-2021-04-13-15-14-02-162.png! > > If we set the -Duser.timezone to "Europe/Zurich" in bootstrap.conf, the UI > shows no summer time, so only GMT+1 instead of GMT+2. As well not what we > want. 
> {code:java} > java.arg.20=-Duser.timezone="Europe/Zurich"{code} > !image-2021-04-13-15-14-56-690.png! > > What Matt below suggested has been verified, all servers (single nodes as > well as clusters) are reporting the same time/timezone. > [https://community.cloudera.com/t5/Support-Questions/NiFi-clock-is-off-by-one-hour-daylight-savings-problem/td-p/192942] > > So the question remains, where on a NiFi cluster comes the time from the UI > and what could cause it that it is wrong? Sometimes I get UTC, sometimes I'm > getting CEST but the time is anyhow UTC instead of CEST... I really need to > have the correct time in the UI as I don't know what the impact could be on > our dataflows. > > Any help would be really appreciated. -- This message was sent by Atlassian Jira (v8.3.4#803005)
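When chasing per-node symptoms like the ones above, a small JVM-side check run on every node (with the same bootstrap.conf arguments) makes the effective default zone visible. This is a hypothetical diagnostic, not part of NiFi; note that an unparseable {{user.timezone}} value (for example, one that still carries literal quote characters) is silently replaced by a fallback zone such as GMT:

```java
import java.time.ZonedDateTime;
import java.util.TimeZone;

public class TimeZoneCheck {
    // Report the zone the JVM actually resolved, alongside the raw
    // user.timezone property and the current UTC offset. Comparing this
    // output across cluster nodes shows whether one JVM fell back to GMT/UTC.
    static String report() {
        ZonedDateTime now = ZonedDateTime.now();
        return "user.timezone=" + System.getProperty("user.timezone")
                + " default=" + TimeZone.getDefault().getID()
                + " offset=" + now.getOffset();
    }

    public static void main(String[] args) {
        System.out.println(report());
    }
}
```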
[jira] [Comment Edited] (NIFI-8423) Timezone wrong in UI for an 8 node cluster
[ https://issues.apache.org/jira/browse/NIFI-8423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17326478#comment-17326478 ] Josef Zahner edited comment on NIFI-8423 at 4/21/21, 12:19 PM: --- Hi Joe, after the screenshot above (where p-li-nifi-05 & p-li-nifi-10 showed the wrong time) I've just restarted p-li-nifi-05. Then every single node showed the same timestamp. Very unpredictable. Today I've restarted all NiFi nodes (not the OS, just the application) multiple times and the result is still shocking, even though I've configured the timezone manually in bootstrap.conf. What I get in the UI on top right: * UTC keyword, but time is in CEST in the UI * CEST keyword, but time is in UTC in the UI * CEST keyword and time is really CEST Additionally the logmessages from the nodes directly on the PGs/Processors can show another time than the UI on the top right. What I found is that if I'm stopping all my ListSFTP (and any other processor/input which could load/generate flowfiles) and the cluster doesn't use any thread, most of the time the cluster UI on the top right is showing the correct timezone and time. So if there is no load on the cluster, it's very likely the the UI time & timezone is correct. If everything is up and running it's nearly impossible after a restart of all NiFi at the same time, to get a correct timezone. To sum up, there is clearly huge bug which leads to this behavior and in our case it seems to be load dependent. I have screenshots for all the cases above. And it's not possible in my eyes that just one node or the OS is causing the issue as it looks random to me where the issue is. Question, I'm restarting always everything at the same time. What is best practices to restart a cluster? Node by node or with a big bang? was (Author: jzahner): Hi Joe, after the screenshot above (where p-li-nifi-05 & p-li-nifi-10 showed the wrong time I've just restarted p-li-nifi-05. Then every single node showed the same timestamp. 
Very unpredictable. Today I've restarted all NiFi nodes (not the OS, just the application) multiple times and the result is still shocking, even though I've configured the timezone manually in bootstrap.conf. What I get in the UI on top right: * UTC keyword, but time is in CEST in the UI * CEST keyword, but time is in UTC in the UI * CEST keyword and time is really CEST Additionally the logmessages from the nodes directly on the PGs/Processors can show another time than the UI on the top right. What I found is that if I'm stopping all my ListSFTP (and any other processor/input which could load/generate flowfiles) and the cluster doesn't use any thread, most of the time the cluster UI on the top right is showing the correct timezone and time. So if there is no load on the cluster, it's very likely the the UI time & timezone is correct. If everything is up and running it's nearly impossible after a restart of all NiFi at the same time, to get a correct timezone. To sum up, there is clearly huge bug which leads to this behavior and in our case it seems to be load dependent. I have screenshots for all the cases above. And it's not possible in my eyes that just one node or the OS is causing the issue as it looks random to me where the issue is. Question, I'm restarting always everything at the same time. What is best practices to restart a cluster? Node by node or with a big bang? 
> Timezone wrong in UI for an 8 node cluster > -- > > Key: NIFI-8423 > URL: https://issues.apache.org/jira/browse/NIFI-8423 > Project: Apache NiFi > Issue Type: Bug > Components: Core UI >Affects Versions: 1.13.2 > Environment: 8 Node NiFi Cluster on CentOS 7 > OpenJDK 1.8.0_282 > Local timezone: Europe/Zurich (CEST or UTC+2h) >Reporter: Josef Zahner >Priority: Critical > Labels: centos, cluster, openjdk, timezone > Attachments: Screenshot 2021-04-20 at 14.24.17.png, Screenshot > 2021-04-20 at 14.34.36.png, Screenshot 2021-04-20 at 15.00.22.png, > image-2021-04-13-15-14-02-162.png, image-2021-04-13-15-14-06-930.png, > image-2021-04-13-15-14-56-690.png, manual_configured_timezone_gui_output.png, > nifi-app_log.png > > > We just upgraded to NiFi 1.13.2 and Java 1.8.0_282 > On our 8 node NiFi 1.13.2 cluster with timezone Europe/Zurich (CEST/CET), we > have the issue that the UI does display the correct timezone (CEST, so UTC > +2h), but in fact the time is displayed as UTC. NTP is enabled and working. > The OS configuration/location is everywhere the same (doesn't matter if > single or cluster NiFi). My tests below are all done at around 15:xx:xx local > time (CEST). > As you can see below, the timezone seems to be correct, but the time itself > within NiFi is 2h behind (so in fact UTC) compared to
[jira] [Commented] (NIFI-8423) Timezone wrong in UI for an 8 node cluster
[ https://issues.apache.org/jira/browse/NIFI-8423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17326433#comment-17326433 ] Joseph Gresock commented on NIFI-8423: -- Hi Josef, just checking in. I haven't been able to find anything in the code that would explain this behavior, but I wanted to see if you were still having the issues after the latest node restart. > Timezone wrong in UI for an 8 node cluster > -- > > Key: NIFI-8423 > URL: https://issues.apache.org/jira/browse/NIFI-8423 > Project: Apache NiFi > Issue Type: Bug > Components: Core UI >Affects Versions: 1.13.2 > Environment: 8 Node NiFi Cluster on CentOS 7 > OpenJDK 1.8.0_282 > Local timezone: Europe/Zurich (CEST or UTC+2h) >Reporter: Josef Zahner >Priority: Critical > Labels: centos, cluster, openjdk, timezone > Attachments: Screenshot 2021-04-20 at 14.24.17.png, Screenshot > 2021-04-20 at 14.34.36.png, Screenshot 2021-04-20 at 15.00.22.png, > image-2021-04-13-15-14-02-162.png, image-2021-04-13-15-14-06-930.png, > image-2021-04-13-15-14-56-690.png, manual_configured_timezone_gui_output.png, > nifi-app_log.png > > > We just upgraded to NiFi 1.13.2 and Java 1.8.0_282 > On our 8 node NiFi 1.13.2 cluster with timezone Europe/Zurich (CEST/CET), we > have the issue that the UI does display the correct timezone (CEST, so UTC > +2h), but in fact the time is displayed as UTC. NTP is enabled and working. > The OS configuration/location is everywhere the same (doesn't matter if > single or cluster NiFi). My tests below are all done at around 15:xx:xx local > time (CEST). > As you can see below, the timezone seems to be correct, but the time itself > within NiFi is 2h behind (so in fact UTC) compared to Windows. In earlier > NiFi/java versions it was enough to multiple times restart the cluster, but > on the newest versions this doesn't help anymore. It shows most of the time > CEST with the wrong time or directly UTC. > !image-2021-04-13-15-14-06-930.png! 
> > The single NiFi instances or the 2 node clusters are always fine. The issue > exists only on our 8 node cluster. > NiFi Single Node Screenshot, which is fine (CEST, so UTC + 2h): > !image-2021-04-13-15-14-02-162.png! > > If we set the -Duser.timezone to "Europe/Zurich" in bootstrap.conf, the UI > shows no summer time, so only GMT+1 instead of GMT+2. As well not what we > want. > {code:java} > java.arg.20=-Duser.timezone="Europe/Zurich"{code} > !image-2021-04-13-15-14-56-690.png! > > What Matt below suggested has been verified, all servers (single nodes as > well as clusters) are reporting the same time/timezone. > [https://community.cloudera.com/t5/Support-Questions/NiFi-clock-is-off-by-one-hour-daylight-savings-problem/td-p/192942] > > So the question remains, where on a NiFi cluster comes the time from the UI > and what could cause it that it is wrong? Sometimes I get UTC, sometimes I'm > getting CEST but the time is anyhow UTC instead of CEST... I really need to > have the correct time in the UI as I don't know what the impact could be on > our dataflows. > > Any help would be really appreciated. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [nifi-minifi-cpp] adamdebreceni opened a new pull request #1057: MINIFICPP-1537 - Log heartbeats on demand
adamdebreceni opened a new pull request #1057: URL: https://github.com/apache/nifi-minifi-cpp/pull/1057 Thank you for submitting a contribution to Apache NiFi - MiNiFi C++. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken: ### For all changes: - [ ] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? - [ ] Does your PR title start with MINIFICPP-XXXX where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character. - [ ] Has your PR been rebased against the latest commit within the target branch (typically main)? - [ ] Is your initial contribution a single, squashed commit? ### For code changes: - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the LICENSE file? - [ ] If applicable, have you updated the NOTICE file? ### For documentation related changes: - [ ] Have you ensured that format looks appropriate for the output in which it is rendered? ### Note: Please ensure that once the PR is submitted, you check GitHub Actions CI results for build issues and submit an update to your PR as soon as possible. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Created] (MINIFICPP-1547) Change default c2 protocol to RESTSender
Adam Debreceni created MINIFICPP-1547: - Summary: Change default c2 protocol to RESTSender Key: MINIFICPP-1547 URL: https://issues.apache.org/jira/browse/MINIFICPP-1547 Project: Apache NiFi MiNiFi C++ Issue Type: Story Reporter: Adam Debreceni Assignee: Adam Debreceni The current default c2 protocol is CoapProtocol, change it to RESTSender. -- This message was sent by Atlassian Jira (v8.3.4#803005)
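For reference, the C2 protocol implementation is selected in minifi.properties via the agent protocol class. The property names below follow the MiNiFi C++ C2 documentation and should be double-checked against the version in use:

```
# Enable C2 and select the protocol implementation; RESTSender would
# replace the previous CoapProtocol default described in this ticket.
nifi.c2.enable=true
nifi.c2.agent.protocol.class=RESTSender
```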
[jira] [Resolved] (MINIFICPP-1543) MacOS builds are broken in CI
[ https://issues.apache.org/jira/browse/MINIFICPP-1543?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ferenc Gerlits resolved MINIFICPP-1543. --- Resolution: Fixed > MacOS builds are broken in CI > - > > Key: MINIFICPP-1543 > URL: https://issues.apache.org/jira/browse/MINIFICPP-1543 > Project: Apache NiFi MiNiFi C++ > Issue Type: Improvement >Reporter: Ferenc Gerlits >Assignee: Ferenc Gerlits >Priority: Minor > Time Spent: 20m > Remaining Estimate: 0h > > MacOS builds fail with > {noformat} > brew install ossp-uuid boost flex openssl python lua@5.3 xz libssh2 ccache > sqliteodbc > [...] > ==> Downloading > https://homebrew.bintray.com/bottles/ossp-uuid-1.6.2_2.catalina.bottle.tar.gz > curl: (22) The requested URL returned error: 403 Forbidden > Error: Failed to download resource "ossp-uuid" > Download failed: > https://homebrew.bintray.com/bottles/ossp-uuid-1.6.2_2.catalina.bottle.tar.gz > Error: Process completed with exit code 1. {noformat} > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (MINIFICPP-1539) PublishKafka incorrectly says that Message Key Field is set
[ https://issues.apache.org/jira/browse/MINIFICPP-1539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ferenc Gerlits resolved MINIFICPP-1539. --- Resolution: Fixed > PublishKafka incorrectly says that Message Key Field is set > --- > > Key: MINIFICPP-1539 > URL: https://issues.apache.org/jira/browse/MINIFICPP-1539 > Project: Apache NiFi MiNiFi C++ > Issue Type: Bug >Reporter: Ferenc Gerlits >Assignee: Ferenc Gerlits >Priority: Trivial > Time Spent: 20m > Remaining Estimate: 0h > > {{PublishKafka::onSchedule()}} always logs "The Message Key Field property is > set. This property is DEPRECATED and has no effect; please use Kafka Key > instead." because {{getProperty()}} always returns true on properties added > by {{setSupportedProperties()}}. We need to check whether it is non-empty. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [nifi-minifi-cpp] fgerlits commented on a change in pull request #1040: MINIFICPP-1329- Fix implementation and usages of string to bool
fgerlits commented on a change in pull request #1040: URL: https://github.com/apache/nifi-minifi-cpp/pull/1040#discussion_r617341668

## File path: docker/test/integration/features/hashcontent.feature

@@ -0,0 +1,10 @@
+Feature: Adding hash value for a FlowFile using HashContent
+  In order to calculate a hash value for a FlowFile
+  As a user of MiNiFi
+  I need to have HashContent Processor
+
+Background:
+  Given the content of "/tmp/output" is monitored
+
+Scenario: HashContent adds hash attribute to flowfiles
+  Given a GetFile processor with "Input Directory" property set to "/tmp/input"

Review comment: this file was included in this PR by accident, I think

## File path: libminifi/include/utils/StringUtils.h

@@ -75,13 +75,10 @@ struct string_traits{
 class StringUtils {
  public:
   /**
-   * Converts a string to a boolean
-   * Better handles mixed case.
+   * Checks and converts a string to a boolean
    * @param input input string
-   * @param output output string.
+   * @returns an optional of a boolean: true if the string is "true" (ignoring case), false if it is "false" (ignoring case), nullopt for any other value
    */
-  static bool StringToBool(std::string input, bool );

Review comment: This is good, but `StringToBool()` is still there in the cpp file, and it is also called from one place. Type `git grep StringToBool` to find all usages.
[jira] [Commented] (NIFI-8339) Input Threads get Interrupted and stuck indefinitely
[ https://issues.apache.org/jira/browse/NIFI-8339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17326357#comment-17326357 ] Rene Weidlinger commented on NIFI-8339: --- Yes you can go ahead and close it as duplicate :) > Input Threads get Interrupted and stuck indefinitely > > > Key: NIFI-8339 > URL: https://issues.apache.org/jira/browse/NIFI-8339 > Project: Apache NiFi > Issue Type: Bug >Affects Versions: 1.13.0 >Reporter: Rene Weidlinger >Priority: Major > Attachments: firefox_Yf6NUeQe5X.png, nifi-app.log, nifi.properties, > td1.txt > > > After some seconds we see this stack trace in nifi on one of our inputs: > {noformat} > 2021-03-18 07:33:34,703 ERROR [NiFi Web Server-18] > o.a.nifi.web.api.ApplicationResource Unexpected exception occurred. > portId=c4d93fb6-5e5b-1382-b39b-66fbc04660f0 > 2021-03-18 07:33:34,703 ERROR [NiFi Web Server-18] > o.a.nifi.web.api.ApplicationResource Exception detail: > org.apache.nifi.processor.exception.ProcessException: > org.apache.nifi.processor.exception.ProcessException: Interrupted while > waiting for site-to-site request to be serviced > at > org.apache.nifi.remote.StandardPublicPort.receiveFlowFiles(StandardPublicPort.java:588) > at > org.apache.nifi.web.api.DataTransferResource.receiveFlowFiles(DataTransferResource.java:277) > at sun.reflect.GeneratedMethodAccessor198.invoke(Unknown Source) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.glassfish.jersey.server.model.internal.ResourceMethodInvocationHandlerFactory.lambda$static$0(ResourceMethodInvocationHandlerFactory.java:76) > at > org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher$1.run(AbstractJavaResourceMethodDispatcher.java:148) > at > org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.invoke(AbstractJavaResourceMethodDispatcher.java:191) > at > 
org.glassfish.jersey.server.model.internal.JavaResourceMethodDispatcherProvider$ResponseOutInvoker.doDispatch(JavaResourceMethodDispatcherProvider.java:200) > at > org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.dispatch(AbstractJavaResourceMethodDispatcher.java:103) > at > org.glassfish.jersey.server.model.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:493) > at > org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:415) > at > org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:104) > at > org.glassfish.jersey.server.ServerRuntime$1.run(ServerRuntime.java:277) > at org.glassfish.jersey.internal.Errors$1.call(Errors.java:272) > at org.glassfish.jersey.internal.Errors$1.call(Errors.java:268) > at org.glassfish.jersey.internal.Errors.process(Errors.java:316) > at org.glassfish.jersey.internal.Errors.process(Errors.java:298) > at org.glassfish.jersey.internal.Errors.process(Errors.java:268) > at > org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:289) > at > org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:256) > at > org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:703) > at > org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:416) > at > org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:370) > at > org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:389) > at > org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:342) > at > org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:229) > at > org.eclipse.jetty.servlet.ServletHolder$NotAsync.service(ServletHolder.java:1452) > at > org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:791) > at > org.eclipse.jetty.servlet.ServletHandler$ChainEnd.doFilter(ServletHandler.java:1626) > at > 
org.apache.nifi.web.filter.RequestLogger.doFilter(RequestLogger.java:66) > at > org.eclipse.jetty.servlet.FilterHolder.doFilter(FilterHolder.java:193) > at > org.eclipse.jetty.servlet.ServletHandler$Chain.doFilter(ServletHandler.java:1601) > at > org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:317) > at > org.springframework.security.web.access.intercept.FilterSecurityInterceptor.invoke(FilterSecurityInterceptor.java:127) > at >
[jira] [Commented] (NIFI-8435) PutKudu 1.13.2 Memory Leak
[ https://issues.apache.org/jira/browse/NIFI-8435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17326351#comment-17326351 ] Josef Zahner commented on NIFI-8435: [~exceptionfactory] yes, I have a list. The heap dump I'm referring to has about *1000* "kudu-nio-xxx" threads running, and a total size of about 15 GB. I also have a heap dump created shortly after a fresh start of NiFi; it shows only about 120 "kudu-nio-xxx" threads and is only 2.5 GB in size. So you can see that the number of threads grows over time.

> PutKudu 1.13.2 Memory Leak
> --
>
> Key: NIFI-8435
> URL: https://issues.apache.org/jira/browse/NIFI-8435
> Project: Apache NiFi
> Issue Type: Bug
> Components: Extensions
> Affects Versions: 1.13.2
> Environment: NiFi 1.13.2, 8-Node Cluster running on CentOS 7, Kudu 1.10.0
> Reporter: Josef Zahner
> Assignee: Peter Gyori
> Priority: Critical
> Labels: kudu, nifi, oom
> Attachments: Screenshot 2021-04-20 at 14.27.11.png, grafana_heap_overview.png, kudu_inserts_per_sec.png, putkudu_processor_config.png, visualvm_bytes_detail_view.png, visualvm_total_bytes_used.png
>
> We just upgraded from NiFi 1.11.4 to 1.13.2 and faced a huge issue with PutKudu.
> PutKudu on 1.13.2 eats up all the heap memory, and garbage collection can't free it anymore. We allow Java to use 31 GB of memory, and as you can see, with NiFi 1.11.4 it is used as it should be, with GC. However, with NiFi 1.13.2 under our actual load it fills up the memory relatively fast. Manual GC via the visualvm tool didn't help at all to free up memory.
> !grafana_heap_overview.png!
>
> Visual VM shows the following culprit: !visualvm_total_bytes_used.png! !visualvm_bytes_detail_view.png!
> The bytes array shows millions of char entries which aren't cleaned up, in fact 14.9 GB of memory here (the heap dump was taken after a while at full load). If we check the same on NiFi 1.11.4, the bytes array is nearly empty, around a few hundred MBs.
> As you can imagine, we can't upload the heap dump, as we currently have only productive data on the system. But don't hesitate to ask questions about the heap dump if you need more information.
> I haven't made any screenshots of the processor config, but I can do that if you wish (we are back on NiFi 1.11.4 at the moment).
[jira] [Updated] (MINIFICPP-1479) Rework integration tests for HashContent
[ https://issues.apache.org/jira/browse/MINIFICPP-1479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Hunyadi updated MINIFICPP-1479:

Description:

*Background:* HashContent is expected to add the hash value of the content of a flowfile to a flowfile attribute. Currently, neither the unit tests nor the docker-based integration tests check for the presence of such an attribute: the unit tests seem to test for logging, and the docker-based integration test only tests for the flowfile content being unmodified. The actual expected behaviour of appending attributes currently seems untested.

*Proposal:* We should create proper integration tests for HashContent:

{code:python|title=Example feature definition}
Scenario Outline: HashContent adds hash attribute to flowfiles
  Given a GetFile processor with the "Input Directory" property set to "/tmp/input"
  And a file with the content <content> is present in "/tmp/input"
  And a HashContent processor with the "Hash Attribute" property set to "hash"
  And the "Hash Algorithm" of the HashContent processor is set to "<hash_algorithm>"
  And a PutFile processor with the "Directory" property set to "/tmp/output"
  And the "success" relationship of the GetFile processor is connected to the HashContent
  And the "success" relationship of the HashContent processor is connected to the PutFile
  When the MiNiFi instance starts up
  Then a flowfile with the content <content> is placed in the monitored directory in less than 10 seconds
  And the flowfile has an attribute called "hash" set to <hash_value>

  Examples:
  | content  | hash_algorithm | hash_value                                                       |
  | "test"   | MD5            | 098f6bcd4621d373cade4e832627b4f6                                 |
  | "test"   | SHA1           | a94a8fe5ccb19ba61c4c0873d391e987982fbbd3                         |
  | "test"   | SHA256         | 9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08 |
  | "coffee" | MD5            | 24eb05d18318ac2db8b2b959315d10f2                                 |
  | "coffee" | SHA1           | 44213f9f4d59b557314fadcd233232eebcac8012                         |
  | "coffee" | SHA256         | 37290d74ac4d186e3a8e5785d259d2ec04fac91ae28092e7620ec8bc99e830aa |
{code}

Details: 1. In HashContentTest.cpp, the HashAlgorithm property was being checked at a point where FailOnEmpty was supposed to be checked, giving incorrect results for the test.

Plan: 1. Read about the Behavior Driven Development framework and the Python-backed behave test framework.

was:

*Background:* HashContent is expected to add the hash value of the content of a flowfile to a flowfile attribute. Currently, neither the unit tests nor the docker-based integration tests check for the presence of such an attribute: the unit tests seem to test for logging, and the docker-based integration test only tests for the flowfile content being unmodified. The actual expected behaviour of appending attributes currently seems untested.

*Proposal:* We should create proper integration tests for HashContent:

{code:python|title=Example feature definition}
Scenario: HashContent adds hash attribute to flowfiles
  Given a GetFile processor with the "Input Directory" property set to "/tmp/input"
  And a file with the content <content> is present in "/tmp/input"
  And a HashContent processor with the "Hash Attribute" property set to "hash"
  And the "Hash Algorithm" of the HashContent processor is set to "<hash_algorithm>"
  And a PutFile processor with the "Directory" property set to "/tmp/output"
  And the "success" relationship of the GetFile processor is connected to the HashContent
  And the "success" relationship of the HashContent processor is connected to the PutFile
  When the MiNiFi instance starts up
  Then a flowfile with the content <content> is placed in the monitored directory in less than 10 seconds
  And the flowfile has an attribute called "hash" set to <hash_value>

  Examples:
  | content  | hash_algorithm | hash_value                                                       |
  | "test"   | MD5            | 098f6bcd4621d373cade4e832627b4f6                                 |
  | "test"   | SHA1           | a94a8fe5ccb19ba61c4c0873d391e987982fbbd3                         |
  | "test"   | SHA256         | 9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08 |
  | "coffee" | MD5            | 24eb05d18318ac2db8b2b959315d10f2                                 |
  | "coffee" | SHA1           | 44213f9f4d59b557314fadcd233232eebcac8012                         |
  | "coffee" | SHA256         | 37290d74ac4d186e3a8e5785d259d2ec04fac91ae28092e7620ec8bc99e830aa |
{code}

Details: 1. In HashContentTest.cpp, the HashAlgorithm property was being checked at a point where FailOnEmpty was supposed to be checked, giving incorrect results for the test.

Plan: 1. Read about the Behavior Driven Development framework and the Python-backed behave test framework.

> Rework integration
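The digests in the Examples table can be cross-checked outside MiNiFi; a quick sketch with Python's standard hashlib (the "test" rows shown; the "coffee" rows can be verified the same way):

```python
import hashlib

# Expected digests for the content "test", taken from the Examples table above
expected = {
    "md5":    "098f6bcd4621d373cade4e832627b4f6",
    "sha1":   "a94a8fe5ccb19ba61c4c0873d391e987982fbbd3",
    "sha256": "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

for algorithm, digest in expected.items():
    # hashlib.new() accepts the algorithm name as a string
    computed = hashlib.new(algorithm, b"test").hexdigest()
    assert computed == digest, (algorithm, computed)
```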
[jira] [Commented] (NIFI-7516) Predictions model throws intermittent SingularMatrixExceptions
[ https://issues.apache.org/jira/browse/NIFI-7516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17326341#comment-17326341 ] Denes Arvay commented on NIFI-7516: --- [~mattyb149], I bumped into this issue recently and it seems it's been merged but the ticket hasn't been closed. Could you please mark this as resolved? Thanks. > Predictions model throws intermittent SingularMatrixExceptions > -- > > Key: NIFI-7516 > URL: https://issues.apache.org/jira/browse/NIFI-7516 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Reporter: Matt Burgess >Assignee: Matt Burgess >Priority: Major > Time Spent: 0.5h > Remaining Estimate: 0h > > Under some circumstances, the Connection Status Analytics model (specifically > the Ordinary Least Squares model) throws a SingularMatrix exception: > org.apache.commons.math3.linear.SingularMatrixException: matrix is singular > This can happen (usually intermittently) when the data points used to update > the model form a matrix that has no inverse (i.e. singular). -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [nifi-minifi-cpp] fgerlits closed pull request #1056: MINIFICPP-1546 Added documentation for enabling OPC functionality at build time
fgerlits closed pull request #1056: URL: https://github.com/apache/nifi-minifi-cpp/pull/1056
[jira] [Updated] (MINIFICPP-1479) Rework integration tests for HashContent
[ https://issues.apache.org/jira/browse/MINIFICPP-1479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Amina Dinari updated MINIFICPP-1479:

Description:

*Background:* HashContent is expected to add the hash value of the content of a flowfile to a flowfile attribute. Currently, neither the unit tests nor the docker-based integration tests check for the presence of such an attribute: the unit tests seem to test for logging, and the docker-based integration test only tests for the flowfile content being unmodified. The actual expected behaviour of appending attributes currently seems untested.

*Proposal:* We should create proper integration tests for HashContent:

{code:python|title=Example feature definition}
Scenario: HashContent adds hash attribute to flowfiles
  Given a GetFile processor with the "Input Directory" property set to "/tmp/input"
  And a file with the content <content> is present in "/tmp/input"
  And a HashContent processor with the "Hash Attribute" property set to "hash"
  And the "Hash Algorithm" of the HashContent processor is set to "<hash_algorithm>"
  And a PutFile processor with the "Directory" property set to "/tmp/output"
  And the "success" relationship of the GetFile processor is connected to the HashContent
  And the "success" relationship of the HashContent processor is connected to the PutFile
  When the MiNiFi instance starts up
  Then a flowfile with the content <content> is placed in the monitored directory in less than 10 seconds
  And the flowfile has an attribute called "hash" set to <hash_value>

  Examples:
  | content  | hash_algorithm | hash_value                                                       |
  | "test"   | MD5            | 098f6bcd4621d373cade4e832627b4f6                                 |
  | "test"   | SHA1           | a94a8fe5ccb19ba61c4c0873d391e987982fbbd3                         |
  | "test"   | SHA256         | 9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08 |
  | "coffee" | MD5            | 24eb05d18318ac2db8b2b959315d10f2                                 |
  | "coffee" | SHA1           | 44213f9f4d59b557314fadcd233232eebcac8012                         |
  | "coffee" | SHA256         | 37290d74ac4d186e3a8e5785d259d2ec04fac91ae28092e7620ec8bc99e830aa |
{code}

Details: 1. In HashContentTest.cpp, the HashAlgorithm property was being checked at a point where FailOnEmpty was supposed to be checked, giving incorrect results for the test.

Plan: 1. Read about the Behavior Driven Development framework and the Python-backed behave test framework.

was:

*Background:* HashContent is expected to add the hash value of the content of a flowfile to a flowfile attribute. Currently, neither the unit tests nor the docker-based integration tests check for the presence of such an attribute: the unit tests seem to test for logging, and the docker-based integration test only tests for the flowfile content being unmodified. The actual expected behaviour of appending attributes currently seems untested.

*Proposal:* We should create proper integration tests for HashContent:

{code:python|title=Example feature definition}
Scenario: HashContent adds hash attribute to flowfiles
  Given a GetFile processor with the "Input Directory" property set to "/tmp/input"
  And a file with the content <content> is present in "/tmp/input"
  And a HashContent processor with the "Hash Attribute" property set to "hash"
  And the "Hash Algorithm" of the HashContent processor is set to "<hash_algorithm>"
  And a PutFile processor with the "Directory" property set to "/tmp/output"
  And the "success" relationship of the GetFile processor is connected to the HashContent
  And the "success" relationship of the HashContent processor is connected to the PutFile
  When the MiNiFi instance starts up
  Then a flowfile with the content <content> is placed in the monitored directory in less than 10 seconds
  And the flowfile has an attribute called "hash" set to <hash_value>

  Examples:
  | content  | hash_algorithm | hash_value                                                       |
  | "test"   | MD5            | 098f6bcd4621d373cade4e832627b4f6                                 |
  | "test"   | SHA1           | a94a8fe5ccb19ba61c4c0873d391e987982fbbd3                         |
  | "test"   | SHA256         | 9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08 |
  | "coffee" | MD5            | 24eb05d18318ac2db8b2b959315d10f2                                 |
  | "coffee" | SHA1           | 44213f9f4d59b557314fadcd233232eebcac8012                         |
  | "coffee" | SHA256         | 37290d74ac4d186e3a8e5785d259d2ec04fac91ae28092e7620ec8bc99e830aa |
{code}

Details: 1. In HashContentTest.cpp, the HashAlgorithm property was being checked at a point where FailOnEmpty was supposed to be checked, giving incorrect results for the test.

> Rework integration tests for HashContent > > > Key: MINIFICPP-1479 >
[GitHub] [nifi-minifi-cpp] hunyadi-dev commented on a change in pull request #1053: MINIFICPP-1373 - Add integration tests for ConsumeKafka and test cleanup issues
hunyadi-dev commented on a change in pull request #1053: URL: https://github.com/apache/nifi-minifi-cpp/pull/1053#discussion_r617254864

## File path: docker/test/integration/minifi/validators/FileOutputValidator.py

@@ -1,8 +1,46 @@
+import logging
+import os
+
+from os import listdir
+from os.path import join
+
 from .OutputValidator import OutputValidator

 class FileOutputValidator(OutputValidator):
     def set_output_dir(self, output_dir):
         self.output_dir = output_dir

+    @staticmethod
+    def num_files_matching_content_in_dir(dir_path, expected_content):
+        listing = listdir(dir_path)

Review comment: Corrected.
[GitHub] [nifi-minifi-cpp] hunyadi-dev commented on a change in pull request #1053: MINIFICPP-1373 - Add integration tests for ConsumeKafka and test cleanup issues
hunyadi-dev commented on a change in pull request #1053: URL: https://github.com/apache/nifi-minifi-cpp/pull/1053#discussion_r617254535 ## File path: docker/test/integration/MiNiFi_integration_test_driver.py ## @@ -90,35 +102,56 @@ def acquire_cluster(self, name): return self.clusters.setdefault(name, DockerTestCluster()) def set_up_cluster_network(self): -self.docker_network = SingleNodeDockerCluster.create_docker_network() -for cluster in self.clusters.values(): -cluster.set_network(self.docker_network) +if self.docker_network is None: +logging.info("Setting up new network.") +self.docker_network = SingleNodeDockerCluster.create_docker_network() +for cluster in self.clusters.values(): +cluster.set_network(self.docker_network) +else: +logging.info("Network is already set.") + +def wait_for_cluster_startup_finish(self, cluster): +startup_success = True +logging.info("Engine: %s", cluster.get_engine()) +if cluster.get_engine() == "minifi-cpp": +startup_success = cluster.wait_for_app_logs("Starting Flow Controller", 120) +elif cluster.get_engine() == "nifi": +startup_success = cluster.wait_for_app_logs("Starting Flow Controller...", 120) +elif cluster.get_engine() == "kafka-broker": +startup_success = cluster.wait_for_app_logs("Startup complete.", 120) +elif cluster.get_engine() == "http-proxy": +startup_success = cluster.wait_for_app_logs("Accepting HTTP Socket connections at", 120) +elif cluster.get_engine() == "s3-server": +startup_success = cluster.wait_for_app_logs("Started S3MockApplication", 120) +elif cluster.get_engine() == "azure-storage-server": +startup_success = cluster.wait_for_app_logs("Azurite Queue service is successfully listening at", 120) +if not startup_success: +cluster.log_nifi_output() +return startup_success + +def start_single_cluster(self, cluster_name): +self.set_up_cluster_network() +cluster = self.clusters[cluster_name] +cluster.deploy_flow() +assert self.wait_for_cluster_startup_finish(cluster) +time.sleep(10) Review comment: We do not, 
removed. It was a leftover artifact from throttling the topic test until I was sure there was no transient issue on startup.
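The engine-to-log-message dispatch discussed in this thread (the if/elif chain in `wait_for_cluster_startup_finish`) can be expressed table-driven; a hypothetical sketch, with the messages taken from the diff above and the helper name invented here:

```python
# Hypothetical refactor sketch: engine name -> log line that signals successful
# startup (messages copied from the wait_for_cluster_startup_finish diff)
STARTUP_LOG_MESSAGES = {
    "minifi-cpp": "Starting Flow Controller",
    "nifi": "Starting Flow Controller...",
    "kafka-broker": "Startup complete.",
    "http-proxy": "Accepting HTTP Socket connections at",
    "s3-server": "Started S3MockApplication",
    "azure-storage-server": "Azurite Queue service is successfully listening at",
}

def startup_log_message(engine, default=None):
    """Return the log line to wait for, or `default` for unknown engines."""
    return STARTUP_LOG_MESSAGES.get(engine, default)
```

The caller would then do a single `cluster.wait_for_app_logs(startup_log_message(engine), 120)` instead of one branch per engine.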
[GitHub] [nifi-minifi-cpp] hunyadi-dev commented on a change in pull request #1053: MINIFICPP-1373 - Add integration tests for ConsumeKafka and test cleanup issues
hunyadi-dev commented on a change in pull request #1053: URL: https://github.com/apache/nifi-minifi-cpp/pull/1053#discussion_r617250914

## File path: docker/test/integration/steps/steps.py

@@ -259,9 +368,127 @@ def step_impl(context, content, file_name, path, seconds):
     time.sleep(seconds)
     context.test.add_test_data(path, content, file_name)

+@when("a message with content \"{content}\" is published to the \"{topic_name}\" topic")
+def step_impl(context, content, topic_name):
+    p = Producer({"bootstrap.servers": "localhost:29092", "client.id": socket.gethostname()})
+    def delivery_report(err, msg):
+        if err is not None:
+            logging.info('Message delivery failed: {}'.format(err))
+        else:
+            logging.info('Message delivered to {} [{}]'.format(msg.topic(), msg.partition()))

Review comment: Extracted to a standalone function.
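The standalone function the comment refers to is not shown in the thread; a hedged sketch of what an extracted `delivery_report` callback could look like (it also returns the formatted message, purely so it can be exercised without a broker; that return value is my addition, not the PR's code):

```python
import logging

def delivery_report(err, msg):
    """confluent_kafka delivery callback: log (and return) the delivery outcome."""
    if err is not None:
        message = 'Message delivery failed: {}'.format(err)
    else:
        # msg is a confluent_kafka.Message; topic() and partition() are methods
        message = 'Message delivered to {} [{}]'.format(msg.topic(), msg.partition())
    logging.info(message)
    return message
```

With confluent_kafka it would be wired up as `producer.produce(topic_name, content, on_delivery=delivery_report)`, assuming the `on_delivery` keyword of `Producer.produce`.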
[GitHub] [nifi-minifi-cpp] hunyadi-dev commented on a change in pull request #1053: MINIFICPP-1373 - Add integration tests for ConsumeKafka and test cleanup issues
hunyadi-dev commented on a change in pull request #1053: URL: https://github.com/apache/nifi-minifi-cpp/pull/1053#discussion_r617228251

## File path: docker/test/integration/steps/steps.py

@@ -259,9 +368,127 @@ def step_impl(context, content, file_name, path, seconds):
     time.sleep(seconds)
     context.test.add_test_data(path, content, file_name)

+@when("a message with content \"{content}\" is published to the \"{topic_name}\" topic")
+def step_impl(context, content, topic_name):
+    p = Producer({"bootstrap.servers": "localhost:29092", "client.id": socket.gethostname()})

Review comment: Renamed to `producer` here and in subsequent functions.
[GitHub] [nifi-minifi-cpp] hunyadi-dev commented on a change in pull request #1053: MINIFICPP-1373 - Add integration tests for ConsumeKafka and test cleanup issues
hunyadi-dev commented on a change in pull request #1053: URL: https://github.com/apache/nifi-minifi-cpp/pull/1053#discussion_r617226961

## File path: docker/test/integration/steps/steps.py

@@ -248,9 +330,36 @@ def step_impl(context, cluster_name):
     cluster.set_engine("azure-storage-server")
     cluster.set_flow(None)

+@given("the kafka broker \"{cluster_name}\" is started")
+def step_impl(context, cluster_name):
+    context.test.start_single_cluster(cluster_name)
+
+@given("the topic \"{topic_name}\" is initialized on the kafka broker")
+def step_impl(context, topic_name):
+    a = AdminClient({'bootstrap.servers': "localhost:29092"})
+    new_topics = [NewTopic(topic_name, num_partitions=1, replication_factor=1)]
+    fs = a.create_topics(new_topics)
+    # Block until the topic is created
+    for topic, f in fs.items():

Review comment: Renamed to `admin`, `future` and `futures` accordingly.
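`create_topics` returns a dict of topic name to future, and the loop above blocks on each one. The same wait-for-all pattern, with the clearer names the reviewer asked for, can be sketched with stdlib futures so it runs without a broker (the Kafka call is replaced by a trivial task; the helper name is invented here):

```python
from concurrent.futures import ThreadPoolExecutor

def wait_for_all(futures_by_name):
    """Block until every future resolves; return {name: (ok, result_or_error)}."""
    outcomes = {}
    for name, future in futures_by_name.items():
        try:
            # future.result() blocks, just like the create_topics loop above
            outcomes[name] = (True, future.result())
        except Exception as error:
            outcomes[name] = (False, error)
    return outcomes

with ThreadPoolExecutor() as executor:
    # Stand-in for AdminClient.create_topics(), which returns {topic: future}
    futures = {"my-topic": executor.submit(lambda: "created")}
    outcomes = wait_for_all(futures)
```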
[GitHub] [nifi-minifi-cpp] hunyadi-dev commented on a change in pull request #1053: MINIFICPP-1373 - Add integration tests for ConsumeKafka and test cleanup issues
hunyadi-dev commented on a change in pull request #1053: URL: https://github.com/apache/nifi-minifi-cpp/pull/1053#discussion_r617226321

## File path: docker/test/integration/minifi/validators/FileOutputValidator.py

@@ -1,8 +1,46 @@
+import logging
+import os
+
+from os import listdir
+from os.path import join
+
 from .OutputValidator import OutputValidator

 class FileOutputValidator(OutputValidator):
     def set_output_dir(self, output_dir):
         self.output_dir = output_dir

+    @staticmethod
+    def num_files_matching_content_in_dir(dir_path, expected_content):
+        listing = listdir(dir_path)
+        if listing:
+            files_of_matching_content_found = 0
+            for file_name in listing:
+                full_path = join(dir_path, file_name)
+                if not os.path.isfile(full_path):
+                    continue
+                with open(full_path, 'r') as out_file:
+                    contents = out_file.read()
+                logging.info("dir %s -- name %s", dir_path, file_name)
+                logging.info("expected content: %s -- actual: %s, match: %r", expected_content, contents, expected_content == contents)
+                if expected_content in contents:
+                    files_of_matching_content_found += 1
+            return files_of_matching_content_found
+        return 0
+
+    @staticmethod
+    def get_num_files(dir_path):
+        listing = listdir(dir_path)
+        logging.info("Num files in %s: %d", dir_path, len(listing))
+        if listing:

Review comment: Updated.
[GitHub] [nifi-minifi-cpp] hunyadi-dev commented on a change in pull request #1053: MINIFICPP-1373 - Add integration tests for ConsumeKafka and test cleanup issues
hunyadi-dev commented on a change in pull request #1053: URL: https://github.com/apache/nifi-minifi-cpp/pull/1053#discussion_r617225561

## File path: docker/test/integration/minifi/validators/FileOutputValidator.py

@@ -1,8 +1,46 @@
+import logging
+import os
+
+from os import listdir
+from os.path import join
+
 from .OutputValidator import OutputValidator

 class FileOutputValidator(OutputValidator):
     def set_output_dir(self, output_dir):
         self.output_dir = output_dir

+    @staticmethod
+    def num_files_matching_content_in_dir(dir_path, expected_content):
+        listing = listdir(dir_path)
+        if listing:

Review comment: Good idea, updated.
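With the guard-clause cleanup the reviewer suggested, the validator helper could look roughly like this; a self-contained sketch (simplified, logging omitted), not the PR's exact code:

```python
import os

def num_files_matching_content_in_dir(dir_path, expected_content):
    """Count regular files in dir_path whose content contains expected_content."""
    matches = 0
    # An empty listing simply yields zero iterations, so no `if listing:` nesting
    for file_name in os.listdir(dir_path):
        full_path = os.path.join(dir_path, file_name)
        if not os.path.isfile(full_path):
            continue  # skip subdirectories and other non-regular entries
        with open(full_path, 'r') as out_file:
            if expected_content in out_file.read():
                matches += 1
    return matches
```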
[GitHub] [nifi-minifi-cpp] hunyadi-dev commented on a change in pull request #1053: MINIFICPP-1373 - Add integration tests for ConsumeKafka and test cleanup issues
hunyadi-dev commented on a change in pull request #1053: URL: https://github.com/apache/nifi-minifi-cpp/pull/1053#discussion_r617225082

## File path: docker/test/integration/minifi/core/SingleNodeDockerCluster.py

@@ -234,22 +237,32 @@ def deploy_kafka_broker(self):
             detach=True,
             name='kafka-broker',
             network=self.network.name,
-            ports={'9092/tcp': 9092},
-            environment=["KAFKA_LISTENERS=PLAINTEXT://kafka-broker:9092,SSL://kafka-broker:9093", "KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181"],
+            ports={'9092/tcp': 9092, '29092/tcp': 29092},
+            # environment=["KAFKA_LISTENERS=PLAINTEXT://kafka-broker:9092,SSL://kafka-broker:9093", "KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181"],

Review comment: Removed.
[GitHub] [nifi-minifi-cpp] hunyadi-dev commented on a change in pull request #1053: MINIFICPP-1373 - Add integration tests for ConsumeKafka and test cleanup issues
hunyadi-dev commented on a change in pull request #1053:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1053#discussion_r617223981

## File path: docker/test/integration/MiNiFi_integration_test_driver.py
##
@@ -90,35 +102,56 @@
     def acquire_cluster(self, name):
         return self.clusters.setdefault(name, DockerTestCluster())

     def set_up_cluster_network(self):
-        self.docker_network = SingleNodeDockerCluster.create_docker_network()
-        for cluster in self.clusters.values():
-            cluster.set_network(self.docker_network)
+        if self.docker_network is None:
+            logging.info("Setting up new network.")
+            self.docker_network = SingleNodeDockerCluster.create_docker_network()
+            for cluster in self.clusters.values():
+                cluster.set_network(self.docker_network)
+        else:
+            logging.info("Network is already set.")
+
+    def wait_for_cluster_startup_finish(self, cluster):
+        startup_success = True
+        logging.info("Engine: %s", cluster.get_engine())
+        if cluster.get_engine() == "minifi-cpp":
+            startup_success = cluster.wait_for_app_logs("Starting Flow Controller", 120)
+        elif cluster.get_engine() == "nifi":
+            startup_success = cluster.wait_for_app_logs("Starting Flow Controller...", 120)
+        elif cluster.get_engine() == "kafka-broker":
+            startup_success = cluster.wait_for_app_logs("Startup complete.", 120)
+        elif cluster.get_engine() == "http-proxy":
+            startup_success = cluster.wait_for_app_logs("Accepting HTTP Socket connections at", 120)
+        elif cluster.get_engine() == "s3-server":
+            startup_success = cluster.wait_for_app_logs("Started S3MockApplication", 120)
+        elif cluster.get_engine() == "azure-storage-server":
+            startup_success = cluster.wait_for_app_logs("Azurite Queue service is successfully listening at", 120)
+        if not startup_success:
+            cluster.log_nifi_output()
+        return startup_success
+
+    def start_single_cluster(self, cluster_name):
+        self.set_up_cluster_network()
+        cluster = self.clusters[cluster_name]
+        cluster.deploy_flow()
+        assert self.wait_for_cluster_startup_finish(cluster)
+        time.sleep(10)

     def start(self):
         logging.info("MiNiFi_integration_test start")
         self.set_up_cluster_network()
         for cluster in self.clusters.values():
-            logging.info("Starting cluster %s with an engine of %s", cluster.get_name(), cluster.get_engine())
-            cluster.set_directory_bindings(self.docker_directory_bindings.get_directory_bindings(self.test_id))
-            cluster.deploy_flow()
-        for cluster_name, cluster in self.clusters.items():
-            startup_success = True
-            logging.info("Engine: %s", cluster.get_engine())
-            if cluster.get_engine() == "minifi-cpp":
-                startup_success = cluster.wait_for_app_logs("Starting Flow Controller", 120)
-            elif cluster.get_engine() == "nifi":
-                startup_success = cluster.wait_for_app_logs("Starting Flow Controller...", 120)
-            elif cluster.get_engine() == "kafka-broker":
-                startup_success = cluster.wait_for_app_logs("Startup complete.", 120)
-            elif cluster.get_engine() == "http-proxy":
-                startup_success = cluster.wait_for_app_logs("Accepting HTTP Socket connections at", 120)
-            elif cluster.get_engine() == "s3-server":
-                startup_success = cluster.wait_for_app_logs("Started S3MockApplication", 120)
-            elif cluster.get_engine() == "azure-storage-server":
-                startup_success = cluster.wait_for_app_logs("Azurite Queue service is successfully listening at", 120)
-            if not startup_success:
-                cluster.log_nifi_output()
-            assert startup_success
+            if len(cluster.containers) == 0:
+                logging.info("Starting cluster %s with an engine of %s", cluster.get_name(), cluster.get_engine())
+                cluster.set_directory_bindings(self.docker_directory_bindings.get_directory_bindings(self.test_id))
+                cluster.deploy_flow()
+            else:
+                logging.info("Container %s is already started with an engine of %s", cluster.get_name(), cluster.get_engine())
+        for cluster in self.clusters.values():
+            assert self.wait_for_cluster_startup_finish(cluster)
+        # Seems like some extra time needed for consumers to negotiate with the broker
+        for cluster in self.clusters.values():
+            if cluster.get_engine() == "kafka-broker":
+                time.sleep(10)

Review comment: We already check for `"Startup complete."`, but it seems this is insufficient.

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
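The per-engine if/elif chain in the diff above repeats the same `wait_for_app_logs` call six times, varying only the expected log line. A table-driven variant is one way to collapse it; this is a minimal sketch, not the PR's actual code. The engine names and log lines are taken from the diff; the `cluster` argument is assumed to expose `get_engine()`, `wait_for_app_logs()`, and `log_nifi_output()` like `DockerTestCluster` does there.

```python
import logging

# Log line that signals successful startup, per engine (values from the diff above).
STARTUP_LOG_LINES = {
    "minifi-cpp": "Starting Flow Controller",
    "nifi": "Starting Flow Controller...",
    "kafka-broker": "Startup complete.",
    "http-proxy": "Accepting HTTP Socket connections at",
    "s3-server": "Started S3MockApplication",
    "azure-storage-server": "Azurite Queue service is successfully listening at",
}


def wait_for_cluster_startup_finish(cluster, timeout_seconds=120):
    """Dict-based variant of the if/elif chain in the diff."""
    log_line = STARTUP_LOG_LINES.get(cluster.get_engine())
    if log_line is None:
        # Unknown engine: nothing to wait for; mirrors the original's default of True.
        return True
    startup_success = cluster.wait_for_app_logs(log_line, timeout_seconds)
    if not startup_success:
        cluster.log_nifi_output()
    return startup_success
```

Adding a new engine then only requires a new dictionary entry rather than another elif branch.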
[GitHub] [nifi-minifi-cpp] hunyadi-dev commented on a change in pull request #1053: MINIFICPP-1373 - Add integration tests for ConsumeKafka and test cleanup issues
hunyadi-dev commented on a change in pull request #1053:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1053#discussion_r617222970

## File path: docker/test/integration/environment.py
##
@@ -1,26 +1,29 @@
-from behave import fixture, use_fixture
 import sys
 sys.path.append('../minifi')
 import logging
+import datetime

 from MiNiFi_integration_test_driver import MiNiFi_integration_test
 from minifi import *


 def raise_exception(exception):
-    raise exception
+    raise exception


-@fixture
-def test_driver_fixture(context):
-    context.test = MiNiFi_integration_test(context)
-    yield context.test
-    logging.info("Integration test teardown...")
-    del context.test
+def integration_test_cleanup(test):
+    logging.info("Integration test cleanup...")
+    del test


 def before_scenario(context, scenario):
-    use_fixture(test_driver_fixture, context)
+    logging.info("Integration test setup at {time:%H:%M:%S:%f}".format(time=datetime.datetime.now()))
+    context.test = MiNiFi_integration_test(context)


 def after_scenario(context, scenario):
-    pass
+    logging.info("Integration test teardown at {time:%H:%M:%S:%f}".format(time=datetime.datetime.now()))
+    if context is not None and hasattr(context, "test"):
+        context.test.cleanup()  # force invocation
+        del context.test
+    else:
+        raise Exception("Test")

Review comment: I think it was already rewritten by the time you commented :)

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
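The environment.py diff above replaces behave's `@fixture`/`use_fixture` mechanism with explicit `before_scenario`/`after_scenario` hooks that log timestamped setup and teardown messages. The hook pattern can be reduced to a standalone sketch; this is not the PR's code, and the `object()` stand-in replaces the real `MiNiFi_integration_test(context)` constructor, which needs the full docker test harness.

```python
import datetime
import logging


def timestamped(message):
    """Format a message with the same %H:%M:%S:%f timestamp style used in the diff."""
    return "{message} at {time:%H:%M:%S:%f}".format(message=message, time=datetime.datetime.now())


# behave discovers these hook names in environment.py and calls them
# around every scenario in the feature files.
def before_scenario(context, scenario):
    logging.info(timestamped("Integration test setup"))
    context.test = object()  # stand-in for MiNiFi_integration_test(context)


def after_scenario(context, scenario):
    logging.info(timestamped("Integration test teardown"))
    if context is not None and hasattr(context, "test"):
        del context.test
```

Compared with the fixture-generator approach, explicit hooks make the teardown path run even when setup partially failed, which appears to be the motivation for the rewrite the comment refers to.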