[jira] [Updated] (NIFI-8607) UpdateAttribute Advanced Button displaying prompt outside dialog screen
[ https://issues.apache.org/jira/browse/NIFI-8607?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mark Bean updated NIFI-8607:
----------------------------
    Attachment: Screen Shot 2021-05-26 at 9.20.07 PM.png

> UpdateAttribute Advanced Button displaying prompt outside dialog screen
> -----------------------------------------------------------------------
>
>                 Key: NIFI-8607
>                 URL: https://issues.apache.org/jira/browse/NIFI-8607
>             Project: Apache NiFi
>          Issue Type: Bug
>    Affects Versions: 1.13.2
>         Environment: CentOS 7, macOS
>            Reporter: Phil
>            Priority: Minor
>         Attachments: Screen Shot 2021-05-26 at 9.20.07 PM.png, image-2021-05-17-13-50-18-264.png, image-2021-05-17-13-50-59-978.png
>
> Works fine on Windows 10 (both Chrome and Microsoft Edge) but NOT on macOS with Chrome or Safari.
> Chrome Version 90.0.4430.212 (Official Build) (x86_64)
> Safari Version 13.1.2 (13609.3.5.1.5)
> macOS High Sierra 10.13.6
>
> Select Advanced
> !image-2021-05-17-13-50-18-264.png!
> !image-2021-05-17-13-50-59-978.png!

-- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (NIFI-8607) UpdateAttribute Advanced Button displaying prompt outside dialog screen
[ https://issues.apache.org/jira/browse/NIFI-8607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17352225#comment-17352225 ]

Mark Bean commented on NIFI-8607:
---------------------------------
Cannot reproduce. Works fine with:
NiFi 1.13.2
Java: openjdk version "1.8.0_282"
macOS 11.2.3
Chrome Version 90.0.4430.85 (Official Build) (x86_64)
Safari Version 14.0.3 (16610.4.3.1.7)

Perhaps you need to upgrade the OS to the latest version?
[jira] [Updated] (NIFI-8629) Add periodic logging to Stateless that provides component statuses
[ https://issues.apache.org/jira/browse/NIFI-8629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

David Handermann updated NIFI-8629:
-----------------------------------
    Resolution: Fixed
        Status: Resolved  (was: Patch Available)

> Add periodic logging to Stateless that provides component statuses
> ------------------------------------------------------------------
>
>                 Key: NIFI-8629
>                 URL: https://issues.apache.org/jira/browse/NIFI-8629
>             Project: Apache NiFi
>          Issue Type: Improvement
>          Components: NiFi Stateless
>            Reporter: Mark Payne
>            Assignee: Mark Payne
>            Priority: Major
>             Fix For: 1.14.0
>
>          Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> NiFi provides a lot of detail in the UI about how each component is operating. However, with Stateless, that is not available. NiFi also provides a ControllerStatusReportingTask that logs most of this information; however, that task is currently part of the standard NAR, which is almost 80 MB and is not included by default. We should add something similar to the ControllerStatusReportingTask to Stateless but have it always run (as long as the log level is sufficiently high) rather than requiring the extra steps of configuring the Reporting Task. The output will likely be similar but not identical to ControllerStatusReportingTask, as we will want to make sure that the output is tailored well to Stateless.
>
> Additionally, the ControllerStatusReportingTask is fairly expensive to run because, as an extension, it cannot have access to framework-level objects such as Counters, Processors, etc., and as a result must build an expensive ProcessGroupStatus data model to operate on. By implementing this in the Stateless framework, it can be done more efficiently.
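The improvement above has the status task always run on a schedule, doing work only when the configured log level permits. A minimal sketch of that gating pattern, assuming illustrative names and intervals (this is not the actual Stateless implementation):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.function.BooleanSupplier;
import java.util.logging.Level;
import java.util.logging.Logger;

public class PeriodicStatusLogger {
    private static final Logger logger = Logger.getLogger(PeriodicStatusLogger.class.getName());

    // The task is always scheduled, but skips the (potentially expensive) status
    // gathering when the log level is disabled.
    static Runnable statusTask(BooleanSupplier logLevelEnabled, Runnable logStatuses) {
        return () -> {
            if (!logLevelEnabled.getAsBoolean()) {
                return; // logging disabled: do nothing
            }
            logStatuses.run();
        };
    }

    public static void main(String[] args) throws InterruptedException {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        // Hypothetical 60-second interval; the real interval is a framework choice.
        scheduler.scheduleWithFixedDelay(
                statusTask(() -> logger.isLoggable(Level.INFO),
                           () -> logger.info("component statuses...")),
                0, 60, TimeUnit.SECONDS);
        Thread.sleep(50);
        scheduler.shutdownNow();
    }
}
```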
[GitHub] [nifi] asfgit closed pull request #5101: NIFI-8629: Implemented the LogComponentStatuses task that runs period…
asfgit closed pull request #5101: URL: https://github.com/apache/nifi/pull/5101 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (NIFI-8629) Add periodic logging to Stateless that provides component statuses
[ https://issues.apache.org/jira/browse/NIFI-8629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17352220#comment-17352220 ]

ASF subversion and git services commented on NIFI-8629:
-------------------------------------------------------
Commit 08edc33eb7eb87c3790b7512948bdec5ed58652f in nifi's branch refs/heads/main from Mark Payne
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=08edc33 ]

NIFI-8629: Implemented the LogComponentStatuses task that runs periodically in stateless. Also fixed a typo in the ControllerStatusReportingTask and, while comparing outputs, fixed a bug that caused it to log only the counters generated by processors at the root level.

This closes #5101

Signed-off-by: David Handermann
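The commit above mentions a ControllerStatusReportingTask bug that reported only counters from root-level processors; the fix requires walking nested process groups as well. A hedged sketch of that recursion, using hypothetical stand-in classes rather than NiFi's actual ProcessGroup/Counter model:

```java
import java.util.ArrayList;
import java.util.List;

public class CounterWalk {
    // Hypothetical stand-in for a process group holding counters and child groups.
    static class Group {
        final List<String> counters = new ArrayList<>();
        final List<Group> children = new ArrayList<>();
    }

    // Buggy behavior: only the root group's counters are reported.
    static List<String> rootOnly(Group root) {
        return new ArrayList<>(root.counters);
    }

    // Fixed behavior: recurse into child groups so nested counters are included.
    static List<String> all(Group group) {
        List<String> result = new ArrayList<>(group.counters);
        for (Group child : group.children) {
            result.addAll(all(child));
        }
        return result;
    }

    public static void main(String[] args) {
        Group root = new Group();
        root.counters.add("root.count");
        Group child = new Group();
        child.counters.add("child.count");
        root.children.add(child);
        System.out.println(all(root)); // the nested counter is included
    }
}
```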
[GitHub] [nifi] github-actions[bot] closed pull request #2154: NIFI-2663: Add WebSocket support for MQTT processors
github-actions[bot] closed pull request #2154: URL: https://github.com/apache/nifi/pull/2154
[GitHub] [nifi] github-actions[bot] closed pull request #2105: NIFI-4169: Enhance PutWebSocket error handling
github-actions[bot] closed pull request #2105: URL: https://github.com/apache/nifi/pull/2105
[GitHub] [nifi] github-actions[bot] closed pull request #2222: NIFI-3926 - Edit Template information
github-actions[bot] closed pull request #2222: URL: https://github.com/apache/nifi/pull/2222
[GitHub] [nifi] github-actions[bot] closed pull request #2287: NIFI-4625 - Added External Version to the PutElastic5 Processor. Added Testing around the version usage.
github-actions[bot] closed pull request #2287: URL: https://github.com/apache/nifi/pull/2287
[GitHub] [nifi] github-actions[bot] closed pull request #2132: NIFI-4359 Based on field node type, whether value node or not, fetching the value of field.
github-actions[bot] closed pull request #2132: URL: https://github.com/apache/nifi/pull/2132
[GitHub] [nifi] github-actions[bot] closed pull request #1403: NIFI-3293 - Expose counters to reporting tasks, and send counters dat…
github-actions[bot] closed pull request #1403: URL: https://github.com/apache/nifi/pull/1403
[GitHub] [nifi] github-actions[bot] closed pull request #2031: NIFI-617 - Message destination option in MonitorActivity
github-actions[bot] closed pull request #2031: URL: https://github.com/apache/nifi/pull/2031
[GitHub] [nifi] github-actions[bot] closed pull request #2269: NIFI-4400 - Advanced UI with code editor for...
github-actions[bot] closed pull request #2269: URL: https://github.com/apache/nifi/pull/2269
[GitHub] [nifi] github-actions[bot] closed pull request #1541: NIFI-329 - Introduce IRC Client Services and ConsumeIRC processor
github-actions[bot] closed pull request #1541: URL: https://github.com/apache/nifi/pull/1541
[GitHub] [nifi] github-actions[bot] closed pull request #2157: NIFI-4390 - Add a keyboard shortcut for Connection...
github-actions[bot] closed pull request #2157: URL: https://github.com/apache/nifi/pull/2157
[GitHub] [nifi] github-actions[bot] closed pull request #2241: NIFI-4463 New MQTT Consume logic
github-actions[bot] closed pull request #2241: URL: https://github.com/apache/nifi/pull/2241
[GitHub] [nifi] github-actions[bot] closed pull request #2138: NIFI-4371 - add support for query timeout in Hive processors
github-actions[bot] closed pull request #2138: URL: https://github.com/apache/nifi/pull/2138
[GitHub] [nifi] github-actions[bot] closed pull request #1016: NIFI-2724 New JMX Processor
github-actions[bot] closed pull request #1016: URL: https://github.com/apache/nifi/pull/1016
[GitHub] [nifi] github-actions[bot] closed pull request #2066: NIFI-4256 - Add support for all AWS S3 Encryption Options
github-actions[bot] closed pull request #2066: URL: https://github.com/apache/nifi/pull/2066
[GitHub] [nifi] github-actions[bot] closed pull request #2286: NIFI-4630 Added Satori RTM processor bundle (ConsumeSatoriRTM, PublishSatoriRTM)
github-actions[bot] closed pull request #2286: URL: https://github.com/apache/nifi/pull/2286
[GitHub] [nifi] github-actions[bot] closed pull request #2336: NIFI-4688 - PutParquet should have RemoteOwner and RemoteGroup EL (Expression Language) support set to true
github-actions[bot] closed pull request #2336: URL: https://github.com/apache/nifi/pull/2336
[GitHub] [nifi] github-actions[bot] closed pull request #2408: localization using the JSTL standard fmt tag for multilingualization and nf._.msg() function in resource.js
github-actions[bot] closed pull request #2408: URL: https://github.com/apache/nifi/pull/2408
[GitHub] [nifi] github-actions[bot] closed pull request #2411: NIFI-4789 Extract grok multi pattern support
github-actions[bot] closed pull request #2411: URL: https://github.com/apache/nifi/pull/2411
[GitHub] [nifi] github-actions[bot] closed pull request #2375: autoclosesocket
github-actions[bot] closed pull request #2375: URL: https://github.com/apache/nifi/pull/2375
[GitHub] [nifi] github-actions[bot] closed pull request #2502: NIFI-4165: Added RemoveFlowFilesWithMissingContent.java and associate…
github-actions[bot] closed pull request #2502: URL: https://github.com/apache/nifi/pull/2502
[GitHub] [nifi] github-actions[bot] closed pull request #2425: Emit failures array
github-actions[bot] closed pull request #2425: URL: https://github.com/apache/nifi/pull/2425
[GitHub] [nifi] github-actions[bot] closed pull request #2467: NIFI-4862 Add Copy original attributes to SelectHiveQL processor
github-actions[bot] closed pull request #2467: URL: https://github.com/apache/nifi/pull/2467
[GitHub] [nifi] Kasonnara edited a comment on pull request #4954: NIFI-8377 Fix CSVReader quoting and trimming with value separator inconsistency
Kasonnara edited a comment on pull request #4954:
URL: https://github.com/apache/nifi/pull/4954#issuecomment-849121855

I apologize for the late reply; the last months were quite busy, and then it took me some time to find out why I couldn't replicate the CI test results on my Windows machine (issues with long paths, environment variables, dependency downloads that failed, and tests that failed because my computer's language isn't English). In the end I switched to another computer running Linux, which I control in more depth, and everything went better. As requested, I added a property (disabled by default) to control the use of `withIgnoreSurroundingSpaces`, and I had to rebase the commits or some dependencies could no longer be found.
[GitHub] [nifi] Kasonnara commented on pull request #4954: NIFI-8377 Fix CSVReader quoting and trimming with value separator inconsistency
Kasonnara commented on pull request #4954:
URL: https://github.com/apache/nifi/pull/4954#issuecomment-849121855

(Earlier revision of the comment quoted above; superseded by the edited version.)
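The comment above adds a property (disabled by default) controlling `withIgnoreSurroundingSpaces`. Conceptually, that option toggles whether unquoted CSV values keep or drop their surrounding whitespace. A stdlib-only sketch of the behavior difference (this mimics what the Commons CSV option changes; quoting is omitted and the method names are illustrative):

```java
import java.util.Arrays;
import java.util.List;
import java.util.regex.Pattern;
import java.util.stream.Collectors;

public class CsvTrimSketch {
    // Split a single CSV line on the separator, optionally trimming surrounding spaces.
    static List<String> parseLine(String line, char sep, boolean ignoreSurroundingSpaces) {
        return Arrays.stream(line.split(Pattern.quote(String.valueOf(sep)), -1))
                .map(v -> ignoreSurroundingSpaces ? v.trim() : v)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(parseLine(" a , b ", ',', true));   // values trimmed
        System.out.println(parseLine(" a , b ", ',', false));  // whitespace preserved
    }
}
```

Since trimming can silently change data (for example, values that are intentionally padded), exposing it as a property disabled by default is the conservative choice.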
[jira] [Commented] (NIFI-8632) StandardOidcIdentityProviderGroovyTest hits localhost:443
[ https://issues.apache.org/jira/browse/NIFI-8632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17352055#comment-17352055 ]

Joseph Gresock commented on NIFI-8632:
--------------------------------------
A couple of observations: since nifi-registry is not set up to run ITs anyway, it's not as trivial as switching this to an IT. Secondly, when I switched to using an available port, the test itself runs and passes just fine, but if the OidcServiceTest runs *before* it, the StandardOidcIdentityProviderGroovyTest hangs. Apparently it hangs only if the following lines are executed in OidcService:

{code:java}
this.stateLookupForPendingRequests = CacheBuilder.newBuilder().expireAfterWrite(duration, units).build();
this.jwtLookupForCompletedRequests = CacheBuilder.newBuilder().expireAfterWrite(duration, units).build();
{code}

If these lines are commented out, the StandardOidcIdentityProviderGroovyTest no longer hangs. I haven't yet tracked down why this would be.

> StandardOidcIdentityProviderGroovyTest hits localhost:443
> ---------------------------------------------------------
>
>                 Key: NIFI-8632
>                 URL: https://issues.apache.org/jira/browse/NIFI-8632
>             Project: Apache NiFi
>          Issue Type: Bug
>    Affects Versions: 1.14.0
>            Reporter: Joseph Gresock
>            Assignee: Joseph Gresock
>            Priority: Minor
>
> The following test tries to contact https://localhost:443 and fails if something is actually running on that port. It should be changed to an Integration Test instead.
>
> {code:bash}
> Tests run: 12, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 2.867 sec <<< FAILURE! - in org.apache.nifi.registry.web.security.authentication.oidc.StandardOidcIdentityProviderGroovyTest
> testConvertOIDCTokenToNiFiTokenShouldHandleBlankIdentityAndNoEmailClaim(org.apache.nifi.registry.web.security.authentication.oidc.StandardOidcIdentityProviderGroovyTest)  Time elapsed: 0.926 sec  <<< FAILURE!
> java.lang.AssertionError: Closure org.apache.nifi.registry.web.security.authentication.oidc.StandardOidcIdentityProviderGroovyTest$_testConvertOIDCTokenToNiFiTokenShouldHandleBlankIdentityAndNoEmailClaim_closure5@27b2faa6 should have failed with an exception of type java.net.ConnectException, instead got Exception javax.net.ssl.SSLHandshakeException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
> 	at org.junit.Assert.fail(Assert.java:89)
> 	at groovy.test.GroovyAssert.shouldFail(GroovyAssert.java:129)
> 	at groovy.util.GroovyTestCase.shouldFail(GroovyTestCase.java:221)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 	at java.lang.reflect.Method.invoke(Method.java:498)
> 	at org.codehaus.groovy.runtime.callsite.PlainObjectMetaMethodSite.doInvoke(PlainObjectMetaMethodSite.java:43)
> 	at org.codehaus.groovy.runtime.callsite.PogoMetaMethodSite$PogoCachedMethodSiteNoUnwrapNoCoerce.invoke(PogoMetaMethodSite.java:190)
> 	at org.codehaus.groovy.runtime.callsite.PogoMetaMethodSite.callCurrent(PogoMetaMethodSite.java:58)
> 	at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCallCurrent(CallSiteArray.java:51)
> 	at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callCurrent(AbstractCallSite.java:156)
> 	at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callCurrent(AbstractCallSite.java:176)
> 	at org.apache.nifi.registry.web.security.authentication.oidc.StandardOidcIdentityProviderGroovyTest.testConvertOIDCTokenToNiFiTokenShouldHandleBlankIdentityAndNoEmailClaim(StandardOidcIdentityProviderGroovyTest.groovy:428)
> {code}
[jira] [Commented] (NIFI-8631) ConsumeGCPubSub acknowledges messages without committing the session
[ https://issues.apache.org/jira/browse/NIFI-8631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17352044#comment-17352044 ]

ASF subversion and git services commented on NIFI-8631:
-------------------------------------------------------
Commit 46b1f6755c5ca3cc4bdb55e48fd09e1216b66d71 in nifi's branch refs/heads/main from Mark Payne
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=46b1f67 ]

NIFI-8631: Ensure that GCP Pub/Sub messages are not acknowledged until the session has been committed, in order to ensure that we don't have data loss

This closes #5102.

Signed-off-by: Peter Turcsanyi

> ConsumeGCPubSub acknowledges messages without committing the session
> --------------------------------------------------------------------
>
>                 Key: NIFI-8631
>                 URL: https://issues.apache.org/jira/browse/NIFI-8631
>             Project: Apache NiFi
>          Issue Type: Bug
>          Components: Extensions
>            Reporter: Mark Payne
>            Assignee: Mark Payne
>            Priority: Critical
>
>          Time Spent: 10m
>  Remaining Estimate: 0h
>
> ConsumeGCPubSub always acknowledges receipt of messages without committing the session. As a result, if NiFi fails to commit the session, the message will be lost.
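The fix described above is an ordering change: the message may only be acknowledged to the broker after the session commit succeeds, so a failed commit leaves the message unacknowledged for redelivery. A minimal sketch of both orderings, using hypothetical `Session`/`Acker` stand-ins rather than NiFi's or GCP's actual APIs:

```java
import java.util.ArrayList;
import java.util.List;

public class AckAfterCommit {
    interface Session { void commit(); }
    interface Acker { void ack(); }

    static final List<String> ORDER = new ArrayList<>();

    // Unsafe pattern (the bug): acknowledge first, so a failed commit loses the message.
    static void ackThenCommit(Session s, Acker a) {
        a.ack();
        s.commit(); // if this throws, the broker has already discarded the message
    }

    // Fixed pattern: only acknowledge once the session commit has succeeded.
    static void commitThenAck(Session s, Acker a) {
        s.commit(); // a failure here leaves the message unacknowledged, so it is redelivered
        a.ack();
    }

    public static void main(String[] args) {
        commitThenAck(() -> ORDER.add("commit"), () -> ORDER.add("ack"));
        System.out.println(ORDER);
    }
}
```

The trade-off is at-least-once delivery: a crash between commit and ack causes a redelivered duplicate rather than a lost message, which is the safer failure mode.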
[jira] [Updated] (NIFI-8631) ConsumeGCPubSub acknowledges messages without committing the session
[ https://issues.apache.org/jira/browse/NIFI-8631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Peter Turcsanyi updated NIFI-8631:
----------------------------------
Fix Version/s: 1.14.0
   Resolution: Fixed
       Status: Resolved  (was: Patch Available)
[GitHub] [nifi] asfgit closed pull request #5102: NIFI-8631: Ensure that GCP Pub/Sub messages are not acknowledged unti…
asfgit closed pull request #5102: URL: https://github.com/apache/nifi/pull/5102
[GitHub] [nifi] markap14 commented on pull request #5101: NIFI-8629: Implemented the LogComponentStatuses task that runs period…
markap14 commented on pull request #5101: URL: https://github.com/apache/nifi/pull/5101#issuecomment-849045088

Thanks for reviewing @exceptionfactory!
[GitHub] [nifi] exceptionfactory commented on a change in pull request #5101: NIFI-8629: Implemented the LogComponentStatuses task that runs period…
exceptionfactory commented on a change in pull request #5101:
URL: https://github.com/apache/nifi/pull/5101#discussion_r640037705

## File path: nifi-stateless/nifi-stateless-bundle/nifi-stateless-engine/src/main/java/org/apache/nifi/controller/reporting/LogComponentStatuses.java
##
@@ -0,0 +1,205 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.controller.reporting;
+
+import org.apache.nifi.controller.Counter;
+import org.apache.nifi.controller.ProcessorNode;
+import org.apache.nifi.controller.flow.FlowManager;
+import org.apache.nifi.controller.repository.CounterRepository;
+import org.apache.nifi.controller.repository.FlowFileEvent;
+import org.apache.nifi.controller.repository.FlowFileEventRepository;
+import org.apache.nifi.groups.ProcessGroup;
+import org.apache.nifi.util.FormatUtils;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.ArrayList;
+import java.util.Comparator;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.ConcurrentHashMap;
+
+public class LogComponentStatuses implements Runnable {
+    private static final Logger logger = LoggerFactory.getLogger(LogComponentStatuses.class);
+
+    private static final String PROCESSOR_LINE_FORMAT = "| %1$-30.30s | %2$-36.36s | %3$-30.30s | %4$28.28s | %5$30.30s | %6$14.14s | %7$14.14s | %8$28.28s |\n";
+    private static final String COUNTER_LINE_FORMAT = "| %1$-36.36s | %2$-36.36s | %3$28.28s | %4$28.28s |\n";
+
+    private final FlowFileEventRepository flowFileEventRepository;
+    private final CounterRepository counterRepository;
+    private final FlowManager flowManager;
+
+    private final String processorHeader;
+    private final String processorBorderLine;
+    private final String counterHeader;
+    private final String counterBorderLine;
+
+    private final Map<String, Long> previousCounterValues = new ConcurrentHashMap<>();
+    private volatile long lastTriggerTime = System.currentTimeMillis();
+
+    public LogComponentStatuses(final FlowFileEventRepository flowFileEventRepository, final CounterRepository counterRepository, final FlowManager flowManager) {
+        this.flowFileEventRepository = flowFileEventRepository;
+        this.counterRepository = counterRepository;
+        this.flowManager = flowManager;
+
+        processorHeader = String.format(PROCESSOR_LINE_FORMAT, "Processor Name", "Processor ID", "Processor Type", "Bytes Read/sec", "Bytes Written/sec", "Tasks/sec", "Nanos/Task",
+            "Percent of Processing Time");
+        processorBorderLine = createLine(processorHeader);
+
+        counterHeader = String.format(COUNTER_LINE_FORMAT, "Counter Context", "Counter Name", "Counter Value", "Increase/sec");
+        counterBorderLine = createLine(counterHeader);
+    }
+
+    private String createLine(final String valueToUnderscore) {
+        final StringBuilder processorBorderBuilder = new StringBuilder(valueToUnderscore.length());
+        for (int i = 0; i < valueToUnderscore.length(); i++) {
+            processorBorderBuilder.append('-');
+        }
+        return processorBorderBuilder.toString();
+    }
+
+    @Override
+    public void run() {
+        try {
+            if (!logger.isInfoEnabled()) {
+                return;
+            }
+
+            logFlowFileEvents();
+            logCounters();
+        } catch (final Exception e) {
+            logger.error("Failed to log component statuses", e);
+        }
+    }
+
+    private void logFlowFileEvents() {
+        final long timestamp = System.currentTimeMillis();
+        final ProcessGroup rootGroup = flowManager.getRootGroup();
+        final List<ProcessorNode> allProcessors = rootGroup.findAllProcessors();
+
+        long totalNanos = 0L;
+        final List<ProcessorAndEvent> processorsAndEvents = new ArrayList<>();
+        for (final ProcessorNode processorNode : allProcessors) {
+            final FlowFileEvent flowFileEvent = flowFileEventRepository.reportTransferEvents(processorNode.getIdentifier(), timestamp);
+            if (flowFileEvent == null) {
+                continue;
+            }
+
+            processorsAndEvents.add(new ProcessorAndEvent(processorNode, flowFileEvent));
+
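The quoted LogComponentStatuses builds its column-aligned status tables with positional format specifiers: `%1$-30.30s` selects the first argument, left-justifies it, and pads/truncates to exactly 30 characters, so every row has the same width and a matching dashed border can be derived from the header. A minimal, self-contained sketch of that technique (the column names and widths here are illustrative, not NiFi's exact layout):

```java
public class ColumnTable {
    // %n$ selects the argument by position; -30.30s pads and truncates to 30 chars.
    static final String LINE = "| %1$-30.30s | %2$-36.36s | %3$14.14s |\n";

    // A dashed border the same width as a formatted row (mirrors createLine above).
    static String border(String row) {
        return "-".repeat(row.length());
    }

    public static void main(String[] args) {
        String header = String.format(LINE, "Processor Name", "Processor ID", "Tasks/sec");
        String row = String.format(LINE, "GenerateFlowFile", "1234-abcd", "12");
        System.out.print(border(header) + "\n" + header + row);
    }
}
```

Because each `%s` carries both a minimum width and a maximum precision, overly long names are truncated instead of breaking the alignment.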
[GitHub] [nifi] exceptionfactory commented on a change in pull request #5101: NIFI-8629: Implemented the LogComponentStatuses task that runs period…
exceptionfactory commented on a change in pull request #5101: URL: https://github.com/apache/nifi/pull/5101#discussion_r640036572 ## File path: nifi-stateless/nifi-stateless-bundle/nifi-stateless-engine/src/main/java/org/apache/nifi/controller/reporting/LogComponentStatuses.java ## @@ -0,0 +1,205 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.nifi.controller.reporting; + +import org.apache.nifi.controller.Counter; +import org.apache.nifi.controller.ProcessorNode; +import org.apache.nifi.controller.flow.FlowManager; +import org.apache.nifi.controller.repository.CounterRepository; +import org.apache.nifi.controller.repository.FlowFileEvent; +import org.apache.nifi.controller.repository.FlowFileEventRepository; +import org.apache.nifi.groups.ProcessGroup; +import org.apache.nifi.util.FormatUtils; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import java.util.ArrayList; +import java.util.Comparator; +import java.util.List; +import java.util.Map; +import java.util.concurrent.ConcurrentHashMap; + +public class LogComponentStatuses implements Runnable { +private static final Logger logger = LoggerFactory.getLogger(LogComponentStatuses.class); + +private static final String PROCESSOR_LINE_FORMAT = "| %1$-30.30s | %2$-36.36s | %3$-30.30s | %4$28.28s | %5$30.30s | %6$14.14s | %7$14.14s | %8$28.28s |\n"; +private static final String COUNTER_LINE_FORMAT = "| %1$-36.36s | %2$-36.36s | %3$28.28s | %4$28.28s |\n"; + +private final FlowFileEventRepository flowFileEventRepository; +private final CounterRepository counterRepository; +private final FlowManager flowManager; + +private final String processorHeader; +private final String processorBorderLine; +private final String counterHeader; +private final String counterBorderLine; + +private final Map<String, Long> previousCounterValues = new ConcurrentHashMap<>(); +private volatile long lastTriggerTime = System.currentTimeMillis(); + +public LogComponentStatuses(final FlowFileEventRepository flowFileEventRepository, final CounterRepository counterRepository, final FlowManager flowManager) { +this.flowFileEventRepository = flowFileEventRepository; +this.counterRepository = counterRepository; +this.flowManager = flowManager; + +processorHeader = String.format(PROCESSOR_LINE_FORMAT, "Processor Name", "Processor ID", "Processor Type", "Bytes 
Read/sec", "Bytes Written/sec", "Tasks/sec", "Nanos/Task", +"Percent of Processing Time"); +processorBorderLine = createLine(processorHeader); + +counterHeader = String.format(COUNTER_LINE_FORMAT, "Counter Context", "Counter Name", "Counter Value", "Increase/sec"); +counterBorderLine = createLine(counterHeader); +} + +private String createLine(final String valueToUnderscore) { +final StringBuilder processorBorderBuilder = new StringBuilder(valueToUnderscore.length()); +for (int i = 0; i < valueToUnderscore.length(); i++) { +processorBorderBuilder.append('-'); +} +return processorBorderBuilder.toString(); +} + +@Override +public void run() { +try { +if (!logger.isInfoEnabled()) { Review comment: Thanks for the explanation, that makes sense. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
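[Editor's note] The createLine(...) helper quoted in the diff above can be exercised in isolation. The sketch below narrows the format string to two columns for brevity; as in the diff, the border line gets one dash per character of the formatted header (including its trailing newline).

```java
// Minimal sketch of the header/border construction from the LogComponentStatuses diff.
public class BorderLineSketch {
    // Simplified two-column version of PROCESSOR_LINE_FORMAT from the diff.
    static final String LINE_FORMAT = "| %1$-20.20s | %2$-12.12s |\n";

    // Mirrors the createLine(...) helper: a dash for every character of the header.
    static String createLine(final String valueToUnderscore) {
        final StringBuilder builder = new StringBuilder(valueToUnderscore.length());
        for (int i = 0; i < valueToUnderscore.length(); i++) {
            builder.append('-');
        }
        return builder.toString();
    }

    public static void main(String[] args) {
        final String header = String.format(LINE_FORMAT, "Processor Name", "Tasks/sec");
        System.out.print(header);
        System.out.println(createLine(header));
    }
}
```

Note that the argument-index syntax (`%1$-20.20s`) is why the dropped `$` in the archived diff (`%714.14s`) would be an invalid format: the index must be terminated by `$` before the width and precision.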
[jira] [Commented] (NIFI-8361) UnpackContent failing for Deflate:Maximum
[ https://issues.apache.org/jira/browse/NIFI-8361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17352008#comment-17352008 ] ASF subversion and git services commented on NIFI-8361: --- Commit 1e6161c0aaddb1a11aae27dec2ea9761ca3a10ac in nifi's branch refs/heads/main from David Handermann [ https://gitbox.apache.org/repos/asf?p=nifi.git;h=1e6161c ] NIFI-8361 Upgraded Zip4j to 2.8.0 - Upgrade resolves issue unpacking Zip files with temporary spanning markers Signed-off-by: Pierre Villard This closes #5103. > UnpackContent failing for Deflate:Maximum > - > > Key: NIFI-8361 > URL: https://issues.apache.org/jira/browse/NIFI-8361 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 1.13.0, 1.13.2 > Environment: Ubuntu 18.04.5 (1.13.2) > RHEL 7.7 (1.12.1) >Reporter: Tom P >Assignee: David Handermann >Priority: Major > Labels: newbie > Attachments: zip_deflate-maximum.xml > > Time Spent: 10m > Remaining Estimate: 0h > > Hi team, > Using 1.13.2 and running a pipeline pulling down a bunch of ZIP files, and > noticed a regression in behaviour between my two environments. 
> The 1.12.1 instance (running on RHEL) was able to unpack the file > successfully, whereas the 1.13.2 instance complains of an error stating > {code:java} > 2021-03-23 04:12:39,361 ERROR [Timer-Driven Process Thread-8] > o.a.n.processors.standard.UnpackContent > UnpackContent[id=5d0fda44-0178-1000-872e-6c183c633c89] Unable to unpack > StandardFlowFileRecord[uuid=9fa7650e-8557-465c-b39b-0e9b5e25ee0a,claim=StandardContentClaim > [resourceClaim=StandardResourceClaim[id=1616471834548-154, > container=default, section=154], offset=0, > length=11095546],offset=0,name=3b70dbf9-b0a1-4d63-b2fd-0efe2a7291b8,size=11095546] > because it does not appear to have any entries; routing to failure{code} > The only discernable difference between this file and files that were able to > be unpacked was that the offending files have > * a "compression method" of Deflate:Maximum (as opposed to Deflate on the > working files) > * an "offset" of 4 (as opposed to "0" on the working files) > See attached for the template I used for testing the same functionality on my > 1.13.2 and 1.12.1 NiFi instances. I've downgraded the Ubuntu instance to > 1.12.1 and noted that the UnpackContent processor functions as expected, so I > don't believe it's an issue with the host OS. > I'm not sure whether the issue introduced in 1.13.1 might also have impacted > this, with the offset, or if it's something to do with the compression > method, or something else entirely. > Happy to provide further detail if needed :) > Cheers, > Tom > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (NIFI-8361) UnpackContent failing for Deflate:Maximum
[ https://issues.apache.org/jira/browse/NIFI-8361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pierre Villard updated NIFI-8361: - Fix Version/s: 1.14.0 Resolution: Fixed Status: Resolved (was: Patch Available) > UnpackContent failing for Deflate:Maximum > - > > Key: NIFI-8361 > URL: https://issues.apache.org/jira/browse/NIFI-8361 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 1.13.0, 1.13.2 > Environment: Ubuntu 18.04.5 (1.13.2) > RHEL 7.7 (1.12.1) >Reporter: Tom P >Assignee: David Handermann >Priority: Major > Labels: newbie > Fix For: 1.14.0 > > Attachments: zip_deflate-maximum.xml > > Time Spent: 10m > Remaining Estimate: 0h > > Hi team, > Using 1.13.2 and running a pipeline pulling down a bunch of ZIP files, and > noticed a regression in behaviour between my two environments. > The 1.12.1 instance (running on RHEL) was able to unpack the file > successfully, whereas the 1.13.2 instance complains of an error stating > {code:java} > 2021-03-23 04:12:39,361 ERROR [Timer-Driven Process Thread-8] > o.a.n.processors.standard.UnpackContent > UnpackContent[id=5d0fda44-0178-1000-872e-6c183c633c89] Unable to unpack > StandardFlowFileRecord[uuid=9fa7650e-8557-465c-b39b-0e9b5e25ee0a,claim=StandardContentClaim > [resourceClaim=StandardResourceClaim[id=1616471834548-154, > container=default, section=154], offset=0, > length=11095546],offset=0,name=3b70dbf9-b0a1-4d63-b2fd-0efe2a7291b8,size=11095546] > because it does not appear to have any entries; routing to failure{code} > The only discernable difference between this file and files that were able to > be unpacked was that the offending files have > * a "compression method" of Deflate:Maximum (as opposed to Deflate on the > working files) > * an "offset" of 4 (as opposed to "0" on the working files) > See attached for the template I used for testing the same functionality on my > 1.13.2 and 1.12.1 NiFi instances. 
I've downgraded the Ubuntu instance to > 1.12.1 and noted that the UnpackContent processor functions as expected, so I > don't believe it's an issue with the host OS. > I'm not sure whether the issue introduced in 1.13.1 might also have impacted > this, with the offset, or if it's something to do with the compression > method, or something else entirely. > Happy to provide further detail if needed :) > Cheers, > Tom > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [nifi] asfgit closed pull request #5103: NIFI-8361 Upgraded Zip4j to 2.8.0
asfgit closed pull request #5103: URL: https://github.com/apache/nifi/pull/5103 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [nifi] markap14 commented on a change in pull request #5101: NIFI-8629: Implemented the LogComponentStatuses task that runs period…
markap14 commented on a change in pull request #5101: URL: https://github.com/apache/nifi/pull/5101#discussion_r640015013 ## File path: nifi-stateless/nifi-stateless-bundle/nifi-stateless-engine/src/main/java/org/apache/nifi/stateless/flow/StandardStatelessFlow.java ## @@ -33,6 +33,7 @@ import org.apache.nifi.controller.ReportingTaskNode; import org.apache.nifi.controller.queue.FlowFileQueue; import org.apache.nifi.controller.queue.QueueSize; +import org.apache.nifi.controller.reporting.LogComponentStatuses; Review comment: Thanks. Will address. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [nifi] markap14 commented on a change in pull request #5101: NIFI-8629: Implemented the LogComponentStatuses task that runs period…
markap14 commented on a change in pull request #5101: URL: https://github.com/apache/nifi/pull/5101#discussion_r640014876 ## File path: nifi-stateless/nifi-stateless-bundle/nifi-stateless-engine/src/main/java/org/apache/nifi/controller/reporting/LogComponentStatuses.java ## @@ -0,0 +1,205 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.nifi.controller.reporting; + +import org.apache.nifi.controller.Counter; +import org.apache.nifi.controller.ProcessorNode; +import org.apache.nifi.controller.flow.FlowManager; +import org.apache.nifi.controller.repository.CounterRepository; +import org.apache.nifi.controller.repository.FlowFileEvent; +import org.apache.nifi.controller.repository.FlowFileEventRepository; +import org.apache.nifi.groups.ProcessGroup; +import org.apache.nifi.util.FormatUtils; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import java.util.ArrayList; +import java.util.Comparator; +import java.util.List; +import java.util.Map; +import java.util.concurrent.ConcurrentHashMap; + +public class LogComponentStatuses implements Runnable { +private static final Logger logger = LoggerFactory.getLogger(LogComponentStatuses.class); + +private static final String PROCESSOR_LINE_FORMAT = "| %1$-30.30s | %2$-36.36s | %3$-30.30s | %4$28.28s | %5$30.30s | %6$14.14s | %7$14.14s | %8$28.28s |\n"; +private static final String COUNTER_LINE_FORMAT = "| %1$-36.36s | %2$-36.36s | %3$28.28s | %4$28.28s |\n"; + +private final FlowFileEventRepository flowFileEventRepository; +private final CounterRepository counterRepository; +private final FlowManager flowManager; + +private final String processorHeader; +private final String processorBorderLine; +private final String counterHeader; +private final String counterBorderLine; + +private final Map<String, Long> previousCounterValues = new ConcurrentHashMap<>(); +private volatile long lastTriggerTime = System.currentTimeMillis(); + +public LogComponentStatuses(final FlowFileEventRepository flowFileEventRepository, final CounterRepository counterRepository, final FlowManager flowManager) { +this.flowFileEventRepository = flowFileEventRepository; +this.counterRepository = counterRepository; +this.flowManager = flowManager; + +processorHeader = String.format(PROCESSOR_LINE_FORMAT, "Processor Name", "Processor ID", "Processor Type", "Bytes 
Read/sec", "Bytes Written/sec", "Tasks/sec", "Nanos/Task", +"Percent of Processing Time"); +processorBorderLine = createLine(processorHeader); + +counterHeader = String.format(COUNTER_LINE_FORMAT, "Counter Context", "Counter Name", "Counter Value", "Increase/sec"); +counterBorderLine = createLine(counterHeader); +} + +private String createLine(final String valueToUnderscore) { +final StringBuilder processorBorderBuilder = new StringBuilder(valueToUnderscore.length()); +for (int i = 0; i < valueToUnderscore.length(); i++) { +processorBorderBuilder.append('-'); +} +return processorBorderBuilder.toString(); +} + +@Override +public void run() { +try { +if (!logger.isInfoEnabled()) { Review comment: No, this is very intentional, as the logback.xml can be changed while running. If that happens, we want this to take effect. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
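[Editor's note] The point made in the comment above is that the log-level check lives inside run() rather than being evaluated once, so a logging-configuration change made while the process is running takes effect on the next trigger. A self-contained sketch of that pattern follows; the diff uses SLF4J, but java.util.logging is used here only so the example runs without third-party dependencies.

```java
import java.util.logging.Level;
import java.util.logging.Logger;

// Re-check the logger's effective level on every invocation, so a runtime
// change to the logging configuration is picked up on the next run.
public class LevelCheckedTask implements Runnable {
    private static final Logger logger = Logger.getLogger(LevelCheckedTask.class.getName());

    int statusReportsLogged; // visible for the demo below

    @Override
    public void run() {
        // Checked per run, not cached at construction time.
        if (!logger.isLoggable(Level.INFO)) {
            return;
        }
        statusReportsLogged++;
        logger.info("component status snapshot #" + statusReportsLogged);
    }

    public static void main(String[] args) {
        final LevelCheckedTask task = new LevelCheckedTask();
        logger.setLevel(Level.INFO);
        task.run(); // INFO enabled: this run does the work
        logger.setLevel(Level.OFF);
        task.run(); // skipped: the level change is picked up immediately
        System.out.println("reports logged: " + task.statusReportsLogged);
    }
}
```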
[GitHub] [nifi] markap14 commented on a change in pull request #5101: NIFI-8629: Implemented the LogComponentStatuses task that runs period…
markap14 commented on a change in pull request #5101: URL: https://github.com/apache/nifi/pull/5101#discussion_r640014538 ## File path: nifi-stateless/nifi-stateless-bundle/nifi-stateless-engine/src/main/java/org/apache/nifi/stateless/flow/StandardStatelessFlow.java ## @@ -212,23 +217,39 @@ public void initialize() { logger.info("Successfully initialized components in {} millis ({} millis to perform validation, {} millis for services to enable)", initializationMillis, validationMillis, serviceEnableMillis); -runDataflowExecutor = Executors.newFixedThreadPool(1, r -> { -final Thread thread = Executors.defaultThreadFactory().newThread(r); -final String flowName = dataflowDefinition.getFlowName(); -if (flowName == null) { -thread.setName("Run Dataflow"); -} else { -thread.setName("Run Dataflow " + flowName); -} +// Create executor for dataflow +final String flowName = dataflowDefinition.getFlowName(); +final String threadName = (flowName == null) ? "Run Dataflow" : "Run Dataflow " + flowName; +runDataflowExecutor = Executors.newFixedThreadPool(1, createNamedThreadFactory(threadName, false)); -return thread; -}); +// Periodically log component statuses +backgroundTaskExecutor = Executors.newScheduledThreadPool(1, createNamedThreadFactory("Background Tasks", true)); +backgroundTasks.forEach(task -> backgroundTaskExecutor.scheduleWithFixedDelay(task.getTask(), task.getSchedulingPeriod(), task.getSchedulingPeriod(), task.getSchedulingUnit())); } catch (final Throwable t) { processScheduler.shutdown(); throw t; } } +private ThreadFactory createNamedThreadFactory(final String name, final boolean daemon) { +return (Runnable r) -> { +final Thread thread = Executors.defaultThreadFactory().newThread(r); +thread.setName(name); +thread.setDaemon(daemon); +return thread; +}; +} + +/** + * Schedules the given background task to run periodically after the dataflow has been initialized until it has been shutdown + * @param task the task to run + * @param period how often to run it + 
* @param unit the unit for the time period + */ +public void scheduleBackgroundTask(final Runnable task, final long period, final TimeUnit unit) { Review comment: This is intentionally kept in StandardStatelessFlow. It is something that is made available to the framework (the instantiator of the StandardStatelessFlow, specifically), definitely not something that we want made publicly available. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
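[Editor's note] The createNamedThreadFactory(...) helper shown in the StandardStatelessFlow diff above is small enough to extract into a self-contained sketch: wrap the default factory, then set the thread name and daemon flag before handing the thread back.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.TimeUnit;

public class NamedThreadFactorySketch {

    // Mirrors the helper in the diff: name and daemon flag applied per thread.
    static ThreadFactory createNamedThreadFactory(final String name, final boolean daemon) {
        return (Runnable r) -> {
            final Thread thread = Executors.defaultThreadFactory().newThread(r);
            thread.setName(name);
            thread.setDaemon(daemon);
            return thread;
        };
    }

    public static void main(String[] args) {
        // Daemon thread, as in the diff, so the background task never keeps the JVM alive.
        final ScheduledExecutorService backgroundTaskExecutor =
            Executors.newScheduledThreadPool(1, createNamedThreadFactory("Background Tasks", true));
        backgroundTaskExecutor.scheduleWithFixedDelay(
            () -> System.out.println("logging component statuses..."), 1, 1, TimeUnit.MINUTES);
        backgroundTaskExecutor.shutdownNow(); // stop the demo immediately
    }
}
```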
[GitHub] [nifi] markap14 commented on a change in pull request #5101: NIFI-8629: Implemented the LogComponentStatuses task that runs period…
markap14 commented on a change in pull request #5101: URL: https://github.com/apache/nifi/pull/5101#discussion_r640013919 ## File path: nifi-stateless/nifi-stateless-bundle/nifi-stateless-engine/src/main/java/org/apache/nifi/controller/reporting/LogComponentStatuses.java ## @@ -0,0 +1,205 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.nifi.controller.reporting; + +import org.apache.nifi.controller.Counter; +import org.apache.nifi.controller.ProcessorNode; +import org.apache.nifi.controller.flow.FlowManager; +import org.apache.nifi.controller.repository.CounterRepository; +import org.apache.nifi.controller.repository.FlowFileEvent; +import org.apache.nifi.controller.repository.FlowFileEventRepository; +import org.apache.nifi.groups.ProcessGroup; +import org.apache.nifi.util.FormatUtils; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import java.util.ArrayList; +import java.util.Comparator; +import java.util.List; +import java.util.Map; +import java.util.concurrent.ConcurrentHashMap; + +public class LogComponentStatuses implements Runnable { +private static final Logger logger = LoggerFactory.getLogger(LogComponentStatuses.class); + +private static final String PROCESSOR_LINE_FORMAT = "| %1$-30.30s | %2$-36.36s | %3$-30.30s | %4$28.28s | %5$30.30s | %6$14.14s | %7$14.14s | %8$28.28s |\n"; +private static final String COUNTER_LINE_FORMAT = "| %1$-36.36s | %2$-36.36s | %3$28.28s | %4$28.28s |\n"; + +private final FlowFileEventRepository flowFileEventRepository; +private final CounterRepository counterRepository; +private final FlowManager flowManager; + +private final String processorHeader; +private final String processorBorderLine; +private final String counterHeader; +private final String counterBorderLine; + +private final Map<String, Long> previousCounterValues = new ConcurrentHashMap<>(); +private volatile long lastTriggerTime = System.currentTimeMillis(); + +public LogComponentStatuses(final FlowFileEventRepository flowFileEventRepository, final CounterRepository counterRepository, final FlowManager flowManager) { +this.flowFileEventRepository = flowFileEventRepository; +this.counterRepository = counterRepository; +this.flowManager = flowManager; + +processorHeader = String.format(PROCESSOR_LINE_FORMAT, "Processor Name", "Processor ID", "Processor Type", "Bytes 
Read/sec", "Bytes Written/sec", "Tasks/sec", "Nanos/Task", +"Percent of Processing Time"); +processorBorderLine = createLine(processorHeader); + +counterHeader = String.format(COUNTER_LINE_FORMAT, "Counter Context", "Counter Name", "Counter Value", "Increase/sec"); +counterBorderLine = createLine(counterHeader); +} + +private String createLine(final String valueToUnderscore) { +final StringBuilder processorBorderBuilder = new StringBuilder(valueToUnderscore.length()); +for (int i = 0; i < valueToUnderscore.length(); i++) { +processorBorderBuilder.append('-'); +} +return processorBorderBuilder.toString(); +} + +@Override +public void run() { +try { +if (!logger.isInfoEnabled()) { +return; +} + +logFlowFileEvents(); +logCounters(); +} catch (final Exception e) { +logger.error("Failed to log component statuses", e); +} +} + +private void logFlowFileEvents() { +final long timestamp = System.currentTimeMillis(); +final ProcessGroup rootGroup = flowManager.getRootGroup(); +final List<ProcessorNode> allProcessors = rootGroup.findAllProcessors(); + +long totalNanos = 0L; +final List<ProcessorAndEvent> processorsAndEvents = new ArrayList<>(); +for (final ProcessorNode processorNode : allProcessors) { +final FlowFileEvent flowFileEvent = flowFileEventRepository.reportTransferEvents(processorNode.getIdentifier(), timestamp); +if (flowFileEvent == null) { +continue; +} + +processorsAndEvents.add(new ProcessorAndEvent(processorNode, flowFileEvent)); +totalNanos +=
[GitHub] [nifi] markap14 commented on a change in pull request #5101: NIFI-8629: Implemented the LogComponentStatuses task that runs period…
markap14 commented on a change in pull request #5101: URL: https://github.com/apache/nifi/pull/5101#discussion_r640013747 ## File path: nifi-stateless/nifi-stateless-bundle/nifi-stateless-engine/src/main/java/org/apache/nifi/controller/reporting/LogComponentStatuses.java ## @@ -0,0 +1,205 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.nifi.controller.reporting; + +import org.apache.nifi.controller.Counter; +import org.apache.nifi.controller.ProcessorNode; +import org.apache.nifi.controller.flow.FlowManager; +import org.apache.nifi.controller.repository.CounterRepository; +import org.apache.nifi.controller.repository.FlowFileEvent; +import org.apache.nifi.controller.repository.FlowFileEventRepository; +import org.apache.nifi.groups.ProcessGroup; +import org.apache.nifi.util.FormatUtils; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import java.util.ArrayList; +import java.util.Comparator; +import java.util.List; +import java.util.Map; +import java.util.concurrent.ConcurrentHashMap; + +public class LogComponentStatuses implements Runnable { +private static final Logger logger = LoggerFactory.getLogger(LogComponentStatuses.class); + +private static final String PROCESSOR_LINE_FORMAT = "| %1$-30.30s | %2$-36.36s | %3$-30.30s | %4$28.28s | %5$30.30s | %6$14.14s | %7$14.14s | %8$28.28s |\n"; +private static final String COUNTER_LINE_FORMAT = "| %1$-36.36s | %2$-36.36s | %3$28.28s | %4$28.28s |\n"; + +private final FlowFileEventRepository flowFileEventRepository; +private final CounterRepository counterRepository; +private final FlowManager flowManager; + +private final String processorHeader; +private final String processorBorderLine; +private final String counterHeader; +private final String counterBorderLine; + +private final Map<String, Long> previousCounterValues = new ConcurrentHashMap<>(); +private volatile long lastTriggerTime = System.currentTimeMillis(); + +public LogComponentStatuses(final FlowFileEventRepository flowFileEventRepository, final CounterRepository counterRepository, final FlowManager flowManager) { +this.flowFileEventRepository = flowFileEventRepository; +this.counterRepository = counterRepository; +this.flowManager = flowManager; + +processorHeader = String.format(PROCESSOR_LINE_FORMAT, "Processor Name", "Processor ID", "Processor Type", "Bytes 
Read/sec", "Bytes Written/sec", "Tasks/sec", "Nanos/Task", +"Percent of Processing Time"); +processorBorderLine = createLine(processorHeader); + +counterHeader = String.format(COUNTER_LINE_FORMAT, "Counter Context", "Counter Name", "Counter Value", "Increase/sec"); +counterBorderLine = createLine(counterHeader); +} + +private String createLine(final String valueToUnderscore) { +final StringBuilder processorBorderBuilder = new StringBuilder(valueToUnderscore.length()); +for (int i = 0; i < valueToUnderscore.length(); i++) { +processorBorderBuilder.append('-'); +} +return processorBorderBuilder.toString(); +} + +@Override +public void run() { +try { +if (!logger.isInfoEnabled()) { +return; +} + +logFlowFileEvents(); +logCounters(); +} catch (final Exception e) { +logger.error("Failed to log component statuses", e); +} +} + +private void logFlowFileEvents() { +final long timestamp = System.currentTimeMillis(); +final ProcessGroup rootGroup = flowManager.getRootGroup(); +final List<ProcessorNode> allProcessors = rootGroup.findAllProcessors(); + +long totalNanos = 0L; +final List<ProcessorAndEvent> processorsAndEvents = new ArrayList<>(); +for (final ProcessorNode processorNode : allProcessors) { +final FlowFileEvent flowFileEvent = flowFileEventRepository.reportTransferEvents(processorNode.getIdentifier(), timestamp); +if (flowFileEvent == null) { +continue; +} + +processorsAndEvents.add(new ProcessorAndEvent(processorNode, flowFileEvent)); +totalNanos +=
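[Editor's note] The diff above declares previousCounterValues and lastTriggerTime fields, which suggests how the "Increase/sec" column is derived; the actual logCounters() body is not shown in the quoted context, so the arithmetic below is a hypothetical sketch.

```java
// Hypothetical rate computation for the "Increase/sec" column: compare the
// current counter sample against the previous one over the elapsed interval.
public class CounterRateSketch {

    static double increasePerSecond(final long previousValue, final long currentValue,
                                    final long lastTriggerTime, final long now) {
        final double elapsedSeconds = (now - lastTriggerTime) / 1000.0;
        if (elapsedSeconds <= 0) {
            return 0.0; // avoid division by zero on the first trigger
        }
        return (currentValue - previousValue) / elapsedSeconds;
    }

    public static void main(String[] args) {
        // 60 counter increments over a 60-second window
        System.out.println(increasePerSecond(100, 160, 0, 60_000));
    }
}
```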
[GitHub] [nifi] markap14 commented on a change in pull request #5101: NIFI-8629: Implemented the LogComponentStatuses task that runs period…
markap14 commented on a change in pull request #5101: URL: https://github.com/apache/nifi/pull/5101#discussion_r640012388 ## File path: nifi-stateless/nifi-stateless-bundle/nifi-stateless-engine/src/main/java/org/apache/nifi/controller/reporting/LogComponentStatuses.java ## @@ -0,0 +1,205 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.nifi.controller.reporting; + +import org.apache.nifi.controller.Counter; +import org.apache.nifi.controller.ProcessorNode; +import org.apache.nifi.controller.flow.FlowManager; +import org.apache.nifi.controller.repository.CounterRepository; +import org.apache.nifi.controller.repository.FlowFileEvent; +import org.apache.nifi.controller.repository.FlowFileEventRepository; +import org.apache.nifi.groups.ProcessGroup; +import org.apache.nifi.util.FormatUtils; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import java.util.ArrayList; +import java.util.Comparator; +import java.util.List; +import java.util.Map; +import java.util.concurrent.ConcurrentHashMap; + +public class LogComponentStatuses implements Runnable { +private static final Logger logger = LoggerFactory.getLogger(LogComponentStatuses.class); + +private static final String PROCESSOR_LINE_FORMAT = "| %1$-30.30s | %2$-36.36s | %3$-30.30s | %4$28.28s | %5$30.30s | %6$14.14s | %7$14.14s | %8$28.28s |\n"; +private static final String COUNTER_LINE_FORMAT = "| %1$-36.36s | %2$-36.36s | %3$28.28s | %4$28.28s |\n"; + +private final FlowFileEventRepository flowFileEventRepository; +private final CounterRepository counterRepository; +private final FlowManager flowManager; + +private final String processorHeader; +private final String processorBorderLine; +private final String counterHeader; +private final String counterBorderLine; + +private final Map<String, Long> previousCounterValues = new ConcurrentHashMap<>(); +private volatile long lastTriggerTime = System.currentTimeMillis(); + +public LogComponentStatuses(final FlowFileEventRepository flowFileEventRepository, final CounterRepository counterRepository, final FlowManager flowManager) { +this.flowFileEventRepository = flowFileEventRepository; +this.counterRepository = counterRepository; +this.flowManager = flowManager; + +processorHeader = String.format(PROCESSOR_LINE_FORMAT, "Processor Name", "Processor ID", "Processor Type", "Bytes 
Read/sec", "Bytes Written/sec", "Tasks/sec", "Nanos/Task", +"Percent of Processing Time"); +processorBorderLine = createLine(processorHeader); + +counterHeader = String.format(COUNTER_LINE_FORMAT, "Counter Context", "Counter Name", "Counter Value", "Increase/sec"); +counterBorderLine = createLine(counterHeader); +} + +private String createLine(final String valueToUnderscore) { +final StringBuilder processorBorderBuilder = new StringBuilder(valueToUnderscore.length()); +for (int i = 0; i < valueToUnderscore.length(); i++) { +processorBorderBuilder.append('-'); +} +return processorBorderBuilder.toString(); +} + +@Override +public void run() { +try { +if (!logger.isInfoEnabled()) { +return; +} + +logFlowFileEvents(); +logCounters(); Review comment: I don't think so. I am a big proponent of the fail-fast or escape-fast approach. The original here conveys the meaning more clearly: "if logger's INFO level is not enabled, we're done. Do nothing." The refactored approach says "if the logger is enabled, here are the different things I want to do." As the code is refactored/updated in the future, the logic within the if conditional is likely to grow, to the point of it not being as obvious what the intent is. This is a general practice that I try to always adhere to, as you'll see throughout the codebase, and it tends to lead to cleaner code over time, I think. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
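[Editor's note] The guard-clause ("fail-fast / escape-fast") style argued for in the comment above can be shown side by side with the nested alternative. Both methods below behave identically; the difference is that the guarded form keeps the main body at the top indentation level and makes the "disabled means do nothing" intent explicit.

```java
public class GuardClauseSketch {
    static int workDone;

    // Nested form: as this block grows over time, the intent of the check blurs.
    static void runNested(final boolean infoEnabled) {
        if (infoEnabled) {
            workDone++;
        }
    }

    // Guard-clause form: the disqualifying condition exits immediately.
    static void runGuarded(final boolean infoEnabled) {
        if (!infoEnabled) {
            return; // if INFO is not enabled, we're done -- do nothing
        }
        workDone++;
    }

    public static void main(String[] args) {
        runGuarded(false);
        runGuarded(true);
        System.out.println("work done: " + workDone);
    }
}
```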
[jira] [Created] (NIFI-8634) Create Controller Services that allows user to choose between Record Readers/Writers based on Expression Language
Mark Payne created NIFI-8634: Summary: Create Controller Services that allows user to choose between Record Readers/Writers based on Expression Language Key: NIFI-8634 URL: https://issues.apache.org/jira/browse/NIFI-8634 Project: Apache NiFi Issue Type: New Feature Components: Extensions Reporter: Mark Payne Typically when a user builds a flow, the user must choose a JSON Reader, CSV Reader, JSON Writer, CSV Writer, etc., whatever makes sense for their use case. The downside to this approach is that it makes it difficult to build a flow that is more generically distributable. We should create a RecordReaderLookup service and a RecordWriterLookup service. Each would have a single well-known property for the name of the Record Reader/Writer service to use. User-defined properties would then be used to define those Record Readers/Writers. For example, one might configure RecordReaderLookup as such: {code:java} Selected Reader: #{DataFormat} csv: CSVReader json: JsonTreeReader avro: AvroReader {code} In this case, CSVReader, JsonTreeReader, and AvroReader are other Record Reader Controller Services. Now, a parameter can be defined with the name {{DataFormat}}. If that parameter has a value of {{csv}}, the CSVReader would be used. If the parameter has a value of {{json}}, the JsonTreeReader would be used, and so forth. Same principle would be followed for the Record Writer. -- This message was sent by Atlassian Jira (v8.3.4#803005)
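[Editor's note] The lookup pattern proposed in NIFI-8634 above amounts to a name-to-service map selected by one well-known property. The sketch below is hypothetical: the RecordReader interface and method names are illustrative stand-ins, not NiFi APIs, and the selected name plays the role of the resolved #{DataFormat} parameter.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical lookup: user-defined names map to reader services; a single
// well-known property value selects which one to use at runtime.
public class RecordReaderLookupSketch {

    interface RecordReader {
        String describe();
    }

    private final Map<String, RecordReader> readersByName = new HashMap<>();

    void registerReader(final String name, final RecordReader reader) {
        readersByName.put(name, reader);
    }

    RecordReader selectReader(final String selectedName) {
        final RecordReader reader = readersByName.get(selectedName);
        if (reader == null) {
            throw new IllegalArgumentException("No Record Reader registered for name: " + selectedName);
        }
        return reader;
    }

    public static void main(String[] args) {
        final RecordReaderLookupSketch lookup = new RecordReaderLookupSketch();
        lookup.registerReader("csv", () -> "CSVReader");
        lookup.registerReader("json", () -> "JsonTreeReader");
        // "csv" stands in for the resolved value of the DataFormat parameter
        System.out.println(lookup.selectReader("csv").describe());
    }
}
```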
[jira] [Commented] (NIFI-8633) Content Repository can be improved to make fewer disk accesses on read
[ https://issues.apache.org/jira/browse/NIFI-8633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17351938#comment-17351938 ] Mark Payne commented on NIFI-8633: -- For those interested in the actual performance numbers here, I ran a pretty simple flow that generated a lot of tiny JSON messages, and then used ConvertRecord to convert from JSON to Avro. Ran a profiler against it and found that about 50% of the time for ConvertRecord was spent in {{FileSystemRepository.read()}}. This is called twice: once when we read the data to infer the schema, and a second time when we parse the data. Of the time spent in {{FileSystemRepository.read()}}, about 50% of that time was spent in {{Files.exists()}}. So this should improve performance of that flow by something like 25%. > Content Repository can be improved to make fewer disk accesses on read > --- > > Key: NIFI-8633 > URL: https://issues.apache.org/jira/browse/NIFI-8633 > Project: Apache NiFi > Issue Type: Improvement > Components: Core Framework >Reporter: Mark Payne >Assignee: Mark Payne >Priority: Major > Fix For: 1.14.0 > > Time Spent: 10m > Remaining Estimate: 0h > > When {{FileSystemRepository.read(ContentClaim)}} or > {{FileSystemRepository.read(ResourceClaim)}} is called, the repository > determines the file path for the claim via {{getPath(claim, true);}} where > the true indicates that we should verify that the file exists. > This is done so that if we were to pass in a ContentClaim that does not > exist, we throw a more meaningful ContentNotFoundException instead of just > letting a FileNotFoundException fly. > However, this call to {{Files.exists(Path)}} is fairly expensive, as it's a > disk access. For a flow that uses a lot of smaller files, this can be > extremely expensive. > We can improve this by removing the call to {{Files.exists}} altogether. 
> Instead, just blindly create the {{FileInputStream}} in a try/catch block and > catch FileNotFoundException, and then wrap that in a > {{ContentNotFoundException}}. This results in the same API and the same > contracts as before but avoids the overhead of additional disk accesses/seeks. -- This message was sent by Atlassian Jira (v8.3.4#803005)
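The open-then-catch approach described in the ticket can be sketched as follows. `ContentNotFoundException` here is a local stand-in for illustration, not NiFi's actual class:

```java
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.InputStream;

public class OpenWithoutExistsCheck {

    // Local stand-in for NiFi's ContentNotFoundException.
    static class ContentNotFoundException extends RuntimeException {
        ContentNotFoundException(String message, Throwable cause) {
            super(message, cause);
        }
    }

    // Open the file directly instead of probing with Files.exists() first:
    // a missing file surfaces as FileNotFoundException, which we translate
    // into the domain exception. Same API contract, one fewer disk access
    // on every successful read.
    static InputStream open(String path) {
        try {
            return new FileInputStream(path);
        } catch (FileNotFoundException e) {
            throw new ContentNotFoundException("Could not find content at " + path, e);
        }
    }

    public static void main(String[] args) {
        try {
            open("/no/such/claim");
        } catch (ContentNotFoundException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

The existence check and the open were never atomic anyway (the file could vanish between the two calls), so catching the exception is not just faster but also closes a small race window.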
[jira] [Updated] (NIFI-8633) Content Repository can be improved to make fewer disk accesses on read
[ https://issues.apache.org/jira/browse/NIFI-8633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Payne updated NIFI-8633: - Fix Version/s: 1.14.0 Status: Patch Available (was: Open) > Content Repository can be improved to make fewer disk accesses on read > --- > > Key: NIFI-8633 > URL: https://issues.apache.org/jira/browse/NIFI-8633 > Project: Apache NiFi > Issue Type: Improvement > Components: Core Framework >Reporter: Mark Payne >Assignee: Mark Payne >Priority: Major > Fix For: 1.14.0 > > > When {{FileSystemRepository.read(ContentClaim)}} or > {{FileSystemRepository.read(ResourceClaim)}} is called, the repository > determines the file path for the claim via {{getPath(claim, true);}} where > the true indicates that we should verify that the file exists. > This is done so that if we were to pass in a ContentClaim that does not > exist, we throw a more meaningful ContentNotFoundException instead of just > letting a FileNotFoundException fly. > However, this call to {{Files.exists(Path)}} is fairly expensive, as it's a > disk access. For a flow that uses a lot of smaller files, this can be > extremely expensive. > We can improve this by removing the call to {{Files.exists}} altogether. > Instead, just blindly create the {{FileInputStream}} in a try/catch block and > catch FileNotFoundException, and then wrap that in a > {{ContentNotFoundException}}. This results in the same API and the same > contracts as before but avoids the overhead of additional disk accesses/seeks. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [nifi] markap14 opened a new pull request #5104: NIFI-8633: When reading a Content/Resource Claim from FileSystemRepos…
markap14 opened a new pull request #5104: URL: https://github.com/apache/nifi/pull/5104 …itory, avoid the unnecessary Files.exists call and instead just create a FileInputStream, catching FileNotFoundException Thank you for submitting a contribution to Apache NiFi. Please provide a short description of the PR here: Description of PR _Enables X functionality; fixes bug NIFI-._ In order to streamline the review of the contribution we ask you to ensure the following steps have been taken: ### For all changes: - [ ] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? - [ ] Does your PR title start with **NIFI-** where is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character. - [ ] Has your PR been rebased against the latest commit within the target branch (typically `main`)? - [ ] Is your initial contribution a single, squashed commit? _Additional commits in response to PR reviewer feedback should be made on this branch and pushed to allow change tracking. Do not `squash` or use `--force` when pushing to allow for clean monitoring of changes._ ### For code changes: - [ ] Have you ensured that the full suite of tests is executed via `mvn -Pcontrib-check clean install` at the root `nifi` folder? - [ ] Have you written or updated unit tests to verify your changes? - [ ] Have you verified that the full build is successful on JDK 8? - [ ] Have you verified that the full build is successful on JDK 11? - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the `LICENSE` file, including the main `LICENSE` file under `nifi-assembly`? - [ ] If applicable, have you updated the `NOTICE` file, including the main `NOTICE` file found under `nifi-assembly`? 
- [ ] If adding new Properties, have you added `.displayName` in addition to .name (programmatic access) for each of the new properties? ### For documentation related changes: - [ ] Have you ensured that format looks appropriate for the output in which it is rendered? ### Note: Please ensure that once the PR is submitted, you check GitHub Actions CI for build issues and submit an update to your PR as soon as possible. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Created] (NIFI-8633) Content Repository can be improved to make fewer disk accesses on read
Mark Payne created NIFI-8633: Summary: Content Repository can be improved to make fewer disk accesses on read Key: NIFI-8633 URL: https://issues.apache.org/jira/browse/NIFI-8633 Project: Apache NiFi Issue Type: Improvement Components: Core Framework Reporter: Mark Payne Assignee: Mark Payne When {{FileSystemRepository.read(ContentClaim)}} or {{FileSystemRepository.read(ResourceClaim)}} is called, the repository determines the file path for the claim via {{getPath(claim, true);}} where the true indicates that we should verify that the file exists. This is done so that if we were to pass in a ContentClaim that does not exist, we throw a more meaningful ContentNotFoundException instead of just letting a FileNotFoundException fly. However, this call to {{Files.exists(Path)}} is fairly expensive, as it's a disk access. For a flow that uses a lot of smaller files, this can be extremely expensive. We can improve this by removing the call to {{Files.exists}} altogether. Instead, just blindly create the {{FileInputStream}} in a try/catch block and catch FileNotFoundException, and then wrap that in a {{ContentNotFoundException}}. This results in the same API and the same contracts as before but avoids the overhead of additional disk accesses/seeks. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [nifi-minifi-cpp] martinzink commented on a change in pull request #1081: MINIFICPP-1565: Minor improvements to PerformanceDataMonitor
martinzink commented on a change in pull request #1081: URL: https://github.com/apache/nifi-minifi-cpp/pull/1081#discussion_r639893584 ## File path: extensions/pdh/tests/PerformanceDataCounterTests.cpp ## @@ -122,8 +122,8 @@ TEST_CASE("PDHCounterDataCollectionTest", "[pdhcounterdatacollectiontest]") { REQUIRE(int_counter.collectData()); rapidjson::Document document(rapidjson::kObjectType); - double_counter.addToJson(document, document.GetAllocator()); - int_counter.addToJson(document, document.GetAllocator()); + double_counter.addToJson(document, document.GetAllocator(), utils::optional()); + int_counter.addToJson(document, document.GetAllocator(), utils::optional()); Review comment: yeah that makes the code more readable, changed it -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [nifi-minifi-cpp] szaszm opened a new pull request #1083: MINIFICPP-1507 convert OutputStream::write to size_t
szaszm opened a new pull request #1083: URL: https://github.com/apache/nifi-minifi-cpp/pull/1083 Thank you for submitting a contribution to Apache NiFi - MiNiFi C++. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken: ### For all changes: - [x] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? - [x] Does your PR title start with MINIFICPP- where is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character. - [x] Has your PR been rebased against the latest commit within the target branch (typically main)? - [x] Is your initial contribution a single, squashed commit? ### For code changes: - [x] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [x] If applicable, have you updated the LICENSE file? - [x] If applicable, have you updated the NOTICE file? ### For documentation related changes: - [x] Have you ensured that format looks appropriate for the output in which it is rendered? ### Note: Please ensure that once the PR is submitted, you check GitHub Actions CI results for build issues and submit an update to your PR as soon as possible. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (NIFI-8361) UnpackContent failing for Deflate:Maximum
[ https://issues.apache.org/jira/browse/NIFI-8361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Handermann updated NIFI-8361: --- Status: Patch Available (was: In Progress) > UnpackContent failing for Deflate:Maximum > - > > Key: NIFI-8361 > URL: https://issues.apache.org/jira/browse/NIFI-8361 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 1.13.2, 1.13.0 > Environment: Ubuntu 18.04.5 (1.13.2) > RHEL 7.7 (1.12.1) >Reporter: Tom P >Assignee: David Handermann >Priority: Major > Labels: newbie > Attachments: zip_deflate-maximum.xml > > Time Spent: 10m > Remaining Estimate: 0h > > Hi team, > Using 1.13.2 and running a pipeline pulling down a bunch of ZIP files, and > noticed a regression in behaviour between my two environments. > The 1.12.1 instance (running on RHEL) was able to unpack the file > successfully, whereas the 1.13.2 instance complains of an error stating > {code:java} > 2021-03-23 04:12:39,361 ERROR [Timer-Driven Process Thread-8] > o.a.n.processors.standard.UnpackContent > UnpackContent[id=5d0fda44-0178-1000-872e-6c183c633c89] Unable to unpack > StandardFlowFileRecord[uuid=9fa7650e-8557-465c-b39b-0e9b5e25ee0a,claim=StandardContentClaim > [resourceClaim=StandardResourceClaim[id=1616471834548-154, > container=default, section=154], offset=0, > length=11095546],offset=0,name=3b70dbf9-b0a1-4d63-b2fd-0efe2a7291b8,size=11095546] > because it does not appear to have any entries; routing to failure{code} > The only discernable difference between this file and files that were able to > be unpacked was that the offending files have > * a "compression method" of Deflate:Maximum (as opposed to Deflate on the > working files) > * an "offset" of 4 (as opposed to "0" on the working files) > See attached for the template I used for testing the same functionality on my > 1.13.2 and 1.12.1 NiFi instances. 
I've downgraded the Ubuntu instance to > 1.12.1 and noted that the UnpackContent processor functions as expected, so I > don't believe it's an issue with the host OS. > I'm not sure whether the issue introduced in 1.13.1 might also have impacted > this, with the offset, or if it's something to do with the compression > method, or something else entirely. > Happy to provide further detail if needed :) > Cheers, > Tom > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [nifi] exceptionfactory opened a new pull request #5103: NIFI-8361 Upgraded Zip4j to 2.8.0
exceptionfactory opened a new pull request #5103: URL: https://github.com/apache/nifi/pull/5103 Description of PR NIFI-8361 Upgrades Zip4j from 2.7.0 to 2.8.0 in order to resolve issues with reading certain Zip files that contain temporary spanning markers. This upgrade resolves the issue with `UnpackContent` through [PR 303](https://github.com/srikanth-lingala/zip4j/pull/303) in Zip4j. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken: ### For all changes: - [X] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? - [X] Does your PR title start with **NIFI-** where is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character. - [X] Has your PR been rebased against the latest commit within the target branch (typically `main`)? - [X] Is your initial contribution a single, squashed commit? _Additional commits in response to PR reviewer feedback should be made on this branch and pushed to allow change tracking. Do not `squash` or use `--force` when pushing to allow for clean monitoring of changes._ ### For code changes: - [X] Have you ensured that the full suite of tests is executed via `mvn -Pcontrib-check clean install` at the root `nifi` folder? - [ ] Have you written or updated unit tests to verify your changes? - [X] Have you verified that the full build is successful on JDK 8? - [X] Have you verified that the full build is successful on JDK 11? - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the `LICENSE` file, including the main `LICENSE` file under `nifi-assembly`? - [X] If applicable, have you updated the `NOTICE` file, including the main `NOTICE` file found under `nifi-assembly`? 
- [ ] If adding new Properties, have you added `.displayName` in addition to .name (programmatic access) for each of the new properties? ### For documentation related changes: - [ ] Have you ensured that format looks appropriate for the output in which it is rendered? ### Note: Please ensure that once the PR is submitted, you check GitHub Actions CI for build issues and submit an update to your PR as soon as possible. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (NIFI-8628) Variable Registry - Variable count doesn't reset when opening the variable dialog
[ https://issues.apache.org/jira/browse/NIFI-8628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pierre Villard updated NIFI-8628: - Fix Version/s: 1.14.0 Resolution: Fixed Status: Resolved (was: Patch Available) > Variable Registry - Variable count doesn't reset when opening the variable > dialog > - > > Key: NIFI-8628 > URL: https://issues.apache.org/jira/browse/NIFI-8628 > Project: Apache NiFi > Issue Type: Improvement > Components: Core UI >Reporter: Hsin-Ying Lee >Assignee: Hsin-Ying Lee >Priority: Minor > Fix For: 1.14.0 > > Time Spent: 10m > Remaining Estimate: 0h > > `variablesCount` doesn't reset when opening the variable dialogs, > `variablesCount` will grow each time. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (NIFI-8628) Variable Registry - Variable count doesn't reset when opening the variable dialog
[ https://issues.apache.org/jira/browse/NIFI-8628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17351900#comment-17351900 ] ASF subversion and git services commented on NIFI-8628: --- Commit 1e1c4462431776716b731bb59abe022be2ff70bb in nifi's branch refs/heads/main from s9514171 [ https://gitbox.apache.org/repos/asf?p=nifi.git;h=1e1c446 ] NIFI-8628 - Variable Registry - Variable count doesn't reset when opening the variable dialog Signed-off-by: Pierre Villard This closes #5097. > Variable Registry - Variable count doesn't reset when opening the variable > dialog > - > > Key: NIFI-8628 > URL: https://issues.apache.org/jira/browse/NIFI-8628 > Project: Apache NiFi > Issue Type: Improvement > Components: Core UI >Reporter: Hsin-Ying Lee >Assignee: Hsin-Ying Lee >Priority: Minor > Time Spent: 10m > Remaining Estimate: 0h > > `variablesCount` doesn't reset when opening the variable dialogs, > `variablesCount` will grow each time. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [nifi] asfgit closed pull request #5097: NIFI-8628 - Variable Registry - Variable count doesn't reset when ope…
asfgit closed pull request #5097: URL: https://github.com/apache/nifi/pull/5097 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #1081: MINIFICPP-1565: Minor improvements to PerformanceDataMonitor
szaszm commented on a change in pull request #1081: URL: https://github.com/apache/nifi-minifi-cpp/pull/1081#discussion_r639858130 ## File path: extensions/pdh/tests/PerformanceDataCounterTests.cpp ## @@ -122,8 +122,8 @@ TEST_CASE("PDHCounterDataCollectionTest", "[pdhcounterdatacollectiontest]") { REQUIRE(int_counter.collectData()); rapidjson::Document document(rapidjson::kObjectType); - double_counter.addToJson(document, document.GetAllocator()); - int_counter.addToJson(document, document.GetAllocator()); + double_counter.addToJson(document, document.GetAllocator(), utils::optional()); + int_counter.addToJson(document, document.GetAllocator(), utils::optional()); Review comment: You can use `utils::nullopt` for empty optionals. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (NIFI-8625) ExecuteScript processor always stuck after restart or multi thread
[ https://issues.apache.org/jira/browse/NIFI-8625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17351886#comment-17351886 ] Otto Fowler commented on NIFI-8625: --- So this is a duplicate? > ExecuteScript processor always stuck after restart or multi thread > -- > > Key: NIFI-8625 > URL: https://issues.apache.org/jira/browse/NIFI-8625 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 1.13.2 >Reporter: KevinSky >Priority: Critical > Labels: ExecuteScript, stuck > Attachments: executeScript_nifi_8625.xml, > image-2021-05-21-16-22-34-775.png > > > With a single thread, when ExecuteScript is simply stopped and started, flow files always get stuck > in connections. > With multiple threads, ExecuteScript always throws exceptions like "is already marked > for transfer" or "is not known in this session". > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (NIFI-8631) ConsumeGCPubSub acknowledges messages without committing the session
[ https://issues.apache.org/jira/browse/NIFI-8631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Payne updated NIFI-8631: - Status: Patch Available (was: Open) > ConsumeGCPubSub acknowledges messages without committing the session > > > Key: NIFI-8631 > URL: https://issues.apache.org/jira/browse/NIFI-8631 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Reporter: Mark Payne >Assignee: Mark Payne >Priority: Critical > > ConsumeGCPubSub always acknowledges receipt of messages without committing > the session. As a result, if NiFi fails to commit the session, the message > will be lost. -- This message was sent by Atlassian Jira (v8.3.4#803005)
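The fix direction implied by this ticket, acknowledging only after the session commit succeeds, can be sketched generically. `Message` and its `ack()` method below are hypothetical stand-ins for the Pub/Sub client API, not NiFi or Google Cloud classes:

```java
import java.util.List;

public class AckAfterCommitSketch {

    // Hypothetical message with an acknowledge callback, standing in for
    // a Pub/Sub client's received-message handle.
    static class Message {
        final String payload;
        boolean acked;

        Message(String payload) {
            this.payload = payload;
        }

        void ack() {
            acked = true;
        }
    }

    // Deferring acknowledgement until the commit succeeds means a failed
    // commit leaves every message unacknowledged, so the broker redelivers
    // them instead of the data being lost.
    static void process(List<Message> batch, Runnable commitSession) {
        commitSession.run();   // may throw; if it does, nothing below runs
        for (Message m : batch) {
            m.ack();           // only reached after a successful commit
        }
    }
}
```

Acknowledging before the commit gives at-most-once delivery (the bug described here); acknowledging after gives at-least-once, trading possible duplicates for no loss.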
[GitHub] [nifi-minifi-cpp] martinzink commented on pull request #1081: MINIFICPP-1565: Minor improvements to PerformanceDataMonitor
martinzink commented on pull request #1081: URL: https://github.com/apache/nifi-minifi-cpp/pull/1081#issuecomment-848873724 > I would pass `utils::optional` by value. `sizeof(utils::optional)` should be 2 and copying is not expensive. > > As a rule of thumb, if `sizeof(T) <= 32` and copying doesn't incur allocation or other expensive operations, I would pass by value. > > https://isocpp.github.io/CppCoreGuidelines/CppCoreGuidelines#cp31-pass-small-amounts-of-data-between-threads-by-value-rather-than-by-reference-or-pointer Changed it [aee9ab ](https://github.com/martinzink/nifi-minifi-cpp/commit/aee9ab6b573cb5c5d699dfc5f7aee6d77015f64d) -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [nifi] markap14 opened a new pull request #5102: NIFI-8631: Ensure that GCP Pub/Sub messages are not acknowledged unti…
markap14 opened a new pull request #5102: URL: https://github.com/apache/nifi/pull/5102 …l session has been committed, in order to ensure that we don't have data loss Thank you for submitting a contribution to Apache NiFi. Please provide a short description of the PR here: Description of PR _Enables X functionality; fixes bug NIFI-._ In order to streamline the review of the contribution we ask you to ensure the following steps have been taken: ### For all changes: - [ ] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? - [ ] Does your PR title start with **NIFI-** where is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character. - [ ] Has your PR been rebased against the latest commit within the target branch (typically `main`)? - [ ] Is your initial contribution a single, squashed commit? _Additional commits in response to PR reviewer feedback should be made on this branch and pushed to allow change tracking. Do not `squash` or use `--force` when pushing to allow for clean monitoring of changes._ ### For code changes: - [ ] Have you ensured that the full suite of tests is executed via `mvn -Pcontrib-check clean install` at the root `nifi` folder? - [ ] Have you written or updated unit tests to verify your changes? - [ ] Have you verified that the full build is successful on JDK 8? - [ ] Have you verified that the full build is successful on JDK 11? - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the `LICENSE` file, including the main `LICENSE` file under `nifi-assembly`? - [ ] If applicable, have you updated the `NOTICE` file, including the main `NOTICE` file found under `nifi-assembly`? 
- [ ] If adding new Properties, have you added `.displayName` in addition to .name (programmatic access) for each of the new properties? ### For documentation related changes: - [ ] Have you ensured that format looks appropriate for the output in which it is rendered? ### Note: Please ensure that once the PR is submitted, you check GitHub Actions CI for build issues and submit an update to your PR as soon as possible. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Created] (NIFI-8632) StandardOidcIdentityProviderGroovyTest hits localhost:443
Joseph Gresock created NIFI-8632: Summary: StandardOidcIdentityProviderGroovyTest hits localhost:443 Key: NIFI-8632 URL: https://issues.apache.org/jira/browse/NIFI-8632 Project: Apache NiFi Issue Type: Bug Affects Versions: 1.14.0 Reporter: Joseph Gresock Assignee: Joseph Gresock The following test tries to contact https://localhost:443 and fails if something is actually running on that port. It should be changed to an Integration Test instead.
{code:bash}
Tests run: 12, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 2.867 sec <<< FAILURE! - in org.apache.nifi.registry.web.security.authentication.oidc.StandardOidcIdentityProviderGroovyTest
testConvertOIDCTokenToNiFiTokenShouldHandleBlankIdentityAndNoEmailClaim(org.apache.nifi.registry.web.security.authentication.oidc.StandardOidcIdentityProviderGroovyTest) Time elapsed: 0.926 sec <<< FAILURE!
java.lang.AssertionError: Closure org.apache.nifi.registry.web.security.authentication.oidc.StandardOidcIdentityProviderGroovyTest$_testConvertOIDCTokenToNiFiTokenShouldHandleBlankIdentityAndNoEmailClaim_closure5@27b2faa6 should have failed with an exception of type java.net.ConnectException, instead got Exception javax.net.ssl.SSLHandshakeException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
 at org.junit.Assert.fail(Assert.java:89)
 at groovy.test.GroovyAssert.shouldFail(GroovyAssert.java:129)
 at groovy.util.GroovyTestCase.shouldFail(GroovyTestCase.java:221)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:498)
 at org.codehaus.groovy.runtime.callsite.PlainObjectMetaMethodSite.doInvoke(PlainObjectMetaMethodSite.java:43)
 at org.codehaus.groovy.runtime.callsite.PogoMetaMethodSite$PogoCachedMethodSiteNoUnwrapNoCoerce.invoke(PogoMetaMethodSite.java:190)
 at org.codehaus.groovy.runtime.callsite.PogoMetaMethodSite.callCurrent(PogoMetaMethodSite.java:58)
 at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCallCurrent(CallSiteArray.java:51)
 at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callCurrent(AbstractCallSite.java:156)
 at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callCurrent(AbstractCallSite.java:176)
 at org.apache.nifi.registry.web.security.authentication.oidc.StandardOidcIdentityProviderGroovyTest.testConvertOIDCTokenToNiFiTokenShouldHandleBlankIdentityAndNoEmailClaim(StandardOidcIdentityProviderGroovyTest.groovy:428)
{code}
-- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [nifi-minifi-cpp] lordgamez commented on a change in pull request #1081: MINIFICPP-1565: Minor improvements to PerformanceDataMonitor
lordgamez commented on a change in pull request #1081: URL: https://github.com/apache/nifi-minifi-cpp/pull/1081#discussion_r639823830 ## File path: extensions/pdh/PerformanceDataMonitor.cpp ## @@ -264,16 +281,29 @@ void PerformanceDataMonitor::setupMembersFromProperties(const std::shared_ptrgetProperty(OutputFormatProperty.getName(), output_format_string)) { -if (output_format_string == OPEN_TELEMETRY_FORMAT_STR) { - logger_->log_trace("OutputFormat is configured to be OpenTelemetry"); +if (output_format_string == PRETTY_OPEN_TELEMETRY_FORMAT_STR || output_format_string == COMPACT_OPEN_TELEMETRY_FORMAT_STR) { output_format_ = OutputFormat::OPENTELEMETRY; -} else if (output_format_string == JSON_FORMAT_STR) { - logger_->log_trace("OutputFormat is configured to be JSON"); + pretty_output_ = output_format_string == PRETTY_OPEN_TELEMETRY_FORMAT_STR; + logger_->log_trace("OutputFormat is configured to be %s OpenTelemetry", pretty_output_ ? "pretty" : "compact"); +} else if (output_format_string == PRETTY_JSON_FORMAT_STR || output_format_string == COMPACT_JSON_FORMAT_STR) { output_format_ = OutputFormat::JSON; + pretty_output_ = output_format_string == PRETTY_JSON_FORMAT_STR; + logger_->log_trace("OutputFormat is configured to be %s JSON", pretty_output_ ? "pretty" : "compact"); } else { - logger_->log_error("Invalid OutputFormat, defaulting to JSON"); output_format_ = OutputFormat::JSON; + pretty_output_ = true; + logger_->log_error("Invalid OutputFormat, defaulting to %s JSON", pretty_output_ ? "pretty" : "compact"); +} + } + + std::string double_precision_string; + if (context->getProperty(DoublePrecisionProperty.getName(), double_precision_string)) { Review comment: Sure it's okay in a separate PR, thanks! -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Created] (NIFI-8631) ConsumeGCPubSub acknowledges messages without committing the session
Mark Payne created NIFI-8631: Summary: ConsumeGCPubSub acknowledges messages without committing the session Key: NIFI-8631 URL: https://issues.apache.org/jira/browse/NIFI-8631 Project: Apache NiFi Issue Type: Bug Components: Extensions Reporter: Mark Payne Assignee: Mark Payne ConsumeGCPubSub always acknowledges receipt of messages without committing the session. As a result, if NiFi fails to commit the session, the message will be lost. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #1028: MINIFICPP-1507 convert InputStream::read to size_t
szaszm commented on a change in pull request #1028: URL: https://github.com/apache/nifi-minifi-cpp/pull/1028#discussion_r639767608 ## File path: libminifi/test/BufferReader.h ## @@ -44,7 +44,7 @@ class BufferReader : public org::apache::nifi::minifi::InputStreamCallback { } int64_t process(const std::shared_ptr& stream) { -return write(*stream.get(), stream->size()); +return static_cast(write(*stream.get(), stream->size())); Review comment: Because I wanted `static_cast(-1)` to become `-1`. I will change it to explicitly check for error. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #1028: MINIFICPP-1507 convert InputStream::read to size_t
szaszm commented on a change in pull request #1028: URL: https://github.com/apache/nifi-minifi-cpp/pull/1028#discussion_r639766408 ## File path: libminifi/src/io/InputStream.cpp ## @@ -84,9 +85,9 @@ int InputStream::read(std::string , bool widen) { } std::vector buffer(len); - uint32_t bytes_read = gsl::narrow(read(buffer.data(), len)); + const auto bytes_read = gsl::narrow(read(buffer.data(), len)); if (bytes_read != len) { -return -1; +return static_cast(-1); Review comment: It would, but there is only one occurrence of returning -2 and I think it doesn't hit this path. That one occurrence should be cleaned up in the future anyway, so it's not an important issue IMO. I will change it to forward the original return value, but if it mattered, we would already have an issue. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (NIFI-8613) Improve FlattenJson Processor
[ https://issues.apache.org/jira/browse/NIFI-8613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nadeem updated NIFI-8613: - Description: Improvement to FlattenJson Processor to support following: 1. Unflattening a flattened json 2. Preserving primitive arrays in nested json such as string, integers, boolean etc 3. Logging errors on fail 4. Pretty printing resulted json 5. Character set Property was: Improvement to FlattenJson Processor to support following: 1. Unflattening a flattened json 2. Preserving primitive arrays in nested json such as string, integers, boolean etc 3. Logging errors on fail 4. Pretty printing resultedure json > Improve FlattenJson Processor > - > > Key: NIFI-8613 > URL: https://issues.apache.org/jira/browse/NIFI-8613 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Reporter: Nadeem >Assignee: Nadeem >Priority: Minor > Fix For: 1.14.0 > > Time Spent: 1h 20m > Remaining Estimate: 0h > > Improvement to FlattenJson Processor to support following: > 1. Unflattening a flattened json > 2. Preserving primitive arrays in nested json such as string, integers, > boolean etc > 3. Logging errors on fail > 4. Pretty printing resulted json > 5. Character set Property > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (NIFI-8613) Improve FlattenJson Processor
[ https://issues.apache.org/jira/browse/NIFI-8613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nadeem updated NIFI-8613: - Description: Improvement to FlattenJson Processor to support following: 1. Unflattening a flattened json 2. Preserving primitive arrays in nested json such as string, integers, boolean etc 3. Logging errors on fail 4. Pretty printing resultedure json was: Improvement to FlattenJson Processor to support following: 1. Unflattening a flattened json 2. Preserving primitive arrays in nested json such as string, integers, boolean etc 3. Logging errors on failure 4. Pretty printing resulted json > Improve FlattenJson Processor > - > > Key: NIFI-8613 > URL: https://issues.apache.org/jira/browse/NIFI-8613 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Reporter: Nadeem >Assignee: Nadeem >Priority: Minor > Fix For: 1.14.0 > > Time Spent: 1h 20m > Remaining Estimate: 0h > > Improvement to FlattenJson Processor to support following: > 1. Unflattening a flattened json > 2. Preserving primitive arrays in nested json such as string, integers, > boolean etc > 3. Logging errors on fail > 4. Pretty printing resultedure json >
[GitHub] [nifi-minifi-cpp] martinzink commented on a change in pull request #1081: MINIFICPP-1565: Minor improvements to PerformanceDataMonitor
martinzink commented on a change in pull request #1081: URL: https://github.com/apache/nifi-minifi-cpp/pull/1081#discussion_r639760892 ## File path: libminifi/include/utils/MathUtils.h ## @@ -0,0 +1,43 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +#pragma once + +#include <cmath> + +namespace org { +namespace apache { +namespace nifi { +namespace minifi { +namespace utils { + +class MathUtils { + public: + static double round_to(double original, int8_t precision) { +if (precision < 0) { + return original; +} else { + double power_ten = pow(10, precision); + return std::round(original * power_ten) / power_ten; Review comment: You are right, changed the function to round_to_decimal_places so it's clearer what the function does. 
## File path: extensions/pdh/PerformanceDataMonitor.cpp ## @@ -264,16 +281,29 @@ void PerformanceDataMonitor::setupMembersFromProperties(const std::shared_ptrgetProperty(OutputFormatProperty.getName(), output_format_string)) { -if (output_format_string == OPEN_TELEMETRY_FORMAT_STR) { - logger_->log_trace("OutputFormat is configured to be OpenTelemetry"); +if (output_format_string == PRETTY_OPEN_TELEMETRY_FORMAT_STR || output_format_string == COMPACT_OPEN_TELEMETRY_FORMAT_STR) { output_format_ = OutputFormat::OPENTELEMETRY; -} else if (output_format_string == JSON_FORMAT_STR) { - logger_->log_trace("OutputFormat is configured to be JSON"); + pretty_output_ = output_format_string == PRETTY_OPEN_TELEMETRY_FORMAT_STR; + logger_->log_trace("OutputFormat is configured to be %s OpenTelemetry", pretty_output_ ? "pretty" : "compact"); +} else if (output_format_string == PRETTY_JSON_FORMAT_STR || output_format_string == COMPACT_JSON_FORMAT_STR) { output_format_ = OutputFormat::JSON; + pretty_output_ = output_format_string == PRETTY_JSON_FORMAT_STR; + logger_->log_trace("OutputFormat is configured to be %s JSON", pretty_output_ ? "pretty" : "compact"); } else { - logger_->log_error("Invalid OutputFormat, defaulting to JSON"); output_format_ = OutputFormat::JSON; + pretty_output_ = true; + logger_->log_error("Invalid OutputFormat, defaulting to %s JSON", pretty_output_ ? 
"pretty" : "compact"); Review comment: good idea, changed it in [26802f7](https://github.com/martinzink/nifi-minifi-cpp/commit/26802f77ef539757fcedbffea0f596b2ef0cc828) ## File path: extensions/pdh/PerformanceDataMonitor.cpp ## @@ -102,25 +108,36 @@ void PerformanceDataMonitor::onTrigger(core::ProcessContext* context, core::Proc rapidjson::Value& body = prepareJSONBody(root); for (auto& counter : resource_consumption_counters_) { if (counter->collectData()) - counter->addToJson(body, root.GetAllocator()); + counter->addToJson(body, root.GetAllocator(), double_precision_); + } + if (pretty_output_) { +utils::PrettyJsonOutputCallback callback(std::move(root)); +session->write(flowFile, &callback); +session->transfer(flowFile, Success); + } else { +utils::JsonOutputCallback callback(std::move(root)); +session->write(flowFile, &callback); +session->transfer(flowFile, Success); } - utils::JsonOutputCallback callback(std::move(root)); - session->write(flowFile, &callback); - session->transfer(flowFile, Success); } void PerformanceDataMonitor::initialize() { - setSupportedProperties({ CustomPDHCounters, PredefinedGroups, OutputFormatProperty }); + setSupportedProperties({ CustomPDHCounters, PredefinedGroups, OutputFormatProperty, DoublePrecisionProperty }); Review comment: changed to DecimalPlaces in [26802f7](https://github.com/martinzink/nifi-minifi-cpp/commit/26802f77ef539757fcedbffea0f596b2ef0cc828) ## File path: extensions/pdh/PerformanceDataMonitor.cpp ## @@ -45,9 +45,15 @@ core::Property PerformanceDataMonitor::CustomPDHCounters( core::Property PerformanceDataMonitor::OutputFormatProperty( core::PropertyBuilder::createProperty("Output Format")-> withDescription("Format of the created flowfiles")-> -withAllowableValue(JSON_FORMAT_STR)-> -withAllowableValue(OPEN_TELEMETRY_FORMAT_STR)-> -
[GitHub] [nifi-minifi-cpp] martinzink commented on a change in pull request #1081: MINIFICPP-1565: Minor improvements to PerformanceDataMonitor
martinzink commented on a change in pull request #1081: URL: https://github.com/apache/nifi-minifi-cpp/pull/1081#discussion_r639761624 ## File path: libminifi/include/utils/MathUtils.h ## @@ -0,0 +1,43 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +#pragma once + +#include <cmath> + +namespace org { +namespace apache { +namespace nifi { +namespace minifi { +namespace utils { + +class MathUtils { + public: + static double round_to(double original, int8_t precision) { +if (precision < 0) { + return original; Review comment: good idea, thanks; removed the IF, now it works as you suggested (also renamed to round_to_decimal_places) in [26802f7](https://github.com/martinzink/nifi-minifi-cpp/commit/26802f77ef539757fcedbffea0f596b2ef0cc828) ## File path: extensions/pdh/PerformanceDataMonitor.cpp ## @@ -45,9 +45,15 @@ core::Property PerformanceDataMonitor::CustomPDHCounters( core::Property PerformanceDataMonitor::OutputFormatProperty( core::PropertyBuilder::createProperty("Output Format")-> withDescription("Format of the created flowfiles")-> -withAllowableValue(JSON_FORMAT_STR)-> -withAllowableValue(OPEN_TELEMETRY_FORMAT_STR)-> -withDefaultValue(JSON_FORMAT_STR)->build()); +withAllowableValue(PRETTY_JSON_FORMAT_STR)-> +withAllowableValue(COMPACT_JSON_FORMAT_STR)-> +withAllowableValue(PRETTY_OPEN_TELEMETRY_FORMAT_STR)-> +withAllowableValue(COMPACT_OPEN_TELEMETRY_FORMAT_STR)-> +withDefaultValue(PRETTY_JSON_FORMAT_STR)->build()); + +core::Property PerformanceDataMonitor::DoublePrecisionProperty( + core::PropertyBuilder::createProperty("Double Precision")-> + withDescription("Rounds the double values to this precision (blank for maximum precision)")->build()); Review comment: Good point, changed it to "The number of decimal places to round the values to (blank for no rounding)" in [26802f7](https://github.com/martinzink/nifi-minifi-cpp/commit/26802f77ef539757fcedbffea0f596b2ef0cc828)
[GitHub] [nifi] markap14 commented on pull request #5065: NIFI-8528 Migrate NiFi Registry into NiFi codebase
markap14 commented on pull request #5065: URL: https://github.com/apache/nifi/pull/5065#issuecomment-848775366 Thanks @tpalfy for merging these! And thanks to @exceptionfactory and @kevdoran for the review & feedback provided also. I agree with you both that things look good now and I'm a +1. Merged to main!
[GitHub] [nifi] exceptionfactory commented on a change in pull request #5101: NIFI-8629: Implemented the LogComponentStatuses task that runs period…
exceptionfactory commented on a change in pull request #5101: URL: https://github.com/apache/nifi/pull/5101#discussion_r639751565 ## File path: nifi-stateless/nifi-stateless-bundle/nifi-stateless-engine/src/main/java/org/apache/nifi/stateless/flow/StandardStatelessFlow.java ## @@ -33,6 +33,7 @@ import org.apache.nifi.controller.ReportingTaskNode; import org.apache.nifi.controller.queue.FlowFileQueue; import org.apache.nifi.controller.queue.QueueSize; +import org.apache.nifi.controller.reporting.LogComponentStatuses; Review comment: It looks like the automated build tagged this line as an unused import Checkstyle violation.
[GitHub] [nifi] exceptionfactory commented on a change in pull request #5101: NIFI-8629: Implemented the LogComponentStatuses task that runs period…
exceptionfactory commented on a change in pull request #5101: URL: https://github.com/apache/nifi/pull/5101#discussion_r639747815 ## File path: nifi-stateless/nifi-stateless-bundle/nifi-stateless-engine/src/main/java/org/apache/nifi/stateless/flow/StandardStatelessFlow.java ## @@ -212,23 +217,39 @@ public void initialize() { logger.info("Successfully initialized components in {} millis ({} millis to perform validation, {} millis for services to enable)", initializationMillis, validationMillis, serviceEnableMillis); -runDataflowExecutor = Executors.newFixedThreadPool(1, r -> { -final Thread thread = Executors.defaultThreadFactory().newThread(r); -final String flowName = dataflowDefinition.getFlowName(); -if (flowName == null) { -thread.setName("Run Dataflow"); -} else { -thread.setName("Run Dataflow " + flowName); -} +// Create executor for dataflow +final String flowName = dataflowDefinition.getFlowName(); +final String threadName = (flowName == null) ? "Run Dataflow" : "Run Dataflow " + flowName; +runDataflowExecutor = Executors.newFixedThreadPool(1, createNamedThreadFactory(threadName, false)); -return thread; -}); +// Periodically log component statuses +backgroundTaskExecutor = Executors.newScheduledThreadPool(1, createNamedThreadFactory("Background Tasks", true)); +backgroundTasks.forEach(task -> backgroundTaskExecutor.scheduleWithFixedDelay(task.getTask(), task.getSchedulingPeriod(), task.getSchedulingPeriod(), task.getSchedulingUnit())); } catch (final Throwable t) { processScheduler.shutdown(); throw t; } } +private ThreadFactory createNamedThreadFactory(final String name, final boolean daemon) { +return (Runnable r) -> { +final Thread thread = Executors.defaultThreadFactory().newThread(r); +thread.setName(name); +thread.setDaemon(daemon); +return thread; +}; +} + +/** + * Schedules the given background task to run periodically after the dataflow has been initialized until it has been shutdown + * @param task the task to run + * @param period how often to 
run it + * @param unit the unit for the time period + */ +public void scheduleBackgroundTask(final Runnable task, final long period, final TimeUnit unit) { Review comment: Should this be a new method in the `StatelessDataflow` interface, or is it intentional that this exists only on `StandardStatelessFlow`? ## File path: nifi-stateless/nifi-stateless-bundle/nifi-stateless-engine/src/main/java/org/apache/nifi/controller/reporting/LogComponentStatuses.java ## @@ -0,0 +1,205 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.nifi.controller.reporting; + +import org.apache.nifi.controller.Counter; +import org.apache.nifi.controller.ProcessorNode; +import org.apache.nifi.controller.flow.FlowManager; +import org.apache.nifi.controller.repository.CounterRepository; +import org.apache.nifi.controller.repository.FlowFileEvent; +import org.apache.nifi.controller.repository.FlowFileEventRepository; +import org.apache.nifi.groups.ProcessGroup; +import org.apache.nifi.util.FormatUtils; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import java.util.ArrayList; +import java.util.Comparator; +import java.util.List; +import java.util.Map; +import java.util.concurrent.ConcurrentHashMap; + +public class LogComponentStatuses implements Runnable { +private static final Logger logger = LoggerFactory.getLogger(LogComponentStatuses.class); + +private static final String PROCESSOR_LINE_FORMAT = "| %1$-30.30s | %2$-36.36s | %3$-30.30s | %4$28.28s | %5$30.30s | %6$14.14s | %7$14.14s | %8$28.28s |\n"; +private static final String COUNTER_LINE_FORMAT = "| %1$-36.36s | %2$-36.36s | %3$28.28s | %4$28.28s |\n"; + +private final FlowFileEventRepository flowFileEventRepository; +
[jira] [Commented] (NIFI-8528) Migrate NiFi Registry into NiFi codebase
[ https://issues.apache.org/jira/browse/NIFI-8528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17351788#comment-17351788 ] ASF subversion and git services commented on NIFI-8528: --- Commit dfa683af0e00cd631905d45365bc797fbdce66ee in nifi's branch refs/heads/main from tpalfy [ https://gitbox.apache.org/repos/asf?p=nifi.git;h=dfa683a ] NIFI-8528 Migrate NiFi Registry into NiFi codebase (#5065) NIFI-8528 Migrate NiFi Registry fully codebase into NiFi as a module. No changes except certain dependency scopes to preserve the NiFi Registry original by overriding the new parent (nifi). - Version adjustments. Removed distinct checkstyle rules from nifi-registry. (Using nifi's instead.) - Made some tests Windows-compatible. - Consolidated LICENSE, NOTICE and README.md. - Fixed CryptoKeyLoaderGroovyTest.groovy. - Disable frontend-maven-plugin on Windows. - Skipping all goals of the frontend-maven-plugin on Windows. - Registry integration tests not to run in github jobs (same as the original settings). Skip all registry tests (build and run) on Windows. - Removed Husky from registry. > Migrate NiFi Registry into NiFi codebase > > > Key: NIFI-8528 > URL: https://issues.apache.org/jira/browse/NIFI-8528 > Project: Apache NiFi > Issue Type: Task >Reporter: Tamas Palfy >Assignee: Tamas Palfy >Priority: Major > Time Spent: 1h 50m > Remaining Estimate: 0h > > NiFi twin task for https://issues.apache.org/jira/browse/NIFIREG-452
[GitHub] [nifi] markap14 merged pull request #5065: NIFI-8528 Migrate NiFi Registry into NiFi codebase
markap14 merged pull request #5065: URL: https://github.com/apache/nifi/pull/5065
[jira] [Resolved] (NIFI-8528) Migrate NiFi Registry into NiFi codebase
[ https://issues.apache.org/jira/browse/NIFI-8528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Payne resolved NIFI-8528. -- Resolution: Fixed > Migrate NiFi Registry into NiFi codebase > > > Key: NIFI-8528 > URL: https://issues.apache.org/jira/browse/NIFI-8528 > Project: Apache NiFi > Issue Type: Task >Reporter: Tamas Palfy >Assignee: Tamas Palfy >Priority: Major > Time Spent: 1h 50m > Remaining Estimate: 0h > > NiFi twin task for https://issues.apache.org/jira/browse/NIFIREG-452
[jira] [Resolved] (NIFIREG-452) Migrate NiFi Registry into NiFi codebase
[ https://issues.apache.org/jira/browse/NIFIREG-452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Payne resolved NIFIREG-452. Resolution: Fixed > Migrate NiFi Registry into NiFi codebase > > > Key: NIFIREG-452 > URL: https://issues.apache.org/jira/browse/NIFIREG-452 > Project: NiFi Registry > Issue Type: Task >Reporter: Mark Payne >Priority: Major > > When NiFi Registry was created, it was created as a sub-project. There were > good reasons for this. The release cadence wasn't yet known, the smaller > codebase would make things less intimidating for developers who were newer to > the community, etc. > However, we've seen over the last few years that the cons have been much > weightier than the pros. There has been a lot of code duplication. Registry > often lags in features behind NiFi. There's significantly more maintenance > because of this code duplication. > There's also the problem of circular dependencies. NiFi depends on registry, > but often a new feature is added to NiFi. We want Registry's data model to > support this. So we update registry's data model. But then registry must be > released before NiFi can be updated to use the new data model and map into > it. > Also, the VersionedProcessGroup, etc. that exist in the registry data model > have become integral to NiFi. > We need to merge these two codebases. This will result in a single release > that encompasses both projects. This is much easier for the Release Manager > as well as the community who votes on it. The data model will be dramatically > easier to update. Registry will benefit from third-party dependency updates > to avoid potential CVE's, etc. by improving the maintainability.
[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #1028: MINIFICPP-1507 convert InputStream::read to size_t
szaszm commented on a change in pull request #1028: URL: https://github.com/apache/nifi-minifi-cpp/pull/1028#discussion_r639710821 ## File path: libminifi/include/io/Stream.h ## @@ -24,14 +24,23 @@ namespace nifi { namespace minifi { namespace io { +inline bool isError(const size_t read_return) noexcept { + return read_return == static_cast<size_t>(-1) // general error + || read_return == static_cast<size_t>(-2); // Socket EAGAIN, to be refactored to eliminate this error condition Review comment: `static_cast<size_t>(-2)` is a special case, I don't want to touch that. I'm open to extracting `static_cast<size_t>(-1)` to a symbolic constant, will probably do it later.
[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #1028: MINIFICPP-1507 convert InputStream::read to size_t
szaszm commented on a change in pull request #1028: URL: https://github.com/apache/nifi-minifi-cpp/pull/1028#discussion_r639708731 ## File path: extensions/tensorflow/TFExtractTopLabels.cpp ## @@ -134,9 +136,9 @@ int64_t TFExtractTopLabels::LabelsReadCallback::process(const std::shared_ptrsize()) { -auto read = stream->read(reinterpret_cast([0]), static_cast(buf_size)); - -for (auto i = 0; i < read; i++) { +const auto read = stream->read(reinterpret_cast([0]), buf_size); +if (io::isError(read)) break; Review comment: read/write usually signal an error, but it's not consistent. I think Callback::process just forwards these results.
[GitHub] [nifi] tpalfy commented on a change in pull request #5093: NIFI-4344: Improve bulletin messages with exception details.
tpalfy commented on a change in pull request #5093: URL: https://github.com/apache/nifi/pull/5093#discussion_r639668296 ## File path: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-components/src/main/java/org/apache/nifi/processor/SimpleProcessLogger.java ## @@ -448,4 +454,20 @@ public void log(LogLevel level, String msg, Object[] os, Throwable t) { } } +private String getCauses(final Throwable throwable) { Review comment: We could leverage Java for more readability to reverse the order of the error messages. Something like this: ```java final LinkedList<String> causes = new LinkedList<>(); for (Throwable t = throwable; t != null; t = t.getCause()) { causes.push(t.toString()); } Collections.reverse(causes); return causes.stream().collect(Collectors.joining(System.lineSeparator() + CAUSES)); ``` Also this implementation adds a new line at the start. Is that intentional? ## File path: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-components/src/test/java/org/apache/nifi/processor/TestSimpleProcessLogger.java ## @@ -68,7 +68,7 @@ public void before() { @Test public void validateDelegateLoggerReceivesThrowableToStringOnError() { componentLog.error("Hello {}", e); -verify(logger, times(1)).error(anyString(), eq(task), eq(e.toString()), eq(e)); +verify(logger, times(1)).error(anyString(), eq(task), anyString(), eq(e)); Review comment: I think these modifications are insufficient to test the added functionality. We don't necessarily need to check the full error message in all the tests - i.e. the `anyString()` may be okay in these ones, but then we should add new tests.
[GitHub] [nifi-minifi-cpp] arpadboda closed pull request #1079: MINIFICPP-1564 - Remove agent update
arpadboda closed pull request #1079: URL: https://github.com/apache/nifi-minifi-cpp/pull/1079
[GitHub] [nifi] timeabarna commented on a change in pull request #4948: NIFI-8273 Adding Scripted Record processors
timeabarna commented on a change in pull request #4948: URL: https://github.com/apache/nifi/pull/4948#discussion_r639581961 ## File path: nifi-nar-bundles/nifi-scripting-bundle/nifi-scripting-processors/src/main/java/org/apache/nifi/processors/script/ScriptedRouterProcessor.java ## @@ -0,0 +1,267 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.apache.nifi.processors.script; + +import org.apache.nifi.annotation.behavior.EventDriven; +import org.apache.nifi.annotation.behavior.InputRequirement; +import org.apache.nifi.annotation.behavior.Restricted; +import org.apache.nifi.annotation.behavior.Restriction; +import org.apache.nifi.annotation.behavior.SideEffectFree; +import org.apache.nifi.annotation.behavior.SupportsBatching; +import org.apache.nifi.annotation.behavior.WritesAttribute; +import org.apache.nifi.annotation.behavior.WritesAttributes; +import org.apache.nifi.components.PropertyDescriptor; +import org.apache.nifi.components.RequiredPermission; +import org.apache.nifi.flowfile.FlowFile; +import org.apache.nifi.processor.ProcessContext; +import org.apache.nifi.processor.ProcessSession; +import org.apache.nifi.processor.Relationship; +import org.apache.nifi.processor.exception.ProcessException; +import org.apache.nifi.processor.io.InputStreamCallback; +import org.apache.nifi.schema.access.SchemaNotFoundException; +import org.apache.nifi.script.ScriptingComponentUtils; +import org.apache.nifi.serialization.MalformedRecordException; +import org.apache.nifi.serialization.RecordReader; +import org.apache.nifi.serialization.RecordReaderFactory; +import org.apache.nifi.serialization.RecordSetWriter; +import org.apache.nifi.serialization.RecordSetWriterFactory; +import org.apache.nifi.serialization.WriteResult; +import org.apache.nifi.serialization.record.ListRecordSet; +import org.apache.nifi.serialization.record.PushBackRecordSet; +import org.apache.nifi.serialization.record.Record; +import org.apache.nifi.serialization.record.RecordSchema; +import org.apache.nifi.serialization.record.RecordSet; + +import javax.script.ScriptEngine; +import javax.script.ScriptException; +import java.io.IOException; +import java.io.InputStream; +import java.io.OutputStream; +import java.util.Arrays; +import java.util.HashMap; +import java.util.LinkedList; +import java.util.List; +import java.util.Map; 
+import java.util.Optional; + +@EventDriven +@SupportsBatching +@SideEffectFree +@InputRequirement(InputRequirement.Requirement.INPUT_REQUIRED) +@Restricted(restrictions = { +@Restriction(requiredPermission = RequiredPermission.EXECUTE_CODE, +explanation = "Provides operator the ability to execute arbitrary code assuming all permissions that NiFi has.") +}) +@WritesAttributes({ +@WritesAttribute(attribute = "mime.type", description = "Sets the mime.type attribute to the MIME Type specified by the Record Writer"), +@WritesAttribute(attribute = "record.error.message", description = "This attribute provides on failure the error message encountered by the Reader or Writer.") +}) +public abstract class ScriptedRouterProcessor extends ScriptedProcessor { + +static final PropertyDescriptor RECORD_READER = new PropertyDescriptor.Builder() +.name("Record Reader") +.displayName("Record Reader") +.description("The Record Reader to use parsing the incoming FlowFile into Records") +.required(true) +.identifiesControllerService(RecordReaderFactory.class) +.build(); + +static final PropertyDescriptor RECORD_WRITER = new PropertyDescriptor.Builder() +.name("Record Writer") +.displayName("Record Writer") +.description("The Record Writer to use for serializing Records after they have been transformed") +.required(true) +.identifiesControllerService(RecordSetWriterFactory.class) +.build(); + +static final PropertyDescriptor LANGUAGE = new PropertyDescriptor.Builder() +.name("Script Engine") +.displayName("Script Language") +.description("The Language to use for the script") +
[GitHub] [nifi] timeabarna commented on a change in pull request #4948: NIFI-8273 Adding Scripted Record processors
timeabarna commented on a change in pull request #4948: URL: https://github.com/apache/nifi/pull/4948#discussion_r639552943 ## File path: nifi-nar-bundles/nifi-scripting-bundle/nifi-scripting-processors/src/test/java/org/apache/nifi/processors/script/TestScriptedFilterRecord.java ## @@ -0,0 +1,131 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.apache.nifi.processors.script; + +import org.apache.nifi.processor.Processor; +import org.apache.nifi.processor.Relationship; +import org.apache.nifi.util.MockFlowFile; +import org.junit.Assert; +import org.junit.Test; + +public class TestScriptedFilterRecord extends TestScriptedRouterProcessor { +private static final String SCRIPT = "return record.getValue(\"first\") == 1"; + +private static final Object[] MATCHING_RECORD_1 = new Object[] {1, "lorem"}; +private static final Object[] MATCHING_RECORD_2 = new Object[] {1, "ipsum"}; +private static final Object[] NON_MATCHING_RECORD_1 = new Object[] {2, "lorem"}; +private static final Object[] NON_MATCHING_RECORD_2 = new Object[] {2, "ipsum"}; + +@Test +public void testIncomingFlowFileContainsMatchingRecordsOnly() throws Exception { +// given +recordReader.addRecord(MATCHING_RECORD_1); +recordReader.addRecord(MATCHING_RECORD_2); + +// when +whenTriggerProcessor(); + +// then +thenIncomingFlowFileIsRoutedToOriginal(); +thenMatchingFlowFileContains(new Object[][]{MATCHING_RECORD_1, MATCHING_RECORD_2}); +} + +@Test +public void testIncomingFlowFileContainsNonMatchingRecordsOnly() throws Exception { +// given +recordReader.addRecord(NON_MATCHING_RECORD_1); +recordReader.addRecord(NON_MATCHING_RECORD_2); + +// when +whenTriggerProcessor(); + +// then +thenIncomingFlowFileIsRoutedToOriginal(); +thenMatchingFlowFileIsEmpty(); +} + +@Test +public void testIncomingFlowFileContainsMatchingAndNonMatchingRecords() throws Exception { +// given +recordReader.addRecord(MATCHING_RECORD_1); +recordReader.addRecord(NON_MATCHING_RECORD_1); +recordReader.addRecord(MATCHING_RECORD_2); +recordReader.addRecord(NON_MATCHING_RECORD_2); + +// when +whenTriggerProcessor(); + +// then +thenIncomingFlowFileIsRoutedToOriginal(); +thenMatchingFlowFileContains(new Object[][]{MATCHING_RECORD_1, MATCHING_RECORD_2}); +} + +@Test +public void testIncomingFlowFileContainsNoRecords() throws Exception { Review comment: Exception is never 
thrown, it can be removed. ## File path: nifi-nar-bundles/nifi-scripting-bundle/nifi-scripting-processors/src/test/java/org/apache/nifi/processors/script/TestScriptedValidateRecord.java ## @@ -0,0 +1,146 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.nifi.processors.script; + +import org.apache.nifi.processor.Processor; +import org.apache.nifi.processor.Relationship; +import org.apache.nifi.util.MockFlowFile; +import org.junit.Assert; +import org.junit.Test; + +public class TestScriptedValidateRecord extends TestScriptedRouterProcessor { +private static final String SCRIPT = "return record.getValue(\"first\") == 1"; + +private static final Object[] VALID_RECORD_1 = new Object[] {1, "lorem"}; +private static final Object[] VALID_RECORD_2
[GitHub] [nifi] timeabarna commented on a change in pull request #4948: NIFI-8273 Adding Scripted Record processors
timeabarna commented on a change in pull request #4948: URL: https://github.com/apache/nifi/pull/4948#discussion_r639552668 ## File path: nifi-nar-bundles/nifi-scripting-bundle/nifi-scripting-processors/src/test/java/org/apache/nifi/processors/script/TestScriptedFilterRecord.java ## @@ -0,0 +1,131 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.apache.nifi.processors.script; + +import org.apache.nifi.processor.Processor; +import org.apache.nifi.processor.Relationship; +import org.apache.nifi.util.MockFlowFile; +import org.junit.Assert; +import org.junit.Test; + +public class TestScriptedFilterRecord extends TestScriptedRouterProcessor { +private static final String SCRIPT = "return record.getValue(\"first\") == 1"; + +private static final Object[] MATCHING_RECORD_1 = new Object[] {1, "lorem"}; +private static final Object[] MATCHING_RECORD_2 = new Object[] {1, "ipsum"}; +private static final Object[] NON_MATCHING_RECORD_1 = new Object[] {2, "lorem"}; +private static final Object[] NON_MATCHING_RECORD_2 = new Object[] {2, "ipsum"}; + +@Test +public void testIncomingFlowFileContainsMatchingRecordsOnly() throws Exception { +// given +recordReader.addRecord(MATCHING_RECORD_1); +recordReader.addRecord(MATCHING_RECORD_2); + +// when +whenTriggerProcessor(); + +// then +thenIncomingFlowFileIsRoutedToOriginal(); +thenMatchingFlowFileContains(new Object[][]{MATCHING_RECORD_1, MATCHING_RECORD_2}); +} + +@Test +public void testIncomingFlowFileContainsNonMatchingRecordsOnly() throws Exception { Review comment: Exception is never thrown, it can be removed. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Assigned] (MINIFICPP-1568) PerformanceDataMonitorTests transiently fail
[ https://issues.apache.org/jira/browse/MINIFICPP-1568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gabor Gyimesi reassigned MINIFICPP-1568: Assignee: Gabor Gyimesi > PerformanceDataMonitorTests transiently fail > > > Key: MINIFICPP-1568 > URL: https://issues.apache.org/jira/browse/MINIFICPP-1568 > Project: Apache NiFi MiNiFi C++ > Issue Type: Bug >Reporter: Gabor Gyimesi >Assignee: Gabor Gyimesi >Priority: Minor > Labels: MiNiFi-CPP-Hygiene > Attachments: PerformanceDataMonitorTests_failure_win_vs2017.log > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (NIFI-4890) OIDC Token Refresh is not done correctly
[ https://issues.apache.org/jira/browse/NIFI-4890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17351611#comment-17351611 ] Johan Buntinx commented on NIFI-4890: - Any updates on this? If the access token has a short lifetime, the NiFi UI is hardly usable because the user is constantly interrupted with the error message "Unknown user with identity 'anonymous'. Contact the system administrator." Clicking the home link refreshes the access token, but I think this should be handled transparently in the background, without bothering the user. > OIDC Token Refresh is not done correctly > > > Key: NIFI-4890 > URL: https://issues.apache.org/jira/browse/NIFI-4890 > Project: Apache NiFi > Issue Type: Bug > Components: Core UI >Affects Versions: 1.5.0 > Environment: Environment: > Browser: Chrome / Firefox > Configuration of NiFi: > - SSL certificate for the server (no client auth) > - OIDC configuration including end_session_endpoint (see the link > https://auth.s.orchestracities.com/auth/realms/default/.well-known/openid-configuration) > >Reporter: Federico Michele Facca >Assignee: Raz Dobkies >Priority: Major > > It looks like the NiFi UI is not refreshing the OIDC token in background, and > because of that, when the token expires, tells you that your session is > expired, and you need to refresh the page to get a new token. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a change in pull request #1081: MINIFICPP-1565: Minor improvements to PerformanceDataMonitor
adamdebreceni commented on a change in pull request #1081: URL: https://github.com/apache/nifi-minifi-cpp/pull/1081#discussion_r639470449

## File path: libminifi/include/utils/MathUtils.h

@@ -0,0 +1,43 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#pragma once
+
+#include <cmath>
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace utils {
+
+class MathUtils {
+ public:
+  static double round_to(double original, int8_t precision) {
+    if (precision < 0) {
+      return original;
+    } else {
+      double power_ten = pow(10, precision);
+      return std::round(original * power_ten) / power_ten;

Review comment: a thought: since the `rapidjson::Document::Accept` is templated, we could wrap a writer (`rapidjson::Writer<...>`, `rapidjson::PrettyWriter<...>`) in a custom class, inheriting from it and shadowing the `Double` method to implement our custom double formatting, although this might not be trivial

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a change in pull request #1081: MINIFICPP-1565: Minor improvements to PerformanceDataMonitor
adamdebreceni commented on a change in pull request #1081: URL: https://github.com/apache/nifi-minifi-cpp/pull/1081#discussion_r639465764

## File path: libminifi/include/utils/MathUtils.h

@@ -0,0 +1,43 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#pragma once
+
+#include <cmath>
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace utils {
+
+class MathUtils {
+ public:
+  static double round_to(double original, int8_t precision) {
+    if (precision < 0) {
+      return original;
+    } else {
+      double power_ten = pow(10, precision);
+      return std::round(original * power_ten) / power_ten;

Review comment: as far as I know, precision has a different meaning, it is the number of "significant digits", so printing `0.00123` with a precision of `2` should result in `0.0012` instead of `0` as this function does

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a change in pull request #1081: MINIFICPP-1565: Minor improvements to PerformanceDataMonitor
adamdebreceni commented on a change in pull request #1081: URL: https://github.com/apache/nifi-minifi-cpp/pull/1081#discussion_r639449741

## File path: extensions/pdh/PerformanceDataMonitor.cpp

@@ -264,16 +281,29 @@ void PerformanceDataMonitor::setupMembersFromProperties(const std::shared_ptr<core::ProcessContext>& context) {
   if (context->getProperty(OutputFormatProperty.getName(), output_format_string)) {
-    if (output_format_string == OPEN_TELEMETRY_FORMAT_STR) {
-      logger_->log_trace("OutputFormat is configured to be OpenTelemetry");
+    if (output_format_string == PRETTY_OPEN_TELEMETRY_FORMAT_STR || output_format_string == COMPACT_OPEN_TELEMETRY_FORMAT_STR) {
       output_format_ = OutputFormat::OPENTELEMETRY;
+      pretty_output_ = output_format_string == PRETTY_OPEN_TELEMETRY_FORMAT_STR;
+      logger_->log_trace("OutputFormat is configured to be %s OpenTelemetry", pretty_output_ ? "pretty" : "compact");
-    } else if (output_format_string == JSON_FORMAT_STR) {
-      logger_->log_trace("OutputFormat is configured to be JSON");
+    } else if (output_format_string == PRETTY_JSON_FORMAT_STR || output_format_string == COMPACT_JSON_FORMAT_STR) {
       output_format_ = OutputFormat::JSON;
+      pretty_output_ = output_format_string == PRETTY_JSON_FORMAT_STR;
+      logger_->log_trace("OutputFormat is configured to be %s JSON", pretty_output_ ? "pretty" : "compact");
     } else {
-      logger_->log_error("Invalid OutputFormat, defaulting to JSON");
       output_format_ = OutputFormat::JSON;
+      pretty_output_ = true;
+      logger_->log_error("Invalid OutputFormat, defaulting to %s JSON", pretty_output_ ? "pretty" : "compact");

Review comment: we should throw here instead of defaulting, the user gave an invalid output format, that is an error IMO

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a change in pull request #1081: MINIFICPP-1565: Minor improvements to PerformanceDataMonitor
adamdebreceni commented on a change in pull request #1081: URL: https://github.com/apache/nifi-minifi-cpp/pull/1081#discussion_r639447848

## File path: extensions/pdh/PerformanceDataMonitor.cpp

@@ -102,25 +108,36 @@ void PerformanceDataMonitor::onTrigger(core::ProcessContext* context, core::Proc
   rapidjson::Value& body = prepareJSONBody(root);
   for (auto& counter : resource_consumption_counters_) {
     if (counter->collectData())
-      counter->addToJson(body, root.GetAllocator());
+      counter->addToJson(body, root.GetAllocator(), double_precision_);
+  }
+  if (pretty_output_) {
+    utils::PrettyJsonOutputCallback callback(std::move(root));
+    session->write(flowFile, &callback);
+    session->transfer(flowFile, Success);
+  } else {
+    utils::JsonOutputCallback callback(std::move(root));
+    session->write(flowFile, &callback);
+    session->transfer(flowFile, Success);
   }
-  utils::JsonOutputCallback callback(std::move(root));
-  session->write(flowFile, &callback);
-  session->transfer(flowFile, Success);
 }

 void PerformanceDataMonitor::initialize() {
-  setSupportedProperties({ CustomPDHCounters, PredefinedGroups, OutputFormatProperty });
+  setSupportedProperties({ CustomPDHCounters, PredefinedGroups, OutputFormatProperty, DoublePrecisionProperty });

Review comment: I don't think we usually suffix properties with `Property`

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a change in pull request #1081: MINIFICPP-1565: Minor improvements to PerformanceDataMonitor
adamdebreceni commented on a change in pull request #1081: URL: https://github.com/apache/nifi-minifi-cpp/pull/1081#discussion_r639446256

## File path: extensions/pdh/PerformanceDataMonitor.cpp

@@ -45,9 +45,15 @@ core::Property PerformanceDataMonitor::CustomPDHCounters(
 core::Property PerformanceDataMonitor::OutputFormatProperty(
   core::PropertyBuilder::createProperty("Output Format")->
     withDescription("Format of the created flowfiles")->
-    withAllowableValue(JSON_FORMAT_STR)->
-    withAllowableValue(OPEN_TELEMETRY_FORMAT_STR)->
-    withDefaultValue(JSON_FORMAT_STR)->build());
+    withAllowableValue(PRETTY_JSON_FORMAT_STR)->
+    withAllowableValue(COMPACT_JSON_FORMAT_STR)->
+    withAllowableValue(PRETTY_OPEN_TELEMETRY_FORMAT_STR)->
+    withAllowableValue(COMPACT_OPEN_TELEMETRY_FORMAT_STR)->

Review comment: it seems `withAllowableValues` is more prevalent than `withAllowableValue`

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org