[jira] [Created] (MINIFICPP-1356) Flexible schema of supporting C runtime redist., for continuous delivery, on Windows
Ivan Serdyuk created MINIFICPP-1356: --- Summary: Flexible schema of supporting C runtime redist., for continuous delivery, on Windows Key: MINIFICPP-1356 URL: https://issues.apache.org/jira/browse/MINIFICPP-1356 Project: Apache NiFi MiNiFi C++ Issue Type: Improvement Affects Versions: master Reporter: Ivan Serdyuk

I am experiencing some issues during the automated resolution of the VS 2019 folders:

{code}
CMake Error at CMakeLists.txt:664 (message):
  Could not find the VC Redistributable. Please set
  VCRUNTIME_X86_REDIST_CRT_DIR and VCRUNTIME_X64_REDIST_CRT_DIR manually!
{code}

[CMake's message is raised here|https://github.com/apache/nifi-minifi-cpp/blob/08398da0579dd5a06d9b7af90acc3e934eb0d7af/CMakeLists.txt#L627]. It is also fairly clear that a bleeding-edge toolset version will not be found: the build configuration should reference another version (142) at [L660|https://github.com/apache/nifi-minifi-cpp/blob/08398da0579dd5a06d9b7af90acc3e934eb0d7af/CMakeLists.txt#L660] and [L661|https://github.com/apache/nifi-minifi-cpp/blob/08398da0579dd5a06d9b7af90acc3e934eb0d7af/CMakeLists.txt#L661]:

-file(GLOB VCRUNTIME_X86_REDIST_CRT_DIR "${VCRUNTIME_REDIST_DIR}/x86/Microsoft.VC141.CRT")-
-file(GLOB VCRUNTIME_X64_REDIST_CRT_DIR "${VCRUNTIME_REDIST_DIR}/x64/Microsoft.VC141.CRT")-

The following modification works for me:

{code:java}
file(GLOB VCRUNTIME_X86_REDIST_CRT_DIR "${VCRUNTIME_REDIST_DIR}/x86/Microsoft.VC142.CRT")
file(GLOB VCRUNTIME_X64_REDIST_CRT_DIR "${VCRUNTIME_REDIST_DIR}/x64/Microsoft.VC142.CRT")
{code}

My proposal is to avoid locking the build onto non-persistent naming conventions and instead query the Visual Studio installer/InstallShield component registry. This would also open the door to future ARM64 Windows 10 support. -- This message was sent by Atlassian Jira (v8.3.4#803005)
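The hard-coded toolset folder above can also be matched version-agnostically; the CMake equivalent would be globbing for `Microsoft.VC*.CRT` instead of a fixed version. A small Python sketch of that selection logic (the folder names are illustrative, and "pick the highest version" is an assumption about the desired policy, not MiNiFi code):

```python
import fnmatch

def pick_redist_crt_dir(dir_names):
    """Given the folder names under a redist arch directory (e.g. x64/),
    pick the CRT folder for the highest toolset version, rather than
    hard-coding Microsoft.VC141.CRT or Microsoft.VC142.CRT."""
    matches = sorted(n for n in dir_names if fnmatch.fnmatch(n, "Microsoft.VC*.CRT"))
    # The lexicographically largest match corresponds to the newest toolset
    return matches[-1] if matches else None
```

Picking the newest match keeps the build working when a new toolset (VC143, ...) appears, at the cost of not pinning an exact runtime version.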
[jira] [Commented] (NIFI-7588) InvokeHTTP ignoring custom parameters when stop+finalize+start
[ https://issues.apache.org/jira/browse/NIFI-7588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17190938#comment-17190938 ] Otto Fowler commented on NIFI-7588: --- Can you create and attach a template that reproduces the issue? When you say parameters, do you mean dynamic properties or the new parameters feature? > InvokeHTTP ignoring custom parameters when stop+finalize+start > -- > > Key: NIFI-7588 > URL: https://issues.apache.org/jira/browse/NIFI-7588 > Project: Apache NiFi > Issue Type: Bug >Affects Versions: 1.11.4 > Environment: Amazon Linux >Reporter: Alejandro Fiel Martínez >Priority: Major > Attachments: invokeHTTP_NiFi_bug.png > > > I have an InvokeHTTP processor with 3 custom parameters to be passed as > headers. If I add an SSL Context Service and then remove it, the processor stops > using those 3 parameters and I have to delete and recreate them. They are still > there, but I see in DEBUG that they are not used in the GET request. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [nifi] markap14 opened a new pull request #4512: NIFI-1121: Support making properties dependent upon one another
markap14 opened a new pull request #4512: URL: https://github.com/apache/nifi/pull/4512

Thank you for submitting a contribution to Apache NiFi.

Please provide a short description of the PR here:

#### Description of PR

_Enables X functionality; fixes bug NIFI-YYYY._

In order to streamline the review of the contribution we ask you to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message?
- [ ] Does your PR title start with **NIFI-XXXX** where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character.
- [ ] Has your PR been rebased against the latest commit within the target branch (typically `main`)?
- [ ] Is your initial contribution a single, squashed commit? _Additional commits in response to PR reviewer feedback should be made on this branch and pushed to allow change tracking. Do not `squash` or use `--force` when pushing to allow for clean monitoring of changes._

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via `mvn -Pcontrib-check clean install` at the root `nifi` folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] Have you verified that the full build is successful on JDK 8?
- [ ] Have you verified that the full build is successful on JDK 11?
- [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)?
- [ ] If applicable, have you updated the `LICENSE` file, including the main `LICENSE` file under `nifi-assembly`?
- [ ] If applicable, have you updated the `NOTICE` file, including the main `NOTICE` file found under `nifi-assembly`?
- [ ] If adding new Properties, have you added `.displayName` in addition to `.name` (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check GitHub Actions CI for build issues and submit an update to your PR as soon as possible.

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (NIFI-7791) Add PutClickHouse Processor for Writing Large Streams
[ https://issues.apache.org/jira/browse/NIFI-7791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17190887#comment-17190887 ] Ricky Saltzer commented on NIFI-7791: - To provide a little more context to the processing portion, here's a snippet of code that streams a FlowFile without doing any processing within NiFi. Instead, we tell ClickHouse "Hey, here's a stream of data...process it in real-time as CSV":

{code:java}
getConnection().createStatement()
    .write()                    // direct write call
    .sql("INSERT INTO my_table")
    .data(inputStream)          // pass in FlowFile's InputStream
    .format("CSV")              // format the InputStream is expected to be
    .send()
{code}

> Add PutClickHouse Processor for Writing Large Streams > - > > Key: NIFI-7791 > URL: https://issues.apache.org/jira/browse/NIFI-7791 > Project: Apache NiFi > Issue Type: New Feature >Reporter: Ricky Saltzer >Assignee: Ricky Saltzer >Priority: Minor > > ClickHouse supports streaming a number of file formats directly using their > JDBC (superset) library. Often times it's much more convenient to stream the > contents of a file directly to ClickHouse, rather than bothering to process > the data in NiFi and then using the native JDBC processor. > One workaround is to just use PutHTTP to stream the file directly to > ClickHouse using it's HTTP endpoint. However, this can get a bit tedious, > especially if you need to pass credentials as part of the HTTP method call. > I'm creating this Jira to support creating a simple PutClickHouse processor > that can stream a FlowFile directly to ClickHouse with the following features > * CSV, CSVWithNames, TSV and JSONEachRow > * Ability to modify column name ordering > * Custom delimiters for CSV and TSV > * SSL support (with and without strict mode) > * Multiple hosts (comma separated) to utilize the > {{BalancedClickhouseDataSource}} > * Username and Password > I'm currently wrapping up a PR for this. 
I wrote it using Kotlin, which uses > a processor-scope maven plugin. If there's enough objection, it can be > rewritten in native Java. > +[~joewitt] since I spoke with him regarding this a while back. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Comment Edited] (NIFI-7791) Add PutClickHouse Processor for Writing Large Streams
[ https://issues.apache.org/jira/browse/NIFI-7791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17190887#comment-17190887 ] Ricky Saltzer edited comment on NIFI-7791 at 9/4/20, 6:58 PM: -- To provide a little more context to the processing portion. Here's a snippet of code that streams a FlowFile without doing any processing within NiFi. Instead, we tell ClickHouse "Hey, here's a stream of data...process it in real-time as CSV" getConnection().createStatement() {{ .write() // direct write call}} {{ .sql("INSERT INTO my_table")}} {{ .data(inputStream) // pass in FlowFile's InputStream}} {{ .format("CSV") // format the InputStream is expected to be}} {{ .send()}} was (Author: rickysaltzer): To provide a little more context to the processing portion. Here's a snippet of code that streams a FlowFile without doing any processing within NiFi. Instead, we tell ClickHouse "Hey, here's a stream of data...process it in real-time as CSV" {{ getConnection().createStatement()}} {{ .write() // direct write call}} {{ .sql("INSERT INTO my_table")}} {{ .data(inputStream) // pass in FlowFile's InputStream}} {{ .format("CSV") // format the InputStream is expected to be}} {{ .send()}} > Add PutClickHouse Processor for Writing Large Streams > - > > Key: NIFI-7791 > URL: https://issues.apache.org/jira/browse/NIFI-7791 > Project: Apache NiFi > Issue Type: New Feature >Reporter: Ricky Saltzer >Assignee: Ricky Saltzer >Priority: Minor > > ClickHouse supports streaming a number of file formats directly using their > JDBC (superset) library. Often times it's much more convenient to stream the > contents of a file directly to ClickHouse, rather than bothering to process > the data in NiFi and then using the native JDBC processor. > One workaround is to just use PutHTTP to stream the file directly to > ClickHouse using it's HTTP endpoint. However, this can get a bit tedious, > especially if you need to pass credentials as part of the HTTP method call. 
> I'm creating this Jira to support creating a simple PutClickHouse processor > that can stream a FlowFile directly to ClickHouse with the following features > * CSV, CSVWithNames, TSV and JSONEachRow > * Ability to modify column name ordering > * Custom delimiters for CSV and TSV > * SSL support (with and without strict mode) > * Multiple hosts (comma separated) to utilize the > {{BalancedClickhouseDataSource}} > * Username and Password > I'm currently wrapping up a PR for this. I wrote it using Kotlin, which uses > a processor-scope maven plugin. If there's enough objection, it can be > rewritten in native Java. > +[~joewitt] since I spoke with him regarding this a while back. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (NIFI-7791) Add PutClickHouse Processor for Writing Large Streams
[ https://issues.apache.org/jira/browse/NIFI-7791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17190885#comment-17190885 ] Ricky Saltzer commented on NIFI-7791: - Hey Joe - Yeah, I totally understand where you're coming from, and I believe it's already possible to use the JDBC library to achieve just that when writing to ClickHouse. The point of not using the recordreader/writer is to allow complete offloading of the processing to the database, which is one of ClickHouse's capabilities [1]. This is beneficial where you have a set of really large/capable ClickHouse machines and extremely large (10s of GBs) files you wish to write. There may not be enough demand for this processor, and I can see how adding a custom processor for something already possible might pollute the already massive number of processors. I found a lot of success using it internally, since it resulted in really fast turnaround for dumping data that I didn't want to bother applying a schema to within NiFi. [1] [https://clickhouse.tech/docs/en/interfaces/formats/] > Add PutClickHouse Processor for Writing Large Streams > - > > Key: NIFI-7791 > URL: https://issues.apache.org/jira/browse/NIFI-7791 > Project: Apache NiFi > Issue Type: New Feature >Reporter: Ricky Saltzer >Assignee: Ricky Saltzer >Priority: Minor > > ClickHouse supports streaming a number of file formats directly using their > JDBC (superset) library. Often times it's much more convenient to stream the > contents of a file directly to ClickHouse, rather than bothering to process > the data in NiFi and then using the native JDBC processor. > One workaround is to just use PutHTTP to stream the file directly to > ClickHouse using it's HTTP endpoint. However, this can get a bit tedious, > especially if you need to pass credentials as part of the HTTP method call. 
> I'm creating this Jira to support creating a simple PutClickHouse processor > that can stream a FlowFile directly to ClickHouse with the following features > * CSV, CSVWithNames, TSV and JSONEachRow > * Ability to modify column name ordering > * Custom delimiters for CSV and TSV > * SSL support (with and without strict mode) > * Multiple hosts (comma separated) to utilize the > {{BalancedClickhouseDataSource}} > * Username and Password > I'm currently wrapping up a PR for this. I wrote it using Kotlin, which uses > a processor-scope maven plugin. If there's enough objection, it can be > rewritten in native Java. > +[~joewitt] since I spoke with him regarding this a while back. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (NIFI-7791) Add PutClickHouse Processor for Writing Large Streams
[ https://issues.apache.org/jira/browse/NIFI-7791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17190881#comment-17190881 ] Joe Witt commented on NIFI-7791: Ricky - strongly recommend you approach this using the recordreader/writer construct rather than specific formats. You could have just a RecordReader and then let the user indicate how to serialize for writes to ClickHouse. But we'd like to avoid more processors that are specific format aware/dependent to the extent possible. > Add PutClickHouse Processor for Writing Large Streams > - > > Key: NIFI-7791 > URL: https://issues.apache.org/jira/browse/NIFI-7791 > Project: Apache NiFi > Issue Type: New Feature >Reporter: Ricky Saltzer >Assignee: Ricky Saltzer >Priority: Minor > > ClickHouse supports streaming a number of file formats directly using their > JDBC (superset) library. Often times it's much more convenient to stream the > contents of a file directly to ClickHouse, rather than bothering to process > the data in NiFi and then using the native JDBC processor. > One workaround is to just use PutHTTP to stream the file directly to > ClickHouse using it's HTTP endpoint. However, this can get a bit tedious, > especially if you need to pass credentials as part of the HTTP method call. > I'm creating this Jira to support creating a simple PutClickHouse processor > that can stream a FlowFile directly to ClickHouse with the following features > * CSV, CSVWithNames, TSV and JSONEachRow > * Ability to modify column name ordering > * Custom delimiters for CSV and TSV > * SSL support (with and without strict mode) > * Multiple hosts (comma separated) to utilize the > {{BalancedClickhouseDataSource}} > * Username and Password > I'm currently wrapping up a PR for this. I wrote it using Kotlin, which uses > a processor-scope maven plugin. If there's enough objection, it can be > rewritten in native Java. > +[~joewitt] since I spoke with him regarding this a while back. 
-- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (NIFI-7791) Add PutClickHouse Processor for Writing Large Streams
Ricky Saltzer created NIFI-7791: --- Summary: Add PutClickHouse Processor for Writing Large Streams Key: NIFI-7791 URL: https://issues.apache.org/jira/browse/NIFI-7791 Project: Apache NiFi Issue Type: New Feature Reporter: Ricky Saltzer Assignee: Ricky Saltzer

ClickHouse supports streaming a number of file formats directly using their JDBC (superset) library. Oftentimes it's much more convenient to stream the contents of a file directly to ClickHouse, rather than bothering to process the data in NiFi and then using the native JDBC processor. One workaround is to just use PutHTTP to stream the file directly to ClickHouse using its HTTP endpoint. However, this can get a bit tedious, especially if you need to pass credentials as part of the HTTP method call.

I'm creating this Jira to propose a simple PutClickHouse processor that can stream a FlowFile directly to ClickHouse with the following features:
* CSV, CSVWithNames, TSV and JSONEachRow
* Ability to modify column name ordering
* Custom delimiters for CSV and TSV
* SSL support (with and without strict mode)
* Multiple hosts (comma separated) to utilize the {{BalancedClickhouseDataSource}}
* Username and Password

I'm currently wrapping up a PR for this. I wrote it using Kotlin, which uses a processor-scope maven plugin. If there's enough objection, it can be rewritten in native Java. +[~joewitt] since I spoke with him regarding this a while back. -- This message was sent by Atlassian Jira (v8.3.4#803005)
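The PutHTTP workaround described above relies on ClickHouse's HTTP interface, which takes the SQL statement as a `query` URL parameter and the raw file contents as the request body. A minimal Python sketch of how such a request could be assembled (the host name, port, table, and user below are illustrative placeholders, not values from the issue; the request is only built, not sent):

```python
from urllib.parse import urlencode
from urllib.request import Request

def build_clickhouse_insert(host, table, fmt="CSV", user=None, password=None):
    """Build an HTTP request that would stream a file body straight into
    ClickHouse, letting the server parse the format itself."""
    params = {"query": f"INSERT INTO {table} FORMAT {fmt}"}
    # Credentials can be passed as URL parameters on the HTTP interface
    if user:
        params["user"] = user
    if password:
        params["password"] = password
    url = f"http://{host}:8123/?{urlencode(params)}"
    # The body would be the FlowFile content streamed as-is;
    # urllib.request.urlopen(req, data=stream) would execute it (not done here).
    return Request(url, method="POST")

req = build_clickhouse_insert("ch-host", "my_table", "CSV", user="default")
```

This mirrors the tedium the reporter mentions: the query, format, and credentials all end up hand-assembled into the URL, which is what a dedicated processor would hide.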
[jira] [Created] (NIFI-7790) XML record reader - failure on well-formed XML
Pierre Gramme created NIFI-7790: --- Summary: XML record reader - failure on well-formed XML Key: NIFI-7790 URL: https://issues.apache.org/jira/browse/NIFI-7790 Project: Apache NiFi Issue Type: Bug Components: Extensions Affects Versions: 1.11.4 Reporter: Pierre Gramme Attachments: bug-parse-xml.xml

I am using ConvertRecord in order to parse XML flowfiles to Avro, with the Infer Schema strategy. Some input flowfiles are sent to the failure output queue even though they are well-formed:

{code:xml}
<book>
  <authors>
    <item><name>Neil Gaiman</name></item>
  </authors>
  <editors>
    <item><commercialName>Hachette</commercialName></item>
  </editors>
</book>
{code}

Note the use of authors/item/name on one side, and editors/item/commercialName on the other side. On the other hand, this gets correctly parsed:

{code:xml}
<book>
  <authors>
    <item><name>Neil Gaiman</name></item>
  </authors>
  <editors>
    <item><name>Hachette</name></item>
  </editors>
</book>
{code}

See the attached template for a minimal reproducible example. My interpretation is that the failure in the first case is due to 2 independent XML node types having the same name (<item> in this case) but having different types and occurring in different parents with different types. In the second case, both <item>s actually have the same node type. I didn't use any Schema Inference Cache, so both item types should be inferred independently. Since the first document is legal XML (an XSD could be written for it) and can also be represented in Avro, its conversion shouldn't fail. I'll be happy to provide more details if needed. -- This message was sent by Atlassian Jira (v8.3.4#803005)
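As a sanity check, a document shaped like the failing case described above (authors/item/name on one side, editors/item/commercialName on the other) is indeed well-formed XML. The snippet below reconstructs such a document from the description (it is not the literal attachment contents) and parses it with Python's standard library:

```python
import xml.etree.ElementTree as ET

# Two <item> elements with the same tag but different child structure,
# as described in the report
failing_case = """
<book>
  <authors><item><name>Neil Gaiman</name></item></authors>
  <editors><item><commercialName>Hachette</commercialName></item></editors>
</book>
"""

# Parsing succeeds: the document is legal XML, so schema inference
# should be able to handle it as well
root = ET.fromstring(failing_case)
author = root.findtext("./authors/item/name")
editor = root.findtext("./editors/item/commercialName")
```

Since a generic parser accepts the document without complaint, the failure has to come from the inference step treating the two `<item>` node types as one, exactly as the reporter suggests.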
[jira] [Updated] (NIFI-7789) Correction in additionalDetails of ScriptedTransformRecord documentation
[ https://issues.apache.org/jira/browse/NIFI-7789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joe Witt updated NIFI-7789: --- Fix Version/s: 1.12.1 > Correction in additionalDetails of ScriptedTransformRecord documentation > > > Key: NIFI-7789 > URL: https://issues.apache.org/jira/browse/NIFI-7789 > Project: Apache NiFi > Issue Type: Improvement > Components: Documentation Website >Affects Versions: 1.12.0 >Reporter: Nadeem >Assignee: Nadeem >Priority: Minor > Fix For: 1.13.0, 1.12.1 > > Time Spent: 0.5h > Remaining Estimate: 0h > > Small Correction in ScriptedTransformRecord documentation (additionalDetails > section). -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (NIFI-7789) Correction in additionalDetails of ScriptedTransformRecord documentation
[ https://issues.apache.org/jira/browse/NIFI-7789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17190785#comment-17190785 ] ASF subversion and git services commented on NIFI-7789: --- Commit 2ebb8716e173c753b06b45586082a0c52743ac5d in nifi's branch refs/heads/support/nifi-1.12.x from Mohammed Nadeem [ https://gitbox.apache.org/repos/asf?p=nifi.git;h=2ebb871 ] NIFI-7789: Small Correction in additionalDetials of ScriptedTransformRecord processor documentation Signed-off-by: Matthew Burgess This closes #4511 > Correction in additionalDetails of ScriptedTransformRecord documentation > > > Key: NIFI-7789 > URL: https://issues.apache.org/jira/browse/NIFI-7789 > Project: Apache NiFi > Issue Type: Improvement > Components: Documentation Website >Affects Versions: 1.12.0 >Reporter: Nadeem >Assignee: Nadeem >Priority: Minor > Fix For: 1.13.0 > > Time Spent: 0.5h > Remaining Estimate: 0h > > Small Correction in ScriptedTransformRecord documentation (additionalDetails > section). -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (NIFI-7789) Correction in additionalDetails of ScriptedTransformRecord documentation
[ https://issues.apache.org/jira/browse/NIFI-7789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17190780#comment-17190780 ] ASF subversion and git services commented on NIFI-7789: --- Commit 0d4def7843874b7f97cd53b878048d054170dc64 in nifi's branch refs/heads/main from Mohammed Nadeem [ https://gitbox.apache.org/repos/asf?p=nifi.git;h=0d4def7 ] NIFI-7789: Small Correction in additionalDetials of ScriptedTransformRecord processor documentation Signed-off-by: Matthew Burgess This closes #4511 > Correction in additionalDetails of ScriptedTransformRecord documentation > > > Key: NIFI-7789 > URL: https://issues.apache.org/jira/browse/NIFI-7789 > Project: Apache NiFi > Issue Type: Improvement > Components: Documentation Website >Affects Versions: 1.12.0 >Reporter: Nadeem >Assignee: Nadeem >Priority: Minor > Fix For: 1.13.0 > > Time Spent: 20m > Remaining Estimate: 0h > > Small Correction in ScriptedTransformRecord documentation (additionalDetails > section). -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (NIFI-7789) Correction in additionalDetails of ScriptedTransformRecord documentation
[ https://issues.apache.org/jira/browse/NIFI-7789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt Burgess resolved NIFI-7789. Resolution: Fixed > Correction in additionalDetails of ScriptedTransformRecord documentation > > > Key: NIFI-7789 > URL: https://issues.apache.org/jira/browse/NIFI-7789 > Project: Apache NiFi > Issue Type: Improvement > Components: Documentation Website >Affects Versions: 1.12.0 >Reporter: Nadeem >Assignee: Nadeem >Priority: Minor > Fix For: 1.13.0 > > Time Spent: 20m > Remaining Estimate: 0h > > Small Correction in ScriptedTransformRecord documentation (additionalDetails > section). -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [nifi] mattyb149 closed pull request #4511: NIFI-7789: Small Correction in additionalDetails of ScriptedTransformRecord
mattyb149 closed pull request #4511: URL: https://github.com/apache/nifi/pull/4511 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [nifi] mattyb149 commented on pull request #4511: NIFI-7789: Small Correction in additionalDetails of ScriptedTransformRecord
mattyb149 commented on pull request #4511: URL: https://github.com/apache/nifi/pull/4511#issuecomment-687227664 +1 LGTM, thanks for the improvement, merging to main This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [nifi] joewitt commented on pull request #4484: NIFI-1207 Create processors to get/put data from XMPP
joewitt commented on pull request #4484: URL: https://github.com/apache/nifi/pull/4484#issuecomment-687219610

These all show up in the nar content:

FastInfoset-1.2.15.jar
istack-commons-runtime-3.0.7.jar
javax.activation-api-1.2.0.jar
jaxb-api-2.3.0.jar
jaxb-core-2.3.0.jar
jaxb-impl-2.3.0.jar
jaxb-runtime-2.3.1.jar
nifi-utils-1.13.0-SNAPSHOT.jar
nifi-xmpp-processors-1.13.0-SNAPSHOT.jar
precis-1.0.0.jar
stax-ex-1.8.jar
txw2-2.3.1.jar
xmpp-addr-0.8.2.jar
xmpp-core-client-0.8.2.jar
xmpp-core-common-0.8.2.jar
xmpp-extensions-client-0.8.2.jar
xmpp-extensions-common-0.8.2.jar

We need to ensure our LICENSE/NOTICE exists within the nar to reflect these binary dependencies as appropriate. So far the rocks/xmpp bits are just MIT and I don't yet see any copyright bits, but these need to be in the LICENSE. I've not checked the other items yet to ensure they're OK, but that needs to be done. We cannot include the nar by default in the overall NiFi assembly as we're very space constrained just now, but it is easy for folks to add the nar as needed once this makes it in.

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [nifi-minifi-cpp] hunyadi-dev commented on a change in pull request #874: Minificpp 1325 - Refactor and test YAML connection parsing
hunyadi-dev commented on a change in pull request #874: URL: https://github.com/apache/nifi-minifi-cpp/pull/874#discussion_r483683083

## File path: libminifi/src/core/yaml/CheckRequiredField.cpp ##

@@ -0,0 +1,55 @@
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include
+
+#include "core/yaml/CheckRequiredField.h"
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace core {
+namespace yaml {
+
+void checkRequiredField(const YAML::Node *yamlNode, const std::string &fieldName, const std::shared_ptr<logging::Logger> &logger, const std::string &yamlSection, const std::string &errorMessage) {
+  std::string errMsg = errorMessage;
+  if (!yamlNode->as<YAML::Node>()[fieldName]) {
+    if (errMsg.empty()) {
+      const YAML::Node name_node = yamlNode->as<YAML::Node>()["name"];
+      // Build a helpful error message for the user so they can fix the
+      // invalid YAML config file, using the component name if present
+      errMsg = name_node ?
+          "Unable to parse configuration file for component named '" + name_node.as<std::string>() + "' as required field '" + fieldName + "' is missing" :
+          "Unable to parse configuration file as required field '" + fieldName + "' is missing";
+      if (!yamlSection.empty()) {
+        errMsg += " [in '" + yamlSection + "' section of configuration file]";
+      }

Review comment: Added as requested.

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
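The message-building logic in the diff above amounts to: use a caller-supplied message if given; otherwise compose one from the component name (when present) and the missing field, and append the section only to generated messages. A Python paraphrase of that branch (names mirror the C++ but the function itself is illustrative):

```python
def build_missing_field_error(field_name, component_name=None,
                              yaml_section="", error_message=""):
    """Mirror of the C++ branch: an explicit error message short-circuits
    everything; otherwise a helpful message is composed, with the YAML
    section appended when it is known."""
    if error_message:
        return error_message
    if component_name:
        msg = ("Unable to parse configuration file for component named '"
               + component_name + "' as required field '" + field_name
               + "' is missing")
    else:
        msg = ("Unable to parse configuration file as required field '"
               + field_name + "' is missing")
    if yaml_section:
        msg += " [in '" + yaml_section + "' section of configuration file]"
    return msg
```

Making the precedence explicit like this is what the reviewed helper achieves: callers get a specific, actionable message even when they pass nothing but the field name.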
[GitHub] [nifi-minifi-cpp] hunyadi-dev commented on a change in pull request #885: MINIFICPP-1344 - Correct rest endpoint for buckets on updateFromPayload
hunyadi-dev commented on a change in pull request #885: URL: https://github.com/apache/nifi-minifi-cpp/pull/885#discussion_r483661338

## File path: libminifi/src/core/FlowConfiguration.cpp ##

@@ -67,33 +67,24 @@ std::shared_ptr FlowConfiguration::createProvenanceReportTask()
   return processor;
 }
-std::unique_ptr FlowConfiguration::updateFromPayload(const std::string , const std::string ) {
+std::unique_ptr FlowConfiguration::updateFromPayload(const std::string& url, const std::string& yamlConfigPayload) {
   auto old_services = controller_services_;
   auto old_provider = service_provider_;
   controller_services_ = std::make_shared();
   service_provider_ = std::make_shared(controller_services_, nullptr, configuration_);
   auto payload = getRootFromPayload(yamlConfigPayload);
-  if (!source.empty() && payload != nullptr) {
-    std::string host, protocol, path, query, url = source;
-    int port = -1;
-    utils::parse_url(, , , , , );
-
+  if (!url.empty() && payload != nullptr) {
     std::string flow_id, bucket_id;
-    auto path_split = utils::StringUtils::split(path, "/");
-    for (size_t i = 0; i < path_split.size(); i++) {
-      const std::string = path_split.at(i);
-      if (str == "flows") {
-        if (i + 1 < path_split.size()) {
-          flow_id = path_split.at(i + 1);
-          i++;
-        }
-      }
-
-      if (str == "bucket") {
-        if (i + 1 < path_split.size()) {
-          bucket_id = path_split.at(i + 1);
-          i++;
-        }
+    auto path_split = utils::StringUtils::split(url, "/");
+    // This function might not do what the original implementer expected of it (https://issues.apache.org/jira/browse/MINIFICPP-1344)

Review comment: Deleted line.

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [nifi-minifi-cpp] hunyadi-dev commented on a change in pull request #885: MINIFICPP-1344 - Correct rest endpoint for buckets on updateFromPayload
hunyadi-dev commented on a change in pull request #885: URL: https://github.com/apache/nifi-minifi-cpp/pull/885#discussion_r483661338

## File path: libminifi/src/core/FlowConfiguration.cpp ##

@@ -67,33 +67,24 @@ std::shared_ptr FlowConfiguration::createProvenanceReportTask()
   return processor;
 }
-std::unique_ptr FlowConfiguration::updateFromPayload(const std::string , const std::string ) {
+std::unique_ptr FlowConfiguration::updateFromPayload(const std::string& url, const std::string& yamlConfigPayload) {
   auto old_services = controller_services_;
   auto old_provider = service_provider_;
   controller_services_ = std::make_shared();
   service_provider_ = std::make_shared(controller_services_, nullptr, configuration_);
   auto payload = getRootFromPayload(yamlConfigPayload);
-  if (!source.empty() && payload != nullptr) {
-    std::string host, protocol, path, query, url = source;
-    int port = -1;
-    utils::parse_url(, , , , , );
-
+  if (!url.empty() && payload != nullptr) {
     std::string flow_id, bucket_id;
-    auto path_split = utils::StringUtils::split(path, "/");
-    for (size_t i = 0; i < path_split.size(); i++) {
-      const std::string = path_split.at(i);
-      if (str == "flows") {
-        if (i + 1 < path_split.size()) {
-          flow_id = path_split.at(i + 1);
-          i++;
-        }
-      }
-
-      if (str == "bucket") {
-        if (i + 1 < path_split.size()) {
-          bucket_id = path_split.at(i + 1);
-          i++;
-        }
+    auto path_split = utils::StringUtils::split(url, "/");
+    // This function might not do what the original implementer expected of it (https://issues.apache.org/jira/browse/MINIFICPP-1344)

Review comment: Fixed.

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [nifi-minifi-cpp] hunyadi-dev commented on a change in pull request #885: MINIFICPP-1344 - Correct rest endpoint for buckets on updateFromPayload
hunyadi-dev commented on a change in pull request #885: URL: https://github.com/apache/nifi-minifi-cpp/pull/885#discussion_r483658768 ## File path: libminifi/src/core/FlowConfiguration.cpp ##

```diff
@@ -67,33 +67,24 @@ std::shared_ptr<core::Processor> FlowConfiguration::createProvenanceReportTask()
   return processor;
 }
 
-std::unique_ptr<core::ProcessGroup> FlowConfiguration::updateFromPayload(const std::string &source, const std::string &yamlConfigPayload) {
+std::unique_ptr<core::ProcessGroup> FlowConfiguration::updateFromPayload(const std::string& url, const std::string& yamlConfigPayload) {
   auto old_services = controller_services_;
   auto old_provider = service_provider_;
   controller_services_ = std::make_shared<core::controller::ControllerServiceMap>();
   service_provider_ = std::make_shared<core::controller::StandardControllerServiceProvider>(controller_services_, nullptr, configuration_);
   auto payload = getRootFromPayload(yamlConfigPayload);
-  if (!source.empty() && payload != nullptr) {
-    std::string host, protocol, path, query, url = source;
-    int port = -1;
-    utils::parse_url(&url, &host, &port, &protocol, &path, &query);
-
+  if (!url.empty() && payload != nullptr) {
     std::string flow_id, bucket_id;
-    auto path_split = utils::StringUtils::split(path, "/");
-    for (size_t i = 0; i < path_split.size(); i++) {
-      const std::string& str = path_split.at(i);
-      if (str == "flows") {
-        if (i + 1 < path_split.size()) {
-          flow_id = path_split.at(i + 1);
-          i++;
-        }
-      }
-
-      if (str == "bucket") {
-        if (i + 1 < path_split.size()) {
-          bucket_id = path_split.at(i + 1);
-          i++;
-        }
+    auto path_split = utils::StringUtils::split(url, "/");
+    // This function might not do what the original implementer expected from it (https://issues.apache.org/jira/browse/MINIFICPP-1344)
+    // Registry API docs: nifi.apache.org/docs/nifi-registry-docs/rest-api/index.html
+    // GET /buckets/{bucketId}/flows/{flowId}: Gets a flow
+    const auto bucket_token_found = std::find(path_split.cbegin(), path_split.cend(), "buckets");
+    if (bucket_token_found != path_split.cend() && std::next(bucket_token_found) != path_split.cend()) {
+      bucket_id = *std::next(bucket_token_found);
+      const auto flows_token_found = std::find(std::next(bucket_token_found, 2), path_split.cend(), "flows");
```

Review comment: The original implementation was even less strict, looking for the token anywhere. It is a good question how we want to treat the example you listed; to me it seems like it clearly has a `bucket` and a `flow` we can extract. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
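The token-based lookup discussed in the diff can be exercised in isolation. The sketch below mirrors the `buckets`/`flows` extraction against the Registry URL shape `GET /buckets/{bucketId}/flows/{flowId}`; `splitPath` and `extractIds` are hypothetical stand-ins (not MiNiFi API) for `utils::StringUtils::split` and the reviewed logic:

```cpp
#include <algorithm>
#include <cassert>
#include <iterator>
#include <sstream>
#include <string>
#include <utility>
#include <vector>

// Hypothetical stand-in for utils::StringUtils::split: split a URL on '/',
// dropping empty tokens (so "//" does not produce an empty element).
std::vector<std::string> splitPath(const std::string& url) {
  std::vector<std::string> tokens;
  std::istringstream stream(url);
  for (std::string token; std::getline(stream, token, '/');) {
    if (!token.empty()) tokens.push_back(token);
  }
  return tokens;
}

// Mirrors the reviewed logic: find "buckets", take the next token as the
// bucket id, then look for "flows" after it and take the token after that
// as the flow id. Returns {bucket_id, flow_id}, empty if not found.
std::pair<std::string, std::string> extractIds(const std::string& url) {
  const auto path = splitPath(url);
  std::string bucket_id, flow_id;
  const auto bucket_it = std::find(path.cbegin(), path.cend(), "buckets");
  if (bucket_it != path.cend() && std::next(bucket_it) != path.cend()) {
    bucket_id = *std::next(bucket_it);
    const auto flows_it = std::find(std::next(bucket_it, 2), path.cend(), "flows");
    if (flows_it != path.cend() && std::next(flows_it) != path.cend()) {
      flow_id = *std::next(flows_it);
    }
  }
  return {bucket_id, flow_id};
}
```

Because the search starts two tokens past `buckets`, a stray `flows` segment before the bucket id is ignored, which is the stricter behavior the review comment contrasts with the original token-anywhere matching.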
[jira] [Updated] (MINIFICPP-1355) Investigate and fix the initialization of ExecutePythonProcessor
[ https://issues.apache.org/jira/browse/MINIFICPP-1355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Hunyadi updated MINIFICPP-1355: Description: *Acceptance criteria:* - GIVEN a flow set up as highlighted in blue below - WHEN the flow is run with a python script set to add a new attribute to a flow file - THEN no error is produced and the newly added attribute is logged in LogAttribute {code:python|title=Example script} def describe(processor): processor.setDescription("Adds an attribute to your flow files") def onInitialize(processor): processor.setSupportsDynamicProperties() def onTrigger(context, session): flow_file = session.get() if flow_file is not None: flow_file.addAttribute("Python attribute","attributevalue") session.transfer(flow_file, REL_SUCCESS) {code} *Background:* Currently, even though the tests for ExecutePythonProcessor are passing, if I were to try and load up a configuration that contains an ExecutePythonProcessor, it fails due to trying to load an incorrect script file. Sample flow: {color:#0747A6}GenerateFlowFile -(success)-> ExecutePythonProcessor -(success,failure)-> LogAttribute{color} When checking in a debugger, it seems like the processor's script file is always replaced with an incorrect one, and the processor fails to start. !https://files.slack.com/files-pri/T024BEHTP-F01942KD4BV/screenshot_2020-08-19_at_13.08.46.png|width=1427,height=288! 
This is how it is set: {code:c++|title=Trace of where the property is overridden} ConfigurableComponent::setProperty() std::shared_ptr create() ClassLoader::instantiate() PythonCreator::configure() <- here the first element of classpaths_ is read to overwrite the config FlowController::initializeExternalComponents() {code} When trying to perform the same thing on the 0.7.0 release version, the startup already shows some kind of errors, although they seem different: {code:python|title=Error log} [2020-09-04 15:49:53.424] [org::apache::nifi::minifi::python::processors::ExecutePythonProcessor] [error] Caught Exception ModuleNotFoundError: No module named 'google' At: /Users/adamhunyadi/Documents/Projects/integration_tests/minifi_agent_02/build/nifi-minifi-cpp-0.7.0/minifi-python//google/SentimentAnalyzer.py(28): [2020-09-04 15:49:53.424] [org::apache::nifi::minifi::python::PythonCreator] [warning] Cannot load SentimentAnalyzer because of ModuleNotFoundError: No module named 'google' At: /Users/adamhunyadi/Documents/Projects/integration_tests/minifi_agent_02/build/nifi-minifi-cpp-0.7.0/minifi-python//google/SentimentAnalyzer.py(28): [2020-09-04 15:49:53.424] [org::apache::nifi::minifi::python::processors::ExecutePythonProcessor] [error] Caught Exception ModuleNotFoundError: No module named 'vaderSentiment' At: /Users/adamhunyadi/Documents/Projects/integration_tests/minifi_agent_02/build/nifi-minifi-cpp-0.7.0/minifi-python//examples/SentimentAnalysis.py(17): [2020-09-04 15:49:53.424] [org::apache::nifi::minifi::python::PythonCreator] [warning] Cannot load SentimentAnalysis because of ModuleNotFoundError: No module named 'vaderSentiment' At: /Users/adamhunyadi/Documents/Projects/integration_tests/minifi_agent_02/build/nifi-minifi-cpp-0.7.0/minifi-python//examples/SentimentAnalysis.py(17): {code} *Proposal:* One should investigate and fix the error. 
was: *Acceptance criteria:* - GIVEN a flow set up as highlighted in blue below - WHEN the flow is with a python script set to add a new attribute to a flow file - THEN no error is produced and the newly added attribute is logged in LogAttribute {code:c++|title=Example script} def describe(processor): processor.setDescription("Adds an attribute to your flow files") def onInitialize(processor): processor.setSupportsDynamicProperties() def onTrigger(context, session): flow_file = session.get() if flow_file is not None: flow_file.addAttribute("Python attribute","attributevalue") session.transfer(flow_file, REL_SUCCESS) {code} *Background:* Currently, even though the tests for ExecutePythonProcessor are passing, if I were to try and load up a configuration that contains an ExecutePythonProcessor, it fails due to trying to load an incorrect script file. Sample flow: {color:#0747A6}GenerateFlowFile -(success)-> ExecutePythonProcessor -(success,failure)-> LogAttribute{color} When trying to check in debugger, it seems like the processors script file is always replaced with an incorrect one, and the processor fails to start. !https://files.slack.com/files-pri/T024BEHTP-F01942KD4BV/screenshot_2020-08-19_at_13.08.46.png|width=1427,height=288! This is how it is set: {code:c++|title=Trace of where the property is overridden} ConfigurableComponent::setProperty() std::shared_ptr create() ClassLoader::instantiate() PythonCreator::configure() <- here the first element of classpaths_ is read to overwrite the config
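The call trace in the description says PythonCreator::configure() reads the first element of classpaths_ and overwrites the configured script file. A hedged, self-contained sketch of that failure mode, using hypothetical names rather than the actual MiNiFi classes, shows how an unconditional overwrite clobbers an explicitly configured value and how a guard would preserve it:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Hypothetical model of the processor (not the real MiNiFi class):
// script_file is the configured "Script File" property.
struct Processor {
  std::string script_file;
  void setProperty(const std::string& value) { script_file = value; }
};

// Models the behavior described in the trace: the first classpaths_ entry
// unconditionally overwrites whatever the flow configuration set.
void configureAlwaysOverwrite(Processor& p, const std::vector<std::string>& classpaths) {
  if (!classpaths.empty()) p.setProperty(classpaths.front());
}

// A guarded variant: an explicitly configured script file is kept, and the
// classpath entry is only used as a fallback default.
void configureKeepExplicit(Processor& p, const std::vector<std::string>& classpaths) {
  if (p.script_file.empty() && !classpaths.empty()) p.setProperty(classpaths.front());
}
```

This is only an illustration of the reported symptom; the actual fix belongs in the real PythonCreator/ExecutePythonProcessor initialization order.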
[jira] [Updated] (MINIFICPP-1355) Investigate and fix the initialization of ExecutePythonProcessor
[ https://issues.apache.org/jira/browse/MINIFICPP-1355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Hunyadi updated MINIFICPP-1355: Description: *Acceptance criteria:* - GIVEN a flow set up as highlighted in blue below - WHEN the flow is with a python script set to add a new attribute to a flow file - THEN no error is produced and the newly added attribute is logged in LogAttribute {code:c++|title=Example script} def describe(processor): processor.setDescription("Adds an attribute to your flow files") def onInitialize(processor): processor.setSupportsDynamicProperties() def onTrigger(context, session): flow_file = session.get() if flow_file is not None: flow_file.addAttribute("Python attribute","attributevalue") session.transfer(flow_file, REL_SUCCESS) {code} *Background:* Currently, even though the tests for ExecutePythonProcessor are passing, if I were to try and load up a configuration that contains an ExecutePythonProcessor, it fails due to trying to load an incorrect script file. Sample flow: {color:#0747A6}GenerateFlowFile -(success)-> ExecutePythonProcessor -(success,failure)-> LogAttribute{color} When trying to check in debugger, it seems like the processors script file is always replaced with an incorrect one, and the processor fails to start. !https://files.slack.com/files-pri/T024BEHTP-F01942KD4BV/screenshot_2020-08-19_at_13.08.46.png|width=1427,height=288! 
This is how it is set: {code:c++|title=Trace of where the property is overridden} ConfigurableComponent::setProperty() std::shared_ptr create() ClassLoader::instantiate() PythonCreator::configure() <- here the first element of classpaths_ is read to overwrite the config FlowController::initializeExternalComponents() {code} When trying to perform the same thing on the 0.7.0 release version, the startup already shows some kind of errors, although they seem different: {code:python|title=Error log} [2020-09-04 15:49:53.424] [org::apache::nifi::minifi::python::processors::ExecutePythonProcessor] [error] Caught Exception ModuleNotFoundError: No module named 'google' At: /Users/adamhunyadi/Documents/Projects/integration_tests/minifi_agent_02/build/nifi-minifi-cpp-0.7.0/minifi-python//google/SentimentAnalyzer.py(28): [2020-09-04 15:49:53.424] [org::apache::nifi::minifi::python::PythonCreator] [warning] Cannot load SentimentAnalyzer because of ModuleNotFoundError: No module named 'google' At: /Users/adamhunyadi/Documents/Projects/integration_tests/minifi_agent_02/build/nifi-minifi-cpp-0.7.0/minifi-python//google/SentimentAnalyzer.py(28): [2020-09-04 15:49:53.424] [org::apache::nifi::minifi::python::processors::ExecutePythonProcessor] [error] Caught Exception ModuleNotFoundError: No module named 'vaderSentiment' At: /Users/adamhunyadi/Documents/Projects/integration_tests/minifi_agent_02/build/nifi-minifi-cpp-0.7.0/minifi-python//examples/SentimentAnalysis.py(17): [2020-09-04 15:49:53.424] [org::apache::nifi::minifi::python::PythonCreator] [warning] Cannot load SentimentAnalysis because of ModuleNotFoundError: No module named 'vaderSentiment' At: /Users/adamhunyadi/Documents/Projects/integration_tests/minifi_agent_02/build/nifi-minifi-cpp-0.7.0/minifi-python//examples/SentimentAnalysis.py(17): {code} *Proposal:* One should investigate and fix the error. 
was: *Acceptance criteria:* - GIVEN a flow set up in EFM illustrated below - WHEN the flow is with a python script set to add a new attribute to a flow file - THEN no error is produced and the newly added attribute is logged in LogAttribute {code:c++|title=Example script} def describe(processor): processor.setDescription("Adds an attribute to your flow files") def onInitialize(processor): processor.setSupportsDynamicProperties() def onTrigger(context, session): flow_file = session.get() if flow_file is not None: flow_file.addAttribute("Python attribute","attributevalue") session.transfer(flow_file, REL_SUCCESS) {code} *Background:* Currently, even though the tests for ExecutePythonProcessor are passing, if I were to try and load up a configuration that contains an ExecutePythonProcessor, it fails due to trying to load an incorrect script file. Sample flow: !Screenshot 2020-09-04 at 16.02.41.png|width=467,height=100! When trying to check in debugger, it seems like the processors script file is always replaced with an incorrect one, and the processor fails to start. !https://files.slack.com/files-pri/T024BEHTP-F01942KD4BV/screenshot_2020-08-19_at_13.08.46.png|width=1427,height=288! This is how it is set: {code:c++|title=Trace of where the property is overridden} ConfigurableComponent::setProperty() std::shared_ptr create() ClassLoader::instantiate() PythonCreator::configure() <- here the first element of classpaths_ is read to overwrite the config FlowController::initializeExternalComponents() {code} When trying to
[GitHub] [nifi] pgyori commented on a change in pull request #4481: NIFI-7624: ListenFTP processor
pgyori commented on a change in pull request #4481: URL: https://github.com/apache/nifi/pull/4481#discussion_r483647107 ## File path: nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/ftp/NifiFtpServer.java ## @@ -0,0 +1,275 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.apache.nifi.processors.standard.ftp; + +import org.apache.ftpserver.ConnectionConfig; +import org.apache.ftpserver.ConnectionConfigFactory; +import org.apache.ftpserver.FtpServer; +import org.apache.ftpserver.FtpServerConfigurationException; +import org.apache.ftpserver.FtpServerFactory; +import org.apache.ftpserver.command.Command; +import org.apache.ftpserver.command.CommandFactory; +import org.apache.ftpserver.command.CommandFactoryFactory; +import org.apache.ftpserver.command.impl.ABOR; +import org.apache.ftpserver.command.impl.AUTH; +import org.apache.ftpserver.command.impl.CDUP; +import org.apache.ftpserver.command.impl.CWD; +import org.apache.ftpserver.command.impl.EPRT; +import org.apache.ftpserver.command.impl.EPSV; +import org.apache.ftpserver.command.impl.FEAT; +import org.apache.ftpserver.command.impl.LIST; +import org.apache.ftpserver.command.impl.MDTM; +import org.apache.ftpserver.command.impl.MKD; +import org.apache.ftpserver.command.impl.MLSD; +import org.apache.ftpserver.command.impl.MLST; +import org.apache.ftpserver.command.impl.MODE; +import org.apache.ftpserver.command.impl.NLST; +import org.apache.ftpserver.command.impl.NOOP; +import org.apache.ftpserver.command.impl.OPTS; +import org.apache.ftpserver.command.impl.PASS; +import org.apache.ftpserver.command.impl.PASV; +import org.apache.ftpserver.command.impl.PBSZ; +import org.apache.ftpserver.command.impl.PORT; +import org.apache.ftpserver.command.impl.PROT; +import org.apache.ftpserver.command.impl.PWD; +import org.apache.ftpserver.command.impl.QUIT; +import org.apache.ftpserver.command.impl.REIN; +import org.apache.ftpserver.command.impl.RMD; +import org.apache.ftpserver.command.impl.SITE; +import org.apache.ftpserver.command.impl.SITE_DESCUSER; +import org.apache.ftpserver.command.impl.SITE_HELP; +import org.apache.ftpserver.command.impl.SITE_STAT; +import org.apache.ftpserver.command.impl.SITE_WHO; +import org.apache.ftpserver.command.impl.SITE_ZONE; +import 
org.apache.ftpserver.command.impl.SIZE; +import org.apache.ftpserver.command.impl.STAT; +import org.apache.ftpserver.command.impl.STRU; +import org.apache.ftpserver.command.impl.SYST; +import org.apache.ftpserver.command.impl.TYPE; +import org.apache.ftpserver.command.impl.USER; +import org.apache.ftpserver.ftplet.Authority; +import org.apache.ftpserver.ftplet.User; +import org.apache.ftpserver.listener.Listener; +import org.apache.ftpserver.listener.ListenerFactory; +import org.apache.ftpserver.usermanager.impl.BaseUser; +import org.apache.ftpserver.usermanager.impl.WritePermission; +import org.apache.nifi.processor.ProcessSessionFactory; +import org.apache.nifi.processor.exception.ProcessException; +import org.apache.nifi.processors.standard.ftp.commands.FtpCommandHELP; +import org.apache.nifi.processors.standard.ftp.commands.FtpCommandSTOR; +import org.apache.nifi.processors.standard.ftp.commands.NotSupportedCommand; +import org.apache.nifi.processors.standard.ftp.filesystem.DefaultVirtualFileSystem; +import org.apache.nifi.processors.standard.ftp.filesystem.VirtualFileSystem; +import org.apache.nifi.processors.standard.ftp.filesystem.VirtualFileSystemFactory; + +import java.util.Collections; +import java.util.HashMap; +import java.util.List; +import java.util.Map; +import java.util.Objects; +import java.util.concurrent.CountDownLatch; +import java.util.concurrent.atomic.AtomicReference; + +public class NifiFtpServer { + +private final Map commandMap = new HashMap<>(); +private final FtpCommandHELP customHelpCommand = new FtpCommandHELP(); + +private final FtpServer server; +private static final String HOME_DIRECTORY = "/virtual/ftproot"; + +private NifiFtpServer(Builder builder) throws ProcessException { +try { +initializeCommandMap(builder.sessionFactory, builder.sessionFactorySetSignal); + +
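The long import list above exists because NifiFtpServer registers each supported FTP command explicitly in a command map, so anything unregistered can be answered with a "not supported" reply. The pattern itself is small; here is an illustrative sketch in C++ with hypothetical names (the real implementation is the Java class in the PR, built on Apache FtpServer's CommandFactory):

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <string>
#include <utility>

// A command handler produces the server's reply line.
using CommandHandler = std::function<std::string()>;

// Hypothetical registry mirroring the command-map pattern: only explicitly
// added commands are dispatched; unknown commands get a 502 reply instead
// of silently falling through to a default implementation.
class CommandRegistry {
 public:
  void add(const std::string& name, CommandHandler handler) {
    commands_[name] = std::move(handler);
  }
  std::string dispatch(const std::string& name) const {
    const auto it = commands_.find(name);
    if (it == commands_.end()) return "502 Command not implemented";
    return it->second();
  }

 private:
  std::map<std::string, CommandHandler> commands_;
};
```

Whitelisting commands this way keeps the processor's attack surface explicit: adding support for a new FTP verb is a deliberate one-line registration rather than an inherited default.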
[jira] [Updated] (MINIFICPP-1355) Investigate and fix the initialization of ExecutePythonProcessor
[ https://issues.apache.org/jira/browse/MINIFICPP-1355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Hunyadi updated MINIFICPP-1355: Description: *Acceptance criteria:* - GIVEN a flow set up in EFM illustrated below - WHEN the flow is with a python script set to add a new attribute to a flow file - THEN no error is produced and the newly added attribute is logged in LogAttribute {code:c++|title=Example script} def describe(processor): processor.setDescription("Adds an attribute to your flow files") def onInitialize(processor): processor.setSupportsDynamicProperties() def onTrigger(context, session): flow_file = session.get() if flow_file is not None: flow_file.addAttribute("Python attribute","attributevalue") session.transfer(flow_file, REL_SUCCESS) {code} *Background:* Currently, even though the tests for ExecutePythonProcessor are passing, if I were to try and load up a configuration that contains an ExecutePythonProcessor, it fails due to trying to load an incorrect script file. Sample flow: !Screenshot 2020-09-04 at 16.02.41.png|width=467,height=100! When trying to check in debugger, it seems like the processors script file is always replaced with an incorrect one, and the processor fails to start. !https://files.slack.com/files-pri/T024BEHTP-F01942KD4BV/screenshot_2020-08-19_at_13.08.46.png|width=1427,height=288! 
This is how it is set: {code:c++|title=Trace of where the property is overridden} ConfigurableComponent::setProperty() std::shared_ptr create() ClassLoader::instantiate() PythonCreator::configure() <- here the first element of classpaths_ is read to overwrite the config FlowController::initializeExternalComponents() {code} When trying to perform the same thing on the 0.7.0 release version, the startup already shows some kind of errors, although they seem different: {code:python|title=Error log} [2020-09-04 15:49:53.424] [org::apache::nifi::minifi::python::processors::ExecutePythonProcessor] [error] Caught Exception ModuleNotFoundError: No module named 'google' At: /Users/adamhunyadi/Documents/Projects/integration_tests/minifi_agent_02/build/nifi-minifi-cpp-0.7.0/minifi-python//google/SentimentAnalyzer.py(28): [2020-09-04 15:49:53.424] [org::apache::nifi::minifi::python::PythonCreator] [warning] Cannot load SentimentAnalyzer because of ModuleNotFoundError: No module named 'google' At: /Users/adamhunyadi/Documents/Projects/integration_tests/minifi_agent_02/build/nifi-minifi-cpp-0.7.0/minifi-python//google/SentimentAnalyzer.py(28): [2020-09-04 15:49:53.424] [org::apache::nifi::minifi::python::processors::ExecutePythonProcessor] [error] Caught Exception ModuleNotFoundError: No module named 'vaderSentiment' At: /Users/adamhunyadi/Documents/Projects/integration_tests/minifi_agent_02/build/nifi-minifi-cpp-0.7.0/minifi-python//examples/SentimentAnalysis.py(17): [2020-09-04 15:49:53.424] [org::apache::nifi::minifi::python::PythonCreator] [warning] Cannot load SentimentAnalysis because of ModuleNotFoundError: No module named 'vaderSentiment' At: /Users/adamhunyadi/Documents/Projects/integration_tests/minifi_agent_02/build/nifi-minifi-cpp-0.7.0/minifi-python//examples/SentimentAnalysis.py(17): {code} *Proposal:* One should investigate and fix the error. 
was: *Acceptance criteria:* GIVEN a flow set up in EFM illustrated below WHEN the flow is with a python script set to add a new attribute to a flow file THEN no error is produced and the newly added attribute is logged in LogAttribute {code:c++|title=Example script} def describe(processor): processor.setDescription("Adds an attribute to your flow files") def onInitialize(processor): processor.setSupportsDynamicProperties() def onTrigger(context, session): flow_file = session.get() if flow_file is not None: flow_file.addAttribute("Python attribute","attributevalue") session.transfer(flow_file, REL_SUCCESS) {code} *Background:* Currently, even though the tests for ExecutePythonProcessor are passing, if I were to try and load up a configuration that contains an ExecutePythonProcessor, it fails due to trying to load an incorrect script file. Sample flow: !Screenshot 2020-09-04 at 16.02.41.png|width=467,height=100! When trying to check in debugger, it seems like the processors script file is always replaced with an incorrect one, and the processor fails to start. !https://files.slack.com/files-pri/T024BEHTP-F01942KD4BV/screenshot_2020-08-19_at_13.08.46.png|width=1427,height=288! This is how it is set: {code:c++|title=Trace of where the property is overridden} ConfigurableComponent::setProperty() std::shared_ptr create() ClassLoader::instantiate() PythonCreator::configure() <- here the first element of classpaths_ is read to overwrite the config FlowController::initializeExternalComponents() {code} When trying to perform the same thing on the 0.7.0 release version, the startup
[jira] [Updated] (MINIFICPP-1355) Investigate and fix the initialization of ExecutePythonProcessor
[ https://issues.apache.org/jira/browse/MINIFICPP-1355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Hunyadi updated MINIFICPP-1355: Description: *Acceptance criteria:* - GIVEN a flow set up in EFM illustrated below - WHEN the flow is with a python script set to add a new attribute to a flow file - THEN no error is produced and the newly added attribute is logged in LogAttribute {code:c++|title=Example script} def describe(processor): processor.setDescription("Adds an attribute to your flow files") def onInitialize(processor): processor.setSupportsDynamicProperties() def onTrigger(context, session): flow_file = session.get() if flow_file is not None: flow_file.addAttribute("Python attribute","attributevalue") session.transfer(flow_file, REL_SUCCESS) {code} *Background:* Currently, even though the tests for ExecutePythonProcessor are passing, if I were to try and load up a configuration that contains an ExecutePythonProcessor, it fails due to trying to load an incorrect script file. Sample flow: !Screenshot 2020-09-04 at 16.02.41.png|width=467,height=100! When trying to check in debugger, it seems like the processors script file is always replaced with an incorrect one, and the processor fails to start. !https://files.slack.com/files-pri/T024BEHTP-F01942KD4BV/screenshot_2020-08-19_at_13.08.46.png|width=1427,height=288! 
This is how it is set: {code:c++|title=Trace of where the property is overridden} ConfigurableComponent::setProperty() std::shared_ptr create() ClassLoader::instantiate() PythonCreator::configure() <- here the first element of classpaths_ is read to overwrite the config FlowController::initializeExternalComponents() {code} When trying to perform the same thing on the 0.7.0 release version, the startup already shows some kind of errors, although they seem different: {code:python|title=Error log} [2020-09-04 15:49:53.424] [org::apache::nifi::minifi::python::processors::ExecutePythonProcessor] [error] Caught Exception ModuleNotFoundError: No module named 'google' At: /Users/adamhunyadi/Documents/Projects/integration_tests/minifi_agent_02/build/nifi-minifi-cpp-0.7.0/minifi-python//google/SentimentAnalyzer.py(28): [2020-09-04 15:49:53.424] [org::apache::nifi::minifi::python::PythonCreator] [warning] Cannot load SentimentAnalyzer because of ModuleNotFoundError: No module named 'google' At: /Users/adamhunyadi/Documents/Projects/integration_tests/minifi_agent_02/build/nifi-minifi-cpp-0.7.0/minifi-python//google/SentimentAnalyzer.py(28): [2020-09-04 15:49:53.424] [org::apache::nifi::minifi::python::processors::ExecutePythonProcessor] [error] Caught Exception ModuleNotFoundError: No module named 'vaderSentiment' At: /Users/adamhunyadi/Documents/Projects/integration_tests/minifi_agent_02/build/nifi-minifi-cpp-0.7.0/minifi-python//examples/SentimentAnalysis.py(17): [2020-09-04 15:49:53.424] [org::apache::nifi::minifi::python::PythonCreator] [warning] Cannot load SentimentAnalysis because of ModuleNotFoundError: No module named 'vaderSentiment' At: /Users/adamhunyadi/Documents/Projects/integration_tests/minifi_agent_02/build/nifi-minifi-cpp-0.7.0/minifi-python//examples/SentimentAnalysis.py(17): {code} *Proposal:* One should investigate and fix the error. 
was: *Acceptance criteria:* - GIVEN a flow set up in EFM illustrated below - WHEN the flow is with a python script set to add a new attribute to a flow file - THEN no error is produced and the newly added attribute is logged in LogAttribute {code:c++|title=Example script} def describe(processor): processor.setDescription("Adds an attribute to your flow files") def onInitialize(processor): processor.setSupportsDynamicProperties() def onTrigger(context, session): flow_file = session.get() if flow_file is not None: flow_file.addAttribute("Python attribute","attributevalue") session.transfer(flow_file, REL_SUCCESS) {code} *Background:* Currently, even though the tests for ExecutePythonProcessor are passing, if I were to try and load up a configuration that contains an ExecutePythonProcessor, it fails due to trying to load an incorrect script file. Sample flow: !Screenshot 2020-09-04 at 16.02.41.png|width=467,height=100! When trying to check in debugger, it seems like the processors script file is always replaced with an incorrect one, and the processor fails to start. !https://files.slack.com/files-pri/T024BEHTP-F01942KD4BV/screenshot_2020-08-19_at_13.08.46.png|width=1427,height=288! This is how it is set: {code:c++|title=Trace of where the property is overridden} ConfigurableComponent::setProperty() std::shared_ptr create() ClassLoader::instantiate() PythonCreator::configure() <- here the first element of classpaths_ is read to overwrite the config FlowController::initializeExternalComponents() {code} When trying to perform the same thing on the 0.7.0 release version, the
[jira] [Updated] (MINIFICPP-1355) Investigate and fix the initialization of ExecutePythonProcessor
[ https://issues.apache.org/jira/browse/MINIFICPP-1355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Hunyadi updated MINIFICPP-1355: Description: *Acceptance criteria:* GIVEN a flow set up in EFM illustrated below WHEN the flow is with a python script set to add a new attribute to a flow file THEN no error is produced and the newly added attribute is logged in LogAttribute Example script: {code:c++|title=Trace of where the property is overridden} def describe(processor): processor.setDescription("Adds an attribute to your flow files") def onInitialize(processor): processor.setSupportsDynamicProperties() def onTrigger(context, session): flow_file = session.get() if flow_file is not None: flow_file.addAttribute("Python attribute","attributevalue") session.transfer(flow_file, REL_SUCCESS) {code} *Background:* Currently, even though the tests for ExecutePythonProcessor are passing, if I were to try and load up a configuration that contains an ExecutePythonProcessor, it fails due to trying to load an incorrect script file. Sample flow: !Screenshot 2020-09-04 at 16.02.41.png|width=467,height=100! When trying to check in debugger, it seems like the processors script file is always replaced with an incorrect one, and the processor fails to start. !https://files.slack.com/files-pri/T024BEHTP-F01942KD4BV/screenshot_2020-08-19_at_13.08.46.png|width=1427,height=288! 
This is how it is set: {code:c++|title=Trace of where the property is overridden} ConfigurableComponent::setProperty() std::shared_ptr create() ClassLoader::instantiate() PythonCreator::configure() <- here the first element of classpaths_ is read to overwrite the config FlowController::initializeExternalComponents() {code} When trying to perform the same thing on the 0.7.0 release version, the startup already shows some kind of errors, although they seem different: {code:python|title=Error log} [2020-09-04 15:49:53.424] [org::apache::nifi::minifi::python::processors::ExecutePythonProcessor] [error] Caught Exception ModuleNotFoundError: No module named 'google' At: /Users/adamhunyadi/Documents/Projects/integration_tests/minifi_agent_02/build/nifi-minifi-cpp-0.7.0/minifi-python//google/SentimentAnalyzer.py(28): [2020-09-04 15:49:53.424] [org::apache::nifi::minifi::python::PythonCreator] [warning] Cannot load SentimentAnalyzer because of ModuleNotFoundError: No module named 'google' At: /Users/adamhunyadi/Documents/Projects/integration_tests/minifi_agent_02/build/nifi-minifi-cpp-0.7.0/minifi-python//google/SentimentAnalyzer.py(28): [2020-09-04 15:49:53.424] [org::apache::nifi::minifi::python::processors::ExecutePythonProcessor] [error] Caught Exception ModuleNotFoundError: No module named 'vaderSentiment' At: /Users/adamhunyadi/Documents/Projects/integration_tests/minifi_agent_02/build/nifi-minifi-cpp-0.7.0/minifi-python//examples/SentimentAnalysis.py(17): [2020-09-04 15:49:53.424] [org::apache::nifi::minifi::python::PythonCreator] [warning] Cannot load SentimentAnalysis because of ModuleNotFoundError: No module named 'vaderSentiment' At: /Users/adamhunyadi/Documents/Projects/integration_tests/minifi_agent_02/build/nifi-minifi-cpp-0.7.0/minifi-python//examples/SentimentAnalysis.py(17): {code} *Proposal:* One should investigate and fix the error. 
was: *Background:* Currently, even though the tests for ExecutePythonProcessor are passing, if I were to try and load up a configuration that contains an ExecutePythonProcessor, it fails due to trying to load an incorrect script file. Sample flow: !Screenshot 2020-09-04 at 16.02.41.png|width=467,height=100! When trying to check in debugger, it seems like the processors script file is always replaced with an incorrect one, and the processor fails to start. !https://files.slack.com/files-pri/T024BEHTP-F01942KD4BV/screenshot_2020-08-19_at_13.08.46.png|width=1427,height=288! This is how it is set: {code:c++|title=Trace of where the property is overridden} ConfigurableComponent::setProperty() std::shared_ptr create() ClassLoader::instantiate() PythonCreator::configure() <- here the first element of classpaths_ is read to overwrite the config FlowController::initializeExternalComponents() {code} When trying to perform the same thing on the 0.7.0 release version, the startup already shows some kind of errors, although they seem different: {code:python|title=Error log} [2020-09-04 15:49:53.424] [org::apache::nifi::minifi::python::processors::ExecutePythonProcessor] [error] Caught Exception ModuleNotFoundError: No module named 'google' At: /Users/adamhunyadi/Documents/Projects/integration_tests/minifi_agent_02/build/nifi-minifi-cpp-0.7.0/minifi-python//google/SentimentAnalyzer.py(28): [2020-09-04 15:49:53.424] [org::apache::nifi::minifi::python::PythonCreator] [warning] Cannot load SentimentAnalyzer because of ModuleNotFoundError: No module named 'google' At:
[jira] [Updated] (MINIFICPP-1355) Investigate and fix the initialization of ExecutePythonProcessor
[ https://issues.apache.org/jira/browse/MINIFICPP-1355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Hunyadi updated MINIFICPP-1355: Description: *Acceptance criteria:* GIVEN a flow set up in EFM illustrated below WHEN the flow is with a python script set to add a new attribute to a flow file THEN no error is produced and the newly added attribute is logged in LogAttribute {code:c++|title=Example script} def describe(processor): processor.setDescription("Adds an attribute to your flow files") def onInitialize(processor): processor.setSupportsDynamicProperties() def onTrigger(context, session): flow_file = session.get() if flow_file is not None: flow_file.addAttribute("Python attribute","attributevalue") session.transfer(flow_file, REL_SUCCESS) {code} *Background:* Currently, even though the tests for ExecutePythonProcessor are passing, if I were to try and load up a configuration that contains an ExecutePythonProcessor, it fails due to trying to load an incorrect script file. Sample flow: !Screenshot 2020-09-04 at 16.02.41.png|width=467,height=100! When trying to check in debugger, it seems like the processors script file is always replaced with an incorrect one, and the processor fails to start. !https://files.slack.com/files-pri/T024BEHTP-F01942KD4BV/screenshot_2020-08-19_at_13.08.46.png|width=1427,height=288! 
This is how it is set: {code:c++|title=Trace of where the property is overridden} ConfigurableComponent::setProperty() std::shared_ptr create() ClassLoader::instantiate() PythonCreator::configure() <- here the first element of classpaths_ is read to overwrite the config FlowController::initializeExternalComponents() {code} When trying to perform the same thing on the 0.7.0 release version, the startup already shows some kind of errors, although they seem different: {code:python|title=Error log} [2020-09-04 15:49:53.424] [org::apache::nifi::minifi::python::processors::ExecutePythonProcessor] [error] Caught Exception ModuleNotFoundError: No module named 'google' At: /Users/adamhunyadi/Documents/Projects/integration_tests/minifi_agent_02/build/nifi-minifi-cpp-0.7.0/minifi-python//google/SentimentAnalyzer.py(28): [2020-09-04 15:49:53.424] [org::apache::nifi::minifi::python::PythonCreator] [warning] Cannot load SentimentAnalyzer because of ModuleNotFoundError: No module named 'google' At: /Users/adamhunyadi/Documents/Projects/integration_tests/minifi_agent_02/build/nifi-minifi-cpp-0.7.0/minifi-python//google/SentimentAnalyzer.py(28): [2020-09-04 15:49:53.424] [org::apache::nifi::minifi::python::processors::ExecutePythonProcessor] [error] Caught Exception ModuleNotFoundError: No module named 'vaderSentiment' At: /Users/adamhunyadi/Documents/Projects/integration_tests/minifi_agent_02/build/nifi-minifi-cpp-0.7.0/minifi-python//examples/SentimentAnalysis.py(17): [2020-09-04 15:49:53.424] [org::apache::nifi::minifi::python::PythonCreator] [warning] Cannot load SentimentAnalysis because of ModuleNotFoundError: No module named 'vaderSentiment' At: /Users/adamhunyadi/Documents/Projects/integration_tests/minifi_agent_02/build/nifi-minifi-cpp-0.7.0/minifi-python//examples/SentimentAnalysis.py(17): {code} *Proposal:* One should investigate and fix the error. 
was: *Acceptance criteria:* GIVEN a flow set up in EFM illustrated below WHEN the flow is with a python script set to add a new attribute to a flow file THEN no error is produced and the newly added attribute is logged in LogAttribute Example script: {code:c++|title=Trace of where the property is overridden} def describe(processor): processor.setDescription("Adds an attribute to your flow files") def onInitialize(processor): processor.setSupportsDynamicProperties() def onTrigger(context, session): flow_file = session.get() if flow_file is not None: flow_file.addAttribute("Python attribute","attributevalue") session.transfer(flow_file, REL_SUCCESS) {code} *Background:* Currently, even though the tests for ExecutePythonProcessor are passing, if I were to try and load up a configuration that contains an ExecutePythonProcessor, it fails due to trying to load an incorrect script file. Sample flow: !Screenshot 2020-09-04 at 16.02.41.png|width=467,height=100! When trying to check in debugger, it seems like the processors script file is always replaced with an incorrect one, and the processor fails to start. !https://files.slack.com/files-pri/T024BEHTP-F01942KD4BV/screenshot_2020-08-19_at_13.08.46.png|width=1427,height=288! This is how it is set: {code:c++|title=Trace of where the property is overridden} ConfigurableComponent::setProperty() std::shared_ptr create() ClassLoader::instantiate() PythonCreator::configure() <- here the first element of classpaths_ is read to overwrite the config FlowController::initializeExternalComponents() {code} When trying to perform the same thing on the 0.7.0
[jira] [Created] (MINIFICPP-1355) Investigate and fix the initialization of ExecutePythonProcessor
Adam Hunyadi created MINIFICPP-1355: --- Summary: Investigate and fix the initialization of ExecutePythonProcessor Key: MINIFICPP-1355 URL: https://issues.apache.org/jira/browse/MINIFICPP-1355 Project: Apache NiFi MiNiFi C++ Issue Type: Task Affects Versions: 0.7.0 Reporter: Adam Hunyadi Fix For: 1.0.0 Attachments: Screenshot 2020-09-04 at 16.02.41.png *Background:* Currently, even though the tests for ExecutePythonProcessor are passing, if I try to load a configuration that contains an ExecutePythonProcessor, it fails because it tries to load an incorrect script file. Sample flow: !Screenshot 2020-09-04 at 16.02.41.png|width=467,height=100! When checking in a debugger, it seems that the processor's script file is always replaced with an incorrect one, and the processor fails to start. !https://files.slack.com/files-pri/T024BEHTP-F01942KD4BV/screenshot_2020-08-19_at_13.08.46.png|width=1427,height=288! This is how it is set: {code:c++|title=Trace of where the property is overridden} ConfigurableComponent::setProperty() std::shared_ptr create() ClassLoader::instantiate() PythonCreator::configure() <- here the first element of classpaths_ is read to overwrite the config FlowController::initializeExternalComponents() {code} When trying to perform the same thing on the 0.7.0 release version, the startup already shows some errors, although they seem different: {code:none|title=Error log} [2020-09-04 15:49:53.424] [org::apache::nifi::minifi::python::processors::ExecutePythonProcessor] [error] Caught Exception ModuleNotFoundError: No module named 'google' At: 
/Users/adamhunyadi/Documents/Projects/integration_tests/minifi_agent_02/build/nifi-minifi-cpp-0.7.0/minifi-python//google/SentimentAnalyzer.py(28): [2020-09-04 15:49:53.424] [org::apache::nifi::minifi::python::processors::ExecutePythonProcessor] [error] Caught Exception ModuleNotFoundError: No module named 'vaderSentiment' At: /Users/adamhunyadi/Documents/Projects/integration_tests/minifi_agent_02/build/nifi-minifi-cpp-0.7.0/minifi-python//examples/SentimentAnalysis.py(17): [2020-09-04 15:49:53.424] [org::apache::nifi::minifi::python::PythonCreator] [warning] Cannot load SentimentAnalysis because of ModuleNotFoundError: No module named 'vaderSentiment' At: /Users/adamhunyadi/Documents/Projects/integration_tests/minifi_agent_02/build/nifi-minifi-cpp-0.7.0/minifi-python//examples/SentimentAnalysis.py(17): {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [nifi] simonbence commented on a change in pull request #4420: NIFI-7429 Adding status history for system level metrics
simonbence commented on a change in pull request #4420: URL: https://github.com/apache/nifi/pull/4420#discussion_r483633449 ## File path: nifi-api/src/main/java/org/apache/nifi/controller/status/NodeStatus.java ## @@ -0,0 +1,229 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.nifi.controller.status; + +import java.util.ArrayList; +import java.util.List; + +/** + * The status of a NiFi node. + */ +public class NodeStatus implements Cloneable { Review comment: I checked for StorageStatus versus StorageUsage and now I remember: StorageUsage (the original) is from the `nifi-framework-core` module, but the places where we intend to use StorageStatus are in the nifi-api (as these instances are exposed, together with other metrics-related DTOs), so I needed to add these. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Resolved] (NIFI-6539) Ability to initialize nifi-stateless from a flow.xml.gz
[ https://issues.apache.org/jira/browse/NIFI-6539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Kegley resolved NIFI-6539. Resolution: Won't Fix > Ability to initialize nifi-stateless from a flow.xml.gz > --- > > Key: NIFI-6539 > URL: https://issues.apache.org/jira/browse/NIFI-6539 > Project: Apache NiFi > Issue Type: New Feature > Components: NiFi Stateless >Reporter: David Kegley >Priority: Minor > Time Spent: 4h 10m > Remaining Estimate: 0h > > nifi-stateless currently supports running a flow from a registry given a > flow_id and a registry endpoint. To facilitate environments where there is > no connectivity to a registry, I would like the ability to run nifi-stateless > from a flow.xml.gz on the filesystem. When running in a container the > flow.xml.gz could be added to the container image and tagged with a version, > or mounted as a volume at runtime. -- This message was sent by Atlassian Jira (v8.3.4#803005)
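The feature request above amounts to reading a gzipped flow definition from the local filesystem instead of fetching it from a registry. A minimal sketch of just the loading step, using a hypothetical helper class (the class and method names are illustrative and are not part of the actual nifi-stateless API):

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.zip.GZIPInputStream;

public class FlowFileLoader {
    // Reads a gzipped flow definition (flow.xml.gz) from disk and returns the raw XML.
    // Illustrative helper only; the real nifi-stateless entry points differ.
    public static String readFlowXml(Path flowXmlGz) throws IOException {
        try (InputStream in = new GZIPInputStream(Files.newInputStream(flowXmlGz))) {
            return new String(in.readAllBytes(), StandardCharsets.UTF_8);
        }
    }
}
```

In a container, the path passed to such a loader could point either at a file baked into the image or at a volume mounted at runtime, matching the two deployment options the ticket mentions.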
[GitHub] [nifi] simonbence commented on a change in pull request #4420: NIFI-7429 Adding status history for system level metrics
simonbence commented on a change in pull request #4420: URL: https://github.com/apache/nifi/pull/4420#discussion_r483623029 ## File path: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/status/history/VolatileComponentStatusRepository.java ## @@ -164,6 +183,182 @@ public StatusHistory getRemoteProcessGroupStatusHistory(final String remoteGroup return getStatusHistory(remoteGroupId, true, DEFAULT_RPG_METRICS, start, end, preferredDataPoints); } +@Override +public StatusHistory getNodeStatusHistory() { Review comment: It is covered in VolatileComponentStatusRepositoryTest#testNodeHistory (which should be renamed however) This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [nifi] naddym opened a new pull request #4511: NIFI-7789: Small Correction in additionalDetials of ScriptedTransform…
naddym opened a new pull request #4511: URL: https://github.com/apache/nifi/pull/4511 …Record processor documentation Thank you for submitting a contribution to Apache NiFi. Please provide a short description of the PR here: Description of PR _Enables X functionality; fixes bug NIFI-YYYY._ In order to streamline the review of the contribution we ask you to ensure the following steps have been taken: ### For all changes: - [x] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? - [x] Does your PR title start with **NIFI-XXXX** where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character. - [x] Has your PR been rebased against the latest commit within the target branch (typically `main`)? - [ ] Is your initial contribution a single, squashed commit? _Additional commits in response to PR reviewer feedback should be made on this branch and pushed to allow change tracking. Do not `squash` or use `--force` when pushing to allow for clean monitoring of changes._ ### For code changes: - [ ] Have you ensured that the full suite of tests is executed via `mvn -Pcontrib-check clean install` at the root `nifi` folder? - [ ] Have you written or updated unit tests to verify your changes? - [ ] Have you verified that the full build is successful on JDK 8? - [ ] Have you verified that the full build is successful on JDK 11? - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the `LICENSE` file, including the main `LICENSE` file under `nifi-assembly`? - [ ] If applicable, have you updated the `NOTICE` file, including the main `NOTICE` file found under `nifi-assembly`? - [ ] If adding new Properties, have you added `.displayName` in addition to `.name` (programmatic access) for each of the new properties? 
### For documentation related changes: - [ ] Have you ensured that format looks appropriate for the output in which it is rendered? ### Note: Please ensure that once the PR is submitted, you check GitHub Actions CI for build issues and submit an update to your PR as soon as possible. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Created] (NIFI-7789) Correction in additionalDetails of ScriptedTransformRecord documentation
Nadeem created NIFI-7789: Summary: Correction in additionalDetails of ScriptedTransformRecord documentation Key: NIFI-7789 URL: https://issues.apache.org/jira/browse/NIFI-7789 Project: Apache NiFi Issue Type: Improvement Components: Documentation Website Affects Versions: 1.12.0 Reporter: Nadeem Assignee: Nadeem Fix For: 1.13.0 Small Correction in ScriptedTransform Record documentation (additionalDetails section). -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (NIFI-7789) Correction in additionalDetails of ScriptedTransformRecord documentation
[ https://issues.apache.org/jira/browse/NIFI-7789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nadeem updated NIFI-7789: - Description: Small Correction in ScriptedTransformRecord documentation (additionalDetails section). (was: Small Correction in ScriptedTransform Record documentation (additionalDetails section).) > Correction in additionalDetails of ScriptedTransformRecord documentation > > > Key: NIFI-7789 > URL: https://issues.apache.org/jira/browse/NIFI-7789 > Project: Apache NiFi > Issue Type: Improvement > Components: Documentation Website >Affects Versions: 1.12.0 >Reporter: Nadeem >Assignee: Nadeem >Priority: Minor > Fix For: 1.13.0 > > > Small Correction in ScriptedTransformRecord documentation (additionalDetails > section). -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [nifi] tpalfy commented on a change in pull request #4420: NIFI-7429 Adding status history for system level metrics
tpalfy commented on a change in pull request #4420: URL: https://github.com/apache/nifi/pull/4420#discussion_r483607459 ## File path: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/status/history/VolatileComponentStatusRepository.java ## @@ -164,6 +183,182 @@ public StatusHistory getRemoteProcessGroupStatusHistory(final String remoteGroup return getStatusHistory(remoteGroupId, true, DEFAULT_RPG_METRICS, start, end, preferredDataPoints); } +@Override +public StatusHistory getNodeStatusHistory() { +final List nodeStatusList = nodeStatuses.asList(); +final List> gcStatusList = gcStatuses.asList(); +final LinkedList snapshots = new LinkedList<>(); + +final Set> metricDescriptors = new HashSet<>(DEFAULT_NODE_METRICS); +final List>> gcMetricDescriptors = new LinkedList<>(); +final List>> gcMetricDescriptorsDifferential = new LinkedList<>(); +final List> contentStorageStatusDescriptors = new LinkedList<>(); +final List> provenanceStorageStatusDescriptors = new LinkedList<>(); + +int ordinal = DEFAULT_NODE_METRICS.size() - 1; Review comment: Instead of calculating the `counter`, `final AtomicInteger index = new AtomicInteger(DEFAULT_NODE_METRICS.size());` could be used with `index.getAndIncrement()` in every `new StandardMetricDescriptor` This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
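The reviewer's suggestion above replaces hand-computed ordinals with an `AtomicInteger` counter that simply hands out the next index on each use. A simplified, self-contained sketch of the pattern (the `Descriptor` record here is a stand-in for illustration, not NiFi's `StandardMetricDescriptor`):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

public class OrdinalExample {
    // Minimal stand-in for a metric descriptor that records its ordinal position.
    record Descriptor(int ordinal, String field) {}

    public static List<Descriptor> buildDescriptors(List<String> fields, int offset) {
        // The counter starts right after the default metrics; each new descriptor
        // takes the next index, so no manual "size() - 1 + n * k" arithmetic is needed.
        final AtomicInteger index = new AtomicInteger(offset);
        final List<Descriptor> result = new ArrayList<>();
        for (String field : fields) {
            result.add(new Descriptor(index.getAndIncrement(), field));
        }
        return result;
    }
}
```

`getAndIncrement()` returns the current value and then bumps it, which is exactly the "take the next free ordinal" behavior the review asks for.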
[GitHub] [nifi] tpalfy commented on a change in pull request #4420: NIFI-7429 Adding status history for system level metrics
tpalfy commented on a change in pull request #4420: URL: https://github.com/apache/nifi/pull/4420#discussion_r483607459 ## File path: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/status/history/VolatileComponentStatusRepository.java ## @@ -164,6 +183,182 @@ public StatusHistory getRemoteProcessGroupStatusHistory(final String remoteGroup return getStatusHistory(remoteGroupId, true, DEFAULT_RPG_METRICS, start, end, preferredDataPoints); } +@Override +public StatusHistory getNodeStatusHistory() { +final List nodeStatusList = nodeStatuses.asList(); +final List> gcStatusList = gcStatuses.asList(); +final LinkedList snapshots = new LinkedList<>(); + +final Set> metricDescriptors = new HashSet<>(DEFAULT_NODE_METRICS); +final List>> gcMetricDescriptors = new LinkedList<>(); +final List>> gcMetricDescriptorsDifferential = new LinkedList<>(); +final List> contentStorageStatusDescriptors = new LinkedList<>(); +final List> provenanceStorageStatusDescriptors = new LinkedList<>(); + +int ordinal = DEFAULT_NODE_METRICS.size() - 1; Review comment: Instead of calculating the `counter`, `final AtomicInteger index = new AtomicInteger(DEFAULT_NODE_METRICS.size());` could be used with `index.incrementAndGet()` in every `new StandardMetricDescriptor` ## File path: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/status/history/VolatileComponentStatusRepository.java ## @@ -164,6 +183,182 @@ public StatusHistory getRemoteProcessGroupStatusHistory(final String remoteGroup return getStatusHistory(remoteGroupId, true, DEFAULT_RPG_METRICS, start, end, preferredDataPoints); } +@Override +public StatusHistory getNodeStatusHistory() { +final List nodeStatusList = nodeStatuses.asList(); +final List> gcStatusList = gcStatuses.asList(); +final LinkedList snapshots = new LinkedList<>(); + +final Set> metricDescriptors = new HashSet<>(DEFAULT_NODE_METRICS); +final 
List>> gcMetricDescriptors = new LinkedList<>(); +final List>> gcMetricDescriptorsDifferential = new LinkedList<>(); +final List> contentStorageStatusDescriptors = new LinkedList<>(); +final List> provenanceStorageStatusDescriptors = new LinkedList<>(); + +int ordinal = DEFAULT_NODE_METRICS.size() - 1; + +// Uses the first measurement as reference for repository metrics descriptors +if (nodeStatusList.size() > 0) { +final NodeStatus referenceNodeStatus = nodeStatusList.get(0); +int contentStorageNumber = 0; +int provenanceStorageNumber = 0; + +for (int i = 0; i < referenceNodeStatus.getContentRepositories().size(); i++) { +final int storageNumber = i; +final int counter = metricDescriptors.size() - 1 + NUMBER_OF_STORAGE_METRICS * contentStorageNumber; + +contentStorageStatusDescriptors.add(new StandardMetricDescriptor<>( Review comment: Could use ```suggestion metricDescriptors.add(new StandardMetricDescriptor( ``` With this approach we could get rid of all the intermediary lists. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (NIFI-7779) ExecuteScript/ScriptedTransformRecord throws NullPointerException if properties are null during a component search from canvas
[ https://issues.apache.org/jira/browse/NIFI-7779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nadeem updated NIFI-7779: - Description: I have a flow that has ExecuteScript and ScriptedTransformRecord processors with null-valued properties. When I search for any component from the canvas search bar, I immediately see ExecuteScript and ScriptedTransformRecord throwing an NPE error, as shown in the attachment. To replicate the issue, the following steps should be followed: # Drag and drop an ExecuteScript/ScriptedTransformRecord processor onto the canvas # Navigate to the components search bar (canvas) and search for any component. !image-2020-09-01-11-20-01-717.png! was: I have a flow that has ExecuteScript processor with null value properties. When I search for any component from canvas search bar, I immediately see ExecuteScript throwing NPE error as shown in the attachment . To replicate the issue, the following steps should be followed: # Drag and drop ExecuteScript processor onto the canvas # Navigate to components search bar (canvas) and search for any component. !image-2020-09-01-11-20-01-717.png! Summary: ExecuteScript/ScriptedTransformRecord throws NullPointerException if properties are null during a component search from canvas (was: ExecuteScript throws NullPointerException if properties are null during a component search from canvas) > ExecuteScript/ScriptedTransformRecord throws NullPointerException if > properties are null during a component search from canvas > -- > > Key: NIFI-7779 > URL: https://issues.apache.org/jira/browse/NIFI-7779 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 1.12.0, 1.11.4 >Reporter: Nadeem >Assignee: Nadeem >Priority: Major > Fix For: 1.13.0, 1.12.1 > > Attachments: image-2020-09-01-11-20-01-717.png > > Time Spent: 40m > Remaining Estimate: 0h > > I have a flow that has ExecuteScript and ScriptedTransformRecord processors > with null-valued properties. 
When I search for any component from canvas > search bar, I immediately see ExecuteScript and ScriptedTransformRecord > throwing NPE error as shown in the attachment . > To replicate the issue, the following steps should be followed: > # Drag and drop ExecuteScript/ScriptedTransformRecord processor onto the > canvas > # Navigate to components search bar (canvas) and search for any component. > !image-2020-09-01-11-20-01-717.png! -- This message was sent by Atlassian Jira (v8.3.4#803005)
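The usual fix for this class of bug is a null guard before the property value is matched against the search term. A hypothetical sketch of the idea (the class and method names are illustrative and do not correspond to NiFi's actual search code):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class PropertySearch {
    // Returns the names of properties whose value contains the search term,
    // skipping null values instead of dereferencing them (which caused the NPE).
    public static List<String> matchingProperties(Map<String, String> properties, String term) {
        List<String> matches = new ArrayList<>();
        for (Map.Entry<String, String> e : properties.entrySet()) {
            String value = e.getValue();
            if (value != null && value.contains(term)) {  // null guard prevents the NPE
                matches.add(e.getKey());
            }
        }
        return matches;
    }
}
```

With this guard in place, a processor whose optional properties were never set can be searched over without the search itself failing.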
[GitHub] [nifi] tpalfy commented on a change in pull request #4420: NIFI-7429 Adding status history for system level metrics
tpalfy commented on a change in pull request #4420: URL: https://github.com/apache/nifi/pull/4420#discussion_r483599157 ## File path: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/status/history/StatusHistoryUtil.java ## @@ -76,14 +76,14 @@ public static StatusDescriptorDTO createStatusDescriptorDto(final MetricDescript public static List<StatusDescriptorDTO> createFieldDescriptorDtos(final Collection<MetricDescriptor<?>> metricDescriptors) { final List<StatusDescriptorDTO> dtos = new ArrayList<>(); +final Map<Integer, MetricDescriptor<?>> orderedDescriptors = new HashMap<>(); -final Set<MetricDescriptor<?>> allDescriptors = new LinkedHashSet<>(); for (final MetricDescriptor<?> metricDescriptor : metricDescriptors) { Review comment: I think I didn't explain myself clearly. I understand the reasoning and constraints, but those don't invalidate my statements. My code snippet was not 100% correct but the concept was. Here's a correct version: ```java final StatusDescriptorDTO[] dtos = new StatusDescriptorDTO[metricDescriptors.size()]; for (final MetricDescriptor<?> metricDescriptor : metricDescriptors) { dtos[metricDescriptor.getMetricIdentifier()] = createStatusDescriptorDto(metricDescriptor); } return Arrays.asList(dtos); ``` Here's a unit test (which - or something similar - would be useful to add): ```java public class StatusHistoryUtilTest { @Test public void testCreateFieldDescriptorDtos() throws Exception { // GIVEN Collection<MetricDescriptor<?>> metricDescriptors = Arrays.asList( new StandardMetricDescriptor<>( () -> 1, "field2", "Field2", "Field 2", MetricDescriptor.Formatter.COUNT, __ -> 2L ), new StandardMetricDescriptor<>( () -> 0, "field1", "Field1", "Field 1", MetricDescriptor.Formatter.COUNT, __ -> 1L ) ); List<StatusDescriptorDTO> expected = Arrays.asList( new StatusDescriptorDTO("field1", "Field1", "Field 1", MetricDescriptor.Formatter.COUNT.name()), new StatusDescriptorDTO("field2", "Field2", "Field 2", MetricDescriptor.Formatter.COUNT.name()) ); // WHEN List<StatusDescriptorDTO> actual = StatusHistoryUtil.createFieldDescriptorDtos(metricDescriptors); // THEN 
assertEquals(expected, actual); } } ``` There are two metric indexes: 1, 0 (coming from `() -> 1` and `() -> 0`, respectively). - If you change `() -> 1` to `() -> 2`, the method throws an exception (yours a NullPointerException, mine an ArrayIndexOutOfBoundsException) - Because index 1 is missing. That's what I meant by "metricDescriptors cannot have a gap in their getMetricIdentifier() values" - All we do is make sure the order of the output is based on the metric index. That can be done with the simple for iteration I presented, or simply sorting it, mapping them into a dto and collecting them into a list (which _would_ allow a gap in the getMetricIdentifier() values as well) like this: ```java public static List<StatusDescriptorDTO> createFieldDescriptorDtos(final Collection<MetricDescriptor<?>> metricDescriptors) { return metricDescriptors.stream() .sorted(Comparator.comparingInt(MetricDescriptor::getMetricIdentifier)) .map(StatusHistoryUtil::createStatusDescriptorDto) .collect(Collectors.toList()); } ``` This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [nifi] tpalfy commented on a change in pull request #4420: NIFI-7429 Adding status history for system level metrics
tpalfy commented on a change in pull request #4420: URL: https://github.com/apache/nifi/pull/4420#discussion_r483599157 ## File path: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/status/history/StatusHistoryUtil.java ## @@ -76,14 +76,14 @@ public static StatusDescriptorDTO createStatusDescriptorDto(final MetricDescript public static List<StatusDescriptorDTO> createFieldDescriptorDtos(final Collection<MetricDescriptor<?>> metricDescriptors) { final List<StatusDescriptorDTO> dtos = new ArrayList<>(); +final Map<Integer, MetricDescriptor<?>> orderedDescriptors = new HashMap<>(); -final Set<MetricDescriptor<?>> allDescriptors = new LinkedHashSet<>(); for (final MetricDescriptor<?> metricDescriptor : metricDescriptors) { Review comment: I think I didn't explain myself clearly. I understand the reasoning and constraints, but those don't invalidate my statements. My code snippet was not 100% correct but the concept was. Here's a correct version: ```java final StatusDescriptorDTO[] dtos = new StatusDescriptorDTO[metricDescriptors.size()]; for (final MetricDescriptor<?> metricDescriptor : metricDescriptors) { dtos[metricDescriptor.getMetricIdentifier()] = createStatusDescriptorDto(metricDescriptor); } return Arrays.asList(dtos); ``` Here's a unit test (which - or something similar - would be useful to add): ```java public class StatusHistoryUtilTest { @Test public void name() throws Exception { // GIVEN Collection<MetricDescriptor<?>> metricDescriptors = Arrays.asList( new StandardMetricDescriptor<>( () -> 1, "field2", "Field2", "Field 2", MetricDescriptor.Formatter.COUNT, __ -> 2L ), new StandardMetricDescriptor<>( () -> 0, "field1", "Field1", "Field 1", MetricDescriptor.Formatter.COUNT, __ -> 1L ) ); List<StatusDescriptorDTO> expected = Arrays.asList( new StatusDescriptorDTO("field1", "Field1", "Field 1", MetricDescriptor.Formatter.COUNT.name()), new StatusDescriptorDTO("field2", "Field2", "Field 2", MetricDescriptor.Formatter.COUNT.name()) ); // WHEN List<StatusDescriptorDTO> actual = StatusHistoryUtil.createFieldDescriptorDtos(metricDescriptors); // THEN assertEquals(expected, 
actual); } } ``` There are two metric indexes: 1, 0 (coming from `() -> 1` and `() -> 0`, respectively). - If you change `() -> 1` to `() -> 2`, the method throws an exception (yours a NullPointerException, mine an ArrayIndexOutOfBoundsException) - Because index 1 is missing. That's what I meant by "metricDescriptors cannot have a gap in their getMetricIdentifier() values" - All we do is make sure the order of the output is based on the metric index. That can be done with the simple for iteration I presented, or simply sorting it, mapping them into a dto and collecting them into a list (which _would_ allow a gap in the getMetricIdentifier() values as well) like this: ```java public static List<StatusDescriptorDTO> createFieldDescriptorDtos(final Collection<MetricDescriptor<?>> metricDescriptors) { return metricDescriptors.stream() .sorted((a, b) -> a.getMetricIdentifier() < b.getMetricIdentifier() ? -1 : a.getMetricIdentifier() > b.getMetricIdentifier() ? 1 : 0) .map(StatusHistoryUtil::createStatusDescriptorDto) .collect(Collectors.toList()); } ``` This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
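The hand-written ternary comparator in the last snippet of the message above is equivalent to `Comparator.comparingInt`. A small self-contained sketch with a stand-in record (not NiFi's real `MetricDescriptor`) shows the sort-based ordering, including the property discussed in the review that it tolerates gaps in the identifier values:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class ComparatorExample {
    // Stand-in for a descriptor carrying its metric identifier.
    record Descriptor(int metricIdentifier, String field) {
        int getMetricIdentifier() { return metricIdentifier; }
    }

    public static List<String> fieldsInMetricOrder(List<Descriptor> descriptors) {
        List<Descriptor> sorted = new ArrayList<>(descriptors);
        // comparingInt is the concise equivalent of the manual
        // "a < b ? -1 : a > b ? 1 : 0" ternary comparator.
        sorted.sort(Comparator.comparingInt(Descriptor::getMetricIdentifier));
        List<String> fields = new ArrayList<>();
        for (Descriptor d : sorted) {
            fields.add(d.field());
        }
        return fields;
    }
}
```

Because sorting only compares identifiers pairwise and never uses them as array indexes, identifiers 0 and 2 with a missing 1 still produce a well-ordered result instead of an exception.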
[jira] [Created] (MINIFICPP-1354) Fix minor errors found by the address sanitizer
Ferenc Gerlits created MINIFICPP-1354: - Summary: Fix minor errors found by the address sanitizer Key: MINIFICPP-1354 URL: https://issues.apache.org/jira/browse/MINIFICPP-1354 Project: Apache NiFi MiNiFi C++ Issue Type: Bug Reporter: Ferenc Gerlits Assignee: Ferenc Gerlits Fix For: 0.9.0 If we run gcc's (version 8.4.0) address sanitizer on the unit tests, it finds one "mismatched malloc vs operator delete" error in nanofi (triggered from 4 unit tests), and 49 memory leaks, which all seem to be in the unit tests. I don't think any of these errors affect production minifi, but still we should fix them. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [nifi-minifi-cpp] fgerlits commented on a change in pull request #896: MINIFICPP-1353 Fix a heap-use-after-free error
fgerlits commented on a change in pull request #896: URL: https://github.com/apache/nifi-minifi-cpp/pull/896#discussion_r483587276 ## File path: extensions/http-curl/processors/InvokeHTTP.cpp ## @@ -283,6 +283,10 @@ void InvokeHTTP::onTrigger(const std::shared_ptr , // create a transaction id std::string tx_id = generateId(); + std::unique_ptr callback = nullptr; + std::unique_ptr callbackObj = nullptr; + + // Client declared after the callbacks to make sure the callbacks are still available when the client is destructed Review comment: Good point, I have added a comment. This is a good example of a case where a comment is necessary, but it is also a code smell. :) This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [nifi-minifi-cpp] hunyadi-dev commented on a change in pull request #896: MINIFICPP-1353 Fix a heap-use-after-free error
hunyadi-dev commented on a change in pull request #896: URL: https://github.com/apache/nifi-minifi-cpp/pull/896#discussion_r483574962 ## File path: extensions/http-curl/processors/InvokeHTTP.cpp ## @@ -283,6 +283,10 @@ void InvokeHTTP::onTrigger(const std::shared_ptr , // create a transaction id std::string tx_id = generateId(); + std::unique_ptr callback = nullptr; + std::unique_ptr callbackObj = nullptr; + + // Client declared after the callbacks to make sure the callbacks are still available when the client is destructed Review comment: Might be worth mentioning that the destruction of `callback` and `callbackObj` also matters. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [nifi] simonbence commented on a change in pull request #4420: NIFI-7429 Adding status history for system level metrics
simonbence commented on a change in pull request #4420: URL: https://github.com/apache/nifi/pull/4420#discussion_r483548523 ## File path: nifi-api/src/main/java/org/apache/nifi/controller/status/NodeStatus.java ## @@ -0,0 +1,229 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.nifi.controller.status; + +import java.util.ArrayList; +import java.util.List; + +/** + * The status of a NiFi node. + */ +public class NodeStatus implements Cloneable { Review comment: As for SystemDiagnostics, NodeStatus contains only a part of it and also contains information from other sources. As for StorageStatus, I am checking whether we can spare that. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [nifi] simonbence commented on a change in pull request #4420: NIFI-7429 Adding status history for system level metrics
simonbence commented on a change in pull request #4420: URL: https://github.com/apache/nifi/pull/4420#discussion_r483547447 ## File path: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/status/history/StatusHistoryUtil.java ## @@ -76,14 +76,14 @@ public static StatusDescriptorDTO createStatusDescriptorDto(final MetricDescript public static List<StatusDescriptorDTO> createFieldDescriptorDtos(final Collection<MetricDescriptor<?>> metricDescriptors) { final List<StatusDescriptorDTO> dtos = new ArrayList<>(); +final Map<Integer, MetricDescriptor<?>> orderedDescriptors = new HashMap<>(); -final Set<MetricDescriptor<?>> allDescriptors = new LinkedHashSet<>(); for (final MetricDescriptor<?> metricDescriptor : metricDescriptors) { Review comment: The order of the items matters, as it determines the order of the metrics in the answer JSON, and thus the order of the items in the UI. The two-step mechanism was introduced to provide the ascending order specified by the identifier of the metric. As the resulting StatusDescriptorDTO does not contain this information, and only its position carries the expected ordering, I had to refer back to the original MetricDescriptor. The incoming collection is not guaranteed to be ordered. Another possible way would be to sort, as you mention, not the end result but the input argument, or more precisely a copy of it. Copying and then sorting that collection does not look much nicer from my perspective, so if you do not insist, I would keep it this way. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
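The two-step mechanism described in the comment, first indexing descriptors by metric identifier and then emitting them in ascending identifier order, can be sketched with stand-in types (neither type below is NiFi's real `MetricDescriptor` or `StatusDescriptorDTO`). Note that, as discussed elsewhere in this review thread, this approach assumes the identifiers are contiguous starting from 0:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class TwoStepOrdering {
    // Stand-in for a metric descriptor: the id is its position in the output.
    record Metric(int id, String name) {}

    public static List<String> namesInIdOrder(List<Metric> metrics) {
        // Step 1: index by metric identifier, since the output type carries no index.
        Map<Integer, Metric> byId = new HashMap<>();
        for (Metric m : metrics) {
            byId.put(m.id(), m);
        }
        // Step 2: emit in ascending identifier order.
        // Assumes ids are contiguous 0..n-1; a gap would make byId.get(i) null here.
        List<String> names = new ArrayList<>();
        for (int i = 0; i < metrics.size(); i++) {
            names.add(byId.get(i).name());
        }
        return names;
    }
}
```

The output list's positions then encode the ordering, which is exactly why the intermediate map is needed when the input collection's iteration order is not guaranteed.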
[GitHub] [nifi-minifi-cpp] fgerlits opened a new pull request #896: MINIFICPP-1353 Fix a heap-use-after-free error
fgerlits opened a new pull request #896: URL: https://github.com/apache/nifi-minifi-cpp/pull/896 https://issues.apache.org/jira/browse/MINIFICPP-1353 Declare the variables in the right order in `InvokeHTTP::onTrigger()` to make sure that the destructors run in the right order. In the long term, the client should probably own the callbacks (and callbacks should own their sub-callbacks), but that would be a bigger change, and is less important. --- Thank you for submitting a contribution to Apache NiFi - MiNiFi C++. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken: ### For all changes: - [x] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? - [x] Does your PR title start with MINIFICPP- where is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character. - [x] Has your PR been rebased against the latest commit within the target branch (typically main)? - [x] Is your initial contribution a single, squashed commit? ### For code changes: - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the LICENSE file? - [ ] If applicable, have you updated the NOTICE file? ### For documentation related changes: - [ ] Have you ensured that format looks appropriate for the output in which it is rendered? ### Note: Please ensure that once the PR is submitted, you check travis-ci for build issues and submit an update to your PR as soon as possible. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Created] (MINIFICPP-1353) Fix heap-use-after-free errors
Ferenc Gerlits created MINIFICPP-1353: - Summary: Fix heap-use-after-free errors Key: MINIFICPP-1353 URL: https://issues.apache.org/jira/browse/MINIFICPP-1353 Project: Apache NiFi MiNiFi C++ Issue Type: Bug Reporter: Ferenc Gerlits Assignee: Ferenc Gerlits Fix For: 0.9.0 Address sanitizer finds one heap-use-after-free error when run on the unit tests: {noformat} ==26761==ERROR: AddressSanitizer: heap-use-after-free on address 0x6062c4a8 at pc 0x55d957b02e44 bp 0x7f6e736875d0 sp 0x7f6e736875c0 WRITE of size 1 at 0x6062c4a8 thread T56 #0 0x55d957b02e43 in std::__atomic_base::store(bool, std::memory_order) /usr/include/c++/8/bits/atomic_base.h:374 #1 0x55d957b02e43 in std::__atomic_base::operator=(bool) /usr/include/c++/8/bits/atomic_base.h:267 #2 0x55d957acb3c8 in std::atomic::operator=(bool) /usr/include/c++/8/atomic:79 #3 0x55d9581a02b9 in org::apache::nifi::minifi::utils::HTTPClient::forceClose() /home/fgerlits/src/minifi2/extensions/http-curl/client/HTTPClient.cpp:75 #4 0x55d9581a00f1 in org::apache::nifi::minifi::utils::HTTPClient::~HTTPClient() /home/fgerlits/src/minifi2/extensions/http-curl/client/HTTPClient.cpp:64 #5 0x55d9581c9f00 in org::apache::nifi::minifi::processors::InvokeHTTP::onTrigger(std::shared_ptr const&, std::shared_ptr const&) /home/fgerlits/src/minifi2/extensions/http-curl/processors/InvokeHTTP.cpp:286 [...] 
0x6062c4a8 is located 40 bytes inside of 64-byte region [0x6062c480,0x6062c4c0) freed by thread T56 here: #0 0x7f6e795c8a50 in operator delete(void*) (/usr/lib/x86_64-linux-gnu/libasan.so.5+0xf0a50) #1 0x55d9581970e3 in std::default_delete::operator()(org::apache::nifi::minifi::utils::HTTPUploadCallback*) const /usr/include/c++/8/bits/unique_ptr.h:81 #2 0x55d958195e2a in std::unique_ptr >::~unique_ptr() /usr/include/c++/8/bits/unique_ptr.h:277 #3 0x55d9581c9ee2 in org::apache::nifi::minifi::processors::InvokeHTTP::onTrigger(std::shared_ptr const&, std::shared_ptr const&) /home/fgerlits/src/minifi2/extensions/http-curl/processors/InvokeHTTP.cpp:306 [...] previously allocated by thread T56 here: #0 0x7f6e795c7ba0 in operator new(unsigned long) (/usr/lib/x86_64-linux-gnu/libasan.so.5+0xefba0) #1 0x55d9581c86f7 in org::apache::nifi::minifi::processors::InvokeHTTP::onTrigger(std::shared_ptr const&, std::shared_ptr const&) /home/fgerlits/src/minifi2/extensions/http-curl/processors/InvokeHTTP.cpp:313 [...] {noformat} Fix this bug. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [nifi] ravitejatvs commented on pull request #2231: NIFI-4521 MS SQL CDC Processor
ravitejatvs commented on pull request #2231: URL: https://github.com/apache/nifi/pull/2231#issuecomment-687021666 I've been getting this error ever since I started using the processor after your changes in the build. Can you please check? The details given are all correct at my end. I tried with 2 source systems. I am getting a response when I run a query using the ExecuteSQL processor. Screenshot: https://user-images.githubusercontent.com/62201885/92215966-3b51e180-eeb3-11ea-90a9-cabd56d250c5.png This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (MINIFICPP-1298) Improve test coverage of TimeUtil.h
[ https://issues.apache.org/jira/browse/MINIFICPP-1298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ferenc Gerlits updated MINIFICPP-1298: -- Summary: Improve test coverage of TimeUtil.h (was: Improve test coverage of TimeUtil.h and move it to a proper namespace) > Improve test coverage of TimeUtil.h > --- > > Key: MINIFICPP-1298 > URL: https://issues.apache.org/jira/browse/MINIFICPP-1298 > Project: Apache NiFi MiNiFi C++ > Issue Type: Improvement >Affects Versions: 0.7.0 >Reporter: Arpad Boda >Priority: Minor > Labels: MiNiFi-CPP-Hygiene > > See summary -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (MINIFICPP-1298) Improve test coverage of TimeUtil.h and move it to a proper namespace
[ https://issues.apache.org/jira/browse/MINIFICPP-1298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17190625#comment-17190625 ] Ferenc Gerlits commented on MINIFICPP-1298: --- The namespace part of this was done by @szaszm in [https://github.com/apache/nifi-minifi-cpp/pull/873], so I have changed the title from "Improve test coverage of TimeUtil.h and move it to a proper namespace" to "Improve test coverage of TimeUtil.h". > Improve test coverage of TimeUtil.h and move it to a proper namespace > - > > Key: MINIFICPP-1298 > URL: https://issues.apache.org/jira/browse/MINIFICPP-1298 > Project: Apache NiFi MiNiFi C++ > Issue Type: Improvement >Affects Versions: 0.7.0 >Reporter: Arpad Boda >Priority: Minor > Labels: MiNiFi-CPP-Hygiene > > See summary -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [nifi-minifi-cpp] hunyadi-dev opened a new pull request #895: MINIFICPP-1352 - Comment out unused parameters (for enabling -Wall)
hunyadi-dev opened a new pull request #895: URL: https://github.com/apache/nifi-minifi-cpp/pull/895 Script used: cat build_logs.log | grep -v "thirdparty" | sort | uniq | egrep "\[-Wunused-parameter*\]" | tr ":" " " | cut -d" " -f1,2,9 | tr "'" " " | tr -s " " | xargs -n3 python -c 'import os,sys; print("%s %s %s" % (os.path.abspath(sys.argv[1]), sys.argv[2], sys.argv[3]))' | sort | uniq | xargs -n 3 sh -c 'perl -pi -e "$. == $2 && s;( *([*&])? *)$3( *= *?[\w:]*(\(\))?)* *(?=[\),]);\2 /*$3*/\3;" $1' sh Manual edits (due to C compatibility enforcing naming arguments) in: nanofi/include/sitetosite/CPeer.h nanofi/include/sitetosite/CRawSocketProtocol.h nanofi/src/sitetosite/CRawSocketProtocol.c This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Resolved] (MINIFICPP-1341) no matching conversion for static_cast from 'const org::apache::nifi::minifi::core::PropertyValue' to 'std::__1::chrono::duration
[ https://issues.apache.org/jira/browse/MINIFICPP-1341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ferenc Gerlits resolved MINIFICPP-1341. --- Resolution: Fixed > no matching conversion for static_cast from 'const > org::apache::nifi::minifi::core::PropertyValue' to > 'std::__1::chrono::duration >' > > > Key: MINIFICPP-1341 > URL: https://issues.apache.org/jira/browse/MINIFICPP-1341 > Project: Apache NiFi MiNiFi C++ > Issue Type: Bug >Affects Versions: 0.8.0 > Environment: $ cmake --version > cmake version 3.16.3 > $ clang --version > clang version 10.0.0-4ubuntu1 > Target: x86_64-pc-linux-gnu > Thread model: posix >Reporter: Ivan Serdyuk >Assignee: Ferenc Gerlits >Priority: Minor > Labels: clang > Fix For: 0.9.0 > > Attachments: TailFile_build_error.log > > Time Spent: 1h 10m > Remaining Estimate: 0h > > I was compiling MiNiFi using Clang 10.0.0-4ubuntu1 release. > I did like this: > $ cmake -DENABLE_COAP=ON -DASAN_BUILD=ON -DSKIP_TESTS=ON -DUSE_SHARED_LIBS=ON > -DPORTABLE=ON -DBUILD_ROCKSDB=ON -DBUILD_IDENTIFIER= > -DCMAKE_BUILD_TYPE=MinSizeRel -DFAIL_ON_WARNINGS= -DCMAKE_C_COMPILER=clang > -DCMAKE_CXX_COMPILER=clang++ -DCMAKE_CXX_FLAGS="-stdlib=libc++" .. 
> And (eventually) got this: > [ 47%] Building CXX object > extensions/standard-processors/CMakeFiles/minifi-standard-processors.dir/processors/TailFile.cpp.o > In file included from > /home/ubuntu/minifi_cpp/extensions/standard-processors/processors/TailFile.cpp:40: > In file included from > /home/ubuntu/minifi_cpp/extensions/standard-processors/processors/TailFile.h:30: > In file included from > /home/ubuntu/minifi_cpp/extensions/standard-processors/../../libminifi/include/core/Processor.h:39: > /home/ubuntu/minifi_cpp/extensions/standard-processors/../../libminifi/include/core/ConfigurableComponent.h:230:13: > error: no matching conversion for static_cast from 'const > org::apache::nifi::minifi::core::PropertyValue' to > 'std::__1::chrono::duration >' > value = static_cast(item.getValue()); > ^~~ > /home/ubuntu/minifi_cpp/extensions/standard-processors/../../libminifi/include/core/ProcessorNode.h:71:30: > note: in instantiation of function template specialization > 'org::apache::nifi::minifi::core::ConfigurableComponent::getProperty long, std::__1::ratio<1, 1000> > >' requested here > return processor_cast->getProperty(name, value); > ^ > /home/ubuntu/minifi_cpp/extensions/standard-processors/../../libminifi/include/core/ProcessContext.h:329:29: > note: in instantiation of function template specialization > 'org::apache::nifi::minifi::core::ProcessorNode::getProperty long, std::__1::ratio<1, 1000> > >' requested here > return processor_node_->getProperty std::common_type::type>(name, value); > ^ > /home/ubuntu/minifi_cpp/extensions/standard-processors/../../libminifi/include/core/ProcessContext.h:102:12: > note: in instantiation of function template specialization > 'org::apache::nifi::minifi::core::ProcessContext::getPropertyImp long, std::__1::ratio<1, 1000> > >' requested here > return getPropertyImp::type>(name, value); > ^ > /home/ubuntu/minifi_cpp/extensions/standard-processors/processors/TailFile.cpp:367:14: > note: in instantiation of function template 
specialization > 'org::apache::nifi::minifi::core::ProcessContext::getProperty long, std::__1::ratio<1, 1000> > >' requested here > context->getProperty(LookupFrequency.getName(), lookup_frequency_); > ^ > /usr/lib/llvm-10/bin/../include/c++/v1/chrono:1021:28: note: candidate > constructor (the implicit copy constructor) not viable: no known conversion > from 'const org::apache::nifi::minifi::core::PropertyValue' to 'const > std::__1::chrono::duration >' for 1st > argument > class _LIBCPP_TEMPLATE_VIS duration > ^ > /usr/lib/llvm-10/bin/../include/c++/v1/chrono:1021:28: note: candidate > constructor (the implicit move constructor) not viable: no known conversion > from 'const org::apache::nifi::minifi::core::PropertyValue' to > 'std::__1::chrono::duration >' for 1st > argument > /usr/lib/llvm-10/bin/../include/c++/v1/chrono:1073:18: note: candidate > template ignored: requirement > 'is_convertible long>::value' was not satisfied [with _Rep2 = > org::apache::nifi::minifi::core::PropertyValue] > explicit duration(const _Rep2& __r, > ^ > /usr/lib/llvm-10/bin/../include/c++/v1/chrono:1085:9: note: candidate > template ignored: could not match 'duration type-parameter-0-1>' against 'const > org::apache::nifi::minifi::core::PropertyValue' > duration(const duration<_Rep2, _Period2>& __d, > ^ >
[jira] [Created] (MINIFICPP-1352) Enable -Wall and -Wextra behind a CMake flag and resolve related warnings
Adam Hunyadi created MINIFICPP-1352: --- Summary: Enable -Wall and -Wextra behind a CMake flag and resolve related warnings Key: MINIFICPP-1352 URL: https://issues.apache.org/jira/browse/MINIFICPP-1352 Project: Apache NiFi MiNiFi C++ Issue Type: Task Affects Versions: 0.7.0 Reporter: Adam Hunyadi Assignee: Adam Hunyadi Fix For: 1.0.0 *Background:* The compiler flags -Wall and -Wextra can potentially reveal important issues in our current codebase. *Proposal:* We do not want to depend on exactly which checks a given compiler's implementation of -Wall and -Wextra includes when building the project with a new compiler. We should: # Fix the warnings reported by the current major compilers. # Allow the ones currently listed, one by one. # Add a CMake flag to turn -Wall and -Wextra on. # Have at least one CI job, using clang, that builds with -Wall and -Wextra enabled. -- This message was sent by Atlassian Jira (v8.3.4#803005)
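A possible shape for steps 3 and 4 of the proposal is sketched below; the option name and structure are assumptions for illustration, not the eventual implementation:

```cmake
# Hypothetical sketch: gate -Wall/-Wextra behind an off-by-default CMake flag.
option(ENABLE_EXTRA_WARNINGS "Build with -Wall and -Wextra" OFF)

if(ENABLE_EXTRA_WARNINGS)
  if(MSVC)
    # MSVC has no -Wall/-Wextra; /W4 is the closest equivalent.
    add_compile_options(/W4)
  else()
    add_compile_options(-Wall -Wextra)
  endif()
endif()
```

A CI job could then simply configure with `-DENABLE_EXTRA_WARNINGS=ON` (optionally combined with `-Werror` or the project's existing FAIL_ON_WARNINGS mechanism) to keep the build warning-clean.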
[GitHub] [nifi] simonbence commented on a change in pull request #4481: NIFI-7624: ListenFTP processor
simonbence commented on a change in pull request #4481: URL: https://github.com/apache/nifi/pull/4481#discussion_r483437039 ## File path: nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/ListenFTP.java ## @@ -0,0 +1,215 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.apache.nifi.processors.standard; + +import org.apache.nifi.annotation.behavior.InputRequirement; +import org.apache.nifi.annotation.behavior.InputRequirement.Requirement; +import org.apache.nifi.annotation.documentation.CapabilityDescription; +import org.apache.nifi.annotation.documentation.Tags; +import org.apache.nifi.annotation.lifecycle.OnScheduled; +import org.apache.nifi.annotation.lifecycle.OnStopped; +import org.apache.nifi.components.PropertyDescriptor; +import org.apache.nifi.components.ValidationContext; +import org.apache.nifi.components.ValidationResult; +import org.apache.nifi.expression.ExpressionLanguageScope; +import org.apache.nifi.processor.AbstractSessionFactoryProcessor; +import org.apache.nifi.processor.ProcessContext; +import org.apache.nifi.processor.ProcessSessionFactory; +import org.apache.nifi.processor.Relationship; +import org.apache.nifi.processor.exception.ProcessException; +import org.apache.nifi.processor.util.StandardValidators; +import org.apache.nifi.processors.standard.ftp.NifiFtpServer; + +import java.util.ArrayList; +import java.util.Arrays; +import java.util.Collection; +import java.util.Collections; +import java.util.HashSet; +import java.util.List; +import java.util.Set; +import java.util.concurrent.CountDownLatch; +import java.util.concurrent.atomic.AtomicReference; + +@InputRequirement(Requirement.INPUT_FORBIDDEN) +@Tags({"ingest", "ftp", "listen"}) +@CapabilityDescription("Starts an FTP Server and listens on a given port to transform incoming files into FlowFiles. " ++ "The URI of the Service will be ftp://{hostname}:{port}. 
The default port is 2221.") +public class ListenFTP extends AbstractSessionFactoryProcessor { + +public static final Relationship RELATIONSHIP_SUCCESS = new Relationship.Builder() +.name("success") +.description("Relationship for successfully received files") +.build(); + +public static final PropertyDescriptor BIND_ADDRESS = new PropertyDescriptor.Builder() +.name("bind-address") +.displayName("Bind Address") +.description("The address the FTP server should be bound to. If not provided, the server binds to all available addresses.") +.required(false) +.addValidator(StandardValidators.NON_BLANK_VALIDATOR) + .expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY) +.build(); + +public static final PropertyDescriptor PORT = new PropertyDescriptor.Builder() +.name("listening-port") +.displayName("Listening Port") +.description("The Port to listen on for incoming connections. On Linux, root privileges are required to use port numbers below 1024.") +.required(true) +.defaultValue("2221") + .expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY) +.addValidator(StandardValidators.PORT_VALIDATOR) +.build(); + +public static final PropertyDescriptor USERNAME = new PropertyDescriptor.Builder() +.name("username") +.displayName("Username") +.description("The name of the user that is allowed to log in to the FTP server. " + +"If a username is provided, a password must also be provided. " + +"If no username is specified, anonymous connections will be permitted.") +.required(false) + .expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY) +.addValidator(StandardValidators.NON_BLANK_VALIDATOR) +.build(); + +public static final PropertyDescriptor PASSWORD = new PropertyDescriptor.Builder() +.name("password") +.displayName("Password") +.description("If a Username is specified, then a