[jira] [Created] (NIFI-12641) Add local file upload option in PutS3 processor
Balázs Gerner created NIFI-12641: Summary: Add local file upload option in PutS3 processor Key: NIFI-12641 URL: https://issues.apache.org/jira/browse/NIFI-12641 Project: Apache NiFi Issue Type: Improvement Reporter: Balázs Gerner Assignee: Peter Turcsanyi Fix For: 2.0.0-M1, 1.23.0 There are cases when the files to be uploaded to Azure Storage are available on the local filesystem where NiFi is running. That is, the flow could read and upload the files directly from the filesystem without adding them to NiFi's content repo, which is an overhead in this case (can be relevant for huge files). Add a "Data to Upload" property with options "FlowFile's Content" (default, current behaviour) and "Local File". Using the latter, the user can bypass the content repo and upload the file from the local filesystem to Azure Storage directly. -- This message was sent by Atlassian Jira (v8.20.10#820010)
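The proposed property amounts to a dispatch between two data sources. A minimal stdlib-only Python sketch of the idea follows; the function name and chunk size are illustrative assumptions, not NiFi's actual API:

```python
from typing import Iterator, Optional

CHUNK_SIZE = 8192  # illustrative buffer size, not a NiFi default

def iter_upload_chunks(data_to_upload: str, flowfile_content: bytes,
                       local_path: Optional[str]) -> Iterator[bytes]:
    """Yield the bytes to upload: either the FlowFile's content (the current
    behaviour) or chunks streamed straight from the local filesystem,
    bypassing the content repository entirely."""
    if data_to_upload == "FlowFile's Content":
        yield flowfile_content
    elif data_to_upload == "Local File":
        with open(local_path, "rb") as source:
            while chunk := source.read(CHUNK_SIZE):
                yield chunk
    else:
        raise ValueError(f"Unknown Data to Upload option: {data_to_upload}")
```

Streaming in fixed-size chunks is what makes the "Local File" path attractive for huge files: the content never has to be materialized in the content repository first.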
Re: [PR] MINIFICPP-2288 Remove the caching of loggers [nifi-minifi-cpp]
szaszm commented on code in PR #1718: URL: https://github.com/apache/nifi-minifi-cpp/pull/1718#discussion_r1458513484 ## libminifi/src/core/logging/LoggerConfiguration.cpp: ## @@ -267,52 +257,56 @@ std::shared_ptr<internal::LoggerNamespace> LoggerConfiguration::initialize_names return root_namespace; } -std::shared_ptr<Logger> LoggerConfiguration::get_logger(const std::shared_ptr<Logger>& logger, const std::shared_ptr<internal::LoggerNamespace>& root_namespace, std::string_view name_view, -const std::shared_ptr<spdlog::formatter>& formatter, bool remove_if_present) { - std::string name{name_view}; - std::shared_ptr<spdlog::logger> spdlogger = spdlog::get(name); - if (spdlogger) { -if (remove_if_present) { - spdlog::drop(name); -} else { - return spdlogger; -} +std::shared_ptr<Logger> LoggerConfiguration::get_logger(const std::shared_ptr<internal::LoggerNamespace>& root_namespace, const std::string_view name_view, +const std::shared_ptr<spdlog::formatter>& formatter) { + const std::string name{name_view}; + if (auto spdlogger = spdlog::get(name)) { +return spdlogger; } + return create_logger(root_namespace, name, formatter); +} + +void LoggerConfiguration::setupSpdLogger(const std::shared_ptr<spdlog::logger>& spd_logger, +const std::shared_ptr<internal::LoggerNamespace>& root_namespace, +const std::string& name, +const std::shared_ptr<spdlog::formatter>& formatter) { + if (!spd_logger) +return; std::shared_ptr<internal::LoggerNamespace> current_namespace = root_namespace; std::vector<std::shared_ptr<spdlog::sinks::sink>> sinks = root_namespace->sinks; std::vector<std::shared_ptr<spdlog::sinks::sink>> inherited_sinks; spdlog::level::level_enum level = root_namespace->level; std::string current_namespace_str; - std::string sink_namespace_str = "root"; - std::string level_namespace_str = "root"; for (auto const & name_segment : utils::string::split(name, "::")) { current_namespace_str += name_segment; auto child_pair = current_namespace->children.find(name_segment); if (child_pair == current_namespace->children.end()) { break; } -std::copy(current_namespace->exported_sinks.begin(), current_namespace->exported_sinks.end(), std::back_inserter(inherited_sinks)); +ranges::copy(current_namespace->exported_sinks, std::back_inserter(inherited_sinks)); + current_namespace = child_pair->second; if (!current_namespace->sinks.empty()) { sinks = current_namespace->sinks; - sink_namespace_str = current_namespace_str; } if (current_namespace->has_level) { level = current_namespace->level; - level_namespace_str = current_namespace_str; } current_namespace_str += "::"; } - if (logger != nullptr) { -logger->log_debug("{} logger got sinks from namespace {} and level {} from namespace {}", name, sink_namespace_str, spdlog::level::to_string_view(level), level_namespace_str); - } - std::copy(inherited_sinks.begin(), inherited_sinks.end(), std::back_inserter(sinks)); - spdlogger = std::make_shared<spdlog::logger>(name, begin(sinks), end(sinks)); - spdlogger->set_level(level); - spdlogger->set_formatter(formatter->clone()); - spdlogger->flush_on(std::max(spdlog::level::info, current_namespace->level)); + ranges::copy(inherited_sinks, std::back_inserter(sinks)); + spd_logger->sinks() = sinks; + spd_logger->set_level(level); + spd_logger->set_formatter(formatter->clone()); + spd_logger->flush_on(std::max(spdlog::level::info, current_namespace->level)); Review Comment: Leaving this unsynchronized and accepting occasional partially applied config changes is also an option. I don't think this is a thread safety issue, I'd expect spdlog to be internally synchronized. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
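The inheritance walk in the diff above — deepest configured ancestor wins for sinks and level, while exported sinks of every traversed ancestor accumulate unconditionally — can be restated as a small Python sketch. The data model here is illustrative, not the MiNiFi classes:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class LoggerNamespace:
    sinks: list = field(default_factory=list)
    exported_sinks: list = field(default_factory=list)
    level: Optional[str] = None  # None means no level configured at this node
    children: dict = field(default_factory=dict)

def resolve(root: LoggerNamespace, name: str, default_level: str = "info"):
    """Walk 'a::b::c' from the root. The deepest namespace that declares
    sinks (or a level) overrides the inherited value; exported sinks of
    every ancestor on the path are always appended."""
    sinks = list(root.sinks)
    level = root.level or default_level
    inherited = []
    current = root
    for segment in name.split("::"):
        child = current.children.get(segment)
        if child is None:
            break  # no more specific configuration exists
        inherited.extend(current.exported_sinks)
        current = child
        if current.sinks:
            sinks = list(current.sinks)
        if current.level is not None:
            level = current.level
    return sinks + inherited, level
```

The overall shape mirrors the loop in `setupSpdLogger`: one pass over the `::`-separated segments, with a break as soon as a segment has no configured child.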
Re: [PR] NIFI-12506 /nifi-api/flow/metrics endpoint times out if flow is big [nifi]
timeabarna commented on PR #8158: URL: https://github.com/apache/nifi/pull/8158#issuecomment-1899838731 Thanks @exceptionfactory, I've updated both PRs with shutdown. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (NIFI-12640) Move servlet-api and jetty-schemas to jetty-bundle
[ https://issues.apache.org/jira/browse/NIFI-12640?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Handermann updated NIFI-12640: Status: Patch Available (was: Open) > Move servlet-api and jetty-schemas to jetty-bundle > -- > > Key: NIFI-12640 > URL: https://issues.apache.org/jira/browse/NIFI-12640 > Project: Apache NiFi > Issue Type: Improvement > Components: Core Framework >Reporter: David Handermann >Assignee: David Handermann >Priority: Minor > Fix For: 2.0.0-M2 > > Time Spent: 10m > Remaining Estimate: 0h > > Following framework upgrades to Jetty 12, the servlet-api and jetty-schemas > JAR libraries can be relocated to the nifi-jetty-bundle NAR instead of the > application lib directory. > NiFi 1.2.0 moved servlet-api and jetty-schemas to the root lib directory due > to Jetty loading issues with the AnnotationConfiguration class. Jetty 12 logs > an informational message when attempting to load the Logback > ServletContainerInitializer, but does not have any runtime issues. NiFi uses > a configured shutdown hook instead of the ServletContainerInitializer, so > loading the Logback implementation is not required, allowing the servlet-api > JAR to be relocated. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[PR] NIFI-12640 Move servlet-api and jetty-schemas to nifi-jetty-bundle [nifi]
exceptionfactory opened a new pull request, #8272: URL: https://github.com/apache/nifi/pull/8272 # Summary [NIFI-12640](https://issues.apache.org/jira/browse/NIFI-12640) Moves the `servlet-api` and `jetty-schemas` JAR libraries from the root `lib` directory to the `nifi-jetty-bundle` NAR. [NIFI-3694](https://issues.apache.org/jira/browse/NIFI-3694) Moved these libraries to the root `lib` directory due to Logback `ServletContainerInitializer` issues with Jetty 9. Jetty 12 has different behavior that avoids startup failures related to `ServletContainerInitializer` implementations, instead logging an informational message. NiFi already disables the Logback `ServletContainerInitializer` using an environment variable, so it is not required for any logging operations. Moving the `servlet-api` and `jetty-schemas` JAR libraries to `nifi-jetty-bundle` maintains their position at the effective root of the NAR Class Loading hierarchy while keeping them packaged in the Jetty NAR. # Tracking Please complete the following tracking steps prior to pull request creation. ### Issue Tracking - [X] [Apache NiFi Jira](https://issues.apache.org/jira/browse/NIFI) issue created ### Pull Request Tracking - [X] Pull Request title starts with Apache NiFi Jira issue number, such as `NIFI-0` - [X] Pull Request commit message starts with Apache NiFi Jira issue number, such as `NIFI-0` ### Pull Request Formatting - [X] Pull Request based on current revision of the `main` branch - [X] Pull Request refers to a feature branch with one commit containing changes # Verification Please indicate the verification steps performed prior to pull request creation. 
### Build - [X] Build completed using `mvn clean install -P contrib-check` - [X] JDK 21 ### Licensing - [ ] New dependencies are compatible with the [Apache License 2.0](https://apache.org/licenses/LICENSE-2.0) according to the [License Policy](https://www.apache.org/legal/resolved.html) - [ ] New dependencies are documented in applicable `LICENSE` and `NOTICE` files ### Documentation - [ ] Documentation formatting appears as expected in rendered files -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Created] (NIFI-12640) Move servlet-api and jetty-schemas to jetty-bundle
David Handermann created NIFI-12640: --- Summary: Move servlet-api and jetty-schemas to jetty-bundle Key: NIFI-12640 URL: https://issues.apache.org/jira/browse/NIFI-12640 Project: Apache NiFi Issue Type: Improvement Components: Core Framework Reporter: David Handermann Assignee: David Handermann Fix For: 2.0.0-M2 Following framework upgrades to Jetty 12, the servlet-api and jetty-schemas JAR libraries can be relocated to the nifi-jetty-bundle NAR instead of the application lib directory. NiFi 1.2.0 moved servlet-api and jetty-schemas to the root lib directory due to Jetty loading issues with the AnnotationConfiguration class. Jetty 12 logs an informational message when attempting to load the Logback ServletContainerInitializer, but does not have any runtime issues. NiFi uses a configured shutdown hook instead of the ServletContainerInitializer, so loading the Logback implementation is not required, allowing the servlet-api JAR to be relocated. -- This message was sent by Atlassian Jira (v8.20.10#820010)
Re: [PR] NIFI-12236 Improving fault tolerancy of the QuestDB backed metrics repository [nifi]
exceptionfactory commented on code in PR #8152: URL: https://github.com/apache/nifi/pull/8152#discussion_r1458179457 ## nifi-nar-bundles/nifi-questdb-bundle/nifi-questdb/pom.xml: ## @@ -0,0 +1,67 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
+    <modelVersion>4.0.0</modelVersion>
+    <parent>
+        <groupId>org.apache.nifi</groupId>
+        <artifactId>nifi-questdb-bundle</artifactId>
+        <version>2.0.0-SNAPSHOT</version>
+    </parent>
+    <artifactId>nifi-questdb</artifactId>
+    <dependencies>
+        <dependency>
+            <groupId>org.apache.nifi</groupId>
+            <artifactId>nifi-utils</artifactId>
+            <version>2.0.0-SNAPSHOT</version>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.commons</groupId>
+            <artifactId>commons-lang3</artifactId>
+        </dependency>
+        <dependency>
+            <groupId>commons-io</groupId>
+            <artifactId>commons-io</artifactId>
+        </dependency>
+        <dependency>
+            <groupId>org.slf4j</groupId>
+            <artifactId>slf4j-api</artifactId>
+        </dependency>
+        <dependency>
+            <groupId>org.questdb</groupId>
+            <artifactId>questdb</artifactId>
+            <version>7.3.7</version>
+        </dependency>
+        <dependency>
+            <groupId>org.mockito</groupId>
+            <artifactId>mockito-junit-jupiter</artifactId>
+        </dependency>
+        <dependency>
+            <groupId>org.springframework</groupId>
+            <artifactId>spring-core</artifactId>
+            <version>6.1.2</version>
Review Comment: This version can be removed because it is managed at the root pom.xml. Is this dependency required? ## nifi-nar-bundles/nifi-questdb-bundle/nifi-questdb/src/main/java/org/apache/nifi/questdb/rollover/DeleteOldRolloverStrategy.java: ## @@ -0,0 +1,95 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and
+ */ +package org.apache.nifi.questdb.rollover; + +import org.apache.nifi.questdb.Client; +import org.apache.nifi.questdb.QueryResultProcessor; +import org.apache.nifi.questdb.QueryRowContext; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import java.time.ZoneOffset; +import java.time.ZonedDateTime; +import java.time.format.DateTimeFormatter; +import java.util.Collections; +import java.util.LinkedList; +import java.util.List; +import java.util.function.Supplier; + +final class DeleteOldRolloverStrategy implements RolloverStrategy { +private static final Logger LOGGER = LoggerFactory.getLogger(DeleteOldRolloverStrategy.class); +private static final DateTimeFormatter DATE_FORMATTER = DateTimeFormatter.ofPattern("yyyy-MM-dd").withZone(ZoneOffset.UTC); Review Comment: Is there a reason for using `UTC` in this case? It seems like it would be more intuitive to use `LocalDateTime`, matching the system time. ## nifi-nar-bundles/nifi-questdb-bundle/nifi-questdb/src/main/java/org/apache/nifi/questdb/embedded/EmbeddedClient.java: ## @@ -0,0 +1,151 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and 
+ */ +package org.apache.nifi.questdb.embedded; + +import io.questdb.cairo.CairoEngine; +import io.questdb.cairo.CairoError; +import io.questdb.cairo.TableToken; +import io.questdb.cairo.TableWriter; +import io.questdb.cairo.sql.RecordCursor; +import io.questdb.cairo.sql.RecordCursorFactory; +import io.questdb.griffin.CompiledQuery; +import io.questdb.griffin.SqlCompiler; +import io.questdb.griffin.SqlCompilerFactoryImpl; +import io.questdb.griffin.SqlException; +import io.questdb.griffin.SqlExecutionContext; +import io.questdb.mp.SCSequence; +import io.questdb.mp.TimeoutBlockingWaitStrategy; +import org.apache.nifi.questdb.Client; +import org.apache.nifi.questdb.DatabaseException; +import org.apache.nifi.questdb.InsertRowDataSource; +import org.apache.nifi.questdb.QueryResultProcessor; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; +
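The reviewer's UTC question matters because a rollover cutoff formatted in UTC and one formatted in system-local time can name different calendar days near midnight. A quick stdlib-only Python illustration (the fixed offset zone is an arbitrary example):

```python
from datetime import datetime, timedelta, timezone

# 03:00 on Jan 19 in a UTC+5 zone: locally the 19th has started,
# but the same instant expressed in UTC is still Jan 18.
local_zone = timezone(timedelta(hours=5))
instant = datetime(2024, 1, 19, 3, 0, tzinfo=local_zone)

local_day = instant.strftime("%Y-%m-%d")                        # day per system zone
utc_day = instant.astimezone(timezone.utc).strftime("%Y-%m-%d")  # day per UTC
```

Whichever zone the rollover strategy formats with, deletions will lag or lead the operator's wall clock by the zone offset; the choice mainly affects which convention operators must keep in mind.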
Re: [PR] NIFI-12270: Add new provenance event: UPLOAD [nifi]
exceptionfactory commented on code in PR #8094: URL: https://github.com/apache/nifi/pull/8094#discussion_r1458173813 ## nifi-mock/src/main/java/org/apache/nifi/util/MockProvenanceReporter.java: ## @@ -225,6 +228,49 @@ public void send(final FlowFile flowFile, final String transitUri, final String } } +@Override +public void upload(final FlowFile flowFile, final FileResource fileResource, final String transitUri) { +upload(flowFile, fileResource, transitUri, null, -1L, true); + +} + +@Override +public void upload(final FlowFile flowFile, final FileResource fileResource, final String transitUri, final long transmissionMillis) { +upload(flowFile, fileResource, transitUri, transmissionMillis, true); +} + +@Override +public void upload(final FlowFile flowFile, final FileResource fileResource, final String transitUri, final String details, final long transmissionMillis) { +upload(flowFile, fileResource, transitUri, details, transmissionMillis, true); +} + +@Override +public void upload(final FlowFile flowFile, final FileResource fileResource, final String transitUri, final long transmissionMillis, final boolean force) { +upload(flowFile, fileResource, transitUri, null, transmissionMillis, force); +} + +@Override +public void upload(FlowFile flowFile, FileResource fileResource, String transitUri, String details, long transmissionMillis, boolean force) { +try { +final String enrichedDetails = StringUtils.isNotBlank(details) ? 
details + " " + fileResource.toString() : fileResource.toString(); +final ProvenanceEventRecord record = build(flowFile, ProvenanceEventType.UPLOAD) +.setTransitUri(transitUri) +.setEventDuration(transmissionMillis) +.setDetails(enrichedDetails) +.build(); +if (force) { + sharedSessionState.addProvenanceEvents(Collections.singleton(record)); +} else { +events.add(record); +} +} catch (final Exception e) { +logger.error("Failed to generate Provenance Event due to " + e); +if (logger.isDebugEnabled()) { +logger.error("", e); +} Review Comment: The `due to` convention should be avoided, I also recommend avoid the conditional debug for the stack trace. Although it is present in some places, we should move away from it in general. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Resolved] (NIFI-12616) Enable @use_case and @multi_processor_use_case decorators to be added to Python Processors
[ https://issues.apache.org/jira/browse/NIFI-12616?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Handermann resolved NIFI-12616. - Resolution: Fixed > Enable @use_case and @multi_processor_use_case decorators to be added to > Python Processors > -- > > Key: NIFI-12616 > URL: https://issues.apache.org/jira/browse/NIFI-12616 > Project: Apache NiFi > Issue Type: Improvement > Components: Core Framework, Extensions >Reporter: Mark Payne >Assignee: Mark Payne >Priority: Major > Fix For: 2.0.0-M2 > > Time Spent: 40m > Remaining Estimate: 0h > > Currently, Python processors have no way of articulating specific use cases > and multi-processor use cases in their docs. Introduce new decorators to > allow for these. > We use decorators here in order to keep the structure similar to that of Java > but also because it offers a clean mechanism for defining the > MultiProcessorUseCase, which becomes awkward if trying to include in the > ProcessorDetails inner class. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (NIFI-12616) Enable @use_case and @multi_processor_use_case decorators to be added to Python Processors
[ https://issues.apache.org/jira/browse/NIFI-12616?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Handermann updated NIFI-12616: Issue Type: Improvement (was: Bug) > Enable @use_case and @multi_processor_use_case decorators to be added to > Python Processors > -- > > Key: NIFI-12616 > URL: https://issues.apache.org/jira/browse/NIFI-12616 > Project: Apache NiFi > Issue Type: Improvement > Components: Core Framework, Extensions >Reporter: Mark Payne >Assignee: Mark Payne >Priority: Major > Fix For: 2.0.0-M2 > > Time Spent: 40m > Remaining Estimate: 0h > > Currently, Python processors have no way of articulating specific use cases > and multi-processor use cases in their docs. Introduce new decorators to > allow for these. > We use decorators here in order to keep the structure similar to that of Java > but also because it offers a clean mechanism for defining the > MultiProcessorUseCase, which becomes awkward if trying to include in the > ProcessorDetails inner class. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (NIFI-12616) Enable @use_case and @multi_processor_use_case decorators to be added to Python Processors
[ https://issues.apache.org/jira/browse/NIFI-12616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17808416#comment-17808416 ] ASF subversion and git services commented on NIFI-12616: Commit 2acc1038c988f02487a13679e403f492db45ff47 in nifi's branch refs/heads/main from Mark Payne [ https://gitbox.apache.org/repos/asf?p=nifi.git;h=2acc1038c9 ] NIFI-12616 Added Processor Documentation Support for Python - Added some Use Case docs for Python processors and updated Runtime Manifests to include Python based processors as well as Use Case/MultiProcessorUseCase documentation elements. Refactored/cleaned up some of the Python code and added unit tests. - Added python-unit-tests profile and enabled on Ubuntu and macOS GitHub workflows This closes #8253 Signed-off-by: David Handermann > Enable @use_case and @multi_processor_use_case decorators to be added to > Python Processors > -- > > Key: NIFI-12616 > URL: https://issues.apache.org/jira/browse/NIFI-12616 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework, Extensions >Reporter: Mark Payne >Assignee: Mark Payne >Priority: Major > Fix For: 2.0.0-M2 > > Time Spent: 40m > Remaining Estimate: 0h > > Currently, Python processors have no way of articulating specific use cases > and multi-processor use cases in their docs. Introduce new decorators to > allow for these. > We use decorators here in order to keep the structure similar to that of Java > but also because it offers a clean mechanism for defining the > MultiProcessorUseCase, which becomes awkward if trying to include in the > ProcessorDetails inner class. -- This message was sent by Atlassian Jira (v8.20.10#820010)
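The decorator approach described in the ticket can be sketched in a few lines: a class decorator that records use-case metadata on the processor class so documentation tooling can collect it later. The decorator name matches the ticket, but the argument names and storage attribute here are illustrative assumptions, not NiFi's exact Python API:

```python
def use_case(description, notes=None, keywords=None, configuration=None):
    """Class decorator recording one use case on the processor class.

    Stacking the decorator appends rather than overwrites, so a processor
    can document several use cases."""
    def wrapper(cls):
        # copy-then-append so subclasses do not mutate a parent's list
        cases = list(getattr(cls, "_use_cases", []))
        cases.append({
            "description": description,
            "notes": notes,
            "keywords": keywords or [],
            "configuration": configuration,
        })
        cls._use_cases = cases
        return cls
    return wrapper

@use_case(description="Convert CSV to JSON", keywords=["csv", "json"])
class ExampleProcessor:
    pass
```

Keeping the metadata on the class (rather than in a nested `ProcessorDetails` field) is what makes the multi-use-case form clean, as the ticket notes.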
Re: [PR] NIFI-12616: Added some Use Case docs for Python processors and update… [nifi]
exceptionfactory closed pull request #8253: NIFI-12616: Added some Use Case docs for Python processors and update… URL: https://github.com/apache/nifi/pull/8253 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] NIFI-12616: Added some Use Case docs for Python processors and update… [nifi]
exceptionfactory commented on code in PR #8253: URL: https://github.com/apache/nifi/pull/8253#discussion_r1457513311 ## nifi-nar-bundles/nifi-py4j-bundle/nifi-python-framework/pom.xml: ## @@ -73,6 +73,35 @@ + +org.codehaus.mojo +exec-maven-plugin +3.1.1 + + +python-test +test + +exec + + +python3 Review Comment: Reviewing these changes in light of the recent build failures, it looks like this plugin execution should be optional so that it does not result in failures on systems that do not have the `python3` executable installed. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (NIFI-12637) Automatically update InvokeHTTP Proxy configuration properties
[ https://issues.apache.org/jira/browse/NIFI-12637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Handermann updated NIFI-12637: Resolution: Fixed Status: Resolved (was: Patch Available) > Automatically update InvokeHTTP Proxy configuration properties > -- > > Key: NIFI-12637 > URL: https://issues.apache.org/jira/browse/NIFI-12637 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Reporter: Mark Payne >Assignee: Mark Payne >Priority: Major > Fix For: 2.0.0-M2 > > Time Spent: 20m > Remaining Estimate: 0h > > Users updating from 1.x are finding that InvokeHTTP is failing because the > "Proxy Type" property that was previously defined no longer is. As a result, > InvokeHTTP treats it as a header and attempts to send it as an HTTP Header. > However, since it has a space in the name, it's invalid and InvokeHTTP fails. > We should automatically handle migrating the Proxy properties to make this > seamless. -- This message was sent by Atlassian Jira (v8.20.10#820010)
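Property migration of the kind described amounts to a rename-and-drop pass over a raw property map before validation, so a legacy name like "Proxy Type" is never misread as an HTTP header. The sketch below is stdlib Python with made-up old/new names for illustration; it is not NiFi's `PropertyMigration` API:

```python
def migrate_properties(properties, renames, removals=()):
    """Return a copy of the property map with legacy names renamed and
    obsolete ones dropped, leaving everything else untouched."""
    migrated = {}
    for name, value in properties.items():
        if name in removals:
            continue  # obsolete property: drop instead of treating as a header
        migrated[renames.get(name, name)] = value
    return migrated

old = {"Proxy Type": "http", "Proxy Host": "proxy.example.com",
       "Remote URL": "https://example.com"}
new = migrate_properties(old,
                         renames={"Proxy Host": "proxy-host"},
                         removals={"Proxy Type"})
```

Running the pass once at component load time makes the upgrade seamless for flows authored against the 1.x property names.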
[jira] [Updated] (NIFI-12638) Add @UseCase documentation to QueryRecord to explain how to use as a record-based Router
[ https://issues.apache.org/jira/browse/NIFI-12638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Handermann updated NIFI-12638: Resolution: Fixed Status: Resolved (was: Patch Available) > Add @UseCase documentation to QueryRecord to explain how to use as a > record-based Router > > > Key: NIFI-12638 > URL: https://issues.apache.org/jira/browse/NIFI-12638 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Reporter: Mark Payne >Assignee: Mark Payne >Priority: Major > Fix For: 2.0.0-M2 > > Time Spent: 20m > Remaining Estimate: 0h > > A common use case for QueryRecord is to use it to route Records to one route > or another. Add use case documentation explaining how to set this up. -- This message was sent by Atlassian Jira (v8.20.10#820010)
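Routing with QueryRecord boils down to one SQL query per relationship, each selecting the records that belong on that route. A stdlib-only Python sketch of the idea, with plain predicates standing in for the processor's SQL:

```python
def route_records(records, routes):
    """Send each record to every relationship whose predicate matches,
    mimicking QueryRecord's one-query-per-relationship routing."""
    routed = {name: [] for name in routes}
    for record in records:
        for name, predicate in routes.items():
            if predicate(record):
                routed[name].append(record)
    return routed

records = [{"city": "NYC", "pop": 8_800_000}, {"city": "Troy", "pop": 49_000}]
routed = route_records(records, {
    "large": lambda r: r["pop"] >= 1_000_000,   # e.g. SELECT * WHERE pop >= 1000000
    "small": lambda r: r["pop"] < 1_000_000,    # e.g. SELECT * WHERE pop < 1000000
})
```

As with QueryRecord, non-exclusive predicates would send a record to more than one relationship, so route conditions are usually written to be mutually exclusive.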
[jira] [Commented] (NIFI-12638) Add @UseCase documentation to QueryRecord to explain how to use as a record-based Router
[ https://issues.apache.org/jira/browse/NIFI-12638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17808412#comment-17808412 ] ASF subversion and git services commented on NIFI-12638: Commit 2212afe482d6fb4458347214910dbb267ff37cb0 in nifi's branch refs/heads/main from Mark Payne [ https://gitbox.apache.org/repos/asf?p=nifi.git;h=2212afe482 ] NIFI-12638 Add Use Case on how to use QueryRecord as a router This closes #8271 Signed-off-by: David Handermann > Add @UseCase documentation to QueryRecord to explain how to use as a > record-based Router > > > Key: NIFI-12638 > URL: https://issues.apache.org/jira/browse/NIFI-12638 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Reporter: Mark Payne >Assignee: Mark Payne >Priority: Major > Fix For: 2.0.0-M2 > > Time Spent: 20m > Remaining Estimate: 0h > > A common use case for QueryRecord is to use it to route Records to one route > or another. Add use case documentation explaining how to set this up. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (NIFI-12635) Upgrade slack client to 1.37.0
[ https://issues.apache.org/jira/browse/NIFI-12635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Handermann updated NIFI-12635: Resolution: Fixed Status: Resolved (was: Patch Available) > Upgrade slack client to 1.37.0 > -- > > Key: NIFI-12635 > URL: https://issues.apache.org/jira/browse/NIFI-12635 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Reporter: Mark Payne >Assignee: Mark Payne >Priority: Major > Fix For: 2.0.0-M2 > > Time Spent: 20m > Remaining Estimate: 0h > > I sometimes see the ListenSlack spew errors about Rate Limiting and > connection failures. This appears to be fixed in the 1.37.0 version of the > client according to [https://github.com/slackapi/java-slack-sdk/pull/1265] -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (NIFI-12637) Automatically update InvokeHTTP Proxy configuration properties
[ https://issues.apache.org/jira/browse/NIFI-12637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17808411#comment-17808411 ] ASF subversion and git services commented on NIFI-12637: Commit bf1dfd0615a8c1eb2f8b90480c8a70ca3b86cba4 in nifi's branch refs/heads/main from Mark Payne [ https://gitbox.apache.org/repos/asf?p=nifi.git;h=bf1dfd0615 ] NIFI-12637 Handle migrating Proxy properties for InvokeHTTP This closes #8270 Signed-off-by: David Handermann > Automatically update InvokeHTTP Proxy configuration properties > -- > > Key: NIFI-12637 > URL: https://issues.apache.org/jira/browse/NIFI-12637 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Reporter: Mark Payne >Assignee: Mark Payne >Priority: Major > Fix For: 2.0.0-M2 > > Time Spent: 20m > Remaining Estimate: 0h > > Users updating from 1.x are finding that InvokeHTTP is failing because the > "Proxy Type" property that was previously defined no longer is. As a result, > InvokeHTTP treats it as a header and attempts to send it as an HTTP Header. > However, since it has a space in the name, it's invalid and InvokeHTTP fails. > We should automatically handle migrating the Proxy properties to make this > seamless. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (NIFI-12635) Upgrade slack client to 1.37.0
[ https://issues.apache.org/jira/browse/NIFI-12635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17808410#comment-17808410 ] ASF subversion and git services commented on NIFI-12635: Commit ddec0dff5a019dcc51fb8da3a9a191e881da5ac9 in nifi's branch refs/heads/main from Mark Payne [ https://gitbox.apache.org/repos/asf?p=nifi.git;h=ddec0dff5a ] NIFI-12635 Upgraded Slack client from 1.36.1 to 1.37.0 This closes #8269 Signed-off-by: David Handermann > Upgrade slack client to 1.37.0 > -- > > Key: NIFI-12635 > URL: https://issues.apache.org/jira/browse/NIFI-12635 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Reporter: Mark Payne >Assignee: Mark Payne >Priority: Major > Fix For: 2.0.0-M2 > > Time Spent: 20m > Remaining Estimate: 0h > > I sometimes see the ListenSlack spew errors about Rate Limiting and > connection failures. This appears to be fixed in the 1.37.0 version of the > client according to [https://github.com/slackapi/java-slack-sdk/pull/1265] -- This message was sent by Atlassian Jira (v8.20.10#820010)
Re: [PR] NIFI-12638: Add Use Case documentation on how to use QueryRecord as a… [nifi]
exceptionfactory closed pull request #8271: NIFI-12638: Add Use Case documentation on how to use QueryRecord as a… URL: https://github.com/apache/nifi/pull/8271 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] NIFI-12637: Handle migrating Proxy properties for InvokeHTTP [nifi]
exceptionfactory closed pull request #8270: NIFI-12637: Handle migrating Proxy properties for InvokeHTTP URL: https://github.com/apache/nifi/pull/8270 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] NIFI-12635: Update slack client to 1.37.0 [nifi]
exceptionfactory closed pull request #8269: NIFI-12635: Update slack client to 1.37.0 URL: https://github.com/apache/nifi/pull/8269 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] NIFI-9677 Fixed LookupRecord issue that an empty JSON array caused mismatch [nifi]
mattyb149 commented on code in PR #8266: URL: https://github.com/apache/nifi/pull/8266#discussion_r1458090443 ## nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/LookupRecord.java: ## @@ -487,7 +487,13 @@ public Set<Relationship> lookup(final Record record, final ProcessContext contex final RecordPath recordPath = entry.getValue(); final RecordPathResult pathResult = recordPath.evaluate(record); -final List<FieldValue> lookupFieldValues = pathResult.getSelectedFields() +final List<FieldValue> allFieldValues = pathResult.getSelectedFields().toList(); +if (allFieldValues.isEmpty()) { Review Comment: I was able to get this "working" by just changing line 503 to `continue;` after setting `hasUnmatchedValue = true`, removing the temp `rels` variable and updating the debug line. That avoids having to call `toList()` and then `stream()` later. But since it should not be an unmatched value, that's less than ideal too. I think the crux of the issue is that `getSelectedFields()` returns an empty list, when it should return a field with an `Object[0]` in it. This is a result of `WildcardIndexPath:67` which generates the empty list instead of a singleton list with an empty array in it. We could add a check for `array.length == 0` there and return the one-element empty array instead. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
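The fix mattyb149 suggests — have the wildcard path yield a single field holding the empty array instead of yielding nothing — can be stated in a few lines of Python. This is a sketch of the selection semantics only, not the RecordPath code:

```python
def selected_fields(values):
    """Wildcard selection over an array.

    Current behaviour returns one selected field per element, so an empty
    array selects nothing and LookupRecord reports a mismatch; the proposed
    fix returns a single field carrying the empty array instead, so the
    field is still 'seen' downstream."""
    if len(values) == 0:
        return [values]          # one selected field whose value is the empty array
    return [value for value in values]  # one selected field per element
```

With this behaviour the lookup path still receives a field for an empty JSON array, so the record is no longer forced down the unmatched route.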
Re: [PR] NIFI-12270: Add new provenance event: UPLOAD [nifi]
Lehel44 commented on code in PR #8094: URL: https://github.com/apache/nifi/pull/8094#discussion_r1458055270 ## nifi-api/src/main/java/org/apache/nifi/provenance/upload/FileResource.java: ## @@ -20,15 +20,26 @@ * Holds information of a file resource for UPLOAD * provenance events. */ -public interface FileResource { +public class FileResource { Review Comment: I excluded methods based on whether they were unused or less commonly utilized in the SEND events.
[jira] [Created] (NIFI-12639) Backport the changes made in NIFI-11627 to the support/nifi-1.x branch
Daniel Stieglitz created NIFI-12639: --- Summary: Backport the changes made in NIFI-11627 to the support/nifi-1.x branch Key: NIFI-12639 URL: https://issues.apache.org/jira/browse/NIFI-12639 Project: Apache NiFi Issue Type: Improvement Reporter: Daniel Stieglitz The changes made for NIFI-11627 were only committed on the 2.x branch and not backported to the support/nifi-1.x branch. The purpose of this ticket is to backport the code. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Assigned] (NIFI-12639) Backport the changes made in NIFI-11627 to the support/nifi-1.x branch
[ https://issues.apache.org/jira/browse/NIFI-12639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Stieglitz reassigned NIFI-12639: --- Assignee: Daniel Stieglitz > Backport the changes made in NIFI-11627 to the support/nifi-1.x branch > -- > > Key: NIFI-12639 > URL: https://issues.apache.org/jira/browse/NIFI-12639 > Project: Apache NiFi > Issue Type: Improvement >Reporter: Daniel Stieglitz >Assignee: Daniel Stieglitz >Priority: Major > > The changes made for NIFI-11627 were only committed on the 2.x branch and not > backported to the support/nifi-1.x branch. The purpose of this ticket is to > backport the code.
[jira] [Updated] (NIFI-12638) Add @UseCase documentation to QueryRecord to explain how to use as a record-based Router
[ https://issues.apache.org/jira/browse/NIFI-12638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Payne updated NIFI-12638: -- Status: Patch Available (was: Open) > Add @UseCase documentation to QueryRecord to explain how to use as a > record-based Router > > > Key: NIFI-12638 > URL: https://issues.apache.org/jira/browse/NIFI-12638 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Reporter: Mark Payne >Assignee: Mark Payne >Priority: Major > Fix For: 2.0.0-M2 > > Time Spent: 10m > Remaining Estimate: 0h > > A common use case for QueryRecord is to use it to route Records to one route > or another. Add use case documentation explaining how to set this up.
[PR] NIFI-12638: Add Use Case documentation on how to use QueryRecord as a… [nifi]
markap14 opened a new pull request, #8271: URL: https://github.com/apache/nifi/pull/8271 … router # Summary [NIFI-0](https://issues.apache.org/jira/browse/NIFI-0) # Tracking Please complete the following tracking steps prior to pull request creation. ### Issue Tracking - [ ] [Apache NiFi Jira](https://issues.apache.org/jira/browse/NIFI) issue created ### Pull Request Tracking - [ ] Pull Request title starts with Apache NiFi Jira issue number, such as `NIFI-0` - [ ] Pull Request commit message starts with Apache NiFi Jira issue number, as such `NIFI-0` ### Pull Request Formatting - [ ] Pull Request based on current revision of the `main` branch - [ ] Pull Request refers to a feature branch with one commit containing changes # Verification Please indicate the verification steps performed prior to pull request creation. ### Build - [ ] Build completed using `mvn clean install -P contrib-check` - [ ] JDK 21 ### Licensing - [ ] New dependencies are compatible with the [Apache License 2.0](https://apache.org/licenses/LICENSE-2.0) according to the [License Policy](https://www.apache.org/legal/resolved.html) - [ ] New dependencies are documented in applicable `LICENSE` and `NOTICE` files ### Documentation - [ ] Documentation formatting appears as expected in rendered files
[jira] [Created] (NIFI-12638) Add @UseCase documentation to QueryRecord to explain how to use as a record-based Router
Mark Payne created NIFI-12638: - Summary: Add @UseCase documentation to QueryRecord to explain how to use as a record-based Router Key: NIFI-12638 URL: https://issues.apache.org/jira/browse/NIFI-12638 Project: Apache NiFi Issue Type: Improvement Components: Extensions Reporter: Mark Payne Assignee: Mark Payne Fix For: 2.0.0-M2 A common use case for QueryRecord is to use it to route Records to one route or another. Add use case documentation explaining how to set this up.
[jira] [Updated] (NIFI-12637) Automatically update InvokeHTTP Proxy configuration properties
[ https://issues.apache.org/jira/browse/NIFI-12637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Payne updated NIFI-12637: -- Status: Patch Available (was: Open) > Automatically update InvokeHTTP Proxy configuration properties > -- > > Key: NIFI-12637 > URL: https://issues.apache.org/jira/browse/NIFI-12637 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Reporter: Mark Payne >Assignee: Mark Payne >Priority: Major > Fix For: 2.0.0-M2 > > Time Spent: 10m > Remaining Estimate: 0h > > Users updating from 1.x are finding that InvokeHTTP is failing because the > "Proxy Type" property that was previously defined no longer is. As a result, > InvokeHTTP treats it as a header and attempts to send it as an HTTP Header. > However, since it has a space in the name, it's invalid and InvokeHTTP fails. > We should automatically handle migrating the Proxy properties to make this > seamless.
[PR] NIFI-12637: Handle migrating Proxy properties for InvokeHTTP [nifi]
markap14 opened a new pull request, #8270: URL: https://github.com/apache/nifi/pull/8270 # Summary [NIFI-0](https://issues.apache.org/jira/browse/NIFI-0) # Tracking Please complete the following tracking steps prior to pull request creation. ### Issue Tracking - [ ] [Apache NiFi Jira](https://issues.apache.org/jira/browse/NIFI) issue created ### Pull Request Tracking - [ ] Pull Request title starts with Apache NiFi Jira issue number, such as `NIFI-0` - [ ] Pull Request commit message starts with Apache NiFi Jira issue number, as such `NIFI-0` ### Pull Request Formatting - [ ] Pull Request based on current revision of the `main` branch - [ ] Pull Request refers to a feature branch with one commit containing changes # Verification Please indicate the verification steps performed prior to pull request creation. ### Build - [ ] Build completed using `mvn clean install -P contrib-check` - [ ] JDK 21 ### Licensing - [ ] New dependencies are compatible with the [Apache License 2.0](https://apache.org/licenses/LICENSE-2.0) according to the [License Policy](https://www.apache.org/legal/resolved.html) - [ ] New dependencies are documented in applicable `LICENSE` and `NOTICE` files ### Documentation - [ ] Documentation formatting appears as expected in rendered files
[jira] [Created] (NIFI-12637) Automatically update InvokeHTTP Proxy configuration properties
Mark Payne created NIFI-12637: - Summary: Automatically update InvokeHTTP Proxy configuration properties Key: NIFI-12637 URL: https://issues.apache.org/jira/browse/NIFI-12637 Project: Apache NiFi Issue Type: Improvement Components: Extensions Reporter: Mark Payne Assignee: Mark Payne Fix For: 2.0.0-M2 Users updating from 1.x are finding that InvokeHTTP is failing because the "Proxy Type" property that was previously defined no longer is. As a result, InvokeHTTP treats it as a header and attempts to send it as an HTTP Header. However, since it has a space in the name, it's invalid and InvokeHTTP fails. We should automatically handle migrating the Proxy properties to make this seamless.
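The migration NIFI-12637 describes can be sketched with a plain Map standing in for NiFi's real property-migration hook. This is a hedged sketch under assumptions: the replacement property name "proxy-type" is invented for illustration; only the "Proxy Type" name comes from the ticket.

```java
import java.util.HashMap;
import java.util.Map;

public class ProxyPropertyMigrationSketch {

    // Consume legacy proxy properties during migration so names containing
    // spaces are never left behind to be treated as dynamic HTTP headers.
    static Map<String, String> migrate(Map<String, String> properties) {
        Map<String, String> migrated = new HashMap<>(properties);
        // Remove the obsolete property; a name with a space is an invalid
        // HTTP header, which is what caused the reported failures.
        String proxyType = migrated.remove("Proxy Type");
        if (proxyType != null) {
            // Target property name is an assumption for this sketch.
            migrated.put("proxy-type", proxyType);
        }
        return migrated;
    }

    public static void main(String[] args) {
        Map<String, String> legacy = new HashMap<>();
        legacy.put("Proxy Type", "HTTP");
        Map<String, String> result = migrate(legacy);
        System.out.println(result.containsKey("Proxy Type")); // prints false
        System.out.println(result.get("proxy-type"));         // prints HTTP
    }
}
```

The point of the sketch is the ordering: the old property must be removed before the processor classifies remaining unknown properties as headers, making the upgrade seamless for 1.x flows.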
[jira] [Created] (NIFI-12636) Upgrades for Pinecone processors
Timothy J Spann created NIFI-12636: -- Summary: Upgrades for Pinecone processors Key: NIFI-12636 URL: https://issues.apache.org/jira/browse/NIFI-12636 Project: Apache NiFi Issue Type: Improvement Components: Extensions Affects Versions: 2.0.0-M1 Environment: Mac, JDK 21, python 3.10 Reporter: Timothy J Spann PutPinecone QueryPinecone vectorstores/requirements.txt see [https://github.com/tspannhw/FLaNK-VectorDB] Upgrade Pinecone Python to 3.0.0 Upgrade langchain
Re: [PR] MINIFICPP-2288 Remove the caching of loggers [nifi-minifi-cpp]
martinzink commented on code in PR #1718: URL: https://github.com/apache/nifi-minifi-cpp/pull/1718#discussion_r1457992373 ## libminifi/src/core/logging/LoggerConfiguration.cpp: ## @@ -267,52 +257,56 @@ std::shared_ptr LoggerConfiguration::initialize_names return root_namespace; } -std::shared_ptr LoggerConfiguration::get_logger(const std::shared_ptr& logger, const std::shared_ptr _namespace, std::string_view name_view, -const std::shared_ptr& formatter, bool remove_if_present) { - std::string name{name_view}; - std::shared_ptr spdlogger = spdlog::get(name); - if (spdlogger) { -if (remove_if_present) { - spdlog::drop(name); -} else { - return spdlogger; -} +std::shared_ptr LoggerConfiguration::get_logger(const std::shared_ptr _namespace, const std::string_view name_view, +const std::shared_ptr& formatter) { + const std::string name{name_view}; + if (auto spdlogger = spdlog::get(name)) { +return spdlogger; } + return create_logger(root_namespace, name, formatter); +} + +void LoggerConfiguration::setupSpdLogger(const std::shared_ptr& spd_logger, +const std::shared_ptr& root_namespace, +const std::string& name, +const std::shared_ptr& formatter) { + if (!spd_logger) +return; std::shared_ptr current_namespace = root_namespace; std::vector> sinks = root_namespace->sinks; std::vector> inherited_sinks; spdlog::level::level_enum level = root_namespace->level; std::string current_namespace_str; - std::string sink_namespace_str = "root"; - std::string level_namespace_str = "root"; for (auto const & name_segment : utils::string::split(name, "::")) { current_namespace_str += name_segment; auto child_pair = current_namespace->children.find(name_segment); if (child_pair == current_namespace->children.end()) { break; } -std::copy(current_namespace->exported_sinks.begin(), current_namespace->exported_sinks.end(), std::back_inserter(inherited_sinks)); +ranges::copy(current_namespace->exported_sinks, std::back_inserter(inherited_sinks)); + current_namespace = child_pair->second; if 
(!current_namespace->sinks.empty()) { sinks = current_namespace->sinks; - sink_namespace_str = current_namespace_str; } if (current_namespace->has_level) { level = current_namespace->level; - level_namespace_str = current_namespace_str; } current_namespace_str += "::"; } - if (logger != nullptr) { -logger->log_debug("{} logger got sinks from namespace {} and level {} from namespace {}", name, sink_namespace_str, spdlog::level::to_string_view(level), level_namespace_str); - } - std::copy(inherited_sinks.begin(), inherited_sinks.end(), std::back_inserter(sinks)); - spdlogger = std::make_shared(name, begin(sinks), end(sinks)); - spdlogger->set_level(level); - spdlogger->set_formatter(formatter->clone()); - spdlogger->flush_on(std::max(spdlog::level::info, current_namespace->level)); + ranges::copy(inherited_sinks, std::back_inserter(sinks)); + spd_logger->sinks() = sinks; + spd_logger->set_level(level); + spd_logger->set_formatter(formatter->clone()); + spd_logger->flush_on(std::max(spdlog::level::info, current_namespace->level)); Review Comment: Convinced myself and deleted the test in https://github.com/apache/nifi-minifi-cpp/pull/1718/commits/0f8988bd5e960ef85e9118b1f41d8632ce8876a5
Re: [PR] MINIFICPP-2288 Remove the caching of loggers [nifi-minifi-cpp]
martinzink commented on code in PR #1718: URL: https://github.com/apache/nifi-minifi-cpp/pull/1718#discussion_r1457989914 ## libminifi/src/core/logging/LoggerConfiguration.cpp: ## @@ -267,52 +257,56 @@ std::shared_ptr LoggerConfiguration::initialize_names return root_namespace; } -std::shared_ptr LoggerConfiguration::get_logger(const std::shared_ptr& logger, const std::shared_ptr _namespace, std::string_view name_view, -const std::shared_ptr& formatter, bool remove_if_present) { - std::string name{name_view}; - std::shared_ptr spdlogger = spdlog::get(name); - if (spdlogger) { -if (remove_if_present) { - spdlog::drop(name); -} else { - return spdlogger; -} +std::shared_ptr LoggerConfiguration::get_logger(const std::shared_ptr _namespace, const std::string_view name_view, +const std::shared_ptr& formatter) { + const std::string name{name_view}; + if (auto spdlogger = spdlog::get(name)) { +return spdlogger; } + return create_logger(root_namespace, name, formatter); +} + +void LoggerConfiguration::setupSpdLogger(const std::shared_ptr& spd_logger, +const std::shared_ptr& root_namespace, +const std::string& name, +const std::shared_ptr& formatter) { + if (!spd_logger) +return; std::shared_ptr current_namespace = root_namespace; std::vector> sinks = root_namespace->sinks; std::vector> inherited_sinks; spdlog::level::level_enum level = root_namespace->level; std::string current_namespace_str; - std::string sink_namespace_str = "root"; - std::string level_namespace_str = "root"; for (auto const & name_segment : utils::string::split(name, "::")) { current_namespace_str += name_segment; auto child_pair = current_namespace->children.find(name_segment); if (child_pair == current_namespace->children.end()) { break; } -std::copy(current_namespace->exported_sinks.begin(), current_namespace->exported_sinks.end(), std::back_inserter(inherited_sinks)); +ranges::copy(current_namespace->exported_sinks, std::back_inserter(inherited_sinks)); + current_namespace = child_pair->second; if 
(!current_namespace->sinks.empty()) { sinks = current_namespace->sinks; - sink_namespace_str = current_namespace_str; } if (current_namespace->has_level) { level = current_namespace->level; - level_namespace_str = current_namespace_str; } current_namespace_str += "::"; } - if (logger != nullptr) { -logger->log_debug("{} logger got sinks from namespace {} and level {} from namespace {}", name, sink_namespace_str, spdlog::level::to_string_view(level), level_namespace_str); - } - std::copy(inherited_sinks.begin(), inherited_sinks.end(), std::back_inserter(sinks)); - spdlogger = std::make_shared(name, begin(sinks), end(sinks)); - spdlogger->set_level(level); - spdlogger->set_formatter(formatter->clone()); - spdlogger->flush_on(std::max(spdlog::level::info, current_namespace->level)); + ranges::copy(inherited_sinks, std::back_inserter(sinks)); + spd_logger->sinks() = sinks; + spd_logger->set_level(level); + spd_logger->set_formatter(formatter->clone()); + spd_logger->flush_on(std::max(spdlog::level::info, current_namespace->level)); Review Comment: I'm starting to think we should simply delete this test... We shouldn't test private implementation details in the first place...
[jira] [Updated] (NIFI-12635) Upgrade slack client to 1.37.0
[ https://issues.apache.org/jira/browse/NIFI-12635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Payne updated NIFI-12635: -- Status: Patch Available (was: Open) > Upgrade slack client to 1.37.0 > -- > > Key: NIFI-12635 > URL: https://issues.apache.org/jira/browse/NIFI-12635 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Reporter: Mark Payne >Assignee: Mark Payne >Priority: Major > Fix For: 2.0.0-M2 > > Time Spent: 10m > Remaining Estimate: 0h > > I sometimes see the ListenSlack spew errors about Rate Limiting and > connection failures. This appears to be fixed in the 1.37.0 version of the > client according to [https://github.com/slackapi/java-slack-sdk/pull/1265]
[PR] NIFI-12635: Update slack client to 1.37.0 [nifi]
markap14 opened a new pull request, #8269: URL: https://github.com/apache/nifi/pull/8269 # Summary [NIFI-0](https://issues.apache.org/jira/browse/NIFI-0) # Tracking Please complete the following tracking steps prior to pull request creation. ### Issue Tracking - [ ] [Apache NiFi Jira](https://issues.apache.org/jira/browse/NIFI) issue created ### Pull Request Tracking - [ ] Pull Request title starts with Apache NiFi Jira issue number, such as `NIFI-0` - [ ] Pull Request commit message starts with Apache NiFi Jira issue number, as such `NIFI-0` ### Pull Request Formatting - [ ] Pull Request based on current revision of the `main` branch - [ ] Pull Request refers to a feature branch with one commit containing changes # Verification Please indicate the verification steps performed prior to pull request creation. ### Build - [ ] Build completed using `mvn clean install -P contrib-check` - [ ] JDK 21 ### Licensing - [ ] New dependencies are compatible with the [Apache License 2.0](https://apache.org/licenses/LICENSE-2.0) according to the [License Policy](https://www.apache.org/legal/resolved.html) - [ ] New dependencies are documented in applicable `LICENSE` and `NOTICE` files ### Documentation - [ ] Documentation formatting appears as expected in rendered files
[jira] [Created] (NIFI-12635) Upgrade slack client to 1.37.0
Mark Payne created NIFI-12635: - Summary: Upgrade slack client to 1.37.0 Key: NIFI-12635 URL: https://issues.apache.org/jira/browse/NIFI-12635 Project: Apache NiFi Issue Type: Improvement Components: Extensions Reporter: Mark Payne Assignee: Mark Payne Fix For: 2.0.0-M2 I sometimes see the ListenSlack spew errors about Rate Limiting and connection failures. This appears to be fixed in the 1.37.0 version of the client according to [https://github.com/slackapi/java-slack-sdk/pull/1265]
Re: [PR] NIFI-12614: Create record reader service for Protobuf messages [nifi]
mark-bathori commented on PR #8250: URL: https://github.com/apache/nifi/pull/8250#issuecomment-1899211820 Thanks @dan-s1 for the comment. I've added an additionalDetails page to the Reader in my latest commit.
[jira] [Commented] (NIFI-11171) Reorganize Standard Components for 2.0.0
[ https://issues.apache.org/jira/browse/NIFI-11171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17808366#comment-17808366 ] David Handermann commented on NIFI-11171: - [~dstiegli1] The JSON Processors probably require a closer evaluation since they use different libraries in some cases. As a next step, I recommend evaluating the dependencies for each one and see what overlaps and what does not. This may be less valuable in the end due to the prevalence of some JSON-based components, but at least reviewing dependencies would help evaluate the best path forward. > Reorganize Standard Components for 2.0.0 > > > Key: NIFI-11171 > URL: https://issues.apache.org/jira/browse/NIFI-11171 > Project: Apache NiFi > Issue Type: Epic >Reporter: David Handermann >Assignee: David Handermann >Priority: Major > > The {{nifi-standard-processors}} and {{nifi-standard-services}} modules > include a large number of Processors and Controller Services supporting an > array of capabilities. Some of these capabilities require specialized > libraries that apply to a limited number of components. > Moving Processors and Controller Services changes class names and bundle > coordinates which will break existing flow configurations. For this reason, > the selection of components for reorganization should be limited and focused. > Components with less frequent updates or usage and components with large > dependencies trees should be considered. > The following items should be considered as described in the [NiFi 2.0 > Release > Goals|https://cwiki.apache.org/confluence/display/NIFI/NiFi+2.0+Release+Goals]: > * SFTP Processors > * Jolt Transform Processors > * Jetty HTTP Processors > * JSON Processors > * Netty-based Processors
[jira] [Commented] (NIFI-11627) Add JSON Schema Registry Service for ValidateJson Processor
[ https://issues.apache.org/jira/browse/NIFI-11627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17808365#comment-17808365 ] David Handermann commented on NIFI-11627: - [~dstiegli1] This is only merged into the main branch for now. If you are interested in back-porting the feature, the best approach would be to open a new Jira issue and pull request. This may be a bit more involved due to other changes on the main branch, so handling it in a separate Jira would be the best approach. It may not be included in time for a 1.25.0 release, based on when the release candidate process starts, but feel free to evaluate the changes required for the support branch. > Add JSON Schema Registry Service for ValidateJson Processor > --- > > Key: NIFI-11627 > URL: https://issues.apache.org/jira/browse/NIFI-11627 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Affects Versions: 1.19.1 >Reporter: Chuck Tilly >Assignee: Daniel Stieglitz >Priority: Major > Labels: backport-needed > Fix For: 2.0.0-M2 > > Time Spent: 9h > Remaining Estimate: 0h > > For the ValidateJSON processor, add support for flowfile attribute references > that will allow for a JSON schema located in the Parameter Contexts, to be > referenced dynamically based on a flowfile attribute. e.g. > {code:java} > #{${schema.name}} {code} > > The benefits of adding support for attribute references are significant. > Adding this capability will allow a single processor to be used for all JSON > schema validation. Unfortunately, the current version of this processor > requires a dedicated processor for every schema, i.e. 12 schemas requires 12 > ValidateJSON processors. This is very laborious to construct and maintain, > and resource expensive. > ValidateJSON processor (https://issues.apache.org/jira/browse/NIFI-7392)
[jira] [Updated] (NIFI-12634) Kubernetes Components Should Ignore Empty Prefix Properties
[ https://issues.apache.org/jira/browse/NIFI-12634?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Handermann updated NIFI-12634: Status: Patch Available (was: Open) > Kubernetes Components Should Ignore Empty Prefix Properties > --- > > Key: NIFI-12634 > URL: https://issues.apache.org/jira/browse/NIFI-12634 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Reporter: David Handermann >Assignee: David Handermann >Priority: Major > Fix For: 2.0.0-M2 > > Time Spent: 10m > Remaining Estimate: 0h > > Following recent changes on the main branch to support optional prefix > properties for Kubernetes Leases and ConfigMaps, testing indicated that the > Leader Election Manager and State Provider included empty strings as valid > values. This changes the default behavior based on the default > nifi.properties and state-management.xml including empty strings for prefix > values. The components should be modified to ignore empty strings in addition > to null values, aligning with current behavior prior to the introduction of > these properties.
[PR] NIFI-12634 Ignore Blank Prefix Values in Kubernetes Components [nifi]
exceptionfactory opened a new pull request, #8268: URL: https://github.com/apache/nifi/pull/8268 # Summary [NIFI-12634](https://issues.apache.org/jira/browse/NIFI-12634) Updates `KubernetesConfigMapStateProvider` and `KubernetesLeaderElectionManager` to ignore blank prefix values as provided in default configuration files. # Tracking Please complete the following tracking steps prior to pull request creation. ### Issue Tracking - [X] [Apache NiFi Jira](https://issues.apache.org/jira/browse/NIFI) issue created ### Pull Request Tracking - [X] Pull Request title starts with Apache NiFi Jira issue number, such as `NIFI-0` - [X] Pull Request commit message starts with Apache NiFi Jira issue number, as such `NIFI-0` ### Pull Request Formatting - [X] Pull Request based on current revision of the `main` branch - [X] Pull Request refers to a feature branch with one commit containing changes # Verification Please indicate the verification steps performed prior to pull request creation. ### Build - [X] Build completed using `mvn clean install -P contrib-check` - [X] JDK 21 ### Licensing - [ ] New dependencies are compatible with the [Apache License 2.0](https://apache.org/licenses/LICENSE-2.0) according to the [License Policy](https://www.apache.org/legal/resolved.html) - [ ] New dependencies are documented in applicable `LICENSE` and `NOTICE` files ### Documentation - [ ] Documentation formatting appears as expected in rendered files
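The blank-prefix guard that NIFI-12634 calls for can be sketched in a few lines. This is a minimal illustration under assumptions: the class and method names below are invented for the sketch, not the actual `KubernetesConfigMapStateProvider` or `KubernetesLeaderElectionManager` sources.

```java
public class PrefixGuardSketch {

    // Treat null AND empty/blank prefixes as absent, since the default
    // nifi.properties and state-management.xml ship the property with an
    // empty string value.
    static String resolveName(String prefix, String baseName) {
        if (prefix == null || prefix.isBlank()) {
            return baseName;                 // no prefix configured
        }
        return prefix + "-" + baseName;      // prefix applied
    }

    public static void main(String[] args) {
        System.out.println(resolveName(null, "nifi-state"));   // prints nifi-state
        System.out.println(resolveName("", "nifi-state"));     // prints nifi-state
        System.out.println(resolveName("prod", "nifi-state")); // prints prod-nifi-state
    }
}
```

Checking only for null would let the empty string from the default configuration files change resource names, which is the regression the ticket describes.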
Re: [PR] MINIFICPP-2288 Remove the caching of loggers [nifi-minifi-cpp]
martinzink commented on code in PR #1718: URL: https://github.com/apache/nifi-minifi-cpp/pull/1718#discussion_r1457944711 ## libminifi/src/core/logging/LoggerConfiguration.cpp: ## @@ -267,52 +257,56 @@ std::shared_ptr LoggerConfiguration::initialize_names return root_namespace; } -std::shared_ptr LoggerConfiguration::get_logger(const std::shared_ptr& logger, const std::shared_ptr _namespace, std::string_view name_view, -const std::shared_ptr& formatter, bool remove_if_present) { - std::string name{name_view}; - std::shared_ptr spdlogger = spdlog::get(name); - if (spdlogger) { -if (remove_if_present) { - spdlog::drop(name); -} else { - return spdlogger; -} +std::shared_ptr LoggerConfiguration::get_logger(const std::shared_ptr _namespace, const std::string_view name_view, +const std::shared_ptr& formatter) { + const std::string name{name_view}; + if (auto spdlogger = spdlog::get(name)) { +return spdlogger; } + return create_logger(root_namespace, name, formatter); +} + +void LoggerConfiguration::setupSpdLogger(const std::shared_ptr& spd_logger, +const std::shared_ptr& root_namespace, +const std::string& name, +const std::shared_ptr& formatter) { + if (!spd_logger) +return; std::shared_ptr current_namespace = root_namespace; std::vector> sinks = root_namespace->sinks; std::vector> inherited_sinks; spdlog::level::level_enum level = root_namespace->level; std::string current_namespace_str; - std::string sink_namespace_str = "root"; - std::string level_namespace_str = "root"; for (auto const & name_segment : utils::string::split(name, "::")) { current_namespace_str += name_segment; auto child_pair = current_namespace->children.find(name_segment); if (child_pair == current_namespace->children.end()) { break; } -std::copy(current_namespace->exported_sinks.begin(), current_namespace->exported_sinks.end(), std::back_inserter(inherited_sinks)); +ranges::copy(current_namespace->exported_sinks, std::back_inserter(inherited_sinks)); + current_namespace = child_pair->second; if 
(!current_namespace->sinks.empty()) { sinks = current_namespace->sinks; - sink_namespace_str = current_namespace_str; } if (current_namespace->has_level) { level = current_namespace->level; - level_namespace_str = current_namespace_str; } current_namespace_str += "::"; } - if (logger != nullptr) { -logger->log_debug("{} logger got sinks from namespace {} and level {} from namespace {}", name, sink_namespace_str, spdlog::level::to_string_view(level), level_namespace_str); - } - std::copy(inherited_sinks.begin(), inherited_sinks.end(), std::back_inserter(sinks)); - spdlogger = std::make_shared(name, begin(sinks), end(sinks)); - spdlogger->set_level(level); - spdlogger->set_formatter(formatter->clone()); - spdlogger->flush_on(std::max(spdlog::level::info, current_namespace->level)); + ranges::copy(inherited_sinks, std::back_inserter(sinks)); + spd_logger->sinks() = sinks; + spd_logger->set_level(level); + spd_logger->set_formatter(formatter->clone()); + spd_logger->flush_on(std::max(spdlog::level::info, current_namespace->level)); Review Comment: This caused the LoggerConfigurationTests to fail to compile. I've added a fix; now it builds, but it's really ugly... (it wasn't much prettier before). This perfectly highlights the shortcomings of this security measure (not a new thing here) in LoggerConfigurations. I'd rather not delay this fix; besides, this hack is only used in the test, and these lock-protected functions are private/protected...
[jira] [Created] (NIFI-12634) Kubernetes Components Should Ignore Empty Prefix Properties
David Handermann created NIFI-12634: --- Summary: Kubernetes Components Should Ignore Empty Prefix Properties Key: NIFI-12634 URL: https://issues.apache.org/jira/browse/NIFI-12634 Project: Apache NiFi Issue Type: Bug Components: Core Framework Reporter: David Handermann Assignee: David Handermann Fix For: 2.0.0-M2 Following recent changes on the main branch to support optional prefix properties for Kubernetes Leases and ConfigMaps, testing indicated that the Leader Election Manager and State Provider included empty strings as valid values. This changes the default behavior based on the default nifi.properties and state-management.xml including empty strings for prefix values. The components should be modified to ignore empty strings in addition to null values, aligning with current behavior prior to the introduction of these properties.
[jira] [Updated] (NIFI-9677) LookUpRecord record path evaluation is "breaking" the next evaluation in case data is missing
[ https://issues.apache.org/jira/browse/NIFI-9677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt Burgess updated NIFI-9677: --- Affects Version/s: (was: 1.16.0) Status: Patch Available (was: Open) > LookUpRecord record path evaluation is "breaking" the next evaluation in case > data is missing > - > > Key: NIFI-9677 > URL: https://issues.apache.org/jira/browse/NIFI-9677 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions > Environment: Apache NiFi custom built from Github repo. >Reporter: Peter Molnar >Assignee: Jim Steinebrey >Priority: Major > Attachments: LookUpRecord_empty_array_data_issue.xml, > image-2022-02-11-12-30-53-134.png, image-2022-02-11-12-32-01-833.png, > image-2022-02-11-12-33-23-283.png > > Time Spent: 20m > Remaining Estimate: 0h > > Input JSON generated by GenerateFlowFile processor looks like this (actually > I just added a currencies array under each record in addition to the "Record > Update Strategy - Replace Existing Values" example here > [https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.15.0/org.apache.nifi.processors.standard.LookupRecord/additionalDetails.html).] > *Note:* for the first record currencies array is empty. > > {code:java} > [ > { > "locales": [ > { > "region": "FR", > "language": "fr" > }, { > "region": "US", > "language": "en" > } > ], > "currencies": [] > }, { > "locales": [ > { > "region": "CA", > "language": "fr" > }, > { > "region": "JP", > "language": "ja" > } > ], > "currencies": [ > { > "currency": "CAD" > }, { > "currency": "JPY" > } > ] > } > ]{code} > > SimpleKeyValueLookUp service contains the following values: > !image-2022-02-11-12-33-23-283.png! > > LookUpRecord processor is configured as follows: > !image-2022-02-11-12-30-53-134.png! > Once I execute the LookUpRecord processor for the flow file, language look up > works fine, but the look up for currencies and regions do not work. > > !image-2022-02-11-12-32-01-833.png! 
> *Note:* in case the 1st currencies array is not empty but contains \{ > "currency": "EUR" }, \{ "currency": "USD" }, all look-ups work fine. But > missing data seems to break the next evaluation of the record path. > Please find the template for reproducing the issue enclosed as > "LookUpRecord_empty_array_data_issue.xml". > Thank you. > -- This message was sent by Atlassian Jira (v8.20.10#820010)
Re: [PR] NIFI-9677 Fixed LookupRecord issue that an empty JSON array caused mismatch [nifi]
mattyb149 commented on PR #8266: URL: https://github.com/apache/nifi/pull/8266#issuecomment-1899126576 Reviewing... -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (NIFI-12633) GetSolr processor clears state on startup
[ https://issues.apache.org/jira/browse/NIFI-12633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] rob_pr updated NIFI-12633: -- Status: Patch Available (was: Open) > GetSolr processor clears state on startup > - > > Key: NIFI-12633 > URL: https://issues.apache.org/jira/browse/NIFI-12633 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Affects Versions: 2.0.0-M1 >Reporter: rob_pr >Priority: Major > Time Spent: 10m > Remaining Estimate: 0h > > The GetSolr processor from nifi-solr stores the latest date of a configured > field so that the same data will not be fetched multiple times. When the user > changes relevant properties, the state is cleared automatically and it starts > fetching records from the beginning. > This also happens unexpectedly when the processor is loaded after > (re)starting NiFi since setting up the properties is incorrectly regarded as > a configuration change. This leads to fetching old records every time NiFi is > restarted. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[PR] NIFI-12633 GetSolr processor clears state on startup [nifi]
rproutermedia opened a new pull request, #8267: URL: https://github.com/apache/nifi/pull/8267 # Summary [NIFI-12633](https://issues.apache.org/jira/browse/NIFI-12633) Reset the clearState flag of GetSolr after configuration is restored. This fixes clearing the processor's date filter after a NiFi restart. # Tracking Please complete the following tracking steps prior to pull request creation. ### Issue Tracking - [x] [Apache NiFi Jira](https://issues.apache.org/jira/browse/NIFI) issue created ### Pull Request Tracking - [x] Pull Request title starts with Apache NiFi Jira issue number, such as `NIFI-0` - [x] Pull Request commit message starts with Apache NiFi Jira issue number, as such `NIFI-0` ### Pull Request Formatting - [x] Pull Request based on current revision of the `main` branch - [x] Pull Request refers to a feature branch with one commit containing changes # Verification Please indicate the verification steps performed prior to pull request creation. ### Build - [x] Build completed using `mvn clean install -P contrib-check` - [x] JDK 21 ### Licensing - [x] New dependencies are compatible with the [Apache License 2.0](https://apache.org/licenses/LICENSE-2.0) according to the [License Policy](https://www.apache.org/legal/resolved.html) - [x] New dependencies are documented in applicable `LICENSE` and `NOTICE` files ### Documentation - [x] Documentation formatting appears as expected in rendered files -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (NIFI-12633) GetSolr processor clears state on startup
[ https://issues.apache.org/jira/browse/NIFI-12633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] rob_pr updated NIFI-12633: -- Flags: (was: Patch) > GetSolr processor clears state on startup > - > > Key: NIFI-12633 > URL: https://issues.apache.org/jira/browse/NIFI-12633 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Affects Versions: 2.0.0-M1 >Reporter: rob_pr >Priority: Major > > The GetSolr processor from nifi-solr stores the latest date of a configured > field so that the same data will not be fetched multiple times. When the user > changes relevant properties, the state is cleared automatically and it starts > fetching records from the beginning. > This also happens unexpectedly when the processor is loaded after > (re)starting NiFi since setting up the properties is incorrectly regarded as > a configuration change. This leads to fetching old records every time NiFi is > restarted. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (NIFI-12633) GetSolr processor clears state on startup
rob_pr created NIFI-12633: - Summary: GetSolr processor clears state on startup Key: NIFI-12633 URL: https://issues.apache.org/jira/browse/NIFI-12633 Project: Apache NiFi Issue Type: Bug Components: Extensions Affects Versions: 2.0.0-M1 Reporter: rob_pr The GetSolr processor from nifi-solr stores the latest date of a configured field so that the same data will not be fetched multiple times. When the user changes relevant properties, the state is cleared automatically and it starts fetching records from the beginning. This also happens unexpectedly when the processor is loaded after (re)starting NiFi since setting up the properties is incorrectly regarded as a configuration change. This leads to fetching old records every time NiFi is restarted. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (MINIFICPP-2289) Refactor LoggerConfiguration mutex/lock
Martin Zink created MINIFICPP-2289: -- Summary: Refactor LoggerConfiguration mutex/lock Key: MINIFICPP-2289 URL: https://issues.apache.org/jira/browse/MINIFICPP-2289 Project: Apache NiFi MiNiFi C++ Issue Type: Improvement Reporter: Martin Zink -- This message was sent by Atlassian Jira (v8.20.10#820010)
Re: [PR] MINIFICPP-2288 Remove the caching of loggers [nifi-minifi-cpp]
martinzink commented on code in PR #1718: URL: https://github.com/apache/nifi-minifi-cpp/pull/1718#discussion_r1457821386 ## libminifi/src/core/logging/LoggerConfiguration.cpp: ## @@ -267,52 +257,56 @@ std::shared_ptr LoggerConfiguration::initialize_names return root_namespace; } -std::shared_ptr LoggerConfiguration::get_logger(const std::shared_ptr& logger, const std::shared_ptr _namespace, std::string_view name_view, -const std::shared_ptr& formatter, bool remove_if_present) { - std::string name{name_view}; - std::shared_ptr spdlogger = spdlog::get(name); - if (spdlogger) { -if (remove_if_present) { - spdlog::drop(name); -} else { - return spdlogger; -} +std::shared_ptr LoggerConfiguration::get_logger(const std::shared_ptr _namespace, const std::string_view name_view, +const std::shared_ptr& formatter) { + const std::string name{name_view}; + if (auto spdlogger = spdlog::get(name)) { Review Comment: Makes sense, included this in https://github.com/apache/nifi-minifi-cpp/pull/1718/commits/d1d9008ca3f4edca4cc0cf847978f536581aa3ae ## libminifi/src/core/logging/LoggerConfiguration.cpp: ## @@ -267,52 +257,56 @@ std::shared_ptr LoggerConfiguration::initialize_names return root_namespace; } -std::shared_ptr LoggerConfiguration::get_logger(const std::shared_ptr& logger, const std::shared_ptr _namespace, std::string_view name_view, -const std::shared_ptr& formatter, bool remove_if_present) { - std::string name{name_view}; - std::shared_ptr spdlogger = spdlog::get(name); - if (spdlogger) { -if (remove_if_present) { - spdlog::drop(name); -} else { - return spdlogger; -} +std::shared_ptr LoggerConfiguration::get_logger(const std::shared_ptr _namespace, const std::string_view name_view, +const std::shared_ptr& formatter) { + const std::string name{name_view}; + if (auto spdlogger = spdlog::get(name)) { +return spdlogger; } + return create_logger(root_namespace, name, formatter); +} + +void LoggerConfiguration::setupSpdLogger(const std::shared_ptr& spd_logger, +const 
std::shared_ptr& root_namespace, +const std::string& name, +const std::shared_ptr& formatter) { + if (!spd_logger) +return; std::shared_ptr current_namespace = root_namespace; std::vector> sinks = root_namespace->sinks; std::vector> inherited_sinks; spdlog::level::level_enum level = root_namespace->level; std::string current_namespace_str; - std::string sink_namespace_str = "root"; - std::string level_namespace_str = "root"; for (auto const & name_segment : utils::string::split(name, "::")) { current_namespace_str += name_segment; auto child_pair = current_namespace->children.find(name_segment); if (child_pair == current_namespace->children.end()) { break; } -std::copy(current_namespace->exported_sinks.begin(), current_namespace->exported_sinks.end(), std::back_inserter(inherited_sinks)); +ranges::copy(current_namespace->exported_sinks, std::back_inserter(inherited_sinks)); + current_namespace = child_pair->second; if (!current_namespace->sinks.empty()) { sinks = current_namespace->sinks; - sink_namespace_str = current_namespace_str; } if (current_namespace->has_level) { level = current_namespace->level; - level_namespace_str = current_namespace_str; } current_namespace_str += "::"; } - if (logger != nullptr) { -logger->log_debug("{} logger got sinks from namespace {} and level {} from namespace {}", name, sink_namespace_str, spdlog::level::to_string_view(level), level_namespace_str); - } - std::copy(inherited_sinks.begin(), inherited_sinks.end(), std::back_inserter(sinks)); - spdlogger = std::make_shared(name, begin(sinks), end(sinks)); - spdlogger->set_level(level); - spdlogger->set_formatter(formatter->clone()); - spdlogger->flush_on(std::max(spdlog::level::info, current_namespace->level)); + ranges::copy(inherited_sinks, std::back_inserter(sinks)); + spd_logger->sinks() = sinks; + spd_logger->set_level(level); + spd_logger->set_formatter(formatter->clone()); + spd_logger->flush_on(std::max(spdlog::level::info, current_namespace->level)); Review Comment: Good 
idea; unfortunately spdlog doesn't offer that functionality... However, we are doing this from a singleton object (LoggerConfiguration is used as a singleton from prod code), so with this in mind, I've expanded the already existing pattern of requiring the lock_guard for this type of code to run. (Of course, this doesn't protect us from malicious code that creates a lock_guard from a separate mutex; we need to refactor this whole mess in a
[jira] [Comment Edited] (NIFI-11171) Reorganize Standard Components for 2.0.0
[ https://issues.apache.org/jira/browse/NIFI-11171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17808331#comment-17808331 ] Daniel Stieglitz edited comment on NIFI-11171 at 1/18/24 6:01 PM: -- [~exceptionfactory] I would like to start work on the JSON processors. I just wanted to ascertain which ones you had in mind. I came up with: * AbstractJsonPathProcessor.java * AttributesTOJson * ConvertJSONToSQL * EvaluateJsonPath * SplitJson * ValidateJson though I was not sure about ConvertJSONToSQL since it has dependencies similar to other processors which use SQL (e.g. AbstractExecuteSQL,). Please advise if this is the list you had in mind. Thanks! was (Author: JIRAUSER294662): [~exceptionfactory] I would like to start work on the JSON processors. I just wanted to ascertain which ones you had in mind. I came up with: * AbstractJsonPathProcessor.java * AttributesTOJson * ConvertJSONToSQL * EvaluateJsonPath * SplitJson * ValidateJson though I was not sure about ConvertJSONToSQL since it has dependencies similar to other processors which use SQL (e.g. AbstractExecuteSQL,). Please advise if this is the list you had in mind. Thanks! > Reorganize Standard Components for 2.0.0 > > > Key: NIFI-11171 > URL: https://issues.apache.org/jira/browse/NIFI-11171 > Project: Apache NiFi > Issue Type: Epic >Reporter: David Handermann >Assignee: David Handermann >Priority: Major > > The {{nifi-standard-processors}} and {{nifi-standard-services}} modules > include a large number of Processors and Controller Services supporting an > array of capabilities. Some of these capabilities require specialized > libraries that apply to a limited number of components. > Moving Processors and Controller Services changes class names and bundle > coordinates which will break existing flow configurations. For this reason, > the selection of components for reorganization should be limited and focused. 
> Components with less frequent updates or usage and components with large > dependencies trees should be considered. > The following items should be considered as described in the [NiFi 2.0 > Release > Goals|https://cwiki.apache.org/confluence/display/NIFI/NiFi+2.0+Release+Goals]: > * SFTP Processors > * Jolt Transform Processors > * Jetty HTTP Processors > * JSON Processors > * Netty-based Processors -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (NIFI-11171) Reorganize Standard Components for 2.0.0
[ https://issues.apache.org/jira/browse/NIFI-11171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17808331#comment-17808331 ] Daniel Stieglitz commented on NIFI-11171: - [~exceptionfactory] I would like to start work on the JSON processors. I just wanted to ascertain which ones you had in mind. I came up with: * AbstractJsonPathProcessor.java * AttributesTOJson * ConvertJSONToSQL * EvaluateJsonPath * SplitJson * ValidateJson though I was not sure about ConvertJSONToSQL since it has dependencies similar to other processors which use SQL (e.g. AbstractExecuteSQL,). Please advise if this is the list you had in mind. Thanks! > Reorganize Standard Components for 2.0.0 > > > Key: NIFI-11171 > URL: https://issues.apache.org/jira/browse/NIFI-11171 > Project: Apache NiFi > Issue Type: Epic >Reporter: David Handermann >Assignee: David Handermann >Priority: Major > > The {{nifi-standard-processors}} and {{nifi-standard-services}} modules > include a large number of Processors and Controller Services supporting an > array of capabilities. Some of these capabilities require specialized > libraries that apply to a limited number of components. > Moving Processors and Controller Services changes class names and bundle > coordinates which will break existing flow configurations. For this reason, > the selection of components for reorganization should be limited and focused. > Components with less frequent updates or usage and components with large > dependencies trees should be considered. > The following items should be considered as described in the [NiFi 2.0 > Release > Goals|https://cwiki.apache.org/confluence/display/NIFI/NiFi+2.0+Release+Goals]: > * SFTP Processors > * Jolt Transform Processors > * Jetty HTTP Processors > * JSON Processors > * Netty-based Processors -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Assigned] (MINIFICPP-2287) Site-to-site with large files: "Site2Site transaction xxx peer unknown respond code 14"
[ https://issues.apache.org/jira/browse/MINIFICPP-2287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ferenc Gerlits reassigned MINIFICPP-2287: - Assignee: Ferenc Gerlits > Site-to-site with large files: "Site2Site transaction xxx peer unknown > respond code 14" > --- > > Key: MINIFICPP-2287 > URL: https://issues.apache.org/jira/browse/MINIFICPP-2287 > Project: Apache NiFi MiNiFi C++ > Issue Type: Bug >Reporter: Marton Szasz >Assignee: Ferenc Gerlits >Priority: Major > > It looks like nifi may have extended the protocol, and minifi c++ didn't > follow the development. > > From Thomas on the nifi slack: > [https://apachenifi.slack.com/archives/CDF1VC1UZ/p1705327811015419] > > {quote}Running minifi c++ v0.15, I am getting errors when transferring large > files (10gb) via site to site to a Nifi (v1.20) cluster. Per the logs,the > transfer is on going for a while (warning logs, inputPortName has been > running for x ms in \{connection ID} > then it looks like the transfer completes (info log, Site to Site transaction > ... set flow 1 flow records with total size xxx-yyy-zzz ) ALSO, the large > file appears on the remote Nifi cluster > then it looks like the transfer failed (warning log, Site2Site transaction > xxx peer unknown respond code 14) > then another error, (warning log , ProcessSession rollback for inputPortName > executed ) > the finally, (warning protocol transmission failed, yielding ( xxx-yyy-zzz ) > This results in endless copies of the large files as presumably minifi > retries the file despite successfully transferring the file. > The logs show that other smaller files continue to be transferred while the > large files yield. (edited) > {quote} -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (NIFI-12631) Upgrade Apache MINA SSHD to 2.12.0
[ https://issues.apache.org/jira/browse/NIFI-12631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pierre Villard updated NIFI-12631: -- Resolution: Fixed Status: Resolved (was: Patch Available) > Upgrade Apache MINA SSHD to 2.12.0 > -- > > Key: NIFI-12631 > URL: https://issues.apache.org/jira/browse/NIFI-12631 > Project: Apache NiFi > Issue Type: Improvement >Reporter: David Handermann >Assignee: David Handermann >Priority: Major > Fix For: 2.0.0-M2 > > Time Spent: 20m > Remaining Estimate: 0h > > Apache MINA SSHD > [2.12.0|https://mina.apache.org/sshd-project/download_2.12.0.html] includes > several bug fixes, including support for strict key exchange to mitigate > CVE-2023-48795. > MINA SSHD 2.12.0 is not compatible with JGit 5, so this upgrade only applies > to the main branch for NiFi 2.0.0. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (NIFI-12631) Upgrade Apache MINA SSHD to 2.12.0
[ https://issues.apache.org/jira/browse/NIFI-12631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17808308#comment-17808308 ] ASF subversion and git services commented on NIFI-12631: Commit 74fdd1cf9835e935c8e6752ec057bc3f5c9177b7 in nifi's branch refs/heads/main from David Handermann [ https://gitbox.apache.org/repos/asf?p=nifi.git;h=74fdd1cf98 ] NIFI-12631 Upgraded Apache MINA SSHD from 2.11.0 to 2.12.0 Signed-off-by: Pierre Villard This closes #8265. > Upgrade Apache MINA SSHD to 2.12.0 > -- > > Key: NIFI-12631 > URL: https://issues.apache.org/jira/browse/NIFI-12631 > Project: Apache NiFi > Issue Type: Improvement >Reporter: David Handermann >Assignee: David Handermann >Priority: Major > Fix For: 2.0.0-M2 > > Time Spent: 10m > Remaining Estimate: 0h > > Apache MINA SSHD > [2.12.0|https://mina.apache.org/sshd-project/download_2.12.0.html] includes > several bug fixes, including support for strict key exchange to mitigate > CVE-2023-48795. > MINA SSHD 2.12.0 is not compatible with JGit 5, so this upgrade only applies > to the main branch for NiFi 2.0.0. -- This message was sent by Atlassian Jira (v8.20.10#820010)
Re: [PR] NIFI-12631 Upgraded Apache MINA SSHD from 2.11.0 to 2.12.0 [nifi]
asfgit closed pull request #8265: NIFI-12631 Upgraded Apache MINA SSHD from 2.11.0 to 2.12.0 URL: https://github.com/apache/nifi/pull/8265 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[PR] NIFI-9677 Fixed LookupRecord issue that an empty JSON array caused mismatch [nifi]
jrsteinebrey opened a new pull request, #8266: URL: https://github.com/apache/nifi/pull/8266 ### Issue Tracking - [X] [Apache NiFi Jira](https://issues.apache.org/jira/browse/NIFI-9677) issue created ### Pull Request Tracking - [X] Pull Request title starts with Apache NiFi Jira issue number, such as `NIFI-0` - [X] Pull Request commit message starts with Apache NiFi Jira issue number, as such `NIFI-0` ### Pull Request Formatting - [X] Pull Request based on current revision of the `main` branch - [X] Pull Request refers to a feature branch with one commit containing changes # Verification I added two new unit tests to verify my fix. I also manually tested the new code works as intended in a local build of NiFi editor. ### Build - [X] Build completed using `mvn clean install -P contrib-check` - [X] JDK 21 ### Licensing N/A ### Documentation N/A -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (NIFI-12632) Extract SFTP components out of the standard bundle
[ https://issues.apache.org/jira/browse/NIFI-12632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] endzeit updated NIFI-12632: --- Description: NIFI-11171 and the goals for NIFI 2.0 outline the desire to extract the SFTP based components out of the standard bundle into a separate bundle. > Extract SFTP components out of the standard bundle > -- > > Key: NIFI-12632 > URL: https://issues.apache.org/jira/browse/NIFI-12632 > Project: Apache NiFi > Issue Type: Sub-task >Reporter: endzeit >Assignee: endzeit >Priority: Major > > NIFI-11171 and the goals for NIFI 2.0 outline the desire to extract the SFTP > based components out of the standard bundle into a separate bundle. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (NIFI-12632) Extract SFTP components out of the standard bundle
endzeit created NIFI-12632: -- Summary: Extract SFTP components out of the standard bundle Key: NIFI-12632 URL: https://issues.apache.org/jira/browse/NIFI-12632 Project: Apache NiFi Issue Type: Sub-task Reporter: endzeit Assignee: endzeit -- This message was sent by Atlassian Jira (v8.20.10#820010)
Re: [PR] NIFI-8606 Added Stop & Configure button to the Controller Services De… [nifi]
mosermw commented on PR #7562: URL: https://github.com/apache/nifi/pull/7562#issuecomment-1898743694 @mcgilman Are we at the point on the main branch where changes to nifi-web-ui also have to be made on the new nifi-web-frontend? I'm not sure how they have been kept up-to-date. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (NIFI-12631) Upgrade Apache MINA SSHD to 2.12.0
[ https://issues.apache.org/jira/browse/NIFI-12631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Handermann updated NIFI-12631: Status: Patch Available (was: In Progress) > Upgrade Apache MINA SSHD to 2.12.0 > -- > > Key: NIFI-12631 > URL: https://issues.apache.org/jira/browse/NIFI-12631 > Project: Apache NiFi > Issue Type: Improvement >Reporter: David Handermann >Assignee: David Handermann >Priority: Major > Fix For: 2.0.0-M2 > > Time Spent: 10m > Remaining Estimate: 0h > > Apache MINA SSHD > [2.12.0|https://mina.apache.org/sshd-project/download_2.12.0.html] includes > several bug fixes, including support for strict key exchange to mitigate > CVE-2023-48795. > MINA SSHD 2.12.0 is not compatible with JGit 5, so this upgrade only applies > to the main branch for NiFi 2.0.0. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[PR] NIFI-12631 Upgraded Apache MINA SSHD from 2.11.0 to 2.12.0 [nifi]
exceptionfactory opened a new pull request, #8265: URL: https://github.com/apache/nifi/pull/8265 # Summary [NIFI-12631](https://issues.apache.org/jira/browse/NIFI-12631) Upgrades Apache MINA SSHD from 2.11.0 to [2.12.0](https://mina.apache.org/sshd-project/download_2.12.0.html) incorporating several bug fixes, including strict key exchange support for mitigating CVE-2023-48795. This upgrade applies to the main branch only, due to an incompatibility with JGit 5 on the support branch. # Tracking Please complete the following tracking steps prior to pull request creation. ### Issue Tracking - [X] [Apache NiFi Jira](https://issues.apache.org/jira/browse/NIFI) issue created ### Pull Request Tracking - [X] Pull Request title starts with Apache NiFi Jira issue number, such as `NIFI-0` - [X] Pull Request commit message starts with Apache NiFi Jira issue number, as such `NIFI-0` ### Pull Request Formatting - [X] Pull Request based on current revision of the `main` branch - [X] Pull Request refers to a feature branch with one commit containing changes # Verification Please indicate the verification steps performed prior to pull request creation. ### Build - [X] Build completed using `mvn clean install -P contrib-check` - [X] JDK 21 ### Licensing - [ ] New dependencies are compatible with the [Apache License 2.0](https://apache.org/licenses/LICENSE-2.0) according to the [License Policy](https://www.apache.org/legal/resolved.html) - [ ] New dependencies are documented in applicable `LICENSE` and `NOTICE` files ### Documentation - [ ] Documentation formatting appears as expected in rendered files -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Created] (NIFI-12631) Upgrade Apache MINA SSHD to 2.12.0
David Handermann created NIFI-12631: --- Summary: Upgrade Apache MINA SSHD to 2.12.0 Key: NIFI-12631 URL: https://issues.apache.org/jira/browse/NIFI-12631 Project: Apache NiFi Issue Type: Improvement Reporter: David Handermann Assignee: David Handermann Fix For: 2.0.0-M2 Apache MINA SSHD [2.12.0|https://mina.apache.org/sshd-project/download_2.12.0.html] includes several bug fixes, including support for strict key exchange to mitigate CVE-2023-48795. MINA SSHD 2.12.0 is not compatible with JGit 5, so this upgrade only applies to the main branch for NiFi 2.0.0. -- This message was sent by Atlassian Jira (v8.20.10#820010)
Re: [PR] MINIFICPP-2288 Remove the caching of loggers [nifi-minifi-cpp]
szaszm commented on code in PR #1718: URL: https://github.com/apache/nifi-minifi-cpp/pull/1718#discussion_r1457526803 ## libminifi/src/core/logging/LoggerConfiguration.cpp: ## @@ -267,52 +257,56 @@ std::shared_ptr LoggerConfiguration::initialize_names return root_namespace; } -std::shared_ptr LoggerConfiguration::get_logger(const std::shared_ptr& logger, const std::shared_ptr _namespace, std::string_view name_view, -const std::shared_ptr& formatter, bool remove_if_present) { - std::string name{name_view}; - std::shared_ptr spdlogger = spdlog::get(name); - if (spdlogger) { -if (remove_if_present) { - spdlog::drop(name); -} else { - return spdlogger; -} +std::shared_ptr LoggerConfiguration::get_logger(const std::shared_ptr _namespace, const std::string_view name_view, +const std::shared_ptr& formatter) { + const std::string name{name_view}; + if (auto spdlogger = spdlog::get(name)) { Review Comment: I'd change this to take a `const std::string&` instead: now we're calling it with a `std::string`, creating a temporary `std::string_view`, only to use it to create a temporary `std::string` that we can pass further. 
## libminifi/src/core/logging/LoggerConfiguration.cpp: ## @@ -267,52 +257,56 @@ std::shared_ptr LoggerConfiguration::initialize_names return root_namespace; } -std::shared_ptr LoggerConfiguration::get_logger(const std::shared_ptr& logger, const std::shared_ptr _namespace, std::string_view name_view, -const std::shared_ptr& formatter, bool remove_if_present) { - std::string name{name_view}; - std::shared_ptr spdlogger = spdlog::get(name); - if (spdlogger) { -if (remove_if_present) { - spdlog::drop(name); -} else { - return spdlogger; -} +std::shared_ptr LoggerConfiguration::get_logger(const std::shared_ptr _namespace, const std::string_view name_view, +const std::shared_ptr& formatter) { + const std::string name{name_view}; + if (auto spdlogger = spdlog::get(name)) { +return spdlogger; } + return create_logger(root_namespace, name, formatter); +} + +void LoggerConfiguration::setupSpdLogger(const std::shared_ptr& spd_logger, +const std::shared_ptr& root_namespace, +const std::string& name, +const std::shared_ptr& formatter) { + if (!spd_logger) +return; std::shared_ptr current_namespace = root_namespace; std::vector> sinks = root_namespace->sinks; std::vector> inherited_sinks; spdlog::level::level_enum level = root_namespace->level; std::string current_namespace_str; - std::string sink_namespace_str = "root"; - std::string level_namespace_str = "root"; for (auto const & name_segment : utils::string::split(name, "::")) { current_namespace_str += name_segment; auto child_pair = current_namespace->children.find(name_segment); if (child_pair == current_namespace->children.end()) { break; } -std::copy(current_namespace->exported_sinks.begin(), current_namespace->exported_sinks.end(), std::back_inserter(inherited_sinks)); +ranges::copy(current_namespace->exported_sinks, std::back_inserter(inherited_sinks)); + current_namespace = child_pair->second; if (!current_namespace->sinks.empty()) { sinks = current_namespace->sinks; - sink_namespace_str = current_namespace_str; } if 
(current_namespace->has_level) { level = current_namespace->level; - level_namespace_str = current_namespace_str; } current_namespace_str += "::"; } - if (logger != nullptr) { -logger->log_debug("{} logger got sinks from namespace {} and level {} from namespace {}", name, sink_namespace_str, spdlog::level::to_string_view(level), level_namespace_str); - } - std::copy(inherited_sinks.begin(), inherited_sinks.end(), std::back_inserter(sinks)); - spdlogger = std::make_shared(name, begin(sinks), end(sinks)); - spdlogger->set_level(level); - spdlogger->set_formatter(formatter->clone()); - spdlogger->flush_on(std::max(spdlog::level::info, current_namespace->level)); + ranges::copy(inherited_sinks, std::back_inserter(sinks)); + spd_logger->sinks() = sinks; + spd_logger->set_level(level); + spd_logger->set_formatter(formatter->clone()); + spd_logger->flush_on(std::max(spdlog::level::info, current_namespace->level)); Review Comment: Synchronization may become an issue here. Can we do these atomically on the spdlog logger object? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact
Re: [PR] MINIFICPP-2288 Remove the caching of loggers [nifi-minifi-cpp]
martinzink commented on code in PR #1718: URL: https://github.com/apache/nifi-minifi-cpp/pull/1718#discussion_r1457463615

## libminifi/src/core/logging/LoggerConfiguration.cpp: ##
@@ -145,40 +143,33 @@ void LoggerConfiguration::initialize(const std::shared_ptr<LoggerProperties>&
   }
   formatter_ = std::make_shared<spdlog::pattern_formatter>(spdlog_pattern);
-  std::map<std::string, std::shared_ptr<spdlog::logger>> spdloggers;
-  for (auto const & logger_impl : loggers) {
-    std::shared_ptr<spdlog::logger> spdlogger;
-    auto it = spdloggers.find(logger_impl->name);
-    if (it == spdloggers.end()) {
-      spdlogger = get_logger(logger_, root_namespace_, logger_impl->name, formatter_, true);
-      spdloggers[logger_impl->name] = spdlogger;
-    } else {
-      spdlogger = it->second;
-    }
-    logger_impl->set_delegate(spdlogger);
-  }
+  spdlog::apply_all([&](auto spd_logger) {
+    setupSpdLogger(spd_logger, root_namespace_, spd_logger->name(), formatter_);
+  });

Review Comment: This is where the magic happens; everything else is basically a refactor/test fix. Previously we kept a shared_ptr to every LoggerImpl ever created, and whenever we wanted to modify the settings we iterated over them and recreated the underlying spdlogger. Now we rely on spdlog's registry: to modify the logger properties we update the underlying spdlogger without touching the LoggerImpls, so we don't need a shared_ptr to them anymore.

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[PR] MINIFICPP-2288 Remove the caching of loggers [nifi-minifi-cpp]
martinzink opened a new pull request, #1718: URL: https://github.com/apache/nifi-minifi-cpp/pull/1718

Thank you for submitting a contribution to Apache NiFi - MiNiFi C++. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message?
- [ ] Does your PR title start with MINIFICPP-XXXX where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character.
- [ ] Has your PR been rebased against the latest commit within the target branch (typically main)?
- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)?
- [ ] If applicable, have you updated the LICENSE file?
- [ ] If applicable, have you updated the NOTICE file?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check GitHub Actions CI results for build issues and submit an update to your PR as soon as possible.

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] NIFI-12506 /nifi-api/flow/metrics endpoint times out if flow is big [nifi]
timeabarna commented on PR #8158: URL: https://github.com/apache/nifi/pull/8158#issuecomment-1898452142 Thanks @exceptionfactory , I've created a processing service and updated both PRs. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Created] (NIFI-12630) NPE getLogger in ConsumeSlack
Pierre Villard created NIFI-12630: - Summary: NPE getLogger in ConsumeSlack Key: NIFI-12630 URL: https://issues.apache.org/jira/browse/NIFI-12630 Project: Apache NiFi Issue Type: Bug Components: Extensions Affects Versions: 1.24.0, 2.0.0-M1 Reporter: Pierre Villard

Getting this NPE when hitting the Slack rate limit:

{code:java}
2024-01-18 16:07:53,560 WARN [Timer-Driven Process Thread-12] o.a.n.controller.tasks.ConnectableTask Processing halted: uncaught exception in Component [ConsumeSlack[id=592d68c7-3fe6-3039-53e4-eae3bfbfbd57]]
java.lang.NullPointerException: Cannot invoke "org.apache.nifi.logging.ComponentLog.debug(String, Object[])" because "this.logger" is null
	at org.apache.nifi.processors.slack.util.RateLimit.isLimitReached(RateLimit.java:42)
	at org.apache.nifi.processors.slack.ConsumeSlack.onTrigger(ConsumeSlack.java:332)
	at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
	at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1274)
	at org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:244)
	at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:102)
	at org.apache.nifi.engine.FlowEngine$2.run(FlowEngine.java:110)
	at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:572)
	at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:358)
	at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
	at java.base/java.lang.Thread.run(Thread.java:1583)
{code}

-- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Assigned] (NIFI-12452) Improve support for Enum & DescribedValue for allowableValues
[ https://issues.apache.org/jira/browse/NIFI-12452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Turcsanyi reassigned NIFI-12452: -- Assignee: endzeit

> Improve support for Enum & DescribedValue for allowableValues
> -------------------------------------------------------------
>
>                 Key: NIFI-12452
>                 URL: https://issues.apache.org/jira/browse/NIFI-12452
>             Project: Apache NiFi
>          Issue Type: Improvement
>          Components: Core Framework
>            Reporter: endzeit
>            Assignee: endzeit
>            Priority: Major
>             Fix For: 2.0.0-M2
>
>          Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> The {{PropertyDescriptor.Builder}} supports providing a {{Class}} that is
> both an {{Enum}} and a {{DescribedValue}}. This improves type safety by
> avoiding the passing of sheer arbitrary {{String}} values.
> I'd like to propose extensions to both the {{PropertyDescriptor.Builder}}
> class as well as the {{PropertyValue}} interface.
>
> The {{PropertyDescriptor.Builder}} should allow not only a raw {{String}} to
> be provided as {{defaultValue()}}, but also provide an overload that
> instead accepts a {{DescribedValue}}.
> {code:java}
> public Builder defaultValue(final DescribedValue value) {code}
> This allows to replace
> {code:java}
> .allowableValues(Foo.class)
> .defaultValue(Foo.BAR.getValue()) {code}
> with
> {code:java}
> .allowableValues(Foo.class)
> .defaultValue(Foo.BAR) {code}
>
> The {{PropertyValue}} should allow receiving the value as one of the Enum
> constants, similar to one of the existing {{as...}} methods.
> {code:java}
> <E extends Enum<E> & DescribedValue> E asAllowableValue(Class<E> clazz) {code}
> This way processor implementations rely on type-safe mappings of allowable
> values instead of matching on {{String}} values manually.

-- This message was sent by Atlassian Jira (v8.20.10#820010)
[PR] NIFI-12629 - adding metadata filtering to QueryPinecone [nifi]
pvillard31 opened a new pull request, #8264: URL: https://github.com/apache/nifi/pull/8264

# Summary
[NIFI-12629](https://issues.apache.org/jira/browse/NIFI-12629) - adding metadata filtering to QueryPinecone

# Tracking
Please complete the following tracking steps prior to pull request creation.

### Issue Tracking
- [ ] [Apache NiFi Jira](https://issues.apache.org/jira/browse/NIFI) issue created

### Pull Request Tracking
- [ ] Pull Request title starts with Apache NiFi Jira issue number, such as `NIFI-0`
- [ ] Pull Request commit message starts with Apache NiFi Jira issue number, such as `NIFI-0`

### Pull Request Formatting
- [ ] Pull Request based on current revision of the `main` branch
- [ ] Pull Request refers to a feature branch with one commit containing changes

# Verification
Please indicate the verification steps performed prior to pull request creation.

### Build
- [ ] Build completed using `mvn clean install -P contrib-check`
- [ ] JDK 21

### Licensing
- [ ] New dependencies are compatible with the [Apache License 2.0](https://apache.org/licenses/LICENSE-2.0) according to the [License Policy](https://www.apache.org/legal/resolved.html)
- [ ] New dependencies are documented in applicable `LICENSE` and `NOTICE` files

### Documentation
- [ ] Documentation formatting appears as expected in rendered files

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (NIFI-12629) Add metadata filtering to QueryPinecone
[ https://issues.apache.org/jira/browse/NIFI-12629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pierre Villard updated NIFI-12629: -- Status: Patch Available (was: Open)

> Add metadata filtering to QueryPinecone
> ---------------------------------------
>
>                 Key: NIFI-12629
>                 URL: https://issues.apache.org/jira/browse/NIFI-12629
>             Project: Apache NiFi
>          Issue Type: Improvement
>          Components: Extensions
>    Affects Versions: 2.0.0-M1
>            Reporter: Pierre Villard
>            Assignee: Pierre Villard
>            Priority: Major
>
> The QueryPinecone processor should be improved to allow for metadata
> filtering.
> [https://docs.pinecone.io/docs/metadata-filtering]
> [https://medium.com/@gmarcilhacy/deep-dive-into-langchain-and-pinecone-metadata-filtering-75a9b6eba9c]
> An optional filter property should be added to the processor allowing a user
> to specify which metadata filters should be applied to the query.

-- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (NIFI-12629) Add metadata filtering to QueryPinecone
Pierre Villard created NIFI-12629: - Summary: Add metadata filtering to QueryPinecone Key: NIFI-12629 URL: https://issues.apache.org/jira/browse/NIFI-12629 Project: Apache NiFi Issue Type: Improvement Components: Extensions Affects Versions: 2.0.0-M1 Reporter: Pierre Villard Assignee: Pierre Villard The QueryPinecone processor should be improved to allow for metadata filtering. [https://docs.pinecone.io/docs/metadata-filtering] [https://medium.com/@gmarcilhacy/deep-dive-into-langchain-and-pinecone-metadata-filtering-75a9b6eba9c] An optional filter property should be added to the processor allowing a user to specify which metadata filters should be applied to the query. -- This message was sent by Atlassian Jira (v8.20.10#820010)
Re: [PR] NIFI-12255 refactor PutElasticsearchRecord and PutElasticsearchJson relationships to be more consistent with other processors [nifi]
ChrisSamo632 commented on PR #7940: URL: https://github.com/apache/nifi/pull/7940#issuecomment-1897996225

RE the migration testing, I discussed this with @markap14 at the time. I found some issues with the property mock processing, so I fixed that (I'll have to reverse engineer what the problem was now, if more details are needed). Adding the relationship mocking is something that could be separated out, although at the time it helped me confirm the updates I wanted to make, and I thought it might be helpful for others in the future. It does, of course, increase this PR a bit though. Happy to create a separate Jira ticket and either link it here (to reflect the additional change), or try to separate out the changes if people think it's necessary.

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] NIFI-12255 refactor PutElasticsearchRecord and PutElasticsearchJson relationships to be more consistent with other processors [nifi]
ChrisSamo632 commented on code in PR #7940: URL: https://github.com/apache/nifi/pull/7940#discussion_r1457069952

## nifi-nar-bundles/nifi-elasticsearch-bundle/nifi-elasticsearch-restapi-processors/src/main/java/org/apache/nifi/processors/elasticsearch/AbstractPutElasticsearch.java: ##
@@ -55,10 +55,41 @@ import java.util.stream.Collectors;
 public abstract class AbstractPutElasticsearch extends AbstractProcessor implements ElasticsearchRestProcessor {
+    static final Relationship REL_ORIGINAL = new Relationship.Builder()
+            .name("original")
+            .description("All flowfiles that are sent to Elasticsearch without request failures go to this relationship.")
+            .build();
+
+    static final Relationship REL_SUCCESSFUL = new Relationship.Builder()
+            .name("successful")

Review Comment: That, and I think renaming the relationship might impact other ES processors (I'll have to double-check).

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] NIFI-12255 refactor PutElasticsearchRecord and PutElasticsearchJson relationships to be more consistent with other processors [nifi]
ChrisSamo632 commented on code in PR #7940: URL: https://github.com/apache/nifi/pull/7940#discussion_r1457066410

## nifi-nar-bundles/nifi-elasticsearch-bundle/nifi-elasticsearch-restapi-processors/src/main/java/org/apache/nifi/processors/elasticsearch/AbstractPutElasticsearch.java: ##
@@ -55,10 +55,41 @@ import java.util.stream.Collectors;
 public abstract class AbstractPutElasticsearch extends AbstractProcessor implements ElasticsearchRestProcessor {
+    static final Relationship REL_ORIGINAL = new Relationship.Builder()
+            .name("original")
+            .description("All flowfiles that are sent to Elasticsearch without request failures go to this relationship.")
+            .build();
+
+    static final Relationship REL_SUCCESSFUL = new Relationship.Builder()
+            .name("successful")

Review Comment: My thinking (from memory) is that `success` in NiFi is usually the output of the incoming FlowFile once it has passed through the processor without error.

Here, the semantics are different in that this relationship will contain the (potentially reformatted) record(s)/JSON that have been processed within Elasticsearch without error. That is, sent to the ES _bulk endpoint with the operation reported by ES as successful. ES docs that fail on the ES side are sent to the errors output in NiFi (so the original incoming FlowFile may end up being split between two outputs, even though the "send" to ES was a success). So separating the two seemed reasonable.

That said, I'm not against using `success` for consistency in naming, provided the difference in semantics won't lead to greater confusion.

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org