[GitHub] [nifi] joewitt opened a new pull request #4132: NIFI-7244 Updated all tests which dont run well on windows to either …
joewitt opened a new pull request #4132: NIFI-7244 Updated all tests which dont run well on windows to either … URL: https://github.com/apache/nifi/pull/4132

…work or be ignored on windows

Thank you for submitting a contribution to Apache NiFi. Please provide a short description of the PR here:

Description of PR

_Enables X functionality; fixes bug NIFI-._

In order to streamline the review of the contribution we ask you to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message?
- [ ] Does your PR title start with **NIFI-XXXX** where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character.
- [ ] Has your PR been rebased against the latest commit within the target branch (typically `master`)?
- [ ] Is your initial contribution a single, squashed commit? _Additional commits in response to PR reviewer feedback should be made on this branch and pushed to allow change tracking. Do not `squash` or use `--force` when pushing to allow for clean monitoring of changes._

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via `mvn -Pcontrib-check clean install` at the root `nifi` folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] Have you verified that the full build is successful on both JDK 8 and JDK 11?
- [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)?
- [ ] If applicable, have you updated the `LICENSE` file, including the main `LICENSE` file under `nifi-assembly`?
- [ ] If applicable, have you updated the `NOTICE` file, including the main `NOTICE` file found under `nifi-assembly`?
- [ ] If adding new Properties, have you added `.displayName` in addition to `.name` (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build issues and submit an update to your PR as soon as possible.

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [nifi] alopresto commented on a change in pull request #4125: NIFI-7153 Adds ContentLengthFilter and DoSFilter
alopresto commented on a change in pull request #4125: NIFI-7153 Adds ContentLengthFilter and DoSFilter URL: https://github.com/apache/nifi/pull/4125#discussion_r390724601 ## File path: nifi-docs/src/main/asciidoc/administration-guide.adoc ## @@ -3247,6 +3247,9 @@ For example, when running in a Docker container or behind a proxy (e.g. localhos host[:port] that NiFi is bound to. |`nifi.web.proxy.context.path`|A comma separated list of allowed HTTP X-ProxyContextPath, X-Forwarded-Context, or X-Forwarded-Prefix header values to consider. By default, this value is blank meaning all requests containing a proxy context path are rejected. Configuring this property would allow requests where the proxy path is contained in this listing. +|`nifi.web.max.content.size`|The maximum size for regular PUT and POST requests. The default value is `10 MB`. Review comment: Sorry, I thought you had reached an understanding of the settings. @natural this is confusing to people, so we should improve the documentation around it.
[GitHub] [nifi] ottobackwards commented on a change in pull request #4125: NIFI-7153 Adds ContentLengthFilter and DoSFilter
ottobackwards commented on a change in pull request #4125: NIFI-7153 Adds ContentLengthFilter and DoSFilter URL: https://github.com/apache/nifi/pull/4125#discussion_r390722033 ## File path: nifi-docs/src/main/asciidoc/administration-guide.adoc ## @@ -3247,6 +3247,9 @@ For example, when running in a Docker container or behind a proxy (e.g. localhos host[:port] that NiFi is bound to. |`nifi.web.proxy.context.path`|A comma separated list of allowed HTTP X-ProxyContextPath, X-Forwarded-Context, or X-Forwarded-Prefix header values to consider. By default, this value is blank meaning all requests containing a proxy context path are rejected. Configuring this property would allow requests where the proxy path is contained in this listing. +|`nifi.web.max.content.size`|The maximum size for regular PUT and POST requests. The default value is `10 MB`. Review comment: I know you resolved this @alopresto, but this doesn't tell when each setting should be used, or whether they can be used together. Still confusing.
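The review thread above concerns a size-limit property whose value is written as a data size such as `10 MB`. As a rough sketch of how such a property string could be turned into a byte count — the function name and the supported unit set here are illustrative assumptions, not NiFi's actual `DataUnit` parser:

```cpp
#include <cassert>
#include <cstdint>
#include <sstream>
#include <stdexcept>
#include <string>

// Hypothetical helper: parse a size string such as "10 MB" into a byte count.
// Illustrates how a value like nifi.web.max.content.size might be interpreted;
// not taken from the NiFi code base.
uint64_t parseDataSize(const std::string& text) {
  std::istringstream in(text);
  uint64_t value = 0;
  std::string unit;
  if (!(in >> value)) throw std::invalid_argument("no numeric value in: " + text);
  in >> unit;  // optional unit suffix; empty means plain bytes
  if (unit.empty() || unit == "B") return value;
  if (unit == "KB") return value * 1024ULL;
  if (unit == "MB") return value * 1024ULL * 1024ULL;
  if (unit == "GB") return value * 1024ULL * 1024ULL * 1024ULL;
  throw std::invalid_argument("unknown unit: " + unit);
}
```

A filter rejecting a request would then compare the request's Content-Length against this parsed limit.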
[GitHub] [nifi-minifi-cpp] bakaid edited a comment on issue #605: MINIFICPP-550 - Implement RocksDB controller service and component st…
bakaid edited a comment on issue #605: MINIFICPP-550 - Implement RocksDB controller service and component st… URL: https://github.com/apache/nifi-minifi-cpp/pull/605#issuecomment-597409388

@arpadboda @szaszm The PR is ready for review, with the following issues.

Blocking the PR:
- When loading a new configuration through C2, notifyStop is not called on the ControllerServices, resulting in the RocksDB database staying open when the new controller service in the new configuration tries to reopen it, making the state retrieval fail and the processors think that they have no state. I am not sure how yet, but this could most likely be hotfixed without the whole memory leak/reload validation refactor. Will take a look next week.

Follow-up issues:
- validation of descendant-of-* Properties: since in the new version of this PR the default injected `CoreComponentStateManagerProvider` is configurable through `minifi.properties` (whether it should always be persisting, and if not, what the auto persistence interval should be), the possibility of using your own controller service for this (defined in `config.yml`) becomes less important for the time being. You can still do it; it will just lack preemptive validation. Follow-up issue created: https://issues.apache.org/jira/browse/MINIFICPP-1173
- cleaning up state storage: this is a hard question. I am not sure when we want to clean up state storage at all. For example, if a new configuration is loaded, we might want to clean the state of components that no longer exist in the flow, but this would mean that a single misconfiguration would make us lose our state (and that we can't properly roll back to an older configuration, because we have lost the state of processors no longer referenced). For the time being I think it is perfectly fine not to do any state cleanup: we don't have many processors using state, we don't have many instances of those processors that do use state, and states are small strings. We can handle at most a few hundred states stored in our DB (and this is the most extreme example I can imagine). If it really becomes an issue for someone, they can just delete the state directory/file. Follow-up issue created: https://issues.apache.org/jira/browse/MINIFICPP-1174
- ControllerService notifyStop and destruction on shutdown: ControllerServices don't get destructed on shutdown (known shared_ptr cycle issue), but they also don't get a notifyStop, which Processors at least get. This is an issue, because this way the state won't be persisted on shutdown. I have worked around this by including an explicit `persist` in the notifyStop of every Processor that uses state, but it should be fixed properly in the long run. Follow-up issue created: https://issues.apache.org/jira/browse/MINIFICPP-1175
- ListSFTP migration: ListSFTP was released in 0.7.0, but I don't know about it being used in the wild. I have rewritten it for the new state handling, but didn't write any migration logic. If it is acceptable, I would prefer not to, and just make it drop the old state in the new release. I have created a blocking (for the next release) follow-up issue for this: https://issues.apache.org/jira/browse/MINIFICPP-1176
- TailFile: TailFile is messed up. I have written state migration for it, both for the legacy and new single mode and for multiple mode, but TailFile itself has issues, especially with multiple mode and rollover; it should really be rewritten from the ground up sometime after we merge this PR. Follow-up issue created: https://issues.apache.org/jira/browse/MINIFICPP-1177
- C2: we have to make the state storage queryable and per-component clearable through C2 for it to be really useful. I have done some preliminary work for this in this PR, but mostly for validating that it fits the architecture. Follow-up issue created: https://issues.apache.org/jira/browse/MINIFICPP-1178

All the processors that (to my knowledge) used state have been rewritten to use the new mechanism:
- TailFile: tested manually, created automated migration tests
- ListSFTP: tested with automated tests
- QueryDatabaseTable: tested state migration and normal usage manually
- ConsumeWindowsEventLog: tested state migration and normal usage manually
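The state mechanism the comment above describes boils down to components reading and writing small string states through a provider-backed store that can be persisted. A minimal in-memory sketch of that shape — the class and method names are illustrative assumptions, not the actual MiNiFi C++ `PersistableKeyValueStoreService` API, and an `std::unordered_map` stands in for the RocksDB backing:

```cpp
#include <cassert>
#include <string>
#include <unordered_map>

// Illustrative sketch only: per-component string states with an explicit
// persist() hook. A real implementation would flush to durable storage
// (RocksDB in the default case) instead of returning trivially.
class InMemoryStateStore {
 public:
  void set(const std::string& component, const std::string& state) {
    states_[component] = state;
  }
  bool get(const std::string& component, std::string& state) const {
    auto it = states_.find(component);
    if (it == states_.end()) return false;
    state = it->second;
    return true;
  }
  void clear(const std::string& component) { states_.erase(component); }
  bool persist() { return true; }  // a durable store would write to disk here
 private:
  std::unordered_map<std::string, std::string> states_;
};
```

This shape also shows why the shutdown issue above matters: if `persist()` is only triggered from notifyStop, a service that never receives notifyStop never flushes its states.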
[jira] [Created] (MINIFICPP-1178) Make the state storage queryable and per-component clearable through C2
Dániel Bakai created MINIFICPP-1178:
---
Summary: Make the state storage queryable and per-component clearable through C2
Key: MINIFICPP-1178
URL: https://issues.apache.org/jira/browse/MINIFICPP-1178
Project: Apache NiFi MiNiFi C++
Issue Type: Task
Reporter: Dániel Bakai
Assignee: Dániel Bakai
Fix For: 0.8.0

With the new state storage it is not trivial anymore to delete the state of a single processor. We should expose getting and clearing the processor states through C2.

-- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [nifi-minifi-cpp] bakaid commented on a change in pull request #605: MINIFICPP-550 - Implement RocksDB controller service and component st…
bakaid commented on a change in pull request #605: MINIFICPP-550 - Implement RocksDB controller service and component st… URL: https://github.com/apache/nifi-minifi-cpp/pull/605#discussion_r390714137 ## File path: libminifi/src/c2/C2Agent.cpp ## @@ -435,6 +437,27 @@ void C2Agent::handle_c2_server_response(const C2ContentResponse &resp) { update_sink_->drainRepositories(); C2Payload response(Operation::ACKNOWLEDGE, resp.ident, false, true); enqueue_c2_response(std::move(response)); + } else if (resp.name == "corecomponentstate") { Review comment: This has only been written to make sure that it fits the architecture; it hasn't even been tested manually. Once https://github.com/apache/nifi-minifi-cpp/pull/743 is done and merged, I can probably write reasonable tests for it.
[GitHub] [nifi-minifi-cpp] bakaid commented on a change in pull request #605: MINIFICPP-550 - Implement RocksDB controller service and component st…
bakaid commented on a change in pull request #605: MINIFICPP-550 - Implement RocksDB controller service and component st… URL: https://github.com/apache/nifi-minifi-cpp/pull/605#discussion_r390713995 ## File path: libminifi/src/c2/C2Agent.cpp ## @@ -596,6 +619,29 @@ void C2Agent::handle_describe(const C2ContentResponse &resp) { } enqueue_c2_response(std::move(response)); } + } else if (resp.name == "corecomponentstate") { Review comment: This has only been tested manually. Once https://github.com/apache/nifi-minifi-cpp/pull/743 is done and merged, I can probably write reasonable tests for it.
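Both review comments refer to new `else if (resp.name == "corecomponentstate")` branches in C2Agent's response handlers. The underlying pattern is a dispatch on the operation name carried in the server response; a reduced sketch follows, where the response struct, handler table, and operation names other than `corecomponentstate` are hypothetical stand-ins rather than the actual C2Agent code:

```cpp
#include <cassert>
#include <map>
#include <string>

// Stand-in for the C2 server response: only the field used for dispatch.
struct C2Response {
  std::string name;
};

// Dispatch on the operation name, as the quoted else-if chain does.
// The table entries here are illustrative labels, not real C2 actions.
std::string handleDescribe(const C2Response& resp) {
  static const std::map<std::string, std::string> handlers = {
      {"metrics", "collect metrics"},
      {"configuration", "dump flow configuration"},
      {"corecomponentstate", "report per-component state"},
  };
  auto it = handlers.find(resp.name);
  return it != handlers.end() ? it->second : "unknown request";
}
```

A table-based dispatch like this is one way the growing else-if chain could eventually be restructured, though the PR under review keeps the chain form.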
[jira] [Commented] (MINIFICPP-1176) Figure out whether we need state migration logic for ListSFTP
[ https://issues.apache.org/jira/browse/MINIFICPP-1176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17056596#comment-17056596 ] Joe Witt commented on MINIFICPP-1176: - as it is an 0.x line such an approach is reasonable. just doc in migration guide. > Figure out whether we need state migration logic for ListSFTP > - > > Key: MINIFICPP-1176 > URL: https://issues.apache.org/jira/browse/MINIFICPP-1176 > Project: Apache NiFi MiNiFi C++ > Issue Type: Task >Reporter: Dániel Bakai >Priority: Blocker > Fix For: 0.8.0 > > > ListSFTP was released in 0.7.0, but I don't know about it being used in the > wild. I have rewrote it for the new state handling, but didn't write any > migration logic. If it is acceptable, I would prefer not to, and just make it > drop the old state in the new release.
[GitHub] [nifi-minifi-cpp] bakaid commented on a change in pull request #605: MINIFICPP-550 - Implement RocksDB controller service and component st…
bakaid commented on a change in pull request #605: MINIFICPP-550 - Implement RocksDB controller service and component st… URL: https://github.com/apache/nifi-minifi-cpp/pull/605#discussion_r390712104 ## File path: libminifi/include/core/ProcessContext.h ## @@ -193,6 +216,61 @@ class ProcessContext : public controller::ControllerServiceLookup, public core::...

```cpp
    return controller_service_provider_->getControllerServiceName(identifier);
  }

  static constexpr char const* DefaultStateManagerProviderName = "defaultstatemanagerprovider";

  std::shared_ptr<CoreComponentStateManager> getStateManager() {
    if (state_manager_provider_ == nullptr) {
      return nullptr;
    }
    return state_manager_provider_->getCoreComponentStateManager(*processor_node_);
  }

  static std::shared_ptr<CoreComponentStateManagerProvider> getOrCreateDefaultStateManagerProvider(
      std::shared_ptr<controller::ControllerServiceProvider> controller_service_provider,
      const char* base_path = "") {
    static std::mutex mutex;
    std::lock_guard<std::mutex> lock(mutex);

    /* See if we have already created a default provider */
    std::shared_ptr<controller::ControllerServiceNode> node =
        controller_service_provider->getControllerServiceNode(DefaultStateManagerProviderName);  // TODO
    if (node != nullptr) {
      return std::dynamic_pointer_cast<CoreComponentStateManagerProvider>(node->getControllerServiceImplementation());
    }

    /* Try to create a RocksDB-backed provider */
    node = controller_service_provider->createControllerService("RocksDbPersistableKeyValueStoreService",
                                                                "org.apache.nifi.minifi.controllers.RocksDbPersistableKeyValueStoreService",
                                                                DefaultStateManagerProviderName,
                                                                true /*firstTimeAdded*/);
    if (node != nullptr) {
      node->initialize();
      auto provider = node->getControllerServiceImplementation();
      if (provider != nullptr) {
        provider->setProperty("Directory", utils::file::FileUtils::concat_path(base_path, "corecomponentstate"));
        node->enable();
        return std::dynamic_pointer_cast<CoreComponentStateManagerProvider>(provider);
      }
    }

    /* Fall back to a locked unordered map-backed provider */
    node = controller_service_provider->createControllerService("UnorderedMapPersistableKeyValueStoreService",
                                                                "org.apache.nifi.minifi.controllers.UnorderedMapPersistableKeyValueStoreService",
                                                                DefaultStateManagerProviderName,
                                                                true /*firstTimeAdded*/);
    if (node != nullptr) {
      node->initialize();
      auto provider = node->getControllerServiceImplementation();
      if (provider != nullptr) {
        provider->setProperty("File", utils::file::FileUtils::concat_path(base_path, "corecomponentstate.txt"));
        node->enable();
        return std::dynamic_pointer_cast<CoreComponentStateManagerProvider>(provider);
      }
    }
```

Review comment: @szaszm Ended up adding more code to this, and now it really made sense to deduplicate it, so I've done that.
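The quoted snippet follows a get-or-create-with-fallback pattern: under a lock, return the already-registered default provider if it exists, otherwise try factories in preference order (RocksDB first, an in-memory map second). A reduced, self-contained sketch of just that control flow — the `Provider`/`Registry` types are stand-ins, not the MiNiFi C++ controller-service classes:

```cpp
#include <cassert>
#include <memory>
#include <mutex>
#include <string>
#include <vector>

// Stand-in for a controller-service implementation; only records its kind.
struct Provider {
  std::string kind;
};

class Registry {
 public:
  // Return the cached default provider, or create it from the first
  // factory that succeeds. Mirrors the locked fallback chain above.
  std::shared_ptr<Provider> getOrCreate(const std::vector<std::string>& preferred_kinds) {
    std::lock_guard<std::mutex> lock(mutex_);         // guard against concurrent double creation
    if (default_provider_) return default_provider_;  // already created earlier
    for (const auto& kind : preferred_kinds) {
      if (auto p = tryCreate(kind)) {                 // first factory that works wins
        default_provider_ = p;
        return p;
      }
    }
    return nullptr;                                   // every factory failed
  }

 private:
  std::shared_ptr<Provider> tryCreate(const std::string& kind) {
    // Stand-in for createControllerService(): pretend the "rocksdb" backend
    // is unavailable so the in-memory fallback is chosen.
    if (kind == "rocksdb") return nullptr;
    return std::make_shared<Provider>(Provider{kind});
  }

  std::mutex mutex_;
  std::shared_ptr<Provider> default_provider_;
};
```

The design point is that callers never care which backend won; they only see the provider interface, which is why the deduplication mentioned in the review comment was straightforward.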
[GitHub] [nifi] sburges commented on issue #4126: NIFI-7103 Adding PutAzureDataLakeStorage Processor to provide native support for Azure Data Lake Storage Gen 2
sburges commented on issue #4126: NIFI-7103 Adding PutAzureDataLakeStorage Processor to provide native support for Azure Data Lake Storage Gen 2 URL: https://github.com/apache/nifi/pull/4126#issuecomment-597409188 LGTM, thanks for addressing.
[jira] [Created] (MINIFICPP-1177) Rewrite TailFile
Dániel Bakai created MINIFICPP-1177:
---
Summary: Rewrite TailFile
Key: MINIFICPP-1177
URL: https://issues.apache.org/jira/browse/MINIFICPP-1177
Project: Apache NiFi MiNiFi C++
Issue Type: Task
Reporter: Dániel Bakai
Fix For: 0.8.0

Our TailFile implementation, especially in the handling of rollover and multiple file mode, is buggy, and has significant, erroneous deviations from the NiFi implementation. It has been patched time and time again, but it is still not up to par. Since this is the processor used for one of our primary business use cases, log parsing, it should be the best implementation we can achieve. We should review how NiFi works and reimplement this processor based on that, utilizing our recent processor design best practices.
[jira] [Created] (MINIFICPP-1176) Figure out whether we need state migration logic for ListSFTP
Dániel Bakai created MINIFICPP-1176: --- Summary: Figure out whether we need state migration logic for ListSFTP Key: MINIFICPP-1176 URL: https://issues.apache.org/jira/browse/MINIFICPP-1176 Project: Apache NiFi MiNiFi C++ Issue Type: Task Reporter: Dániel Bakai Fix For: 0.8.0 ListSFTP was released in 0.7.0, but I don't know of it being used in the wild. I have rewritten it for the new state handling, but didn't write any migration logic. If it is acceptable, I would prefer not to, and just make it drop the old state in the new release.
[jira] [Updated] (MINIFICPP-1175) ControllerService neither notifyStop nor destructor is called on shutdown
[ https://issues.apache.org/jira/browse/MINIFICPP-1175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dániel Bakai updated MINIFICPP-1175: Description: This means that we can't clean up Controller Services cleanly on shutdown. Not being destructed is most likely caused by MINIFICPP-839, but we should still at least (and independently of that) call notifyStop. A concrete issue is persisting PersistableKeyValueStorageServices on shutdown. was: This means that we can't clean up Controller Services cleanly on shutdown. A concrete issue is persisting PersistableKeyValueStorageServices on shutdown. > Key: MINIFICPP-1175 > URL: https://issues.apache.org/jira/browse/MINIFICPP-1175 > Project: Apache NiFi MiNiFi C++ > Issue Type: Bug > Reporter: Dániel Bakai > Priority: Blocker > Fix For: 0.8.0
[jira] [Created] (MINIFICPP-1175) ControllerService neither notifyStop nor destructor is called on shutdown
Dániel Bakai created MINIFICPP-1175: --- Summary: ControllerService neither notifyStop nor destructor is called on shutdown Key: MINIFICPP-1175 URL: https://issues.apache.org/jira/browse/MINIFICPP-1175 Project: Apache NiFi MiNiFi C++ Issue Type: Bug Reporter: Dániel Bakai Fix For: 0.8.0 This means that we can't clean up Controller Services cleanly on shutdown. A concrete issue is persisting PersistableKeyValueStorageServices on shutdown.
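The contract MINIFICPP-1175 asks for (every controller service gets a notifyStop() callback at shutdown, even if its destructor never runs) can be sketched as follows. This is an illustrative sketch only, written in Java for brevity although the project itself is C++; the names ServiceRegistry, ControllerService, and notifyStop's placement here are assumptions, not the real MiNiFi C++ API.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: a registry that guarantees notifyStop() is called on
// every registered controller service during shutdown, so that services such
// as a PersistableKeyValueStoreService get a chance to persist their state
// even when destructors are never invoked (cf. MINIFICPP-839).
public class ServiceRegistry {
    public interface ControllerService {
        void notifyStop();
    }

    private final List<ControllerService> services = new ArrayList<>();
    private int stopped = 0;

    public void register(ControllerService service) {
        services.add(service);
    }

    // Invoked from a shutdown hook, independently of object destruction.
    public void shutdown() {
        for (ControllerService service : services) {
            service.notifyStop();
            stopped++;
        }
        services.clear();
    }

    public int stoppedCount() {
        return stopped;
    }
}
```

The point of the sketch is that shutdown() is driven by the framework rather than by destructors, so cleanup happens even if the service objects leak.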
[jira] [Created] (MINIFICPP-1174) Figure out how to clean up state storage, if at all
Dániel Bakai created MINIFICPP-1174: --- Summary: Figure out how to clean up state storage, if at all Key: MINIFICPP-1174 URL: https://issues.apache.org/jira/browse/MINIFICPP-1174 Project: Apache NiFi MiNiFi C++ Issue Type: Task Reporter: Dániel Bakai I am not sure when we want to clean up state storage at all. For example, if a new configuration is loaded, we might want to clean the state of components that no longer exist in the flow, but this would mean that a single misconfiguration would make us lose our state (and that we can't properly roll back to an older configuration, because we have lost the state of processors no longer referenced). For the time being I think it is perfectly fine not to do any state cleanup: we don't have many processors using state, we don't have many instances of those processors that do use state, and states are small strings. We can handle, at most, a few hundred states stored in our DB (and this is the most extreme example I can imagine). If it really becomes an issue for someone, they can just delete the state directory/file.
[jira] [Created] (MINIFICPP-1173) Add descendant-of-/implements- asType Property validator
Dániel Bakai created MINIFICPP-1173: --- Summary: Add descendant-of-/implements- asType Property validator Key: MINIFICPP-1173 URL: https://issues.apache.org/jira/browse/MINIFICPP-1173 Project: Apache NiFi MiNiFi C++ Issue Type: Improvement Reporter: Dániel Bakai Assignee: Dániel Bakai We want to be able to request a Controller Service that implements a particular interface, specifically in the context of state storage, where we want a CoreComponentStateManagerProvider, either implemented by RocksDbPersistableKeyValueStoreService or UnorderedMapPersistableKeyValueStoreService (for the time being, there might be more). We will also need to communicate this information (that a Controller Service implements specific interfaces) via C2 so that a C2 controller would be able to do its own config validation.
[GitHub] [nifi] alopresto commented on a change in pull request #4125: NIFI-7153 Adds ContentLengthFilter and DoSFilter
alopresto commented on a change in pull request #4125: NIFI-7153 Adds ContentLengthFilter and DoSFilter URL: https://github.com/apache/nifi/pull/4125#discussion_r390709410 ## File path: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-ui/src/main/webapp/js/nf/nf-error-handler.js ## @@ -73,6 +73,8 @@ $('#message-title').text('Insufficient Permissions'); } else if (xhr.status === 409) { $('#message-title').text('Invalid State'); +} else if (xhr.status === 413) { Review comment: Can response 417 actually be returned by `LimitedContentLengthRequest` in the NiFi server if the content-length header value is different than the actual content length?
[GitHub] [nifi] alopresto commented on a change in pull request #4125: NIFI-7153 Adds ContentLengthFilter and DoSFilter
alopresto commented on a change in pull request #4125: NIFI-7153 Adds ContentLengthFilter and DoSFilter URL: https://github.com/apache/nifi/pull/4125#discussion_r390709146 ## File path: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-ui/src/main/webapp/js/nf/nf-error-handler.js ## @@ -89,7 +91,7 @@ } // status code 400, 404, and 409 are expected response codes for nfCommon errors. Review comment: This comment should be updated to include 413.
[GitHub] [nifi] alopresto commented on a change in pull request #4125: NIFI-7153 Adds ContentLengthFilter and DoSFilter
alopresto commented on a change in pull request #4125: NIFI-7153 Adds ContentLengthFilter and DoSFilter URL: https://github.com/apache/nifi/pull/4125#discussion_r390708884 ## File path: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-jetty/src/main/java/org/apache/nifi/web/server/JettyServer.java ## @@ -590,11 +591,15 @@ private WebAppContext loadWar(final File warFile, final String contextPath, fina // add HTTP security headers to all responses final String ALL_PATHS = "/*"; -ArrayList> filters = new ArrayList<>(Arrays.asList(XFrameOptionsFilter.class, ContentSecurityPolicyFilter.class, XSSProtectionFilter.class)); +ArrayList> filters = new ArrayList<>(Arrays.asList( Review comment: Not a complete solution; see comments on unit test below.
[GitHub] [nifi] alopresto commented on a change in pull request #4125: NIFI-7153 Adds ContentLengthFilter and DoSFilter
alopresto commented on a change in pull request #4125: NIFI-7153 Adds ContentLengthFilter and DoSFilter URL: https://github.com/apache/nifi/pull/4125#discussion_r390707896 ## File path: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-security/src/test/java/org/apache/nifi/web/security/request/ContentLengthFilterTest.java ## @@ -0,0 +1,183 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.apache.nifi.web.security.request; + +import org.apache.commons.lang3.StringUtils; +import org.eclipse.jetty.server.LocalConnector; +import org.eclipse.jetty.server.Server; +import org.eclipse.jetty.servlet.FilterHolder; +import org.eclipse.jetty.servlet.ServletContextHandler; + +import org.eclipse.jetty.servlet.ServletHolder; +import org.junit.After; +import org.junit.Assert; +import org.junit.Test; + +import javax.servlet.DispatcherType; +import javax.servlet.ServletException; +import javax.servlet.ServletInputStream; +import javax.servlet.http.HttpServlet; +import javax.servlet.http.HttpServletRequest; +import javax.servlet.http.HttpServletResponse; + +import java.io.IOException; +import java.util.EnumSet; +import java.util.concurrent.TimeUnit; + + +/** + * This test exercises the {@link ContentLengthFilter} class. + * + * The approach here is to use a {@link LocalConnector} and raw strings for HTTP requests. The additional complexity + * of a complete HTTP client isn't required to determine the behavior, and any client would introduce a new dependency. 
+ * + */ +public class ContentLengthFilterTest { +private static final int MAX_CONTENT_LENGTH = 1000; +private static final int SERVER_IDLE_TIMEOUT = 2500; // only one request needed + value large enough for slow systems +private static final String POST_REQUEST = "POST / HTTP/1.1\r\nContent-Length: %d\r\nHost: h\r\n\r\n%s"; +private static final String FORM_REQUEST = "POST / HTTP/1.1\r\nContent-Length: %d\r\nHost: h\r\nContent-Type: application/x-www-form-urlencoded\r\n\r\n%s"; +public static final int FORM_CONTENT_SIZE = 128; + +private Server serverUnderTest; +private LocalConnector localConnector; +private ServletContextHandler contextUnderTest; + +@After +public void stopServer() throws Exception { +if (serverUnderTest != null && serverUnderTest.isRunning()) { +serverUnderTest.stop(); +} +} + + +@Test +public void testRequestsWithMissingContentLengthHeader() throws Exception { +configureAndStartServer(readFullyAndRespondOK, -1); + +// This shows that the ContentLengthFilter allows a request that does not have a content-length header. 
+String response = localConnector.getResponse("POST / HTTP/1.0\r\n\r\n"); +Assert.assertFalse(StringUtils.containsIgnoreCase(response, "411 Length Required")); +} + + +@Test +public void testRequestsWithContentLengthHeader() throws Exception { +configureAndStartServer(readFullyAndRespondOK, -1); + +int smallClaim = 150; +int largeClaim = 2000; + +String incompletePayload = StringUtils.repeat("1", 10); +String largePayload = StringUtils.repeat("1", largeClaim + 200); + +// This shows that the ContentLengthFilter rejects a request when the client claims more than the max + sends more than the max: +String response = localConnector.getResponse(String.format(POST_REQUEST, largeClaim, largePayload)); +Assert.assertTrue(StringUtils.containsIgnoreCase(response, "413 Payload Too Large")); + +// This shows that the ContentLengthFilter rejects a request when the client claims more than the max + sends less than the max: +response = localConnector.getResponse(String.format(POST_REQUEST, largeClaim, incompletePayload)); +Assert.assertTrue(StringUtils.containsIgnoreCase(response, "413 Payload Too Large")); + +// This shows that the ContentLengthFilter allows a request when it claims less than the max + sends more than the max: +response = localConnector.getResponse(String.format(POST_REQUEST, smallClaim, largePayload)); +Assert.assertTrue(StringUtils.containsIgnoreCase(response, "200
[GitHub] [nifi] alopresto commented on a change in pull request #4125: NIFI-7153 Adds ContentLengthFilter and DoSFilter
alopresto commented on a change in pull request #4125: NIFI-7153 Adds ContentLengthFilter and DoSFilter URL: https://github.com/apache/nifi/pull/4125#discussion_r390707633 ## File path: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-security/src/test/java/org/apache/nifi/web/security/request/ContentLengthFilterTest.java
[GitHub] [nifi] alopresto commented on a change in pull request #4125: NIFI-7153 Adds ContentLengthFilter and DoSFilter
alopresto commented on a change in pull request #4125: NIFI-7153 Adds ContentLengthFilter and DoSFilter URL: https://github.com/apache/nifi/pull/4125#discussion_r390706399 ## File path: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-security/src/test/java/org/apache/nifi/web/security/request/ContentLengthFilterTest.java
[GitHub] [nifi] alopresto commented on a change in pull request #4125: NIFI-7153 Adds ContentLengthFilter and DoSFilter
alopresto commented on a change in pull request #4125: NIFI-7153 Adds ContentLengthFilter and DoSFilter URL: https://github.com/apache/nifi/pull/4125#discussion_r390704136 ## File path: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-security/src/test/java/org/apache/nifi/web/security/request/ContentLengthFilterTest.java
[GitHub] [nifi] alopresto commented on a change in pull request #4125: NIFI-7153 Adds ContentLengthFilter and DoSFilter
alopresto commented on a change in pull request #4125: NIFI-7153 Adds ContentLengthFilter and DoSFilter URL: https://github.com/apache/nifi/pull/4125#discussion_r390701556 ## File path: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-jetty/src/main/java/org/apache/nifi/web/server/JettyServer.java ## @@ -590,11 +591,15 @@ private WebAppContext loadWar(final File warFile, final String contextPath, fina // add HTTP security headers to all responses final String ALL_PATHS = "/*"; -ArrayList> filters = new ArrayList<>(Arrays.asList(XFrameOptionsFilter.class, ContentSecurityPolicyFilter.class, XSSProtectionFilter.class)); +ArrayList> filters = new ArrayList<>(Arrays.asList( Review comment: I notice the max form content size is set to 600KB on line 590 above. If this is set, doesn't the `10 MB` / `100 MB` limit not apply because Jetty overrides it anyway? https://www.eclipse.org/jetty/documentation/current/setting-form-size.html >Jetty limits the amount of data that can post back from a browser or other client to the server. This helps protect the server against denial of service attacks by malicious clients sending huge amounts of data. The default maximum size Jetty permits is 200000 bytes. You can change this default for a particular webapp, for all webapps on a particular Server instance, or all webapps within the same JVM.
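The interaction the reviewer is asking about comes down to which limit is consulted first. A hedged sketch of the assumption behind the question: if Jetty's per-context maxFormContentSize (600 KB here) caps form posts before the ContentLengthFilter's 10 MB / 100 MB limit is ever reached, then for application/x-www-form-urlencoded bodies the stricter limit is the effective one. The FormLimits class and its method are hypothetical names for illustration, not NiFi or Jetty API.

```java
// Hypothetical illustration: when both Jetty's maxFormContentSize and a
// servlet filter's max content length are configured, the smaller value is
// the one a form POST can actually reach, because whichever layer enforces
// the stricter limit rejects the request first.
public class FormLimits {
    public static int effectiveFormLimit(int jettyMaxFormContentSize, int filterMaxContentLength) {
        return Math.min(jettyMaxFormContentSize, filterMaxContentLength);
    }
}
```

Under this assumption, a 600 KB maxFormContentSize makes the filter's 10 MB limit moot for form content, which is exactly the concern raised in the comment.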
[GitHub] [nifi] alopresto commented on a change in pull request #4125: NIFI-7153 Adds ContentLengthFilter and DoSFilter
alopresto commented on a change in pull request #4125: NIFI-7153 Adds ContentLengthFilter and DoSFilter URL: https://github.com/apache/nifi/pull/4125#discussion_r390697974 ## File path: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-security/src/main/java/org/apache/nifi/web/security/request/ContentLengthFilter.java ## @@ -0,0 +1,143 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.nifi.web.security.request; + +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import javax.servlet.Filter; +import javax.servlet.FilterChain; +import javax.servlet.FilterConfig; +import javax.servlet.ReadListener; +import javax.servlet.ServletException; +import javax.servlet.ServletInputStream; +import javax.servlet.ServletRequest; +import javax.servlet.ServletResponse; +import javax.servlet.http.HttpServletRequest; +import javax.servlet.http.HttpServletRequestWrapper; +import javax.servlet.http.HttpServletResponse; +import java.io.IOException; + +/** + * This {@link Filter} rejects HTTP requests that exceed a specific, maximum size. 
+ */ +public class ContentLengthFilter implements Filter { +private static final Logger logger = LoggerFactory.getLogger(ContentLengthFilter.class); +public final static String MAX_LENGTH_INIT_PARAM = "maxContentLength"; +public final static int MAX_LENGTH_DEFAULT = 10_000_000; +private int maxContentLength; + +public ContentLengthFilter() { +maxContentLength = MAX_LENGTH_DEFAULT; +} + +public ContentLengthFilter(int maxLength) { +maxContentLength = maxLength; +} + +@Override +public void init(FilterConfig config) throws ServletException { +String maxLength = config.getInitParameter(MAX_LENGTH_INIT_PARAM); +int length = maxLength == null ? MAX_LENGTH_DEFAULT : Integer.parseInt(maxLength); +if (length < 0) { +throw new ServletException("Invalid max request length."); +} +maxContentLength = length; +logger.info("Max content length set: " + maxLength + "b"); +} + +@Override +public void doFilter(ServletRequest request, ServletResponse response, FilterChain next) throws IOException, ServletException { +HttpServletRequest httpRequest = (HttpServletRequest) request; +String httpMethod = httpRequest.getMethod(); + +// Check the HTTP method because the spec says clients don't have to send a content-length header for methods +// that don't use it. So even though an attacker may provide a large body in a GET request, the body should go +// unread and a size filter is unneeded at best. See RFC 2616 section 14.13, and RFC 1945 section 10.4. 
+boolean willReadInputStream = maxContentLength > 0 && (httpMethod.equalsIgnoreCase("POST") || httpMethod.equalsIgnoreCase("PUT")); +if (!willReadInputStream) { +logger.info("No length check of request with method {} and maximum {}", httpMethod, maxContentLength); +next.doFilter(request, response); +return; +} + +HttpServletResponse httpResponse = (HttpServletResponse) response; +int contentLength = request.getContentLength(); +if (contentLength > maxContentLength) { +// Request with a client-specified length greater than our max is rejected: +logger.info("Content length check rejected request with content-length {} greater than maximum {}", contentLength, maxContentLength); +httpResponse.setContentType("text/plain"); +httpResponse.getOutputStream().write("Payload Too large".getBytes()); + httpResponse.setStatus(HttpServletResponse.SC_REQUEST_ENTITY_TOO_LARGE); +} else { +// If or when the request is read, this limits the read to our max: +logger.info("Content length check allowed request with content-length {} less than maximum {}", contentLength, maxContentLength); Review comment: I think this should be set to `debug` to avoid flooding the logs.
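For reference, the accept/reject decision discussed in the comments above can be reduced to a pair of pure functions. This is an illustrative sketch only (the class name is hypothetical, and the real filter also wraps the request input stream, which this omits):

```java
// Illustration of the decision logic in the ContentLengthFilter under review,
// extracted as pure static methods. Class name is hypothetical.
class ContentLengthDecision {

    // Only POST and PUT bodies are length-checked; other methods pass through,
    // matching the RFC 2616 / RFC 1945 reasoning quoted in the review. A max
    // of 0 disables the check entirely.
    static boolean requiresLengthCheck(String httpMethod, int maxContentLength) {
        return maxContentLength > 0
                && (httpMethod.equalsIgnoreCase("POST") || httpMethod.equalsIgnoreCase("PUT"));
    }

    // A request is rejected (HTTP 413) when the client-declared Content-Length
    // exceeds the configured maximum.
    static boolean rejects(String httpMethod, int contentLength, int maxContentLength) {
        return requiresLengthCheck(httpMethod, maxContentLength) && contentLength > maxContentLength;
    }

    public static void main(String[] args) {
        System.out.println(rejects("POST", 20_000_000, 10_000_000)); // over the limit
        System.out.println(rejects("GET", 20_000_000, 10_000_000));  // method not checked
    }
}
```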
[GitHub] [nifi] alopresto commented on a change in pull request #4125: NIFI-7153 Adds ContentLengthFilter and DoSFilter
alopresto commented on a change in pull request #4125: NIFI-7153 Adds ContentLengthFilter and DoSFilter URL: https://github.com/apache/nifi/pull/4125#discussion_r390698146 ## File path: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-security/src/main/java/org/apache/nifi/web/security/request/ContentLengthFilter.java ## @@ -0,0 +1,143 @@ (quoted diff identical to the r390697974 comment above; snipped) +logger.info("Content length check rejected request with content-length {} greater than maximum {}", contentLength, maxContentLength); Review comment: And this should probably be `warn`.
[GitHub] [nifi] alopresto commented on a change in pull request #4125: NIFI-7153 Adds ContentLengthFilter and DoSFilter
alopresto commented on a change in pull request #4125: NIFI-7153 Adds ContentLengthFilter and DoSFilter URL: https://github.com/apache/nifi/pull/4125#discussion_r390697821 ## File path: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-security/src/main/java/org/apache/nifi/web/security/request/ContentLengthFilter.java ## @@ -0,0 +1,143 @@ (quoted diff identical to the r390697974 comment above; snipped) +logger.info("No length check of request with method {} and maximum {}", httpMethod, maxContentLength); Review comment: Should probably add units to these log statements for clarity.
[GitHub] [nifi] alopresto commented on a change in pull request #4125: NIFI-7153 Adds ContentLengthFilter and DoSFilter
alopresto commented on a change in pull request #4125: NIFI-7153 Adds ContentLengthFilter and DoSFilter URL: https://github.com/apache/nifi/pull/4125#discussion_r390697808 ## File path: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-security/src/main/java/org/apache/nifi/web/security/request/ContentLengthFilter.java ## @@ -0,0 +1,143 @@ (quoted diff identical to the r390697974 comment above; snipped) +logger.info("Content length check rejected request with content-length {} greater than maximum {}", contentLength, maxContentLength); Review comment: Should probably add units to these log statements for clarity.
[GitHub] [nifi] alopresto commented on a change in pull request #4125: NIFI-7153 Adds ContentLengthFilter and DoSFilter
alopresto commented on a change in pull request #4125: NIFI-7153 Adds ContentLengthFilter and DoSFilter URL: https://github.com/apache/nifi/pull/4125#discussion_r390697644 ## File path: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-security/src/main/java/org/apache/nifi/web/security/request/ContentLengthFilter.java ## @@ -0,0 +1,143 @@ (quoted diff identical to the r390697974 comment above; snipped) +logger.info("No length check of request with method {} and maximum {}", httpMethod, maxContentLength); Review comment: I think this should be set to `debug` so it doesn't flood the logs.
[GitHub] [nifi] alopresto commented on a change in pull request #4125: NIFI-7153 Adds ContentLengthFilter and DoSFilter
alopresto commented on a change in pull request #4125: NIFI-7153 Adds ContentLengthFilter and DoSFilter URL: https://github.com/apache/nifi/pull/4125#discussion_r390689380 ## File path: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-security/src/main/java/org/apache/nifi/web/security/request/ContentLengthFilter.java ## @@ -0,0 +1,143 @@ (quoted diff identical to the r390697974 comment above; snipped) +boolean willReadInputStream = maxContentLength > 0 && (httpMethod.equalsIgnoreCase("POST") || httpMethod.equalsIgnoreCase("PUT")); Review comment: Does this variable actually determine if the full input stream will be read or just if the request length needs to be examined?
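The distinction the reviewer raises here (checking the declared Content-Length vs. capping the bytes actually read) can be illustrated with a small stream wrapper. This is a hypothetical sketch, not the NiFi implementation; it enforces the limit at read time, which is what a request-wrapping filter would rely on:

```java
import java.io.ByteArrayInputStream;
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;

// Hypothetical wrapper: fails once more than `max` bytes have been read,
// regardless of what Content-Length the client declared.
class LimitedInputStream extends FilterInputStream {
    private final long max;
    private long count;

    LimitedInputStream(InputStream in, long max) {
        super(in);
        this.max = max;
    }

    @Override
    public int read() throws IOException {
        int b = super.read();
        // Count only bytes actually delivered; -1 (end of stream) is free.
        if (b != -1 && ++count > max) {
            throw new IOException("Request body exceeded maximum of " + max + " B");
        }
        return b;
    }

    public static void main(String[] args) {
        InputStream in = new LimitedInputStream(new ByteArrayInputStream(new byte[100]), 10);
        try {
            while (in.read() != -1) { /* drain */ }
            System.out.println("read fully");
        } catch (IOException e) {
            System.out.println("capped: " + e.getMessage());
        }
    }
}
```

A header-only check trusts the client; a wrapper like this is what actually bounds the read.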
[GitHub] [nifi] alopresto commented on a change in pull request #4125: NIFI-7153 Adds ContentLengthFilter and DoSFilter
alopresto commented on a change in pull request #4125: NIFI-7153 Adds ContentLengthFilter and DoSFilter URL: https://github.com/apache/nifi/pull/4125#discussion_r390688868 ## File path: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-security/src/main/java/org/apache/nifi/web/security/request/ContentLengthFilter.java ## @@ -0,0 +1,143 @@ (quoted diff identical to the r390697974 comment above; snipped) +// Check the HTTP method because the spec says clients don't have to send a content-length header for methods Review comment: This is a good and helpful comment.
[GitHub] [nifi] alopresto commented on a change in pull request #4125: NIFI-7153 Adds ContentLengthFilter and DoSFilter
alopresto commented on a change in pull request #4125: NIFI-7153 Adds ContentLengthFilter and DoSFilter URL: https://github.com/apache/nifi/pull/4125#discussion_r390686447 ## File path: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-security/src/main/java/org/apache/nifi/web/security/request/ContentLengthFilter.java ## @@ -0,0 +1,143 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.nifi.web.security.request; + +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import javax.servlet.Filter; +import javax.servlet.FilterChain; +import javax.servlet.FilterConfig; +import javax.servlet.ReadListener; +import javax.servlet.ServletException; +import javax.servlet.ServletInputStream; +import javax.servlet.ServletRequest; +import javax.servlet.ServletResponse; +import javax.servlet.http.HttpServletRequest; +import javax.servlet.http.HttpServletRequestWrapper; +import javax.servlet.http.HttpServletResponse; +import java.io.IOException; + +/** + * This {@link Filter} rejects HTTP requests that exceed a specific, maximum size. 
+ */ +public class ContentLengthFilter implements Filter { +private static final Logger logger = LoggerFactory.getLogger(ContentLengthFilter.class); +public final static String MAX_LENGTH_INIT_PARAM = "maxContentLength"; +public final static int MAX_LENGTH_DEFAULT = 10_000_000; +private int maxContentLength; + +public ContentLengthFilter() { +maxContentLength = MAX_LENGTH_DEFAULT; +} + +public ContentLengthFilter(int maxLength) { +maxContentLength = maxLength; +} + +@Override +public void init(FilterConfig config) throws ServletException { +String maxLength = config.getInitParameter(MAX_LENGTH_INIT_PARAM); +int length = maxLength == null ? MAX_LENGTH_DEFAULT : Integer.parseInt(maxLength); +if (length < 0) { +throw new ServletException("Invalid max request length."); +} +maxContentLength = length; +logger.info("Max content length set: " + maxLength + "b"); Review comment: Is the max content length in _bytes_ (`B`) or _bits_ (`b`)? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [nifi] alopresto commented on a change in pull request #4125: NIFI-7153 Adds ContentLengthFilter and DoSFilter
alopresto commented on a change in pull request #4125: NIFI-7153 Adds ContentLengthFilter and DoSFilter URL: https://github.com/apache/nifi/pull/4125#discussion_r390686173 ## File path: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-security/src/main/java/org/apache/nifi/web/security/request/ContentLengthFilter.java ## @@ -0,0 +1,143 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.nifi.web.security.request; + +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import javax.servlet.Filter; +import javax.servlet.FilterChain; +import javax.servlet.FilterConfig; +import javax.servlet.ReadListener; +import javax.servlet.ServletException; +import javax.servlet.ServletInputStream; +import javax.servlet.ServletRequest; +import javax.servlet.ServletResponse; +import javax.servlet.http.HttpServletRequest; +import javax.servlet.http.HttpServletRequestWrapper; +import javax.servlet.http.HttpServletResponse; +import java.io.IOException; + +/** + * This {@link Filter} rejects HTTP requests that exceed a specific, maximum size. 
+ */ +public class ContentLengthFilter implements Filter { +private static final Logger logger = LoggerFactory.getLogger(ContentLengthFilter.class); +public final static String MAX_LENGTH_INIT_PARAM = "maxContentLength"; +public final static int MAX_LENGTH_DEFAULT = 10_000_000; Review comment: If the default is defined in `nifi.properties` I don't think it should be repeated here. If it needs to be changed in the future, it will likely lead to confusion. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [nifi] alopresto commented on a change in pull request #4125: NIFI-7153 Adds ContentLengthFilter and DoSFilter
alopresto commented on a change in pull request #4125: NIFI-7153 Adds ContentLengthFilter and DoSFilter URL: https://github.com/apache/nifi/pull/4125#discussion_r390685901 ## File path: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-jetty/src/main/resources/org/apache/nifi/web/webdefault.xml ## @@ -551,6 +551,24 @@ TRACE - + Review comment: I agree with @mcgilman 's question about why we can't define and associate this filter in the `JettyServer` class for consistency. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [nifi] alopresto commented on a change in pull request #4125: NIFI-7153 Adds ContentLengthFilter and DoSFilter
alopresto commented on a change in pull request #4125: NIFI-7153 Adds ContentLengthFilter and DoSFilter URL: https://github.com/apache/nifi/pull/4125#discussion_r390685310 ## File path: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-jetty/src/main/java/org/apache/nifi/web/server/JettyServer.java ## @@ -607,6 +612,25 @@ private WebAppContext loadWar(final File warFile, final String contextPath, fina return webappContext; } +private void addContentLengthFilter(String contextPath, String pathSpec, WebAppContext webappContext) { +final FilterHolder holder = new FilterHolder(ContentLengthFilter.class); +final Map largePaths = props.getWebMaxContentSizeLargePaths(); +int size; + +if (!largePaths.containsValue(contextPath)) { +size = DataUnit.parseDataSize(props.getWebMaxContentSize(), DataUnit.B).intValue(); +} else { +size = DataUnit.parseDataSize(props.getWebMaxContentSizeLarge(), DataUnit.B).intValue(); +} +holder.setInitParameters(new HashMap() {{ +put("maxContentLength", String.valueOf(size)); +}}); + +logger.info("Adding Content Length Filter to context at pathSpec: " + contextPath + " with max size: " + size + "b"); +// NB: the use of pathSpec vs contextPath Review comment: What is the difference between `pathSpec` and `contextPath` here? It appears from the calling method that `pathSpec` will always be `ALL_PATHS` -- is it necessary to parameterize? How does the `contextPath` hierarchy work here when multiple values are present in `largePaths`? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [nifi] alopresto commented on a change in pull request #4125: NIFI-7153 Adds ContentLengthFilter and DoSFilter
alopresto commented on a change in pull request #4125: NIFI-7153 Adds ContentLengthFilter and DoSFilter URL: https://github.com/apache/nifi/pull/4125#discussion_r390685093 ## File path: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-jetty/src/main/java/org/apache/nifi/web/server/JettyServer.java ## @@ -607,6 +612,25 @@ private WebAppContext loadWar(final File warFile, final String contextPath, fina return webappContext; } +private void addContentLengthFilter(String contextPath, String pathSpec, WebAppContext webappContext) { +final FilterHolder holder = new FilterHolder(ContentLengthFilter.class); +final Map largePaths = props.getWebMaxContentSizeLargePaths(); +int size; + +if (!largePaths.containsValue(contextPath)) { +size = DataUnit.parseDataSize(props.getWebMaxContentSize(), DataUnit.B).intValue(); Review comment: What happens if the max content size is not parseable as a data unit? In the unit tests, these properties accepted "size value" and "large size value" without complaint, so I don't believe there's any internal validation occurring. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
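The reviewer's concern is that an unparseable size value passes through without complaint. The fix is to fail loudly at parse time; here is a minimal stand-in for a `parseDataSize`-style helper (the unit table and function name are assumptions for illustration, not NiFi's `DataUnit` API):

```cpp
#include <cassert>
#include <map>
#include <sstream>
#include <stdexcept>
#include <string>

// Hypothetical data-size parser: "20 MB" -> 20000000 bytes.
// Unlike a lenient parser, it throws on anything it cannot interpret,
// so a bad property value is caught at startup rather than ignored.
long long parseDataSize(const std::string& text) {
    static const std::map<std::string, long long> units = {
        {"B", 1LL}, {"KB", 1000LL}, {"MB", 1000000LL}, {"GB", 1000000000LL}};
    std::istringstream in(text);
    long long value = 0;
    std::string unit;
    if (!(in >> value >> unit) || value < 0) {
        throw std::invalid_argument("not a valid data size: " + text);
    }
    const auto it = units.find(unit);
    if (it == units.end()) {
        throw std::invalid_argument("unknown unit in: " + text);
    }
    return value * it->second;
}
```

With this shape, a property set to "size value" (as in the unit tests the reviewer mentions) would raise at startup instead of being silently accepted.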
[GitHub] [nifi-minifi-cpp] msharee9 commented on issue #734: MINIFICPP-1157 Implement lightweight C2 heartbeat.
msharee9 commented on issue #734: MINIFICPP-1157 Implement lightweight C2 heartbeat. URL: https://github.com/apache/nifi-minifi-cpp/pull/734#issuecomment-597372007 Closed this PR in favor of https://github.com/apache/nifi-minifi-cpp/pull/743
[GitHub] [nifi-minifi-cpp] msharee9 closed pull request #734: MINIFICPP-1157 Implement lightweight C2 heartbeat.
msharee9 closed pull request #734: MINIFICPP-1157 Implement lightweight C2 heartbeat. URL: https://github.com/apache/nifi-minifi-cpp/pull/734
[GitHub] [nifi-minifi-cpp] msharee9 commented on a change in pull request #743: Minificpp 1169 - Simplify C2 metrics collection and reporting
msharee9 commented on a change in pull request #743: Minificpp 1169 - Simplify C2 metrics collection and reporting URL: https://github.com/apache/nifi-minifi-cpp/pull/743#discussion_r390669052 ## File path: extensions/http-curl/tests/HTTPHandlers.h ## @@ -343,4 +345,104 @@ class DeleteTransactionResponder : public CivetHandler { std::string response_code; }; +class HeartbeatHandler : public CivetHandler { + public: + explicit HeartbeatHandler(bool isSecure) + : isSecure(isSecure) { + } + + std::string readPost(struct mg_connection *conn) { +std::string response; +int blockSize = 1024 * sizeof(char), readBytes; Review comment: Not sure why, but makes sense to simplify this. We are just reading 1024 bytes from the connection in each call. The variable blockSize is extraneous. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
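The simplification being agreed on — drop the extraneous `blockSize` variable and just read fixed-size chunks until the connection is drained — looks roughly like this. `mg_read` is stood in for by `std::istream::read` purely so the sketch is self-contained:

```cpp
#include <cassert>
#include <cstddef>
#include <istream>
#include <sstream>
#include <string>

// Read an entire stream in 1024-byte chunks, appending each chunk to the
// result. sizeof(buffer) replaces the separate blockSize variable, and the
// final partial chunk (gcount() > 0 after a failed read) is still appended.
std::string readAll(std::istream& in) {
    std::string out;
    char buffer[1024];
    while (in.read(buffer, sizeof(buffer)) || in.gcount() > 0) {
        out.append(buffer, static_cast<std::size_t>(in.gcount()));
    }
    return out;
}
```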
[GitHub] [nifi] alopresto commented on issue #4125: NIFI-7153 Adds ContentLengthFilter and DoSFilter
alopresto commented on issue #4125: NIFI-7153 Adds ContentLengthFilter and DoSFilter URL: https://github.com/apache/nifi/pull/4125#issuecomment-597366292 Reviewing...
[GitHub] [nifi-minifi-cpp] msharee9 commented on a change in pull request #743: Minificpp 1169 - Simplify C2 metrics collection and reporting
msharee9 commented on a change in pull request #743: Minificpp 1169 - Simplify C2 metrics collection and reporting URL: https://github.com/apache/nifi-minifi-cpp/pull/743#discussion_r390614132 ## File path: libminifi/include/FlowController.h ## @@ -304,23 +305,23 @@ class FlowController : public core::controller::ControllerServiceProvider, publi virtual void enableAllControllerServices(); /** - * Retrieves all root response nodes from this source. - * @param metric_vector -- metrics will be placed in this vector. - * @return result of the get operation. - * 0 Success - * 1 No error condition, but cannot obtain lock in timely manner. - * -1 failure + * Retrieves metrics node + * @return metrics response node */ - virtual int16_t getResponseNodes(std::vector> _vector, uint16_t metricsClass); + virtual std::shared_ptr getMetricsNode() const; + + /** + * Retrieves root nodes configured to be included in heartbeat + * @param includeManifest -- determines if manifest is to be included + * @return a list of response nodes + */ + virtual std::vector> getHeartbeatNodes(bool includeManifest) const; Review comment: We can definitely have raw pointers in root_response_nodes_ but is there a reason not to use shared pointers? I understand we cannot use unique_ptrs here as we want to have the FlowController maintain ownership of the nodes and share it with C2Agent. But, given the choices of smart pointers we have shared_ptr is the best one we can use in this case. Having said that, I am not a strong adherent of using shared_ptr universally. In this case I think it is more of a coding guidelines/style/consistency in the code base. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
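The ownership trade-off being debated (shared_ptr vs. raw pointers for the cached response nodes) reduces to this pattern: the controller owns the cache, and any copies it hands out remain valid even if the cache is cleared. Class and member names here mirror the discussion but are illustrative, not the MiNiFi declarations:

```cpp
#include <cassert>
#include <memory>
#include <string>
#include <vector>

// The owner keeps shared_ptrs; consumers receive copies, which extend the
// nodes' lifetime. With raw pointers instead, clearing the cache would
// leave consumers dangling -- that safety is what shared_ptr buys here.
struct ResponseNode {
    std::string name;
};

struct Controller {
    std::vector<std::shared_ptr<ResponseNode>> root_response_nodes_;

    std::vector<std::shared_ptr<ResponseNode>> getHeartbeatNodes() const {
        return root_response_nodes_;  // copies share ownership with the cache
    }
};
```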
[GitHub] [nifi-minifi-cpp] msharee9 commented on a change in pull request #743: Minificpp 1169 - Simplify C2 metrics collection and reporting
msharee9 commented on a change in pull request #743: Minificpp 1169 - Simplify C2 metrics collection and reporting URL: https://github.com/apache/nifi-minifi-cpp/pull/743#discussion_r390665167 ## File path: extensions/http-curl/tests/HTTPHandlers.h ## @@ -343,4 +345,104 @@ class DeleteTransactionResponder : public CivetHandler { std::string response_code; }; +class HeartbeatHandler : public CivetHandler { + public: + explicit HeartbeatHandler(bool isSecure) + : isSecure(isSecure) { + } + + std::string readPost(struct mg_connection *conn) { +std::string response; +int blockSize = 1024 * sizeof(char), readBytes; + +char buffer[1024]; +while ((readBytes = mg_read(conn, buffer, blockSize)) > 0) { + response.append(buffer, 0, (readBytes / sizeof(char))); +} +return response; + } + + void sendStopOperation(struct mg_connection *conn) { +std::string resp = "{\"operation\" : \"heartbeat\", \"requested_operations\" : [{ \"operationid\" : 41, \"operation\" : \"stop\", \"operand\" : \"invoke\" }, " +"{ \"operationid\" : 42, \"operation\" : \"stop\", \"operand\" : \"FlowController\" } ]}"; +mg_printf(conn, "HTTP/1.1 200 OK\r\nContent-Type: " + "text/plain\r\nContent-Length: %lu\r\nConnection: close\r\n\r\n", + resp.length()); +mg_printf(conn, "%s", resp.c_str()); + } + + void sendHeartbeatResponse(const std::string& operation, const std::string& operand, const std::string& operationId, struct mg_connection * conn) { +std::string heartbeat_response = "{\"operation\" : \"heartbeat\",\"requested_operations\": [ {" + "\"operation\" : \"" + operation + "\"," + "\"operationid\" : \"" + operationId + "\"," + "\"operand\": \"" + operand + "\"}]}"; + + mg_printf(conn, "HTTP/1.1 200 OK\r\nContent-Type: " +"text/plain\r\nContent-Length: %lu\r\nConnection: close\r\n\r\n", +heartbeat_response.length()); + mg_printf(conn, "%s", heartbeat_response.c_str()); + } + + void verifyJsonHasAgentManifest(const rapidjson::Document& root) { +bool found = false; +assert(root.HasMember("agentInfo") == true); +assert(root["agentInfo"].HasMember("agentManifest") == true); +assert(root["agentInfo"]["agentManifest"].HasMember("bundles") == true); + +for (const auto& bundle : root["agentInfo"]["agentManifest"]["bundles"].GetArray()) { + assert(bundle.HasMember("artifact")); + std::string str = bundle["artifact"].GetString(); + if (str == "minifi-system") { + +std::vector<std::string> classes; +for (const auto& proc : bundle["componentManifest"]["processors"].GetArray()) { + classes.push_back(proc["type"].GetString()); +} + +auto group = minifi::BuildDescription::getClassDescriptions(str); +for (auto proc : group.processors_) { + assert(std::find(classes.begin(), classes.end(), proc.class_name_) != std::end(classes)); + found = true; +} + + } +} +assert(found == true); + } + + virtual void handleHeartbeat(const rapidjson::Document& root, struct mg_connection * conn) { +(void)conn; Review comment: Good point. Will remove the variable name from the argument list.
[GitHub] [nifi-minifi-cpp] msharee9 commented on a change in pull request #743: Minificpp 1169 - Simplify C2 metrics collection and reporting
msharee9 commented on a change in pull request #743: Minificpp 1169 - Simplify C2 metrics collection and reporting URL: https://github.com/apache/nifi-minifi-cpp/pull/743#discussion_r390664802 ## File path: extensions/http-curl/tests/HTTPHandlers.h ## @@ -343,4 +345,104 @@ class DeleteTransactionResponder : public CivetHandler { std::string response_code; }; +class HeartbeatHandler : public CivetHandler { + public: + explicit HeartbeatHandler(bool isSecure) + : isSecure(isSecure) { + } + + std::string readPost(struct mg_connection *conn) { +std::string response; +int blockSize = 1024 * sizeof(char), readBytes; + +char buffer[1024]; +while ((readBytes = mg_read(conn, buffer, blockSize)) > 0) { + response.append(buffer, 0, (readBytes / sizeof(char))); +} +return response; + } + + void sendStopOperation(struct mg_connection *conn) { +std::string resp = "{\"operation\" : \"heartbeat\", \"requested_operations\" : [{ \"operationid\" : 41, \"operation\" : \"stop\", \"operand\" : \"invoke\" }, " +"{ \"operationid\" : 42, \"operation\" : \"stop\", \"operand\" : \"FlowController\" } ]}"; Review comment: I like the neatness of using string literals however, I would rather build a json with some json library and serialize it. But I would keep it as it is for the test code. If we ever want to reuse this response string elsewhere, I would be happy to refactor this. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [nifi-minifi-cpp] bakaid commented on a change in pull request #605: MINIFICPP-550 - Implement RocksDB controller service and component st…
bakaid commented on a change in pull request #605: MINIFICPP-550 - Implement RocksDB controller service and component st… URL: https://github.com/apache/nifi-minifi-cpp/pull/605#discussion_r390660992 ## File path: libminifi/include/controllers/keyvalue/PersistableKeyValueStoreService.h ## @@ -0,0 +1,55 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +#ifndef LIBMINIFI_INCLUDE_KEYVALUE_PersistableKeyValueStoreService_H_ +#define LIBMINIFI_INCLUDE_KEYVALUE_PersistableKeyValueStoreService_H_ + +#include "KeyValueStoreService.h" +#include "AbstractCoreComponentStateManagerProvider.h" +#include "core/Core.h" +#include "properties/Configure.h" + +#include + +namespace org { +namespace apache { +namespace nifi { +namespace minifi { +namespace controllers { + +class PersistableKeyValueStoreService : virtual public KeyValueStoreService, public AbstractCoreComponentStateManagerProvider { Review comment: `PersistableKeyValueStoreService` is just an interface extension of `KeyValueStoreService`, and forcefully wrapping a `PersistableKeyValueStoreService` into some class that provides the `CoreComponentStateManagerProvider` interface after loading the `PersistableKeyValueStoreService` with the classloader would make it impossible to implement and use a `CoreComponentStateManagerProvider` directly. This way everything "just works": if you implement a `PersistableKeyValueStoreService` you automatically have a `CoreComponentStateManagerProvider`, but you can implement a `CoreComponentStateManagerProvider` directly. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
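The hierarchy bakaid describes — an interface extension that also inherits a mixin, combined with virtual inheritance so any implementation automatically provides both interfaces — can be reduced to a few lines. Class bodies are stripped to the minimum and are not the real MiNiFi declarations:

```cpp
#include <cassert>

// Interface extension with a virtual base: implementing the extended
// service yields a state-manager provider "for free", while a provider
// can still be implemented directly without going through the service.
struct KeyValueStoreService {
    virtual ~KeyValueStoreService() = default;
};

struct CoreComponentStateManagerProvider {
    virtual ~CoreComponentStateManagerProvider() = default;
    virtual bool persist() = 0;
};

struct PersistableKeyValueStoreService : virtual KeyValueStoreService,
                                         CoreComponentStateManagerProvider {};

// A concrete store only implements the one pure virtual; it is usable
// through either base interface.
struct RocksDbStateStorage : PersistableKeyValueStoreService {
    bool persist() override { return true; }
};
```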
[GitHub] [nifi] MuazmaZ commented on issue #4126: NIFI-7103 Adding PutAzureDataLakeStorage Processor to provide native support for Azure Data Lake Storage Gen 2
MuazmaZ commented on issue #4126: NIFI-7103 Adding PutAzureDataLakeStorage Processor to provide native support for Azure Data Lake Storage Gen 2 URL: https://github.com/apache/nifi/pull/4126#issuecomment-597337355 Thanks @sburges for the feedback; the indentation is fixed now.
[jira] [Updated] (NIFI-7239) Upgrade the Hive 3 bundle to use Apache Hive 3.1.2
[ https://issues.apache.org/jira/browse/NIFI-7239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Turcsanyi updated NIFI-7239: -- Status: Patch Available (was: In Progress) > Upgrade the Hive 3 bundle to use Apache Hive 3.1.2 > -- > > Key: NIFI-7239 > URL: https://issues.apache.org/jira/browse/NIFI-7239 > Project: Apache NiFi > Issue Type: Improvement > Reporter: Peter Turcsanyi > Assignee: Peter Turcsanyi > Priority: Major > Time Spent: 10m > Remaining Estimate: 0h > > NiFi's Hive 3 processors currently use the Hive 3.1.0 client. > Hive 3.1.1 and 3.1.2 contain a lot of fixes. > E.g., this critical one: HIVE-20979 - Fix memory leak in hive streaming. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [nifi-minifi-cpp] msharee9 commented on a change in pull request #743: Minificpp 1169 - Simplify C2 metrics collection and reporting
msharee9 commented on a change in pull request #743: Minificpp 1169 - Simplify C2 metrics collection and reporting URL: https://github.com/apache/nifi-minifi-cpp/pull/743#discussion_r390615024 ## File path: libminifi/include/FlowController.h ## @@ -406,22 +409,27 @@ class FlowController : public core::controller::ControllerServiceProvider, publi std::chrono::steady_clock::time_point start_time_; - std::mutex metrics_mutex_; + mutable std::mutex metrics_mutex_; // root_nodes cache std::map> root_response_nodes_; + // metrics cache std::map> device_information_; // metrics cache std::map> component_metrics_; std::map>> component_metrics_by_id_; + // metrics last run std::chrono::steady_clock::time_point last_metrics_capture_; private: std::shared_ptr logger_; std::string serial_number_; + + std::shared_ptr c2_agent_; Review comment: You have a valid point. Made it a unique_ptr This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
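The unique_ptr change agreed on above expresses sole ownership in the type itself. A minimal sketch (the member and class names mirror the discussion, but the bodies are invented):

```cpp
#include <cassert>
#include <memory>

// The controller exclusively owns its C2 agent: unique_ptr documents that
// no other component shares the agent's lifetime, and the agent is
// destroyed automatically when the controller is.
struct C2Agent {
    bool started = false;
    void start() { started = true; }
};

struct FlowController {
    std::unique_ptr<C2Agent> c2_agent_;

    void initializeC2() {
        c2_agent_.reset(new C2Agent());  // std::make_unique in C++14 and later
        c2_agent_->start();
    }
};
```

Had the member stayed a shared_ptr, nothing in the type would stop another component from extending the agent's lifetime past the controller's.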
[GitHub] [nifi-minifi-cpp] msharee9 commented on a change in pull request #743: Minificpp 1169 - Simplify C2 metrics collection and reporting
msharee9 commented on a change in pull request #743: Minificpp 1169 - Simplify C2 metrics collection and reporting URL: https://github.com/apache/nifi-minifi-cpp/pull/743#discussion_r390614132 ## File path: libminifi/include/FlowController.h ## @@ -304,23 +305,23 @@ class FlowController : public core::controller::ControllerServiceProvider, publi virtual void enableAllControllerServices(); /** - * Retrieves all root response nodes from this source. - * @param metric_vector -- metrics will be placed in this vector. - * @return result of the get operation. - * 0 Success - * 1 No error condition, but cannot obtain lock in timely manner. - * -1 failure + * Retrieves metrics node + * @return metrics response node */ - virtual int16_t getResponseNodes(std::vector> _vector, uint16_t metricsClass); + virtual std::shared_ptr getMetricsNode() const; + + /** + * Retrieves root nodes configured to be included in heartbeat + * @param includeManifest -- determines if manifest is to be included + * @return a list of response nodes + */ + virtual std::vector> getHeartbeatNodes(bool includeManifest) const; Review comment: We can definitely have raw pointers in root_response_nodes_, but is there a reason not to use shared pointers? I understand we cannot use unique_ptrs here, since we want the FlowController to maintain ownership of the nodes. But given the choices of smart pointers available, shared_ptr is the best one we can use in this case. Having said that, I am not a strong adherent of using shared_ptr universally; in this case I think it is more a matter of coding guidelines/style/consistency in the code base.
[GitHub] [nifi] ottobackwards commented on a change in pull request #4125: NIFI-7153 Adds ContentLengthFilter and DoSFilter
ottobackwards commented on a change in pull request #4125: NIFI-7153 Adds ContentLengthFilter and DoSFilter URL: https://github.com/apache/nifi/pull/4125#discussion_r390612015 ## File path: nifi-docs/src/main/asciidoc/administration-guide.adoc ## @@ -3247,6 +3247,9 @@ For example, when running in a Docker container or behind a proxy (e.g. localhos host[:port] that NiFi is bound to. |`nifi.web.proxy.context.path`|A comma separated list of allowed HTTP X-ProxyContextPath, X-Forwarded-Context, or X-Forwarded-Prefix header values to consider. By default, this value is blank meaning all requests containing a proxy context path are rejected. Configuring this property would allow requests where the proxy path is contained in this listing. +|`nifi.web.max.content.size`|The maximum size for regular PUT and POST requests. The default value is `10 MB`. Review comment: Ok, looking again I understand what path is, sorry.
[GitHub] [nifi] mcgilman commented on issue #4125: NIFI-7153 Adds ContentLengthFilter and DoSFilter
mcgilman commented on issue #4125: NIFI-7153 Adds ContentLengthFilter and DoSFilter URL: https://github.com/apache/nifi/pull/4125#issuecomment-597299639 Thanks for the PR @natural! In addition to @ottobackwards's comment above..., more clarification about the new feature may be helpful. A couple of comments from my review. It looks like the path property allows the nifi admin to set which paths should use the larger limits, and these limits are set per context path. Is this too broad? Should the configurable paths be more specific, since there are some endpoints that expect potentially large payloads? It's probably ok if not initially, as I can appreciate the motivation here. The nifi admin can just ensure that the large value is enough to cover those potentially large payloads. What are your thoughts on configuring the `DoSFilter` in `JettyServer` as well? My only concern would be maintainability of the codebase going forward and whether folks would know to find that configuration in `webdefault.xml` in the future.
[jira] [Commented] (NIFI-7176) Add support expression language timeout value for InvokeHTTP processor
[ https://issues.apache.org/jira/browse/NIFI-7176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17056341#comment-17056341 ] Güven Cenan Güvenal commented on NIFI-7176: --- I opened PR #4131 > Add support expression language timeout value for InvokeHTTP processor > -- > > Key: NIFI-7176 > URL: https://issues.apache.org/jira/browse/NIFI-7176 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions > Reporter: Güven Cenan Güvenal > Assignee: Güven Cenan Güvenal > Priority: Minor > Time Spent: 10m > Remaining Estimate: 0h > > Hi, > I think InvokeHTTP should support expression language for the timeout properties because each > request can have a different timeout.
[GitHub] [nifi] guvencenanguvenal opened a new pull request #4131: NIFI-7176 added InvokeHTTP support parametric READ.TIMEOUT and CONNEC…
guvencenanguvenal opened a new pull request #4131: NIFI-7176 added InvokeHTTP support parametric READ.TIMEOUT and CONNEC… URL: https://github.com/apache/nifi/pull/4131 InvokeHTTP now supports expression evaluation for its timeout parameters. NIFI-7176
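NiFi expresses timeouts as time-period strings such as `5 secs`, so once expression language is evaluated per FlowFile the resulting string still has to become a duration (in a real processor this would go through NiFi's own property-value conversion, e.g. `asTimePeriod`, not hand-rolled code). As a rough, hypothetical illustration of that conversion step (the `TimePeriod` class and its limited unit list are invented for this sketch):

```java
import java.util.concurrent.TimeUnit;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical sketch: turn an already-evaluated time-period string
// (e.g. the per-FlowFile result of an expression like "${http.timeout}")
// into milliseconds. This is NOT NiFi's actual parser.
public class TimePeriod {
    private static final Pattern PERIOD =
            Pattern.compile("(\\d+)\\s*(ms|millis|sec|secs|min|mins)");

    public static long toMillis(final String period) {
        final Matcher m = PERIOD.matcher(period.trim().toLowerCase());
        if (!m.matches()) {
            throw new IllegalArgumentException("Unparseable time period: " + period);
        }
        final long value = Long.parseLong(m.group(1));
        switch (m.group(2)) {
            case "ms":
            case "millis":
                return value;
            case "sec":
            case "secs":
                return TimeUnit.SECONDS.toMillis(value);
            default: // min, mins
                return TimeUnit.MINUTES.toMillis(value);
        }
    }

    public static void main(String[] args) {
        System.out.println(TimePeriod.toMillis("5 secs")); // 5000
    }
}
```

The point of the feature is simply that this conversion now happens per request, after attribute expressions are evaluated, rather than once at processor scheduling time.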
[GitHub] [nifi] anaylor opened a new pull request #4130: NIFI-6235 - Prioritizing standard content war loading order
anaylor opened a new pull request #4130: NIFI-6235 - Prioritizing standard content war loading order URL: https://github.com/apache/nifi/pull/4130 Thank you for submitting a contribution to Apache NiFi. Please provide a short description of the PR here: Description of PR _Adds standard content viewer wars to a prioritized list to be loaded before any custom content viewers. This fixes [NIFI-6235 JettyServer Loads content viewers in arbitrary order](https://issues.apache.org/jira/browse/NIFI-6235)_ ### For all changes: - [x] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? - [x] Does your PR title start with **NIFI-** where is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character. - [x] Has your PR been rebased against the latest commit within the target branch (typically `master`)? - [x] Is your initial contribution a single, squashed commit? _Additional commits in response to PR reviewer feedback should be made on this branch and pushed to allow change tracking. Do not `squash` or use `--force` when pushing to allow for clean monitoring of changes._ ### For code changes: - [x] Have you ensured that the full suite of tests is executed via `mvn -Pcontrib-check clean install` at the root `nifi` folder? - [x] Have you written or updated unit tests to verify your changes? - [ ] Have you verified that the full build is successful on both JDK 8 and JDK 11? - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the `LICENSE` file, including the main `LICENSE` file under `nifi-assembly`? - [ ] If applicable, have you updated the `NOTICE` file, including the main `NOTICE` file found under `nifi-assembly`? 
- [ ] If adding new Properties, have you added `.displayName` in addition to .name (programmatic access) for each of the new properties? ### For documentation related changes: - [ ] Have you ensured that format looks appropriate for the output in which it is rendered? ### Note: Please ensure that once the PR is submitted, you check travis-ci for build issues and submit an update to your PR as soon as possible.
[GitHub] [nifi] sburges commented on issue #4126: NIFI-7103 Adding PutAzureDataLakeStorage Processor to provide native support for Azure Data Lake Storage Gen 2
sburges commented on issue #4126: NIFI-7103 Adding PutAzureDataLakeStorage Processor to provide native support for Azure Data Lake Storage Gen 2 URL: https://github.com/apache/nifi/pull/4126#issuecomment-597270803 Functionally this looks good; I built locally and tested against an Azure Data Lake account in Azure. The success and error cases work as expected. @MuazmaZ One small thing: it looks like something happened to the indentation in the AbstractAzureDataLakeStorageProcessor class. It would be good to clean that up for readability.
[GitHub] [nifi] tpalfy commented on a change in pull request #4123: NIFI-7188: Adding filter capabilities into search & prerequisite refactors
tpalfy commented on a change in pull request #4123: NIFI-7188: Adding filter capabilities into search & prerequisite refactors URL: https://github.com/apache/nifi/pull/4123#discussion_r390355100 ## File path: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-api/src/main/java/org/apache/nifi/web/controller/ControllerSearchService.java ## @@ -16,564 +16,150 @@ */ package org.apache.nifi.web.controller; -import org.apache.commons.collections4.CollectionUtils; -import org.apache.commons.lang3.StringUtils; import org.apache.nifi.authorization.Authorizer; import org.apache.nifi.authorization.RequestAction; +import org.apache.nifi.authorization.resource.Authorizable; import org.apache.nifi.authorization.user.NiFiUser; -import org.apache.nifi.authorization.user.NiFiUserUtils; -import org.apache.nifi.components.PropertyDescriptor; -import org.apache.nifi.components.validation.ValidationStatus; -import org.apache.nifi.connectable.Connectable; import org.apache.nifi.connectable.Connection; import org.apache.nifi.connectable.Funnel; import org.apache.nifi.connectable.Port; import org.apache.nifi.controller.FlowController; import org.apache.nifi.controller.ProcessorNode; -import org.apache.nifi.controller.ScheduledState; import org.apache.nifi.controller.label.Label; -import org.apache.nifi.controller.queue.FlowFileQueue; -import org.apache.nifi.flowfile.FlowFilePrioritizer; import org.apache.nifi.groups.ProcessGroup; import org.apache.nifi.groups.RemoteProcessGroup; -import org.apache.nifi.nar.NarCloseable; import org.apache.nifi.parameter.Parameter; import org.apache.nifi.parameter.ParameterContext; -import org.apache.nifi.parameter.ParameterContextManager; -import org.apache.nifi.processor.DataUnit; -import org.apache.nifi.processor.Processor; -import org.apache.nifi.processor.Relationship; -import org.apache.nifi.registry.ComponentVariableRegistry; -import org.apache.nifi.registry.VariableDescriptor; -import org.apache.nifi.registry.VariableRegistry; 
-import org.apache.nifi.remote.PublicPort; -import org.apache.nifi.scheduling.ExecutionNode; -import org.apache.nifi.scheduling.SchedulingStrategy; -import org.apache.nifi.search.SearchContext; -import org.apache.nifi.search.SearchResult; -import org.apache.nifi.search.Searchable; import org.apache.nifi.web.api.dto.search.ComponentSearchResultDTO; import org.apache.nifi.web.api.dto.search.SearchResultGroupDTO; import org.apache.nifi.web.api.dto.search.SearchResultsDTO; +import org.apache.nifi.web.search.ComponentMatcher; +import org.apache.nifi.web.search.MatchEnriching; +import org.apache.nifi.web.search.query.SearchQuery; -import java.util.ArrayList; import java.util.Collection; +import java.util.Collections; +import java.util.LinkedList; import java.util.List; -import java.util.Map; +import java.util.Optional; import java.util.Set; -import java.util.concurrent.TimeUnit; +import java.util.stream.Collectors; /** * NiFi web controller's helper service that implements component search. */ public class ControllerSearchService { +private final static String FILTER_NAME_GROUP = "group"; +private final static String FILTER_NAME_SCOPE = "scope"; +private final static String FILTER_SCOPE_VALUE_HERE = "here"; + private FlowController flowController; private Authorizer authorizer; -private VariableRegistry variableRegistry; - -/** - * Searches term in the controller beginning from a given process group. 
- * - * @param results Search results - * @param search The search term - * @param group The init process group - */ -public void search(final SearchResultsDTO results, final String search, final ProcessGroup group) { -final NiFiUser user = NiFiUserUtils.getNiFiUser(); - -if (group.isAuthorized(authorizer, RequestAction.READ, user)) { -final ComponentSearchResultDTO groupMatch = search(search, group); -if (groupMatch != null) { -// get the parent group, not the current one -groupMatch.setParentGroup(buildResultGroup(group.getParent(), user)); - groupMatch.setVersionedGroup(buildVersionedGroup(group.getParent(), user)); -results.getProcessGroupResults().add(groupMatch); -} -} - -for (final ProcessorNode procNode : group.getProcessors()) { -if (procNode.isAuthorized(authorizer, RequestAction.READ, user)) { -final ComponentSearchResultDTO match = search(search, procNode); -if (match != null) { -match.setGroupId(group.getIdentifier()); -match.setParentGroup(buildResultGroup(group, user)); -match.setVersionedGroup(buildVersionedGroup(group, user)); -results.getProcessorResults().add(match); -} -} -} - -for (final Connection connection :
[GitHub] [nifi] tpalfy commented on a change in pull request #4123: NIFI-7188: Adding filter capabilities into search & prerequisite refactors
tpalfy commented on a change in pull request #4123: NIFI-7188: Adding filter capabilities into search & prerequisite refactors URL: https://github.com/apache/nifi/pull/4123#discussion_r390380984 ## File path: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-api/src/main/java/org/apache/nifi/web/search/attributematchers/BasicAttributeMatcher.java ## @@ -0,0 +1,55 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.nifi.web.search.attributematchers; + +import org.apache.nifi.connectable.Connectable; +import org.apache.nifi.web.search.query.SearchQuery; + +import java.util.List; + +import static org.apache.nifi.web.search.attributematchers.AttributeMatcher.addIfMatching; + +public class BasicAttributeMatcher implements AttributeMatcher { Review comment: Usually it's better to avoid boolean flags (especially if they have no business logic implications). 
I'm wondering if it wouldn't be better to use polymorphism instead, like this:

```java
public class BasicAttributeMatcher<T extends Connectable> implements AttributeMatcher<T> {
    private static final String LABEL_ID = "Id";
    private static final String LABEL_VERSION_CONTROL_ID = "Version Control ID";

    @Override
    public void match(final T component, final SearchQuery query, final List<String> matches) {
        final String searchTerm = query.getTerm();
        addIfMatching(searchTerm, component.getIdentifier(), LABEL_ID, matches);
        addIfMatching(searchTerm, component.getVersionedComponentId().orElse(null), LABEL_VERSION_CONTROL_ID, matches);
    }
}

public class ExtendedAttributeMatcher<T extends Connectable> extends BasicAttributeMatcher<T> {
    private static final String LABEL_NAME = "Name";
    private static final String LABEL_COMMENTS = "Comments";

    @Override
    public void match(final T component, final SearchQuery query, final List<String> matches) {
        super.match(component, query, matches);
        final String searchTerm = query.getTerm();
        addIfMatching(searchTerm, component.getName(), LABEL_NAME, matches);
        addIfMatching(searchTerm, component.getComments(), LABEL_COMMENTS, matches);
    }
}
```
[GitHub] [nifi] tpalfy commented on a change in pull request #4123: NIFI-7188: Adding filter capabilities into search & prerequisite refactors
tpalfy commented on a change in pull request #4123: NIFI-7188: Adding filter capabilities into search & prerequisite refactors URL: https://github.com/apache/nifi/pull/4123#discussion_r390351630 ## File path: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-api/src/main/java/org/apache/nifi/web/search/ComponentMatcherFactory.java ## @@ -0,0 +1,78 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.apache.nifi.web.search; + +import org.apache.commons.lang3.StringUtils; +import org.apache.nifi.connectable.Connectable; +import org.apache.nifi.connectable.Connection; +import org.apache.nifi.groups.ProcessGroup; +import org.apache.nifi.groups.RemoteProcessGroup; +import org.apache.nifi.parameter.Parameter; +import org.apache.nifi.parameter.ParameterContext; +import org.apache.nifi.web.search.attributematchers.AttributeMatcher; + +import java.util.List; +import java.util.function.Function; +import java.util.stream.Collectors; + +public class ComponentMatcherFactory { +public ComponentMatcher getInstanceForConnectable(final List> attributeMatchers) { +return new AttributeBasedComponentMatcher<>(attributeMatchers, component -> component.getIdentifier(), component -> component.getName()); +} + +public ComponentMatcher getInstanceForConnection(final List> attributeMatchers) { +return new AttributeBasedComponentMatcher<>(attributeMatchers, component -> component.getIdentifier(), new GetConnectionName()); +} + +public ComponentMatcher getInstanceForParameter(final List> attributeMatchers) { +return new AttributeBasedComponentMatcher<>(attributeMatchers, component -> component.getDescriptor().getName(), component -> component.getDescriptor().getName()); +} + +public ComponentMatcher getInstanceForParameterContext(final List> attributeMatchers) { +return new AttributeBasedComponentMatcher<>(attributeMatchers, component -> component.getIdentifier(), component -> component.getName()); +} + +public ComponentMatcher getInstanceForProcessGroup(final List> attributeMatchers) { +return new AttributeBasedComponentMatcher<>(attributeMatchers, component -> component.getIdentifier(), component -> component.getName()); +} + +public ComponentMatcher getInstanceForRemoteProcessGroup(final List> attributeMatchers) { +return new AttributeBasedComponentMatcher<>(attributeMatchers, component -> component.getIdentifier(), component -> component.getName()); +} + +private 
static class GetConnectionName implements Function { private static final String DEFAULT_NAME_PREFIX = "From source "; private static final String SEPARATOR = ", "; public String apply(final Connection component) { String result = null; if (StringUtils.isNotBlank(component.getName())) { result = component.getName(); } else if (!component.getRelationships().isEmpty()) { result = component.getRelationships().stream() // Review comment: Maybe slightly simpler:

```java
result = connection.getRelationships().stream()
        .map(Relationship::getName)
        .filter(StringUtils::isNotBlank)
        .collect(Collectors.joining(SEPARATOR));
```
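The reviewer's `Collectors.joining` suggestion can be tried in isolation. A minimal, self-contained sketch (the `joinNonBlank` helper is hypothetical, standing in for the connection-name logic, with a plain predicate in place of `StringUtils::isNotBlank`):

```java
import java.util.List;
import java.util.stream.Collectors;

public class JoinDemo {
    // Join the non-blank names with a separator, mirroring the suggested
    // map -> filter -> Collectors.joining pipeline for relationship names.
    static String joinNonBlank(final List<String> names, final String separator) {
        return names.stream()
                .filter(s -> s != null && !s.trim().isEmpty())
                .collect(Collectors.joining(separator));
    }

    public static void main(String[] args) {
        System.out.println(joinNonBlank(List.of("success", "", "failure"), ", "));
        // success, failure
    }
}
```

One property worth noting: `Collectors.joining` yields an empty string when every element is filtered out, so the caller can still fall back to a default name such as `DEFAULT_NAME_PREFIX` plus the source id.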
[GitHub] [nifi] tpalfy commented on a change in pull request #4123: NIFI-7188: Adding filter capabilities into search & prerequisite refactors
tpalfy commented on a change in pull request #4123: NIFI-7188: Adding filter capabilities into search & prerequisite refactors URL: https://github.com/apache/nifi/pull/4123#discussion_r390475982 ## File path: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-api/src/main/java/org/apache/nifi/web/search/MatchEnriching.java ## @@ -0,0 +1,56 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.apache.nifi.web.search; + +import org.apache.nifi.web.api.dto.search.ComponentSearchResultDTO; +import org.apache.nifi.web.api.dto.search.SearchResultGroupDTO; + +import java.util.Optional; +import java.util.function.Function; + +public class MatchEnriching implements Function { +private final Optional groupIdentifier; +private final Optional parentGroup; +private final Optional versionedGroup; + +public MatchEnriching(final String groupIdentifier, final SearchResultGroupDTO parentGroup, final SearchResultGroupDTO versionedGroup) { +this(Optional.ofNullable(groupIdentifier), Optional.ofNullable(parentGroup), Optional.ofNullable(versionedGroup)); +} + +public MatchEnriching(final Optional groupIdentifier, final Optional parentGroup, final Optional versionedGroup) { +this.groupIdentifier = groupIdentifier; +this.parentGroup = parentGroup; +this.versionedGroup = versionedGroup; +} + +@Override +public ComponentSearchResultDTO apply(final ComponentSearchResultDTO match) { + Review comment: Minor suggestion: maybe a bit simpler:

```java
groupIdentifier.ifPresent(match::setGroupId);
parentGroup.ifPresent(match::setParentGroup);
versionedGroup.ifPresent(match::setVersionedGroup);
```
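The suggestion above relies on `Optional.ifPresent` accepting a method reference to a setter, which replaces an explicit `isPresent()`/`get()` branch. A minimal, self-contained sketch (the `Result` class is a hypothetical stand-in for `ComponentSearchResultDTO`):

```java
import java.util.Optional;

public class EnrichDemo {
    // Hypothetical stand-in for ComponentSearchResultDTO with just the
    // one field the suggestion touches.
    static class Result {
        String groupId;
        void setGroupId(final String id) { this.groupId = id; }
    }

    static Result enrich(final Result match, final Optional<String> groupIdentifier) {
        // ifPresent + method reference: the setter only runs when a value exists.
        groupIdentifier.ifPresent(match::setGroupId);
        return match;
    }

    public static void main(String[] args) {
        System.out.println(enrich(new Result(), Optional.of("g1")).groupId); // g1
    }
}
```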
[GitHub] [nifi] tpalfy commented on a change in pull request #4123: NIFI-7188: Adding filter capabilities into search & prerequisite refactors
tpalfy commented on a change in pull request #4123: NIFI-7188: Adding filter capabilities into search & prerequisite refactors URL: https://github.com/apache/nifi/pull/4123#discussion_r390455730 ## File path: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-api/src/main/java/org/apache/nifi/web/controller/ControllerFacade.java ## @@ -147,8 +148,8 @@ // properties private NiFiProperties properties; private DtoFactory dtoFactory; -private VariableRegistry variableRegistry; private ControllerSearchService controllerSearchService; +private SearchQueryParser searchQueryParser; Review comment: As far as responsibilities go, `SearchQueryParser` seems to be something that should be in the `ControllerSearchService` instead of the `ControllerFacade`.
[GitHub] [nifi] tpalfy commented on a change in pull request #4123: NIFI-7188: Adding filter capabilities into search & prerequisite refactors
tpalfy commented on a change in pull request #4123: NIFI-7188: Adding filter capabilities into search & prerequisite refactors URL: https://github.com/apache/nifi/pull/4123#discussion_r390513175 ## File path: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-api/src/main/java/org/apache/nifi/web/controller/ControllerSearchService.java ## @@ -16,564 +16,162 @@
[GitHub] [nifi] tpalfy commented on a change in pull request #4123: NIFI-7188: Adding filter capabilities into search & prerequisite refactors
tpalfy commented on a change in pull request #4123: NIFI-7188: Adding filter capabilities into search & prerequisite refactors URL: https://github.com/apache/nifi/pull/4123#discussion_r390517409 ## File path: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-api/src/main/java/org/apache/nifi/web/controller/ControllerSearchService.java ## @@ -16,564 +16,162 @@
[GitHub] [nifi] tpalfy commented on a change in pull request #4123: NIFI-7188: Adding filter capabilities into search & prerequisite refactors
tpalfy commented on a change in pull request #4123: NIFI-7188: Adding filter capabilities into search & prerequisite refactors URL: https://github.com/apache/nifi/pull/4123#discussion_r390533681

## File path: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-api/src/main/java/org/apache/nifi/web/controller/ControllerSearchService.java
[GitHub] [nifi] tpalfy commented on a change in pull request #4123: NIFI-7188: Adding filter capabilities into search & prerequisite refactors
tpalfy commented on a change in pull request #4123: NIFI-7188: Adding filter capabilities into search & prerequisite refactors URL: https://github.com/apache/nifi/pull/4123#discussion_r390485055

## File path: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-api/src/main/java/org/apache/nifi/web/search/MatchEnriching.java

## @@ -0,0 +1,56 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.web.search;
+
+import org.apache.nifi.web.api.dto.search.ComponentSearchResultDTO;
+import org.apache.nifi.web.api.dto.search.SearchResultGroupDTO;
+
+import java.util.Optional;
+import java.util.function.Function;
+
+public class MatchEnriching implements Function {

Review comment: This class is perfectly fine if it doesn't implement `Function`. The advantage would be that the `apply` method would be freed up and:
1. could be renamed (to `enrich`, for example)
2. would be easier to find in a call hierarchy (which really helps with finding out what it is used for)

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
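The refactor suggested above can be sketched roughly as follows. The types here (`SearchResult`, the `groupName` field) are simplified placeholders for illustration, not the actual NiFi DTOs; the point is only that a domain-specific `enrich` method replaces the generic `Function.apply`.

```java
// Hypothetical sketch of the reviewer's suggestion: drop the Function
// interface so the method can carry a meaningful name that shows up
// clearly in IDE call hierarchies.
class SearchResult {
    String matchText;

    SearchResult(String matchText) {
        this.matchText = matchText;
    }
}

class MatchEnricher {
    private final String groupName;

    MatchEnricher(String groupName) {
        this.groupName = groupName;
    }

    // Named "enrich" instead of the generic "apply".
    SearchResult enrich(SearchResult result) {
        result.matchText = groupName + " / " + result.matchText;
        return result;
    }
}
```

Callers read naturally (`enricher.enrich(result)`), and "find usages" on `enrich` returns only the relevant call sites rather than every `Function.apply` in the codebase.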
[jira] [Commented] (NIFI-7242) Parameters update not taken into account in controller services
[ https://issues.apache.org/jira/browse/NIFI-7242?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17056302#comment-17056302 ] Mark Payne commented on NIFI-7242:
--

Looking into this, it appears the behavior that is occurring is slightly different from what is laid out in the description. The Controller Service appears to be properly Disabled and Re-Enabled. However, the AvroSchemaRegistry compiles the schemas in the `onPropertyModified` method:
{code:java}
public void onPropertyModified(final PropertyDescriptor descriptor, final String oldValue, final String newValue) {
    if (descriptor.isDynamic()) {
        // Dynamic property = schema, validate it
        if (newValue == null) {
            recordSchemas.remove(descriptor.getName());
        } else {
            try {
                // Use a non-strict parser here, a strict parse can be done (if specified) in customValidate().
                final Schema avroSchema = new Schema.Parser().setValidate(false).parse(newValue);
                final SchemaIdentifier schemaId = SchemaIdentifier.builder().name(descriptor.getName()).build();
                final RecordSchema recordSchema = AvroTypeUtil.createSchema(avroSchema, newValue, schemaId);
                recordSchemas.put(descriptor.getName(), recordSchema);
            } catch (final Exception e) {
                // not a problem - the service won't be valid and the validation message will indicate what is wrong.
            }
        }
    }
}
{code}
Currently, changing the value of a Parameter does not trigger `onPropertyModified` to be called. However, I do think it should, as the value of the property is definitely changing. Will look into making these updates.
> Parameters update not taken into account in controller services
> ---
>
> Key: NIFI-7242
> URL: https://issues.apache.org/jira/browse/NIFI-7242
> Project: Apache NiFi
> Issue Type: Bug
> Components: Core Framework
> Reporter: Pierre Villard
> Assignee: Mark Payne
> Priority: Blocker
> Fix For: 1.12.0, 1.11.4
>
> Attachments: parametersIssue.xml
>
> There is a bug with the parameters when used in Controller Services:
> * when updating a parameter that is referenced in a controller service (in this case avro schema registry), changing the value of the parameter does not seem to trigger the restart of the controller service
> * even if I do restart the components manually, the old value of the parameter is still used... NiFi restart is the only way to get the new value applied
>
> With the supplied template, create a Parameter Context with schema =
> {code:java}
> {
>   "type" : "record",
>   "name" : "myData",
>   "namespace" : "myLine",
>   "fields" : [ {
>     "name" : "myField1",
>     "type" : "string"
>   } ]
> }
> {code}
> The AvroSchemaRegistry contains the schema with: schema => #{schema}
> Get everything running: output data has only one column. Then update the Parameter Context to have schema =
> {code:java}
> {
>   "type" : "record",
>   "name" : "myData",
>   "namespace" : "myLine",
>   "fields" : [ {
>     "name" : "myField1",
>     "type" : "string"
>   }, {
>     "name" : "myField2",
>     "type" : "string"
>   } ]
> }
> {code}
> Output data still has only one column when it should have two with the new schema.

-- This message was sent by Atlassian Jira (v8.3.4#803005)
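The failure mode Mark Payne describes can be illustrated in isolation: a component that derives and caches a value inside its property-modified callback will serve stale data whenever the underlying value changes without that callback firing. The class below is a simplified illustration under that assumption, not NiFi code.

```java
import java.util.HashMap;
import java.util.Map;

// Simplified illustration (not NiFi code) of the bug's mechanism: the
// component only recompiles its cached derivation when the framework
// calls onPropertyModified. If a Parameter referenced by the property
// changes without triggering that callback, the cache goes stale.
class CachingComponent {
    private final Map<String, String> compiledCache = new HashMap<>();

    // Invoked by the framework when a property's effective value changes.
    void onPropertyModified(String name, String newValue) {
        compiledCache.put(name, "compiled(" + newValue + ")");
    }

    String lookup(String name) {
        return compiledCache.get(name);
    }
}
```

If the framework fires the callback on every effective value change, including Parameter updates, the cache stays consistent; that is precisely the change proposed in the comment above.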
[GitHub] [nifi] tpalfy commented on a change in pull request #4123: NIFI-7188: Adding filter capabilities into search & prerequisite refactors
tpalfy commented on a change in pull request #4123: NIFI-7188: Adding filter capabilities into search & prerequisite refactors URL: https://github.com/apache/nifi/pull/4123#discussion_r390537947

## File path: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-api/src/main/java/org/apache/nifi/web/controller/ControllerSearchService.java
[GitHub] [nifi] tpalfy commented on a change in pull request #4123: NIFI-7188: Adding filter capabilities into search & prerequisite refactors
tpalfy commented on a change in pull request #4123: NIFI-7188: Adding filter capabilities into search & prerequisite refactors URL: https://github.com/apache/nifi/pull/4123#discussion_r390546189

## File path: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-api/src/main/java/org/apache/nifi/web/search/attributematchers/AttributeMatcher.java

## @@ -0,0 +1,38 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.web.search.attributematchers;
+
+import org.apache.commons.lang3.StringUtils;
+import org.apache.nifi.web.search.query.SearchQuery;
+
+import java.util.List;
+
+public interface AttributeMatcher<T> {
+    String SEPARATOR = ": ";
+
+    void match(T component, SearchQuery query, List<String> matches);
+
+    static void addIfMatching(final String searchTerm, final String subject, final String label, final List<String> matches) {
+        final String match = (label == null) //
+                ? subject //
+                : new StringBuilder(label).append(SEPARATOR).append(subject).toString();

Review comment: `label + ": " + subject` is perfectly fine; a `StringBuilder` will be used automatically under the hood. Its explicit use becomes important when appending in a separate block (in a `for` loop, for example).
This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
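The reviewer's point can be demonstrated with a small sketch. The class below is illustrative (its names are not from the PR): within a single expression, plain concatenation and an explicit `StringBuilder` produce the same result, because `javac` compiles single-expression `String` concatenation to a `StringBuilder` chain (or an `invokedynamic`-based concat on Java 9+); the explicit builder only pays off when appending across iterations.

```java
import java.util.List;

// Illustrative sketch: the two single-expression forms are equivalent;
// an explicit StringBuilder matters only for repeated appends in a loop,
// where '+=' on a String would allocate a new String on each pass.
class MatchFormatter {
    static final String SEPARATOR = ": ";

    static String formatExplicit(String label, String subject) {
        return new StringBuilder(label).append(SEPARATOR).append(subject).toString();
    }

    static String formatConcat(String label, String subject) {
        return label + SEPARATOR + subject; // same result, simpler to read
    }

    static String joinAll(List<String> parts) {
        StringBuilder sb = new StringBuilder();
        for (String part : parts) {
            sb.append(part).append('\n'); // one builder reused across iterations
        }
        return sb.toString();
    }
}
```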
[GitHub] [nifi] tpalfy commented on a change in pull request #4123: NIFI-7188: Adding filter capabilities into search & prerequisite refactors
tpalfy commented on a change in pull request #4123: NIFI-7188: Adding filter capabilities into search & prerequisite refactors URL: https://github.com/apache/nifi/pull/4123#discussion_r390510434

## File path: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-api/src/main/java/org/apache/nifi/web/controller/ControllerSearchService.java
[GitHub] [nifi] tpalfy commented on a change in pull request #4123: NIFI-7188: Adding filter capabilities into search & prerequisite refactors
tpalfy commented on a change in pull request #4123: NIFI-7188: Adding filter capabilities into search & prerequisite refactors URL: https://github.com/apache/nifi/pull/4123#discussion_r390406702

## File path: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-api/src/main/java/org/apache/nifi/web/search/query/RegexSearchQueryParser.java

## @@ -0,0 +1,68 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.web.search.query;
+
+import org.apache.nifi.authorization.user.NiFiUser;
+import org.apache.nifi.groups.ProcessGroup;
+
+import javax.annotation.Nonnull;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.regex.Matcher;
+import java.util.regex.Pattern;
+
+public class RegexSearchQueryParser implements SearchQueryParser {
+    private static final String REGEX = "(?(([\\w]+\\:[\\w]+[\\s]+)*)(([\\w]+\\:[\\w]+){0,1}))(([\\w]+\\:[\\w]+)|(?.*))";

Review comment: It seems this pattern can be simplified. For example, this pattern seems to cover the same cases (at least it passes all the tests):
```java
private static final String REGEX = "(?(\\w+:\\w+\\s+)*(\\w+:\\w+)?)(?.*)";
```
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
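The simplified pattern can be exercised standalone. Note the named capturing groups in the quoted diff lost their names (the `(?...)` fragments originally contained `(?<name>...)` syntax); the names below, `filters` and `term`, are placeholders chosen for illustration, and the parser class is a sketch rather than the PR's implementation.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch of the simplified pattern: a run of "key:value" filter pairs
// followed by the free-text search term. Group names are illustrative
// placeholders; the originals were lost in the quoted diff.
class QueryParserSketch {
    private static final Pattern PATTERN =
            Pattern.compile("(?<filters>(\\w+:\\w+\\s+)*(\\w+:\\w+)?)(?<term>.*)");

    // Returns {filterString, searchTerm}.
    static String[] parse(String query) {
        Matcher m = PATTERN.matcher(query);
        if (!m.matches()) {
            return new String[] {"", query};
        }
        return new String[] {m.group("filters").trim(), m.group("term").trim()};
    }
}
```

Because every subpattern is optional or repeatable, the pattern matches any input: filter pairs are greedily consumed first, and whatever remains falls into the trailing free-text group.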
[GitHub] [nifi] tpalfy commented on a change in pull request #4123: NIFI-7188: Adding filter capabilities into search & prerequisite refactors
tpalfy commented on a change in pull request #4123: NIFI-7188: Adding filter capabilities into search & prerequisite refactors URL: https://github.com/apache/nifi/pull/4123#discussion_r390370433

## File path: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-api/src/main/java/org/apache/nifi/web/controller/ControllerSearchService.java
[GitHub] [nifi] tpalfy commented on a change in pull request #4123: NIFI-7188: Adding filter capabilities into search & prerequisite refactors
tpalfy commented on a change in pull request #4123: NIFI-7188: Adding filter capabilities into search & prerequisite refactors URL: https://github.com/apache/nifi/pull/4123#discussion_r390544344

## File path: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-api/src/main/java/org/apache/nifi/web/search/attributematchers/AttributeMatcher.java

+    static void addIfMatching(final String searchTerm, final String subject, final String label, final List<String> matches) {
+        final String match = (label == null) //

Review comment: Is there a reason to start considering `label == null` a valid case?

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
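The diff above lost its generic type parameters to the archive's HTML escaping. A self-contained sketch of the `addIfMatching` helper under review, with plain String handling standing in for `SearchQuery` and `StringUtils` (both are assumptions, not NiFi's actual code), might look like:

```java
import java.util.ArrayList;
import java.util.List;

// Simplified sketch of the matching helper from the diff. The real method
// uses StringUtils and is driven by a SearchQuery; here a bare search term
// stands in for both, so only the labeling logic is illustrated.
public class AttributeMatcherSketch {
    static final String SEPARATOR = ": ";

    static void addIfMatching(final String searchTerm, final String subject,
                              final String label, final List<String> matches) {
        if (subject == null || !subject.toLowerCase().contains(searchTerm.toLowerCase())) {
            return;
        }
        // A null label reports the bare subject -- the case the reviewer questions.
        final String match = (label == null) ? subject : label + SEPARATOR + subject;
        matches.add(match);
    }

    public static void main(String[] args) {
        final List<String> matches = new ArrayList<>();
        addIfMatching("kafka", "ConsumeKafka_2_0", "Name", matches);
        addIfMatching("kafka", "GenerateFlowFile", "Name", matches);
        System.out.println(matches);
    }
}
```

Only the subject that contains the term case-insensitively is collected, prefixed by its label when one is supplied.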
[GitHub] [nifi] tpalfy commented on a change in pull request #4123: NIFI-7188: Adding filter capabilities into search & prerequisite refactors
tpalfy commented on a change in pull request #4123: NIFI-7188: Adding filter capabilities into search & prerequisite refactors URL: https://github.com/apache/nifi/pull/4123#discussion_r390493353 ## File path: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-api/src/main/java/org/apache/nifi/web/controller/ControllerSearchService.java ## @@ -16,564 +16,162 @@ */ package org.apache.nifi.web.controller; -import org.apache.commons.collections4.CollectionUtils; -import org.apache.commons.lang3.StringUtils; import org.apache.nifi.authorization.Authorizer; import org.apache.nifi.authorization.RequestAction; +import org.apache.nifi.authorization.resource.Authorizable; import org.apache.nifi.authorization.user.NiFiUser; -import org.apache.nifi.authorization.user.NiFiUserUtils; -import org.apache.nifi.components.PropertyDescriptor; -import org.apache.nifi.components.validation.ValidationStatus; -import org.apache.nifi.connectable.Connectable; import org.apache.nifi.connectable.Connection; import org.apache.nifi.connectable.Funnel; import org.apache.nifi.connectable.Port; import org.apache.nifi.controller.FlowController; import org.apache.nifi.controller.ProcessorNode; -import org.apache.nifi.controller.ScheduledState; import org.apache.nifi.controller.label.Label; -import org.apache.nifi.controller.queue.FlowFileQueue; -import org.apache.nifi.flowfile.FlowFilePrioritizer; import org.apache.nifi.groups.ProcessGroup; import org.apache.nifi.groups.RemoteProcessGroup; -import org.apache.nifi.nar.NarCloseable; import org.apache.nifi.parameter.Parameter; import org.apache.nifi.parameter.ParameterContext; -import org.apache.nifi.parameter.ParameterContextManager; -import org.apache.nifi.processor.DataUnit; -import org.apache.nifi.processor.Processor; -import org.apache.nifi.processor.Relationship; -import org.apache.nifi.registry.ComponentVariableRegistry; -import org.apache.nifi.registry.VariableDescriptor; -import org.apache.nifi.registry.VariableRegistry; 
-import org.apache.nifi.remote.PublicPort; -import org.apache.nifi.scheduling.ExecutionNode; -import org.apache.nifi.scheduling.SchedulingStrategy; -import org.apache.nifi.search.SearchContext; -import org.apache.nifi.search.SearchResult; -import org.apache.nifi.search.Searchable; import org.apache.nifi.web.api.dto.search.ComponentSearchResultDTO; import org.apache.nifi.web.api.dto.search.SearchResultGroupDTO; import org.apache.nifi.web.api.dto.search.SearchResultsDTO; +import org.apache.nifi.web.search.ComponentMatcher; +import org.apache.nifi.web.search.MatchEnriching; +import org.apache.nifi.web.search.query.SearchQuery; -import java.util.ArrayList; import java.util.Collection; +import java.util.Collections; +import java.util.LinkedList; import java.util.List; -import java.util.Map; +import java.util.Optional; import java.util.Set; -import java.util.concurrent.TimeUnit; +import java.util.stream.Collectors; /** * NiFi web controller's helper service that implements component search. */ public class ControllerSearchService { +private final static String FILTER_NAME_GROUP = "group"; +private final static String FILTER_NAME_SCOPE = "scope"; +private final static String FILTER_SCOPE_VALUE_HERE = "here"; + private FlowController flowController; private Authorizer authorizer; -private VariableRegistry variableRegistry; - -/** - * Searches term in the controller beginning from a given process group. 
- * - * @param results Search results - * @param search The search term - * @param group The init process group - */ -public void search(final SearchResultsDTO results, final String search, final ProcessGroup group) { -final NiFiUser user = NiFiUserUtils.getNiFiUser(); - -if (group.isAuthorized(authorizer, RequestAction.READ, user)) { -final ComponentSearchResultDTO groupMatch = search(search, group); -if (groupMatch != null) { -// get the parent group, not the current one -groupMatch.setParentGroup(buildResultGroup(group.getParent(), user)); - groupMatch.setVersionedGroup(buildVersionedGroup(group.getParent(), user)); -results.getProcessGroupResults().add(groupMatch); -} -} - -for (final ProcessorNode procNode : group.getProcessors()) { -if (procNode.isAuthorized(authorizer, RequestAction.READ, user)) { -final ComponentSearchResultDTO match = search(search, procNode); -if (match != null) { -match.setGroupId(group.getIdentifier()); -match.setParentGroup(buildResultGroup(group, user)); -match.setVersionedGroup(buildVersionedGroup(group, user)); -results.getProcessorResults().add(match); -} -} -} - -for (final Connection connection :
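The new `FILTER_NAME_GROUP`, `FILTER_NAME_SCOPE`, and `FILTER_SCOPE_VALUE_HERE` constants suggest a `key:value` filter syntax embedded in the search box. A hypothetical sketch of splitting such tokens from the free-text term (the actual parsing lives in `SearchQuery`, which this diff does not show):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical tokenizer for a query like "scope:here group:ingest kafka":
// key:value tokens become filters, everything else stays in the search term.
public class FilterParseSketch {
    public static void main(String[] args) {
        final String query = "scope:here group:ingest kafka";
        final Map<String, String> filters = new HashMap<>();
        final StringBuilder term = new StringBuilder();
        for (final String token : query.split("\\s+")) {
            final int colon = token.indexOf(':');
            if (colon > 0) {
                filters.put(token.substring(0, colon), token.substring(colon + 1));
            } else {
                if (term.length() > 0) {
                    term.append(' ');
                }
                term.append(token);
            }
        }
        System.out.println(filters.get("scope") + " " + filters.get("group") + " " + term);
    }
}
```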
[jira] [Assigned] (NIFI-7242) Parameters update not taken into account in controller services
[ https://issues.apache.org/jira/browse/NIFI-7242?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Payne reassigned NIFI-7242: Assignee: Mark Payne > Parameters update not taken into account in controller services > --- > > Key: NIFI-7242 > URL: https://issues.apache.org/jira/browse/NIFI-7242 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Reporter: Pierre Villard >Assignee: Mark Payne >Priority: Blocker > Fix For: 1.12.0, 1.11.4 > > Attachments: parametersIssue.xml > > > There is a bug with the parameters when used in Controller Services: > * when updating a parameter that is referenced in a controller service (in > this case avro schema registry), changing the value of the parameter does not > seem to trigger the restart of the controller service > * even if I do restart the components manually, the old value of the > parameter is still used... NiFi restart is the only way to get the new value > applied > With the supplied template, create a Parameter Context with schema = > {code:java} > { > "type" : "record", > "name" : "myData", > "namespace" : "myLine", > "fields" : [ > { > "name" : "myField1", > "type" : "string" > } > ] > } > {code} > The AvroSchemaRegistry contains the schema with: > schema => #{schema} > Get everything running: output data has only one column. Then update the > Parameter Context to have schema = > {code:java} > { > "type" : "record", > "name" : "myData", > "namespace" : "myLine", > "fields" : [ > { > "name" : "myField1", > "type" : "string" > }, { > "name" : "myField2", > "type" : "string" > } > ] > } > {code} > Output data still has only one column when it should have two with the new > schema. -- This message was sent by Atlassian Jira (v8.3.4#803005)
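A minimal sketch of the failure mode described above: if a controller service resolves `#{schema}` once and caches the result, a later parameter update is invisible until the value is re-resolved. The class and method names here are illustrative, not NiFi's actual parameter API.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Toy model of NiFi-style #{param} substitution, showing why a component
// that caches the resolved value keeps serving a stale parameter.
public class ParameterResolutionSketch {
    private static final Pattern PARAM = Pattern.compile("#\\{(\\w+)\\}");

    static String resolve(final String text, final Map<String, String> params) {
        final Matcher m = PARAM.matcher(text);
        final StringBuffer sb = new StringBuffer();
        while (m.find()) {
            m.appendReplacement(sb, Matcher.quoteReplacement(params.getOrDefault(m.group(1), "")));
        }
        m.appendTail(sb);
        return sb.toString();
    }

    public static void main(String[] args) {
        final Map<String, String> params = new HashMap<>();
        params.put("schema", "{\"fields\":[\"myField1\"]}");
        final String cached = resolve("#{schema}", params); // resolved once at start

        params.put("schema", "{\"fields\":[\"myField1\",\"myField2\"]}"); // parameter updated
        final String fresh = resolve("#{schema}", params);

        System.out.println(cached.equals(fresh)); // false: the cached copy is stale
    }
}
```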
[jira] [Created] (NIFI-7244) Ignore tests not written to work on Windows
Joe Witt created NIFI-7244: -- Summary: Ignore tests not written to work on Windows Key: NIFI-7244 URL: https://issues.apache.org/jira/browse/NIFI-7244 Project: Apache NiFi Issue Type: Task Reporter: Joe Witt Assignee: Joe Witt
[jira] [Updated] (NIFI-7210) Add process group information in bulletins
[ https://issues.apache.org/jira/browse/NIFI-7210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joe Witt updated NIFI-7210: --- Fix Version/s: 1.11.4 > Add process group information in bulletins > -- > > Key: NIFI-7210 > URL: https://issues.apache.org/jira/browse/NIFI-7210 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Reporter: Pierre Villard >Assignee: Pierre Villard >Priority: Major > Fix For: 1.12.0, 1.11.4 > > Time Spent: 50m > Remaining Estimate: 0h > > Similarly to NIFI-7106, some information could be added regarding the process > group where the bulletin has been generated (when it's for a component that > is in a process group).
[jira] [Updated] (NIFI-7224) Unable to import a "Download flow" JSON file into Registry
[ https://issues.apache.org/jira/browse/NIFI-7224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joe Witt updated NIFI-7224: --- Fix Version/s: 1.11.4 > Unable to import a "Download flow" JSON file into Registry > -- > > Key: NIFI-7224 > URL: https://issues.apache.org/jira/browse/NIFI-7224 > Project: Apache NiFi > Issue Type: Bug >Reporter: Andrew M. Lim >Assignee: Bryan Bende >Priority: Major > Fix For: 1.12.0, 1.11.4 > > Time Spent: 0.5h > Remaining Estimate: 0h > > Selecting "Download flow" for a process group which generated the file: > {{simple_download_flow.json}} > {{Tried to import this into Registry:}} > ./cli.sh demo quick-import -i > /Users/andrew.lim/Downloads/simple_download_flow.json > But got this error: > {{ERROR: Error executing command 'quick-import' : null}} > Added -verbose and see this stack trace: > org.apache.nifi.toolkit.cli.api.CommandException: Error executing command > 'quick-import' : null > at > org.apache.nifi.toolkit.cli.impl.command.composite.AbstractCompositeCommand.execute(AbstractCompositeCommand.java:84) > at > org.apache.nifi.toolkit.cli.impl.command.CommandProcessor.processCommand(CommandProcessor.java:252) > at > org.apache.nifi.toolkit.cli.impl.command.CommandProcessor.processGroupCommand(CommandProcessor.java:233) > at > org.apache.nifi.toolkit.cli.impl.command.CommandProcessor.process(CommandProcessor.java:188) > at > org.apache.nifi.toolkit.cli.CLIMain.runSingleCommand(CLIMain.java:145) > at org.apache.nifi.toolkit.cli.CLIMain.main(CLIMain.java:72) > Caused by: java.lang.NullPointerException > at > org.apache.nifi.toolkit.cli.impl.command.registry.flow.ImportFlowVersion.doExecute(ImportFlowVersion.java:92) > at > org.apache.nifi.toolkit.cli.impl.command.composite.QuickImport.importFlowVersion(QuickImport.java:150) > at > org.apache.nifi.toolkit.cli.impl.command.composite.QuickImport.doExecute(QuickImport.java:124) > at > 
org.apache.nifi.toolkit.cli.impl.command.composite.QuickImport.doExecute(QuickImport.java:48) > at > org.apache.nifi.toolkit.cli.impl.command.composite.AbstractCompositeCommand.execute(AbstractCompositeCommand.java:80) > ... 5 more
[jira] [Resolved] (NIFI-7200) IPv4 socket resource leak
[ https://issues.apache.org/jira/browse/NIFI-7200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joe Witt resolved NIFI-7200. Resolution: Fixed > IPv4 socket resource leak > - > > Key: NIFI-7200 > URL: https://issues.apache.org/jira/browse/NIFI-7200 > Project: Apache NiFi > Issue Type: Bug >Reporter: Joe Witt >Priority: Critical > Fix For: 1.12.0, 1.11.4 > > Attachments: Screen Shot 2020-03-06 at 1.04.57 PM.png, lsof > test-node2.txt, test-node1-flow.png, test-node1.dump, test-node1.logs.zip, > test-node2-flow.png, test-node2.dump, test-node2.logs.zip > > > https://issues.apache.org/jira/browse/NIFI-7114?focusedCommentId=17044888&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-17044888
[jira] [Updated] (NIFI-7224) Unable to import a "Download flow" JSON file into Registry
[ https://issues.apache.org/jira/browse/NIFI-7224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joe Witt updated NIFI-7224: --- Fix Version/s: (was: 1.11.4) > Unable to import a "Download flow" JSON file into Registry > -- > > Key: NIFI-7224 > URL: https://issues.apache.org/jira/browse/NIFI-7224 > Project: Apache NiFi > Issue Type: Bug >Reporter: Andrew M. Lim >Assignee: Bryan Bende >Priority: Major > Fix For: 1.12.0 > > Time Spent: 0.5h > Remaining Estimate: 0h > > Selecting "Download flow" for a process group which generated the file: > {{simple_download_flow.json}} > {{Tried to import this into Registry:}} > ./cli.sh demo quick-import -i > /Users/andrew.lim/Downloads/simple_download_flow.json > But got this error: > {{ERROR: Error executing command 'quick-import' : null}} > Added -verbose and see this stack trace: > org.apache.nifi.toolkit.cli.api.CommandException: Error executing command > 'quick-import' : null > at > org.apache.nifi.toolkit.cli.impl.command.composite.AbstractCompositeCommand.execute(AbstractCompositeCommand.java:84) > at > org.apache.nifi.toolkit.cli.impl.command.CommandProcessor.processCommand(CommandProcessor.java:252) > at > org.apache.nifi.toolkit.cli.impl.command.CommandProcessor.processGroupCommand(CommandProcessor.java:233) > at > org.apache.nifi.toolkit.cli.impl.command.CommandProcessor.process(CommandProcessor.java:188) > at > org.apache.nifi.toolkit.cli.CLIMain.runSingleCommand(CLIMain.java:145) > at org.apache.nifi.toolkit.cli.CLIMain.main(CLIMain.java:72) > Caused by: java.lang.NullPointerException > at > org.apache.nifi.toolkit.cli.impl.command.registry.flow.ImportFlowVersion.doExecute(ImportFlowVersion.java:92) > at > org.apache.nifi.toolkit.cli.impl.command.composite.QuickImport.importFlowVersion(QuickImport.java:150) > at > org.apache.nifi.toolkit.cli.impl.command.composite.QuickImport.doExecute(QuickImport.java:124) > at > 
org.apache.nifi.toolkit.cli.impl.command.composite.QuickImport.doExecute(QuickImport.java:48) > at > org.apache.nifi.toolkit.cli.impl.command.composite.AbstractCompositeCommand.execute(AbstractCompositeCommand.java:80) > ... 5 more
[jira] [Commented] (NIFI-6530) HTTP SiteToSite server returns 201 in case no data is available
[ https://issues.apache.org/jira/browse/NIFI-6530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17056247#comment-17056247 ] ASF subversion and git services commented on NIFI-6530: --- Commit afad982e91debd1109a6ec6d1865a77e8b3470ee in nifi's branch refs/heads/master from Mark Payne [ https://gitbox.apache.org/repos/asf?p=nifi.git;h=afad982 ] NIFI-7200: Revert "NIFI-6530 - HTTP SiteToSite server returns 201 in case no data is available" This reverts commit f01668e66ad2e45197915769e966a4be27e1592e. Signed-off-by: Joe Witt > HTTP SiteToSite server returns 201 in case no data is available > --- > > Key: NIFI-6530 > URL: https://issues.apache.org/jira/browse/NIFI-6530 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 1.9.2 >Reporter: Arpad Boda >Assignee: Arpad Boda >Priority: Major > Fix For: 1.10.0 > > Time Spent: 6.5h > Remaining Estimate: 0h > > When MiNiFi or other NiFi connects to a HTTP SiteToSite server, the server > always returns 201 in case of transaction creation. > This is inefficient as transactions are created, tracked and later deleted > without anything really being transmitted. According to comments in MiNiFI > code 200 should be returned in such case, although 204 would be a better > choice in HTTP standard point of view.
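The status-code argument in the issue above reduces to a small decision: 201 Created implies a transaction resource worth creating and tracking, while 204 No Content signals a successful request with nothing to transfer. A hedged sketch of that decision (illustrative only, not the actual SiteToSite server code):

```java
// Sketch of the proposed behavior: report 201 Created only when a
// transaction actually has data to move; answer 204 No Content otherwise,
// so no empty transaction needs to be created, tracked, and deleted.
public class TransactionStatusSketch {
    static int statusForTransactionCreate(final boolean dataAvailable) {
        return dataAvailable ? 201 : 204;
    }

    public static void main(String[] args) {
        System.out.println(statusForTransactionCreate(true));
        System.out.println(statusForTransactionCreate(false));
    }
}
```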
[GitHub] [nifi] joewitt closed pull request #4128: Revert "NIFI-6530 - HTTP SiteToSite server returns 201 in case no dat…
joewitt closed pull request #4128: Revert "NIFI-6530 - HTTP SiteToSite server returns 201 in case no dat… URL: https://github.com/apache/nifi/pull/4128
[jira] [Commented] (NIFI-7200) IPv4 socket resource leak
[ https://issues.apache.org/jira/browse/NIFI-7200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17056246#comment-17056246 ] ASF subversion and git services commented on NIFI-7200: --- Commit afad982e91debd1109a6ec6d1865a77e8b3470ee in nifi's branch refs/heads/master from Mark Payne [ https://gitbox.apache.org/repos/asf?p=nifi.git;h=afad982 ] NIFI-7200: Revert "NIFI-6530 - HTTP SiteToSite server returns 201 in case no data is available" This reverts commit f01668e66ad2e45197915769e966a4be27e1592e. Signed-off-by: Joe Witt > IPv4 socket resource leak > - > > Key: NIFI-7200 > URL: https://issues.apache.org/jira/browse/NIFI-7200 > Project: Apache NiFi > Issue Type: Bug >Reporter: Joe Witt >Priority: Critical > Fix For: 1.12.0, 1.11.4 > > Attachments: Screen Shot 2020-03-06 at 1.04.57 PM.png, lsof > test-node2.txt, test-node1-flow.png, test-node1.dump, test-node1.logs.zip, > test-node2-flow.png, test-node2.dump, test-node2.logs.zip > > > https://issues.apache.org/jira/browse/NIFI-7114?focusedCommentId=17044888&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-17044888
[jira] [Updated] (NIFI-7235) 1.11.3 broke SSL
[ https://issues.apache.org/jira/browse/NIFI-7235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joe Witt updated NIFI-7235: --- Fix Version/s: (was: 1.11.3) > 1.11.3 broke SSL > > > Key: NIFI-7235 > URL: https://issues.apache.org/jira/browse/NIFI-7235 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 1.11.3 > Environment: Linux, Java 8 and 11 >Reporter: Lance Kinley >Priority: Major > Attachments: nifi-error.png > > > After signing in via client certificate, the UI shows: > PKIX path building failed: > sun.security.provider.certpath.SunCertPathBuilderException: unable to find > valid certification path to requested target > This does not occur on 1.10.0 - 1.11.2 > I am using a self-signed CA and certs generated from it. > Stack trace in log: > 2020-03-07 06:10:30,369 WARN [Replicate Request Thread-1] > o.a.n.c.c.h.r.ThreadPoolRequestReplicator > javax.net.ssl.SSLHandshakeException: > sun.security.validator.ValidatorException: PKIX path building failed: > sun.security.provider.certpath.SunCertPathBuilderException: unable to find > valid certification path to requested target > at sun.security.ssl.Alerts.getSSLException(Alerts.java:192) > at sun.security.ssl.SSLSocketImpl.fatal(SSLSocketImpl.java:1946) > at sun.security.ssl.Handshaker.fatalSE(Handshaker.java:316) > at sun.security.ssl.Handshaker.fatalSE(Handshaker.java:310) > at > sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1639) > at > sun.security.ssl.ClientHandshaker.processMessage(ClientHandshaker.java:223) > at sun.security.ssl.Handshaker.processLoop(Handshaker.java:1037) > at sun.security.ssl.Handshaker.process_record(Handshaker.java:965) > at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:1064) > at > sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1367) > at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1395) > at 
sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1379) > at > okhttp3.internal.connection.RealConnection.connectTls(RealConnection.java:302) > at > okhttp3.internal.connection.RealConnection.establishProtocol(RealConnection.java:270) > at > okhttp3.internal.connection.RealConnection.connect(RealConnection.java:162) > at > okhttp3.internal.connection.StreamAllocation.findConnection(StreamAllocation.java:257) > at > okhttp3.internal.connection.StreamAllocation.findHealthyConnection(StreamAllocation.java:135) > at > okhttp3.internal.connection.StreamAllocation.newStream(StreamAllocation.java:114) > at > okhttp3.internal.connection.ConnectInterceptor.intercept(ConnectInterceptor.java:42) > at > okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147) > at > okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:121) > at > okhttp3.internal.cache.CacheInterceptor.intercept(CacheInterceptor.java:93) > at > okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147) > at > okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:121) > at > okhttp3.internal.http.BridgeInterceptor.intercept(BridgeInterceptor.java:93) > at > okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147) > at > okhttp3.internal.http.RetryAndFollowUpInterceptor.intercept(RetryAndFollowUpInterceptor.java:126) > at > okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147) > at > okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:121) > at okhttp3.RealCall.getResponseWithInterceptorChain(RealCall.java:200) > at okhttp3.RealCall.execute(RealCall.java:77) > at > org.apache.nifi.cluster.coordination.http.replication.okhttp.OkHttpReplicationClient.replicate(OkHttpReplicationClient.java:143) > at > org.apache.nifi.cluster.coordination.http.replication.okhttp.OkHttpReplicationClient.replicate(OkHttpReplicationClient.java:137) > 
at > org.apache.nifi.cluster.coordination.http.replication.ThreadPoolRequestReplicator.replicateRequest(ThreadPoolRequestReplicator.java:647) > at > org.apache.nifi.cluster.coordination.http.replication.ThreadPoolRequestReplicator$NodeHttpRequest.run(ThreadPoolRequestReplicator.java:839) > at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > at java.lang.Thread.run(Thread.java:748) > Caused by: sun.security.validator.ValidatorException: PKIX path building > failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to > find
[GitHub] [nifi] turcsanyip opened a new pull request #4129: NIFI-7239: Upgrade the Hive 3 bundle to use Apache Hive 3.1.2
turcsanyip opened a new pull request #4129: NIFI-7239: Upgrade the Hive 3 bundle to use Apache Hive 3.1.2 URL: https://github.com/apache/nifi/pull/4129 Thank you for submitting a contribution to Apache NiFi. Please provide a short description of the PR here: Description of PR _Enables X functionality; fixes bug NIFI-XXXX._ In order to streamline the review of the contribution we ask you to ensure the following steps have been taken: ### For all changes: - [ ] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? - [ ] Does your PR title start with **NIFI-XXXX** where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character. - [ ] Has your PR been rebased against the latest commit within the target branch (typically `master`)? - [ ] Is your initial contribution a single, squashed commit? _Additional commits in response to PR reviewer feedback should be made on this branch and pushed to allow change tracking. Do not `squash` or use `--force` when pushing to allow for clean monitoring of changes._ ### For code changes: - [ ] Have you ensured that the full suite of tests is executed via `mvn -Pcontrib-check clean install` at the root `nifi` folder? - [ ] Have you written or updated unit tests to verify your changes? - [ ] Have you verified that the full build is successful on both JDK 8 and JDK 11? - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the `LICENSE` file, including the main `LICENSE` file under `nifi-assembly`? - [ ] If applicable, have you updated the `NOTICE` file, including the main `NOTICE` file found under `nifi-assembly`? - [ ] If adding new Properties, have you added `.displayName` in addition to `.name` (programmatic access) for each of the new properties? 
### For documentation related changes: - [ ] Have you ensured that format looks appropriate for the output in which it is rendered? ### Note: Please ensure that once the PR is submitted, you check travis-ci for build issues and submit an update to your PR as soon as possible.
[jira] [Reopened] (NIFI-7235) 1.11.3 broke SSL
[ https://issues.apache.org/jira/browse/NIFI-7235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joe Witt reopened NIFI-7235: > 1.11.3 broke SSL > > > Key: NIFI-7235 > URL: https://issues.apache.org/jira/browse/NIFI-7235 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 1.11.3 > Environment: Linux, Java 8 and 11 >Reporter: Lance Kinley >Priority: Major > Attachments: nifi-error.png > > > After signing in via client certificate, the UI shows: > PKIX path building failed: > sun.security.provider.certpath.SunCertPathBuilderException: unable to find > valid certification path to requested target > This does not occur on 1.10.0 - 1.11.2 > I am using a self-signed CA and certs generated from it. > Stack trace in log: > 2020-03-07 06:10:30,369 WARN [Replicate Request Thread-1] > o.a.n.c.c.h.r.ThreadPoolRequestReplicator > javax.net.ssl.SSLHandshakeException: > sun.security.validator.ValidatorException: PKIX path building failed: > sun.security.provider.certpath.SunCertPathBuilderException: unable to find > valid certification path to requested target > at sun.security.ssl.Alerts.getSSLException(Alerts.java:192) > at sun.security.ssl.SSLSocketImpl.fatal(SSLSocketImpl.java:1946) > at sun.security.ssl.Handshaker.fatalSE(Handshaker.java:316) > at sun.security.ssl.Handshaker.fatalSE(Handshaker.java:310) > at > sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1639) > at > sun.security.ssl.ClientHandshaker.processMessage(ClientHandshaker.java:223) > at sun.security.ssl.Handshaker.processLoop(Handshaker.java:1037) > at sun.security.ssl.Handshaker.process_record(Handshaker.java:965) > at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:1064) > at > sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1367) > at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1395) > at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1379) > at > 
okhttp3.internal.connection.RealConnection.connectTls(RealConnection.java:302) > at > okhttp3.internal.connection.RealConnection.establishProtocol(RealConnection.java:270) > at > okhttp3.internal.connection.RealConnection.connect(RealConnection.java:162) > at > okhttp3.internal.connection.StreamAllocation.findConnection(StreamAllocation.java:257) > at > okhttp3.internal.connection.StreamAllocation.findHealthyConnection(StreamAllocation.java:135) > at > okhttp3.internal.connection.StreamAllocation.newStream(StreamAllocation.java:114) > at > okhttp3.internal.connection.ConnectInterceptor.intercept(ConnectInterceptor.java:42) > at > okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147) > at > okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:121) > at > okhttp3.internal.cache.CacheInterceptor.intercept(CacheInterceptor.java:93) > at > okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147) > at > okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:121) > at > okhttp3.internal.http.BridgeInterceptor.intercept(BridgeInterceptor.java:93) > at > okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147) > at > okhttp3.internal.http.RetryAndFollowUpInterceptor.intercept(RetryAndFollowUpInterceptor.java:126) > at > okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147) > at > okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:121) > at okhttp3.RealCall.getResponseWithInterceptorChain(RealCall.java:200) > at okhttp3.RealCall.execute(RealCall.java:77) > at > org.apache.nifi.cluster.coordination.http.replication.okhttp.OkHttpReplicationClient.replicate(OkHttpReplicationClient.java:143) > at > org.apache.nifi.cluster.coordination.http.replication.okhttp.OkHttpReplicationClient.replicate(OkHttpReplicationClient.java:137) > at > 
org.apache.nifi.cluster.coordination.http.replication.ThreadPoolRequestReplicator.replicateRequest(ThreadPoolRequestReplicator.java:647) > at > org.apache.nifi.cluster.coordination.http.replication.ThreadPoolRequestReplicator$NodeHttpRequest.run(ThreadPoolRequestReplicator.java:839) > at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > at java.lang.Thread.run(Thread.java:748) > Caused by: sun.security.validator.ValidatorException: PKIX path building > failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to > find valid certification path to
[jira] [Resolved] (NIFI-7235) 1.11.3 broke SSL
[ https://issues.apache.org/jira/browse/NIFI-7235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joe Witt resolved NIFI-7235. Resolution: Information Provided > 1.11.3 broke SSL > > > Key: NIFI-7235 > URL: https://issues.apache.org/jira/browse/NIFI-7235 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 1.11.3 > Environment: Linux, Java 8 and 11 >Reporter: Lance Kinley >Priority: Major > Attachments: nifi-error.png > > > After signing in via client certificate, the UI shows: > PKIX path building failed: > sun.security.provider.certpath.SunCertPathBuilderException: unable to find > valid certification path to requested target > This does not occur on 1.10.0 - 1.11.2 > I am using a self-signed CA and certs generated from it. > Stack trace in log: > 2020-03-07 06:10:30,369 WARN [Replicate Request Thread-1] > o.a.n.c.c.h.r.ThreadPoolRequestReplicator > javax.net.ssl.SSLHandshakeException: > sun.security.validator.ValidatorException: PKIX path building failed: > sun.security.provider.certpath.SunCertPathBuilderException: unable to find > valid certification path to requested target > at sun.security.ssl.Alerts.getSSLException(Alerts.java:192) > at sun.security.ssl.SSLSocketImpl.fatal(SSLSocketImpl.java:1946) > at sun.security.ssl.Handshaker.fatalSE(Handshaker.java:316) > at sun.security.ssl.Handshaker.fatalSE(Handshaker.java:310) > at > sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1639) > at > sun.security.ssl.ClientHandshaker.processMessage(ClientHandshaker.java:223) > at sun.security.ssl.Handshaker.processLoop(Handshaker.java:1037) > at sun.security.ssl.Handshaker.process_record(Handshaker.java:965) > at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:1064) > at > sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1367) > at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1395) > at 
[jira] [Resolved] (NIFI-7235) 1.11.3 broke SSL
[ https://issues.apache.org/jira/browse/NIFI-7235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Lance Kinley resolved NIFI-7235.
--------------------------------
    Fix Version/s: 1.11.3
       Resolution: Fixed

Closing due to workable solution

> 1.11.3 broke SSL
> ----------------
>
>                 Key: NIFI-7235
>                 URL: https://issues.apache.org/jira/browse/NIFI-7235
>             Project: Apache NiFi
>          Issue Type: Bug
>          Components: Core Framework
>    Affects Versions: 1.11.3
>         Environment: Linux, Java 8 and 11
>            Reporter: Lance Kinley
>            Priority: Major
>             Fix For: 1.11.3
>
>      Attachments: nifi-error.png
>
>
> After signing in via client certificate, the UI shows:
> PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
> This does not occur on 1.10.0 - 1.11.2.
> I am using a self-signed CA and certs generated from it.
> Stack trace in log:
> 2020-03-07 06:10:30,369 WARN [Replicate Request Thread-1] o.a.n.c.c.h.r.ThreadPoolRequestReplicator
> javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
> 	at sun.security.ssl.Alerts.getSSLException(Alerts.java:192)
> 	at sun.security.ssl.SSLSocketImpl.fatal(SSLSocketImpl.java:1946)
> 	at sun.security.ssl.Handshaker.fatalSE(Handshaker.java:316)
> 	at sun.security.ssl.Handshaker.fatalSE(Handshaker.java:310)
> 	at sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1639)
> 	at sun.security.ssl.ClientHandshaker.processMessage(ClientHandshaker.java:223)
> 	at sun.security.ssl.Handshaker.processLoop(Handshaker.java:1037)
> 	at sun.security.ssl.Handshaker.process_record(Handshaker.java:965)
> 	at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:1064)
> 	at sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1367)
> 	at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1395)
> 	at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1379)
> 	at okhttp3.internal.connection.RealConnection.connectTls(RealConnection.java:302)
> 	at okhttp3.internal.connection.RealConnection.establishProtocol(RealConnection.java:270)
> 	at okhttp3.internal.connection.RealConnection.connect(RealConnection.java:162)
> 	at okhttp3.internal.connection.StreamAllocation.findConnection(StreamAllocation.java:257)
> 	at okhttp3.internal.connection.StreamAllocation.findHealthyConnection(StreamAllocation.java:135)
> 	at okhttp3.internal.connection.StreamAllocation.newStream(StreamAllocation.java:114)
> 	at okhttp3.internal.connection.ConnectInterceptor.intercept(ConnectInterceptor.java:42)
> 	at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147)
> 	at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:121)
> 	at okhttp3.internal.cache.CacheInterceptor.intercept(CacheInterceptor.java:93)
> 	at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147)
> 	at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:121)
> 	at okhttp3.internal.http.BridgeInterceptor.intercept(BridgeInterceptor.java:93)
> 	at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147)
> 	at okhttp3.internal.http.RetryAndFollowUpInterceptor.intercept(RetryAndFollowUpInterceptor.java:126)
> 	at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147)
> 	at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:121)
> 	at okhttp3.RealCall.getResponseWithInterceptorChain(RealCall.java:200)
> 	at okhttp3.RealCall.execute(RealCall.java:77)
> 	at org.apache.nifi.cluster.coordination.http.replication.okhttp.OkHttpReplicationClient.replicate(OkHttpReplicationClient.java:143)
> 	at org.apache.nifi.cluster.coordination.http.replication.okhttp.OkHttpReplicationClient.replicate(OkHttpReplicationClient.java:137)
> 	at org.apache.nifi.cluster.coordination.http.replication.ThreadPoolRequestReplicator.replicateRequest(ThreadPoolRequestReplicator.java:647)
> 	at org.apache.nifi.cluster.coordination.http.replication.ThreadPoolRequestReplicator$NodeHttpRequest.run(ThreadPoolRequestReplicator.java:839)
> 	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> 	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> 	at java.lang.Thread.run(Thread.java:748)
> Caused by: sun.security.validator.ValidatorException: PKIX path
[jira] [Commented] (NIFI-7235) 1.11.3 broke SSL
[ https://issues.apache.org/jira/browse/NIFI-7235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17056222#comment-17056222 ]

Lance Kinley commented on NIFI-7235:
------------------------------------

Yes, adding keyPasswd in nifi.properties to match the keystorePassword fixed it. Thanks for the help.

> 1.11.3 broke SSL
> ----------------
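The workaround in this comment maps to the security entries in nifi.properties. A minimal sketch of the relevant lines; the keystore path, type, and password values here are placeholders, not taken from the report:

```properties
# Keystore settings (illustrative values)
nifi.security.keystore=./conf/keystore.jks
nifi.security.keystoreType=jks
nifi.security.keystorePasswd=examplePassword

# Workaround from this thread: set the key password explicitly to the same
# value as the keystore password instead of leaving it blank.
nifi.security.keyPasswd=examplePassword
```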
[GitHub] [nifi] MuazmaZ commented on issue #4126: NIFI-7103 Adding PutAzureDataLakeStorage Processor to provide native support for Azure Data Lake Storage Gen 2
MuazmaZ commented on issue #4126: NIFI-7103 Adding PutAzureDataLakeStorage Processor to provide native support for Azure Data Lake Storage Gen 2 URL: https://github.com/apache/nifi/pull/4126#issuecomment-597213476 @turcsanyip added the missing dependency. All checks have passed. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #743: Minificpp 1169 - Simplify C2 metrics collection and reporting
szaszm commented on a change in pull request #743: Minificpp 1169 - Simplify C2 metrics collection and reporting
URL: https://github.com/apache/nifi-minifi-cpp/pull/743#discussion_r390477569

## File path: libminifi/include/FlowController.h ##

@@ -304,23 +305,23 @@ class FlowController : public core::controller::ControllerServiceProvider, publi
   virtual void enableAllControllerServices();

   /**
-   * Retrieves all root response nodes from this source.
-   * @param metric_vector -- metrics will be placed in this vector.
-   * @return result of the get operation.
-   *  0 Success
-   *  1 No error condition, but cannot obtain lock in timely manner.
-   * -1 failure
+   * Retrieves metrics node
+   * @return metrics response node
    */
-  virtual int16_t getResponseNodes(std::vector<std::shared_ptr<state::response::ResponseNode>> &metric_vector, uint16_t metricsClass);
+  virtual std::shared_ptr<state::response::ResponseNode> getMetricsNode() const;
+
+  /**
+   * Retrieves root nodes configured to be included in heartbeat
+   * @param includeManifest -- determines if manifest is to be included
+   * @return a list of response nodes
+   */
+  virtual std::vector<std::shared_ptr<state::response::ResponseNode>> getHeartbeatNodes(bool includeManifest) const;

Review comment: (response to @arpadboda) I think `root_response_nodes_` owns the objects and this function shares that ownership with the callers. Whether this shared ownership is needed is not clear to me. We definitely cannot use `unique_ptr`s for the returned pointers here, since ownership belongs to `root_response_nodes_`, but we could perhaps use `state::response::ResponseNode*` (i.e. an observer pointer) if the lifetime of the nodes does not necessitate shared ownership. Additionally, `root_response_nodes_` is protected, which means it is part of our API and contains `shared_ptr`s, so we are not going to get `unique_ptr`s there.

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
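The ownership question discussed in this review can be sketched with a toy model. The class and member names below are hypothetical stand-ins for the MiNiFi types, not the actual API: a container of `shared_ptr` owners, a getter that shares ownership (as the PR does), and an alternative getter that hands out non-owning observer pointers.

```cpp
#include <memory>
#include <string>
#include <vector>

// Toy response node; the real state::response::ResponseNode is richer.
struct ResponseNode {
  explicit ResponseNode(std::string n) : name(std::move(n)) {}
  std::string name;
};

// Toy controller owning its nodes, mirroring root_response_nodes_.
class Controller {
 public:
  void addNode(std::shared_ptr<ResponseNode> node) {
    root_response_nodes_.push_back(std::move(node));
  }

  // Option A: share ownership with callers (what the PR's signature implies).
  std::vector<std::shared_ptr<ResponseNode>> getHeartbeatNodesShared() const {
    return root_response_nodes_;
  }

  // Option B: non-owning observer pointers; only safe while the controller
  // (and thus root_response_nodes_) outlives every caller that holds them.
  std::vector<ResponseNode*> getHeartbeatNodesObserved() const {
    std::vector<ResponseNode*> observers;
    observers.reserve(root_response_nodes_.size());
    for (const auto& node : root_response_nodes_) {
      observers.push_back(node.get());
    }
    return observers;
  }

 private:
  std::vector<std::shared_ptr<ResponseNode>> root_response_nodes_;
};
```

Option B is cheaper and documents that callers do not manage lifetime, but as the comment notes it only works if the nodes' lifetime does not genuinely require shared ownership.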
[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #743: Minificpp 1169 - Simplify C2 metrics collection and reporting
szaszm commented on a change in pull request #743: Minificpp 1169 - Simplify C2 metrics collection and reporting
URL: https://github.com/apache/nifi-minifi-cpp/pull/743#discussion_r390463141

## File path: extensions/http-curl/tests/HTTPIntegrationBase.h ##

@@ -91,4 +87,64 @@ void CoapIntegrationBase::setUrl(std::string url, CivetHandler *handler) {
   }
 }

+class VerifyC2Base : public CoapIntegrationBase {
+ public:
+  explicit VerifyC2Base(bool isSecure)
+      : isSecure(isSecure) {
+  }
+
+  virtual void testSetup() {
+    LogTestController::getInstance().setDebug();
+    LogTestController::getInstance().setDebug();
+  }
+
+  void runAssertions() {

Review comment: It's already declared to be pure virtual by the base class. I'd just omit it from this class definition. Supporting guideline: "33. Make non-leaf classes abstract" from More Effective C++ (1996), Scott Meyers.

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
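The guideline cited in this review can be illustrated with a small hypothetical hierarchy (names invented, not the MiNiFi classes): the intermediate non-leaf class simply does not redeclare the base's pure virtual function, so it stays abstract automatically and only the leaf class provides the implementation.

```cpp
#include <memory>
#include <string>

// Abstract base: declares the pure virtual exactly once.
struct IntegrationBase {
  virtual ~IntegrationBase() = default;
  virtual std::string runAssertions() = 0;
};

// Intermediate non-leaf class: adds shared behavior, but does NOT redeclare
// runAssertions(). It inherits the pure virtual and remains abstract.
struct VerifyBase : IntegrationBase {
  std::string label() const { return "verify"; }
};

// Leaf class: the only place runAssertions() is implemented.
struct VerifyC2 : VerifyBase {
  std::string runAssertions() override { return label() + ": ok"; }
};

std::string runLeaf() {
  std::unique_ptr<IntegrationBase> t = std::make_unique<VerifyC2>();
  return t->runAssertions();
}
```

Attempting `VerifyBase v;` would fail to compile, which is exactly the "non-leaf classes are abstract" property without any redundant redeclaration.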
[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #743: Minificpp 1169 - Simplify C2 metrics collection and reporting
szaszm commented on a change in pull request #743: Minificpp 1169 - Simplify C2 metrics collection and reporting
URL: https://github.com/apache/nifi-minifi-cpp/pull/743#discussion_r390460122

## File path: extensions/http-curl/tests/HTTPHandlers.h ##

@@ -343,4 +345,104 @@ class DeleteTransactionResponder : public CivetHandler {
   std::string response_code;
 };

+class HeartbeatHandler : public CivetHandler {
+ public:
+  explicit HeartbeatHandler(bool isSecure)
+      : isSecure(isSecure) {
+  }
+
+  std::string readPost(struct mg_connection *conn) {
+    std::string response;
+    int blockSize = 1024 * sizeof(char), readBytes;
+
+    char buffer[1024];
+    while ((readBytes = mg_read(conn, buffer, blockSize)) > 0) {
+      response.append(buffer, 0, (readBytes / sizeof(char)));
+    }
+    return response;
+  }
+
+  void sendStopOperation(struct mg_connection *conn) {
+    std::string resp = "{\"operation\" : \"heartbeat\", \"requested_operations\" : [{ \"operationid\" : 41, \"operation\" : \"stop\", \"operand\" : \"invoke\" }, "
+        "{ \"operationid\" : 42, \"operation\" : \"stop\", \"operand\" : \"FlowController\" } ]}";
+    mg_printf(conn, "HTTP/1.1 200 OK\r\nContent-Type: "
+        "text/plain\r\nContent-Length: %lu\r\nConnection: close\r\n\r\n",
+        resp.length());
+    mg_printf(conn, "%s", resp.c_str());
+  }
+
+  void sendHeartbeatResponse(const std::string& operation, const std::string& operand, const std::string& operationId, struct mg_connection * conn) {
+    std::string heartbeat_response = "{\"operation\" : \"heartbeat\",\"requested_operations\": [ {"
+        "\"operation\" : \"" + operation + "\","
+        "\"operationid\" : \"" + operationId + "\","
+        "\"operand\": \"" + operand + "\"}]}";
+
+    mg_printf(conn, "HTTP/1.1 200 OK\r\nContent-Type: "
+        "text/plain\r\nContent-Length: %lu\r\nConnection: close\r\n\r\n",
+        heartbeat_response.length());
+    mg_printf(conn, "%s", heartbeat_response.c_str());
+  }
+
+  void verifyJsonHasAgentManifest(const rapidjson::Document& root) {
+    bool found = false;
+    assert(root.HasMember("agentInfo") == true);
+    assert(root["agentInfo"].HasMember("agentManifest") == true);
+    assert(root["agentInfo"]["agentManifest"].HasMember("bundles") == true);
+
+    for (auto &bundle : root["agentInfo"]["agentManifest"]["bundles"].GetArray()) {
+      assert(bundle.HasMember("artifact"));
+      std::string str = bundle["artifact"].GetString();
+      if (str == "minifi-system") {
+        std::vector<std::string> classes;
+        for (auto &proc : bundle["componentManifest"]["processors"].GetArray()) {
+          classes.push_back(proc["type"].GetString());
+        }
+
+        auto group = minifi::BuildDescription::getClassDescriptions(str);
+        for (auto proc : group.processors_) {
+          assert(std::find(classes.begin(), classes.end(), proc.class_name_) != std::end(classes));
+          found = true;
+        }
+      }
+    }
+    assert(found == true);
+  }
+
+  virtual void handleHeartbeat(const rapidjson::Document& root, struct mg_connection * conn) {
+    (void)conn;

Review comment: You can avoid unused warnings by not naming the argument variable. In this case the name doesn't provide extra information (for readers) over the type. http://isocpp.github.io/CppCoreGuidelines/CppCoreGuidelines#Rf-unused

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
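The suggestion in this review can be sketched with a toy handler (the function and parameter names here are hypothetical): leaving the required-but-unused parameter unnamed, optionally keeping its name in a comment for readers, silences unused-parameter warnings without the `(void)conn;` idiom.

```cpp
#include <cstdint>

// The interface requires a connection argument, but this implementation does
// not use it, so the parameter is left unnamed (name preserved only in a
// comment). This avoids -Wunused-parameter without a (void) cast in the body.
std::int32_t handleHeartbeat(std::int32_t operation_id, void* /*conn*/) {
  return operation_id + 1;
}
```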
[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #743: Minificpp 1169 - Simplify C2 metrics collection and reporting
szaszm commented on a change in pull request #743: Minificpp 1169 - Simplify C2 metrics collection and reporting
URL: https://github.com/apache/nifi-minifi-cpp/pull/743#discussion_r390456234

## File path: extensions/http-curl/tests/HTTPHandlers.h ##

@@ -343,4 +345,104 @@ class DeleteTransactionResponder : public CivetHandler {
   std::string response_code;
 };

+class HeartbeatHandler : public CivetHandler {
+ public:
+  explicit HeartbeatHandler(bool isSecure)
+      : isSecure(isSecure) {
+  }
+
+  std::string readPost(struct mg_connection *conn) {
+    std::string response;
+    int blockSize = 1024 * sizeof(char), readBytes;
+
+    char buffer[1024];
+    while ((readBytes = mg_read(conn, buffer, blockSize)) > 0) {
+      response.append(buffer, 0, (readBytes / sizeof(char)));
+    }
+    return response;
+  }
+
+  void sendStopOperation(struct mg_connection *conn) {
+    std::string resp = "{\"operation\" : \"heartbeat\", \"requested_operations\" : [{ \"operationid\" : 41, \"operation\" : \"stop\", \"operand\" : \"invoke\" }, "
+        "{ \"operationid\" : 42, \"operation\" : \"stop\", \"operand\" : \"FlowController\" } ]}";

Review comment: Using a raw string literal would be nice for hardcoded JSONs. We can use the preprocessor for concatenation of string literals. We can use char arrays to avoid heap allocations. I wish we had `std::string_view`. I'm not sure about wrapping and copy-initialization vs direct-list-initialization.

```cpp
const char heartbeat_response[] =
    R"json({"operation": "heartbeat", "requested_operations": [)json"
    R"json({"operationid": 41, "operation": "stop", "operand": "invoke"}, )json"
    R"json({"operationid": 42, "operation": "stop", "operand": "FlowController"}]})json";

const char heartbeat_response2[]{
    R"json({"operation": "heartbeat", "requested_operations": [{"operationid": 41, "operation": "stop", "operand": "invoke"}, {"operationid": 42, "operation": "stop", "operand": "FlowController"}]})json"
};

const auto response_length = sizeof(heartbeat_response);
```

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
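The raw-string approach suggested in this review relies on adjacent string literals being concatenated at translation time, so a long JSON payload can be split across lines with no runtime cost. A minimal self-contained illustration (the payload below is a toy, not the actual C2 heartbeat body):

```cpp
#include <cstring>

// Adjacent string literals (raw or ordinary) are merged into one array
// constant by the compiler, so splitting a payload across lines costs
// nothing at runtime and avoids heap-allocating a std::string.
const char kPayload[] =
    R"json({"operation": "stop", )json"
    R"json("operand": "FlowController"})json";
```

Note that `sizeof(kPayload)` counts the terminating NUL, so code computing a Content-Length from it should use `sizeof(kPayload) - 1` or `strlen`.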