[GitHub] [nifi] simonbence commented on a change in pull request #5059: NIFI-8519 Adding HDFS support for NAR autoload

2021-05-14 Thread GitBox


simonbence commented on a change in pull request #5059:
URL: https://github.com/apache/nifi/pull/5059#discussion_r632373207



##
File path: 
nifi-nar-bundles/nifi-hadoop-bundle/nifi-hdfs-processors/src/main/java/org/apache/nifi/nar/hadoop/HDFSNarProvider.java
##
@@ -0,0 +1,231 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.nar.hadoop;
+
+import org.apache.commons.io.IOUtils;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.net.NetUtils;
+import org.apache.hadoop.security.UserGroupInformation;
+import org.apache.nifi.hadoop.SecurityUtil;
+import org.apache.nifi.nar.NarProvider;
+import org.apache.nifi.nar.NarProviderInitializationContext;
+import org.apache.nifi.nar.hadoop.util.ExtensionFilter;
+import org.apache.nifi.processors.hadoop.ExtendedConfiguration;
+import org.apache.nifi.processors.hadoop.HdfsResources;
+import org.apache.nifi.security.krb.KerberosKeytabUser;
+import org.apache.nifi.security.krb.KerberosPasswordUser;
+import org.apache.nifi.security.krb.KerberosUser;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import javax.net.SocketFactory;
+import java.io.IOException;
+import java.io.InputStream;
+import java.net.InetSocketAddress;
+import java.net.Socket;
+import java.net.URI;
+import java.security.PrivilegedExceptionAction;
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.List;
+import java.util.Objects;
+import java.util.stream.Collectors;
+
+public class HDFSNarProvider implements NarProvider {
+private static final Logger LOGGER = 
LoggerFactory.getLogger(HDFSNarProvider.class);
+
+private static final String RESOURCES_PARAMETER = "resources";
+private static final String SOURCE_DIRECTORY_PARAMETER = 
"source.directory";
+private static final String KERBEROS_PRINCIPAL_PARAMETER = 
"kerberos.principal";
+private static final String KERBEROS_KEYTAB_PARAMETER = "kerberos.keytab";
+private static final String KERBEROS_PASSWORD_PARAMETER = 
"kerberos.password";
+
+private static final String NAR_EXTENSION = "nar";
+private static final String DELIMITER = "/";
+private static final int BUFFER_SIZE_DEFAULT = 4096;
+private static final Object RESOURCES_LOCK = new Object();
+
+private volatile List<String> resources = null;
+private volatile Path sourceDirectory = null;
+
+private volatile NarProviderInitializationContext context;
+
+private volatile boolean initialized = false;
+
+public void initialize(final NarProviderInitializationContext context) {
+resources = 
Arrays.stream(Objects.requireNonNull(context.getParameters().get(RESOURCES_PARAMETER)).split(",")).map(s
 -> s.trim()).collect(Collectors.toList());
+
+if (resources.isEmpty()) {
+throw new IllegalArgumentException("At least one HDFS 
configuration resource is necessary");
+}
+
+this.sourceDirectory = new 
Path(Objects.requireNonNull(context.getParameters().get(SOURCE_DIRECTORY_PARAMETER)));
+this.context = context;
+this.initialized = true;
+}
+
+@Override
+public Collection<String> listNars() throws IOException {
+if (!initialized) {
+LOGGER.error("Provider is not initialized");
+}
+
+final HdfsResources hdfsResources = getHdfsResources();
+final FileStatus[] fileStatuses = 
hdfsResources.getFileSystem().listStatus(sourceDirectory, new 
ExtensionFilter(NAR_EXTENSION));
+
+final List<String> result = Arrays.stream(fileStatuses)
+.filter(fileStatus -> fileStatus.isFile())
+.map(fileStatus -> fileStatus.getPath().getName())
+.collect(Collectors.toList());
+
+if (LOGGER.isDebugEnabled()) {
+LOGGER.debug("The following nars were found: " + String.join(", ", 
result));
+}
+
+return result;
+}
+
+@Override
+public InputStream fetchNarContents(final String location) throws 
IOException {
+if (!initialized) {
+LOGGER.error("Provider is not initialized");
[GitHub] [nifi-minifi-cpp] martinzink commented on a change in pull request #1044: MINIFICPP-1526 Add ConsumeJournald to consume systemd journal messages

2021-05-14 Thread GitBox


martinzink commented on a change in pull request #1044:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1044#discussion_r632373602



##
File path: extensions/systemd/CMakeLists.txt
##
@@ -0,0 +1,28 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+#
+
+include(${CMAKE_SOURCE_DIR}/extensions/ExtensionHeader.txt)
+
+add_library(minifi-systemd STATIC ConsumeJournald.cpp WorkerThread.cpp 
libwrapper/LibWrapper.h Common.h libwrapper/LibWrapper.cpp 
libwrapper/DlopenWrapper.cpp libwrapper/DlopenWrapper.h)
+set_property(TARGET minifi-systemd PROPERTY POSITION_INDEPENDENT_CODE ON)
+
+target_link_libraries(minifi-systemd ${LIBMINIFI} Threads::Threads date::date)
+
+set(SYSTEMD-EXTENSION minifi-systemd PARENT_SCOPE)
+register_extension(minifi-systemd)

Review comment:
   I think we should add this extension to the linter check as well.
   ```suggestion
   register_extension(minifi-systemd)
   register_extension_linter(minifi-systemd-extension-linter)
   ```
   
   This also brings out a couple of linter violations, which should be fixed as well.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi-minifi-cpp] martinzink commented on pull request #1069: MINIFICPP-1084: Linter check should be platform independent

2021-05-14 Thread GitBox


martinzink commented on pull request #1069:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1069#issuecomment-841044256


   > You could also assign yourself and link 
[MINIFICPP-1521](https://issues.apache.org/jira/browse/MINIFICPP-1521) jira 
ticket to this PR. It seems to me the only thing missing from that is the 
`--quiet` flag to satisfy that ticket as well.
   
   Good idea, I added the quiet flag in 
[7e3e513](https://github.com/martinzink/nifi-minifi-cpp/commit/7e3e51367d92da90a684b8fc29551e4fd7013d6b)


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (NIFI-6501) Add config to limit buffer queue size in CaptureChangeMySQL

2021-05-14 Thread Pawel Chudzik (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-6501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17344364#comment-17344364
 ] 

Pawel Chudzik commented on NIFI-6501:
-

Hey,

Why hasn't this PR been merged? The proposed solution seems good enough, 
and the probability of data loss can be decreased by setting a high timeout.

> Add config to limit buffer queue size in CaptureChangeMySQL
> ---
>
> Key: NIFI-6501
> URL: https://issues.apache.org/jira/browse/NIFI-6501
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Configuration
>Affects Versions: 1.9.2
>Reporter: Purushotham Pushpavanthar
>Assignee: Purushotham Pushpavanthar
>Priority: Critical
>  Labels: easyfix
> Attachments: image-2019-08-02-11-29-10-829.png
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> The CaptureChangeMySQL processor registers a listener with a blocking queue as 
> a buffer with the BinLogClient. When the thread polling from the queue is 
> slower than the writer, the queue grows uncontrollably and brings down the 
> node.
> Since the flow writing to listeners in 
> [mysql-binlog-connector-java|https://github.com/shyiko/mysql-binlog-connector-java]
>  is blocking, we should initialize the queue with an *initial size* and a *queue 
> offer timeout* specified by the user based on the cluster configuration.
> [http://apache-nifi-developer-list.39713.n7.nabble.com/NiFi-Cluster-crashes-while-running-CaptureChangeMySQL-for-CDC-td20895.html]
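The bounded queue with an offer timeout described in the ticket can be sketched as follows. This is a minimal illustration of the approach, not code from the actual PR; the class and method names are invented for the example.

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

class BoundedEventBuffer {
    private final LinkedBlockingQueue<String> queue;
    private final long offerTimeoutMillis;

    BoundedEventBuffer(final int capacity, final long offerTimeoutMillis) {
        // A bounded queue prevents unbounded growth when the consumer lags behind the writer.
        this.queue = new LinkedBlockingQueue<>(capacity);
        this.offerTimeoutMillis = offerTimeoutMillis;
    }

    /** Blocks up to the configured timeout; returns false if the event could not be enqueued. */
    boolean offerEvent(final String event) throws InterruptedException {
        return queue.offer(event, offerTimeoutMillis, TimeUnit.MILLISECONDS);
    }

    String poll() {
        return queue.poll();
    }
}
```

Because `offer` blocks the BinLogClient's writer thread until space frees up or the timeout expires, a high timeout trades latency for a lower chance of dropping events, which matches the reasoning in the comment above.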



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a change in pull request #1066: MINIFICPP-1532: Create a processor to capture resource consumption me…

2021-05-14 Thread GitBox


adamdebreceni commented on a change in pull request #1066:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1066#discussion_r632382945



##
File path: extensions/pdh/PerformanceDataMonitor.cpp
##
@@ -0,0 +1,285 @@
+/**
+ * @file GenerateFlowFile.cpp
+ * GenerateFlowFile class implementation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include "PerformanceDataMonitor.h"
+#include "PDHCounters.h"
+#include "MemoryConsumptionCounter.h"
+#include "utils/StringUtils.h"
+#include "utils/JsonCallback.h"
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace processors {
+
+core::Relationship PerformanceDataMonitor::Success("success", "All files are 
routed to success");
+
+core::Property PerformanceDataMonitor::PredefinedGroups(
+core::PropertyBuilder::createProperty("Predefined Groups")->
+withDescription("Comma separated list from the allowable values, to 
monitor multiple common Windows Performance counters related to these groups")->
+withDefaultValue("")->build());
+
+core::Property PerformanceDataMonitor::CustomPDHCounters(
+core::PropertyBuilder::createProperty("Custom PDH Counters")->
+withDescription("Comma separated list of Windows Performance Counters to 
monitor")->
+withDefaultValue("")->build());
+
+core::Property PerformanceDataMonitor::OutputFormatProperty(
+core::PropertyBuilder::createProperty("Output Format")->
+withDescription("Format of the created flowfiles")->
+withAllowableValue(JSON_FORMAT_STR)->
+withAllowableValue(OPEN_TELEMETRY_FORMAT_STR)->
+withDefaultValue(JSON_FORMAT_STR)->build());
+
+PerformanceDataMonitor::~PerformanceDataMonitor() {
+  PdhCloseQuery(pdh_query_);
+}
+
+void PerformanceDataMonitor::onSchedule(const 
std::shared_ptr<core::ProcessContext>& context, const 
std::shared_ptr<core::ProcessSessionFactory>& sessionFactory) {
+  setupMembersFromProperties(context);
+
+  PdhOpenQueryA(nullptr, 0, &pdh_query_);
+
+  for (auto it = resource_consumption_counters_.begin(); it != 
resource_consumption_counters_.end();) {
+PDHCounter* pdh_counter = dynamic_cast<PDHCounter*>(it->get());
+if (pdh_counter != nullptr) {
+  PDH_STATUS add_to_query_result = pdh_counter->addToQuery(pdh_query_);
+  if (add_to_query_result != ERROR_SUCCESS) {
+logger_->log_error("Error adding %s to query, error code: 0x%x", 
pdh_counter->getName(), add_to_query_result);
+it = resource_consumption_counters_.erase(it);
+continue;
+  }
+}
+++it;
+  }
+
+  PDH_STATUS collect_query_data_result = PdhCollectQueryData(pdh_query_);
+  if (ERROR_SUCCESS != collect_query_data_result) {
+logger_->log_error("Error during PdhCollectQueryData, error code: 0x%x", 
collect_query_data_result);
+  }
+}
+
+void PerformanceDataMonitor::onTrigger(core::ProcessContext* context, 
core::ProcessSession* session) {
+  if (resource_consumption_counters_.empty()) {
+logger_->log_error("No valid counters for PerformanceDataMonitor");
+yield();
+return;
+  }
+
+  std::shared_ptr<core::FlowFile> flowFile = session->create();
+  if (!flowFile) {
+logger_->log_error("Failed to create flowfile!");
+yield();
+return;
+  }
+
+  PDH_STATUS collect_query_data_result = PdhCollectQueryData(pdh_query_);
+  if (ERROR_SUCCESS != collect_query_data_result) {
+logger_->log_error("Error during PdhCollectQueryData, error code: 0x%x", 
collect_query_data_result);
+yield();
+return;
+  }
+
+  rapidjson::Document root = rapidjson::Document(rapidjson::kObjectType);
+  rapidjson::Value& body = prepareJSONBody(root);
+  for (auto& counter : resource_consumption_counters_) {
+if (counter->collectData())
+  counter->addToJson(body, root.GetAllocator());
+  }
+  utils::JsonOutputCallback callback(std::move(root));
+  session->write(flowFile, &callback);
+  session->transfer(flowFile, Success);
+}
+
+void PerformanceDataMonitor::initialize() {
+  setSupportedProperties({ CustomPDHCounters, PredefinedGroups, 
OutputFormatProperty });
+  setSupportedRelationships({ PerformanceDataMonitor::Success });
+}
+
+rapidjson::Value& PerformanceDataMonitor::prepareJSONBody(rapidjson::Document& 
root) {
+  switch (output_format_) {
+case OutputFormat::OPENTELEMETRY:
+  root.AddMember("Name", 

[GitHub] [nifi-minifi-cpp] martinzink commented on a change in pull request #1066: MINIFICPP-1532: Create a processor to capture resource consumption me…

2021-05-14 Thread GitBox


martinzink commented on a change in pull request #1066:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1066#discussion_r632397255



##
File path: extensions/pdh/PerformanceDataMonitor.cpp
##
@@ -0,0 +1,285 @@
+/**
+ * @file GenerateFlowFile.cpp
+ * GenerateFlowFile class implementation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include "PerformanceDataMonitor.h"
+#include "PDHCounters.h"
+#include "MemoryConsumptionCounter.h"
+#include "utils/StringUtils.h"
+#include "utils/JsonCallback.h"
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace processors {
+
+core::Relationship PerformanceDataMonitor::Success("success", "All files are 
routed to success");
+
+core::Property PerformanceDataMonitor::PredefinedGroups(
+core::PropertyBuilder::createProperty("Predefined Groups")->
+withDescription("Comma separated list from the allowable values, to 
monitor multiple common Windows Performance counters related to these groups")->
+withDefaultValue("")->build());
+
+core::Property PerformanceDataMonitor::CustomPDHCounters(
+core::PropertyBuilder::createProperty("Custom PDH Counters")->
+withDescription("Comma separated list of Windows Performance Counters to 
monitor")->
+withDefaultValue("")->build());
+
+core::Property PerformanceDataMonitor::OutputFormatProperty(
+core::PropertyBuilder::createProperty("Output Format")->
+withDescription("Format of the created flowfiles")->
+withAllowableValue(JSON_FORMAT_STR)->
+withAllowableValue(OPEN_TELEMETRY_FORMAT_STR)->
+withDefaultValue(JSON_FORMAT_STR)->build());
+
+PerformanceDataMonitor::~PerformanceDataMonitor() {
+  PdhCloseQuery(pdh_query_);
+}
+
+void PerformanceDataMonitor::onSchedule(const 
std::shared_ptr<core::ProcessContext>& context, const 
std::shared_ptr<core::ProcessSessionFactory>& sessionFactory) {
+  setupMembersFromProperties(context);
+
+  PdhOpenQueryA(nullptr, 0, &pdh_query_);
+
+  for (auto it = resource_consumption_counters_.begin(); it != 
resource_consumption_counters_.end();) {
+PDHCounter* pdh_counter = dynamic_cast<PDHCounter*>(it->get());
+if (pdh_counter != nullptr) {
+  PDH_STATUS add_to_query_result = pdh_counter->addToQuery(pdh_query_);
+  if (add_to_query_result != ERROR_SUCCESS) {
+logger_->log_error("Error adding %s to query, error code: 0x%x", 
pdh_counter->getName(), add_to_query_result);
+it = resource_consumption_counters_.erase(it);
+continue;
+  }
+}
+++it;
+  }
+
+  PDH_STATUS collect_query_data_result = PdhCollectQueryData(pdh_query_);
+  if (ERROR_SUCCESS != collect_query_data_result) {
+logger_->log_error("Error during PdhCollectQueryData, error code: 0x%x", 
collect_query_data_result);
+  }
+}
+
+void PerformanceDataMonitor::onTrigger(core::ProcessContext* context, 
core::ProcessSession* session) {
+  if (resource_consumption_counters_.empty()) {
+logger_->log_error("No valid counters for PerformanceDataMonitor");
+yield();
+return;
+  }
+
+  std::shared_ptr<core::FlowFile> flowFile = session->create();
+  if (!flowFile) {
+logger_->log_error("Failed to create flowfile!");
+yield();
+return;
+  }
+
+  PDH_STATUS collect_query_data_result = PdhCollectQueryData(pdh_query_);
+  if (ERROR_SUCCESS != collect_query_data_result) {
+logger_->log_error("Error during PdhCollectQueryData, error code: 0x%x", 
collect_query_data_result);
+yield();
+return;
+  }
+
+  rapidjson::Document root = rapidjson::Document(rapidjson::kObjectType);
+  rapidjson::Value& body = prepareJSONBody(root);
+  for (auto& counter : resource_consumption_counters_) {
+if (counter->collectData())
+  counter->addToJson(body, root.GetAllocator());
+  }
+  utils::JsonOutputCallback callback(std::move(root));
+  session->write(flowFile, &callback);
+  session->transfer(flowFile, Success);
+}
+
+void PerformanceDataMonitor::initialize() {
+  setSupportedProperties({ CustomPDHCounters, PredefinedGroups, 
OutputFormatProperty });
+  setSupportedRelationships({ PerformanceDataMonitor::Success });
+}
+
+rapidjson::Value& PerformanceDataMonitor::prepareJSONBody(rapidjson::Document& 
root) {
+  switch (output_format_) {
+case OutputFormat::OPENTELEMETRY:
+  root.AddMember("Name", 

[GitHub] [nifi-minifi-cpp] adamdebreceni opened a new pull request #1074: MINIFICPP-1560 - Change some c2 log levels

2021-05-14 Thread GitBox


adamdebreceni opened a new pull request #1074:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1074


   Thank you for submitting a contribution to Apache NiFi - MiNiFi C++.
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [ ] Is there a JIRA ticket associated with this PR? Is it referenced
in the commit message?
   
   - [ ] Does your PR title start with MINIFICPP-XXXX where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [ ] Has your PR been rebased against the latest commit within the target 
branch (typically main)?
   
   - [ ] Is your initial contribution a single, squashed commit?
   
   ### For code changes:
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the LICENSE file?
   - [ ] If applicable, have you updated the NOTICE file?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check GitHub Actions CI 
results for build issues and submit an update to your PR as soon as possible.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] simonbence commented on a change in pull request #5059: NIFI-8519 Adding HDFS support for NAR autoload

2021-05-14 Thread GitBox


simonbence commented on a change in pull request #5059:
URL: https://github.com/apache/nifi/pull/5059#discussion_r632359050



##
File path: 
nifi-nar-bundles/nifi-hadoop-bundle/nifi-hdfs-processors/src/main/java/org/apache/nifi/nar/hadoop/HDFSNarProvider.java
##
@@ -0,0 +1,231 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.nar.hadoop;
+
+import org.apache.commons.io.IOUtils;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.net.NetUtils;
+import org.apache.hadoop.security.UserGroupInformation;
+import org.apache.nifi.hadoop.SecurityUtil;
+import org.apache.nifi.nar.NarProvider;
+import org.apache.nifi.nar.NarProviderInitializationContext;
+import org.apache.nifi.nar.hadoop.util.ExtensionFilter;
+import org.apache.nifi.processors.hadoop.ExtendedConfiguration;
+import org.apache.nifi.processors.hadoop.HdfsResources;
+import org.apache.nifi.security.krb.KerberosKeytabUser;
+import org.apache.nifi.security.krb.KerberosPasswordUser;
+import org.apache.nifi.security.krb.KerberosUser;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import javax.net.SocketFactory;
+import java.io.IOException;
+import java.io.InputStream;
+import java.net.InetSocketAddress;
+import java.net.Socket;
+import java.net.URI;
+import java.security.PrivilegedExceptionAction;
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.List;
+import java.util.Objects;
+import java.util.stream.Collectors;
+
+public class HDFSNarProvider implements NarProvider {
+private static final Logger LOGGER = 
LoggerFactory.getLogger(HDFSNarProvider.class);
+
+private static final String RESOURCES_PARAMETER = "resources";
+private static final String SOURCE_DIRECTORY_PARAMETER = 
"source.directory";
+private static final String KERBEROS_PRINCIPAL_PARAMETER = 
"kerberos.principal";
+private static final String KERBEROS_KEYTAB_PARAMETER = "kerberos.keytab";
+private static final String KERBEROS_PASSWORD_PARAMETER = 
"kerberos.password";
+
+private static final String NAR_EXTENSION = "nar";
+private static final String DELIMITER = "/";
+private static final int BUFFER_SIZE_DEFAULT = 4096;
+private static final Object RESOURCES_LOCK = new Object();
+
+private volatile List<String> resources = null;
+private volatile Path sourceDirectory = null;
+
+private volatile NarProviderInitializationContext context;
+
+private volatile boolean initialized = false;
+
+public void initialize(final NarProviderInitializationContext context) {
+resources = 
Arrays.stream(Objects.requireNonNull(context.getParameters().get(RESOURCES_PARAMETER)).split(",")).map(s
 -> s.trim()).collect(Collectors.toList());
+
+if (resources.isEmpty()) {
+throw new IllegalArgumentException("At least one HDFS 
configuration resource is necessary");
+}
+
+this.sourceDirectory = new 
Path(Objects.requireNonNull(context.getParameters().get(SOURCE_DIRECTORY_PARAMETER)));

Review comment:
   The Path constructor provides some checks, but you are right that it 
provides (and has) no information about what the path is for.
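One common way to address the concern above is to validate the raw parameter before constructing the Path, so the failure message names the parameter it belongs to. A minimal sketch, assuming a helper of our own invention (`requireParameter` is not part of the NiFi API):

```java
import java.util.Map;

final class ParameterUtil {
    private ParameterUtil() {}

    /** Returns the trimmed parameter value, failing with a message that names the parameter. */
    static String requireParameter(final Map<String, String> parameters, final String name) {
        final String value = parameters.get(name);
        if (value == null || value.trim().isEmpty()) {
            throw new IllegalArgumentException("Required parameter \"" + name + "\" is missing or empty");
        }
        return value.trim();
    }
}
```

Compared to a bare `Objects.requireNonNull`, the exception message now tells the operator which configuration entry to fix rather than just reporting a null.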




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi-minifi-cpp] martinzink commented on a change in pull request #1044: MINIFICPP-1526 Add ConsumeJournald to consume systemd journal messages

2021-05-14 Thread GitBox


martinzink commented on a change in pull request #1044:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1044#discussion_r632376008



##
File path: CMakeLists.txt
##
@@ -564,6 +560,14 @@ if (ENABLE_ALL OR ENABLE_AZURE)
createExtension(AZURE-EXTENSIONS "AZURE EXTENSIONS" "This enables Azure 
support" "extensions/azure" "${TEST_DIR}/azure-tests")
 endif()
 
+## Add the systemd extension
+if(CMAKE_SYSTEM_NAME STREQUAL "Linux")

Review comment:
   How does this behave on distros without systemd? e.g. Artix




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (MINIFICPP-1560) Reclassify some c2 logs

2021-05-14 Thread Adam Debreceni (Jira)
Adam Debreceni created MINIFICPP-1560:
-

 Summary: Reclassify some c2 logs
 Key: MINIFICPP-1560
 URL: https://issues.apache.org/jira/browse/MINIFICPP-1560
 Project: Apache NiFi MiNiFi C++
  Issue Type: Improvement
Reporter: Adam Debreceni
Assignee: Adam Debreceni


Currently we log a lot of stuff at DEBUG level in C2Agent, which clutters the 
"happy path". We should move these to TRACE level.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi-registry] exceptionfactory commented on a change in pull request #319: NIFIREG-395 - Implemented the ability to import and export versioned flows through the UI

2021-05-14 Thread GitBox


exceptionfactory commented on a change in pull request #319:
URL: https://github.com/apache/nifi-registry/pull/319#discussion_r632501622



##
File path: 
nifi-registry-core/nifi-registry-web-api/src/main/java/org/apache/nifi/registry/web/api/BucketFlowResource.java
##
@@ -291,6 +294,42 @@ public Response createFlowVersion(
 return 
Response.status(Response.Status.OK).entity(createdSnapshot).build();
 }
 
+@POST
+@Path("{flowId}/versions/import")
+@Consumes(MediaType.APPLICATION_JSON)
+@Produces(MediaType.APPLICATION_JSON)
+@ApiOperation(
+value = "Import flow version",
+notes = "Import the next version of a flow. The version number of 
the object being created will be the " +
+"next available version integer. Flow versions are 
immutable after they are created.",
+response = VersionedFlowSnapshot.class,
+extensions = {
+@Extension(name = "access-policy", properties = {
+@ExtensionProperty(name = "action", value = 
"write"),
+@ExtensionProperty(name = "resource", value = 
"/buckets/{bucketId}") })
+}
+)
+@ApiResponses({
+@ApiResponse(code = 400, message = HttpStatusMessages.MESSAGE_400),
+@ApiResponse(code = 401, message = HttpStatusMessages.MESSAGE_401),
+@ApiResponse(code = 403, message = HttpStatusMessages.MESSAGE_403),
+@ApiResponse(code = 404, message = HttpStatusMessages.MESSAGE_404),
+@ApiResponse(code = 409, message = HttpStatusMessages.MESSAGE_409) 
})
+public Response importVersionedFlow(
+@PathParam("bucketId")
+@ApiParam("The bucket identifier")
+final String bucketId,
+@PathParam("flowId")
+@ApiParam(value = "The flow identifier")
+final String flowId,
+@ApiParam("file") final VersionedFlowSnapshot 
versionedFlowSnapshot,
+@HeaderParam("Comments") final String comments) {
+
+final VersionedFlowSnapshot createdSnapshot = 
serviceFacade.importVersionedFlowSnapshot(versionedFlowSnapshot, bucketId, 
flowId, comments);
+publish(EventFactory.flowVersionCreated(createdSnapshot));
+return 
Response.status(Response.Status.CREATED).entity(createdSnapshot).build();

Review comment:
   Best practice when returning an HTTP 201 includes adding a `Location` 
header with the URL of the new resource.  See 
`ApplicationResource.generateCreatedResponse()` as a helper method for 
returning the Location header with the created entity.
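The gist of the suggestion above is that a 201 response should point at the URI of the newly created resource. A stdlib-only sketch of deriving that `Location` value for the import endpoint; the URL layout and helper name are illustrative, not the actual Registry API:

```java
import java.net.URI;

final class LocationHeader {
    private LocationHeader() {}

    /** Builds the Location URI for a newly created flow version from the import request URI. */
    static URI created(final URI requestUri, final int version) {
        // e.g. POST .../flows/{flowId}/versions/import -> Location: .../flows/{flowId}/versions/{n}
        final String base = requestUri.toString().replaceAll("/import$", "");
        return URI.create(base + "/" + version);
    }
}
```

In JAX-RS, the resulting URI would then be passed to the response builder so the header is emitted alongside the created entity, which is what the suggested helper method encapsulates.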

##
File path: 
nifi-registry-core/nifi-registry-web-api/src/main/java/org/apache/nifi/registry/web/api/BucketFlowResource.java
##
@@ -291,6 +294,42 @@ public Response createFlowVersion(
 return 
Response.status(Response.Status.OK).entity(createdSnapshot).build();
 }
 
+@POST
+@Path("{flowId}/versions/import")
+@Consumes(MediaType.APPLICATION_JSON)
+@Produces(MediaType.APPLICATION_JSON)
+@ApiOperation(
+value = "Import flow version",
+notes = "Import the next version of a flow. The version number of 
the object being created will be the " +
+"next available version integer. Flow versions are 
immutable after they are created.",
+response = VersionedFlowSnapshot.class,
+extensions = {
+@Extension(name = "access-policy", properties = {
+@ExtensionProperty(name = "action", value = 
"write"),
+@ExtensionProperty(name = "resource", value = 
"/buckets/{bucketId}") })
+}
+)
+@ApiResponses({
+@ApiResponse(code = 400, message = HttpStatusMessages.MESSAGE_400),

Review comment:
   Adding an `ApiResponse` annotation with a code of `201` would be helpful 
for completeness in the generated documentation.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] gschaer commented on a change in pull request #5069: NIFI-6685: Add align and distribute UI actions

2021-05-14 Thread GitBox


gschaer commented on a change in pull request #5069:
URL: https://github.com/apache/nifi/pull/5069#discussion_r632440456



##
File path: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-ui/pom.xml
##
@@ -246,6 +246,11 @@
 moment/README.md
 moment/LICENSE
 
+
+
@carbon/icons/svg/**/*

Review comment:
   Good point. Commit af43e5c integrates the carbon icons directly into the 
flowfont.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] gschaer commented on a change in pull request #5069: NIFI-6685: Add align and distribute UI actions

2021-05-14 Thread GitBox


gschaer commented on a change in pull request #5069:
URL: https://github.com/apache/nifi/pull/5069#discussion_r632441945



##
File path: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-ui/src/main/webapp/js/nf/canvas/nf-context-menu.js
##
@@ -844,9 +853,21 @@
 {id: 'show-source-menu-item', condition: isConnection, menuItem: {clazz: 'fa fa-long-arrow-left', text: 'Go to source', action: 'showSource'}},
 {id: 'show-destination-menu-item', condition: isConnection, menuItem: {clazz: 'fa fa-long-arrow-right', text: 'Go to destination', action: 'showDestination'}},
 {separator: true},
-{id: 'align-menu-item', groupMenuItem: {clazz: 'fa', text: 'Align'}, menuItems: [
-{id: 'align-horizontal-menu-item', condition: canAlign, menuItem: { clazz: 'fa fa-align-center fa-rotate-90', text: 'Horizontally', action: 'alignHorizontal'}},
-{id: 'align-vertical-menu-item', condition: canAlign, menuItem: {clazz: 'fa fa-align-center', text: 'Vertically', action: 'alignVertical'}}
+{id: 'align-menu-item', groupMenuItem: {clazz: 'carbon carbon-align--horizontal-left', text: 'Align'}, menuItems: [

Review comment:
   Moving the carbon icons to the flowfont with commit af43e5c solves this.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] gresockj commented on a change in pull request #5044: NIFI-8462 Refactored PutSyslog and ListenSyslog using Netty

2021-05-14 Thread GitBox


gresockj commented on a change in pull request #5044:
URL: https://github.com/apache/nifi/pull/5044#discussion_r632070182



##
File path: 
nifi-nar-bundles/nifi-extension-utils/nifi-event-transport/src/main/java/org/apache/nifi/event/transport/message/ByteArrayMessage.java
##
@@ -0,0 +1,39 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.event.transport.message;
+
+/**
+ * Byte Array Message with Sender
+ */
+public class ByteArrayMessage {
+private byte[] message;

Review comment:
   Any benefit in making these final, since there are no getters?
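For illustration, a minimal immutable variant with final fields set in the constructor and defensive copies might look like the sketch below. This is a hypothetical stand-in, not the actual NiFi `ByteArrayMessage` class; names and fields are assumptions for the example.

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

// Hypothetical sketch of an immutable byte-array message with a sender;
// the real class in nifi-event-transport may differ.
final class ImmutableByteArrayMessage {
    private final byte[] message;
    private final String sender;

    ImmutableByteArrayMessage(final byte[] message, final String sender) {
        // Defensive copy so callers cannot mutate the stored array afterwards
        this.message = Arrays.copyOf(message, message.length);
        this.sender = sender;
    }

    byte[] getMessage() {
        // Copy again on the way out to preserve immutability
        return Arrays.copyOf(message, message.length);
    }

    String getSender() {
        return sender;
    }
}

public class Demo {
    public static void main(String[] args) {
        byte[] payload = "event".getBytes(StandardCharsets.UTF_8);
        ImmutableByteArrayMessage msg = new ImmutableByteArrayMessage(payload, "10.0.0.1");
        payload[0] = 'X'; // mutating the caller's array does not affect the message
        System.out.println(new String(msg.getMessage(), StandardCharsets.UTF_8));
    }
}
```

Marking the fields final also documents that instances are safe to share across Netty event-loop threads without synchronization.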

##
File path: 
nifi-nar-bundles/nifi-extension-utils/nifi-event-transport/src/main/java/org/apache/nifi/event/transport/netty/NettyEventSenderFactory.java
##
@@ -0,0 +1,163 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.event.transport.netty;
+
+import io.netty.bootstrap.Bootstrap;
+import io.netty.channel.Channel;
+import io.netty.channel.ChannelHandler;
+import io.netty.channel.ChannelInitializer;
+import io.netty.channel.ChannelOption;
+import io.netty.channel.EventLoopGroup;
+import io.netty.channel.pool.ChannelHealthChecker;
+import io.netty.channel.pool.ChannelPool;
+import io.netty.channel.pool.ChannelPoolHandler;
+import io.netty.channel.pool.FixedChannelPool;
+import io.netty.channel.socket.nio.NioDatagramChannel;
+import io.netty.channel.socket.nio.NioSocketChannel;
+import org.apache.nifi.event.transport.EventSender;
+import org.apache.nifi.event.transport.EventSenderFactory;
+import org.apache.nifi.event.transport.configuration.TransportProtocol;
+import org.apache.nifi.event.transport.netty.channel.pool.InitializingChannelPoolHandler;
+import org.apache.nifi.event.transport.netty.channel.ssl.ClientSslStandardChannelInitializer;
+import org.apache.nifi.event.transport.netty.channel.StandardChannelInitializer;
+
+import javax.net.ssl.SSLContext;
+import java.net.InetSocketAddress;
+import java.net.SocketAddress;
+import java.time.Duration;
+import java.util.Collections;
+import java.util.List;
+import java.util.Objects;
+import java.util.function.Supplier;
+
+/**
+ * Netty Event Sender Factory
+ */
+public class NettyEventSenderFactory extends EventLoopGroupFactory 
implements EventSenderFactory {
+private static final int MAX_PENDING_ACQUIRES = 1024;
+
+private Duration timeout = Duration.ofSeconds(30);
+
+private int maxConnections = Runtime.getRuntime().availableProcessors() * 2;
+
+private Supplier<List<ChannelHandler>> handlerSupplier = () -> Collections.emptyList();
+
+private final String address;

Review comment:
   Can you move the final ones to the top, just to make it easier to 
visually distinguish the mutable vs. immutable fields?
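The suggested layout can be sketched as below, with final construction-time fields grouped above the mutable tunables. The class and field names here are hypothetical placeholders, not the actual `NettyEventSenderFactory` code.

```java
import java.time.Duration;

// Hypothetical sketch: immutable configuration first, mutable tunables below.
class SenderFactorySketch {
    // final: fixed at construction time
    private final String address;
    private final int port;

    // mutable: adjustable before building the sender
    private Duration timeout = Duration.ofSeconds(30);
    private int maxConnections = Runtime.getRuntime().availableProcessors() * 2;

    SenderFactorySketch(final String address, final int port) {
        this.address = address;
        this.port = port;
    }

    void setTimeout(final Duration timeout) {
        this.timeout = timeout;
    }

    String describe() {
        return address + ":" + port + " timeout=" + timeout.getSeconds() + "s";
    }
}
```

Grouping this way lets a reader see at a glance which state is thread-safe by construction and which requires configuring before use.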

##
File path: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/ListenSyslog.java
##
@@ -152,7 +145,7 @@
 "The maximum number of Syslog events to add to a single FlowFile. If multiple events are available, they will be concatenated along with "
 + "the  up to this configured maximum number of messages")
 .addValidator(StandardValidators.POSITIVE_INTEGER_VALIDATOR)
-.expressionLanguageSupported(false)
+.expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)

Review comment:
   If you enable EL, you'll need to call 

[jira] [Created] (MINIFICPP-1561) Support rocksdb encryption

2021-05-14 Thread Adam Debreceni (Jira)
Adam Debreceni created MINIFICPP-1561:
-

 Summary: Support rocksdb encryption
 Key: MINIFICPP-1561
 URL: https://issues.apache.org/jira/browse/MINIFICPP-1561
 Project: Apache NiFi MiNiFi C++
  Issue Type: Improvement
Reporter: Adam Debreceni
Assignee: Adam Debreceni


We should make it possible to enable encryption in a rocksdb database.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi-minifi-cpp] martinzink commented on a change in pull request #1044: MINIFICPP-1526 Add ConsumeJournald to consume systemd journal messages

2021-05-14 Thread GitBox


martinzink commented on a change in pull request #1044:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1044#discussion_r632507305



##
File path: extensions/systemd/ConsumeJournald.cpp
##
@@ -0,0 +1,262 @@
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include "ConsumeJournald.h"
+
+#include 
+
+#include 
+#include "spdlog/spdlog.h"  // TODO(szaszm): make fmt directly available
+#include "utils/GeneralUtils.h"
+
+namespace org { namespace apache { namespace nifi { namespace minifi { 
namespace extensions { namespace systemd {
+
+constexpr const char* ConsumeJournald::CURSOR_KEY;
+const core::Relationship ConsumeJournald::Success("success", "Successfully 
consumed journal messages.");
+
+const core::Property ConsumeJournald::BatchSize = 
core::PropertyBuilder::createProperty("Batch Size")
+->withDescription("The maximum number of entries processed in a single 
execution.")
+->withDefaultValue(1000)
+->isRequired(true)
+->build();
+
+const core::Property ConsumeJournald::PayloadFormat = 
core::PropertyBuilder::createProperty("Payload Format")
+->withDescription("Configures flow file content formatting. Raw: only the 
message. Syslog: similar to syslog or journalctl output.")
+->withDefaultValue(PAYLOAD_FORMAT_SYSLOG)
+->withAllowableValues({PAYLOAD_FORMAT_RAW, 
PAYLOAD_FORMAT_SYSLOG})
+->isRequired(true)
+->build();
+
+const core::Property ConsumeJournald::IncludeTimestamp = 
core::PropertyBuilder::createProperty("Include Timestamp")
+->withDescription("Include message timestamp in the 'timestamp' 
attribute.")
+->withDefaultValue(true)
+->isRequired(true)
+->build();
+
+const core::Property ConsumeJournald::JournalType = 
core::PropertyBuilder::createProperty("Journal Type")
+->withDescription("Type of journal to consume.")
+->withDefaultValue(JOURNAL_TYPE_SYSTEM)
+->withAllowableValues({JOURNAL_TYPE_USER, 
JOURNAL_TYPE_SYSTEM, JOURNAL_TYPE_BOTH})
+->isRequired(true)
+->build();
+
+const core::Property ConsumeJournald::ProcessOldMessages = 
core::PropertyBuilder::createProperty("Process Old Messages")
+->withDescription("Process events created before the first usage 
(schedule) of the processor instance.")
+->withDefaultValue(false)
+->isRequired(true)
+->build();
+
+const core::Property ConsumeJournald::TimestampFormat = 
core::PropertyBuilder::createProperty("Timestamp Format")
+->withDescription("Format string to use when creating the timestamp 
attribute or writing messages in the syslog format.")
+->withDefaultValue("%x %X %Z")
+->isRequired(true)
+->build();
+
+ConsumeJournald::ConsumeJournald(const std::string& name, const utils::Identifier& id, std::unique_ptr&& libwrapper)
+:core::Processor{name, id}, libwrapper_{std::move(libwrapper)}
+{}
+
+void ConsumeJournald::initialize() {
+  setSupportedProperties({BatchSize, PayloadFormat, IncludeTimestamp, 
JournalType, ProcessOldMessages, TimestampFormat});
+  setSupportedRelationships({Success});
+
+  worker_ = utils::make_unique();
+}
+
+void ConsumeJournald::notifyStop() {
+  bool running = true;
+  if (!running_.compare_exchange_strong(running, false, std::memory_order_acq_rel) || !journal_) return;
+  worker_->enqueue([this] {
+journal_ = nullptr;
+  }).get();
+  worker_ = nullptr;
+}
+
+void ConsumeJournald::onSchedule(core::ProcessContext* const context, core::ProcessSessionFactory* const sessionFactory) {
+  gsl_Expects(context && sessionFactory && !running_ && worker_);
+  using JournalTypeEnum = systemd::JournalType;
+
+  const auto parse_payload_format = [](const std::string& property_value) -> utils::optional<systemd::PayloadFormat> {
+    if (utils::StringUtils::equalsIgnoreCase(property_value, PAYLOAD_FORMAT_RAW)) return systemd::PayloadFormat::Raw;
+    if (utils::StringUtils::equalsIgnoreCase(property_value, PAYLOAD_FORMAT_SYSLOG)) return systemd::PayloadFormat::Syslog;
+    return utils::nullopt;
+  };
+  const auto parse_journal_type = [](const std::string& property_value) -> utils::optional<JournalTypeEnum> {
+    if (utils::StringUtils::equalsIgnoreCase(property_value, JOURNAL_TYPE_USER)) return JournalTypeEnum::User;
+if 

[GitHub] [nifi] exceptionfactory commented on pull request #5068: NIFI-8516 Enabled HTTPS and Single User Authentication by default

2021-05-14 Thread GitBox


exceptionfactory commented on pull request #5068:
URL: https://github.com/apache/nifi/pull/5068#issuecomment-841239615


   > Great work, this has a nice feel to it when you put it all together! Just 
documentation comments below. Also, we should update the Getting Started guide, 
which still references localhost:8080.
   
   Thanks for the feedback @gresockj!  I pushed an update with additional details in the Getting Started guide describing the generated credentials and referencing the new default URL.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] exceptionfactory commented on a change in pull request #5068: NIFI-8516 Enabled HTTPS and Single User Authentication by default

2021-05-14 Thread GitBox


exceptionfactory commented on a change in pull request #5068:
URL: https://github.com/apache/nifi/pull/5068#discussion_r632521823



##
File path: nifi-docs/src/main/asciidoc/administration-guide.adoc
##
@@ -3527,7 +3527,7 @@ For example, to provide two additional network 
interfaces, a user could also spe
  +
 Providing three total network interfaces, including  
`nifi.web.http.network.interface.default`.
 |`nifi.web.https.host`|The HTTPS host. It is blank by default.

Review comment:
   Thanks for catching that detail, I pushed an update with changes.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] gresockj commented on a change in pull request #5072: WIP NIFI-8490: Adding inherited parameter contexts

2021-05-14 Thread GitBox


gresockj commented on a change in pull request #5072:
URL: https://github.com/apache/nifi/pull/5072#discussion_r632768725



##
File path: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-api/src/main/java/org/apache/nifi/web/api/ParameterContextResource.java
##
@@ -147,7 +147,10 @@ private void authorizeReadParameterContext(final String 
parameterContextId) {
 @ApiResponse(code = 404, message = "The specified resource could not 
be found."),
 @ApiResponse(code = 409, message = "The request was valid but NiFi was 
not in the appropriate state to process it. Retrying the same request later may 
be successful.")
 })
-public Response getParameterContext(@ApiParam("The ID of the Parameter Context") @PathParam("id") final String parameterContextId) {
+public Response getParameterContext(@ApiParam("The ID of the Parameter Context") @PathParam("id") final String parameterContextId,
+@ApiParam("Whether or not to include inherited parameters from other parameter contexts, and therefore also overridden values.  " +
+"If true, the result will be the 'effective' parameter context.") @QueryParam("includeInheritedParameters")
+@DefaultValue("false") final boolean includeInheritedParameters) {
 // authorize access
 authorizeReadParameterContext(parameterContextId);

Review comment:
   Good question. The current authorization code is not conditional on `includeInheritedParameters` (it always requires the user to have read access to all inherited param contexts), but I can see how that could be nonintuitive.  Perhaps it's better to err on the side of being more restrictive, though?  Any strong opinion on this?




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi-registry] sardell commented on pull request #319: NIFIREG-395 - Implemented the ability to import and export versioned flows through the UI

2021-05-14 Thread GitBox


sardell commented on pull request #319:
URL: https://github.com/apache/nifi-registry/pull/319#issuecomment-841502608


   @mtien-apache I just finished looking at the UI code and testing your 
changes in the UI. Nice work. I'm a +1 (non-binding).


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] exceptionfactory commented on pull request #5068: NIFI-8516 Enabled HTTPS and Single User Authentication by default

2021-05-14 Thread GitBox


exceptionfactory commented on pull request #5068:
URL: https://github.com/apache/nifi/pull/5068#issuecomment-841502727


   > Haven't tried it to confirm, but I don't think this will work with the 
Docker start scripts, which still default to no authentication (or is that a 
conscious decision)?
   
   Thanks for the feedback @ChrisSamo632, that's a good question.  The change to the default properties will impact the Docker image.  It looks like some adjustments to the startup property configuration in the Docker image are going to be necessary.  I will look into it and evaluate potential changes.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (NIFI-8535) PutDatabaseRecord should give a better message when the table cannot be found

2021-05-14 Thread Matt Burgess (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8535?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess updated NIFI-8535:
---
Fix Version/s: 1.14.0

> PutDatabaseRecord should give a better message when the table cannot be found
> -
>
> Key: NIFI-8535
> URL: https://issues.apache.org/jira/browse/NIFI-8535
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>Priority: Major
> Fix For: 1.14.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Currently PutDatabaseRecord calls DatabaseMetaData.getColumns() to try and 
> match the columns from the specified table to the fields in the incoming 
> record(s). However if the table itself is not found, this method returns an 
> empty ResultSet, so it is not known whether the table does not exist or if it 
> exists with no columns.
> PutDatabaseRecord should call DatabaseMetaData.getTables() if the column list 
> is empty, and give a more descriptive error message if the table is not 
> found. This can help the user determine whether there is a field/column 
> mismatch or a catalog/schema/table name mismatch.
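The proposed fallback can be sketched as a pure decision over the metadata results: if the column lookup came back empty, consult the known table names to choose a more specific message. This is an illustrative helper under assumed inputs, not the actual PutDatabaseRecord code, which works against `DatabaseMetaData.getColumns()` and `getTables()` result sets.

```java
import java.util.List;

// Hypothetical sketch of the improved error reporting: pick the message
// based on the (possibly empty) column list and the known table names.
class TableLookupSketch {
    static String describeProblem(final String tableName,
                                  final List<String> columns,
                                  final List<String> knownTables) {
        if (!columns.isEmpty()) {
            return null; // columns found, nothing to report
        }
        if (knownTables.stream().anyMatch(t -> t.equalsIgnoreCase(tableName))) {
            return "Table '" + tableName + "' exists but no columns were found; "
                    + "check for a field/column name mismatch";
        }
        return "Table '" + tableName + "' was not found; "
                + "check the catalog/schema/table name";
    }
}
```

Separating the "empty columns" case from the "missing table" case is what lets the user tell a field mismatch apart from a naming mismatch.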



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (NIFI-8333) Issues with invokehttp/SSL Service after "Terminate Task" (e.g. due to long running transactions) and misleading error-message

2021-05-14 Thread David Handermann (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17344877#comment-17344877
 ] 

David Handermann commented on NIFI-8333:


[~aklingens] Thanks for following up and confirming the same issue on version 
1.13.2.  Do you have any custom processors installed or custom code in one of 
the execute or invoke scripted processors?  Related to that, do you have an SSL 
Context Service configured for InvokeHTTP or not?  If you do not have one 
configured, then InvokeHTTP falls back to using the JVM default trust store.  
If there is custom code setting the Java System property 
{{javax.net.ssl.trustStore}}, that would explain why the issue only happens 
after restarting InvokeHTTP.

> Issues with invokehttp/SSL Service after "Terminate Task" (e.g. due to long 
> running transactions) and misleading error-message
> --
>
> Key: NIFI-8333
> URL: https://issues.apache.org/jira/browse/NIFI-8333
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.12.1, 1.13.2
>Reporter: Andreas Klingenstein
>Priority: Major
>  Labels: https, security
> Attachments: 1_config_invokeHttp_settings.png, 
> 2_config_invokeHttp_scheduling.png, 3_config_invokeHttp_config_1.png, 
> 4_config_invokeHttp_config_2.png, 5_config_invokeHttp_config_3.png, 
> 6_sslcontextservice_settings.png, 7_sslcontextservice_properties.png, 
> 8_config_invokeHttp_Terminate_Task.png, SSLHandshakeException.txt
>
>
> There seems to be an issue with InvokeHTTP 1.12.1 or 
> StandardRestrictedSSLContextService 1.12.1 or both.
> We observed a lot of "SSLHandshakeException"s (...) "unable to find valid 
> certification path to requested target" in some situations.
> At first we had a very close look into our keystore which is used in our 
> StandardRestrictedSSLContextService and made sure everything was fine. In the 
> end we figured out we had used an older, no longer valid oAuth token and the 
> webservice simply had rejected our request.
> Then we got the "SSLHandshakeException"s out of the blue in situations of 
> very intense testing where we started and stopped tests runs over and over 
> again. The oAuth token was still valid and other processors of the same kind 
> and even exact copies of the now failing processor worked fine.
> We discovered the following steps to reproduce the issue in our environment 
> (NiFi 1.12.1 & 1.13.2):
>  - create input data for the InvokeHTTP
>  - make sure the contacted endpoint does not respond but have a high timeout 
> setting in InvokeHttp to be able to terminate the processor
>  - start InvokeHTTP
>  - let it run for some time
>  - stop the InvokeHTTP-processor
>  - "Terminate" should be available in the context menu if there are still 
> open requests waiting for an answer (or timeout)
>  - terminate it
>  - wait until all tasks are finally stopped
>  - start the InvokeHTTP again
> "SSLHandshakeException" Errors should be visible now in the nifi-app.log on 
> all machines where the invokehttp-processor had to terminate some tasks. 
> They'll
>  stay until you restart those nifi instances
> Our configuration
>  - Cluster: 3 server, one nifi instance each
>  - nifi in "http-mode" on port 8081 (nifi-properties)
> more details in the screenshots
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi] ChrisSamo632 commented on pull request #5068: NIFI-8516 Enabled HTTPS and Single User Authentication by default

2021-05-14 Thread GitBox


ChrisSamo632 commented on pull request #5068:
URL: https://github.com/apache/nifi/pull/5068#issuecomment-841496430


   Haven't tried it to confirm, but I don't think this will work with the 
Docker start scripts, which still default to no authentication (or is that a 
conscious decision)?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] mcgilman commented on a change in pull request #5072: WIP NIFI-8490: Adding inherited parameter contexts

2021-05-14 Thread GitBox


mcgilman commented on a change in pull request #5072:
URL: https://github.com/apache/nifi/pull/5072#discussion_r632759213



##
File path: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-api/src/main/java/org/apache/nifi/web/api/ParameterContextResource.java
##
@@ -147,7 +147,10 @@ private void authorizeReadParameterContext(final String 
parameterContextId) {
 @ApiResponse(code = 404, message = "The specified resource could not 
be found."),
 @ApiResponse(code = 409, message = "The request was valid but NiFi was 
not in the appropriate state to process it. Retrying the same request later may 
be successful.")
 })
-public Response getParameterContext(@ApiParam("The ID of the Parameter Context") @PathParam("id") final String parameterContextId) {
+public Response getParameterContext(@ApiParam("The ID of the Parameter Context") @PathParam("id") final String parameterContextId,
+@ApiParam("Whether or not to include inherited parameters from other parameter contexts, and therefore also overridden values.  " +
+"If true, the result will be the 'effective' parameter context.") @QueryParam("includeInheritedParameters")
+@DefaultValue("false") final boolean includeInheritedParameters) {
 // authorize access
 authorizeReadParameterContext(parameterContextId);

Review comment:
   When `includeInheritedParameters` is true, should we be verifying that 
the current user has permissions to every inherited parameter context? 
   
   Is this a behavior we want to be conditional? Or once a Parameter Context 
has included inherited Parameter Contexts and it becomes unauthorized for some 
user without permissions to every inherited Parameter Context should we always 
prevent access to the requested Parameter Context regardless of the value of 
`includeInheritedParameters`? 
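The stricter option discussed above (always requiring read access to every inherited context, regardless of the query parameter) could be sketched like this, with hypothetical types standing in for the real NiFi authorizer and parameter-context model:

```java
import java.util.List;
import java.util.function.Predicate;

// Hypothetical model: a parameter context with the contexts it inherits from.
class ContextSketch {
    final String id;
    final List<ContextSketch> inherited;

    ContextSketch(final String id, final List<ContextSketch> inherited) {
        this.id = id;
        this.inherited = inherited;
    }

    // Authorized only if the user can read this context and, recursively,
    // every context it inherits from.
    static boolean canReadFully(final ContextSketch ctx, final Predicate<String> canRead) {
        if (!canRead.test(ctx.id)) {
            return false;
        }
        return ctx.inherited.stream().allMatch(c -> canReadFully(c, canRead));
    }
}
```

Under this rule, the authorization outcome no longer depends on `includeInheritedParameters`, which avoids the situation where the same context is readable or not depending on a query parameter.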




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi-registry] mtien-apache commented on a change in pull request #319: NIFIREG-395 - Implemented the ability to import and export versioned flows through the UI

2021-05-14 Thread GitBox


mtien-apache commented on a change in pull request #319:
URL: https://github.com/apache/nifi-registry/pull/319#discussion_r632768175



##
File path: 
nifi-registry-core/nifi-registry-web-api/src/main/java/org/apache/nifi/registry/web/api/BucketFlowResource.java
##
@@ -291,6 +294,42 @@ public Response createFlowVersion(
 return 
Response.status(Response.Status.OK).entity(createdSnapshot).build();
 }
 
+@POST
+@Path("{flowId}/versions/import")
+@Consumes(MediaType.APPLICATION_JSON)
+@Produces(MediaType.APPLICATION_JSON)
+@ApiOperation(
+value = "Import flow version",
+            notes = "Import the next version of a flow. The version number of the object being created will be the " +
+                    "next available version integer. Flow versions are immutable after they are created.",
+response = VersionedFlowSnapshot.class,
+extensions = {
+@Extension(name = "access-policy", properties = {
+                    @ExtensionProperty(name = "action", value = "write"),
+                    @ExtensionProperty(name = "resource", value = "/buckets/{bucketId}") })
+}
+)
+@ApiResponses({
+@ApiResponse(code = 400, message = HttpStatusMessages.MESSAGE_400),
+@ApiResponse(code = 401, message = HttpStatusMessages.MESSAGE_401),
+@ApiResponse(code = 403, message = HttpStatusMessages.MESSAGE_403),
+@ApiResponse(code = 404, message = HttpStatusMessages.MESSAGE_404),
+        @ApiResponse(code = 409, message = HttpStatusMessages.MESSAGE_409) })
+public Response importVersionedFlow(
+@PathParam("bucketId")
+@ApiParam("The bucket identifier")
+final String bucketId,
+@PathParam("flowId")
+@ApiParam(value = "The flow identifier")
+final String flowId,
+        @ApiParam("file") final VersionedFlowSnapshot versionedFlowSnapshot,
+@HeaderParam("Comments") final String comments) {
+
+        final VersionedFlowSnapshot createdSnapshot = serviceFacade.importVersionedFlowSnapshot(versionedFlowSnapshot, bucketId, flowId, comments);
+        publish(EventFactory.flowVersionCreated(createdSnapshot));
+        return Response.status(Response.Status.CREATED).entity(createdSnapshot).build();

Review comment:
   @exceptionfactory I've added the header. Thanks!








[GitHub] [nifi] thenatog commented on pull request #5077: NIFI-8604 Upgraded Apache Accumulo to 2.0.1

2021-05-14 Thread GitBox


thenatog commented on pull request #5077:
URL: https://github.com/apache/nifi/pull/5077#issuecomment-841537814


   I checked the Accumulo release notes and the commits for the 2.0.1 tag; it looks like there shouldn't be any compatibility issues with this patch version (as we'd expect). Do you foresee any issues with this upgrade? I ran the ScanAccumuloIT, which passed just fine.






[GitHub] [nifi] thenatog commented on pull request #5068: NIFI-8516 Enabled HTTPS and Single User Authentication by default

2021-05-14 Thread GitBox


thenatog commented on pull request #5068:
URL: https://github.com/apache/nifi/pull/5068#issuecomment-841539840


   Will review






[GitHub] [nifi] asfgit closed pull request #5074: NIFI-8343 - Updated solr from 8.4.1 to 8.8.2. Small code changes were…

2021-05-14 Thread GitBox


asfgit closed pull request #5074:
URL: https://github.com/apache/nifi/pull/5074


   






[jira] [Commented] (NIFI-8343) Upgrade solr dependency

2021-05-14 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17344708#comment-17344708
 ] 

ASF subversion and git services commented on NIFI-8343:
---

Commit 7e54ef9421901f15bf7030cd71087457b65b5d68 in nifi's branch 
refs/heads/main from Nathan Gough
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=7e54ef9 ]

NIFI-8343 - Updated solr from 8.4.1 to 8.8.2. Small code changes were required.

This closes #5074

Signed-off-by: David Handermann 


> Upgrade solr dependency
> ---
>
> Key: NIFI-8343
> URL: https://issues.apache.org/jira/browse/NIFI-8343
> Project: Apache NiFi
>  Issue Type: Sub-task
>Affects Versions: 1.13.0, 1.13.1
>Reporter: Nathan Gough
>Assignee: Nathan Gough
>Priority: Major
>  Labels: dependency, dependency-upgrade, solr
> Fix For: 1.14.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Upgrade the solr dependency from 8.4.1 to 8.8.1. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi] simonbence commented on a change in pull request #5059: NIFI-8519 Adding HDFS support for NAR autoload

2021-05-14 Thread GitBox


simonbence commented on a change in pull request #5059:
URL: https://github.com/apache/nifi/pull/5059#discussion_r632378409



##
File path: 
nifi-nar-bundles/nifi-hadoop-bundle/nifi-hdfs-processors/src/main/java/org/apache/nifi/nar/hadoop/HDFSNarProvider.java
##
@@ -0,0 +1,231 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.nar.hadoop;
+
+import org.apache.commons.io.IOUtils;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.net.NetUtils;
+import org.apache.hadoop.security.UserGroupInformation;
+import org.apache.nifi.hadoop.SecurityUtil;
+import org.apache.nifi.nar.NarProvider;
+import org.apache.nifi.nar.NarProviderInitializationContext;
+import org.apache.nifi.nar.hadoop.util.ExtensionFilter;
+import org.apache.nifi.processors.hadoop.ExtendedConfiguration;
+import org.apache.nifi.processors.hadoop.HdfsResources;
+import org.apache.nifi.security.krb.KerberosKeytabUser;
+import org.apache.nifi.security.krb.KerberosPasswordUser;
+import org.apache.nifi.security.krb.KerberosUser;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import javax.net.SocketFactory;
+import java.io.IOException;
+import java.io.InputStream;
+import java.net.InetSocketAddress;
+import java.net.Socket;
+import java.net.URI;
+import java.security.PrivilegedExceptionAction;
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.List;
+import java.util.Objects;
+import java.util.stream.Collectors;
+
+public class HDFSNarProvider implements NarProvider {
+    private static final Logger LOGGER = LoggerFactory.getLogger(HDFSNarProvider.class);
+
+    private static final String RESOURCES_PARAMETER = "resources";
+    private static final String SOURCE_DIRECTORY_PARAMETER = "source.directory";
+    private static final String KERBEROS_PRINCIPAL_PARAMETER = "kerberos.principal";
+    private static final String KERBEROS_KEYTAB_PARAMETER = "kerberos.keytab";
+    private static final String KERBEROS_PASSWORD_PARAMETER = "kerberos.password";
+
+    private static final String NAR_EXTENSION = "nar";
+    private static final String DELIMITER = "/";
+    private static final int BUFFER_SIZE_DEFAULT = 4096;
+    private static final Object RESOURCES_LOCK = new Object();
+
+    private volatile List<String> resources = null;
+    private volatile Path sourceDirectory = null;
+
+    private volatile NarProviderInitializationContext context;
+
+    private volatile boolean initialized = false;
+
+    public void initialize(final NarProviderInitializationContext context) {
+        resources = Arrays.stream(Objects.requireNonNull(context.getParameters().get(RESOURCES_PARAMETER)).split(",")).map(s -> s.trim()).collect(Collectors.toList());
+
+        if (resources.isEmpty()) {
+            throw new IllegalArgumentException("At least one HDFS configuration resource is necessary");
+        }
+
+        this.sourceDirectory = new Path(Objects.requireNonNull(context.getParameters().get(SOURCE_DIRECTORY_PARAMETER)));
+        this.context = context;
+        this.initialized = true;
+    }
+
+@Override
+    public Collection<String> listNars() throws IOException {
+        if (!initialized) {
+            LOGGER.error("Provider is not initialized");
+        }
+
+        final HdfsResources hdfsResources = getHdfsResources();
+        final FileStatus[] fileStatuses = hdfsResources.getFileSystem().listStatus(sourceDirectory, new ExtensionFilter(NAR_EXTENSION));
+
+        final List<String> result = Arrays.stream(fileStatuses)
+                .filter(fileStatus -> fileStatus.isFile())
+                .map(fileStatus -> fileStatus.getPath().getName())
+                .collect(Collectors.toList());
+
+        if (LOGGER.isDebugEnabled()) {
+            LOGGER.debug("The following nars were found: " + String.join(", ", result));
+        }
+
+        return result;
+    }
+@Override
+    public InputStream fetchNarContents(final String location) throws IOException {
+        if (!initialized) {
+            LOGGER.error("Provider is not initialized");

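The parameter handling in `initialize()` above can be exercised in isolation. This stdlib-only sketch mirrors its split/trim/validate logic; the class and method names are illustrative, not part of the PR:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

// Sketch: a comma-separated "resources" parameter is split and trimmed,
// and at least one entry is required, as in HDFSNarProvider.initialize().
final class ResourceParamSketch {
    static List<String> parseResources(final String raw) {
        final List<String> resources = Arrays.stream(raw.split(","))
                .map(String::trim)
                .collect(Collectors.toList());
        if (resources.isEmpty()) {
            throw new IllegalArgumentException("At least one HDFS configuration resource is necessary");
        }
        return resources;
    }
}
```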
[GitHub] [nifi] pvillard31 commented on pull request #3511: NIFI-6175 Spark Livy - Improving Livy

2021-05-14 Thread GitBox


pvillard31 commented on pull request #3511:
URL: https://github.com/apache/nifi/pull/3511#issuecomment-841119994


   @patricker - any chance you can rebase this against main/latest?






[jira] [Resolved] (NIFI-8343) Upgrade solr dependency

2021-05-14 Thread David Handermann (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Handermann resolved NIFI-8343.

Resolution: Fixed

> Upgrade solr dependency
> ---
>
> Key: NIFI-8343
> URL: https://issues.apache.org/jira/browse/NIFI-8343
> Project: Apache NiFi
>  Issue Type: Sub-task
>Affects Versions: 1.13.0, 1.13.1
>Reporter: Nathan Gough
>Assignee: Nathan Gough
>Priority: Major
>  Labels: dependency, dependency-upgrade, solr
> Fix For: 1.14.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Upgrade the solr dependency from 8.4.1 to 8.8.1. 





[GitHub] [nifi] exceptionfactory commented on a change in pull request #5044: NIFI-8462 Refactored PutSyslog and ListenSyslog using Netty

2021-05-14 Thread GitBox


exceptionfactory commented on a change in pull request #5044:
URL: https://github.com/apache/nifi/pull/5044#discussion_r632532916



##
File path: 
nifi-nar-bundles/nifi-extension-utils/nifi-event-transport/src/main/java/org/apache/nifi/event/transport/netty/EventLoopGroupFactory.java
##
@@ -0,0 +1,63 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.event.transport.netty;
+
+import io.netty.channel.EventLoopGroup;
+import io.netty.channel.nio.NioEventLoopGroup;
+import io.netty.util.concurrent.DefaultThreadFactory;
+
+import java.util.Objects;
+import java.util.concurrent.ThreadFactory;
+
+/**
+ * Event Loop Group Factory for standardized instance creation
+ */
+class EventLoopGroupFactory {
+    private static final String DEFAULT_THREAD_NAME_PREFIX = "NettyEventLoopGroup";
+
+    private static final boolean DAEMON_THREAD_ENABLED = true;
+
+    private String threadNamePrefix = DEFAULT_THREAD_NAME_PREFIX;
+
+    private int workerThreads;
+
+    /**
+     * Set Thread Name Prefix used in Netty NioEventLoopGroup defaults to NettyChannel
+     *
+     * @param threadNamePrefix Thread Name Prefix
+     */
+    public void setThreadNamePrefix(final String threadNamePrefix) {
+        this.threadNamePrefix = Objects.requireNonNull(threadNamePrefix, "Thread Name Prefix required");
+    }
+
+    /**
+     * Set Worker Threads used in Netty NioEventLoopGroup with 0 interpreted as the default based on available processors
+     *
+     * @param workerThreads NioEventLoopGroup Worker Threads
+     */
+    public void setWorkerThreads(final int workerThreads) {
+        this.workerThreads = workerThreads;
+    }
+
+    EventLoopGroup getEventLoopGroup() {

Review comment:
   Good question, I originally intended for this base class and the 
inheriting subclasses to be scoped as narrowly as possible so that 
implementations would remain in this package.  Changing this to protected would 
maintain that intent while allowing a bit more flexibility, so will change to 
protected.








[GitHub] [nifi] exceptionfactory commented on a change in pull request #5044: NIFI-8462 Refactored PutSyslog and ListenSyslog using Netty

2021-05-14 Thread GitBox


exceptionfactory commented on a change in pull request #5044:
URL: https://github.com/apache/nifi/pull/5044#discussion_r632534325



##
File path: 
nifi-nar-bundles/nifi-extension-utils/nifi-event-transport/src/main/java/org/apache/nifi/event/transport/netty/NettyEventSenderFactory.java
##
@@ -0,0 +1,163 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.event.transport.netty;
+
+import io.netty.bootstrap.Bootstrap;
+import io.netty.channel.Channel;
+import io.netty.channel.ChannelHandler;
+import io.netty.channel.ChannelInitializer;
+import io.netty.channel.ChannelOption;
+import io.netty.channel.EventLoopGroup;
+import io.netty.channel.pool.ChannelHealthChecker;
+import io.netty.channel.pool.ChannelPool;
+import io.netty.channel.pool.ChannelPoolHandler;
+import io.netty.channel.pool.FixedChannelPool;
+import io.netty.channel.socket.nio.NioDatagramChannel;
+import io.netty.channel.socket.nio.NioSocketChannel;
+import org.apache.nifi.event.transport.EventSender;
+import org.apache.nifi.event.transport.EventSenderFactory;
+import org.apache.nifi.event.transport.configuration.TransportProtocol;
+import org.apache.nifi.event.transport.netty.channel.pool.InitializingChannelPoolHandler;
+import org.apache.nifi.event.transport.netty.channel.ssl.ClientSslStandardChannelInitializer;
+import org.apache.nifi.event.transport.netty.channel.StandardChannelInitializer;
+
+import javax.net.ssl.SSLContext;
+import java.net.InetSocketAddress;
+import java.net.SocketAddress;
+import java.time.Duration;
+import java.util.Collections;
+import java.util.List;
+import java.util.Objects;
+import java.util.function.Supplier;
+
+/**
+ * Netty Event Sender Factory
+ */
+public class NettyEventSenderFactory extends EventLoopGroupFactory implements EventSenderFactory {
+    private static final int MAX_PENDING_ACQUIRES = 1024;
+
+    private Duration timeout = Duration.ofSeconds(30);
+
+    private int maxConnections = Runtime.getRuntime().availableProcessors() * 2;
+
+    private Supplier<List<ChannelHandler>> handlerSupplier = () -> Collections.emptyList();
+
+private final String address;
+
+private final int port;
+
+private final TransportProtocol protocol;
+
+private SSLContext sslContext;
+
+    public NettyEventSenderFactory(final String address, final int port, final TransportProtocol protocol) {
+this.address = address;
+this.port = port;
+this.protocol = protocol;
+}
+
+/**
+ * Set Channel Handler Supplier
+ *
+ * @param handlerSupplier Channel Handler Supplier
+ */
+    public void setHandlerSupplier(final Supplier<List<ChannelHandler>> handlerSupplier) {
+this.handlerSupplier = Objects.requireNonNull(handlerSupplier);
+}
+
+/**
+ * Set SSL Context to enable TLS Channel Handler
+ *
+ * @param sslContext SSL Context
+ */
+public void setSslContext(final SSLContext sslContext) {
+this.sslContext = sslContext;
+}
+
+/**
+ * Set Timeout for Connections and Communication
+ *
+ * @param timeout Timeout Duration
+ */
+public void setTimeout(final Duration timeout) {
+this.timeout = Objects.requireNonNull(timeout, "Timeout required");
+}
+
+/**
+ * Set Maximum Connections for Channel Pool
+ *
+     * @param maxConnections Maximum Number of connections defaults to available processors multiplied by 2
+ */
+public void setMaxConnections(final int maxConnections) {
+this.maxConnections = maxConnections;
+}
+
+/**
+ * Get Event Sender with connected Channel
+ *
+ * @return Connected Event Sender
+ */
+public EventSender getEventSender() {
+final Bootstrap bootstrap = new Bootstrap();
+bootstrap.remoteAddress(new InetSocketAddress(address, port));
+final EventLoopGroup group = getEventLoopGroup();
+bootstrap.group(group);
+
+if (TransportProtocol.UDP.equals(protocol)) {
+bootstrap.channel(NioDatagramChannel.class);
+} else {
+bootstrap.channel(NioSocketChannel.class);
+}
+
+setChannelOptions(bootstrap);
+return getConfiguredEventSender(bootstrap);
+}
+
+private void 

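For context on the channel pooling used in `getEventSender()` above, here is a stdlib-only sketch of the fixed-pool idea behind Netty's `FixedChannelPool`: at most a configured number of objects are ever created, and released objects are reused before new ones are made. This is an illustration under those assumptions, not NiFi's or Netty's implementation:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.Supplier;

// Sketch of a fixed-size object pool: bounded creation, reuse on release.
final class FixedPoolSketch<T> {
    private final Deque<T> idle = new ArrayDeque<>();
    private final Supplier<T> factory;
    private final int max;
    private int created = 0;

    FixedPoolSketch(final int max, final Supplier<T> factory) {
        this.max = max;
        this.factory = factory;
    }

    synchronized T acquire() {
        final T pooled = idle.poll();
        if (pooled != null) {
            return pooled; // reuse an idle object before creating a new one
        }
        if (created < max) {
            created++;
            return factory.get(); // grow up to the configured maximum
        }
        throw new IllegalStateException("pool exhausted: " + max + " objects in use");
    }

    synchronized void release(final T object) {
        idle.push(object); // return the object for reuse
    }
}
```

`FixedChannelPool` additionally queues pending acquires (the `MAX_PENDING_ACQUIRES` constant in the diff) rather than failing fast as this sketch does.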
[jira] [Updated] (NIFI-8527) LivySessionController validator issue

2021-05-14 Thread Celso Marques (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Celso Marques updated NIFI-8527:

Attachment: livy-error.png
livy-config.png
hadoop.png
docker.png

> LivySessionController validator issue
> -
>
> Key: NIFI-8527
> URL: https://issues.apache.org/jira/browse/NIFI-8527
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.13.1, 1.13.2
>Reporter: Celso Marques
>Assignee: Matt Burgess
>Priority: Major
> Attachments: docker.png, hadoop.png, livy-config.png, livy-error.png, 
> nifi-livy.png
>
>
> In  
> [NIFI-5906|https://github.com/apache/nifi/commit/1955e7ac617c2ce962a388076b4d8b0f646fe449]
>  the FILE_EXISTS_VALIDATOR was removed, but a new issue was introduced, as the attached image shows.





[jira] [Commented] (NIFI-8527) LivySessionController validator issue

2021-05-14 Thread Celso Marques (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17344606#comment-17344606
 ] 

Celso Marques commented on NIFI-8527:
-

Unfortunately no! I tried pointing to the HDFS location and it didn't work, as the attached images show.

!livy-error.png!

!livy-config.png!

!docker.png!

!hadoop.png!

> LivySessionController validator issue
> -
>
> Key: NIFI-8527
> URL: https://issues.apache.org/jira/browse/NIFI-8527
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.13.1, 1.13.2
>Reporter: Celso Marques
>Assignee: Matt Burgess
>Priority: Major
> Attachments: docker.png, hadoop.png, livy-config.png, livy-error.png, 
> nifi-livy.png
>
>
> In  
> [NIFI-5906|https://github.com/apache/nifi/commit/1955e7ac617c2ce962a388076b4d8b0f646fe449]
>  the FILE_EXISTS_VALIDATOR was removed, but a new issue was introduced, as the attached image shows.





[GitHub] [nifi] gresockj commented on a change in pull request #5044: NIFI-8462 Refactored PutSyslog and ListenSyslog using Netty

2021-05-14 Thread GitBox


gresockj commented on a change in pull request #5044:
URL: https://github.com/apache/nifi/pull/5044#discussion_r632555033



##
File path: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/ListenSyslog.java
##
@@ -152,7 +145,7 @@
                "The maximum number of Syslog events to add to a single FlowFile. If multiple events are available, they will be concatenated along with "
                        + "the <Message Delimiter> up to this configured maximum number of messages")
 .addValidator(StandardValidators.POSITIVE_INTEGER_VALIDATOR)
-.expressionLanguageSupported(false)
+
.expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)

Review comment:
   It might still make sense to let it be `ExpressionLanguageScope.VARIABLE_REGISTRY` if you want to allow the additional flexibility, though it seems like a setting that wouldn't likely be externalized to the variable registry.
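The difference between the two scopes being weighed here can be shown with a stdlib-only sketch; the names are hypothetical and this is not NiFi's expression evaluator. `VARIABLE_REGISTRY` resolves only environment-level variables, while `FLOWFILE_ATTRIBUTES` also consults per-FlowFile attributes:

```java
import java.util.Map;
import java.util.Optional;

// Sketch: attribute-scoped resolution falls back to the variable registry,
// while registry-only resolution never sees FlowFile attributes.
final class ScopeSketch {
    static String resolve(final String name,
                          final Map<String, String> variableRegistry,
                          final Optional<Map<String, String>> flowFileAttributes) {
        // FLOWFILE_ATTRIBUTES scope: per-FlowFile values win over registry entries
        if (flowFileAttributes.isPresent() && flowFileAttributes.get().containsKey(name)) {
            return flowFileAttributes.get().get(name);
        }
        // VARIABLE_REGISTRY scope: only environment-level variables apply
        return variableRegistry.get(name);
    }
}
```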








[jira] [Updated] (NIFI-8538) Upgrade Apache Commons IO to 2.8.0

2021-05-14 Thread Matt Burgess (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess updated NIFI-8538:
---
Fix Version/s: 1.14.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Upgrade Apache Commons IO to 2.8.0
> --
>
> Key: NIFI-8538
> URL: https://issues.apache.org/jira/browse/NIFI-8538
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.13.2
>Reporter: David Handermann
>Assignee: David Handermann
>Priority: Minor
>  Labels: security
> Fix For: 1.14.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Apache Commons IO version 2.6 and below are vulnerable to 
> [CVE-2021-29425|https://nvd.nist.gov/vuln/detail/CVE-2021-29425]. Although 
> NiFi does not appear to have any direct calls to 
> {{FileNameUtils.normalize()}}, numerous libraries leverage Commons IO. 
> Upgrading to version 2.8.0 addresses this issue and also includes a number of 
> other minor bug fixes.





[GitHub] [nifi] mattyb149 closed pull request #5073: NIFI-8538 Upgraded Apache Commons IO to 2.8.0

2021-05-14 Thread GitBox


mattyb149 closed pull request #5073:
URL: https://github.com/apache/nifi/pull/5073


   






[jira] [Commented] (NIFI-8538) Upgrade Apache Commons IO to 2.8.0

2021-05-14 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17344596#comment-17344596
 ] 

ASF subversion and git services commented on NIFI-8538:
---

Commit 6776765a928952a68373d633fac1d848ddcc7b50 in nifi's branch 
refs/heads/main from David Handermann
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=6776765 ]

NIFI-8538 Upgraded Apache Commons IO to 2.8.0

- Upgraded direct dependencies from 2.6 to 2.8.0
- Added dependency management configuration to use 2.8.0 for some modules
- Updated scripted Groovy tests to avoid copying unnecessary files

Signed-off-by: Matthew Burgess 

This closes #5073


> Upgrade Apache Commons IO to 2.8.0
> --
>
> Key: NIFI-8538
> URL: https://issues.apache.org/jira/browse/NIFI-8538
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.13.2
>Reporter: David Handermann
>Assignee: David Handermann
>Priority: Minor
>  Labels: security
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Apache Commons IO version 2.6 and below are vulnerable to 
> [CVE-2021-29425|https://nvd.nist.gov/vuln/detail/CVE-2021-29425]. Although 
> NiFi does not appear to have any direct calls to 
> {{FileNameUtils.normalize()}}, numerous libraries leverage Commons IO. 
> Upgrading to version 2.8.0 addresses this issue and also includes a number of 
> other minor bug fixes.





[GitHub] [nifi] exceptionfactory commented on a change in pull request #5044: NIFI-8462 Refactored PutSyslog and ListenSyslog using Netty

2021-05-14 Thread GitBox


exceptionfactory commented on a change in pull request #5044:
URL: https://github.com/apache/nifi/pull/5044#discussion_r632533499



##
File path: 
nifi-nar-bundles/nifi-extension-utils/nifi-event-transport/src/main/java/org/apache/nifi/event/transport/netty/NettyEventSenderFactory.java
##
@@ -0,0 +1,163 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.event.transport.netty;
+
+import io.netty.bootstrap.Bootstrap;
+import io.netty.channel.Channel;
+import io.netty.channel.ChannelHandler;
+import io.netty.channel.ChannelInitializer;
+import io.netty.channel.ChannelOption;
+import io.netty.channel.EventLoopGroup;
+import io.netty.channel.pool.ChannelHealthChecker;
+import io.netty.channel.pool.ChannelPool;
+import io.netty.channel.pool.ChannelPoolHandler;
+import io.netty.channel.pool.FixedChannelPool;
+import io.netty.channel.socket.nio.NioDatagramChannel;
+import io.netty.channel.socket.nio.NioSocketChannel;
+import org.apache.nifi.event.transport.EventSender;
+import org.apache.nifi.event.transport.EventSenderFactory;
+import org.apache.nifi.event.transport.configuration.TransportProtocol;
+import org.apache.nifi.event.transport.netty.channel.pool.InitializingChannelPoolHandler;
+import org.apache.nifi.event.transport.netty.channel.ssl.ClientSslStandardChannelInitializer;
+import org.apache.nifi.event.transport.netty.channel.StandardChannelInitializer;
+
+import javax.net.ssl.SSLContext;
+import java.net.InetSocketAddress;
+import java.net.SocketAddress;
+import java.time.Duration;
+import java.util.Collections;
+import java.util.List;
+import java.util.Objects;
+import java.util.function.Supplier;
+
+/**
+ * Netty Event Sender Factory
+ */
+public class NettyEventSenderFactory extends EventLoopGroupFactory implements EventSenderFactory {
+    private static final int MAX_PENDING_ACQUIRES = 1024;
+
+    private Duration timeout = Duration.ofSeconds(30);
+
+    private int maxConnections = Runtime.getRuntime().availableProcessors() * 2;
+
+    private Supplier<List<ChannelHandler>> handlerSupplier = () -> Collections.emptyList();
+
+private final String address;

Review comment:
   Sure, thanks for the suggestion.








[GitHub] [nifi] exceptionfactory commented on a change in pull request #5044: NIFI-8462 Refactored PutSyslog and ListenSyslog using Netty

2021-05-14 Thread GitBox


exceptionfactory commented on a change in pull request #5044:
URL: https://github.com/apache/nifi/pull/5044#discussion_r632536283



##
File path: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/PutSyslog.java
##
@@ -150,6 +149,7 @@
 "messages will be sent over a secure connection.")
 .required(false)
 .identifiesControllerService(SSLContextService.class)
+.dependsOn(PROTOCOL, TCP_VALUE)

Review comment:
   Good catch; I had intended to update both processors to make the SSL Context Service depend on TCP.
   
   I noticed the issue with the SSL Context Service when changing from TCP to UDP. It seems like something that might make sense to address at the framework level.
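The `.dependsOn(PROTOCOL, TCP_VALUE)` semantics in the diff above amount to: a dependent property only applies (and is only validated) when the property it depends on holds a given value. A minimal stdlib sketch, with illustrative names rather than NiFi's `PropertyDescriptor` API:

```java
import java.util.Map;

// Sketch: a dependent property is relevant only when the property it
// depends on has the required value, mirroring .dependsOn(PROTOCOL, TCP).
final class DependsOnSketch {
    static boolean isRelevant(final Map<String, String> config,
                              final String dependsOnProperty,
                              final String requiredValue) {
        return requiredValue.equals(config.get(dependsOnProperty));
    }
}
```

Under this model, an SSL Context Service configured while Protocol is UDP would simply be ignored rather than rendering the processor invalid.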








[GitHub] [nifi] exceptionfactory commented on pull request #5044: NIFI-8462 Refactored PutSyslog and ListenSyslog using Netty

2021-05-14 Thread GitBox


exceptionfactory commented on pull request #5044:
URL: https://github.com/apache/nifi/pull/5044#issuecomment-841257313


   Thanks for the thorough review and helpful feedback @gresockj!  I added a 
commit including the changes and rebased on main to incorporate recent updates.






[GitHub] [nifi] exceptionfactory commented on a change in pull request #5044: NIFI-8462 Refactored PutSyslog and ListenSyslog using Netty

2021-05-14 Thread GitBox


exceptionfactory commented on a change in pull request #5044:
URL: https://github.com/apache/nifi/pull/5044#discussion_r632529701



##
File path: 
nifi-nar-bundles/nifi-extension-utils/nifi-event-transport/src/main/java/org/apache/nifi/event/transport/message/ByteArrayMessage.java
##
@@ -0,0 +1,39 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.event.transport.message;
+
+/**
+ * Byte Array Message with Sender
+ */
+public class ByteArrayMessage {
+private byte[] message;

Review comment:
   Good point, will make the change.
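The review exchange above concerns the mutable `byte[]` field. A common fix is defensive copying on the way in and on the way out; the sketch below is a hypothetical simplified holder illustrating the idiom, not the merged NiFi class:

```java
import java.util.Arrays;

// Hedged sketch: defensive copying for a byte-array message holder.
// Class and accessor names only mirror the ByteArrayMessage discussed
// above; this is an illustration, not the NiFi implementation.
final class DefensiveByteArrayMessage {
    private final byte[] message;
    private final String sender;

    DefensiveByteArrayMessage(final byte[] message, final String sender) {
        // Copy on the way in so later mutation of the caller's array
        // cannot change this message's contents.
        this.message = Arrays.copyOf(message, message.length);
        this.sender = sender;
    }

    byte[] getMessage() {
        // Copy on the way out for the same reason.
        return Arrays.copyOf(message, message.length);
    }

    String getSender() {
        return sender;
    }
}
```

Marking the field `final` and copying in both directions keeps the message effectively immutable from the caller's point of view.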








[GitHub] [nifi] mattyb149 commented on pull request #5073: NIFI-8538 Upgraded Apache Commons IO to 2.8.0

2021-05-14 Thread GitBox


mattyb149 commented on pull request #5073:
URL: https://github.com/apache/nifi/pull/5073#issuecomment-841247051


   +1 LGTM, ran various flows using some affected processors, everything looks 
good. Thanks for the upgrade! Merging to main






[GitHub] [nifi] simonbence commented on a change in pull request #5059: NIFI-8519 Adding HDFS support for NAR autoload

2021-05-14 Thread GitBox


simonbence commented on a change in pull request #5059:
URL: https://github.com/apache/nifi/pull/5059#discussion_r632531276



##
File path: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-nar-loading-utils/src/main/java/org/apache/nifi/nar/NarProviderTask.java
##
@@ -0,0 +1,97 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.nar;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.File;
+import java.io.InputStream;
+import java.nio.file.Files;
+import java.nio.file.StandardCopyOption;
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.Set;
+import java.util.UUID;
+import java.util.stream.Collectors;
+
+final class NarProviderTask implements Runnable {
+private static final Logger LOGGER = 
LoggerFactory.getLogger(NarProviderTask.class);
+private static final String NAR_EXTENSION = "nar";
+
+// A unique id is necessary so that temporary files do not collide with 
temporary files from other instances.
+private final String id = UUID.randomUUID().toString();
+
+private final NarProvider narProvider;
+private final long pollTimeInMs;
+private final File extensionDirectory;
+
+private volatile boolean stopped = false;
+
+NarProviderTask(final NarProvider narProvider, final File 
extensionDirectory, final long pollTimeInMs) {
+this.narProvider = narProvider;
+this.pollTimeInMs = pollTimeInMs;
+this.extensionDirectory = extensionDirectory;
+}
+
+@Override
+public void run() {
+LOGGER.info("Nar provider task is started");
+
+while (!stopped) {
+try {
+LOGGER.debug("Task starts fetching NARs from provider");
+final Set<String> loadedNars = getLoadedNars();
+final Collection<String> availableNars = 
narProvider.listNars();

Review comment:
   This one actually breaks the behaviour. I am not sure why at this point, 
but it behaves the same way as if the context classloader were not set.
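The classloader issue discussed above usually comes down to the save/set/restore idiom around the provider call. The helper below is an illustrative assumption (the `Supplier`-based shape and names are not NiFi code), showing how the context class loader can be swapped temporarily and always restored:

```java
// Hedged sketch of temporarily swapping the thread context class loader
// around an action, restoring the previous loader even on failure.
final class ContextClassLoaderDemo {
    static <T> T withContextClassLoader(final ClassLoader loader,
                                        final java.util.function.Supplier<T> action) {
        final Thread current = Thread.currentThread();
        final ClassLoader previous = current.getContextClassLoader();
        current.setContextClassLoader(loader);
        try {
            // Run the action (e.g. a provider call) with the swapped loader.
            return action.get();
        } finally {
            // Always restore, even if the action throws.
            current.setContextClassLoader(previous);
        }
    }
}
```

The `try/finally` restore is the important part: leaving a swapped loader in place is a common source of hard-to-diagnose behaviour changes like the one described above.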








[GitHub] [nifi] exceptionfactory commented on a change in pull request #5044: NIFI-8462 Refactored PutSyslog and ListenSyslog using Netty

2021-05-14 Thread GitBox


exceptionfactory commented on a change in pull request #5044:
URL: https://github.com/apache/nifi/pull/5044#discussion_r632537625



##
File path: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/ListenSyslog.java
##
@@ -152,7 +145,7 @@
 "The maximum number of Syslog events to add to a single 
FlowFile. If multiple events are available, they will be concatenated along 
with "
 + "the  up to this configured maximum 
number of messages")
 .addValidator(StandardValidators.POSITIVE_INTEGER_VALIDATOR)
-.expressionLanguageSupported(false)
+
.expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)

Review comment:
   Good catch, that should not have been changed; will revert.








[GitHub] [nifi] exceptionfactory commented on a change in pull request #5044: NIFI-8462 Refactored PutSyslog and ListenSyslog using Netty

2021-05-14 Thread GitBox


exceptionfactory commented on a change in pull request #5044:
URL: https://github.com/apache/nifi/pull/5044#discussion_r632540020



##
File path: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/PutSyslog.java
##
@@ -222,213 +224,97 @@ protected void init(final ProcessorInitializationContext 
context) {
 }
 
 @OnScheduled
-public void onScheduled(final ProcessContext context) throws IOException {
-// initialize the queue of senders, one per task, senders will get 
created on the fly in onTrigger
-this.senderPool = new 
LinkedBlockingQueue<>(context.getMaxConcurrentTasks());
-}
-
-protected ChannelSender createSender(final ProcessContext context) throws 
IOException {
-final int port = 
context.getProperty(PORT).evaluateAttributeExpressions().asInteger();
-final String host = 
context.getProperty(HOSTNAME).evaluateAttributeExpressions().getValue();
+public void onScheduled(final ProcessContext context) throws 
InterruptedException {
+eventSender = getEventSender(context);
 final String protocol = context.getProperty(PROTOCOL).getValue();
-final int maxSendBuffer = 
context.getProperty(MAX_SOCKET_SEND_BUFFER_SIZE).evaluateAttributeExpressions().asDataSize(DataUnit.B).intValue();
-final int timeout = 
context.getProperty(TIMEOUT).evaluateAttributeExpressions().asTimePeriod(TimeUnit.MILLISECONDS).intValue();
-final SSLContextService sslContextService = 
context.getProperty(SSL_CONTEXT_SERVICE).asControllerService(SSLContextService.class);
-return createSender(sslContextService, protocol, host, port, 
maxSendBuffer, timeout);
-}
-
-// visible for testing to override and provide a mock sender if desired
-protected ChannelSender createSender(final SSLContextService 
sslContextService, final String protocol, final String host,
- final int port, final int 
maxSendBufferSize, final int timeout)
-throws IOException {
-
-ChannelSender sender;
-if (protocol.equals(UDP_VALUE.getValue())) {
-sender = new DatagramChannelSender(host, port, maxSendBufferSize, 
getLogger());
-} else {
-// if an SSLContextService is provided then we make a secure sender
-if (sslContextService != null) {
-final SSLContext sslContext = 
sslContextService.createContext();
-sender = new SSLSocketChannelSender(host, port, 
maxSendBufferSize, sslContext, getLogger());
-} else {
-sender = new SocketChannelSender(host, port, 
maxSendBufferSize, getLogger());
-}
-}
-sender.setTimeout(timeout);
-sender.open();
-return sender;
+final String hostname = 
context.getProperty(HOSTNAME).evaluateAttributeExpressions().getValue();
+final int port = 
context.getProperty(PORT).evaluateAttributeExpressions().asInteger();
+transitUri = String.format("%s://%s:%s", protocol, hostname, port);
 }
 
 @OnStopped
-public void onStopped() {
-if (senderPool != null) {
-ChannelSender sender = senderPool.poll();
-while (sender != null) {
-sender.close();
-sender = senderPool.poll();
-}
+public void onStopped() throws Exception {
+if (eventSender != null) {
+eventSender.close();
 }
 }
 
-private PruneResult pruneIdleSenders(final long idleThreshold){
-int numClosed = 0;
-int numConsidered = 0;
-
-long currentTime = System.currentTimeMillis();
-final List<ChannelSender> putBack = new ArrayList<>();
-
-// if a connection hasn't been used with in the threshold then it gets 
closed
-ChannelSender sender;
-while ((sender = senderPool.poll()) != null) {
-numConsidered++;
-if (currentTime > (sender.getLastUsed() + idleThreshold)) {
-getLogger().debug("Closing idle connection...");
-sender.close();
-numClosed++;
-} else {
-putBack.add(sender);
-}
-}
-
-// re-queue senders that weren't idle, but if the queue is full then 
close the sender
-for (ChannelSender putBackSender : putBack) {
-boolean returned = senderPool.offer(putBackSender);
-if (!returned) {
-putBackSender.close();
-}
-}
-
-return new PruneResult(numClosed, numConsidered);
-}
-
 @Override
-public void onTrigger(ProcessContext context, ProcessSession session) 
throws ProcessException {
-final String protocol = context.getProperty(PROTOCOL).getValue();
+public void onTrigger(final ProcessContext context, final ProcessSession 
session) throws ProcessException {
 final int batchSize = 

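The `pruneIdleSenders` logic removed in the diff above follows a common pool-pruning pattern: drain the queue, close connections idle past a threshold, and re-queue the rest. A minimal standalone sketch of that pattern, using a stand-in `Sender` class rather than NiFi's `ChannelSender`:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;

// Hedged sketch of idle-connection pruning over a BlockingQueue pool.
// Sender is a minimal stand-in; the real processor used ChannelSender.
final class SenderPoolDemo {
    static final class Sender {
        final long lastUsed;
        boolean closed;
        Sender(long lastUsed) { this.lastUsed = lastUsed; }
        void close() { closed = true; }
    }

    // Drains the pool, closes idle senders, re-queues fresh ones.
    // Returns the number of senders closed.
    static int pruneIdle(final BlockingQueue<Sender> pool, final long now, final long idleThreshold) {
        int numClosed = 0;
        final List<Sender> putBack = new ArrayList<>();
        Sender sender;
        while ((sender = pool.poll()) != null) {
            if (now > sender.lastUsed + idleThreshold) {
                sender.close();      // idle past the threshold: close it
                numClosed++;
            } else {
                putBack.add(sender); // still fresh: keep it
            }
        }
        for (Sender fresh : putBack) {
            if (!pool.offer(fresh)) {
                fresh.close();       // queue full: close rather than leak
            }
        }
        return numClosed;
    }
}
```

The Netty-based refactor replaces this per-task pool with a single shared `EventSender`, which removes the need for pruning altogether.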
[GitHub] [nifi] exceptionfactory commented on a change in pull request #5034: NIFI-8445: Implementing VaultCommunicationService

2021-05-14 Thread GitBox


exceptionfactory commented on a change in pull request #5034:
URL: https://github.com/apache/nifi/pull/5034#discussion_r632548420



##
File path: 
nifi-commons/nifi-vault-utils/src/test/java/org/apache/nifi/vault/hashicorp/TestHashiCorpVaultConfiguration.java
##
@@ -0,0 +1,195 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.vault.hashicorp;
+
+import org.apache.nifi.vault.hashicorp.config.HashiCorpVaultConfiguration;
+import org.apache.nifi.vault.hashicorp.config.HashiCorpVaultProperties;
+import org.junit.AfterClass;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.BeforeClass;
+import org.junit.Test;
+import org.springframework.vault.authentication.ClientAuthentication;
+import org.springframework.vault.client.VaultEndpoint;
+import org.springframework.vault.support.SslConfiguration;
+
+import java.io.File;
+import java.io.FileWriter;
+import java.io.IOException;
+import java.io.Writer;
+import java.nio.file.Files;
+import java.nio.file.Path;
+import java.util.Arrays;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.Properties;
+
+public class TestHashiCorpVaultConfiguration {
+public static final String VAULT_AUTHENTICATION = "vault.authentication";
+public static final String VAULT_TOKEN = "vault.token";
+
+private static final String TEST_TOKEN_VALUE = "test-token";
+private static final String TOKEN_VALUE = "TOKEN";
+private static final String URI_VALUE = "http://localhost:8200";
+private static final String KEYSTORE_PASSWORD_VALUE = "keystorePassword";
+private static final String KEYSTORE_TYPE_VALUE = "keystoreType";
+private static final String TRUSTSTORE_PASSWORD_VALUE = 
"truststorePassword";
+private static final String TRUSTSTORE_TYPE_VALUE = "truststoreType";
+public static final String TLS_V_1_3_VALUE = "TLSv1.3";
+public static final String TEST_CIPHER_SUITE_VALUE = "Test cipher suite";
+
+private static Path keystoreFile;
+private static Path truststoreFile;
+
+private HashiCorpVaultProperties.HashiCorpVaultPropertiesBuilder 
propertiesBuilder;
+private static File authProps;
+
+private HashiCorpVaultConfiguration config;
+
+@BeforeClass
+public static void initClass() throws IOException {
+keystoreFile = Files.createTempFile("test", ".jks");
+truststoreFile = Files.createTempFile("test", ".jks");
+authProps = writeBasicVaultAuthProperties();
+}
+
+@AfterClass
+public static void cleanUpClass() throws IOException {
+Files.deleteIfExists(keystoreFile);
+Files.deleteIfExists(truststoreFile);
+Files.deleteIfExists(authProps.toPath());
+}
+
+@Before
+public void init() throws IOException {
+propertiesBuilder = new 
HashiCorpVaultProperties.HashiCorpVaultPropertiesBuilder()
+.setUri(URI_VALUE)
+.setAuthPropertiesFilename(authProps.getAbsolutePath());
+
+}
+
+public static File writeVaultAuthProperties(final Map<String, String> 
properties) throws IOException {
+File authProps = File.createTempFile("vault-", ".properties");
+writeProperties(properties, authProps);
+return authProps;
+}
+
+/**
+ * Writes a new temp vault authentication properties file with the 
following properties:
+ * vault.authentication=TOKEN
+ * vault.token=test-token
+ * @return The created temp file
+ * @throws IOException If the file could not be written
+ */
+public static File writeBasicVaultAuthProperties() throws IOException {
+Map<String, String> properties = new HashMap<>();
+properties.put(VAULT_AUTHENTICATION, TOKEN_VALUE);
+properties.put(VAULT_TOKEN, TEST_TOKEN_VALUE);
+return writeVaultAuthProperties(properties);
+}
+
+public static void writeProperties(Map<String, String> props, File 
authProps) throws IOException {
+Properties properties = new Properties();
+
+for (Map.Entry<String, String> entry : props.entrySet()) {
+properties.put(entry.getKey(), entry.getValue());
+}
+try (Writer writer = new FileWriter(authProps)) {
+properties.store(writer, "Vault test authentication properties");
+ 
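The test helper above performs a write-then-load round trip with `java.util.Properties`. A self-contained sketch of the same pattern (the file-name prefix, comment text, and keys are illustrative):

```java
import java.io.File;
import java.io.FileReader;
import java.io.FileWriter;
import java.io.IOException;
import java.io.Reader;
import java.io.Writer;
import java.util.Map;
import java.util.Properties;

// Hedged sketch of writing a temp properties file and reading it back,
// mirroring the shape of the Vault test helper above.
final class VaultPropsDemo {
    static File write(final Map<String, String> values) throws IOException {
        final File file = File.createTempFile("vault-", ".properties");
        final Properties properties = new Properties();
        properties.putAll(values);
        try (Writer writer = new FileWriter(file)) {
            // store() writes key=value lines plus a comment header.
            properties.store(writer, "Vault test authentication properties");
        }
        return file;
    }

    static Properties load(final File file) throws IOException {
        final Properties properties = new Properties();
        try (Reader reader = new FileReader(file)) {
            properties.load(reader);
        }
        return properties;
    }
}
```

Using try-with-resources for both the writer and reader ensures the temp file handles are closed even if `store()` or `load()` throws.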

[GitHub] [nifi] exceptionfactory commented on a change in pull request #5044: NIFI-8462 Refactored PutSyslog and ListenSyslog using Netty

2021-05-14 Thread GitBox


exceptionfactory commented on a change in pull request #5044:
URL: https://github.com/apache/nifi/pull/5044#discussion_r632574971



##
File path: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/ListenSyslog.java
##
@@ -152,7 +145,7 @@
 "The maximum number of Syslog events to add to a single 
FlowFile. If multiple events are available, they will be concatenated along 
with "
 + "the  up to this configured maximum 
number of messages")
 .addValidator(StandardValidators.POSITIVE_INTEGER_VALIDATOR)
-.expressionLanguageSupported(false)
+
.expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)

Review comment:
   Yes, that would be an option, but I agree that there doesn't seem to be 
much value in making it configurable through the variable registry.








[GitHub] [nifi] scottyaslan commented on a change in pull request #5060: NIFI-8520 - Parameter Contexts - Show the wrong information of referencing compon…

2021-05-14 Thread GitBox


scottyaslan commented on a change in pull request #5060:
URL: https://github.com/apache/nifi/pull/5060#discussion_r632740807



##
File path: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-ui/src/main/webapp/js/nf/canvas/nf-parameter-contexts.js
##
@@ -2397,8 +2397,6 @@
  * @param parameterToSelect   Optional, name of the parameter to 
select in the table.
  */
 showParameterContext: function (id, readOnly, parameterToSelect) {
-parameterCount = 0;

Review comment:
   This is needed: the count gets reset every time the 
`#parameter-context-dialog` is opened, and it is used to track the index 
of any new parameters. Instead of removing this line, try calling `resetUsage()` 
before we reset `parameterCount = 0;`.








[GitHub] [nifi] markap14 commented on pull request #5070: NIFI-8535: Better error message in PutDatabaseRecord when table does not exist

2021-05-14 Thread GitBox


markap14 commented on pull request #5070:
URL: https://github.com/apache/nifi/pull/5070#issuecomment-841448744


   Thanks @mattyb149 looks good to me +1 merged to main






[GitHub] [nifi] markap14 merged pull request #5070: NIFI-8535: Better error message in PutDatabaseRecord when table does not exist

2021-05-14 Thread GitBox


markap14 merged pull request #5070:
URL: https://github.com/apache/nifi/pull/5070


   






[jira] [Commented] (NIFI-8535) PutDatabaseRecord should give a better message when the table cannot be found

2021-05-14 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17344828#comment-17344828
 ] 

ASF subversion and git services commented on NIFI-8535:
---

Commit f812dfdfc0f008d372581c6e4c8770c88ad1c3e6 in nifi's branch 
refs/heads/main from Matt Burgess
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=f812dfd ]

NIFI-8535: Better error message in PutDatabaseRecord when table does not exist 
(#5070)

* NIFI-8535: Better error message in PutDatabaseRecord when table does not exist

> PutDatabaseRecord should give a better message when the table cannot be found
> -
>
> Key: NIFI-8535
> URL: https://issues.apache.org/jira/browse/NIFI-8535
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>Priority: Major
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Currently PutDatabaseRecord calls DatabaseMetaData.getColumns() to try and 
> match the columns from the specified table to the fields in the incoming 
> record(s). However if the table itself is not found, this method returns an 
> empty ResultSet, so it is not known whether the table does not exist or if it 
> exists with no columns.
> PutDatabaseRecord should call DatabaseMetaData.getTables() if the column list 
> is empty, and give a more descriptive error message if the table is not 
> found. This can help the user determine whether there is a field/column 
> mismatch or a catalog/schema/table name mismatch.
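The fallback described in the ticket, calling `DatabaseMetaData.getTables()` only when `getColumns()` comes back empty, can be sketched as below. The method shape and message wording are illustrative assumptions, not the merged PutDatabaseRecord code:

```java
import java.sql.DatabaseMetaData;
import java.sql.ResultSet;
import java.sql.SQLException;

// Hedged sketch: distinguish "table missing" from "table has no columns"
// so the processor can report a more descriptive error.
final class TableLookupDemo {

    // Returns null when columns were found, otherwise an explanatory message.
    static String explainMissingColumns(final DatabaseMetaData meta, final String catalog,
                                        final String schema, final String table) throws SQLException {
        try (ResultSet columns = meta.getColumns(catalog, schema, table, null)) {
            if (columns.next()) {
                return null; // columns found: nothing to explain
            }
        }
        // Empty column list: check whether the table exists at all.
        try (ResultSet tables = meta.getTables(catalog, schema, table, null)) {
            return message(tables.next(), table);
        }
    }

    // Pure helper so the message logic is testable without a database.
    static String message(final boolean tableExists, final String table) {
        return tableExists
                ? "Table '" + table + "' exists but has no columns"
                : "Table '" + table + "' was not found; check the catalog/schema/table name";
    }
}
```

Separating the message construction from the JDBC lookup keeps the user-facing wording easy to verify in isolation.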



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (NIFI-8535) PutDatabaseRecord should give a better message when the table cannot be found

2021-05-14 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17344829#comment-17344829
 ] 

ASF subversion and git services commented on NIFI-8535:
---

Commit f812dfdfc0f008d372581c6e4c8770c88ad1c3e6 in nifi's branch 
refs/heads/main from Matt Burgess
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=f812dfd ]

NIFI-8535: Better error message in PutDatabaseRecord when table does not exist 
(#5070)

* NIFI-8535: Better error message in PutDatabaseRecord when table does not exist

> PutDatabaseRecord should give a better message when the table cannot be found
> -
>
> Key: NIFI-8535
> URL: https://issues.apache.org/jira/browse/NIFI-8535
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>Priority: Major
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Currently PutDatabaseRecord calls DatabaseMetaData.getColumns() to try and 
> match the columns from the specified table to the fields in the incoming 
> record(s). However if the table itself is not found, this method returns an 
> empty ResultSet, so it is not known whether the table does not exist or if it 
> exists with no columns.
> PutDatabaseRecord should call DatabaseMetaData.getTables() if the column list 
> is empty, and give a more descriptive error message if the table is not 
> found. This can help the user determine whether there is a field/column 
> mismatch or a catalog/schema/table name mismatch.





[jira] [Updated] (NIFI-8535) PutDatabaseRecord should give a better message when the table cannot be found

2021-05-14 Thread Mark Payne (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8535?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Payne updated NIFI-8535:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> PutDatabaseRecord should give a better message when the table cannot be found
> -
>
> Key: NIFI-8535
> URL: https://issues.apache.org/jira/browse/NIFI-8535
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>Priority: Major
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Currently PutDatabaseRecord calls DatabaseMetaData.getColumns() to try and 
> match the columns from the specified table to the fields in the incoming 
> record(s). However if the table itself is not found, this method returns an 
> empty ResultSet, so it is not known whether the table does not exist or if it 
> exists with no columns.
> PutDatabaseRecord should call DatabaseMetaData.getTables() if the column list 
> is empty, and give a more descriptive error message if the table is not 
> found. This can help the user determine whether there is a field/column 
> mismatch or a catalog/schema/table name mismatch.





[GitHub] [nifi] exceptionfactory opened a new pull request #5077: NIFI-8604 Upgraded Apache Accumulo to 2.0.1

2021-05-14 Thread GitBox


exceptionfactory opened a new pull request #5077:
URL: https://github.com/apache/nifi/pull/5077


    Description of PR
   
   NIFI-8604 Upgrades Apache Accumulo libraries from 2.0.0 to 2.0.1.  This 
update brings the client version up to date with the current server version and 
avoids the OWASP library vulnerability notification that impacted Apache 
Accumulo server 2.0.0.
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [X] Is there a JIRA ticket associated with this PR? Is it referenced 
in the commit message?
   
   - [X] Does your PR title start with **NIFI-XXXX** where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [X] Has your PR been rebased against the latest commit within the target 
branch (typically `main`)?
   
   - [X] Is your initial contribution a single, squashed commit? _Additional 
commits in response to PR reviewer feedback should be made on this branch and 
pushed to allow change tracking. Do not `squash` or use `--force` when pushing 
to allow for clean monitoring of changes._
   
   ### For code changes:
   - [X] Have you ensured that the full suite of tests is executed via `mvn 
-Pcontrib-check clean install` at the root `nifi` folder?
   - [ ] Have you written or updated unit tests to verify your changes?
   - [X] Have you verified that the full build is successful on JDK 8?
   - [X] Have you verified that the full build is successful on JDK 11?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
   - [ ] If applicable, have you updated the `LICENSE` file, including the main 
`LICENSE` file under `nifi-assembly`?
   - [ ] If applicable, have you updated the `NOTICE` file, including the main 
`NOTICE` file found under `nifi-assembly`?
   - [ ] If adding new Properties, have you added `.displayName` in addition to 
.name (programmatic access) for each of the new properties?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check GitHub Actions CI for 
build issues and submit an update to your PR as soon as possible.
   






[GitHub] [nifi] mattyb149 commented on pull request #5076: NIFI-8542: When returning content via TriggerResult.readContent(FlowF…

2021-05-14 Thread GitBox


mattyb149 commented on pull request #5076:
URL: https://github.com/apache/nifi/pull/5076#issuecomment-841372884


   +1 LGTM, verified the system tests and stateless flow with SplitText work as 
expected. Thanks for the fix! Merging to main






[GitHub] [nifi] mattyb149 closed pull request #5076: NIFI-8542: When returning content via TriggerResult.readContent(FlowF…

2021-05-14 Thread GitBox


mattyb149 closed pull request #5076:
URL: https://github.com/apache/nifi/pull/5076


   






[jira] [Updated] (NIFI-8542) The FlowFile content returned by Stateless TriggerResult.readContent may contain too much data

2021-05-14 Thread Matt Burgess (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess updated NIFI-8542:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> The FlowFile content returned by Stateless TriggerResult.readContent may 
> contain too much data
> --
>
> Key: NIFI-8542
> URL: https://issues.apache.org/jira/browse/NIFI-8542
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: NiFi Stateless
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Major
> Fix For: 1.14.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> When using Stateless NiFi, the TriggerResult has a readContent(FlowFile) 
> method that should return the contents of the FlowFile. However, if a 
> FlowFile was created using {{ProcessSession.clone(FlowFile flowFile, long 
> offset, long size)}} then the contents of the entire content claim are still 
> returned, not just the contents between offset & offset + size. This means 
> that any stateless flow where the output is from SplitText or SegmentContent 
> will return too much content.
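The fix the ticket describes amounts to honoring the content claim's offset and length when streaming content out, rather than returning the whole claim. A minimal sketch of such a bounded read over a plain `InputStream` (names are illustrative; the actual change lives in NiFi Stateless internals):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

// Hedged sketch: read exactly [offset, offset + size) from a stream,
// mirroring the offset/length bookkeeping the ticket calls for.
final class BoundedReadDemo {

    static byte[] readRange(final InputStream claim, final long offset, final long size) throws IOException {
        // Skip to the claim offset; skip() may advance less than requested.
        long remainingToSkip = offset;
        while (remainingToSkip > 0) {
            final long skipped = claim.skip(remainingToSkip);
            if (skipped <= 0) {
                throw new IOException("Unable to skip to offset " + offset);
            }
            remainingToSkip -= skipped;
        }
        // Copy at most 'size' bytes, never the rest of the claim.
        final ByteArrayOutputStream out = new ByteArrayOutputStream();
        final byte[] buffer = new byte[8192];
        long remaining = size;
        while (remaining > 0) {
            final int read = claim.read(buffer, 0, (int) Math.min(buffer.length, remaining));
            if (read < 0) {
                break; // claim ended early
            }
            out.write(buffer, 0, read);
            remaining -= read;
        }
        return out.toByteArray();
    }
}
```

This is why the SplitText/SegmentContent case matters: those processors produce FlowFiles that are sub-ranges of a shared content claim, so ignoring the offset and length returns neighboring splits as well.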





[jira] [Commented] (NIFI-8542) The FlowFile content returned by Stateless TriggerResult.readContent may contain too much data

2021-05-14 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17344721#comment-17344721
 ] 

ASF subversion and git services commented on NIFI-8542:
---

Commit 7c08fbc4d495bf592ef5993e52c1e5aed20a450b in nifi's branch 
refs/heads/main from Mark Payne
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=7c08fbc ]

NIFI-8542: When returning content via TriggerResult.readContent(FlowFile), 
ensure that we take into account the content claim offset and length

Signed-off-by: Matthew Burgess 

This closes #5076


> The FlowFile content returned by Stateless TriggerResult.readContent may 
> contain too much data
> --
>
> Key: NIFI-8542
> URL: https://issues.apache.org/jira/browse/NIFI-8542
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: NiFi Stateless
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Major
> Fix For: 1.14.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> When using Stateless NiFi, the TriggerResult has a readContent(FlowFile) 
> method that should return the contents of the FlowFile. However, if a 
> FlowFile was created using {{ProcessSession.clone(FlowFile flowFile, long 
> offset, long size)}} then the contents of the entire content claim are still 
> returned, not just the contents between offset & offset + size. This means 
> that any stateless flow where the output is from SplitText or SegmentContent 
> will return too much content.





[GitHub] [nifi] markap14 closed pull request #5059: NIFI-8519 Adding HDFS support for NAR autoload

2021-05-14 Thread GitBox


markap14 closed pull request #5059:
URL: https://github.com/apache/nifi/pull/5059


   






[jira] [Created] (NIFI-8604) Upgrade Apache Accumulo to 2.0.1

2021-05-14 Thread David Handermann (Jira)
David Handermann created NIFI-8604:
--

 Summary: Upgrade Apache Accumulo to 2.0.1
 Key: NIFI-8604
 URL: https://issues.apache.org/jira/browse/NIFI-8604
 Project: Apache NiFi
  Issue Type: Improvement
Affects Versions: 1.13.2
Reporter: David Handermann
Assignee: David Handermann


Apache Accumulo server 2.0.0 is impacted by 
[CVE-2020-17533|https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-17533], 
which was resolved in version 2.0.1.  Although the vulnerability does not 
impact NiFi client code, upgrading from version 2.0.0 maintains version parity 
with Accumulo server and avoids false positives in the OWASP Dependency Check 
Report.





[jira] [Updated] (NIFI-8519) HDFS support for Hot Loading

2021-05-14 Thread Mark Payne (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Payne updated NIFI-8519:
-
Fix Version/s: 1.14.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> HDFS support for Hot Loading
> 
>
> Key: NIFI-8519
> URL: https://issues.apache.org/jira/browse/NIFI-8519
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: Simon Bence
>Assignee: Simon Bence
>Priority: Major
> Fix For: 1.14.0
>
> Attachments: 
> 0001-NIFI-8519-Support-RequiresInstanceClassLoading-annot.patch
>
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> In NiFi, we have
> {code:java}
> nifi.nar.library.autoload.directory=./extensions
> {code}
> This directory is where NiFi looks for NARs to load automatically, without 
> requiring a restart of the service. In many environments it would be 
> convenient to also load NARs from object stores, such as HDFS.
> This story extends the autoload mechanism with that capability and provides 
> HDFS support for it.
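The provider's core job is listing a remote directory and keeping only the NAR files it finds there. As a hypothetical sketch (the class and method names below are illustrative, not the actual HDFSNarProvider API, which goes through the Hadoop FileSystem interface), the suffix filtering might look like:

```java
import java.util.Arrays;
import java.util.List;
import java.util.Locale;
import java.util.stream.Collectors;

// Minimal sketch of the extension filtering a NAR provider performs when
// listing a directory. A real HDFS-backed provider would list FileStatus
// entries via org.apache.hadoop.fs.FileSystem rather than plain strings.
public class NarListingSketch {

    // Keep only entries ending with the ".nar" suffix, case-insensitively.
    public static List<String> filterNars(List<String> fileNames) {
        return fileNames.stream()
                .filter(name -> name.toLowerCase(Locale.ROOT).endsWith(".nar"))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> listed = Arrays.asList(
                "nifi-hadoop-nar-1.14.0.nar", "README.txt", "processor.NAR");
        System.out.println(filterNars(listed));
    }
}
```

A poller can then fetch any newly filtered entries into nifi.nar.library.autoload.directory, where the existing autoload machinery picks them up.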





[GitHub] [nifi] asfgit closed pull request #5034: NIFI-8445: Implementing VaultCommunicationService

2021-05-14 Thread GitBox


asfgit closed pull request #5034:
URL: https://github.com/apache/nifi/pull/5034


   






[jira] [Commented] (NIFI-8445) Implement Hashicorp VaultCommunicationService

2021-05-14 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17344778#comment-17344778
 ] 

ASF subversion and git services commented on NIFI-8445:
---

Commit ed591e0f222cab83ca0c4bff830274ae8f48bcea in nifi's branch 
refs/heads/main from Joe Gresock
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=ed591e0 ]

NIFI-8445: Implemented HashiCorpVaultCommunicationService in nifi-vault-utils

This closes #5034

Signed-off-by: David Handermann 


> Implement Hashicorp VaultCommunicationService
> -
>
> Key: NIFI-8445
> URL: https://issues.apache.org/jira/browse/NIFI-8445
> Project: Apache NiFi
>  Issue Type: Sub-task
>Reporter: Joseph Gresock
>Assignee: Joseph Gresock
>Priority: Minor
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> Suggest using Spring's VaultTemplate to expose an initial set of 
> Vault-related methods that can later be expanded. 
> Should take the following configuration:
>  # Vault address (e.g., [https://localhost:8200|https://localhost:8200/])
>  # TLS configuration (used only if address is https)
>  # Properties file location for configuring Vault Authentication Method
> Should expose the initial methods:
>  * encrypt(String transitKey, byte[] plainText)
>  * decrypt(String transitKey, String cipherText)
> This service should be able to be used initially in the NiFi core code and 
> Encrypt-Config tool code, and should eventually be made available to the NiFi 
> Registry code.
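The two methods above define a small contract. The sketch below is hypothetical: the real service delegates to Spring's VaultTemplate and Vault's transit secrets engine, while this in-memory stand-in only base64-encodes so the round trip is visible (it is not encryption).

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Sketch of the encrypt/decrypt contract described in the issue.
public class VaultCommunicationSketch {

    interface VaultCommunicationService {
        String encrypt(String transitKey, byte[] plainText);
        byte[] decrypt(String transitKey, String cipherText);
    }

    // Hypothetical in-memory stand-in; NOT cryptographically secure.
    // Vault transit ciphertext carries a "vault:v1:" prefix; mimic that shape.
    static class InMemoryVaultService implements VaultCommunicationService {
        private static final String PREFIX = "vault:v1:";

        @Override
        public String encrypt(String transitKey, byte[] plainText) {
            return PREFIX + Base64.getEncoder().encodeToString(plainText);
        }

        @Override
        public byte[] decrypt(String transitKey, String cipherText) {
            return Base64.getDecoder().decode(cipherText.substring(PREFIX.length()));
        }
    }

    public static void main(String[] args) {
        VaultCommunicationService service = new InMemoryVaultService();
        String cipher = service.encrypt("nifi-key", "secret".getBytes(StandardCharsets.UTF_8));
        System.out.println(new String(service.decrypt("nifi-key", cipher), StandardCharsets.UTF_8));
    }
}
```

Keying the calls by transitKey is what lets NiFi core and the Encrypt-Config tool share one service while using separate transit keys.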





[jira] [Resolved] (NIFI-8445) Implement Hashicorp VaultCommunicationService

2021-05-14 Thread David Handermann (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Handermann resolved NIFI-8445.

Fix Version/s: 1.14.0
   Resolution: Fixed

> Implement Hashicorp VaultCommunicationService
> -
>
> Key: NIFI-8445
> URL: https://issues.apache.org/jira/browse/NIFI-8445
> Project: Apache NiFi
>  Issue Type: Sub-task
>Reporter: Joseph Gresock
>Assignee: Joseph Gresock
>Priority: Minor
> Fix For: 1.14.0
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> Suggest using Spring's VaultTemplate to expose an initial set of 
> Vault-related methods that can later be expanded. 
> Should take the following configuration:
>  # Vault address (e.g., [https://localhost:8200|https://localhost:8200/])
>  # TLS configuration (used only if address is https)
>  # Properties file location for configuring Vault Authentication Method
> Should expose the initial methods:
>  * encrypt(String transitKey, byte[] plainText)
>  * decrypt(String transitKey, String cipherText)
> This service should be able to be used initially in the NiFi core code and 
> Encrypt-Config tool code, and should eventually be made available to the NiFi 
> Registry code.





[GitHub] [nifi-registry] andrewmlim commented on pull request #319: NIFIREG-395 - Implemented the ability to import and export versioned flows through the UI

2021-05-14 Thread GitBox


andrewmlim commented on pull request #319:
URL: https://github.com/apache/nifi-registry/pull/319#issuecomment-841409765


   > Recent changes include:
   > 
   > * Changed the dialog title from 'Download Version' to 'Export Version' and 
refactored the component. @bbende @andrewmlim @moranr Thanks for the feedback 
and agreed 'Export' seems more consistent
   > * Changed the 'Export Version' dialog dropdown menu item to "Latest 
(Version #)"
   > * Changed the Actions menu hover style to differentiate from the focus 
style
   
   These look good! Thanks for making all of these improvements @mtien-apache !
   






[jira] [Resolved] (NIFI-8329) Upgrade NiFi direct dependencies

2021-05-14 Thread David Handermann (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Handermann resolved NIFI-8329.

Resolution: Fixed

Closing based on completion of sub-tasks.

> Upgrade NiFi direct dependencies
> 
>
> Key: NIFI-8329
> URL: https://issues.apache.org/jira/browse/NIFI-8329
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.13.1
>Reporter: Nathan Gough
>Assignee: Nathan Gough
>Priority: Major
> Fix For: 1.14.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> |org.apache.derby:derby|
> |org.apache.ignite:ignite|
> |com.fasterxml.jackson.core:jackson-databind|
> |org.apache.lucene:lucene-core|
> |spring-integration|
> |xerces:xerces or xercesimpl|
> |org.apache.activemq:activemq|
> |Jetty (already upgraded)|
> |org.yaml:snakeyaml|
> |org.apache.storm:storm-core|
> |com.ibm.icu:icu4j|





[jira] [Updated] (NIFI-7737) Add support for String[] to PutCassandraRecord

2021-05-14 Thread Matt Burgess (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7737?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess updated NIFI-7737:
---
Affects Version/s: (was: 1.11.4)
   Status: Patch Available  (was: Open)

> Add support for String[] to PutCassandraRecord
> --
>
> Key: NIFI-7737
> URL: https://issues.apache.org/jira/browse/NIFI-7737
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Wouter de Vries
>Assignee: Wouter de Vries
>Priority: Major
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Currently the PutCassandraRecord processor does not support string arrays. 
> Trying to use them results in the following error:
> {noformat}
> | 2020-08-13 15:15:37,861 ERROR [Timer-Driven Process Thread-5] 
> o.a.n.p.cassandra.PutCassandraRecord 
> PutCassandraRecord[id=af756410-1ef0-3d80-045e-160f02632c54] Unable to write 
> the records into Cassandra table due to 
> com.datastax.driver.core.exceptions.InvalidTypeException: Value 3 of type 
> class [Ljava.lang.Object; does not correspond to any CQL3 type: 
> com.datastax.driver.core.exceptions.InvalidTypeException: Value 3 of type 
> class [Ljava.lang.Object; does not correspond to any CQL3 type
> | com.datastax.driver.core.exceptions.InvalidTypeException: Value 3 of type 
> class [Ljava.lang.Object; does not correspond to any CQL3 type
> | at com.datastax.driver.core.querybuilder.Utils.convert(Utils.java:361)
> | at 
> com.datastax.driver.core.querybuilder.BuiltStatement.getValues(BuiltStatement.java:265)
> | at 
> com.datastax.driver.core.BatchStatement.getIdAndValues(BatchStatement.java:92)
> | at 
> com.datastax.driver.core.SessionManager.makeRequestMessage(SessionManager.java:597)
> | at 
> com.datastax.driver.core.SessionManager.executeAsync(SessionManager.java:131)
> | at 
> com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:68)
> | at 
> org.apache.nifi.processors.cassandra.PutCassandraRecord.onTrigger(PutCassandraRecord.java:300)
> | at 
> org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
> | at 
> org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1176)
> | at 
> org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:213)
> | at 
> org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:117)
> | at org.apache.nifi.engine.FlowEngine$2.run(FlowEngine.java:110)
> | at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> | at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
> | at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
> | at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
> | at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> | at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> | at java.lang.Thread.run(Thread.java:748)
> | Caused by: com.datastax.driver.core.exceptions.CodecNotFoundException: 
> Codec not found for requested operation: [ANY <-> [Ljava.lang.Object;]
> | at 
> com.datastax.driver.core.CodecRegistry.notFound(CodecRegistry.java:741)
> | at 
> com.datastax.driver.core.CodecRegistry.createCodec(CodecRegistry.java:602)
> | at 
> com.datastax.driver.core.CodecRegistry.findCodec(CodecRegistry.java:582)
> | at 
> com.datastax.driver.core.CodecRegistry.codecFor(CodecRegistry.java:429)
> | at com.datastax.driver.core.querybuilder.Utils.convert(Utils.java:357)
> | ... 18 common frames omitted{noformat}
> The solution is to add the appropriate codec for this type conversion. I will 
> submit a PR for this change.
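The stack trace shows the record field arriving as an Object[] ([Ljava.lang.Object;), which the driver's CodecRegistry cannot map to any CQL3 type. A minimal sketch of the kind of conversion involved, with illustrative names rather than the PR's actual code, is to turn the array into a List of Strings before binding, since the driver does know how to encode a List:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: convert an Object[] record field into a List<String> that the
// Cassandra driver can encode as a CQL list<text> value.
public class ArrayFieldConversionSketch {

    public static List<String> toStringList(Object value) {
        if (value instanceof Object[]) {
            List<String> converted = new ArrayList<>();
            for (Object element : (Object[]) value) {
                converted.add(element == null ? null : element.toString());
            }
            return converted;
        }
        throw new IllegalArgumentException("Expected an array but got: " + value);
    }

    public static void main(String[] args) {
        System.out.println(toStringList(new Object[]{"a", "b", 3}));
    }
}
```

The alternative route the error message hints at is registering a custom TypeCodec for the array type with the driver's CodecRegistry; either way the Object[] must be mapped onto a type the driver recognizes.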





[GitHub] [nifi] mattyb149 commented on pull request #5005: NIFI-7737: add string array option to putcassandrarecord

2021-05-14 Thread GitBox


mattyb149 commented on pull request #5005:
URL: https://github.com/apache/nifi/pull/5005#issuecomment-841383963


   Reviewing...






[jira] [Commented] (NIFI-8519) HDFS support for Hot Loading

2021-05-14 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17344736#comment-17344736
 ] 

ASF subversion and git services commented on NIFI-8519:
---

Commit 51aae5bcf6dd7ec1aaece74ec69744817e923011 in nifi's branch 
refs/heads/main from Bence Simon
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=51aae5b ]

NIFI-8519 Adding HDFS support for NAR autoload
- Refining classloader management with the help of @markap14

This closes #5059

Signed-off-by: Mark Payne 


> HDFS support for Hot Loading
> 
>
> Key: NIFI-8519
> URL: https://issues.apache.org/jira/browse/NIFI-8519
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: Simon Bence
>Assignee: Simon Bence
>Priority: Major
> Attachments: 
> 0001-NIFI-8519-Support-RequiresInstanceClassLoading-annot.patch
>
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> In NiFi, we have
> {code:java}
> nifi.nar.library.autoload.directory=./extensions
> {code}
> This directory is where NiFi looks for NARs to load automatically, without 
> requiring a restart of the service. In many environments it would be 
> convenient to also load NARs from object stores, such as HDFS.
> This story extends the autoload mechanism with that capability and provides 
> HDFS support for it.





[GitHub] [nifi] markap14 commented on pull request #5059: NIFI-8519 Adding HDFS support for NAR autoload

2021-05-14 Thread GitBox


markap14 commented on pull request #5059:
URL: https://github.com/apache/nifi/pull/5059#issuecomment-841386401


   Thanks @simonbence ! All looks good now. Fixed one checkstyle violation but 
all was good otherwise. +1 merged to main!

